\section{Introduction} \subsection{An overview} There are many well known and important numerical invariants in the context of dynamical systems, including for example entropy, Lyapunov exponents, and the Hausdorff dimension of invariant sets. Even in the context of uniformly hyperbolic systems, many of these invariants do not have simple explicit expressions, nor are they easy to estimate. This leads naturally to the search for useful alternative expressions for each of these quantities which, in particular, lend themselves to accurate numerical evaluation. Our aim in this account is to describe an approach developed over some years, based on periodic points for (hyperbolic) dynamical systems. In particular, in addition to presenting a general overview we will also give a general recipe for converting this information into formulae and useful numerical estimates for a variety of characteristic values, such as those mentioned above. Perhaps the most important aspect of this side of the work is that we obtain rigorous bounds on the errors. Given the data available, the idea is to minimise the error estimate, subject to the practical constraints imposed by the computational power available. There is always some flexibility in the choices, which we can exploit in order to obtain the best error estimate, i.e., the smallest bound on the error. Typically, the quantities that we can expect to study in this approach are those that can be expressed in terms of the thermodynamic pressure. The main ingredient in our approach is a complex function (called the determinant) which packages together the data on periodic orbits and from which can be derived estimates on the pressure, and consequently the quantities of interest. In our presentation, no specialist knowledge is required to implement the general result. However, for completeness we give a fairly complete outline of the proof (with some details on operator theory postponed to Appendix A). 
\subsection{A historical perspective} The starting point for our approach is the important work of Grothendieck in the 1950s on nuclear operators \cite{grothendieckthesis, grothendieckbullsmf}. This extended the classical theory of trace class operators and Fredholm determinants \cite{fredholm}. Although the impact of this theory in functional analysis and operator theory was well understood, it was not until 20 years later that Ruelle employed it with great effect in ergodic theory and dynamical systems, in his application to dynamical zeta functions \cite{ruelleinventiones}. This viewpoint was highly influential in the work of mathematical physicists (e.g., the well known study by Cvitanovi\'c in his monumental online tome \cite{cvitanovic}). With the advent of modern computers, an important component in ergodic theory and dynamical systems has been the focus on \emph{explicit computation} of quantities arising in the context of dynamical systems. Typically, these approaches are based on the quantity in question being expressed in terms of an associated Ruelle transfer operator, implicitly assumed to act on some appropriate space of functions, and then a finite dimensional approximation to the operator is used to reduce this to a finite dimensional matrix problem, to be solved numerically. In contrast, the approach we will describe is to exploit \emph{real-analytic} properties of the underlying dynamical system by introducing Ruelle transfer operators with strong spectral properties (in particular nuclearity) which allows us to exploit the earlier circle of ideas initiated by Grothendieck. \subsection{A selection of applications} By way of an appetiser to this approach, we now list a cross section of actual and potential applications to a number of different areas in mathematics. We will elaborate these later, but for the purposes of motivation we list them here. 
\begin{enumerate} \item In number theory, a recent breakthrough in the understanding of the Markov and Lagrange spectra $M, L \subset (0,+\infty)$ from Diophantine approximation has been brought about by the work of Matheus \& Moreira who were able to estimate the dimension of the difference $M\setminus L$ of these spectra, obtaining first a lower bound \cite{matheusmoreira} and then an upper bound \cite{matheusmoreira2}. The methods, in both cases, involved the approximation of the dimension of certain fractal sets which would be amenable to the techniques developed in \cite{jpeffective} for proving rigorous high quality bounds on the dimension (cf. Example \ref{deleteddigitsexample} (b)). \item In the field of spectral geometry, there is a strong tradition of computing eigenvalues of the Laplacian, dating back to pioneering work of Hejhal \cite{hej} using classical methods. However, McMullen's approximations of the lowest eigenvalue for certain infinite volume hyperbolic manifolds were based on the dimension of the limit set, which from this viewpoint could be accurately estimated \cite{mcmullen3}. \item Within dynamical systems, among the most widely studied numerical quantities are Lyapunov exponents, measuring the exponential instability of solutions to various problems. There is also interest in estimating Lyapunov exponents in the theory of random matrix products; in particular, in information theory this leads to the computation of entropy rates for binary symmetric channels, related to examples of hidden Markov chains \cite{holliday}. \end{enumerate} We will develop this last setting in the following subsection. \subsection{Illustrative Example: Lyapunov exponents for Bernoulli interval maps}\label{illustrative} We begin by specifying a suitable class of hyperbolic transformations.
In particular, we want to assume that the systems we are studying are both real analytic and uniformly hyperbolic, for example either a real analytic expanding map, or a real analytic hyperbolic diffeomorphism or flow on a locally hyperbolic set. Before formulating statements in greater generality, let us consider a very specific example of a one dimensional map. Let $X = \mathbb R/\mathbb Z$ be the unit circle and let $T: X\to X$ be a piecewise $C^\omega$ Bernoulli map of the interval which is expanding, i.e., there exists $\lambda > 1$ such that $|T'(x)| \geq \lambda$ for all $x$. In this setting it is well known that there is a unique $T$-invariant probability measure $\mu$ which is absolutely continuous with respect to Lebesgue measure (i.e., $\frac{d\mu}{dx} \in L^1(X)$) by the famous Lasota-Yorke theorem (see \cite[\S 5.1]{kh}). \begin{example}\label{doubling} More concretely, suppose $T:\mathbb R/\mathbb Z \to \mathbb R/\mathbb Z$ is defined by $T(x) = 2x + \epsilon \sin(2\pi x) \pmod 1$ where $|\epsilon| < \frac{1}{4\pi}$, so in particular $|T'(x)| > (2 - 2\pi \epsilon) > \frac{3}{2}>1$ for all $x\in \mathbb R/\mathbb Z$. \end{example} The Lyapunov exponent of the unique absolutely continuous $T$-invariant probability measure $\mu$ is defined by \begin{equation}\label{(1.1)} L(\mu) = \int \log |T'(x)| \, d\mu(x)\,, \end{equation} and by the well known Rohlin identity this also equals the entropy $h(\mu)$. \begin{figure} \centerline{ \begin{tikzpicture} \draw (0,6) -- (0,0) -- (6,0); \draw (0,0) .. controls (1.5,4) .. (3,6); \draw (3,0) .. controls (4.5,2) .. (6,6); \node at (0,-0.5) {$0$}; \node at (3,-0.5) {$\frac{1}{2}$}; \node at (6,-0.5) {$1$}; \node at (-0.5,6) {$1$}; \end{tikzpicture} } \caption{The graph of an expanding map of $\mathbb R/\mathbb Z$ represented on the unit interval.} \end{figure} \medskip We briefly summarise the method for estimating $L(\mu)$ in three steps. \medskip \noindent {\bf Step 1 (Complex functions and coefficients)}.
We wish to consider period-$n$ points $T^nx=x$ and then define for $t\in\mathbb{R}$ the coefficients $a_1(t), a_2(t), \ldots$ using the Taylor expansion $$ \exp \left( -\sum_{n=1}^\infty \frac{z^n}{n}\sum_{T^nx=x} \frac{|(T^n)'(x)|^{-t}}{1- ((T^n)'(x))^{-1}} \right) = 1 + \sum_{n=1}^\infty a_n(t) z^n . $$ \medskip \noindent {\bf Step 2 (Coefficients and Lyapunov exponents)}. It can be shown that the Lyapunov exponent $L(\mu)$ given by (\ref{(1.1)}) admits the alternative formulation \begin{equation*}\label{(1.2)} L(\mu) = \frac{ - \sum_{n=1}^\infty a_n'(0)}{\sum_{n=1}^\infty n a_n(0)} , \end{equation*} where both the numerator and denominator are absolutely convergent series. Truncating these series to give the computable quantity $$ L_N = \frac{ - \sum_{n=1}^N a_n'(0)}{\sum_{n=1}^N n a_n(0)} \,, $$ we note that $$ L_N \to L(\mu)\, \hbox{ as } N \to +\infty\,, $$ and so for a large natural number $N$, the quantity $L_N$ is an approximation to $L(\mu)$. \medskip \noindent {\bf Step 3 (Error bounds)}. The quality of the approximation to $L(\mu)$ given by $L_N$ may be indicated heuristically by comparing how closely $L_N$ and $L_{N-1}$ agree, though rigorous bounds on the error in the approximation can be obtained using: \begin{enumerate} \item A certain value $\theta\in (0,1)$, which we refer to as the \emph{contraction ratio}, measuring the extent to which a complex disc $D$ is mapped inside itself by the inverse branches $T_j$ of $T$; \item The integrals $\beta_k = \frac{1}{r^{2k}} \int_{0}^1 \left| \sum_{j} T_j'(x + r e^{2\pi i t}) (T_j(x + re^{2\pi i t}) )^k \right|^2 dt$, where $1 \leq k \leq L$; and \item The weights $\alpha_k = \sqrt{\sum_{l=k+1}^L \beta_l^2}$ for $N \leq k \leq L$, \end{enumerate} where $N <L$ are suitably chosen. In particular, we can then use these values to bound the coefficients $a_n(t)$, with $n > N$, for which it is impractical to explicitly compute them with effective error estimates. This will be explained in greater detail in \S 5.
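Steps 1 and 2 can be sketched numerically in the exactly solvable case $\epsilon = 0$ of Example \ref{doubling}, where every period-$n$ point of the doubling map has $(T^n)'(x) = 2^n$, there are $2^n - 1$ such points, and $L(\mu) = \log 2$. The following sketch (the function names are ours, chosen for illustration) builds the trace sums in closed form, converts them into the coefficients $a_n(t)$ and their $t$-derivatives by the standard recursion for the exponential of a power series, and evaluates $L_N$:

```python
import math

def trace(n, s):
    # For T(x) = 2x (mod 1) there are 2^n - 1 period-n points, each with
    # (T^n)'(x) = 2^n, so the inner sum in Step 1 has a closed form.
    count, deriv = 2**n - 1, float(2**n)
    return count * deriv**(-s) / (1.0 - 1.0 / deriv)

def dtrace(n, s):
    # d/ds of trace(n, s): each term picks up a factor -log (T^n)'(x) = -n log 2.
    return -n * math.log(2.0) * trace(n, s)

def coefficients(N, s=0.0):
    # Taylor coefficients a_n(s) of exp(-sum_n z^n trace(n, s)/n), together with
    # their s-derivatives, via the recursion n a_n = sum_k k b_k a_{n-k}.
    b  = [0.0] + [-trace(n, s) / n  for n in range(1, N + 1)]
    db = [0.0] + [-dtrace(n, s) / n for n in range(1, N + 1)]
    a, da = [1.0] + [0.0] * N, [0.0] * (N + 1)
    for n in range(1, N + 1):
        a[n]  = sum(k * b[k] * a[n - k] for k in range(1, n + 1)) / n
        da[n] = sum(k * (db[k] * a[n - k] + b[k] * da[n - k])
                    for k in range(1, n + 1)) / n
    return a, da

def lyapunov_estimate(N):
    # Step 2: L_N = - sum_{n<=N} a_n'(0) / sum_{n<=N} n a_n(0).
    a, da = coefficients(N)
    return -sum(da[1:]) / sum(n * a[n] for n in range(1, N + 1))
```

For this map the determinant collapses to $1 - 2^{1-t}z$, so $a_1(0) = -2$, all higher coefficients vanish, and `lyapunov_estimate(8)` recovers $\log 2$ to machine precision. For $\epsilon \neq 0$ the closed form is lost and `trace` would instead sum over numerically computed period-$n$ points.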
\bigskip Now that we have illustrated the general theme using the specific example of Lyapunov exponents for one dimensional expanding maps, we can turn to more general dynamical settings, and more general characteristic values. In the next section we describe the broad context of the results and consider the pressure function, from which many of the quantities we want to consider can be derived. \section[Hyperbolic maps]{Hyperbolic maps and the pressure function} To set the scene, we begin by introducing two natural classes of discrete dynamical system, then consider the associated pressure function, which will be useful in providing the bridge between the dynamics and the various quantities we wish to describe. \subsection{Hyperbolic maps} Let us begin with the discrete setting. Assume that $T: M \to M$ is either a smooth expanding map (perhaps on an invariant repeller $Y \subset M$) or Anosov diffeomorphism (see \cite[\S 6.4]{kh}). Later we will consider the more general settings of repellers and the natural generalisation to flows. However, for the present we will restrict to the discrete cases above and, whenever more convenient, to the case of expanding maps. \begin{definition} A partition $X = \cup_{i=1}^k X_i$ is called a {\it Markov partition} for the expanding map $T: X \to X$ if \begin{enumerate} \item $ \hbox{\rm int}(X_i) \cap \hbox{\rm int}(X_j) = \emptyset$ for $i \neq j$; \item $X_i = \overline{\hbox{\rm int}(X_i)}$ for $i=1, \cdots, k$; and \item each $T(X_i)$ for $i=1, \cdots, k$ is a union of elements of the partition. \end{enumerate} \end{definition} Eventually, we will want to assume that each of the restrictions $T|X_i$ ($i=1, \cdots, k$) is real analytic, in the sense of having an analytic extension (via charts) to a complex neighbourhood $U_i$. However, to set up the definitions we only require that it be $C^1$.
In the case of Anosov diffeomorphisms the approach is somewhat similar, except that one uses Markov partitions for invertible maps. In the case of Anosov flows one can expect to use Markov Poincar\'e sections to reduce the analysis to the discrete case. \subsection{Pressure} The pressure function was introduced into the study of hyperbolic dynamical systems by Ruelle (see e.g.~\cite{ruellebook}). The importance of pressure stems from the fact that it yields a unifying concept to describe dynamical and geometric invariants. For example, it is well known that various dynamically and geometrically defined fractals (e.g.~limit sets and Julia sets) have the property that their Hausdorff dimension is given by solving an associated \emph{pressure equation} (usually known as the Bowen formula). More generally, there are a host of other dynamical quantities that can be expressed in terms of the pressure function, some of which are listed below in subsection \ref{relating}. To define pressure $P$, since we are considering hyperbolic maps we have the luxury of expressing this in terms of periodic orbits $T^nx=x$, for $n \geq 1$, and we define $P: C^0\left(\coprod_{i=1}^k X_i\right) \to \mathbb R $ on the disjoint union of the elements of the Markov partition. \begin{definition} The {\it pressure} of the continuous function $g$ is given by $$ P(g) := \limsup_{n \to +\infty} \frac{1}{n} \log\left( \sum_{T^nx=x} \exp \left( \sum_{j=0}^{n-1} g(T^jx) \right) \right)\,, $$ and admits the alternative variational definition $$ P(g) = \sup\left\{ h(\mu) + \int g d\mu \hbox{ : } \mu \hbox{ is a $T$-invariant probability measure}\right\}. $$ \end{definition} When $g$ is H\"older continuous, there is a unique probability measure $\mu_g$ realising the above supremum \cite{bowenbook}. \begin{definition} The measure $\mu_g$ is called the {\it equilibrium measure} (or {\it Gibbs measure}) for $g$.
\end{definition} If $g=0$ then $P(0)=h_{\rm top}(T)$ is the topological entropy, and the corresponding equilibrium measure $\mu_g$ is called the measure of maximal entropy (and the Bowen-Margulis measure in the case of Anosov systems). \begin{example}\label{acim} For expanding maps $T:M \to M$, in the special case $g(x)=-\log |\hbox{\rm Jac}(D_xT)|$ then $P(g)=0$, and the corresponding equilibrium measure $\mu_g$ is a $T$-invariant probability measure equivalent to the volume on $M$. For Anosov maps $T:M \to M$, in the special case that $g(x)=-\log |\hbox{\rm Jac}(D_xT|E^u)|$, where $E^u$ is the unstable bundle, then $P(g)=0$ and the corresponding equilibrium measure $\mu_g$ is called the Sinai-Ruelle-Bowen measure (or SRB-measure) (see \cite[\S 20.4]{kh}). \end{example} It is this pressure function that often helps to relate periodic orbits to the quantities in which we are interested, and which we ultimately want to numerically estimate. A simple, but important, application is the following result due to Ruelle (see \cite[p. 99]{ruellebook}): \begin{lemma}\label{ruellederivatives} For any H\"older continuous functions $g_0, g: X \to \mathbb R$ the function $$t \mapsto P( g_0+ tg) \in \mathbb R, \hbox{ for $t \in \mathbb R$},$$ is analytic. Moreover \begin{enumerate} \item $\frac{d P(g_0 + tg)}{dt}|_{t=0} = \int g d\mu_{g_0}$, and \item $\frac{d^2 P(g_0 + tg)}{dt^2} |_{t=0} = \lim_{n \to +\infty} \frac{1}{n} \int \left(\sum_{i=0}^{n-1} g(T^ix) \right)^2 d\mu_{g_0}(x)$ provided $\int g d\mu_{g_0}=0$. \end{enumerate} \end{lemma} \smallskip The quantity in part 2 of Lemma \ref{ruellederivatives} is often called the {\it variance}. It is a feature of the method we use that we can obtain fairly explicit expressions for derivatives of pressure. In particular, those quantities that can be written in terms of the derivative expressions can therefore, in turn, be written in terms of periodic points.
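The periodic-orbit definition of pressure above can be tried out directly on the doubling map $T(x) = 2x \pmod 1$, whose period-$n$ points are exactly $x = k/(2^n-1)$ for $k = 0, \ldots, 2^n-2$. The following is a minimal numerical sketch of ours (the map and function name are our choice, not part of the general theory):

```python
import math

def pressure_estimate(g, n):
    # (1/n) log sum_{T^n x = x} exp(g(x) + g(Tx) + ... + g(T^{n-1}x))
    # for the doubling map T(x) = 2x (mod 1), whose period-n points are
    # exactly x = k/(2^n - 1), k = 0, ..., 2^n - 2.
    total = 0.0
    for k in range(2**n - 1):
        x = k / (2**n - 1)
        birkhoff = 0.0
        for _ in range(n):
            birkhoff += g(x)
            x = (2.0 * x) % 1.0   # apply T along the periodic orbit
        total += math.exp(birkhoff)
    return math.log(total) / n
```

With $g = 0$ this gives $\frac{1}{n}\log(2^n - 1) \to \log 2 = h_{\rm top}(T)$, while the constant function $g = -\log 2$ (for which $P(g) = 0$, cf. Example \ref{acim}) yields estimates tending to $0$.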
\subsection{Relating pressure to characteristic values}\label{relatingpressure}\label{relating} We can now consider a number of familiar quantities that we can write in terms of the pressure function. Below we list a few simple examples. Later we shall consider other applications, but for the present these three examples illustrate well our theme. \medskip \noindent {\bf (I) Lyapunov exponents.} For an expanding map $T: M \to M$ we can write the Lyapunov exponent for the absolutely continuous invariant measure $\mu$ by $$ L(\mu)= \int \log \|D_xT\| d\mu(x). $$ In the particular case of one dimension this reduces to the situation described in subsection \ref{illustrative}. The following follows immediately from part 1 of Lemma \ref{ruellederivatives}. \begin{lemma} If we let $g_0(x)=-\log |\hbox{\rm Jac}(D_xT)|$ and $g = - \log \|D_xT\|$ then $ L(\mu) = -\frac{d}{dt} e^{P(g_0 + tg)}|_{t=0}. $ \end{lemma} \medskip \noindent {\bf (II) Variance.} For an expanding map $T: M \to M$, the variance of a H\"older continuous function $g: X \to \mathbb R$ with $\int g d\mu=0$, taken with respect to the absolutely continuous $T$-invariant measure $\mu$, is defined by $$ \Sigma(g, \mu) := \lim_{n \to +\infty} \frac{1}{n} \int \left(\sum_{i=0}^{n-1} g(T^ix) \right)^2 d\mu(x). $$ The following follows immediately from part 2 of Lemma \ref{ruellederivatives}. \begin{lemma} We can write $$ \frac{d^2 P(g_0 + tg)}{dt^2}|_{t=0} = \Sigma(g, \mu). $$ \end{lemma} This plays an important role in the Central Limit Theorem \cite{bowenbook}, i.e., for any real numbers $a< b$ we have $$ \lim_{n \to +\infty} \mu\left(\left\{ x \in X \hbox{ : } \frac{1}{\sqrt{n}}\sum_{i=0}^{n-1} g(T^ix) \in [a,b]\right\}\right) = \frac{1}{\sqrt{2\pi \Sigma(g,\mu)}} \int_a^b e^{-u^2/2\Sigma(g,\mu)} du.
$$ \medskip \noindent {\bf (III) Linear response.} Let $T_\lambda: M \to M$ be a smooth family of expanding maps ($-\epsilon < \lambda < \epsilon$) and let $\mu_{g_\lambda}$ be the associated absolutely continuous measure, arising as the Gibbs measure for $g_\lambda(x) = -\log |\hbox{\rm Jac}(D_xT_\lambda)| $. Then by part 1 of Lemma \ref{ruellederivatives} we have $$\frac{\partial P(g_\lambda + t g)}{\partial t} |_{t=0} = \int g d\mu_{g_\lambda}.$$ Thus differentiating in $\lambda$ formally gives: $$ \frac{d^2 P(g_\lambda + t g)}{dt\, d\lambda} \Big|_{t=0,\,\lambda=0}=\frac{d}{d\lambda} \left(\int g d\mu_{g_\lambda}\right)\Big|_{\lambda=0}. $$ (A slight subtlety here is that the differentiation of the pressure is easier with respect to the fixed expanding map $T_0: M \to M$ and thus it is appropriate to introduce a family of conjugacies $\pi_\lambda: M \to M$ between $T_0$ and $T_\lambda$ and to consider $\frac{d^2 P(g_0\circ \pi_\lambda + tg\circ \pi_\lambda)}{dt\, d\lambda} |_{t=0,\,\lambda=0}$). The above list does not exhaust the possible quantities that can be derived from the pressure, but gives a selection we hope illustrates our general approach. \bigskip In the next section, we will introduce a standard tool, the \emph{transfer operator}, which allows us to analyse the pressure, and thus its many derivative properties, using basic ideas from linear operator theory. We will also describe the connection with a family of complex functions called determinants. \section[Transfer operators]{Transfer operators and determinants} A central object in thermodynamic formalism is the \emph{transfer operator}, from which important dynamical and geometric invariants such as entropy, Lyapunov exponents, invariant measures, and Hausdorff dimension can be obtained. Let us now restrict (for the present) to the case of expanding maps $T: X \to X$.
The analyticity of the pressure, as well as other properties including the proof of Lemma \ref{ruellederivatives}, depend on the use of {\it transfer operators}. Eventually, we will want to consider operators acting on spaces of analytic functions, but for the purposes of defining them it suffices for the present to consider the Banach space of $C^1$ functions $C^1(X, \mathbb C)$ with the norm $\|f\| = \|f\|_\infty + \|Df\|_\infty$. The operators are then defined as follows: \begin{definition} If $T:X \to X$ is a $C^1$ expanding map, and $g: X \to \mathbb R$ is $C^1$, then we define the {\it transfer operator} $\mathcal L_{g}: C^1(X, \mathbb C) \to C^1(X, \mathbb C)$ by $$ \mathcal L_g w(x) = \sum_{Ty = x} e^{g(y)} w(y) \,, $$ the summation being over the inverse images $y$ of the point $x \in X$. \end{definition} This operator preserves various function spaces and exhibits certain positivity properties which ensure that a Perron-Frobenius type theorem holds: when acting on $C^1(X, \mathbb C)$, $\mathcal L_g$ has a leading eigenvalue which is simple, positive, and isolated. Moreover, the connection with the pressure comes from the following basic result due to Ruelle \cite{ruellebook} (see also \cite{bowenbook}). \begin{lemma} The spectral radius of $\mathcal L_g$ is $e^{P(g)}$. In particular, $e^{P(g)}$ is a maximal isolated eigenvalue for $\mathcal L_g$. \end{lemma} In particular, the differentiability (indeed analyticity) of the pressure follows by standard perturbation theory and the expressions for the derivatives in Lemma \ref{ruellederivatives} follow by explicit manipulations. \begin{example} In the special case that $g(x)=-\log |\hbox{\rm Jac}(D_xT)|$ then $P(g)=0$ and $\mathcal L_{g}$ is known as the {\sl Ruelle-Perron-Frobenius operator}.
The eigenmeasure $m_g = \mathcal L_g^*m_g$ is normalised Lebesgue measure, and the equilibrium measure $\mu_g = h_g m_g$ is the unique $T$-invariant measure absolutely continuous with respect to Lebesgue measure, where $h_g = \mathcal L_g h_g $ is the maximal eigenfunction \cite{bowenbook}. \end{example} Thus far we have been following a very traditional approach. However, now we introduce an extra ingredient. \subsection{Determinants and their coefficients} Let $T: X \to X$ be a $C^1$ expanding map. For any continuous function $G: X \to \mathbb R$ and each period-$n$ point $T^nx=x$, $n \geq 1$, we can associate the weight $$ G^n(x):= \sum_{i=0}^{n-1} G(T^ix) \in \mathbb R.$$ Later we will want to assume that $T$ and $G$ are real analytic, but for the purposes of introducing the determinant we need only assume these weaker hypotheses. It is convenient to package up the information from individual periodic points into a single (generating) complex function. \begin{definition} Given a continuous function $G: X \to \mathbb R$ we can formally define a function of the single complex variable by: $$ D(z) = D_{G,T}(z) = \exp\left(- \sum_{n=1}^\infty \frac{z^n}{n} \sum_{T^nx=x} \frac{\exp\left({\sum_{i=0}^{n-1}G(T^ix)}\right)}{ \det(I- [D(T^n)(x)]^{-1})} \right), \quad z\in \mathbb C\,. $$ \end{definition} \bigskip The radius of convergence of the infinite series $D(z)$ is related to the pressure. More precisely, we can see that this converges to an analytic function provided the series converges, i.e., $|z| e^{P(G)} < 1 $ where $$e^{P(G)} = \lim_{n\to +\infty} \left|\sum_{T ^nx=x} \frac{\exp\left(\sum_{i=0}^{n-1}G(T^ix)\right)}{\det(I- [D(T^n)(x)]^{-1})}\right|^{1/n} \left(= \lim_{n\to +\infty} \left|\sum_{T ^nx=x} \exp\left(\sum_{i=0}^{n-1}G(T^ix)\right)\right|^{1/n} \right).
$$ In particular, writing $D(z)$ as a power series $$ D(z) = 1 + \sum_{n=1}^\infty a_n z^n,\eqno(3.1) $$ with coefficients $a_n = a_n(T,G)$ depending on $T$ and $G$, we see it has radius of convergence at least $e^{-P(G)}$. \begin{example}[Expanding maps of the interval] In the particular case of an expanding map $T: X \to X$ of the interval $X$, given a continuous function $G: X \to \mathbb R$, the function $D(z)$ takes the simpler form: $$ D(z) = \exp\left(- \sum_{n=1}^\infty \frac{z^n}{n} \sum_{T^nx=x} \frac{\exp\left({\sum_{i=0}^{n-1}G(T^ix)}\right)}{ 1- 1/(T^n)'(x)} \right), \quad z\in \mathbb C\,. $$ \end{example} This naturally leads to asking about the meromorphic extension of $D(z)$. To proceed further, we need to assume more regularity on the function $G$. This brings us to the following important result of Ruelle \cite{ruelleinventiones}. \begin{lemma}\label{ruelle-domain} If $T$ and $G$ are real analytic then \begin{enumerate} \item $D_{G,T}(z)$ is analytic in all of $\mathbb C$. \item The value $z= e^{-P(G)}$ is a simple zero for $D_{G,T}(z)$ in this extension. \end{enumerate} \end{lemma} In particular, we see from part 1 of Lemma \ref{ruelle-domain} that we can improve the result on the radius of convergence of the power series to $\lim_{n\to +\infty} |a_n|^{1/n} = 0$, i.e., for any $0 < \theta < 1$ there exists $C>0$ such that $|a_n| \leq C \theta^n$. In fact, the original proof of Lemma \ref{ruelle-domain} due to Ruelle, and inspired by work of Grothendieck, proceeds in this way, by giving estimates on the coefficients $a_n$ which establish part 1. We will later describe quite precise bounds on the coefficients $a_n$. We will return to this point in the next subsection. \begin{rem} If we assume that $T$ and $G$ are $C^\infty$ then we would still have that $\lim_{n\to +\infty} |a_n|^{1/n} = 0$. However, as we shall see, in the analytic case we have more effective estimates on $|a_n|$.
\end{rem} \subsection{Pressure and the characteristic quantities} In order to relate $D(z)$ back to the pressure, and thus the various dynamical quantities, we need to make different choices for $G$. More precisely, we can consider the special case of the function $G = g_0 + t g$ where $g_0, g: X \to \mathbb C$ and $t\in \mathbb R$. This leads to the following particular case of the previous definition. \begin{definition}\label{defn} We formally define the {\it determinant} for $g_0$, $g$ to be the bi-complex function $$ d_{g_0,g}(z, t):= \exp \left( -\sum_{n=1}^\infty \frac{z^n}{n} \sum_{T^nx=x} \frac{ \exp \left( \sum_{j=0}^{n-1} (g_0 + tg)(T^jx) \right) }{ 1 - 1/(T^n)'(x) } \right), $$ where $z,t \in \mathbb C$. \end{definition} The relationship between the determinant and the pressure in Lemma \ref{ruelle-domain} now implies the following. \begin{cor}\label{cor} Assume that $g_0$, $g$ are $C^\omega$. \begin{enumerate} \item The function $d_{g_0,g}(z, t)$ has a bi-analytic extension to all of $\mathbb C^2$. \item The value $z= e^{-P(g_0 + tg)}$ occurs as a simple zero for $z \mapsto d_{g_0,g}(z, t)$. \end{enumerate} \end{cor} We can make the further simplifying assumption that $P(g_0)=0$, since we can replace $g_0$ by $g_0 - P(g_0)$ if necessary. In particular, the first zero for $z \mapsto d_{g_0,g}(z, 0)$ (where $t=0$) occurs at $z=e^{-P(g_0)}$. We can use part 2 of Corollary \ref{cor}, and the implicit function theorem, to write the derivative of the pressure as $$ \frac{d P(g_0 + tg)}{dt}|_{t=0} = \frac{\partial d_{g_0,g}(1, t)}{\partial t}|_{t=0}/ \frac{\partial d_{g_0,g}(z, 0)}{\partial z}|_{z=1} $$ in terms of partial derivatives of the determinant. Furthermore, in light of part 1 of Corollary \ref{cor}, for each $t \in \mathbb R$ we can formally expand $$ d_{g_0,g}(z, t) = 1 + \sum_{n=1}^\infty a_n(t) z^n. $$ It is these values which we need to relate to the quantities in which we are interested, and which we ultimately want to numerically estimate.
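As a closed-form sanity check of the formula above (this example is our own illustration), consider the doubling map $T(x) = 2x \pmod 1$ with $g_0 = g = -\log |T'| = -\log 2$. Each of the $2^n - 1$ period-$n$ points satisfies $(T^n)'(x) = 2^n$, so
$$
\sum_{T^nx=x} \frac{\exp\left(\sum_{j=0}^{n-1}(g_0+tg)(T^jx)\right)}{1 - 1/(T^n)'(x)} = \frac{(2^n-1)\, 2^{-n(1+t)}}{1 - 2^{-n}} = 2^{-nt},
$$
and therefore
$$
d_{g_0,g}(z,t) = \exp\left(-\sum_{n=1}^\infty \frac{(2^{-t}z)^n}{n}\right) = 1 - 2^{-t} z.
$$
The unique zero $z = 2^{t}$ gives $P(g_0 + tg) = -t\log 2$, and the quotient of partial derivatives above evaluates to $\log 2/(-1) = -\log 2 = \int g \, d\mu$ (with $\mu$ Lebesgue measure), as predicted by Lemma \ref{ruellederivatives}.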
This gives us the simplest definition of the numbers $(a_n(t))_{n=1}^\infty$, although we can explicitly expand these in terms of periodic points too. This reveals the following simple, but crucial, fact. \begin{lemma} For each $n \in \mathbb N$, we can express the value $a_n(t)$ in terms of the periodic points of period at most $n$. \end{lemma} In particular, this ensures that the coefficients $a_n(t)$ are relatively easy to estimate. Moreover, we can rapidly approximate $ d_{g_0,g}(z, t) = 1 + \sum_{n=1}^\infty a_n(t) z^n $ by the truncated series $$ 1 + \sum_{n=1}^N a_n(t) z^n $$ for $N$ moderately large. The rapidity of the approximation is justified by the following result. \begin{cor}\label{super} The coefficients $a_n=a_n(t)$ tend to zero at a super-exponential rate. \end{cor} When $X$ is one dimensional then there exists $\theta \in(0, 1)$ such that $|a_n| = O(\theta^{n^2})$ as $n\to\infty$. We will give very explicit estimates for the implied constants in the $O(\cdot)$ term. \begin{rem} When $X$ is $d$-dimensional (with $d \geq 2$) then there exists $0 < \theta < 1$ such that $|a_n| = O(\theta^{n^{1+1/d}})$ as $n\to\infty$. \end{rem} \begin{rem} More generally, we can assume we have a family of real analytic functions $g_1, \cdots, g_m: X \to \mathbb C$ and $$ d(z, t):= \exp \left( -\sum_{n=1}^\infty \frac{z^n}{n} \sum_{T^nx=x} \frac{\exp \left( \sum_{j=0}^{n-1} (g_0 + t_1g_1 + \cdots + t_mg_m)(T^jx) \right)}{1 - 1/(T^n)'(x)} \right) $$ for $z \in \mathbb C$ and $t_1,\ldots,t_m \in \mathbb R$. \end{rem} To summarise, we now have a method of approaching the pressure function which is loosely analogous to estimating the largest eigenvalue of a matrix by computing its characteristic polynomial, a complex function whose zeros give the eigenvalues. This simple viewpoint relating the determinant and transfer operators ultimately leads to a surprisingly efficient method of computing pressure.
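The matrix analogy can in fact be made literal (a small sketch of ours, with an arbitrarily chosen $2 \times 2$ matrix): for a matrix $M$ one has $\det(I - zM) = \exp\left(-\sum_{n \geq 1} z^n \, \mathrm{tr}(M^n)/n\right)$, the traces $\mathrm{tr}(M^n)$ playing the role of the periodic-orbit sums, and the reciprocal of the smallest zero is the leading eigenvalue, just as the smallest zero of $d(z,t)$ encodes $e^{P}$.

```python
import numpy as np

def det_coefficients(M, N):
    # Coefficients a_0, ..., a_N of det(I - zM) = exp(-sum_{n>=1} z^n tr(M^n)/n),
    # built from traces by the same recursion used for the a_n above.
    tr = [0.0] * (N + 1)
    P = np.eye(len(M))
    for n in range(1, N + 1):
        P = P @ M
        tr[n] = float(np.trace(P))
    b = [0.0] + [-tr[n] / n for n in range(1, N + 1)]
    a = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        a[n] = sum(k * b[k] * a[n - k] for k in range(1, n + 1)) / n
    return a

M = np.array([[2.0, 1.0], [1.0, 1.0]])
a = det_coefficients(M, 4)                     # a_3, a_4 vanish: degree = dim M
z0 = min(np.roots([a[2], a[1], a[0]]), key=abs)
leading_eigenvalue = 1.0 / abs(z0)             # = spectral radius of M
```

Here the recursion reproduces the characteristic polynomial $1 - 3z + z^2$ of $\det(I - zM)$, and the reciprocal of its smallest zero is the spectral radius $(3+\sqrt 5)/2$ of $M$.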
Furthermore, we can accurately estimate the error terms (see \S 5). Having related the various quantities which we want to estimate to the zero(s) of the determinant $d(z,t)$ (often via the pressure function, cf. subsection 2.3) we need to answer three key questions: {\it How can we use this formulation to get numerical estimates? Why does this approach lead to superior approximation estimates? How can we estimate the quantities with validated rigour?} We can now turn to the practical problem of writing explicit expressions for the approximations to quantities we described in subsection \ref{relating}, in terms of the first $N$ coefficients in the expansion of the determinant. \subsection{Dynamical quantities and coefficients}\label{quantities} We have established (in subsection 2.3) that many of the quantities that we want to estimate can be expressed in terms of pressure, and its derivatives, which in turn can be written in terms of the determinant and its derivatives. Since the determinant has a power series expansion it is a straightforward, but useful, exercise to write these expressions explicitly in terms of the derivatives of the coefficients. More precisely, let us write $$ A = \sum_{n=1}^\infty n a_n(0),\quad B = \sum_{n=1}^\infty n(n-1) a_n(0),\quad C = \sum_{n=1}^\infty a_n'(0),\quad D = \sum_{n=1}^\infty n a_n'(0),\quad E = \sum_{n=1}^\infty a_n^{\prime \prime}(0) $$ and the associated finite sums $$ A_N = \sum_{n=1}^N n a_n(0),\ \ B_N = \sum_{n=1}^N n(n-1) a_n(0),\ \ C_N = \sum_{n=1}^N a_n'(0),\ \ D_N = \sum_{n=1}^N n a_n'(0),\ \ E_N = \sum_{n=1}^N a_n^{\prime \prime}(0) $$ for $N \geq 1$. A recurrent theme in our discussions is that we want to express the dynamical quantities in terms of $A$, $B$, $C$, $D$, $E$, etc., and then approximate these expressions by using instead the more computationally tractable quantities $A_N$, $B_N$, $C_N$, $D_N$, $E_N$. 
\begin{rem} In practical applications, even for quite simple examples, we might currently only expect to compute these values up to $N=25$, say, in a reasonable time frame. However, with further technological advances, one might expect that this value can be improved. \end{rem} To illustrate this principle, we can now reformulate the three key quantities described in subsection \ref{relatingpressure}, and their approximations, in terms of these series and summations. We list these below. \medskip \noindent {\bf (I) Lyapunov exponents.} We can write the Lyapunov exponent for $\mu$ as $$L(\mu) = -\frac{\sum_{n=1}^\infty a_n'(0)}{\sum_{n=1}^\infty n a_n(0)} = -\frac{C}{A}\,, $$ and in particular the Lyapunov exponent can be approximated by the computable quantities $$ -\frac{C_N}{A_N}, \quad N \geq 1\,. $$ \medskip \noindent {\bf (II) Variance.} The variance is given by $$ \Sigma^2 = \left(\frac{C}{A}\right)^2 + \frac{1}{A} \left(B \left(\frac{C}{A}\right)^2 - 2 D B \left(\frac{C}{A}\right) + E\right) \,, $$ and in particular we can approximate the variance by $$ \left(\frac{C_N}{A_N}\right)^2 + \frac{1}{A_N} \left(B_N \left(\frac{C_N}{A_N}\right)^2 - 2 D_N B_N \left(\frac{C_N}{A_N}\right) + E_N\right), \quad N \geq 1\,. $$ \medskip \noindent {\bf (III) Linear response.} We can write $$ \int g d\mu_T = - \frac{C}{A} \,, $$ and in particular we can approximate the integral by $$ -\frac{C_N}{A_N}, \quad N \geq 1. $$ Finally, by replacing $T$ by $T_\lambda$ and differentiating both sides in $\lambda$, we get an expression for the linear response in terms of the (derivatives of the) coefficients $a_n$. \section{Rates of mixing and dimension}\label{mixing} In this section we want to make a slight detour to introduce another two important quantities which, although not quite fitting into the same framework described in the previous section, can also be approximated using the determinant.
First we consider the rate(s) of mixing, which can be studied via the zeros of the determinant. \subsection{Rates of mixing}\label{rates} Let $X$ be $d$-dimensional and let $T: X \to X$ be a $C^\omega$ conformal expanding map. More precisely, we can write the derivative $DT(x) = w(x) \Theta(x)$ where $w: X \to \mathbb R$ and $\Theta: X \to SO(d)$. In the particular case that $d=1$ then the one dimensional map $T$ is automatically conformal. Let $g_0: X \to \mathbb R$ be real analytic and let $\mu = \mu_{g_0}$ be the equilibrium-Gibbs measure associated to $g_0$. \begin{example}As we observed in Example \ref{acim}, when $g_0(x) = -\log |\hbox{\rm Jac}(T)(x)|$ the associated measure $\mu_{g_0}$ is the unique absolutely continuous invariant probability measure. \end{example} The following is an important object in ergodic theory. \begin{definition} Given a real analytic function $g$, we can consider the {\it correlation function} defined by $$c(n)= \int g \circ T^n g d\mu - \left(\int g d\mu\right)^2, \quad n \geq 1. $$ \end{definition} Since $T:(X, \mu) \to (X, \mu)$ is mixing we know that $c(n) \to 0$, as $n \to +\infty$. The (exponential) rate of mixing is given by the smallest value $0 < \lambda_1 < 1$ such that $c(n)= O(\lambda_1^n)$ for all such $g$ and $n \geq 1$. Since $\lambda_1$ corresponds to the modulus of the second eigenvalue of the transfer operator, the connection to the determinant comes through the following simple lemma. \begin{lemma}\label{rho} The rate of mixing $0 < \lambda_1 < 1$ is the reciprocal of the modulus $\rho>1$ of the second smallest (in modulus) zero of $d_{g_0,g}(z,0)$, i.e.~$\lambda_1 = 1/\rho$. \end{lemma} In particular, the value $\rho$ in Lemma \ref{rho} is a zero of the (real valued) series $$ d_{g_0,g}(z,0) = 1 + \sum_{n=1}^\infty z^n a_n(0) $$ and so in order to get rigorous bounds on $\rho$ we can use the intermediate value theorem. 
More precisely, given $\epsilon_1, \epsilon_2>0$ and $N \in \mathbb N$ we look for bounds $\alpha_N < \rho < \beta_N$ by choosing $\alpha_N, \beta_N$ such that $$ 1 + \sum_{n=1}^N \alpha_N^n a_n(0) \leq -\epsilon_1 \hbox{ and } 1 + \sum_{n=1}^N \beta_N^n a_n(0) \geq \epsilon_2 $$ and $$ \epsilon_1 > \left|\sum_{n=N+1}^\infty \alpha_N^n a_n(0)\right| \hbox{ and } \epsilon_2 > \left|\sum_{n=N+1}^\infty \beta_N^n a_n(0)\right|. $$ \noindent Thus finding good bounds $\alpha_N < \rho < \beta_N$ reduces to: \begin{enumerate} \item getting good estimates on the coefficients $a_i(0)$ ($i=1, \cdots, N$); and \item finding effective bounds on the tails $\sum_{n=N+1}^\infty z^n a_n(0)$ at $z = \alpha_N$ and $z = \beta_N$. \end{enumerate} The first is a basic problem in practical computing. The second is a more challenging mathematical problem. We can illustrate this with a simple example. \begin{example}[Lanford map, cf.~\cite{jpv,lanford}]\label{lanford} Let $T: [0,1] \to [0,1]$ be defined by $$ T(x) = 2x + \frac{1}{2} x(1-x) \pmod 1\,. $$ Let $\mu$ denote the unique absolutely continuous $T$-invariant probability measure. We can now evaluate dynamical quantities such as the rate of mixing by approximating the infinite series for the determinant by finite truncations. For example, using periodic points up to period $N= 16$ we might estimate the first ten zeros (i.e.~the ten zeros of smallest modulus) of the determinant by locating the first ten zeros of the degree-16 polynomial truncation (see Table 1). \begin{table}[h!]
\begin{center} \begin{tabular}{| c | l |} \hline $n$ & zero $z$\\ \hline $1$ & $1.0000000000000000000000033711203720152 $\\ $2$& $1.72986531066431681927894069519181629 $\\ $3$ & $2.6922698183465737455729975178528581 $\\ $4$&$4.1132466756759777783645672672979956 $\\ $5$& $6.2454853205721291176033177124804291 $\\ $6$ & $9.4538916717397326544473431332123370$ \\ $7$ & $ 14.282734336458524434000313080802510 $ \\ $8$ & $21.549229994532327179757991408669084$ \\ $9$ & $32.516266102701803490675636630907193$ \\ $10$ & $47.82910484702218758446773289813753 $\\ \hline \end{tabular} \end{center} \caption{Estimates on the first $10$ zeros of the determinant} \end{table} More accurate estimates on the second eigenvalue $\lambda_1$ of the transfer operator, i.e.~the exponential rate of mixing, are obtained by approximating the reciprocal of the second zero of the determinant, given in Table 2, using degree-$N$ truncations with $N$ ranging up to $N=25$. \begin{table}[h!] \begin{tabular}{| c | l |} \hline $N$ & second eigenvalue $\lambda_1$ estimate\\ \hline 12& 0.5780796887515271422742765368788953299348846128812023109203951947004787498004165\\ 13& 0.5780796885356306834127405345836355663641109763750019611087170244976104563627485\\ 14& 0.5780796885371288506764371131157188309769151998850254045247866596035386808066373\\ 15& 0.5780796885371219470570630291328371225537224787114789966418506438634692131905786\\ 16& 0.5780796885371219681960432055118626393344991913606205477442507113706445878179303\\ 17& 0.5780796885371219681530107872274433995896003010891980049121721575572541941602200\\ 18& 0.5780796885371219681530690475964044549434630264578745046610737538545621059499499\\ 19& 0.5780796885371219681530689951228630661230046404665596121577924787108887084624467\\ 20& 0.5780796885371219681530689951543111607290086789469044407358342311002102577614591\\ 21&
0.5780796885371219681530689951542986173994750355886730998629996708085266331162364\\ 22& 0.5780796885371219681530689951542986207295902353572376170142239221705813115693187\\ 23& 0.5780796885371219681530689951542986207290016813780055122529428636713090405312973\\ 24& 0.5780796885371219681530689951542986207290017506309910801987018396260494147990458\\ 25& 0.5780796885371219681530689951542986207290017506255654011865278736341388970140860\\ \hline \end{tabular} \caption{Estimates on the second eigenvalue $\lambda_1$ coming from the reciprocals of zeros for $1 + \sum_{n=1}^N z^n a_n(0)$, for $12\leq N \leq 25$.} \end{table} \end{example} We will address the rigorous bounds on the error later. \begin{rem} The rate of mixing has a more subtle generalisation of the following form. The speed of mixing is controlled by a sequence of complex numbers $\lambda_j \to 0$ ordered to be decreasing in modulus, and polynomials $P_j$. More precisely, for any $\delta > 0$ there exist $M$ and $c_j = c_j(g)$ ($j=1, \cdots, M$) such that $$ c(n) = \sum_{j=1}^M c_j P_j(n)\lambda_j^n + O(\delta^n) $$ where $P_j(n)=1$ in the case of a simple zero of the determinant. Since we can rewrite $$ c(n) = \int g \left( \mathcal L_{g_0}^n g\right) d\mu - \left( \int g d\mu\right)^2 $$ the values $\lambda_j$ are actually the {\it other} non-zero eigenvalues of the transfer operator. Furthermore, $$ P_j(n) = \begin{cases} 1 & \hbox{ if } \lambda_j \hbox{ has multiplicity }m = 1\\ \hbox{ a polynomial of degree at most $m-1$} & \hbox{ if } \lambda_j \hbox{ has multiplicity } m>1.\\ \end{cases} $$ Finally, one can identify the eigenvalues in terms of the {\it other} zeros $\rho_j = 1/\lambda_j$ of $z \mapsto d_{g_0,g}(z,0)$. \noindent All this said, in the case of the Lanford map the first 10 zeros of the determinant are simple. \end{rem} \subsection{Dimension of repellers}\label{repellerssubsection} Let $T: X \to X$ be a $C^\omega$ conformal expanding map.
\begin{definition} A closed invariant set $Y \subset X$ is called a repeller if there is an open set $U$ satisfying $Y \subset U \subset X$ such that $Y = \cap_{n=1}^\infty T^{-n}U$. \end{definition} We refer the reader to the book of Falconer for the definition and basic properties of Hausdorff dimension \cite{falconer}. In the present context we have a convenient dynamical formulation. Let $g_0(x) = -\log |\hbox{Jac}(T)(x)|$ and then consider the restriction $T: Y \to Y$. The following standard result relates the dimension $ \hbox{\rm dim}_H(Y)$ to the pressure function for the transformation $T: Y \to Y$ and function $sg_0: Y \to \mathbb R$, for $s > 0$ (see \cite{ruellerepellers}). \begin{lemma}[Bowen-Ruelle]\label{br} The Hausdorff dimension of the repeller is the unique value $s = \hbox{\rm dim}_H(Y)$ such that $P(sg_0) = 0$. \end{lemma} \begin{figure} \centerline{ \begin{tikzpicture} \draw[<->] (0,6) -- (0,0) -- (8,0); \draw (-0.5,5.5) .. controls (2,2) .. (6,-1); \node at (0,-0.5) {$0$}; \node at (5.5,0.5) {$\dim_H(Y)$}; \node at (1.75,4.0) {$P(sg_0)$}; \node at (8,-0.5) {$s$}; \end{tikzpicture} } \caption{Graph of a pressure curve $s\mapsto P(sg_0)$} \end{figure} The following examples fit into the setting of Lemma \ref{br}. \begin{example} If $T:J\to J$ is a hyperbolic rational map acting on its Julia set $J$, then the Hausdorff dimension $\dim_H(J)$ is given by the unique zero $s$ of $s \mapsto P(-s\log|T'|)$ by Lemma \ref{br}. \end{example} It is a result of Ruelle \cite{ruellerepellers} that the dimension of a hyperbolic Julia set $J$ varies analytically with the map. In the interests of clarity, for the discussion below we shall restrict attention to the specific example of the quadratic map $f_c(z) = z^2+c$ where $c$ is in the main cardioid of the Mandelbrot set. \begin{thm}[Ruelle]\label{ruelleasymp} Let $c$ be in the main cardioid of the Mandelbrot set. Then the mapping $c \mapsto \dim_H(J_c)$ is real analytic.
Moreover, if $|c|$ is sufficiently small then there is an asymptotic expansion $$ \dim_H(J_c) = 1 + \frac{|c|^2}{4\log 2} + O(|c|^3)\,. $$ \end{thm} We briefly sketch the argument for the analyticity below. Although not central to our survey, for completeness we also include a brief account of the asymptotic expansion in Appendix B. To make use of Lemma \ref{br} we first need that there exists a holomorphic function $\Phi_c$ which conjugates $f_c$ to $f_0$, i.e. $f_0 \circ \Phi_c = \Phi_c \circ f_c$ (see \cite{Zin00}), with $\Psi_0(z) = z$ for $|z| > 1$, where $\Psi_c = \Phi_c^{-1}$. Moreover, both $\Phi_c$ and $\Psi_c$ are holomorphic in $c$ for $c$ in the main cardioid $\mathcal{C} = \{w (1-w) \hbox{ : } |w| < \frac{1}{2} \}$ of the Mandelbrot set. In particular, for $c$ inside this main cardioid: \begin{enumerate} \item For any $z$ with $|z|>1$, the map $c \mapsto \Psi_c(z)$ is holomorphic in $c$. \item For $z$ in the unit circle $J_0$, the map $z \mapsto \log (2|\Psi_c(z)|)$ is H\"older continuous. \item For any $c \in \mathcal{C}$, the map $z \mapsto \Psi_c(z)$ is injective on $\{z \hbox{ : } |z| > 1\}$. \end{enumerate} The function $\Psi_c$ extends to a holomorphic map on $\{z \hbox{ : } |z| \geq 1\}$, with $\Psi_c|_{J_0}: J_0 \to J_c$. By Lemma \ref{br}, $t = \dim_H(J_c)$ satisfies \begin{equation*} P(-t\log(2|\Psi_c(z)|)|_{J_0}) = 0. \end{equation*} So $(c,t) \mapsto P(-t\log(2|\Psi_c|))$ is analytic in a neighbourhood of $(0,1)$. We then have: \begin{equation*} \frac{\partial}{\partial t} \left(P(-t\log (2|\Psi_{c=0}(z)|))\right) \Big|_{t=1}= \int_{J_0} -\log (2|\Psi_{c=0}(z)|) \, dz \neq 0, \end{equation*} and so the analyticity of $c \mapsto \dim_H(J_c)$ near $c=0$ follows from the implicit function theorem. \begin{rem} Theorem \ref{ruelleasymp} can be generalised to the context of hyperbolic Julia sets of rational maps (see \cite{ruellerepellers}). \end{rem} \begin{rem} The real analyticity of Theorem \ref{ruelleasymp} may fail outside of the main cardioid of the Mandelbrot set.
For example, when $c = \frac{1}{4}$ then $ f_c(\frac{1}{2})= \frac{1}{2}$ and $|f_c'(\frac{1}{2})| = 1$, so $J_{c}$ is no longer hyperbolic, and in fact $c \mapsto \dim_H(J_c)$ is not even continuous at $\frac{1}{4}$ (see \cite{dsz}). \end{rem} The same ideas apply to the Hausdorff dimension of limit sets of certain Fuchsian groups (see, for example, \cite{bowenpublihes}). Lemma \ref{br} allows us to define the dimension $\hbox{\rm dim}_H(Y)$ implicitly in terms of the determinant (defined in terms of periodic points for $T: Y \to Y$, i.e., those contained in $Y$). In particular, setting $g_0=0$, $g = -\log |T'|$ and $z=1$ in Definition \ref{defn} we see that $s = \hbox{\rm dim}_H(Y)$ satisfies $d_{0, g}(1, s) = 0$. Equivalently, the Hausdorff dimension of the repeller is the solution $s = \hbox{\rm dim}_H(Y)$ of the equation $$1 + \sum_{n=1}^\infty a_n(s) = 0,$$ in which the series converges absolutely. As when studying the rate of mixing in the previous subsection, in practice we can use the intermediate value theorem to get effective bounds $\alpha_N < s < \beta_N$ by choosing $\alpha_N, \beta_N$ such that $$ 1 + \sum_{n=1}^N a_n(\alpha_N) \leq -\epsilon_1 \hbox{ and } 1 + \sum_{n=1}^N a_n(\beta_N) \geq \epsilon_2 $$ where $$ \epsilon_1 > \left|\sum_{n=N+1}^\infty a_n(\alpha_N)\right| \hbox{ and } \epsilon_2 > \left|\sum_{n=N+1}^\infty a_n(\beta_N)\right|. $$ Thus, as before, finding good bounds comes down to: \begin{enumerate} \item getting good estimates on the coefficients $a_i(s)$ ($i=1, \cdots, N$); and \item finding effective bounds on the tail $\sum_{n=N+1}^\infty a_n(s)$. \end{enumerate} As we mentioned before, the first is a basic problem in computing and the second is a more challenging mathematical problem which we will address in \S 5. \smallskip We now turn to a class of deceptively simple examples.
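Before doing so, the scheme above can be sanity-checked on an example where everything is available in closed form: the middle-third Cantor set, viewed as the limit set of the contractions $\psi_1(x) = x/3$ and $\psi_2(x) = (x+2)/3$. Every composition of length $n$ has constant derivative $3^{-n}$, so the periodic-orbit sums in the exponent of the determinant are simply $S_n(s) = 2^n 3^{-ns}/(1-3^{-n})$, and the zero of the truncated determinant should reproduce $\dim_H = \log 2/\log 3$. A sketch in plain floating point (the bisection bracket $[0.5, 0.75]$ is chosen by hand):

```python
def d_N(s, N):
    # Truncated determinant 1 + sum_{n<=N} a_n(s) at z = 1 for the middle-third
    # Cantor set: every one of the 2^n words of length n contributes
    # |psi_w'|^s / (1 - psi_w') = 3^(-ns) / (1 - 3^(-n)) to the orbit sum S_n.
    S = [2 ** n * 3.0 ** (-n * s) / (1 - 3.0 ** (-n)) for n in range(1, N + 1)]
    a = [1.0]
    for n in range(1, N + 1):
        # recursion n a_n = -sum_k S_k a_{n-k}, from d = exp(-sum z^n S_n / n)
        a.append(-sum(S[k - 1] * a[n - k] for k in range(1, n + 1)) / n)
    return sum(a)

def dim_estimate(N=12):
    # d_N(1, s) changes sign across the dimension, so bisect in s
    lo, hi = 0.5, 0.75
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if d_N(lo, N) * d_N(mid, N) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With $N = 12$ the computed zero already agrees with $\log 2/\log 3 = 0.63092\cdots$ to high accuracy, reflecting the very fast decay of the neglected coefficients.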
\begin{example}[Continued fractions and deleted digits] \label{deleteddigitsexample} Consider a finite set $F \subset \mathbb N$ and the set $E_F \subset [0,1]$ given by $$ E_F = \{x = [x_1, x_2, \cdots ] \hbox{ : } x_1, x_2, \cdots \in F\}, $$ i.e., the Cantor set of points whose continued fraction expansion has all coefficients lying in $F$. This can be viewed as a repeller for the map $T: E_F \to E_F$ defined by $T(x) = \frac{1}{x} \pmod 1$. \medskip \noindent (a) In the case $F = \{1,2\}$ the set $E_F$ is usually denoted $E_2$ and is of historical interest, with its Hausdorff dimension studied by Good \cite{good} as far back as 1941, after even earlier work of Jarnik \cite{jarnik}. Using our algorithm we were able to compute the dimension accurately, and rigorously, to over 100 decimal places (see \cite{jpeffective}), improving on the previous best rigorous estimate due to Falk \& Nussbaum \cite{falknussbaum}, who used a subtler variant of Ulam's method. \medskip \noindent (b) In the case $F = \{1,2,3,4,5\}$, the dimension $\dim_H (E_{ \{1,2,3,4,5\}})$ appears as a crucial ingredient in the work of Huang \cite{huang}, refining the work of Bourgain \& Kontorovich \cite{bourgainkontorovich} on a density one solution to the Zaremba Conjecture. Here we were able to use the algorithm to compute the dimension accurately, and rigorously, to 8 decimal places \cite{jpzaremba}. \end{example} \begin{rem} The Cantor sets above are also special cases of limit sets of one dimensional iterated function schemes. More precisely, we can consider a (finite) set of $C^2$ contractions $\psi_i:I \to I$ ($i=1, \cdots, k$) on the unit interval $I$ (with $\max_{1\leq i \leq k}\sup_{x\in I} |\psi_i'(x)| < 1$). The associated limit set $X \subset I$ is the smallest non-empty closed set satisfying $X = \cup_{i=1}^k \psi_i(X)$. The Hausdorff dimension $\dim_H(X)$ can then be expressed in terms of the pressure function and the corresponding version of Lemma \ref{br}.
In the particular case that $\psi_i(x)= \frac{1}{x+i} $, for $i \in F$, the limit set recovers the Cantor sets $E_F$. \end{rem} \begin{rem} There are natural analogues of continued fractions for which the contractions are $\psi_b(z) = \frac{1}{z+b}$, where $b \in \mathcal B \subset \{m + i n \hbox{ : } m \in \mathbb N, n \in \mathbb Z\}$ \cite{mu}. These map the domain $D = \{z\in \mathbb C \hbox{ : } |z-\frac{1}{2}| \leq \frac{1}{2}\}$ inside itself. We denote by $X_{\mathcal B}$ the associated limit set, i.e., the smallest non-empty closed subset of $D$ such that $\cup_{b \in \mathcal B} \psi_b(X_{\mathcal B}) = X_{\mathcal B}$. The dimension of this set was studied in \cite{mu} and in \cite{falknussbaum1} the bounds $ 1.85574 \leq \dim_H(X_{\mathcal B}) \leq 1.85590$ were established. \end{rem} \begin{rem} These examples also help to highlight some of the limitations of the periodic point method for estimating values. Most of the examples we have studied have been one dimensional and involved uniformly expanding maps (or uniformly contracting iterated function schemes) with only finitely many branches. If we consider analytic maps in higher dimensions then the method still applies, although it is less efficient as the dimension grows. However, if we consider the case of infinitely many branches then we have the complication that there can be infinitely many periodic points of any given period and this genuinely makes the method less applicable. \end{rem} \section{Rigorous error bounds} We now come to one of the most interesting and challenging aspects of the estimation problem: {\it finding rigorous upper bounds for the errors in the approximations.} In the interests of clarity, and notational simplicity, we shall explain the ideas in the particular case of estimating the Hausdorff dimension of limit sets (corresponding to conformal iterated function schemes). The more general settings require variants of this basic approach. 
In particular, we need to provide an estimate on the error when we truncate the series, which comes from bounds on the terms $|a_n(t)|$ for large values of $n$. Our bounds will involve a number of variables in whose choice we have some limited flexibility. In particular, we can select these so as to optimise the error terms. \subsection{The contraction ratio $\theta$ for expanding maps} Let us assume that we can associate to $T$ a Markov partition $\mathcal P = \{ P_1, \cdots, P_K\}$. In the particular case that $T: X \to X$ is an expanding map we can consider \begin{enumerate} \item charts and the complexification of the map $T$ to neighbourhoods $ \mathbb C^d \supset U_i \supset P_i$ ($i=1, \cdots, K$); and \item associated contractions $\psi_{ij}:U_i \to U_j$ wherever $T(\hbox{\rm int}(U_j)) \supset \hbox{\rm int}(U_i)$. \end{enumerate} In more fortunate situations we can assume that we have \emph{Bernoulli} contractions where the $U= U_i $ ($i=1, \cdots, K$) are identical and the $\psi_{ij} = \psi_i$ ($i=1, \cdots, K$) are independent of $j$. (This applies in the case of Example \ref{doubling} and the Lanford map in Example \ref{lanford}.) \begin{definition} Choose $0 < \Theta_i < 1$ such that we can choose complex polydiscs $ P_i \subset B(c_i, r_i) \subset U_i$, where $$ B(c_i, r_i) = \{\underline z \in \mathbb C^d \hbox{ : } |z_j - (c_i)_j| < r_i, \quad j = 1, \cdots, d\} $$ with $c_i \in \mathbb C^d$, such that $$ \overline {\psi_{ij}(B(c_i, r_i))} \subset B(c_j, \Theta_j r_j). $$ \bigskip Let $\theta := \max_i \{\Theta_i^{1/K}\}$. In the particular case of Bernoulli contractions we can take $\theta := \max_i \{\Theta_i\}$. \end{definition} \medskip \begin{rem} It may not always be possible to extend the contractions analytically to discs about elements of the partitions. However, in that case we can consider instead the more refined partition by elements $P_{i_0} \cap T^{-1} P_{i_1} \cap \cdots \cap T^{-n} P_{i_n} $, for suitable $n$.
\end{rem} \medskip \begin{example} For an expanding map of the interval we can take $d=1$. For a Bernoulli expanding map $T$ we would like to take $K=1$ (i.e., a single disc $B(c,r)$ and contractions $\psi_i: B(c,r) \to B(c,r)$, $i=1, \cdots, \ell$, arising from the inverse branches of $T$). The $P_i$ will be a partition of the interval into subintervals, and the discs $U_i \supset P_i$ extend into the complex plane. \end{example} \medskip \noindent \emph{First parameter choice:} Choose a real number $0 < \theta < 1$. \bigskip There is no canonical choice of $\theta$ and one can try to arrange the partition and the polydiscs so as to minimise the possible choices. This can be achieved either by trial and error, or by simple calculus. \begin{figure} \centerline{ \begin{tikzpicture} \draw (0,3.5) -- (0,-3.5); \draw (3.5,0) -- (-3.5,0); \draw[black] (0,0) circle (3cm); \draw[black, dashed] (0,0) circle (2cm); \draw[red] (1,0) circle (1cm); \draw[red] (-0.5,0) circle (1.3cm); \draw[<->] (0,0) -- (-2.12,2.12); \node at (-1.7,1.9) {$r$}; \node at (0.2,0.2) {$c$}; \draw[<->] (0,0) -- (-1.42,-1.42); \node at (-0.9,-0.5) {$\theta r$}; \node at (2.9,0.5) {$\psi_2(B(c,r))$}; \node at (-2.7,0.5) {$\psi_1(B(c,r))$}; \end{tikzpicture} } \caption{The choice of $0 < \theta < 1$ for two contractions $\psi_1, \psi_2: B(c,r) \to B(c,r) $ with $B(c, \theta r) \supset \psi_1(B(c,r)) \cup \psi_2(B(c,r))$.} \end{figure} In the next subsection we begin to elaborate on the description of the error bounds mentioned in subsection \ref{illustrative}. \subsection{Bounds on the determinant coefficients} The key to obtaining validated estimates on characteristic values is to get accurate bounds on the coefficients $a_n$, especially for large $n \geq 1$. \bigskip \noindent \emph{Second parameter choice:} Choose a natural number $N > 0$. \bigskip \noindent This should be chosen as large as is practicable.
Typically this will depend on the time, computer memory and computing power available to compute periodic points. For $\ell$ contractions, computing $a_n$ for $1 \leq n \leq N$ requires estimating of the order of $\ell^N$ periodic points. This involves quantities $\beta_k$, $t_m$ and $B_k$ which we define below. \begin{enumerate} \item[i)] Let $\epsilon_1 = \epsilon_1(N) > 0$ be the corresponding error bound, by which we mean that for $1 \le n\le N$ the coefficient $a_n$ can be computed with a guaranteed error of at most $\epsilon_1$. \end{enumerate} \bigskip \noindent \emph{Third parameter choice:} Choose a natural number $L>N$. \bigskip \noindent Typically this choice will depend on the time, computer memory and computing power available to compute integrals. This will involve us in numerically integrating (to appropriate precision) approximately $L-N$ integrals. \begin{enumerate} \item[ii)] Let $\epsilon_2 = \epsilon_2(L) > 0$ be the bound on the tail $$\sum_{k=L}^\infty \|\mathcal L q_k\|^2 \leq \frac{C\theta^L}{1-\theta}< \epsilon_2$$ where $$ \begin{aligned} \|\mathcal L q_k\|^2 &= r_i^{-2k}\int_{0}^{1} \Big| \sum_{j} \psi_j'(c_i + r_i e^{2\pi i t})\big(\psi_j (c_i + r_i e^{2\pi i t}) - c_i\big)^k \Big|^2dt \leq C\theta^k \end{aligned} $$ and where: \begin{enumerate} \item $C = K^2\max_j \|\psi_j'\|_\infty^s$; and \item $q_k(z) = (z-c_i)^kr_i^{-k}$, with $i = i(k)$ (mod $K$), \end{enumerate} where $K$ is the number of discs needed. (In the Bernoulli case we have the integral around the same curve.) \end{enumerate} \begin{enumerate} \item[iii)] Let $\epsilon_3 = \epsilon_3(L,N) > 0$ be the bound on rigorous computational estimates $(\beta_k)_{k=N}^L$ such that $$ \beta_k := \|\mathcal L q_k\|^2 = r_i^{-2k}\int_{0}^{1} \Big| \sum_{j} \psi_j'(c_i + r_i e^{2\pi i t})\big(\psi_j (c_i + r_i e^{2\pi i t}) - c_i\big)^k \Big|^2dt \hbox{ for $N < k \leq L$} $$ up to an error of $\epsilon_3$.
\end{enumerate} We can now use these choices to define a sequence $(t_m)$ for the next set of bounds. \begin{definition} We can define a sequence of positive real numbers $$ t_m:= \begin{cases} \left(\sum_{k=m+1}^L \beta_k + (L-m)\epsilon_3 + \epsilon_2 \right)^{1/2} &\hbox{ for } m \leq L\\ C \theta^m &\hbox{ for } m > L. \end{cases} $$ for $C>0$ as above. \end{definition} In particular, these numbers will tend to zero as $m\to\infty$. We can combine the values of $t_m$ by introducing the following definition of $B_k$. \begin{definition} For $1 \leq k\leq L$, define positive real numbers $$ B_k := \sum_{m_1 < \cdots < m_k \leq L} t_{m_1} \cdots t_{m_k}. $$ \end{definition} \noindent Typically, the coefficients $B_k$ will tend to zero quite quickly. Moreover, $B_k$ is defined up to an error $$ \epsilon_4 := \epsilon_3 \left(\max_i\{t_i\}\right)^{L-1}L^k.$$ The quantities $\beta_k$, $t_m$ and $B_k$ are used in giving the following bounds on the coefficients $a_n$ for the determinant. \begin{thm}[Coefficient bounds]\label{coef} Let $c = \frac{1}{\prod_{j=1}^\infty (1-\theta^j)}$. Then \begin{enumerate} \item for $N < n \leq L $ we can bound $$ |a_n| \leq \gamma_n:= c\sum_{k=1}^{n} (B_k +\epsilon_4) (\theta^L C)^{n-k}; \hbox{ and } $$ \item for $n > L$ we can bound $$ |a_n| \leq \xi (\theta^L C)^{n} \hbox{ where } \xi:= c \left( \sum_{k=1}^{L} (B_k +\epsilon_4) (\theta^L C)^{-k}\right). $$ \end{enumerate} \end{thm} \noindent In particular, we get effective bounds for $|a_n|$ when $n > N$ (combining these different bounds for $N < n \leq L$ and $n > L$). \begin{rem} In part 1 of Theorem \ref{coef} we have used the simply proven bound $$\sum_{r_1, \cdots, r_{n-k}=L}^\infty C^{n-k} \theta^{r_1+ \cdots + r_{n-k}} \leq c (\theta^L C)^{n-k}. $$ \end{rem} The proof of Theorem \ref{coef} follows the same lines as the arguments in \cite{jpeffective,jpzaremba,jpv}.
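In passing we note that, although $B_k$ is written as a sum over all $k$-element subsets, it is just the elementary symmetric function of $t_1, \cdots, t_L$, and so can be accumulated in $O(Lk)$ arithmetic operations by the standard one-pass recurrence rather than by enumerating subsets. A sketch (the function name is ours):

```python
def symmetric_sums(t, kmax):
    """Elementary symmetric polynomials B_1, ..., B_kmax of the list t.

    Standard one-pass recurrence: after processing t_m, e[k] holds the sum of
    products over k-element subsets of {t_1, ..., t_m}; updating k downwards
    avoids reusing the same t_m twice within one subset.
    """
    e = [1.0] + [0.0] * kmax
    for tm in t:
        for k in range(kmax, 0, -1):
            e[k] += tm * e[k - 1]
    return e[1:]
```

For example, `symmetric_sums([0.5, 0.25, 0.125], 3)` returns $B_1 = 0.875$, $B_2 = 0.21875$ and $B_3 = 0.015625$.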
For the convenience of the reader we give a brief account of the underlying operator theory ideas in the proof in Appendix A. \subsection{Applying the bounds} There are two different ways that the bounds in Theorem \ref{coef} might be applied to estimate the accuracy of our approximations to the relevant characteristic values, depending on the quantity in question. \bigskip \noindent {\bf (a) Explicit values.} Assume that we have an expression that can be written in terms of $d(z,t)$ and its derivatives $\frac{\partial^{i+j}}{\partial z^i \partial s^j} d(z,s)$ ($i,j \geq 0$). The value of $d(z,t)$ can be approximated using the preceding estimates: $$ \left| d(z,t)- \left(1+ \sum_{n=1}^N a_n(t) z^n \right) \right| \leq N |z|^N \epsilon_1 + \sum_{n=N+1}^L \gamma_n |z|^n + \xi \frac{(|z|\,\theta^L C)^{L+1}}{1- |z|\,\theta^L C}. $$ More generally, we can bound $$ \left| \frac{\partial^{i+j}}{\partial z^i \partial s^j} d(z,s) - \sum_{n=i}^N \frac{n!}{(n-i)!} z^{n-i} \frac{\partial^j}{\partial s^j} a_n(s)\right| $$ where the $s$-derivatives of the tail can be bounded using Cauchy's integral formula. \bigskip These estimates can be applied, for example, to computing the error terms in the estimates on Lyapunov exponents, variance and linear response as in subsection \ref{quantities}. \bigskip \noindent {\bf (b) Implicit values.} If we are seeking a zero of $d(z,s)$ then provided we can choose $z_1 < z_2$ close together with validated bounds $$d(z_1, s) < 0 < d(z_2, s) \hbox{ or } d(z_2, s) < 0 < d(z_1, s)$$ there must be a zero in the interval $[z_1,z_2]$. \bigskip \noindent This approach can be used to estimate errors in computing the rates of mixing and Hausdorff dimension as described in section \ref{mixing}. We will illustrate this in a specific instance in the next section. \section{A worked example: Hausdorff dimension of the set $E_2$} To illustrate how Theorem \ref{coef} can be applied in practice, we want to consider a particular concrete problem.
In particular, we will describe its use in estimating the Hausdorff dimension of a specific Cantor set (see \cite{jpeffective}). Recall from Example \ref{deleteddigitsexample} that $E_2$ is the subset of $[0,1]$ consisting of those reals whose continued fraction expansion contains only the numbers 1 and 2. In other words, if $$ T_1(x) := \frac{1}{1+x}\ \hbox{ and }\ T_2(x) := \frac{1}{2+x} $$ then $E_2$ is the corresponding limit set (i.e.~the smallest non-empty closed set $X$ such that $T_1(X) \cup T_2(X) = X$). Let us consider the estimation of the Hausdorff dimension of $E_2$, referring to \cite{jpeffective} for full details. \bigskip Defining $\underline i = (i_1, \cdots, i_n) \in \{1, 2\}^n$ and $|\underline i|=n$, and letting $x_{\underline i} = T_{\underline i}(x_{\underline i})$ be the fixed point for $$T_{\underline i} = T_{i_1} \circ \cdots \circ T_{i_n}: [0,1] \to [0,1]\,,$$ we have the determinant $$ d(z,t):= \exp \left(- \sum_{n=1}^\infty \frac{z^n}{n} \sum_{|\underline i|=n} \frac{|(T_{\underline i})'(x_{\underline i})|^t} {1- (T_{\underline i})'(x_{\underline i})} \right)\,, $$ and $\dim_H(E_2)$ is the value $t$ such that $d(1,t)=0$. This value is then approximated using the following four steps: \begin{enumerate} \item For each $t$ approximate $ z \mapsto d (z, t)$ by a polynomial $z \mapsto d_N (z, t)$; \item Set $z=1$ and consider $t \mapsto d_N (1, t)$; \item Solve for $t_N = t$: $d_N (1, t)=0$; \item Then $t_N \to \hbox{\rm dim}_H(E_2)$ as $N \to +\infty$. \end{enumerate} In particular, for each $N \in \mathbb N$ we obtain an approximation $t_N$ to $\hbox{\rm dim}_H(E_2)$. The sequence $t_N$ gives an intuitive estimate on the quality of the approximation in terms of the difference $|t_N - t_{N-1}|$.
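The four steps can be carried out directly in a few lines. The sketch below uses plain floating point, so it illustrates the approximation scheme rather than the validated computation of \cite{jpeffective}: each fixed point $x_{\underline i}$ is found by iterating the contraction $T_{\underline i}$, the orbit sums in the exponent of $d(z,t)$ are converted into the coefficients $a_n(t)$ via the recursion $n a_n = -\sum_{k=1}^n S_k a_{n-k}$ (obtained by differentiating the exponential), and Step 3 is a bisection in $t$ (the bracket $[0.4, 0.7]$ is chosen by hand):

```python
import itertools

def branch(i, x):        # the contractions T_1, T_2
    return 1.0 / (i + x)

def branch_deriv(i, x):  # T_i'(x) = -1/(i+x)^2
    return -1.0 / (i + x) ** 2

def orbit_data(N):
    """Pairs (n, (T_w)'(x_w)) for every word w with 1 <= |w| <= N."""
    data = []
    for n in range(1, N + 1):
        for w in itertools.product((1, 2), repeat=n):
            x = 0.5
            for _ in range(100):           # iterate T_w to its fixed point
                for i in reversed(w):
                    x = branch(i, x)
            d, y = 1.0, x                  # chain rule along the cycle
            for i in reversed(w):
                d *= branch_deriv(i, y)
                y = branch(i, y)
            data.append((n, d))
    return data

def d_N(t, data, N):
    """Truncated determinant d_N(1, t) = 1 + sum_{n<=N} a_n(t)."""
    S = [0.0] * (N + 1)
    for n, d in data:
        S[n] += abs(d) ** t / (1.0 - d)
    a = [1.0]
    for n in range(1, N + 1):
        a.append(-sum(S[k] * a[n - k] for k in range(1, n + 1)) / n)
    return sum(a)

def dim_E2(N=10):
    data = orbit_data(N)                   # independent of t, so computed once
    lo, hi = 0.4, 0.7
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if d_N(lo, data, N) * d_N(mid, data, N) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Already with modest $N$ this reproduces $\dim_H(E_2) = 0.5312805062\cdots$ to many decimal places; the rigorous computation in \cite{jpeffective} additionally controls the coefficient errors and the truncated tail.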
We can write the series expansion $$ d(z, t) = 1+ \sum_{n=1}^\infty a_n(t) z^n = \underbrace{1+ \sum_{n=1}^N a_n(t) z^n}_{=: d_N(z,t)} + \underbrace{\sum_{n=N+1}^\infty a_n(t) z^n}_{=:\epsilon_N(z,t)} $$ for some $N \geq 1$, and take for the approximating polynomial $$ d_N(z,t)= 1+ \sum_{n=1}^N a_n(t) z^n $$ where $N$ is chosen to be sufficiently large that (with $z=1$ and $0 \leq t \leq 1$) the error $\epsilon_N$ is small, but sufficiently small that the terms $a_n(t)$, $n=1,2, \cdots, N$ can be calculated in a reasonable time. In the present setting one might choose $N=25$, say. \smallskip We can explain part of the mechanism for effective estimates on the coefficients $a_n$ ($n \geq 1$) as follows. \bigskip \noindent \emph{Step 1.} Choose $z_0 \in \mathbb R$ and $r>0$ such that $$D = \{ z \in \mathbb C \hbox{ : } |z-z_0| < r\} \supset [0,1] \hbox{ and } T_1D, T_2 D \subset D.$$ For example, we could let $z_0 = 1$ and $r = \frac{3}{2}$. Consider the family of transfer operators defined on analytic functions $w: D \to \mathbb C$ by $$ {\mathcal L}_t w(z) = \frac{1}{(z+1)^{2t}} w\left( \frac{1}{z+1} \right) + \frac{1}{(z+2)^{2t}} w\left( \frac{1}{z+2} \right), \quad t \in \mathbb R. $$ \bigskip \noindent \emph{Step 2.} Let $q_k(z) = \frac{(z-z_0)^k}{r^k}$, for $z\in D$ and $k \geq 0$. We can then define $$ t_m = \left(\sum_{k=m-1}^\infty \|\mathcal L_t (q_k)\|^2 \right)^\frac{1}{2} \quad (m \geq 1), \hbox{ where } \|F\|^2 = \int_0^1| F(z_0 + re^{2\pi i u})|^2 du $$ for functions $F$ analytic on a neighbourhood of $\overline D$. \bigskip \noindent \emph{Step 3.} We can bound the coefficients $a_n$ ($n > N$) by $$ |a_n| \leq \sum_{m_1 < \cdots < m_n} t_{m_1} t_{m_2} \cdots t_{m_n}. $$ We will give more details of the underlying operator theory in Appendix A. Given $M > 0$ (in the present setting one can choose $M=600$) we can numerically estimate $\|\mathcal L_t(q_k)\|$ for $k \leq M$, and we can trivially bound $\|\mathcal L_t(q_k)\|$ for $k > M$. Combining these various bounds gives the estimate.
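To indicate where these numbers come from, the quantities $\|\mathcal L_t(q_k)\|$ in Step 2 can be approximated by discretising the integral over the circle $|z - z_0| = r$. The following crude sketch uses a uniform Riemann sum and a value of $t$ near the dimension (a rigorous implementation would also have to bound the quadrature error, which is part of what makes the computations in \cite{jpeffective} more involved):

```python
import cmath
import math

Z0, R = 1.0, 1.5  # the disc D of Step 1

def L_qk(z, k, t):
    """(L_t q_k)(z) for q_k(z) = ((z - z0)/r)^k, with principal-branch powers.

    On the circle |z - z0| = r both z + 1 and z + 2 have positive real part,
    so the complex powers below do not cross the branch cut.
    """
    q = lambda w: ((w - Z0) / R) ** k
    return (z + 1) ** (-2 * t) * q(1 / (z + 1)) + (z + 2) ** (-2 * t) * q(1 / (z + 2))

def norm_sq(k, t, M=512):
    """Riemann sum approximating int_0^1 |(L_t q_k)(z0 + r e^{2 pi i u})|^2 du."""
    total = 0.0
    for m in range(M):
        z = Z0 + R * cmath.exp(2j * math.pi * m / M)
        total += abs(L_qk(z, k, t)) ** 2
    return total / M

# the norms decay geometrically in k; this is what makes the tail sums t_m small
norms = [norm_sq(k, 0.5313) for k in range(12)]
```

The geometric decay of `norms` in $k$ reflects the contraction of the inverse branches on the disc $D$, and is the mechanism behind the bound $\|\mathcal L q_k\|^2 \leq C\theta^k$ used earlier.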
\section{Future directions} In this note we have discussed how the periodic point algorithm works and how the error terms can be efficiently estimated. However, there may be further scope to fine-tune the underlying analysis and estimates, perhaps by using different Hilbert spaces or other operators. This leads to the first very general question. \begin{question} Can we improve the approach to computing the dynamical determinant and, in particular, the error estimates? \end{question} We have illustrated the general approach with a number of applications and examples. However, we now want to propose some further potential applications. \subsection{Dynamical invariants and Lyapunov exponents} We have already discussed the theoretical use of our method to rigorously estimate certain dynamical quantities. However, this has only been carried out in practice in a small number of cases (e.g., Lyapunov exponents, variance), and it remains to explore how to estimate rigorously other quantities (e.g., resonances, linear response) for simple test examples (such as the Lanford map). \begin{question} Apply this approach to compute more dynamical invariants in simple examples. \end{question} Lyapunov exponents also occur naturally in the theory of random matrix products, which has been an active area of research since the pioneering work of Kesten and Furstenberg in the 1960s. For example, given $d \times d$ square matrices $A_1, \cdots, A_k$ {\it with positive entries} we define the {\it Lyapunov exponent} by $$ \lambda = \lim_{n\to +\infty} \frac{1}{k^n} \sum_{i_1, \cdots, i_n \in \{1, \cdots, k\}} \frac{\log \| A_{i_1} \cdots A_{i_n}\|}{n}. $$ There was an implementation of the basic algorithm in \cite{pollicottinventiones}. \begin{question} Can the error bounds in the computation of $\lambda$ be made more effective? \end{question} An interesting application is to the case of binary symmetric channels in information theory.
In this context there are positive matrices and $\lambda$ is related to a useful value called the Entropy Rate. \subsection{Connections to number theory}\label{numbertheorysubsection} We have already mentioned the application to the density one Zaremba conjecture (see \cite{jpzaremba}). However, we now want to describe a different application to number theory. Given an irrational number $\alpha$ we can associate the number $$ \mu(\alpha) = \liminf_{q \to +\infty} \min_{p \in \mathbb Z} \, q^2 \left|\alpha - \frac{p}{q}\right| $$ (i.e., the best constant in diophantine approximation for $\alpha$). The {\it Lagrange spectrum} is defined to be the set $\mathcal L = \{ 1/\mu(\alpha) \hbox{ : } \alpha \in \mathbb R - \mathbb Q \}$. On the other hand, we can consider those binary quadratic forms $f(x,y) = a x^2 + b xy + c y^2$ ($a,b,c \in \mathbb R$) with discriminant $D(f) = b^2 - 4ac > 0$ and denote $$\lambda(f) = \inf \left\{|f(x,y)| \mbox{ : } (x,y) \in \mathbb Z^2 -\{(0,0)\}\right\}/\sqrt{D(f)}.$$ The {\it Markov spectrum} is defined to be the set $\mathcal M = \{ 1/\lambda(f) \mbox{ : } f \mbox{ as above} \}$. It is known that $\mathcal L \subset \mathcal M$ and Matheus \& Moreira showed that $0.513 \cdots < \dim_H(\mathcal M - \mathcal L) < 0.98 \cdots$ \cite{matheusmoreira2}. Moreover, in their article they conjecture that the bounds can be improved to $\dim_H(\mathcal M - \mathcal L) < 0.88$, based on empirical estimates using our algorithm. \bigskip \begin{question} Obtain improved rigorous bounds on $\dim_H(\mathcal M - \mathcal L)$. \end{question} \medskip This involves rigorously computing the Hausdorff dimension of limit sets associated to iterated function systems $\{\phi_i: I \to I\}$, but with a Markov condition, i.e., there is a $0$-$1$ matrix $A$ and compositions $\phi_i \circ \phi_j$ are only allowed if $A(i,j)=1$. While the basic algorithm still applies in this setting, the major complication is to get effective error estimates.
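The value $\mu(\alpha)$ is approached along the continued fraction convergents of $\alpha$. As a concrete, well-known example, for the golden mean $\alpha = [1;1,1,\dots]$ one has $\mu(\alpha)=1/\sqrt5$, so $1/\mu(\alpha)=\sqrt5$ is the smallest point of $\mathcal L$; a sketch:

```python
import math

def convergents(cf_digits):
    """Convergents p_n/q_n of the continued fraction [a0; a1, a2, ...],
    via the standard recurrence p_n = a_n p_{n-1} + p_{n-2}."""
    p0, q0, p1, q1 = 1, 0, cf_digits[0], 1
    for a in cf_digits[1:]:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield p1, q1

# Golden mean: alpha = [1; 1, 1, 1, ...]; here q^2 |alpha - p/q| -> 1/sqrt(5)
# along convergents, illustrating mu(alpha) = 1/sqrt(5).
alpha = (1 + math.sqrt(5)) / 2
vals = [q * q * abs(alpha - p / q) for p, q in convergents([1] * 20)]
print(vals[-1], 1 / math.sqrt(5))
```

Values of $\mu(\alpha)$ above $1/3$ (the part of the spectra below $3$) are completely described by Markov's classical theory; the dimension questions above concern the complicated part of $\mathcal M$ and $\mathcal L$ beyond this range.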
\medskip \subsection{Spectral geometry and the Selberg zeta function}\label{geometrysubsection} Given a compact surface $V$, with a metric $\rho$ of constant negative curvature, we can associate the Selberg zeta function defined by $$ Z_\rho(s) = \prod_{n=0}^\infty \prod_\gamma\left(1 - e^{-(s+n)l(\gamma)} \right), \quad s \in \mathbb C, $$ where $\gamma$ denotes a primitive closed geodesic of length $l(\gamma)$. To relate this to our analysis, we recall that we can associate a piecewise $C^\omega$ expanding map of the circle (using the Bowen-Series approach) and then the zeta function can be written in terms of the determinants $\det(I-\mathcal L_s)$ of the associated transfer operators $\mathcal L_s$ (see e.g.~\cite{mayercmp, ruelleinventiones}). The zeros of $Z_\rho(s)$ have a spectral interpretation, in terms of eigenvalues of the Laplacian on $(V, \rho)$, but these can be well estimated using other techniques. Other special values such as $Z_\rho'(0)$ can be written in terms of $\frac{d}{ds}\det(I-\mathcal L_s)|_{s=0}$ and this is proportional to the much studied {\it determinant of the Laplacian}, originally defined in terms of the spectrum of the Laplacian (see e.g.~\cite{dhokerphong, friedinventiones, sarnakcmp}). \begin{question} Can we get useful bounds on $Z_\rho'(0)$? \end{question} There is a well-known problem of Sarnak to show that there is a (local) minimum for the determinant that occurs at very symmetric hyperbolic surfaces, which could be addressed using this approach. In a different direction, the Weil-Petersson metric is a classical distance on the space of such Riemannian metrics $\rho$. There is a particularly useful thermodynamic interpretation of the Weil-Petersson metric, due to McMullen, in terms of the second derivative of the associated thermodynamic pressure in the context of the piecewise $C^\omega$ expanding maps above. \begin{question} Can one get effective estimates on the Weil-Petersson metric?
\end{question} This could then be used to explore empirically the Weil-Petersson metric on the space of metrics. Moreover, there are higher dimensional analogues of the Weil-Petersson metric in \cite{bcls} which could be similarly analysed.
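Returning to the Selberg zeta function, with the standard convention $Z_\rho(s)=\prod_{n=0}^\infty\prod_\gamma\big(1-e^{-(s+n)l(\gamma)}\big)$ its truncations are easy to evaluate once (approximate) geodesic length data is available; a minimal sketch, with purely hypothetical lengths standing in for data that a real computation would generate, e.g., via the Bowen-Series coding:

```python
import math

def selberg_zeta(s, lengths, n_max=50):
    """Truncation of Z(s) = prod_{n>=0} prod_gamma (1 - e^{-(s+n) l(gamma)})
    over a finite list of primitive geodesic lengths and 0 <= n <= n_max."""
    z = 1.0
    for n in range(n_max + 1):
        for l in lengths:
            z *= 1.0 - math.exp(-(s + n) * l)
    return z

# Hypothetical length data (illustration only); rigorous work must also
# control the error from the discarded geodesics and the tail in n.
lengths = [2.0, 2.5, 3.1, 3.6]
print(selberg_zeta(2.0, lengths))
```

For real $s > 1$ every factor lies in $(0,1)$, so the truncation lies in $(0,1)$ as well; the delicate issue, as throughout this note, is bounding the truncation error rigorously.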
\section{Introduction} Suppose $M$ is an $n$-dimensional non-compact complete connected Riemannian manifold. The Riemannian path space $C_{o,T}(M)$ over $M$ is defined by $$C_{o,T}(M):=\{\gamma\in C([0,T];M):\gamma(0)=o\},$$ where $T$ is a positive constant and $o \in M$. Let $d_M$ be the Riemannian distance on $M$; then $C_{o,T}(M)$ is a Polish space under the uniform distance $$d(\gamma,\sigma):=\displaystyle\sup_{t\in[0,T]}d_M(\gamma(t),\sigma(t)),\quad\gamma,\sigma\in C_{o,T}(M).$$ Let $O(M)$ be the orthonormal frame bundle over $M$, and let $\pi: O(M) \rightarrow M$ be the canonical projection. Furthermore, we choose a standard orthonormal basis $\{H_i\}_{i=1}^n$ of horizontal vector fields on $O(M)$ and consider the following SDE, \begin{equation}\label{c1.0} \begin{cases} &\d U_t=\displaystyle\sum^n_{i=1}H_i(U_t)\circ\d W_t^i,\ \ t \in [0,\zeta),\\ & U_0=u_o, \end{cases} \end{equation} where $u_o$ is a fixed orthonormal basis of $T_o M$, $W^1_t,\cdots,W_t^n$ are independent Brownian motions on $\mathbb{R}$ and $\zeta$ is the maximal existence time of the solution. Then $X_t:=\pi(U_t),\ t \in [0,\zeta)$, is the Brownian motion on $M$ with initial point $o$, and $U_{\cdot}$ is the (stochastic) horizontal lift along $X_{\cdot}$. Throughout this paper, besides the completeness of $M$, we assume further that $M$ is stochastically complete, i.e., $\zeta=\infty$ a.s.. Let $\mu_{o,T}$ be the distribution of $X_{\cdot}$ on the time interval $[0,T]$; then $\mu_{o,T}$ is a probability measure on $C_{o,T}(M)$. From now on, we fix $o \in M$, $T=1$, and for simplicity, we write $C_o(M)$ for $C_{o,1}(M)$ and $\mu$ for $\mu_{o,1}$.
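For intuition only, the Brownian motion $X_t$ constructed above can be contrasted with a crude extrinsic simulation on a model manifold. The following sketch simulates Brownian motion on the unit sphere $S^2\subset\mathbb{R}^3$ by projecting Gaussian increments to the tangent space and renormalising (a standard toy scheme; it is not the frame-bundle SDE itself, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_on_sphere(x0, T=1.0, n_steps=2000):
    """Crude approximation of Brownian motion on S^2: take a Euclidean
    Gaussian increment, keep its tangential part, then renormalise."""
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=3)
        dw -= np.dot(dw, x) * x          # tangential component at x only
        x = x + dw
        x /= np.linalg.norm(x)           # project back onto the sphere
        path.append(x.copy())
    return np.array(path)

path = brownian_on_sphere([0.0, 0.0, 1.0])
# every point of the simulated path lies on the unit sphere
assert np.allclose(np.linalg.norm(path, axis=1), 1.0)
```

The horizontal lift $U_t$ in (\ref{c1.0}) plays the role of the moving orthonormal frame that this extrinsic scheme sidesteps by embedding the sphere in $\mathbb{R}^3$.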
Let $\F C_b$ be the space of bounded Lipschitz continuous cylinder functions on $C_{o}(M)$, i.e., for every $F \in \F C_b$, there exist some $m\geq1$, $0<t_1<t_2\cdots<t_m\leq 1$ and $f\in C^{Lip}_b(M^m)$ such that $F(\gamma)=f\big(\gamma(t_1),\cdots,\gamma(t_m)\big)$, $\gamma \in C_{o}(M)$, where $C^{Lip}_b(M^m)$ is the collection of bounded Lipschitz continuous functions on $M^m$. Suppose $\mathbb{H}$ is the standard Cameron-Martin space for $C([0,1];\mathbb{R}^n)$, i.e. \begin{equation*} \mathbb{H}:=\Big\{h\in C([0,1]; \mathbb{R}^n)\ \Big|\ h~ \text{is absolutely continuous},\ h(0)=0,\ \|h\|^2_{\mathbb{H}}:=\int_0^1| h'(s)|^2\d s<\infty\Big\}, \end{equation*} where $h'(s)$ is the derivative with respect to the time variable $s$. In fact, $\mathbb{H}$ is a separable Hilbert space with the inner product $\<h,g\>_{\mathbb{H}}:=\int_0^1 \<h'(s) , g'(s)\>\d s,\ h,g\in \mathbb H.$ For any $F\in\F C_b$ of the form $F(\gamma):=f\big(\gamma(t_1),\cdots,\gamma(t_m)\big)$ and any $h\in\mathbb{H}$, we define the directional derivative $D_h F$ as follows, \begin{equation}\label{c1.1} D_hF(\gamma):=\displaystyle\sum^m_{i=1}\<\nabla_i f\big(\gamma(t_1),\cdots,\gamma(t_m)\big),U_{t_i}(\gamma)h(t_i)\>_{T_{\gamma(t_i)}M}, \end{equation} where $\nabla_i$ is the (distributional) gradient operator for the $i$-th component on $M^m$ and $U_{\cdot}(\gamma)$ is the horizontal lift along $\gamma(\cdot)$. Note that $D_hF$ is independent of the representation of $F$, $\nabla_i f$ is defined almost everywhere with respect to the Riemannian volume measure, and the law of $\gamma(t)$, $t \in (0,1]$, under $\mu$ is absolutely continuous with respect to the Riemannian volume measure (see e.g. \cite{Hsu2}), so $D_h F$ is well defined, and it is defined $\mu$-$a.s.$ on $C_{o}(M)$.
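The appearance of the values $h(t_i)$ in (\ref{c1.1}) reflects the reproducing-kernel property of $\mathbb{H}$, which we record for convenience (the notation $e_{t_i,v}$ is introduced only here): for fixed $t_i \in (0,1]$ and $v \in \mathbb{R}^n$, the path $e_{t_i,v}(s):=(s\wedge t_i)v$ belongs to $\mathbb{H}$, and since $h(0)=0$,

```latex
\langle e_{t_i,v}, h\rangle_{\mathbb{H}}
 =\int_0^1 \Big\langle \frac{\d}{\d s}\big((s\wedge t_i)v\big),\, h'(s)\Big\rangle\,\d s
 =\int_0^{t_i} \langle v, h'(s)\rangle\,\d s
 =\langle v, h(t_i)\rangle, \qquad h \in \mathbb{H}.
```

In particular, each term of (\ref{c1.1}) is a bounded linear functional of $h$, which is what permits the identification of the gradient $DF$ via the Riesz representation theorem.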
By the Riesz representation theorem, there exists a gradient operator $DF(\gamma)\in \mathbb{H}$ such that $\<DF(\gamma),h\>_{\mathbb{H}}=D_hF(\gamma)$, $h\in\mathbb{H}$, $\gamma \in C_{o}(M)$. For every $F \in \F C_b$ with the form above, it is easy to check that $DF$ has the following expression, $$(DF(\gamma))_s=\sum^m_{i=1}(s\wedge t_i)U_{t_i}(\gamma)^{-1}\nabla_i f(\gamma),\quad \mu-a.s..$$ Since $DF$ is bounded for every $F \in \F C_b$, we can define a quadratic form as follows, \begin{equation}\label{c1.2} \E(F,G):=\int_{C_{o}(M)}\<DF,DG\>_{\mathbb{H}}\d \mu, \ \ F,G\in\F C_b. \end{equation} It is well known that if the based manifold $M$ is compact or has bounded Ricci curvature, then the integration by parts formula holds for $D_h$, hence the quadratic form $(\E, \F C_b)$ is closable. According to the theory of Dirichlet forms, it is not difficult to show that the closed extension $(\E, \mathscr D(\E))$ is a conservative local Dirichlet form on $L^2(\mu):=L^2(C_o(M),\mu)$, which is usually called the O-U Dirichlet form. For the case $M$ compact, see \cite{CM}, \cite{D1}, \cite{EL}, \cite{ES}, \cite{FM}, \cite{Hsu1}; for the case $M$ non-compact with bounded Ricci curvature, see \cite{CHL}, \cite{Hsu3}. In fact, in the integration by parts formula for $D_h$, a term depending on the Ricci curvature of the based manifold appears; to make this term integrable, it is natural to put some restrictions on the bound of the Ricci curvature. On the other hand, since the horizontal lift $U_t$ is an isometry, the quadratic form $(\E, \F C_b)$ is still well defined by (\ref{c1.2}) without any condition on the bound of the Ricci curvature. In this article, we will show the following result about the closability of $(\E, \F C_b)$. \begin{thm}\label{t1.1} The quadratic form $(\E, \F C_b)$ is closable on $L^2(\mu)$, and its closed extension $(\E, \D(\E))$ is a Dirichlet form on $ L^2(\mu)$.
\end{thm} In particular, only the completeness and the stochastic completeness of the based manifold are needed in Theorem \ref{t1.1}. Under the same condition, i.e. completeness and stochastic completeness of the based manifold, the existence of a quasi-invariant flow on $C_o(M)$ was shown in \cite{HO}. Given the closability of $(\E, \F C_b)$, we can define its closed extension $(\E, \mathscr{D}(\E))$ as the O-U Dirichlet form on $L^2(\mu)$. A natural question is which functional inequalities hold for the O-U Dirichlet form. If the based manifold is compact or has bounded Ricci curvature, the Poincar\'e inequality for the O-U Dirichlet form was first shown in \cite{F}; after that, the log-Sobolev inequality was also established for the O-U Dirichlet form, see e.g. \cite{AE}, \cite{CHL}, \cite{EL}, \cite{Hsu4}, \cite{Hsu3}. For a class of based manifolds with unbounded Ricci curvatures, a weak Poincar\'e inequality was shown in \cite{W} for the O-U Dirichlet form. In this article, we will study functional inequalities for the O-U Dirichlet form on the path space over a general non-compact manifold. Let \begin{equation}\label{e15aa}\aligned& K(\gamma):=\sup_{t \in [0,1]}\|\text{Ric}(\gamma(t))\|_{T_{\gamma(t)}M},\ \ \gamma \in C_o(M),\\&K_1(\gamma):=\inf_{t \in [0,1]}\inf_{v \in T_{\gamma(t)}M, |v|=1}\langle\text{Ric}(\gamma(t))v, v\rangle_{T_{\gamma(t)}M},\ \ \ \gamma \in C_o(M).\endaligned \end{equation} For every $R \ge 0$, we define \begin{equation}\label{e15c} \begin{split} & \tilde K(R):=\sup\Big\{\|\text{Ric}(x)\|_{T_x M}:\ x \in M,\ d_M(o,x)\le R\Big\},\ \ \\ &\tilde K_1(R) :=\inf\Big\{\langle\text{Ric}(x)v, v\rangle_{T_x M}:\ x \in M,\ d_M(o,x)\le R ,\ v \in T_{x}M,\ |v|=1\Big\}. \end{split} \end{equation} \begin{thm}\label{t1.3} (1) The following weighted log-Sobolev inequality holds, \begin{equation}\label{t1.3-1} \mu(F^2\log F^2)\le \int_{C_o(M)}\big(4+K(\gamma)^2\e^{-K_1(\gamma)}\big) \|D F\|_{\H}^2 \d \mu ,\ \ \ F \in \F C_{b,loc},
\ \mu(F^2)=1. \end{equation} (2) Suppose \begin{equation*} \tilde K(s) \le c_1(1+s^{\delta_1}),\ \ \tilde K_1(s)\ge -c_2-\delta_2 \log(1+s),\ \ \ s>0, \end{equation*} for some non-negative constants $c_1,c_2,\delta_1,\delta_2$ satisfying $2\delta_1+\delta_2\le 2$; then the following Poincar\'e inequality \begin{equation*} \mu(F^2)\le c_3\E(F,F)+\mu(F)^2,\ \ \ F \in \D(\E), \end{equation*} holds for some $c_3>0$. \end{thm} Theorem \ref{t1.3} is a combination of Theorem \ref{t4.1} and Corollary \ref{cor} below. To our knowledge, it is the first result showing that the Poincar\'e inequality holds for the O-U Dirichlet form on a path space whose based manifold may have unbounded Ricci curvature. In particular, note that the right-hand side of (\ref{t1.3-1}) may not be well defined for every $F \in \F C_b$ without any condition on the curvature bound of the based manifold, since the associated weight function may not be integrable; the weighted log-Sobolev inequality (\ref{t1.3-1}) holds for every $F \in \F C_{b,loc}$ (see (\ref{e1}) below for the definition of $\F C_{b,loc}$). As long as the based manifold is complete and stochastically complete, we can construct the damped O-U Dirichlet form on $C_o(M)$, see e.g. Example \ref{ex1} below. Moreover, the log-Sobolev inequality holds for the damped O-U Dirichlet form with constant $2$ (independent of the curvature of the based manifold), see Theorem \ref{t4.1}. In \cite{W}, a weak Poincar\'e inequality for the O-U Dirichlet form was shown under some conditions on $K_1$ and $K$, but we are not sure whether the weak Poincar\'e inequality remains true if we only assume the based manifold is complete and stochastically complete. Let \begin{equation*} \rho(\gamma):=\displaystyle\sup_{t\in[0,1]}d_M(\gamma(t),o); \end{equation*} we will show that for every $l \in C_0^{\infty}(\R)$, $l(\rho)\in \D(\E)$, where $C_0^{\infty}(\R)$ denotes the set of smooth functions on $\R$ with compact supports.
Based on this property, we can construct more general Dirichlet forms with diffusion coefficients, which can be viewed as a generalization of those in \cite{L} and \cite{WW1}. In fact, let \begin{equation}\label{e1} \F C_{b,loc}:=\Big\{Fl(\rho): F \in \F C_{b},\ \ l \in C_0^{\infty}(\R)\Big\} \end{equation} be the collection of ``local'' bounded Lipschitz continuous cylinder functions. Let $\A: C_o(M)\times \H$ $\rightarrow \H$ be a measurable operator such that: \begin{enumerate} \item[(A1)] For $\mu$-$a.s.$ $\gamma \in C_o(M)$, $\A(\gamma):\H \rightarrow \H$ is a densely defined self-adjoint operator with domain $\D(\A(\gamma))$. \item[(A2)] For every $F \in \F C_{b,loc}$, $DF(\gamma)\in \D(\A(\gamma)^{\frac{1}{2}})$ for $\mu$-$a.s.$ $\gamma \in C_o(M)$ and \begin{equation*} \int_{C_o(M)}\big|\A(\gamma)^{\frac{1}{2}}\big(DF(\gamma)\big)\big|_{\H}^2 \d \mu <\infty. \end{equation*} \item[(A3)] For each $R>0$, there exists a constant $\vv(R)>0$ such that $\A(\gamma) \ge \vv(R)\mathbf{I}$ for $\mu$-$a.s.$ $\gamma\in C_o(M)$ satisfying $\rho(\gamma)\le R$, where $\mathbf{I}$ denotes the identity operator. \end{enumerate} It is easy to see that under conditions (A1)-(A2), the following quadratic form $(\E_{\A}, \F C_{b,loc})$ is well defined, \begin{equation}\label{e2} \E_{\A}(F,G):=\int_{C_o(M)}\big\langle \A(\gamma)^{\frac{1}{2}} DF(\gamma), \A(\gamma)^{\frac{1}{2}} DG(\gamma)\big \rangle_{\H}d\mu,\ \ \ F,G \in \F C_{b,loc}. \end{equation} In this article, it will be shown that under (A1)-(A3), $(\E_{\A}, \F C_{b,loc})$ is closable, and its closed extension $(\E_{\A}, \D(\E_{\A}))$ is a local Dirichlet form. The quasi-regularity of a Dirichlet form, in particular one on an infinite dimensional space (without the locally compact property), implies the existence of the associated Hunt process for the Dirichlet form.
For a general introduction to the properties of quasi-regular Dirichlet forms on infinite dimensional spaces, we refer the reader to \cite{MR}. The quasi-regularity of the O-U Dirichlet form on the path space over a compact manifold was first shown in \cite{DR}, and the quasi-regularity of a class of Dirichlet forms with constant diffusion coefficients was established in \cite{L}. We also want to remark that if in condition (A2) above we replace the set $\F C_{b,loc}$ by $\F C_b$, and the constant $\vv(R)$ in (A3) is independent of $R>0$, then the quasi-regularity of such a Dirichlet form $(\E_{\A}, \D(\E_{\A}))$ was shown in \cite{WW1}. See \cite{EM} for the case of the Dirichlet form on a Finsler manifold, and see \cite{WW2} for the case of the Dirichlet form on free path space. Another aim of this article is to prove the quasi-regularity of $(\E_{\A}, \D(\E_{\A}))$. With the assumption (A2') introduced in (\ref{e1a}) and (\ref{e1aa}) below, we can obtain the following result. \begin{thm}\label{t1.2} Suppose (A1), (A2') and (A3) hold; then $(\E_{\A}, \D(\E_{\A}))$ is a quasi-regular Dirichlet form. \end{thm} The article is organized as follows. In the second section, we will prove the closability of the quadratic forms $(\E, \F C_b)$ and $(\E_{\A}, \F C_{b,loc})$. In the third section, we will show some functional inequalities for the O-U Dirichlet form. In the fourth section, we will prove the quasi-regularity of $(\E_{\A}, \D(\E_{\A}))$. In the Appendix, following the argument in \cite{TW}, we will prove a lemma needed in the proof of Theorem \ref{t1.1}. \section{The closability of the quadratic form} In this section, we first show that the quadratic form $(\E, \F C_b)$ defined by (\ref{c1.2}) is closable. The proof below is inspired by the cut-off procedure for Dirichlet forms, see e.g. \cite[Proposition A.1]{BLW}, and the procedure of conformal change of the metric of the based manifold, see e.g. \cite{TW}, \cite{W}, \cite{WW2}.
\begin{proof}[Proof of Theorem $\ref{t1.1}$] (1) For every $R \ge 1$, let $B_R:=\{x\in M: d_M(x,o)\leq R\}$, and choose a non-negative function $f_R \in C_0^{\infty}(M)$ such that $f_R(x)=1$ for all $ x \in B_R$ and $M_R:=\{x \in M:\ f_R(x)>0\}$ is a connected open set. We define a metric $\langle, \rangle_R$ on $M_R$ by \begin{equation*} \langle, \rangle_R:=f_R^{-2} \langle, \rangle, \end{equation*} where $\langle, \rangle$ is the Riemannian metric on $M$. Moreover, by \cite[Section 2]{TW} and \cite[Lemma 3.4]{FWW} (see also Lemma \ref{l5.1} in the Appendix below), we know $(M_R, \langle, \rangle_R)$ is a complete Riemannian manifold, and \begin{equation}\label{e3} K_R:=\sup_{M_R}\|\Ric^{(R)}\|_R <\infty \end{equation} for every $R\ge 1$, where $\Ric^{(R)}$ denotes the Ricci curvature tensor on $(M_R, \langle, \rangle_R)$. Hence $M_R$ is stochastically complete. We write $\mu_R$ for the distribution of the Brownian motion on $C_o(M_R)$; by the references listed in the introduction (see e.g. \cite{Hsu3}), there exists an O-U Dirichlet form $(\E_R, \D(\E_R))$ on $L^2(\mu_R)$ such that \begin{equation*} \E_R(F,F)=\int_{C_o(M_R)}\langle D_R F, D_R F\rangle_{\H} \d \mu_R, \ \ F \in \F C_b(M_R), \end{equation*} where $D_R$ denotes the (closed) gradient operator on $L^2(\mu_R)$. In order to compare the O-U Dirichlet forms $(\E_R, \D(\E_R))$ on the different spaces $C_o(M_R)$, we realise them on the same probability space. Suppose that $(\Omega, \F, \P)$ is a complete probability space, and $W_t$ is an $\R^n$-valued Brownian motion on this space. We consider the SDE (\ref{c1.0}) on $M$, \begin{equation}\label{e2a} \begin{cases} &\d U_t=\displaystyle\sum^n_{i=1}H_i(U_t)\circ\d W_t^i,\ \ t \in [0,1],\\ & U_0=u_o, \end{cases} \end{equation} so $X_t:=\pi(U_t)$ is the Brownian motion on $M$ and $U_{\cdot}$ is the horizontal lift along $X_{\cdot}$.
Similarly, since $\langle, \rangle_R= \langle, \rangle$ on $B_R$, we can choose an orthonormal basis $\{H_{i,R}\}_{i=1}^n$ of horizontal vector fields on $O(M_R)$ such that $H_{i,R}(u)=H_{i,m}(u)=H_i(u)$ for every $m\ge R$ when $u \in O(M_R)$ satisfies $\pi(u) \in B_R$. Let $W_t$, $u_o$ be the same as in (\ref{e2a}); we consider the following SDE, \begin{equation*} \begin{cases} &\d U_{t,R}=\displaystyle\sum^n_{i=1}H_{i,R}(U_{t,R})\circ\d W_t^i,\ \ t \in [0,1],\\ & U_{0,R}=u_o, \end{cases} \end{equation*} so $X_{\cdot,R}:=\pi(U_{\cdot,R})$ is the Brownian motion on $M_R$ and $U_{\cdot,R}$ is the horizontal lift along $X_{\cdot,R}$ on $M_R$. Moreover, letting $\tau_R:= \inf\{t\ge 0: X_t \notin B_R\}$, we have $U_{t,R}=U_{t,m}=U_t$ $\P$-$a.s.$ for every $m \ge R$, $t \le \tau_R$. Suppose $\{F_k\}_{k\ge 1}\subset \F C_b$ satisfy \begin{equation}\label{e4} \begin{split} \lim_{k \rightarrow \infty}\mu(F_k^2)=0,\ \ \ \lim_{k,m \rightarrow \infty}\E(F_k-F_m,F_k-F_m)=0.\end{split} \end{equation} For every $R\ge 1$, we can find an $l_R\in C^\infty_0(\R)$ such that \begin{equation}\label{e0a} l_R(r)=\begin{cases} & 1,\quad\quad\quad\text{if}\ |r|\le R-1,\\ & \in [0,1], ~\text{if}\ R-1<|r|<R,\\ &0,\quad\quad\quad\text{if}\ |r|\ge R, \end{cases} \end{equation} and $ \sup_{r \in \R}|l_R'(r)|\leq2$. Let $d_R$ be the Riemannian distance on $M_R$ and $$\rho_R(\gamma):=\sup_{t \in [0,1]}d_R(\gamma(t),o), \ \ \phi_R(\gamma):=l_R(\rho_R(\gamma)),\quad \gamma \in C_o(M_R).$$ We denote the gradient operator on $M_R$ by $\nabla^R$. Since $|\nabla^R d_R|_R \le 1$ and the O-U gradient operator $D_R$ on $L^2(\mu_R)$ is closed due to (\ref{e3}), by the same argument as in the proof of \cite[Lemma 2.2]{A} or \cite[Proposition 3.1]{RS}, we have $\phi_R \in \D(\E_R)$ and $\|D_R \phi_R(\gamma)\|_{\H}\le 2$ for every $R \ge 1$ and $\mu$-$a.s.$ $\gamma \in C_o(M)$.
Note that $l_R(r)=0$ if $r \ge R$ and $\rho_R(\gamma)=\rho(\gamma)$ for each $\gamma \in C_o(M_R) \subseteq C_o(M)$ satisfying $\rho_R(\gamma)\le R$, so we can extend $\phi_R$ to be defined on $C_o(M)$ by $\phi_R(\gamma):=l_R(\rho(\gamma))$ for $\mu$-$a.s.$ $\gamma \in C_o(M)$. Since $\{F_k\}_{k\ge 1}\subset \F C_b$, we may assume that for each $k\ge 1$, $$F_k(\gamma)=f_k\big(\gamma(t_1),\cdots, \gamma(t_{j_k})\big),\quad \gamma\in C_o(M)$$ for some $f_k \in C_b^{Lip}(M^{j_k})$, $j_k\ge 1$ and $0<t_1<\cdots <t_{j_k}\le 1$. Let $F_{k,R}(\gamma):=\phi_{R}(\gamma)F_k(\gamma)$. Since $\phi_R(\gamma) \neq 0$ only if $\rho(\gamma)\le R$, we can replace $f_k \in C_b^{Lip}(M^{j_k})$ by some $\tilde f_k \in C_b^{Lip}(M_R^{j_k})$ such that $f_k(x)=\tilde f_k(x)$, $\forall \ x \in B_R^{j_k}$, in the definition of $F_{k,R}$; then $F_{k,R}\big |_{C_o(M_R)} \in \F C_{b,loc}(M_R) \subseteq \D(\E_R)$. Note that $\rho(X_{\cdot})=\rho_R(X_{\cdot,R})\le R$ implies $X_{\cdot} =X_{\cdot,R}$ and $U_{\cdot} =U_{\cdot,R}$ $\P$-$a.s.$; hence it is easy to see that \begin{equation}\label{e4a} D_R F_{k,R}(X_{\cdot,R})=\phi_R(X_{\cdot,R}) DF_k(X_{\cdot})+ F_k (X_{\cdot})D_R \phi_R(X_{\cdot,R}). \end{equation} Then we obtain, for every fixed $R \ge 1$, \begin{equation}\label{c2.8}\aligned &\E_R(F_{k,R}-F_{m,R}, F_{k,R}-F_{m,R})=\int \|D_R F_{k,R}(X_{\cdot,R})-D_R F_{m,R}(X_{\cdot,R})\|_{\H}^2\d\P\\ &\le 2\int\phi_R^2(X_{\cdot,R}) \|DF_k(X_{\cdot})-DF_m(X_{\cdot})\|_{\H}^2\d\P\\& ~~~+2\int \|D_R\phi_R(X_{\cdot,R})\|_{\H}^2 |F_k(X_{\cdot})-F_m(X_{\cdot})|^2\d \P\\ &\le 2\int \|DF_k(X_{\cdot})-DF_m(X_{\cdot})\|_{\H}^2\d\P+4\int |F_k(X_{\cdot})-F_m(X_{\cdot})|^2\d \P\\ &=2\E(F_k-F_m,F_k-F_m)+4 \mu(|F_k-F_m|^2), \endaligned \end{equation} where in the second inequality above we use the properties that $\phi_R \le 1$ and $\|D_R \phi_R(\gamma)\|_{\H}\le 2$. According to (\ref{e4}), we have \begin{equation}\label{e5} \lim_{k,m\rightarrow\infty} \E_R(F_{k,R}-F_{m,R}, F_{k,R}-F_{m,R})=0.
\end{equation} Note that by (\ref{e4}) and the same procedure as above, it is not difficult to check that \begin{equation}\label{e5a} \lim_{k,m\rightarrow\infty}\mu_R(|F_{k,R}- F_{m,R}|^2) \le \lim_{k,m\rightarrow\infty}\mu(|F_{k}- F_{m}|^2)=0. \end{equation} As mentioned earlier, $(\E_R, \D(\E_R))$ is closed due to (\ref{e3}); by (\ref{e5}) and (\ref{e5a}), we derive for every fixed $R\ge 1$, \begin{equation}\label{e6} \lim_{k\rightarrow\infty}\E_R(F_{k,R},F_{k,R})=0. \end{equation} We define $\mathbf{B}_R \subseteq C_o(M)$ by \begin{equation}\label{e6aa} \mathbf{B}_R:=\{\gamma \in C_o(M):\ \rho(\gamma)\le R\}, \end{equation} and denote the complement of $\mathbf{B}_R$ by $\mathbf{B}_R^c$. For every $k,m,R\ge 1$, \begin{equation}\label{e6a} \aligned \E(F_k,F_k)&=\int\|D F_k (X_{\cdot})\|_{\H}^2\d\P= \int \|D F_k(X_{\cdot})-D_R F_{k,R}(X_{\cdot,R})+D_R F_{k,R}(X_{\cdot,R})\|_{\H}^2 \d \P\\ &\le 2\int \|D F_k(X_{\cdot})- D_R F_{k,R}(X_{\cdot,R})\|_{\H}^2 \d \P+ 2\E_R(F_{k,R},F_{k,R})\\ & \leq 4\int (1-\phi_R(X_{\cdot,R}))^2\|DF_k(X_{\cdot})\|_{\H}^2\d\P\\&~~~~~~~~~~~+ 4\int \|D_R \phi_R(X_{\cdot,R})\|_{\H}^2F_k^2(X_{\cdot})\d \P+2\E_R(F_{k,R},F_{k,R})\\ &\le 4\int_{\mathbf{B}^c_{R-1}}\|DF_k(\gamma)\|_{\H}^2\d \mu+8\int F_k^2(\gamma)\d \mu+2\E_R(F_{k,R},F_{k,R})\\ &\le 8\int_{\mathbf{B}^c_{R-1}}\|DF_m(\gamma)\|_{\H}^2\d \mu+8\E(F_k-F_m,F_k-F_m)\\& ~~~~~~~~~~~~~~~~~~~ +8\int F_k^2(\gamma)\d \mu+2\E_R(F_{k,R},F_{k,R}), \endaligned \end{equation} where in the second inequality above we use (\ref{e4a}), and the third inequality is due to the property that $\|D_R \phi_R(\gamma)\|_{\H}\le 2$ and that $\phi_R(X_{\cdot,R}) \neq 1$ only if $\rho_R (X_{\cdot,R})>R-1$, and thus $\rho (X_{\cdot})>R-1$.
According to (\ref{e4}) and (\ref{e6}), we obtain for every fixed $R,m \ge 1$, \begin{equation*} \begin{split} &\limsup_{k\rightarrow \infty}\E(F_k,F_k)\le 8\int_{\mathbf{B}^c_{R-1}}\|DF_m(\gamma)\|_{\H}^2\d \mu+8\limsup_{k \rightarrow \infty}\E(F_k-F_m,F_k-F_m). \end{split} \end{equation*} Letting first $R \rightarrow \infty$ and then $m \rightarrow \infty$ in the above inequality, we get \begin{equation*} \limsup_{k \rightarrow \infty}\E(F_k,F_k)=0, \end{equation*} hence $(\E, \F C_b)$ is closable. Let $(\E, \D(\E))$ be the closed extension of $(\E, \F C_b)$; it is easy to show the contraction property of $(\E, \D(\E))$, for example by repeating step (b) in the proof of \cite[Proposition 2.1]{WW1}. So we have proved that $(\E, \D(\E))$ is a symmetric Dirichlet form. \end{proof} \begin{lem}\label{l2.1} For every $l \in C_0^{\infty}(\R)$, $l(\rho) \in \D(\E)$, and \begin{equation}\label{t2.1.1} \|Dl(\rho(\gamma))\|_{\H}\le \sup_{r \in \R}|l'(r)|,\ \ \ \ \ \mu- a.s. \ \gamma \in C_o(M). \end{equation} \end{lem} \begin{proof} We follow the argument in the proof of \cite[Lemma 2.2]{A} or \cite[Proposition 3.1]{RS}. We take a countable dense subset $\{t_i\}_{i=1}^{\infty}$ of $(0,1]$, and define \begin{equation}\label{c*} \rho^m(\gamma):=\sup_{1\le i \le m}d_M(\gamma(t_i),o),\ \ \phi^m(\gamma):=l(\rho^m(\gamma)),\quad \gamma \in C_o(M). \end{equation} It is obvious that $\phi^m:=l(\rho^m) \in \F C_b$. Note that $\rho^m=g\big(d_M(\gamma(t_1),o),\dots,d_M(\gamma(t_m),o)\big)$, where $$g(s):=\max_{1\le i \le m}s_i, \quad s=(s_1,\dots,s_m)\in \R^m.$$ Since $g(s)$ is a Lipschitz continuous function on $\R^m$ with Lipschitz constant $1$ and $|\nabla d_M(o,x)|\le 1$, for every $m \ge 1$ we have \begin{equation*} \|D \phi^m(\gamma)\|_{\H}\le \sup_{r \in \R}|l'(r)|,\ \ \ \mu-a.s. \ \gamma \in C_o(M).
\end{equation*} By the Banach-Saks property, there exists a subsequence $\{\phi^{m_i}\}_{i=1}^{\infty}$ of $\{\phi^{m}\}$ such that for $S_N:=\frac{1}{N}\sum_{i=1}^N \phi^{m_i}$, the sequence $\{D S_N\}_{N=1}^{\infty}$ is convergent in $L^2(\mu)$. Since $\lim_{N \rightarrow \infty}\mu(|S_N-l(\rho)|^2)=0$ and $(\E, \D(\E))$ is closed, we obtain $l(\rho) \in \D(\E)$ and (\ref{t2.1.1}) holds. \end{proof} We call $(\E, \D(\E))$ the O-U Dirichlet form on $L^2(\mu)$. Let $\F C_{b,loc}$ be defined by (\ref{e1}); by Lemma \ref{l2.1}, we know $\F C_{b,loc}\subseteq \D(\E)$. Hence the quadratic form $(\E_{\A}, \F C_{b,loc})$ is well defined by (\ref{e2}). Furthermore, inspired by \cite[Theorem 2.2]{L} and \cite[Proposition 2.1]{WW1}, we can show the closability of $(\E_{\A}, \F C_{b,loc})$. \begin{prp}\label{p2.1} Suppose (A1), (A2) and (A3) hold. The quadratic form $(\E_{\A},$ $ \F C_{b,loc})$ is closable on $L^2(\mu)$, and its closed extension $(\E_{\A}, \D(\E_{\A}))$ is a Dirichlet form. \end{prp} \begin{proof} It is not difficult to show that $\F C_{b,loc}$ is dense in $L^2(\mu)$, since $\F C_b$ is dense. Suppose $\{F_k\}_{k\ge 1}\subset \F C_{b,loc}$ satisfy \begin{equation}\label{e7} \begin{split} \lim_{k \rightarrow \infty}\mu(F_k^2)=0,\ \ \ \lim_{k,m \rightarrow \infty}\E_{\A}(F_k-F_m,F_k-F_m)=0. \end{split} \end{equation} Let $X_{\cdot,R}$, $X_{\cdot}$, $\phi_R$, $F_{k,R}$, $\mathbf{B}_R$ be the same as in the proof of Theorem \ref{t1.1}. From (\ref{e7}), we know $\{\A^{\frac{1}{2}}DF_k\}_{k=1}^{\infty}$ is a Cauchy sequence in $L^2(C_o(M)\rightarrow \H;\mu)$; hence there exists a $\Phi \in L^2(C_o(M)\rightarrow \H;\mu)$ such that \begin{equation}\label{e8} \lim_{k \rightarrow \infty}\int \big\|\A^{\frac{1}{2}}DF_k-\Phi\big\|_{\H}^2 \d\mu=0. \end{equation} In order to prove the closability of $\E_{\A}$, it suffices to show $\Phi=0$.
By assumption (A3), $\A(\gamma)^{-\frac{1}{2}}$ is a bounded operator on $\A(\gamma)^{\frac{1}{2}} \big(\D(\A(\gamma)^{\frac{1}{2}})\big)$ with $\|\A(\gamma)^{-\frac{1}{2}}\|\le \frac{1}{\sqrt{\vv(R)}}$ for $\mu$-$a.s.$ $\gamma\in \mathbf{B}_R$; by the same argument as for (\ref{c2.8}), we obtain \begin{equation}\label{e7a} \aligned &\E_R (F_{k,R}-F_{m,R}, F_{k,R}-F_{m,R})\\&\le 2\int_{\mathbf{B}_R} \| \mathbf{A}^{-\frac{1}{2}}\mathbf{A}^{\frac{1}{2}}(DF_k(X_{\cdot})-DF_m(X_{\cdot}))\|_{\H}^2\d\P +4 \mu(|F_k-F_m|^2)\\&\le\frac{2\E_{\A}(F_k-F_m,F_k-F_m)}{\vv(R)} +4\mu(|F_k-F_m|^2); \endaligned \end{equation} hence, from (\ref{e7}), $\lim_{k,m \rightarrow \infty}\E_R (F_{k,R}-F_{m,R}, F_{k,R}-F_{m,R})=0$, and then (\ref{e6}) is still true due to the closability of $(\E_R, \D(\E_R))$. Note that $D_R F_{k,R}(X_{\cdot,R})=D F_k(X_{\cdot})$ for $\P$-$a.s.$ $\omega \in \Omega$ such that $\rho(X_{\cdot})\le R-1$; by (\ref{e6}) and (\ref{e8}), for every $R\ge 1$, taking a subsequence if necessary (the subsequence may depend on $R$), \begin{equation}\label{e7aa} \begin{split} & \lim_{k \rightarrow \infty}\|DF_k(X_{\cdot})\|_{\H}=0,\\ &\lim_{k \rightarrow \infty}\|\mathbf{A}(X_{\cdot})^{\frac{1}{2}}(DF_k(X_{\cdot}))- \Phi(X_{\cdot})\|_{\H}=0,\ \P-\ a.s.\ \omega \in \Omega\ \text{with}\ \rho(X_{\cdot})\le R-1. \end{split} \end{equation} Since $\A(X_{\cdot})^{\frac{1}{2}}$ is closed, from (\ref{e7aa}) we know that for every $R\ge 1$, $\Phi(X_{\cdot})=0$ for $\P$-$a.s.$ $\omega \in \Omega$ with $\rho(X_{\cdot})\le R-1$. Since $R$ is arbitrary, we have $\Phi(\gamma)=0$ for $\mu$-$a.s.$ $\gamma \in C_o(M)$. Let $(\E_{\A}, \D(\E_{\A}))$ be the closed extension of $(\E_{\A}, \F C_{b,loc})$; by the same argument as in step (b) of the proof of \cite[Proposition 2.1]{WW1}, we can show the contraction property of $\E_{\A}$, hence $(\E_{\A}, \D(\E_{\A}))$ is a Dirichlet form.
\end{proof} For a suitable choice of $\A$, we will give the following example of $(\E_{\A}, \D(\E_{\A}))$, which is the damped O-U Dirichlet form studied in \cite{EL}, \cite{FM} when the based manifold is compact. \\ \begin{exa}\label{ex1} (Damped O-U Dirichlet form) \end{exa} As in \cite{EL} and \cite{FM}, we can define $\hat \A(\gamma)$ pointwise. For every $\gamma \in C_o(M)$ and $0\le t \le s \le 1$, let $\Phi_{t,s}(\gamma) \in L(\R^n ; \R^n)$ be the solution of the following linear ODE, \begin{equation*} \frac{\d \Phi_{t,s}(\gamma)}{\d s}=-\frac{1}{2}\text{Ric}_{U_{\cdot}(\gamma)}^{\sharp}\big(\gamma(s)\big)\Phi_{t,s}(\gamma), \ \ \Phi_{t,t}=\mathbf{I},\ 0\le t\le s \le 1, \end{equation*} where $U_{\cdot}(\gamma)$ is the horizontal lift along $\gamma$, and $\text{Ric}_{U_{\cdot}(\gamma)}^{\sharp}\big(\gamma(s)\big)\in L(\R^n;\R^n)$ is defined by $\langle \text{Ric}_{U_{\cdot}(\gamma)}^{\sharp}\big(\gamma(s)\big)a, b\rangle $ $=\big \langle \text{Ric}(\gamma(s))\big(U_s(\gamma)a\big), U_s(\gamma)b\big\rangle_{T_{\gamma(s)}M} $ for every $a,b \in \R^n$. We write $\Phi_s(\gamma):=\Phi_{0,s}(\gamma)$ for simplicity. Let $K(\gamma)$ and $K_1(\gamma)$ be defined by (\ref{e15aa}); it is not difficult to see that \begin{equation}\label{e14} \|\Phi_{t,s}(\gamma)\|^2\le \e^{-K_1(\gamma)},\quad \gamma\in C_o(M). \end{equation} For every $\gamma \in C_o(M)$, we define $\hat \A(\gamma):\H \rightarrow \H$ as follows, \begin{equation}\label{e14a}\aligned &\big(\hat \A(\gamma) h\big)(t)\\&= h(t)-\frac{1}{2}\int_0^t \big(\Phi_r(\gamma)^{*}\big)^{-1}\int_r^1 \Phi_s(\gamma)^{*}\text{Ric}_{U_{\cdot}(\gamma)}^{\sharp}\big(\gamma(s)\big) h'(s)\,\d s\,\d r,\ h \in \H,\endaligned \end{equation} where $\Phi_r(\gamma)^{*}$ denotes the adjoint operator of $\Phi_r(\gamma)$.
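The estimate (\ref{e14}) is a Gronwall-type bound, and it is easy to check numerically on model data. A minimal sketch, assuming a hypothetical constant Ricci tensor $\text{Ric}^{\sharp}=\kappa\,\mathbf{I}$ with $\kappa<0$ along the path (so $K_1=\kappa$, and the exact solution is $\Phi_s=\e^{-\kappa s/2}\mathbf{I}$):

```python
import numpy as np

kappa = -1.0          # hypothetical constant lower Ricci bound, K_1 = kappa
n, steps = 2, 4000
dt = 1.0 / steps

def ric(s):
    """Ricci tensor read along the path; constant here for illustration."""
    return kappa * np.eye(n)

# Solve Phi'(s) = -(1/2) Ric(s) Phi(s), Phi(0) = I, by a second-order
# Taylor step (exact solution for this data: Phi(s) = exp(-kappa s / 2) I).
Phi = np.eye(n)
for k in range(steps):
    s = (k + 0.5) * dt
    A = -0.5 * ric(s)
    Phi = Phi + dt * A @ (Phi + 0.5 * dt * A @ Phi)

# ||Phi_{0,1}||^2 should not exceed e^{-K_1}, the bound (e14)
print(np.linalg.norm(Phi, 2) ** 2, np.exp(-kappa))
```

For this data the bound (\ref{e14}) is attained at $s=1$, which also shows that the exponent $\e^{-K_1(\gamma)}$ in the weight of the log-Sobolev inequality cannot in general be improved by this argument.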
Note that $\big(\Phi_r(\gamma)^{*}\big)^{-1}\Phi_s(\gamma)^{*}=\Phi_{r,s}(\gamma)^{*}$; from (\ref{e14}) we obtain \begin{equation}\label{e15a} \|\hat \A(\gamma)\|^2 \le 2\bigg(1+\frac{K(\gamma)^2 \e^{-K_1(\gamma)}}{4}\bigg),\ \ \ \gamma \in C_o(M), \end{equation} thus $\hat \A(\gamma)$ is a bounded operator. Let $\A(\gamma):=(\hat \A(\gamma))^*\hat \A(\gamma)$; then assumptions (A1) and (A2) hold for $\A$. On the other hand, \begin{equation}\label{e15} \langle \hat \A(\gamma) h_1, h_2\rangle_{\H}= \langle h_1, \hat h_2(\gamma)\rangle_{\H},\ \ h_1,h_2 \in \H, \end{equation} where $\hat h_2(t,\gamma):=\Phi_t(\gamma)\int_0^t \Phi_s(\gamma)^{-1} h_2'(s)\,\d s $. Let $\tilde {\mathbf A}(\gamma)$ be defined by \begin{equation*} \langle \tilde {\mathbf A}(\gamma) h_1, h_2\rangle_{\H}:= \langle h_1, \tilde h_2(\gamma)\rangle_{\H},\ \ h_1,h_2 \in \H, \end{equation*} where $\tilde h_2(\cdot,\gamma)\in \H$ and $\tilde h_2(t,\gamma):= \int_0^t \Phi_s(\gamma)\frac{\d}{\d s}\big((\Phi_s(\gamma))^{-1}h_2(s)\big)\d s$ is the solution to the following equation $$ \tilde h_2'(t,\gamma)=\frac{1}{2} \text{Ric}_{U_{\cdot}(\gamma)}^{\sharp}\big(\gamma(t)\big)h_2(t) +h_2'(t),\ \ \ \tilde h_2(0,\gamma)=0.$$ Then $\tilde {\mathbf A}(\gamma)$ is a bounded operator on $\H$ with $$\|\tilde {\mathbf A}(\gamma)\|^2\le 2\bigg(1+\frac{K(\gamma)^2}{4}\bigg),\ \ \ \gamma \in C_o(M).$$ Furthermore, by (\ref{e15}), it is easy to show that $\tilde {\mathbf A}(\gamma)\hat\A(\gamma) =\mathbf{I}$, which implies that (A3) holds for $\A$ with $$\vv(R)=\bigg(\sup_{\gamma \in \mathbf{B}_R} 2\bigg(1+\frac{K(\gamma)^2}{4}\bigg)\bigg)^{-1}.$$ By Theorem \ref{t1.1} and Theorem \ref{t1.2}, $(\E_{\A}, \F C_{b,loc})$ is closable, and its closure $(\E_{\A}, \D(\E_{\A}))$ is a Dirichlet form. We remark that without any restriction on the Ricci curvature of $M$, $\E_{\A}$ may not be well defined on $\F C_b$, since $\A^{\frac{1}{2}}$ may not be integrable.
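The identity $\tilde {\mathbf A}(\gamma)\hat\A(\gamma)=\mathbf{I}$ can be checked numerically in a toy situation (this is only an illustration, not the general setting): take $n=1$ and a constant Ricci curvature $\kappa$, so that $\Phi_s=\e^{-\kappa s/2}$. Identifying $h\in\H$ with $u=h'\in L^2([0,1])$, one computes that the two operators reduce to $(\hat A u)(t)=u(t)-\frac{\kappa}{2}\int_t^1 \e^{-\kappa(s-t)/2}u(s)\,\d s$ and $(\tilde A u)(t)=u(t)+\frac{\kappa}{2}\int_t^1 u(s)\,\d s$, and after discretisation $\tilde A\hat A\approx \mathbf{I}$:

```python
import numpy as np

# Toy check of tilde_A . hat_A = I (scalar case, constant Ricci kappa).
# h in the Cameron-Martin space H is identified with u = h' in L^2([0,1]);
# the grid uses cell midpoints t_i = (i + 1/2) * dt.
N, kappa = 400, 1.3
dt = 1.0 / N
t = (np.arange(N) + 0.5) * dt

# Quadrature weights for int_{t_i}^1 f(s) ds: full cells for j > i,
# half a cell for j = i.
W = np.triu(np.full((N, N), dt), k=1) + np.eye(N) * (dt / 2)

# (hat_A u)(t) = u(t) - (kappa/2) int_t^1 exp(-kappa (s - t)/2) u(s) ds
hat_A = np.eye(N) - 0.5 * kappa * np.exp(-0.5 * kappa * (t[None, :] - t[:, None])) * W
# (tilde_A u)(t) = u(t) + (kappa/2) int_t^1 u(s) ds
tilde_A = np.eye(N) + 0.5 * kappa * W

u = np.cos(3 * t)          # arbitrary smooth test function
v = tilde_A @ (hat_A @ u)
err = np.max(np.abs(v - u))
print(err)                 # small discretisation error
assert err < 1e-2
```

The analytic cancellation behind the assertion is elementary: with $g(t)=\int_t^1\e^{-\kappa(s-t)/2}u(s)\,\d s$ one has $g'=-u+\frac{\kappa}{2}g$, and substituting into $\tilde A\hat A u$ gives back $u$.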
Hence the domain $\D(\E_{\A})$ may not coincide with $\D(\E)$, the domain of the O-U Dirichlet form, in contrast to the case in \cite{EL}, \cite{FM}, where the base manifold is compact. \section{Functional Inequalities} Throughout this section, let $\mathbf{\Lambda}(\gamma)$ be the operator $\A(\gamma)$ defined in Example \ref{ex1}, so that $(\E_{\mathbf{\Lambda}},\D(\E_{\mathbf{\Lambda}}))$ is the damped O-U Dirichlet form on $L^2(\mu)$. We first show that the log-Sobolev inequality still holds for $(\E_{\mathbf{\Lambda}},\D(\E_{\mathbf{\Lambda}}))$. When the base manifold is compact, the corresponding result was shown in \cite[Chapter 4]{EL}. \begin{thm}\label{t4.1} The following log-Sobolev inequality holds for $(\E_{\mathbf{\Lambda}},\D(\E_{\mathbf{\Lambda}}))$, \begin{equation}\label{e16} \mu(F^2\log F^2)\le 2 \E_{\mathbf{\Lambda}}(F,F),\ \ \ \ F \in \F C_{b,loc}, \ \mu(F^2)=1. \end{equation} In particular, the following weighted log-Sobolev inequality holds, \begin{equation}\label{e16a} \mu(F^2\log F^2)\le \int_{C_o(M)}\big(4+K(\gamma)^2\e^{-K_1(\gamma)}\big) \|D F\|_{\H}^2 \d \mu ,\ \ \ F \in \F C_{b,loc}, \ \mu(F^2)=1, \end{equation} where $K(\gamma)$ and $K_1(\gamma)$ are defined by (\ref{e15aa}). \end{thm} \begin{proof} For every $R \ge 1$, let $M_R\subseteq M$ and $D_R$ be as in the proof of Theorem \ref{t1.1}. Since $M_R$ has bounded Ricci curvature, by \cite{Hsu3} (see also \cite{CHL}), the integration by parts formula holds, from which we can deduce that the Clark-Ocone formula, first developed in \cite{F} for compact $M$, is still true on $C_o(M_R)$, see e.g. \cite[Chapter 4]{EL} or \cite{Hsu3}.
Hence, following the same procedure as in \cite[Section 4.2]{EL}, based on the Clark-Ocone formula, we can show for every $R \ge 1$, \begin{equation}\label{e17} \mu_R(F^2\log F^2)\le 2 \int_{C_o(M_R)} \big\|\mathbf{\Lambda}_R^{\frac{1}{2}}D_R F\big\|_{\H}^2 \d \mu_R ,\ \ \ F \in \F C_{b,loc}(M_R), \mu_R(F^2)=1, \end{equation} where $\mathbf{\Lambda}_R^{\frac{1}{2}}D_R$ is the damped gradient operator on $C_o(M_R)$, and $\mu_R$ is the Brownian measure on $C_o(M_R)$. In particular, the constant $2$ in the log-Sobolev inequality (\ref{e17}) is independent of $R$ and of the Ricci curvature bound. Suppose $F \in \F C_{b,loc}$ with $\mu(F^2)=1$; then it has the form $F=\tilde F l(\rho)$ for some $\tilde F \in \F C_{b}$ and $l \in C_0^{\infty}(\R)$ satisfying $\text{supp}\, l \subseteq \{r \in \R: |r|\le R_0\}$ for some $R_0\ge 1$. Let $(\E_{R_0},\D(\E_{R_0}))$ be the O-U Dirichlet form on $C_o(M_{R_0})$. By the same argument as in the proof of Theorem \ref{t1.1}, we know $\hat F:=F \big |_{C_o(M_{R_0})}\in \D(\E_{R_0})$ and $$\text{supp}F = \text{supp}\hat F\subseteq \{\gamma \in C_o(M_{R_0})\subseteq C_o(M): \ \rho(\gamma)=\rho_{R_0}(\gamma)\le R_0\},$$ so $\mu_{R_0}(\hat F^2\log \hat F^2)=\mu( F^2\log F^2)$, $\mu_{R_0}(\hat F^2)=\mu(F^2)=1$. As explained in the proof of Theorem \ref{t1.1}, by modeling $C_o(M)$ and $C_o(M_{R_0})$ on the same probability space $(\Omega, \P)$, we know $U_{\cdot,R_0}=U_{\cdot}$ $\P$-$a.s.$ when $\hat F(\pi(U_{\cdot}))\neq 0$, where $U_{\cdot,R_0}$ and $U_{\cdot}$ are the horizontal lifts (along the Brownian motion) on $M_{R_0}$ and $M$ respectively, so $\int_{C_o(M)} \|\mathbf{\Lambda}^{\frac{1}{2}}D F\|_{\H}^2 \d \mu $ $=\int_{C_o(M_{R_0})} \|\mathbf{\Lambda}_{R_0}^{\frac{1}{2}}D_{R_0} \hat F\|_{\H}^2 \d \mu_{R_0} $.
Then applying (\ref{e17}) to $\hat F$, we have \begin{equation*} \mu(F^2\log F^2)\le 2 \int_{C_o(M)} \|\mathbf{\Lambda}^{\frac{1}{2}}D F\|_{\H}^2 \d \mu . \end{equation*} Since $F \in \F C_{b,loc}$ is arbitrary, this proves (\ref{e16}). By the estimate (\ref{e15a}), we obtain (\ref{e16a}) immediately from (\ref{e16}). \end{proof} As shown in \cite{AE}, \cite{CHL}, \cite{Hsu2}, if the base manifold is compact or has bounded Ricci curvature, the log-Sobolev inequality for the O-U Dirichlet form holds, but the corresponding constant depends on the uniform bound of the Ricci curvature. Hence the log-Sobolev inequality may fail if the Ricci curvature of the base manifold is unbounded, and we will instead study the weak log-Sobolev inequality introduced in \cite{CGG}, which can be used to describe the convergence rate for the entropy of the associated Markov semigroup. Based on the weighted log-Sobolev inequality (\ref{e16a}), following the techniques in \cite[Lemma 2.3]{CLW} and \cite[Theorem 1]{W}, we obtain the weak log-Sobolev inequality for the O-U Dirichlet form. \begin{thm}\label{t4.2} If \begin{equation}\label{e17aa} \lim_{R \rightarrow \infty} \frac{1}{\sqrt{\mu(\rho > R)}}\int^\infty_{R}\frac{\d s} {\sqrt{4+\e^{-\tilde K_1(s)}\tilde K(s)^2}}=\infty, \end{equation} where $\tilde K_1$ and $\tilde K$ are defined by (\ref{e15c}), then the following weak log-Sobolev inequality holds, \begin{equation}\label{e17a} \mu(F^2 \log F^2)\le \alpha(r) \E(F,F)+r\|F\|_{\infty}^2,\ \ F \in \F C_b,\ \mu(F^2)=1,\ \ 0<r\le r_0, \end{equation} for some $r_0>0$ with \begin{equation*} \begin{split} \alpha(r):=\inf_{R \in \Lambda_r}\big\{2\big(4+\tilde K(R)^2\e^{-\tilde K_1(R)}\big)\big\}<\infty, \ \ r>0, \end{split} \end{equation*} where \begin{equation}\label{e18a} \Lambda_r:=\bigg\{R>0:\ \inf_{R_1 \in (0,R)}\bigg\{\frac{2\mu(\rho > R_1)} {\big(\int^{R}_{R_1}\frac{\d s} {\sqrt{4+\e^{-\tilde K_1(s)}\tilde K(s)^2}}\big)^2}+ 3\sqrt{\mu(\rho>R_1)}\bigg\}\le r\bigg\}.
\end{equation} \end{thm} \begin{proof} For a fixed $R_1>0$, let \begin{equation*} \theta(r)=\frac{1}{\sqrt{\mu(\rho > R_1)}}\int^{R_1\vee r}_{R_1}\frac{\d s} {\sqrt{4+\e^{-\tilde K_1(s)}\tilde K(s)^2}},\ \ r\in \R. \end{equation*} For every (fixed) $R>R_1$, let \begin{equation*} g_R(r):=\Big(1-\frac{\theta(r)}{\theta(R)}\Big)^+, \ \ r \in \R; \end{equation*} then $g_R$ is a bounded Lipschitz continuous function on $\R$ with compact support; moreover $0\le g_R \le 1$, $g_R(r)=1$ if $r\leq R_1$, and $g_R(r)=0$ if $r\geq R$. So by Lemma \ref{l2.1} and an approximation argument, we have $g_R(\rho)\in \D(\E)$ and \begin{equation}\label{ee}\|D g_R(\rho)\|_{\H}^2 \le \frac{1}{(4+\e^{-\tilde K_1(\rho)}\tilde K(\rho)^2)\mu(\rho>R_1) \theta(R)^2}1_{\{\rho>R_1\}}.\end{equation} It is sufficient to prove (\ref{e17a}) for every $F \in \F C_{b,loc}$. For every $F \in \F C_{b,loc}$ with $\mu(F^2)=1$, let $F_R:=g_R(\rho)F$; we have \begin{equation*} \begin{split} &\Ent(F^2)=\mu(F^2\log F^2)= \big(\mu(F^2\log F^2)-\Ent(F_R^2)\big)+\Ent(F_R^2)\\ &=\mu\big(F^2(1-g_R^2(\rho))\log F^2\big)+ \mu\Big(F^2g_R^2(\rho)\big(-\log g_R^2(\rho)+\log \mu(F_R^2)\big)\Big)+\Ent(F_R^2)\\ &=:I_1+I_2+I_3, \end{split} \end{equation*} where $\Ent(G^2):=\mu(G^2 \log G^2)-\mu(G^2)\log \mu(G^2)$ for every measurable function $G$ on $C_o(M)$. According to the inequality $(\log x)^{+} \leq x$, $x>0$, \begin{equation*} \aligned I_1&=\int_{\{\rho>R_1\}}(1-g_R^2(\rho))F^2 \log F^2 \d \mu\\ &\leq2\|F\|_\infty^2\int_{\{\rho>R_1\}} (\log |F|)^+ \d\mu \leq2\|F\|_\infty^2\int_{\{\rho>R_1\}} |F| \d \mu\\ &\leq2\|F\|_\infty^2 \sqrt{\mu(\rho>R_1)},\endaligned \end{equation*} where in the last step we use the Cauchy-Schwarz inequality and the assumption $\mu(F^2)=1$. Since $x\log x\geq -\e^{-1}$ for any $0< x\leq1$, and $\log \mu(F_R^2)\le 0$ due to $\mu(F_R^2)\le \mu(F^2) \le 1$, we have \begin{equation*} I_2\leq \e^{-1}\int_{\{\rho>R_1\}} F^2 \d \mu\leq \|F\|_\infty^2\mu(\rho>R_1).
\end{equation*} Since $\text{supp}F_R \subseteq \mathbf{B}_R$, we apply (\ref{e16a}) to $F_R$ and obtain \begin{equation*} \begin{split} & I_3 \le \int \big(4+K(\gamma)^2 \e^{-K_1(\gamma)}\big)\|D F_R(\gamma)\|_{\H}^2 \d \mu\\ &\le 2\int \big(4+\tilde K(\rho)^2 \e^{-\tilde K_1(\rho)}\big)\big( g_R^2(\rho)\|D F\|_{\H}^2+ \|D g_R(\rho)\|_{\H}^2 F^2\big)\d \mu\\ &\le 2 \big(4+\tilde K(R)^2 \e^{-\tilde K_1(R)}\big)\E(F,F) +\frac{2\|F\|_{\infty}^2}{\theta(R)^2}, \end{split} \end{equation*} where in the last step we use (\ref{ee}). Combining the above inequalities, we obtain $$\aligned\mu(F^2\log F^2)&\leq 2\big(4+\tilde K(R)^2 \e^{-\tilde K_1(R)}\big)\E(F,F)\\ &+\Big(\frac{2}{\theta(R)^2}+3\sqrt{\mu(\rho>R_1)}\Big)\|F\|_\infty^2, \endaligned$$ from which we obtain (\ref{e17a}) immediately. In particular, (\ref{e17aa}) ensures that the set $\Lambda_r$ defined by (\ref{e18a}) is not empty when $r$ is small enough. \end{proof} Given $\tilde K$ and $\tilde K_1$, i.e. the growth rate of the Ricci curvature of the base manifold, we obtain a concrete estimate for the rate function $\alpha$ in the weak log-Sobolev inequality (\ref{e17a}). In particular, by the equivalence between a class of weak log-Sobolev inequalities and super Poincar\'e inequalities established in \cite{CGG}, under suitable conditions on $\tilde K$ and $\tilde K_1$, we can show that the super Poincar\'e inequality or the Poincar\'e inequality holds for the O-U Dirichlet form. For a general introduction to the super Poincar\'e inequality, we refer the reader to \cite{W00a} and \cite[Chapter 3]{Wbook}. \begin{cor}\label{cor} Suppose \begin{equation}\label{e18} \tilde K(s) \le c_1(1+s^{\delta_1}),\ \ \tilde K_1(s)\ge -c_2-\delta_2 \log(1+s),\ \ \ s>0, \end{equation} for some non-negative constants $c_1,c_2,\delta_1,\delta_2$.
(1) If $2\delta_1+\delta_2<2$, then the following super Poincar\'e inequality holds, \begin{equation}\label{e19a} \mu(F^2)\le r \E(F,F)+\beta(r)\mu(|F|)^2,\ \ \ F \in \D(\E),\ r>0, \end{equation} where $\beta(r)=\exp\Big(c_3\Big(1+r^{-\frac{2}{2-2\delta_1-\delta_2}}\Big)\Big)$ for some $c_3>0$. (2) If $2\delta_1+\delta_2\le 2$, then the following Poincar\'e inequality \begin{equation}\label{e20} \mu(F^2)\le c_4\E(F,F)+\mu(F)^2,\ \ \ F \in \D(\E), \end{equation} holds for some $c_4>0$. \end{cor} \begin{proof} Under the curvature condition (\ref{e18}), by \cite[Lemma 2.2]{W} (see also \cite[Page 1091 (2.6)]{WW1}), we know \begin{equation*} \mu(\rho>R_1)\le C_1 \e^{-C_2 R_1^2},\ \ \ R_1>0, \end{equation*} for some positive constants $C_1,C_2$. By (\ref{e18}), there exist positive constants $C_3, C_4$ such that for all $R_1,R$ large enough with $R\gg R_1$, \begin{equation*} \frac{1}{\sqrt{\mu(\rho > R_1)}}\int^{R}_{R_1}\frac{\d s} {\sqrt{4+\e^{-\tilde K_1(s)}\tilde K(s)^2}}\ge C_3\e^{C_4 R_1^2}, \end{equation*} so condition (\ref{e17aa}) is true. If we take $R_1=\frac{R}{2}$ in (\ref{e18a}), then there exist positive constants $C_5,C_6$ such that for any $r>0$ small enough, $\tilde R_0:=C_5+C_6\sqrt{|\log r|}\in \Lambda_r $. So by Theorem \ref{t4.2}, (\ref{e17a}) is true with the rate function \begin{equation}\label{e19} \alpha(r)=C_7 |\log r|^{\frac{2\delta_1+\delta_2}{2}},\ \ r>0, \end{equation} for some constant $C_7>0$. If $2\delta_1+\delta_2<2$, by \cite[Proposition 3.4]{CGG}, the weak log-Sobolev inequality with rate function (\ref{e19}) implies the super Poincar\'e inequality (\ref{e19a}). We remark that although their result is stated for the inequality on Euclidean space, by carefully tracking the proof one sees that it remains true in our setting. If $2\delta_1+\delta_2\le 2$, by \cite[Proposition 3.1]{CGG} (see also \cite[Lemma 2.4]{CLW}), we obtain the Poincar\'e inequality (\ref{e20}).
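For completeness, we sketch the short computation behind (\ref{e19}): taking $R=\tilde R_0=C_5+C_6\sqrt{|\log r|}$ in the definition of $\alpha$ and using the growth condition (\ref{e18}),
\begin{equation*}
\alpha(r)\le 2\big(4+\tilde K(\tilde R_0)^2\e^{-\tilde K_1(\tilde R_0)}\big)
\le 2\Big(4+c_1^2\big(1+\tilde R_0^{\delta_1}\big)^2\e^{c_2}\big(1+\tilde R_0\big)^{\delta_2}\Big)
\le C\,\tilde R_0^{\,2\delta_1+\delta_2}
\le C_7|\log r|^{\frac{2\delta_1+\delta_2}{2}}
\end{equation*}
for some constant $C>0$ and all $r>0$ small enough.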
\end{proof} \section{Quasi-regularity of the Dirichlet form} In this section, we study the quasi-regularity of the Dirichlet form $(\E_{\A}, \D(\E_{\A}))$. Let $L^{\infty}(C_o(M)\rightarrow \R^n;\mu)$ be the set of measurable vectors $v: C_o(M) \rightarrow \R^n$ such that $\|v\|_{L^{\infty}}<\infty$, and let $L^{\infty}_{loc}(C_o(M)\rightarrow \R^n;\mu)$ be the collection of measurable vectors $v: C_o(M) \rightarrow \R^n$ such that $v1_{\mathbf{B}_R} \in L^{\infty}(C_o(M)\rightarrow \R^n;\mu)$ for every $R \ge 1$, where $\mathbf{B}_R$ is defined by (\ref{e6aa}). For every $t \in [0,1]$, $v \in L^{\infty}_{loc}(C_o(M)\rightarrow \R^n;\mu)$, we define $\Psi_{t,v}(\cdot,\gamma) \in \H$ as follows, \begin{equation*} \Psi_{t,v}(s,\gamma):= \big(s \wedge t\big)v(\gamma),\ \ \ s \in [0,1],\ \gamma \in C_o(M). \end{equation*} Let $l_R \in C_0^{\infty}(\R)$ be the function constructed in (\ref{e0a}); for every $R \ge 1$ we define \begin{equation}\label{e0aa} \phi_R(\gamma):=l_R(\rho(\gamma)),\ \ \gamma \in C_o(M). \end{equation} In order to prove the quasi-regularity of $(\E_{\A}, \D(\E_{\A}))$, we need to impose a condition stronger than (A2). We assume \begin{enumerate} \item[(A2')] For every $v \in L^{\infty}_{loc}(C_o(M)\rightarrow \R^n;\mu)$, $\Psi_{t,v}(\cdot,\gamma) \in \D(\A(\gamma)^{\frac{1}{2}})$ $\mu$-$a.s.$. For every $R \ge 1$, $D\phi_R(\gamma) \in \D(\A(\gamma)^{\frac{1}{2}})$ $\mu$-$a.s.$, and there exists a constant $0<c_1(R)<\infty$ such that \begin{equation}\label{e1a} \begin{split} &\int_{\mathbf{B}_R}\|\A^{\frac{1}{2}}\Psi_{t,v}\|_{\H}^2\d \mu\le c_1(R)\|v1_{\mathbf{B}_R}\|^2_{L^{\infty}}, \end{split} \end{equation} \begin{equation}\label{e1aa} \begin{split} &\int_{\mathbf{B}_R}\big\|\A^{\frac{1}{2}}(D\phi_R)\big\|_{\H}^2\d \mu\le c_1(R), \end{split} \end{equation} for every $v \in L^{\infty}_{loc}(C_o(M)\rightarrow \R^n;\mu)$ and $t \in (0,1]$.
\end{enumerate} Assumption (A2') is a local version of assumption (A1) in \cite[Page 1087]{WW1}, and it is obvious that (A2') implies (A2). Now we prove Theorem \ref{t1.2}; the proof is inspired by the argument developed in \cite{DR} for the path space over a compact manifold. \begin{proof}[Proof of Theorem $\ref{t1.2}$] (a) We need to check (i)-(iii) in \cite[Definition IV-3.1]{MR}. Since every $F \in \F C_{b,loc}$ is a continuous function on $C_o(M)$, and $\F C_{b,loc}$ is dense in $\D(\E_{\A})$ under the $\E_{\A,1}$ norm, (ii) of \cite[Definition IV-3.1]{MR} is satisfied. Let $\{t_i\}_{i=1}^\infty$ be a countable dense subset of $(0,1]$ and $\{\phi_R\}_{R=1}^{\infty}$ the functions defined by (\ref{e0aa}); we may choose a countable dense subset $\{\eta_m\}_{m=1}^{\infty}$ $\subseteq C_0^{\infty}(M)$ which separates the points of $M$. Define \begin{equation*} \begin{split} &\mathbf{S}:=\Big\{F_{R,m,i}|F_{R,m,i}(\gamma)=\phi_R(\gamma)\eta_m\big(\gamma(t_i)\big),\ \ R,i,m \in \mathbb{N}_+, \gamma\in C_o(M)\Big\}; \end{split} \end{equation*} it is obvious that $\mathbf{S}\subseteq \D(\E_{\A})$ is a countable set and every $F \in \mathbf{S}$ is a continuous function. For any $\gamma, \sigma \in C_o(M)$ with $\gamma\neq\sigma$, there exists $t_{i_0}$ such that $\gamma(t_{i_0}) \neq \sigma(t_{i_0})$, and we can choose $\eta_{m_0} \in C_0^{\infty}(M)$ with $\eta_{m_0}(\gamma(t_{i_0})) \neq \eta_{m_0}(\sigma(t_{i_0}))$, since $\{\eta_m\}$ separates the points of $M$. We can also find $R_0 \ge \max\{\rho(\gamma),\rho(\sigma)\}+1$, hence $\phi_{R_0}(\gamma)=\phi_{R_0}(\sigma)=1$ by definition. Let $\tilde F(\gamma):=\phi_{R_0}(\gamma)\eta_{m_0}\big(\gamma(t_{i_0})\big)$; then $\tilde F\in \mathbf{S}$ and $\tilde F(\gamma) \neq \tilde F(\sigma)$, which implies that $\mathbf{S}$ separates the points of $C_o(M)$, so (iii) of \cite[Definition IV-3.1]{MR} is true. In the following, it suffices to check (i) of \cite[Definition IV-3.1]{MR}, i.e.
(see \cite[Remark IV-3.2]{MR}) to find a sequence of compact sets $\{\mathbf{K}_k\}_{k=1}^{\infty}\subset C_o(M)$ such that \begin{equation}\label{e9} \displaystyle\lim_{k\to\infty}\Cap(C_o(M)\backslash \mathbf{K}_k)=0, \end{equation} where $\Cap$ is the capacity induced by $(\E_{\A},\D(\E_{\A}))$, see e.g. \cite[Page 606]{DR}. (b) To construct $\mathbf{K}_k$, we use the method developed in \cite{DR}; the main difference between our situation and that in \cite{DR} is that we can only take local test functions $G^j_{\sigma,R}$ (defined below), due to the lack of uniform control of $\E_{\A}$ in the absence of curvature conditions. For the reader's convenience, we write out the whole procedure explicitly. By the Nash embedding theorem, there exists $\varphi:M \rightarrow \R^N$ (for some $N \in \mathbb{N}_+$) such that \begin{equation*} M\ni x\mapsto\varphi(x):=\big(\varphi_1(x),\cdots,\varphi_N(x)\big)\in \mathbb{R}^N \end{equation*} is a smooth isometric embedding, i.e. $M$ is isometric to $\varphi(M)$ endowed with the induced metric of $\mathbb{R}^N$. As introduced before, the distance $d$ defined by $d(\gamma,\sigma):=\sup_{t\in[0,1]}d_M(\gamma(t),\sigma(t))$, $\gamma,\sigma \in C_o(M)$, is compatible with the topology on $C_o(M)$. Let $d_0(x,y):=|\varphi(x)-\varphi(y)|$, $ x,y\in M$, and \begin{equation}\label{e8aa} \bar{d}(\gamma,\sigma):=\displaystyle\sup_{t\in [0,1]}d_0(\gamma(t),\sigma(t)), \ \ \ \gamma,\sigma \in C_o(M). \end{equation} Repeating the procedure in step (1) of the proof of \cite[Page 1092, Theorem 1.1]{WW1}, we know that $\bar d$ induces the same topology on $C_o(M)$ as $d$ does. (c) From now on, we take $R$ to be a positive integer. Let $\{t_j\}_{j=1}^\infty$ be a countable dense subset of $(0,1]$, and let $\{\phi_R\}_{R=1}^{\infty}$ be defined by (\ref{e0aa}).
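The mechanism behind the equivalence of $d$ and $\bar d$ on compact pieces can be illustrated in the simplest toy case (not the setting of this text): the unit circle isometrically embedded in $\R^2$, where the intrinsic distance is the arc length $\theta$ and the embedded distance $d_0$ is the chord length $2\sin(\theta/2)$, so that $2\sin(\theta/2)\le\theta\le\frac{\pi}{2}\cdot 2\sin(\theta/2)$ on $[0,\pi]$:

```python
import numpy as np

# Toy illustration: on the unit circle embedded in R^2, the chordal distance
# d_0 = |phi(x) - phi(y)| and the intrinsic distance theta are two-sidedly
# comparable, hence induce the same topology (the mechanism behind d and bar-d).
theta = np.linspace(0.0, np.pi, 1001)   # intrinsic (arc-length) distances
chord = 2.0 * np.sin(theta / 2.0)       # embedded (chordal) distances

# chord <= theta <= (pi/2) * chord on [0, pi]
assert np.all(chord <= theta + 1e-12)
assert np.all(theta <= (np.pi / 2.0) * chord + 1e-12)
print("two-sided comparison holds on [0, pi]")
```

The upper bound $\theta\le\pi\sin(\theta/2)$ holds because $\pi\sin(\theta/2)-\theta$ is concave on $[0,\pi]$ and vanishes at both endpoints.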
For every fixed $R,j\ge 1$ and $\sigma \in C_o(M)$, we define \begin{equation*} G^j_{\sigma,R}(\gamma):=\displaystyle \phi_R(\gamma)\big(\sup_{1\leq i\leq N}\displaystyle| \varphi_i(\gamma(t_j))-\varphi_i(\sigma(t_j))|\wedge 1\big), \quad \ \ \gamma \in C_o(M). \end{equation*} Notice that $\phi_R(\gamma)\neq 0$ only if $\gamma\in \mathbf{B}_R$, so the above definition remains unchanged if we replace $\varphi_i(\gamma(t_j))$ by $\tilde \varphi_i(\gamma(t_j))$, where $\tilde \varphi_i \in C_0^{\infty}(M)$ with $\tilde \varphi_i(x)=\varphi_i(x)$ for each $x \in B_R$. Hence $G^j_{\sigma,R} \in \F C_{b,loc}\subseteq \D(\E_{\A})$. From assumption (A2'), we have \begin{equation}\label{e9a} \begin{split} \E_{\A}(G^j_{\sigma,R},G^j_{\sigma,R}) \leq 2c_1(R)(1+c_2(R)), \ \ \ R,j\ge 1, \sigma \in C_o(M), \end{split} \end{equation} where $c_1(R)$ is the constant in assumption (A2'), and $c_2(R):=\max_{1\le i \le N} \sup_{x \in B_R}$ $|\nabla \varphi_i(x)|^2$. For every fixed $R \ge 1$ and $\sigma \in C_o(M)$, we define \begin{equation*} G_{\sigma,R}(\gamma):=\phi_R(\gamma)\big(\displaystyle\sup_{1\leq i\leq N}\displaystyle\sup_{t\in[0,1]} |\varphi_i(\gamma(t))-\varphi_i(\sigma(t))|\wedge 1\big). \end{equation*} By the dominated convergence theorem, \begin{equation*} \lim_{k \rightarrow \infty}\mu\Big(\Big|\sup_{1\le j \le k}G_{\sigma,R}^j-G_{\sigma,R}\Big|^2\Big)=0. \end{equation*} Since $(\E_{\A}, \D(\E_{\A}))$ is closed, by (\ref{e9a}) and the same argument as in the proof of Lemma \ref{l2.1} (see also \cite[Proposition 3.1]{RS}), we have $G_{\sigma,R} \in \D(\E_{\A})$ and \begin{equation}\label{e10} \E_{\A}(G_{\sigma,R},G_{\sigma,R}) \le 2c_1(R)(1+c_2(R)), \ \ \ R\ge 1, \sigma \in C_o(M). \end{equation} (d) Let $\{\sigma_i\}_{i=1}^{\infty}$ be a countable dense subset of $C_o(M)$. For any $m,R\ge 1$, let \begin{equation*} G_{m,R}(\gamma):=\displaystyle\inf_{1\leq i\leq m}G_{\sigma_i,R}(\gamma),\ \ \ \gamma \in C_o(M).
\end{equation*} Similarly to the above argument, we obtain from (\ref{e10}) that $G_{m,R} \in \D(\E_{\A})$ and for every $m,R\ge 1$, \begin{equation}\label{e10a} \E_{\A}(G_{m,R},G_{m,R}) \le 2c_1(R)(1+c_2(R)). \end{equation} Since $\{\sigma_i\}_{i=1}^{\infty}\subset C_o(M)$ is dense and $\bar d$ induces the same topology as that induced by $d$, by the dominated convergence theorem we obtain for every $R\ge 1$, \begin{equation}\label{e10a1} \lim_{m \rightarrow \infty}\mu \big(|G_{m,R}|^2\big)=0. \end{equation} For a fixed $R \ge 1$, due to (\ref{e10a}) and (\ref{e10a1}), repeating the argument in the proof of Lemma \ref{l2.1} or \cite[Proposition 3.1]{RS} (by the Banach-Saks property), there exists a subsequence $\{m_i^R\}_{i=1}^{\infty}$ such that \begin{equation}\label{e10a2} \lim_{j \rightarrow \infty}\E_{\A}(\bar G_{j,R}, \bar G_{j,R})=0, \end{equation} where $\bar G_{j,R}:=\frac{1}{j}\sum_{i=1}^j G_{m_i^R,R}$. We can also find a subsequence $\{j^R\}_{j=1}^{\infty}$ $\subseteq \{j\}_{j=1}^{\infty}$, such that for every $j$, \begin{equation}\label{e11} \begin{split} &\E_{\A,1}\big(\bar G_{(j+1)^R,R}-\bar G_{j^R,R},\bar G_{(j+1)^R,R}-\bar G_{j^R,R}\big)\\ &:=\E_{\A}\big(\bar G_{(j+1)^R,R}-\bar G_{j^R,R},\bar G_{(j+1)^R,R}-\bar G_{j^R,R}\big)+ \|\bar G_{(j+1)^R,R}-\bar G_{j^R,R}\|_{L^2(\mu)}^2 \le 2^{-5(j+R)}. \end{split} \end{equation} Since (\ref{e10a}) holds for each $m=m_{j^R}^R$, in the same way as above we can find subsequences $\{m_{j^R}^{R}\}_{j=1}^{\infty}$ and $\{j^{R+1}\}_{j=1}^{\infty}$, such that $\{m_i^{R+1}\}_{i=1}^{\infty}$ is a subsequence of $\{m_{j^R}^{R}\}_{j=1}^{\infty} $, and for every $j$, \begin{equation*} \E_{\A,1}\big(\bar G_{(j+1)^{R+1},R+1}-\bar G_{j^{R+1},R+1},\bar G_{(j+1)^{R+1},R+1}-\bar G_{j^{R+1},R+1}\big) \le 2^{-5(j+R+1)}, \end{equation*} where $\bar G_{j,R+1}:=\frac{1}{j}\sum_{i=1}^j G_{m_i^{R+1},R+1}$ for every positive integer $j$.
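The bound $2^{-5(j+R)}$ in (\ref{e11}) is chosen so that the resulting capacities remain summable over both $j\ge k$ and $R\ge 1$: indeed $\sum_{R=1}^{\infty}\sum_{j=k}^{\infty}2^{-3j-5R}=\frac{8\cdot 2^{-3k}}{217}$, since $\sum_{j\ge k}2^{-3j}=\frac{8}{7}2^{-3k}$ and $\sum_{R\ge 1}2^{-5R}=\frac{1}{31}$. A quick numerical sanity check of this closed form:

```python
# Verify sum_{R>=1} sum_{j>=k} 2^(-3j-5R) = 8 * 2^(-3k) / 217 by truncating
# the rapidly convergent double geometric series (tails are negligible).
k = 2
total = sum(2.0 ** (-3 * j - 5 * R) for R in range(1, 40) for j in range(k, 80))
exact = 8.0 * 2.0 ** (-3 * k) / 217.0
print(total, exact)
assert abs(total - exact) < 1e-12
```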
Then by induction, for every $R\ge 1$, we can construct subsequences $\{m_{i}^{R}\}_{i=1}^{\infty} $ and $\{j^R\}_{j=1}^{\infty}$, such that $\{m_i^{R+1}\}_{i=1}^{\infty}$ is a subsequence of $\{m_{j^R}^{R}\}_{j=1}^{\infty} $ and (\ref{e11}) is true for every $j,R$. Let $Y_{j,R}:=\{\gamma \in C_o(M): \ \bar G_{(j+1)^R,R}-\bar G_{j^R,R}>2^{-j}\}$; since $\bar G_{j,R}$ is continuous by definition (note that we have shown that $d$ and $\bar d$ induce the same topology on $C_o(M)$), $Y_{j,R}$ is an open set. Hence by (\ref{e11}) and the same argument as for (2.1.10) in the proof of \cite[Theorem 2.1.3]{FOT}, we have \begin{equation}\label{e11a} \Cap(Y_{j,R})\le \frac{\E_{\A,1}\big(\bar G_{(j+1)^R,R}-\bar G_{j^R,R},\bar G_{(j+1)^R,R}-\bar G_{j^R,R} \big)}{2^{-2j}}\le 2^{-3j-5R}. \end{equation} For every $k\ge 1$, let \begin{equation*} Z_k:=\bigcup_{R=1}^{\infty}\bigcup_{j=k}^{\infty}Y_{j,R},\ \ \ \mathbf{K}_k:=C_o(M)\backslash Z_k, \end{equation*} so $\mathbf{K}_k$ is closed, and for any $R \ge 1$, \begin{equation}\label{e12} \bar G_{j^R,R}(\gamma)\le 2^{-j+1},\ \ \gamma \in \mathbf{K}_k, \ j\ge k. \end{equation} Note that $G_{j+1,R}\le G_{j,R}$, so $G_{m_{j^R}^R,R} \le\bar G_{j^R,R}$, and for any $R \ge 1$, (\ref{e12}) holds for $G_{m_{j^R}^R,R}$. By the construction above, $\{m_i^{R+1}\}_{i=1}^{\infty}$, hence $\{m_{j^{R+1}}^{R+1}\}_{j=1}^{\infty}$, is a subsequence of $\{m_{j^R}^{R}\}_{j=1}^{\infty}$; by choosing a diagonal subsequence, we can find a subsequence $\{q_j\}_{j=1}^{\infty}$ such that for every $R\ge 1$, \begin{equation*} G_{q_j,R}(\gamma)\le 2^{-j+1},\ \ \gamma \in \mathbf{K}_k, \ j\ge k. \end{equation*} By the definition of $G_{q_j,R}$, letting $R \to \infty$ and noting that $\phi_R \to 1$ as $R \to \infty$, we obtain \begin{equation}\label{e12a} \inf_{1\le r \le q_j}\sup_{1\le i \le N}\sup_{t \in [0,1]} \big|\varphi_i(\gamma(t))-\varphi_i(\sigma_r(t))\big|\le 2^{-j+1},\ \ \gamma \in \mathbf{K}_k, \ j\ge k.
\end{equation} It is obvious that (\ref{e12a}) implies that $\mathbf{K}_k$ is totally bounded with respect to the metric $\bar d$ defined by (\ref{e8aa}); since moreover $\mathbf{K}_k$ is closed and the topologies on $C_o(M)$ induced by $d$ and $\bar d$ coincide, we know that $\mathbf{K}_k$ is compact. On the other hand, by (\ref{e11a}) and \cite[Lemma 2.3]{RS}, \begin{equation*} \begin{split} & \Cap\big(C_o(M)\backslash \mathbf{K}_k\big)=\Cap\big(\bigcup_{R=1}^{\infty}\bigcup_{j=k}^{\infty}Y_{j,R}\big)\\ &\le\sum_{R=1}^{\infty}\sum_{j=k}^{\infty}\Cap(Y_{j,R}) \le \sum_{R=1}^{\infty}\sum_{j=k}^{\infty}2^{-3j-5R}\le \frac{8\cdot 2^{-3k}}{217}, \end{split} \end{equation*} which implies that (\ref{e9}) is true. This completes the proof. \end{proof} \begin{Remark} Here $\A$ can also be viewed as a (pointwise defined) operator from $L^{\infty}(C_o(M)\rightarrow \H;\mu)$ to $L^{2}(C_o(M)\rightarrow \H;\mu)$ such that \begin{equation}\label{e0} \big(\A^{\frac{1}{2}}\big)\Phi(\gamma)=\A(\gamma)^{\frac{1}{2}}\Phi(\gamma),\ \mu-a.s. \ \gamma \in C_o(M), \Phi \in \D(\A). \end{equation} In \cite{WW1}, with some restrictions on the curvature of the base manifold, the closability and quasi-regularity were shown for $(\E_{\A}, \D(\E_{\A}))$ without the condition (\ref{e0}) on $\A$. \end{Remark} Repeating the proof of \cite[Proposition 5 (ii)]{DR} or \cite[Proposition 3.4]{L}, we can show the locality of $(\E_{\A}, \D(\E_{\A}))$. \begin{prp}\label{p3.1} Suppose assumptions (A1), (A2') and (A3) hold; then the Dirichlet form $(\E_{\A}, \D(\E_{\A}))$ is local. \end{prp} By \cite[Theorem IV 3.5]{MR}, \cite[Proposition V 1.11]{MR}, Theorem \ref{t1.2} and Proposition \ref{p3.1}, we get the following result. \begin{thm}\label{t3.2} Suppose assumptions (A1), (A2') and (A3) hold; then there exists a diffusion process $\mathbf{M}=\big(\Omega, \scr{F}, (\scr{F}_t)_{t \ge 0}, \xi_t, (\mathbb{P}_z)_{z \in C_o(M)}\big)$ associated with $(\E_{\A}, \D(\E_{\A}))$, i.e.
$\mathbf{M}$ is a strong Markov process with continuous trajectories, and for every $t>0$ and bounded $u \in L^2(\mu)$, \begin{equation*} T_t u(z)=\int u(\xi_t)\d \P_z,\ \ \mu-a.s.\ z \in C_o(M), \end{equation*} where $T_t$ denotes the $L^2(\mu)$ semigroup associated with $(\E_{\A}, \D(\E_{\A}))$. \end{thm} If $\A=\mathbf{I}$, then $(\E_{\A}, \D(\E_{\A}))$ is the O-U Dirichlet form and assumptions (A1), (A2'), (A3) hold, so we get the following corollary. \begin{cor} The O-U Dirichlet form $(\E, \D(\E))$ is quasi-regular. \end{cor} Moreover, it is easy to check that (A2') holds for the damped Dirichlet form in Example \ref{ex1}. Since (A1) and (A3) were verified in Example \ref{ex1}, the damped Dirichlet form is also quasi-regular. We provide the following example in which $\A(\gamma)$ may be an unbounded operator; it can be viewed as a generalization of those in \cite{L} and \cite{WW1}. \begin{exa} \end{exa} We first introduce an orthonormal basis $\{H_m\}_{m=1}^{\infty}$ of $\H$, which is constructed in \cite[Page 3]{L}. Let $S_1\equiv 1$ and \begin{equation*} S_{2^k+i}(s):=\begin{cases} &2^{\frac{k}{2}},\ \ \ \ \ \ ~~\text{if}\ s \in [(i-1)2^{-k}, (2i-1)2^{-(k+1)}),\\ &-2^{\frac{k}{2}},\ \ \ \ \ \text{if}\ s \in [(2i-1)2^{-(k+1)}, i 2^{-k}),\\ &0,\ \ \ \ \ \ \ \ ~~\text{otherwise}, \end{cases} \end{equation*} for every integer $k\ge 0$ and $1\le i \le 2^k$. Let \begin{equation*} H_{n(p-1)+j}(t):=\int_0^t S_p(s)e_j \,\d s,\ \ \ p,j \in \mathbb{N}_+~\text{with}\ \ 1\le j \le n, \end{equation*} where $\{e_j\}_{j=1}^n$ is an orthonormal basis of $\R^n$. Then $\{H_m\}_{m=1}^{\infty}$ is an orthonormal basis of $\H$. Let $\D(\A(\gamma))\subseteq \H$ be the domain of $\A(\gamma)$. We suppose $\{H_m\}_{m=1}^{\infty}\subseteq \D(\A(\gamma))$ for $\mu$-$a.s.$ $\gamma \in C_o(M)$ and that for every $m \ge 1$, $\A(\gamma)H_m=\lambda_m(\gamma)H_m$, where $\lambda_m(\cdot)\ge 0$ is a measurable function on $C_o(M)$.
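Since $\langle H,G\rangle_{\H}=\int_0^1\langle H'(s),G'(s)\rangle\,\d s$, the orthonormality of $\{H_m\}$ in $\H$ reduces to the orthonormality of the Haar system $\{S_p\}$ in $L^2([0,1])$. This can be verified numerically; the following sketch evaluates the first $16$ Haar functions on a dyadic grid (on which they are piecewise constant, so the Riemann sum is exact) and checks that their Gram matrix is the identity:

```python
import numpy as np

def haar(p, s):
    """The Haar function S_p defined in the text, vectorised over s in [0, 1)."""
    if p == 1:
        return np.ones_like(s)
    k = int(np.floor(np.log2(p - 1)))   # p = 2^k + i with 1 <= i <= 2^k
    i = p - 2 ** k
    lo, mid, hi = (i - 1) * 2.0 ** -k, (2 * i - 1) * 2.0 ** -(k + 1), i * 2.0 ** -k
    out = np.zeros_like(s)
    out[(s >= lo) & (s < mid)] = 2.0 ** (k / 2)
    out[(s >= mid) & (s < hi)] = -2.0 ** (k / 2)
    return out

N = 256                                  # dyadic grid: exact for p <= 16
s = (np.arange(N) + 0.5) / N             # midpoints of the N cells
S = np.array([haar(p, s) for p in range(1, 17)])
gram = (S @ S.T) / N                     # L^2([0,1]) inner products
assert np.allclose(gram, np.eye(16), atol=1e-12)
print("S_1, ..., S_16 are orthonormal in L^2([0,1])")
```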
Hence \begin{equation*} \D(\A(\gamma)^{\frac{1}{2}})=\bigg\{h=\sum_{m=1}^{\infty}\langle h, H_m\rangle_{\H}H_m:\ \sum_{m=1}^{\infty}\lambda_m(\gamma)\big|\langle h, H_m\rangle_{\H}\big|^2<\infty \bigg\} \end{equation*} and \begin{equation*} \A(\gamma)^{\frac{1}{2}}h=\sum_{m=1}^{\infty}\lambda_m^{\frac{1}{2}}(\gamma)\langle h, H_m\rangle_{\H}H_m,\ \ \ \ h \in \D(\A(\gamma)^{\frac{1}{2}}), \end{equation*} from which one easily checks that assumption (A1) holds. We also assume that for every $R \ge 1$, \begin{equation}\label{e13aa} \lambda_m(\gamma)\ge \vv(R),\ \ \ \gamma \in \mathbf{B}_R,\ \ m \ge 1, \end{equation} for some constant $\vv(R)>0$, so that assumption (A3) holds for $\A$. By the same computation as in \cite[Page 1089]{WW1}, we obtain for every $t \in [0,1]$, $v \in L_{loc}^{\infty}(C_o(M)\to \R^n; \mu)$, \begin{equation}\label{e13a} \begin{split} & \big\|\A(\gamma)^{\frac{1}{2}}\big((t \wedge \cdot)v(\gamma)\big)\big\|^2_{\H}\\ &\le\Big\{\sum_{j=1}^n \lambda_j(\gamma)+\sum_{k=0}^{\infty}\sum_{i=1}^{2^k} \sum_{j=1}^n\lambda_{n(2^k+i-1)+j}(\gamma)2^{-k} 1_{((i-1)2^{-k},i2^{-k})}(t)\Big\}|v(\gamma)|^2, \end{split} \end{equation} so if we assume for every $R\ge 1$, \begin{equation}\label{e13} \sum_{j=1}^n\mu\big(\lambda_j 1_{\mathbf{B}_R}\big)+\sum_{k=0}^{\infty}\sum_{i=1}^{2^k} \sum_{j=1}^n\mu\big(\lambda_{n(2^k+i-1)+j} 1_{\mathbf{B}_R}\big)2^{-k}<\infty, \end{equation} then $(t \wedge \cdot)v(\gamma) \in \D(\A(\gamma)^{\frac{1}{2}})$ $\mu$-$a.s.$, and (\ref{e1a}) in (A2') holds. Next we check that (\ref{e1aa}) is true under (\ref{e13}). Let $\phi_R^m(\gamma):=l_R(\rho^m(\gamma)),~\gamma\in C_o(M)$, where $l_R\in C_0^\infty(\mathbb{R})$ is constructed by (\ref{e0a}) and $\rho^m$ is defined by (\ref{c*}).
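The estimate (\ref{e13a}) comes from expanding $(t\wedge\cdot)v(\gamma)$ in the basis $\{H_m\}$; we sketch the computation. Writing $h(s)=(t\wedge s)v(\gamma)$, we have $h'(s)=1_{[0,t]}(s)v(\gamma)$, so
\begin{equation*}
\langle h, H_{n(p-1)+j}\rangle_{\H}=\int_0^1 1_{[0,t]}(s)S_p(s)\,\d s\ \langle v(\gamma),e_j\rangle
=\Big(\int_0^t S_p(s)\,\d s\Big)\langle v(\gamma),e_j\rangle,
\end{equation*}
and for $p=2^k+i$ the primitive $\int_0^t S_p(s)\,\d s$ vanishes unless $t\in\big((i-1)2^{-k},i2^{-k}\big)$, in which case its absolute value is at most $2^{\frac{k}{2}}\cdot 2^{-(k+1)}\le 2^{-\frac{k}{2}}$. Squaring these coefficients and summing $\lambda_m(\gamma)|\langle h,H_m\rangle_{\H}|^2$ over $m$ yields (\ref{e13a}).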
Since $\mu\Big(d_M(o,\gamma(t_i))=d_M(o,\gamma(t_j))\Big)=0$ for every $i \neq j$, we have for each $s \in (0,1]$, \begin{equation*} \big(D\phi_R^m(\gamma)\big)(s)=\sum_{i=1}^m (t_i \wedge s) v_i(\gamma)1_{\{\rho^m(\gamma)= d_M(\gamma(t_i),o)\}},\ \mu-a.s.\ \gamma\in C_o(M), \end{equation*} where $v_i(\gamma):=l_R'(\rho^m(\gamma))U_{t_i}(\gamma)^{-1}\big(\nabla d_M(\gamma(t_i),o) \big)$. Note that $v_i \in L_{loc}^{\infty}(C_o(M) \to \R^n; \mu)$; by (\ref{e13a}) we obtain \begin{equation*} \begin{split} &\big\|\A(\gamma)^{\frac{1}{2}}(D\phi_R^m(\gamma))\big\|^2_{\H}=\sum_{i=1}^m \big\|\A(\gamma)^{\frac{1}{2}}\big((t_i \wedge \cdot)v_i(\gamma)\big) \big\|_{\H}^2 1_{\{\rho^m(\gamma)= d_M(\gamma(t_i),o)\}}\\ &\le 2\sum_{j=1}^n \lambda_j(\gamma)+2\sum_{k=0}^{\infty}\sum_{i=1}^{2^k} \sum_{j=1}^n\lambda_{n(2^k+i-1)+j}(\gamma)2^{-k},\ \ \mu-a.s., \end{split} \end{equation*} where we use $\|v_i\|_{L^{\infty}}\le 2$ and the fact that for $\mu$-$a.s.$ $\gamma$ there is exactly one $1\le i \le m$ such that $1_{\{\rho^m(\gamma)= d_M(\gamma(t_i),o)\}} \neq 0$. So according to (\ref{e13}), \begin{equation*} \sup_{m} \big\|\A(\gamma)^{\frac{1}{2}}(D\phi_R^m(\gamma))\big\|^2_{\H} <\infty,\ \ \mu-a.s.. \end{equation*} Since $\A(\gamma)^{\frac{1}{2}}$ is closed, based on this and by the same argument as in Lemma \ref{l2.1} (using the Banach-Saks property), we have $D \phi_R(\gamma) \in \D(\A(\gamma)^{\frac{1}{2}})$ and \begin{equation*} \begin{split} & \big\|\A(\gamma)^{\frac{1}{2}}(D\phi_R(\gamma))\big\|^2_{\H} \le 2\sum_{j=1}^n \lambda_j(\gamma)+2\sum_{k=0}^{\infty}\sum_{i=1}^{2^k} \sum_{j=1}^n\lambda_{n(2^k+i-1)+j}(\gamma)2^{-k},\ \mu-a.s.; \end{split} \end{equation*} applying (\ref{e13}) to this, we get (\ref{e1aa}). Combining all the analysis above, we conclude that (A1), (A2') and (A3) are true if (\ref{e13aa}) and (\ref{e13}) hold.
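To see how the summability condition (\ref{e13}) behaves, one can evaluate the weighted level sums numerically for a hypothetical choice of eigenvalues (this is only an illustration, with $n=1$ and $\lambda_m\equiv m^{-1/2}$, not an operator from the text): the level-$k$ contribution $2^{-k}\sum_{m=2^k+1}^{2^{k+1}}\lambda_m$ decays geometrically, roughly like $2^{-k/2}$, so the series converges:

```python
# Illustration of the summability in (e13) for the hypothetical choice
# n = 1, lambda_m = m^(-1/2): the level-k block of indices is
# m in (2^k, 2^(k+1)], weighted by 2^(-k).
delta = 0.5
inc = []
for k in range(18):
    level = sum(m ** (-delta) for m in range(2 ** k + 1, 2 ** (k + 1) + 1))
    inc.append(2.0 ** (-k) * level)

# successive level contributions shrink by a factor close to 2^(-delta)
ratio = inc[-1] / inc[-2]
print(ratio, 2.0 ** (-delta))
assert abs(ratio - 2.0 ** (-delta)) < 1e-3
```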
Moreover, it is easy to see that if for every $R\ge 1$ there are positive constants $c(R),\delta(R)$ such that \begin{equation*} \mu\big(\lambda_m 1_{\{\rho \le R\}}\big)\le c(R)m^{1-\delta(R)},\ \ \ m\ge 1, \end{equation*} then (\ref{e13}) holds. \section{Appendix} As in the proof of Theorem \ref{t1.1}, for every $R\ge 1$ we take a non-negative function $f_R \in C_0^{\infty}(M)$ such that $f_R \big |_{B_R}=1$ and $M_R:=\{x \in M:\ f_R(x)>0\}$ is a connected set. Then we define a metric $\langle, \rangle_R$ on $M_R$ by \begin{equation*} \langle, \rangle_R:=f_R^{-2} \langle, \rangle, \end{equation*} where $\langle, \rangle$ is the Riemannian metric on $M$. Following the argument in \cite[Section 2]{TW} step by step, one sees that $(M_R, \langle, \rangle_R)$ is a complete Riemannian manifold. Since this conclusion was not stated as a lemma in \cite[Section 2]{TW}, for the convenience of the reader we include the following lemma here. \begin{lem}\label{l5.1}(\cite{TW}) $(M_R, \langle, \rangle_R)$ is a complete Riemannian manifold. \end{lem} \begin{proof} Let $d_R$ be the Riemannian distance on $(M_R, \langle, \rangle_R)$. To prove the conclusion, it suffices to show that every $d_R$-Cauchy sequence $\{x_i\}_{i=1}^{\infty}\subseteq M_R$ has a limit point $x_0 \in M_R$, i.e. $\lim_{i \rightarrow \infty}d_R(x_i,x_0)=0$. For any $m\ge 1$, let $O_m:=\{x \in M: f_R(x)>\frac{1}{m}\}$. If there exists an $m_0>0$ such that $\{x_i\}_{i=1}^{\infty}\subseteq O_{m_0}$, then, since $O_{m_0}$ is relatively compact and $(M,\langle,\rangle)$ is complete, there exist an $x_0 \in O_{m_0+1}$ and a subsequence $\{x_{i_k}\}_{k=1}^{\infty}$ such that $\lim_{k \rightarrow \infty}d_M(x_{i_k},x_0)=0$, where $d_M$ denotes the Riemannian distance on $(M,\langle,\rangle)$.
By the continuity of $f_R$, when $k$ is large enough, for the minimal geodesic $\gamma_k(\cdot)$ on $(M,\langle,\rangle)$ connecting $x_{i_k}$ and $x_0$ we have $f_R(\gamma_k(t))\ge \frac{1}{m_0+2}$ for all $t$, so by definition $d_R(x_{i_k},x_0)\le (m_0+2) d_M(x_{i_k},x_0)$, hence $\lim_{k \rightarrow \infty}d_R(x_{i_k},x_0)=0$. Since a Cauchy sequence converges whenever one of its subsequences converges, we obtain $\lim_{i \rightarrow \infty}d_R(x_i,x_0)=0$. If $\{x_i\}_{i=1}^{\infty}\nsubseteq O_{m}$ for every $m \ge 1$, then, selecting a subsequence if necessary, we can assume that $\lim_{i \rightarrow \infty}d_M(x_i,x_0)=0$ for some $x_0 \in M$ satisfying $f_R(x_0)=0$, and that there exists a subsequence $\{m_i\}_{i=1}^{\infty}$ of $\{m\}_{m=1}^{\infty}$ such that $x_i \notin O_{m_i}$, $x_i \in O_{m_{i+1}}$, $m_{i+2}>2m_{i+1}$, and $\lim_{i \rightarrow \infty}m_i=\infty$. On the other hand, suppose $\gamma \in C([0,1];M)$ is a $C^1$ path such that $\gamma(0)=x_i$ and $\gamma(1)=x_{i+2}$, and let \begin{equation*} \tau:=\sup\{t \in [0,1]: \ \gamma(t) \in O_{m_{i+1}}\}, \end{equation*} so that $0<\tau<1$ since $x_i \in O_{m_{i+1}}$, $x_{i+2} \notin O_{m_{i+1}}$, and $f_R(\gamma(t))\le \frac{1}{m_{i+1}}$ for every $t \in [\tau,1]$. Hence we have \begin{equation*} \begin{split} & \int_0^1 |\gamma'(t)|_R \d t\ge \int_{\tau}^1 |\gamma'(t)|_R \d t \ge m_{i+1}\int_{ \tau}^1 |\gamma'(t)| \d t\ge m_{i+1} d_M(O_{m_{i+1}},O^c_{m_{i+2}}), \end{split} \end{equation*} and since $\gamma$ is arbitrary, we obtain \begin{equation*} d_R(x_i,x_{i+2})\ge m_{i+1} d_M(O_{m_{i+1}},O^c_{m_{i+2}}).
\end{equation*} Since $|f_R(x)-f_R(y)|\ge \frac{1}{m_{i+1}}-\frac{1}{m_{i+2}}$ for all $ x \in O_{m_{i+1}}$, $y \in O^c_{m_{i+2}}$, by the mean value theorem, \begin{equation*} \begin{split} &\frac{1}{m_{i+1}}-\frac{1}{m_{i+2}}\le |f_R(x)-f_R(y)|\le C_1d_M(x,y), \ \ \ x \in O_{m_{i+1}},\ y \in O^c_{m_{i+2}}, \end{split} \end{equation*} where $C_1:=\sup_{x \in M}|\nabla f_R(x)|>0$, which implies that $d_M(O_{m_{i+1}},O^c_{m_{i+2}})\ge $ $\frac{1}{C_1} \big(\frac{1}{m_{i+1}}-\frac{1}{m_{i+2}}\big)$. So we have \begin{equation}\label{e20a} \begin{split} &d_R(x_i,x_{i+2})\ge m_{i+1}d_M(O_{m_{i+1}},O^c_{m_{i+2}})\\ &\ge \frac{m_{i+1}}{C_1}\big(\frac{1}{m_{i+1}}-\frac{1}{m_{i+2}}\big)\ge \frac{1}{2C_1}, \end{split} \end{equation} where we used $m_{i+2}>2m_{i+1}$ in the last step. Clearly (\ref{e20a}) contradicts the assumption that $\{x_i\}_{i=1}^{\infty}$ is a $d_R$-Cauchy sequence, so we must have $\{x_i\}_{i=1}^{\infty}\subseteq O_{m_0}$ for some $m_0 \ge 1$, and by the analysis above, $\{x_i\}_{i=1}^{\infty}$ has a limit point $x_0 \in M_R$. \end{proof} \section*{Acknowledgments} We would like to thank Professor Feng-Yu Wang for useful conversations. This research is supported in part by the FCT (PTDC/MAT/104173/2008) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120071120001). \begin{thebibliography}{99} \leftskip=-2mm \parskip=-1mm \bibitem{A} S. Aida, \emph{Logarithmic derivatives of heat kernels and logarithmic Sobolev inequalities with unbounded diffusion coefficients on loop spaces,} J. Funct. Anal. 174(2000), 430--477. \bibitem{AE} S. Aida and K. D. Elworthy, \emph{Differential calculus on path and loop spaces. I. Logarithmic Sobolev inequalities on path spaces,} C. R. Acad. Sci. Paris S\'erie I 321(1995), 97--102. \bibitem{BLW} D. Bakry, M. Ledoux, F.-Y. Wang, \emph{Perturbations of functional inequalities using growth conditions,} J. Math. Pures Appl. 87(2007), 394--407. \bibitem{CHL} B.
Capitaine, E. P. Hsu and M. Ledoux, \emph{Martingale representation and a simple proof of logarithmic Sobolev inequalities on path spaces,} Electron. Comm. Probab. 2(1997), 71--81. \bibitem{CGG} P. Cattiaux, I. Gentil and A. Guillin, \emph{Weak logarithmic Sobolev inequalities and entropic convergence,} Probab. Theory Related Fields 139(2007), 563--603. \bibitem{CLW} X. Chen, X.-M. Li and B. Wu, \emph{A Poincar\'e inequality on loop spaces,} J. Funct. Anal. 259(2010), 1421--1442. \bibitem{CM} A. B. Cruzeiro and P. Malliavin, \emph{Renormalized differential geometry and path space: structural equation, curvature,} J. Funct. Anal. 139:1(1996), 119--181. \bibitem{D1} B. K. Driver, \emph{A Cameron-Martin type quasi-invariance theorem for Brownian motion on a compact Riemannian manifold,} J. Funct. Anal. 110(1992), 273--376. \bibitem{DR} B. K. Driver and M. R\"{o}ckner, \emph{Construction of diffusions on path and loop spaces of compact Riemannian manifolds,} C. R. Acad. Sci. Paris S\'erie I 315(1992), 603--608. \bibitem{EL} K. D. Elworthy, X.-M. Li and Y. Le Jan, \emph{On the geometry of diffusion operators and stochastic flows,} Lecture Notes in Mathematics, 1720(1999), Springer-Verlag. \bibitem{EM} K. D. Elworthy and Z.-M. Ma, \emph{Vector fields on mapping spaces and related Dirichlet forms and diffusions,} Osaka J. Math. 34(1997), 629--651. \bibitem{ES} O. Enchev and D. W. Stroock, \emph{Towards a Riemannian geometry on the path space over a Riemannian manifold,} J. Funct. Anal. 134:2(1995), 392--416. \bibitem{F} S.-Z. Fang, \emph{Une in\'egalit\'e du type Poincar\'e sur un espace de chemins,} C. R. Acad. Sci. Paris S\'erie I 318(1994), 257--260. \bibitem{FM} S.-Z. Fang and P. Malliavin, \emph{Stochastic analysis on the path space of a Riemannian manifold: I. Markovian stochastic calculus,} J. Funct. Anal. 118:1(1993), 249--274. \bibitem{FWW} S.-Z. Fang, F.-Y. Wang and B.
Wu, \emph{Transportation-cost inequality on path spaces with uniform distance,} Stochastic Process. Appl. 118:12(2008), 2181--2197. \bibitem{FOT} M. Fukushima, Y. Oshima and M. Takeda, \emph{Dirichlet Forms and Symmetric Markov Processes,} Walter de Gruyter, 2010. \bibitem{Hsu1} E. P. Hsu, \emph{Quasi-invariance of the Wiener measure on the path space over a compact Riemannian manifold,} J. Funct. Anal. 134(1995), 417--450. \bibitem{Hsu4} E. P. Hsu, \emph{Logarithmic Sobolev inequalities on path spaces over compact Riemannian manifolds,} Commun. Math. Phys. 189(1997), 9--16. \bibitem{Hsu3} E. P. Hsu, \emph{Analysis on path and loop spaces,} in "Probability Theory and Applications" (E. P. Hsu and S. R. S. Varadhan, Eds.), IAS/Park City Mathematics Series, 6(1999), 279--347, Amer. Math. Soc., Providence. \bibitem{Hsu2} E. P. Hsu, \emph{Stochastic Analysis on Manifolds,} American Mathematical Society, 2002. \bibitem{HO} E. P. Hsu and C. Ouyang, \emph{Cameron-Martin theorem for complete Riemannian manifolds,} J. Funct. Anal. 257:5(2009), 1379--1395. \bibitem{L} J.-U. L\"{o}bus, \emph{A class of processes on the path space over a compact Riemannian manifold with unbounded diffusion,} Trans. Amer. Math. Soc. (2004), 1--17. \bibitem{MR} Z.-M. Ma and M. R\"{o}ckner, \emph{Introduction to the Theory of (Non-Symmetric) Dirichlet Forms,} Springer, Berlin, 1992. \bibitem{RS} M. R\"{o}ckner and B. Schmuland, \emph{Tightness of general $C_{1,p}$ capacities on Banach space,} J. Funct. Anal. 108(1992), 1--12. \bibitem{TW} A. Thalmaier and F.-Y. Wang, \emph{Gradient estimates for harmonic functions on regular domains in Riemannian manifolds,} J. Funct. Anal. 155:1(1998), 109--124. \bibitem{W00a} F.-Y. Wang, \emph{Functional inequalities for empty essential spectrum,} J. Funct. Anal. 170(2000), 219--245. \bibitem{W} F.-Y. Wang, \emph{Weak Poincar\'e inequalities on path spaces,} Int. Math. Res. Not. 2004(2004), 90--108. \bibitem{Wbook} F.-Y.
Wang, \emph{Functional Inequalities, Markov Processes and Spectral Theory,} Science Press, Beijing, 2005. \bibitem{WW1} F.-Y. Wang and B. Wu, \emph{Quasi-regular Dirichlet forms on Riemannian path and loop spaces,} Forum Math. 20(2008), 1085--1096. \bibitem{WW2} F.-Y. Wang and B. Wu, \emph{Quasi-regular Dirichlet forms on free Riemannian path and loop spaces,} Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2(2009), 251--267. \end{thebibliography} \end{document}
\section{Introduction}\label{s:HPH} Higher Hochschild homology is a bifunctor introduced by T.~Pirashvili in~\cite{pirash00} that assigns a graded vector space to a topological space (simplicial set) and a (co)commutative (co)algebra. Informally speaking, this functor is a way to \lq\lq integrate\rq\rq{} a (co)algebra over a given space. Specialized to a circle, the result is the usual Hochschild homology. The precursor to higher Hochschild homology was the discovery of the Hodge splitting in the usual Hochschild homology of a commutative algebra~\cite{GerstSchack,Loday}. Indeed, the most surprising and perhaps the motivating result for T.~Pirashvili to write his seminal work~\cite{pirash00} was the striking fact that the higher Hochschild homology on a sphere of any positive dimension also admits a Hodge splitting and, moreover, the terms of the splitting, up to a regrading, depend only on the parity of the dimension of the sphere. Born with this motivation, higher Hochschild homology is nowadays a widely used tool that has various applications, including string topology and, more generally, the study of mapping and embedding spaces~\cite{pirash00,AroneTur1,AroneTur2,GTZ0,Patras,PatrasThomas,Song,SongTur}. It also has very interesting and deep generalizations such as topological higher Hochschild homology~\cite{CarlDougDun,Schl} and factorization homology~\cite{AyFr,Ginot,GTZ,Lurie}. In our work we study the very nature of the Hodge splitting. In particular, we show that it always takes place for suspensions. Moreover, it will be clear from the construction that only suspensions and spaces rationally homology equivalent to them have this property. For any suspension $\Sigma Y$, the terms of the splitting depend in some polynomial way on $\tilde H_*\Sigma Y$, which in particular explains Pirashvili\rq{}s result for spheres.
We also show that if a map $f\colon \Sigma Y\to\Sigma Z$ is a suspension, then the induced map in the Hochschild-Pirashvili homology preserves the splitting and is determined by the map $f_*\colon \tilde H_*\Sigma Y\to \tilde H_*\Sigma Z$. In case $f$ is not a suspension, the Hodge splitting is preserved only as a filtration. We explain how the induced map between different layers is computed from the rational homotopy type of $f$. We treat the case of wedges of circles more carefully and discover certain representations of the group $\mathrm{Out}(F_n)$ of outer automorphisms of a free group\footnote{These representations appear in applications to the hairy graph-homology computations in the study of the spaces of long embeddings, higher dimensional string links, and the deformation theory of the little discs operads~\cite{AroneTur2,SongTur,Turchin1,TW,TW2}.} that have the smallest known dimension among those that don\rq{}t factor through $\mathrm{GL}(n,{\mathbb{Z}})$. \subsection*{Notation} We work over the rational numbers ${\mathbb{Q}}$ unless otherwise stated. All vector spaces are assumed to be vector spaces over ${\mathbb{Q}}$. Graded vector spaces are vector spaces with a $\mathbb{Z}$-grading, and we abbreviate the phrase ``differential graded'' by dg as usual. We generally use homological conventions, i.e., the differentials will have degree $-1$. We denote by $gVect$ and $dgVect$ the category of graded vector spaces and the category of chain complexes respectively. For a chain complex or a graded vector space $C$ we denote by $C[k]$ its $k$-th desuspension. We freely use the language of operads. A good introduction into the subject can be found in the textbook \cite{lodayval}, whose conventions we mostly follow. We use the notation $\mathcal P\{k\}$ for the $k$-fold operadic suspension. The operads governing commutative, associative and Lie algebras are denoted by $\mathsf{Com}$, $\mathsf{Assoc}$, and $\mathsf{Lie}$ respectively.
By $\mathsf{Com}_+$ we denote the non-unital commutative operad and by $\mathsf{coLie}$ the cooperad dual to $\mathsf{Lie}$. For a category ${\mathcal C}$, we denote by ${\mathrm{mod}}{-}{\mathcal C}$ the category of cofunctors ${\mathcal C}^{op}\to dgVect$ to chain complexes. The objects of ${\mathrm {mod}}{-}{\mathcal C}$ will be called {\it right ${\mathcal C}$-modules}. In the following section, ${\mathcal C}$ is either the category $\Gamma$ of finite pointed sets or the category $\mathrm{Fin}$ of finite sets. Abusing notation, we denote the set $\{1,\ldots,k\}$ by $k$ and the set $\{*,1,\ldots,k\}$ based at $*$ by $k_*$. We will consider the following examples of right $\Gamma$- and $\mathrm{Fin}$-modules: \begin{itemize} \item For $X$ a topological space we can consider the $\mathrm{Fin}$-module sending a finite set $S$ to the singular chains $C_*(X^S)$ on the mapping space $X^S$. We denote this $\mathrm{Fin}$-module by $C_*(X^\bullet)$. \item Similarly, to a basepointed space $X_*$ we assign a $\Gamma$-module $C_*(X_*^\bullet)$ sending a pointed set $S_*$ to $C_*(X_*^{S_*})$, where now $X_*^{S_*}$ denotes the space of pointed maps. \item To a cocommutative coalgebra $C$ we assign the $\mathrm{Fin}$-module sending the finite set $S$ to the tensor product $C^{\otimes S}\cong \bigotimes_{s\in S} C$. We denote this $\mathrm{Fin}$-module by $C^{\otimes\bullet}$. Unless otherwise stated, we assume that $C$ is non-negatively graded and simply connected. \item If in addition $M$ is a $C$-comodule (e.g., $M=C$), one can construct a $\Gamma$-module $M\otimes C^{\otimes\bullet}$ such that $S_*\mapsto M\otimes \bigotimes_{s\in S_*\setminus\{*\}} C$. \item Dually, if $M$ is a module over a commutative algebra $A$, then $M\otimes A^{\otimes\bullet}$ is a {\it left} $\Gamma$-module, and its objectwise dual $\left(M\otimes A^{\otimes\bullet}\right)^\vee$ is a right $\Gamma$-module.
\end{itemize} A topological space is said to be of finite type if all its homology groups are finitely generated in every degree. Two spaces are said to be rationally homology equivalent if there is a zigzag of maps between them such that every map in it induces an isomorphism in rational homology. The completed tensor product is denoted by $\hat \otimes$. \subsection*{Main results} In this paper, for simplicity of exposition, we stick to the contravariant Hochschild-Pirashvili homology, that is, to the one assigned to right $\mathrm{Fin}$- and $\Gamma$-modules. One should mention, however, that all the results can be easily adjusted to the covariant case as well. There are two ways to define the higher Hochschild homology. In the first, combinatorial way, for a space $X$ (respectively pointed space $X_*$) obtained as a realization of a (pointed) finite simplicial set ${\mathcal{X}}_\bullet\colon\Delta^{op}\to \mathrm{Fin}$ (respectively ${\mathcal{X}}_\bullet\colon \Delta^{op}\to\Gamma$), the higher Hochschild homology $HH^X(\L)$ (respectively $HH^{X_*}(\L_*)$) can be computed as the homology of the totalization of the cosimplicial chain complex $\L\circ {\mathcal{X}}\colon\Delta\to dgVect$ (respectively $\L_*\circ {\mathcal{X}}_*\colon\Delta\to dgVect$).~\footnote{This definition can also be adjusted to realizations of arbitrary, not necessarily finite, simplicial sets by using the right Kan extension of $\L$ (respectively $\L_*$) to the category of all (pointed) sets~\cite{pirash00}.} In another definition, for a right $\mathrm{Fin}$-module $\L$ (respectively right $\Gamma$-module $\L_*$) and a topological space $X$ (respectively pointed space $X_*$), the {\it higher Hochschild homology}, which we also call {\it Hochschild-Pirashvili homology}, $HH^X(\L)$ (respectively $HH^{X_*}(\L_*)$) is the homology of the complex of homotopy natural transformations $C_*(X^\bullet) \to \L$ (respectively $C_*(X_*^\bullet) \to \L_*$)~\cite{pirash00,GTZ}.
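To illustrate the combinatorial definition in the simplest case (a standard observation, included here only for orientation), recall that the simplicial circle $\Delta[1]/\partial\Delta[1]$ has as $p$-simplices the pointed set $p_*$, so that for a right $\Gamma$-module $\L_*$ the cosimplicial chain complex in question is \[ p\longmapsto \L_*(p_*), \] with cofaces and codegeneracies induced by the structure maps of the simplicial circle. For instance, for $\L_*=\left(M\otimes A^{\otimes\bullet}\right)^\vee$ its totalization is the objectwise dual of the classical cyclic bar construction $p\mapsto M\otimes A^{\otimes p}$, in accordance with the fact that on the circle one recovers the usual Hochschild homology.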
The fact that the two definitions are equivalent is implicitly shown in the proof of~\cite[Theorem~2.4]{pirash00} by Pirashvili; see also~\cite[Proof of Proposition~4]{GTZ} and~\cite[Proposition~3.4]{Song}. In case $\L=C^{\otimes\bullet}$ (respectively $\L=M\otimes C^{\otimes\bullet}$), we denote the higher Hochschild homology by $HH^{X}(C)$ (respectively $HH^{X_*}(C,M)$).\footnote{This particular case of higher Hochschild homology is also called topological factorisation (or chiral) cohomology, see for example~\cite{AyFr,GTZ}.} In our paper the combinatorial definition will be used only for wedges of circles, as we want to treat this case more explicitly. Later in the paper we show that for wedges of circles the two definitions produce identical complexes. Any map $f:X \to Y$ (respectively basepoint preserving map $X_*\to Y_*$) induces a map $f^*: HH^{Y}(\L)\to HH^X(\L)$ (respectively $HH^{Y_*}(\L_*)\to HH^{X_*}(\L_*)$). Two homotopic maps (respectively, two maps homotopic through basepoint preserving maps) induce the same map in higher Hochschild homology. It is also clear from the (first) definition that if $f$ is a rational homology equivalence, then the induced map $f^*$ is an isomorphism. One has a functor $u\colon\Gamma\to\mathrm{Fin}$ that forgets the basepoint. If $X=X_*$ and $\L_*=\L\circ u$, then \begin{equation}\label{eq:base} HH^{X}(\L)=HH^{X_*}(\L_*). \end{equation} In case we take $X$ and $X_*$ to be a wedge of $n$ circles $\vee_n S^1$, the automorphism group $\mathrm{Aut}(F_n)$ acts on $\vee_n S^1$ up to homotopy by basepoint preserving maps and hence we obtain a representation of $\mathrm{Aut}(F_n)$ on $HH^{\vee_n S^1}(\L_*)$. Similarly, the outer automorphism group $\mathrm{Out}(F_n)$ acts on $\vee_n S^1$ up to homotopy and hence we obtain a representation of $\mathrm{Out}(F_n)$ on $HH^{\vee_n S^1}(\L)$.
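The simplest instance of the latter representation (degenerate, but perhaps instructive) is $n=1$: the group $\mathrm{Out}(F_1)=\mathrm{Aut}(F_1)\cong{\mathbb{Z}}/2$ is generated by the inversion $x\mapsto x^{-1}$, realized topologically by the orientation reversal of the circle, so $HH^{S^1}(\L)$ carries the induced involution. Since the associated graded of the Hodge filtration discussed below factors through $\mathrm{GL}(1,{\mathbb{Z}})=\{\pm 1\}$, this involution acts on the $m$-th Hodge layer simply by the sign \[ \Psi^*=(-1)^m, \qquad \Psi\colon x\mapsto x^{-1}, \] so that for $n=1$ nothing beyond the $\mathrm{GL}$-action is seen.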
While this construction should at least morally be known to experts, the representations of $\mathrm{Out}(F_n)$ arising in this manner seem to have received little attention in the literature. We will study a few special cases. The representations that we obtain inherit an additional filtration (the Hodge or Poincar\'e-Birkhoff-Witt filtration) such that the associated graded representation factors through $\mathrm{GL}(n,{\mathbb{Z}})$. We show that in general the representations of $\mathrm{Out}(F_n)$ thus obtained do not factor through $\mathrm{GL}(n,{\mathbb{Z}})$, but are nontrivial iterated extensions of $\mathrm{GL}(n,{\mathbb{Z}})$ representations. In this connection, it is an open problem to determine the lowest dimensional representations of $\mathrm{Out}(F_n)$ that do not factor through $\mathrm{GL}(n,{\mathbb{Z}})$.\footnote{One assumes $n\geq 3$ as $\mathrm{Out}(F_2)=\mathrm{GL}(2,{\mathbb{Z}})$.} A lower bound has been obtained by D. Kielak \cite{Kielak}, who showed that the dimension must be at least \[ {n+1 \choose 2}. \] For $n=3$ the lower bound was refined to $7$ (instead of $6$) in \cite{Kielak2}. We obtain an upper bound as follows. \begin{thm}\label{thm:outfrrep} For $n\geq 3$, the representations of $\mathrm{Out}(F_n)$ on $HH^{\vee_n S^1}(\Chev(\alg g))$, where $\Chev(\alg g)$ is the Chevalley complex of a free Lie algebra $\alg g=\mathrm{FreeLie}(x)$ in one generator $x$ of odd degree, contain a direct summand representation which does not factor through $\mathrm{GL}(n,{\mathbb{Z}})$ and has dimension \[ \frac{n(n^2+5)}{6}. \] In particular, for $n=3$ this representation saturates the lower bound $7$ obtained in~\cite{Kielak2}. \end{thm} The previously known representations with this property have the smallest dimension $21$ for $n=3$ and \[ (2^n-1){{n-1}\choose 2} \] for $n\geq 4$, see~\cite[Section~4]{Kielak}, and also~\cite{BV,GL}.
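For the reader's convenience, here are the values of the dimension formulas involved for small $n$, computed directly from the displayed expressions (the value $21$ for $n=3$ being the one quoted above):
\[
\begin{array}{c|ccc}
n & \text{lower bound } \binom{n+1}{2} & \text{Theorem~\ref{thm:outfrrep}: } \frac{n(n^2+5)}{6} & \text{previously known} \\ \hline
3 & 6 \text{ (refined to } 7 \text{ in \cite{Kielak2})} & 7 & 21 \\
4 & 10 & 14 & (2^4-1)\tbinom{3}{2}=45 \\
5 & 15 & 25 & (2^5-1)\tbinom{4}{2}=186
\end{array}
\]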
Higher Hochschild homology on spheres was introduced and studied in the original work of Pirashvili~\cite{pirash00}, and on wedges of spheres it was studied in~\cite{Song,SongTur} in connection with the homology and homotopy of spaces of higher dimensional string links. An interesting feature of this homology is that it admits a decomposition into a direct product, and the factors of this {\it Hodge splitting} depend only on the parity of the dimensions of the spheres. In particular, if we know $HH^{\vee_n S^1}(\L_*)$ with its Hodge decomposition, we can reconstruct $HH^{\vee_n S^d}(\L_*)$ for any other odd~$d$. On the other hand, the homotopy type of a map $\vee_n S^d\to\vee_n S^d$, $d\geq 2$, is completely determined by the map in homology. Therefore, $HH^{\vee_n S^d}(\L_*)$, $d\geq 2$, is acted upon by the monoid $\End({\mathbb{Z}}^n)$ of endomorphisms of ${\mathbb{Z}}^n$. For $d=1$, we get an action of the monoid $\End(F_n)$ of endomorphisms of the free group $F_n$. In Section~\ref{s:CH_vee_n} we define a certain explicit complex $CH^{\vee_n S^1}(\L_*)$ computing $HH^{\vee_n S^1}(\L_*)$. \begin{thm}\label{thm:endf_n_HP} For any $\Gamma$-module $\L_*$, the action of $\End(F_n)$ on $HH^{\vee_n S^1}(\L_*)$ naturally lifts to the level of the complex $CH^{\vee_n S^1}(\L_*)$. Moreover, this action respects the Hodge splitting as an increasing filtration, and the action on the associated graded complex $\mathit{gr}\, CH^{\vee_n S^1}(\L_*)$ factors through $\End({\mathbb{Z}}^n)$. \end{thm} We will see in Section~\ref{s4} that as an $\End({\mathbb{Z}}^n)$-module, $\mathit{gr}\, HH^{\vee_n S^1}(\L_*)$ is (up to regrading) naturally isomorphic to $HH^{\vee_n S^d}(\L_*)$ for any odd $d\geq 3$. The fact that the $\End(F_n)$ action above respects the Hodge filtration is actually a manifestation of a more general phenomenon.
We show in Section~\ref{s4} that the Hodge filtration in $HH^{X_*}(\L_*)$, which can also be called the Poincar\'e-Birkhoff-Witt filtration, is defined functorially in $X_*$ and $\L_*$. This filtration is an interesting phenomenon in itself that does not seem to have appeared earlier in any kind of functor calculus. In particular, the Hodge filtration should not be confused with the cardinality or rank (co)filtration considered, for example, in~\cite{AyFr,IntJMc}, and inspired by the manifold functor calculus~\cite{WeissEmb}, see Subsection~\ref{ss42+}. In that subsection we also explain in which sense the Hodge filtration in the Hochschild-Pirashvili homology on suspensions is exhaustive: it is dense in the topology induced by the cardinality cofiltration. Theorem~\ref{thm:endf_n_HP} can be \lq\lq{}categorified\rq\rq{} to all suspensions and maps between them. More specifically, let ${\mathrm{Top}}_*$ denote the category of pointed topological spaces whose morphisms are homotopy classes of pointed maps. Let ${\mathrm{Top}}_*|_\Sigma$ denote its full subcategory whose objects are suspensions. By $\Sigma({\mathrm{Top}}_*)$ we denote the image category of the suspension functor $\Sigma\colon{\mathrm{Top}}_*\to{\mathrm{Top}}_*$. Notice that any suspension is rationally equivalent to a wedge of spheres~\cite[Theorem~24.5]{FHT1}. Thus, for the sake of concreteness and slightly simplifying matters, the reader can think of the category ${\mathrm{Top}}_*|_\Sigma$ as the full subcategory of ${\mathrm{Top}}_*$ consisting of wedges of spheres of possibly different dimensions $\geq 1$. The following theorems generalize Theorem~\ref{thm:endf_n_HP} to this category ${\mathrm{Top}}_*|_\Sigma$.
\begin{thm}\label{t:HP_suspensions1} For any right $\Gamma$-module $\L_*$, the cofunctor $HH^{(-)}(\L_*)\colon {{\mathrm{Top}}_*}^{op} \to gVect$ admits an increasing filtration generalizing the Hodge filtration on $HH^{\vee_n S^1}(\L_*)$, such that the completed associated graded functor $\mathit{gr}\, HH^{(-)}(\L_*)$ restricted to ${\mathrm{Top}}_*|_\Sigma$ factors through the reduced homology functor $\tilde H_*\colon {\mathrm{Top}}_*\to gVect$. Over $\Sigma({\mathrm{Top}}_*)$, this filtration splits in the sense that one has a natural isomorphism $HH^{(-)}(\L_*)|_{\Sigma({\mathrm{Top}}_*)} \to \mathit{gr}\, HH^{(-)}(\L_*)|_{\Sigma({\mathrm{Top}}_*)}$. \end{thm} In Section~\ref{ss43} we construct a cofunctor ${\mathrm{CH}}^{(-)}(\L_*)\colon ({\mathrm{Top}}_*|_\Sigma)^{op} \to dgVect$. \begin{thm}\label{t:HP_suspensions2} The cofunctor ${\mathrm{CH}}^{(-)}(\L_*)\colon ({\mathrm{Top}}_*|_\Sigma)^{op} \to dgVect$ has the following properties: \begin{itemize} \item $H_*\circ {\mathrm{CH}}^{(-)}(\L_*) = HH^{(-)}(\L_*)$. \item The complex ${\mathrm{CH}}^{\vee_n S^1}(\L_*)$ is identical to $CH^{\vee_n S^1}(\L_*)$. \item This functor admits an increasing (Hodge) filtration compatible with the Hodge filtration in homology. \item The completed associated graded functor $\mathit{gr}\,{\mathrm{CH}}^{(-)}(\L_*)$ factors through the reduced homology functor $\tilde H_*\colon {\mathrm{Top}}_*|_\Sigma\to gVect$. \item Over $\Sigma({\mathrm{Top}}_*)$, the Hodge filtration in ${\mathrm{CH}}^{(-)}(\L_*)$ splits in the sense that one has a natural isomorphism ${\mathrm{CH}}^{(-)}(\L_*)|_{\Sigma({\mathrm{Top}}_*)} \to \mathit{gr}\, {\mathrm{CH}}^{(-)}(\L_*)|_{\Sigma({\mathrm{Top}}_*)}$.
\end{itemize} \end{thm} More concretely, when we say that the functors $\mathit{gr}\, HH^{(-)}(\L_*)\colon {\mathrm{Top}}_*|_\Sigma \to gVect$ and $\mathit{gr}\, {\mathrm{CH}}^{(-)}(\L_*)\colon {\mathrm{Top}}_*|_\Sigma\to dgVect$ factor through $\tilde H_*\colon {\mathrm{Top}}_*|_\Sigma\to gVect$, we mean that for any pointed space $Y_*$, both $\mathit{gr}\, HH^{\Sigma Y_*}(\L_*)$ and $\mathit{gr}\, {\mathrm{CH}}^{\Sigma Y_*}(\L_*)$ can be described as a power series expression in $\tilde H_*\Sigma Y_*$: \begin{gather} \mathit{gr}\, HH^{\Sigma Y_*}(\L_*)= \prod_n\mathrm{Hom}_{\mathbb{S}_n}\left( (\tilde H_*\Sigma Y_*)^{\otimes n}, {\mathcal H}_{\L_*}(n) \right), \label{eq:power_ser1} \\ \mathit{gr}\, {\mathrm{CH}}^{\Sigma Y_*}(\L_*)= \prod_n\mathrm{Hom}_{\mathbb{S}_n}\left( (\tilde H_*\Sigma Y_*)^{\otimes n}, {\mathcal C}_{\L_*}(n) \right), \label{eq:power_ser2} \end{gather} where $\mathcal{C}_{\L_*}$ is some symmetric sequence in chain complexes depending on $\L_*$, and $\mathcal{H}_{\L_*}$ is its homology symmetric sequence. The fact that the Hodge filtration splits over $\Sigma({\mathrm{Top}}_*)$ means that we have isomorphisms \begin{gather} {\mathrm{CH}}^{\Sigma Y_*}(\L_*)\stackrel{\simeq}{\longrightarrow} \mathit{gr}\, {\mathrm{CH}}^{\Sigma Y_*}(\L_*),\label{eq:spl1}\\ HH^{\Sigma Y_*}(\L_*)\stackrel{\simeq}{\longrightarrow} \mathit{gr}\, HH^{\Sigma Y_*}(\L_*)\label{eq:spl2} \end{gather} natural in $\Sigma Y_*\in \Sigma({\mathrm{Top}}_*)$. The $n$-th term of the Hodge splitting is exactly the $n$-th factor in~\eqref{eq:power_ser1} and~\eqref{eq:power_ser2}. (This splitting also means that the higher Hochschild complexes for suspensions split as products of complexes.) In case a pointed map $f\colon \Sigma Y_*\to \Sigma Z_*$ is not a suspension, the Hodge splitting in the higher Hochschild complexes/homology (via the isomorphisms~\eqref{eq:spl1}-\eqref{eq:spl2}) behaves like a filtration: higher terms of the splitting can be sent non-trivially to lower ones.
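As an illustration of formula~\eqref{eq:power_ser1}, take $Y_*=S^0$, so that $\Sigma Y_*=S^1$ and $\tilde H_*S^1$ is one-dimensional, concentrated in degree one. Then $(\tilde H_*S^1)^{\otimes n}$ is one-dimensional, concentrated in degree $n$, and by the Koszul sign rule the symmetric group $\mathbb{S}_n$ acts on it by the sign representation. Hence the $n$-th factor of~\eqref{eq:power_ser1} is, up to a degree shift by $n$, the sign-isotypic part \[ \mathrm{Hom}_{\mathbb{S}_n}\left( (\tilde H_*S^1)^{\otimes n}, {\mathcal H}_{\L_*}(n) \right)\cong \left({\mathcal H}_{\L_*}(n)\otimes \mathrm{sgn}_n\right)^{\mathbb{S}_n}, \] which is how the Hodge splitting of the usual Hochschild homology mentioned in the introduction appears in this framework.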
In Section~\ref{s:rht_map} we compute how the induced map between the terms of the splitting is obtained from the rational homotopy type of a map of suspensions. We also demonstrate this in some examples, such as the Hopf map $S^3\to S^2$ and a non-trivial pointed map $S^2\to S^2\vee S^1$. Some of the techniques that we develop for suspensions work equally well for general spaces. In Section~\ref{s:non_susp} we briefly consider this general case of non-suspensions. Theorems~\ref{th:non_susp}-\ref{th:non_susp2} and Proposition~\ref{p:non_susp} describe these more general higher Hochschild complexes in the case $\L_* = M\otimes C^{\otimes \bullet}$ as some kind of homotopy base change of Chevalley complexes. In this section we also show that for a connected pointed space $X_*$ (of finite type) the Hodge filtration splits for every coefficient $\Gamma$-module $\L_*$ if and only if $X_*$ is rationally homology equivalent to a suspension. \subsection*{Acknowledgements} We thank G. Arone, B. Fresse, G. Ginot, and D. Kielak for helpful discussions. V.T. thanks the MPIM and the IHES, where he spent his sabbatical and where he started to work on this project. T.W. has been partially supported by the Swiss National Science Foundation, grant 200021\_150012, and the SwissMAP NCCR funded by the Swiss National Science Foundation. \section{Special case of $\End(F_n)$ action}\label{s2} In this section we look at the special case $\L_*=M\otimes C^{\otimes\bullet}$, where $C$ is a cocommutative coalgebra and $M$ a $C$-comodule as before. Unless otherwise stated, we will always assume that $C$ is simply connected. We will define a complex $CH^{\vee_n S^1}(M\otimes C^{\otimes \bullet})$ and an $\End(F_n)$ action on it. In Section~\ref{s:CH_vee_n} we explain why this complex computes $HH^{\vee_n S^1}(M\otimes C^{\otimes \bullet})=HH^{\vee_n S^1}(C,M)$ and why the $\End(F_n)$ action that we define corresponds to the topological action.
Define $CH^{\vee_n S^1}(M\otimes C^{\otimes \bullet})$ as $M\otimes (\Omega C)^{\otimes n}$, where $\Omega C$ is the cobar construction of $C$ --- as an algebra it is the free associative algebra generated by $C[1]$. The differential is \begin{equation}\label{eq:differential1} d=d_M+d_C+\delta, \end{equation} where $d_M$ and $d_C$ are induced by the differentials on $M$ and $\Omega C$ respectively, and \[ \delta(m\otimes b_1\otimes\ldots\otimes b_n)= \sum\sum_j\pm m'\otimes\ldots\otimes [m'',b_j]\otimes\ldots\otimes b_n, \] where we used Sweedler\rq{}s notation; $\pm$ is the Koszul sign due to the permutation of $m''$ with the $b_i$\rq{}s. We can assume without loss of generality that $C=\Chev(\alg g)$ is the Chevalley complex of a dg Lie algebra $\alg g$ concentrated in strictly positive degrees. (If not, take for $\alg g$ the Harrison complex of $C$.) As a cocommutative coalgebra it is freely cogenerated by ${\alg g} [-1]$. In this case the complex above is quasi-isomorphic to $M\otimes (\mathcal{U} \alg g)^{\otimes n}$, where $\mathcal{U} \alg g$ is the universal enveloping algebra of $\alg g$, with differential \begin{equation}\label{eq:differential2} d=d_M+d_{\alg g}+\delta, \end{equation} defined similarly: $d_M$ and $d_{\alg g}$ are induced from the differentials on $M$ and $\alg g$, and \[ \delta (m\otimes b_1\otimes\dots\otimes b_n) = \sum \sum_j \pm m'\otimes b_1\otimes\ldots \otimes [\pi(m''), b_j]\otimes \dots \otimes b_n, \] where $\pi:\Chev(\alg g)\to \alg g$ is the projection to the cogenerators. The action of $\End(F_n)$ on $M\otimes (\mathcal{U} \alg g)^{\otimes n}$ and $M\otimes (\Omega C)^{\otimes n}$ is described by the same formulas. Both $\mathcal{U} \alg g$ and $\Omega C$ are cocommutative Hopf algebras. In Sweedler\rq{}s notation the iterated coproduct is written as \[ \Delta^k b=\sum b^{(1)}\otimes b^{(2)}\otimes\ldots\otimes b^{(k)}.
\] Since the coproduct is cocommutative, we will instead write \[ \Delta^k b=\sum b^{(\bullet)}\otimes b^{(\bullet)}\otimes\ldots\otimes b^{(\bullet)}. \] Let $\Psi\in\End(F_n)$ send \begin{equation}\label{eq_Psi1} x_i\mapsto x_{\alpha_{i1}}^{\varepsilon_{i1}}\cdot x_{\alpha_{i2}}^{\varepsilon_{i2}}\cdot \ldots\cdot x_{\alpha_{ik_i}}^{\varepsilon_{ik_i}},\quad i=1\ldots n, \end{equation} where $\varepsilon_{ij}=\pm 1$, $\alpha_{ij}\in\{1\ldots n\}$. We let $\beta_{ij}=\frac{1-\varepsilon_{ij}}2\in\{0,1\}$ and define \begin{equation}\label{eq_Psi2} \Psi^*(m\otimes b_1\otimes\ldots\otimes b_n):= m\otimes\sum\pm\bigotimes_{i=1}^n\prod_{j=1}^{k_i}s^{\beta_{ij}}(b_{\alpha_{ij}}^{(\bullet)}), \end{equation} where the sign $\pm$ is the Koszul sign arising from the permutation of the factors, and $s$ is the antipode. \begin{ex}\label{ex_Psi} (a) $n=1$; $x_1\mapsto (x_1)^2$. \[ \Psi^*(m\otimes b)=m\otimes \sum b'\cdot b''. \] (b) $n=1$; $x_1\mapsto x_1^{-1}$. \[ \Psi^*(m\otimes b)=m\otimes s(b). \] (c) $n=2$; $x_1\mapsto x_1\cdot x_2$, $ x_2\mapsto x_2$. \[ \Psi^*(m\otimes b_1\otimes b_2)= m\otimes \sum b_1\cdot b_2'\otimes b_2''. \] \end{ex} \begin{prop}\label{p:out_action1} Formula \eqref{eq_Psi2} defines a right action of $\End(F_n)$ on the complexes $ M\otimes (\mathcal{U} \alg g)^{\otimes n}$ and $M\otimes (\Omega C)^{\otimes n}$. \end{prop} \begin{proof} To see that $\Psi^*$ is a morphism of complexes we notice that it commutes with each term of the differentials~\eqref{eq:differential1} and \eqref{eq:differential2}: it commutes with $d_M$ for obvious reasons; it commutes with $d_{\alg g}$ since both the product and the coproduct of $\mathcal{U} \alg g$ are morphisms of complexes; it commutes with $\delta$ since both the product and the coproduct respect the $\alg g$ action. For the composition, it is quite easy to see that $(\Psi_1\circ \Psi_2)^*=\Psi_2^*\circ\Psi_1^*$, where the composition $\Psi_1\circ\Psi_2$ is understood as substitution without simplification.
We only need to check that if $(\Psi_1\circ\Psi_2)(x_i)$ has two consecutive factors $x_j$ and $x_j^{-1}$ for some $i$, then $(\Psi_1\circ \Psi_2)^*$ is the same as if these factors were cancelled out. But in that case, $(\Psi_1\circ \Psi_2)^*(m\otimes b_1\otimes\ldots\otimes b_n)$ also has two consecutive factors $b_j^{(\bullet)}$ and $s(b_j^{(\bullet)})$, which can likewise be eliminated: \[ \sum b_j^{(\bullet)}\cdot s(b_j^{(\bullet)})\otimes (b_j^{(\bullet)})^{\otimes k}= 1\otimes \sum (b_j^{(\bullet)})^{\otimes k}=1\otimes\Delta^k b_j= \sum s(b_j^{(\bullet)})\cdot b_j^{(\bullet)} \otimes (b_j^{(\bullet)})^{\otimes k}. \] \end{proof} \subsection{Hodge decomposition/filtration}\label{ss:hodge_filtr} The Poincar\'e-Birkhoff-Witt isomorphism $S\alg g\to\mathcal{U} \alg g$ respects both the coalgebra structure and the $\alg g$ action. As a corollary, the induced map \[ M\otimes (S \alg g)^{\otimes n}\to M\otimes (\mathcal{U} \alg g)^{\otimes n} \] is an isomorphism of complexes. The image of the subcomplex $M\otimes S^{m_1}\alg g\otimes\ldots \otimes S^{m_n}\alg g$ in $M\otimes (\mathcal{U} \alg g)^{\otimes n}$ is called the $(m_1,\ldots,m_n)$ Hodge multidegree component; its {\it total Hodge degree} is $m=m_1+\ldots +m_n$. One has \[ \bigoplus_{m_1+\ldots+m_n=m}M\otimes S^{m_1}\alg g\otimes\ldots \otimes S^{m_n}\alg g=M\otimes S^m(H^1\otimes\alg g), \] where $H^1:=H^1(\vee_n S^1,{\mathbb{Z}})={\mathbb{Z}}^n$ viewed as a space concentrated in degree zero. Below $H_1:=H_1(\vee_n S^1,{\mathbb{Z}})$. \begin{prop}\label{p:out_hodge_filtr} The action of $\End(F_n)$ on $M\otimes (\mathcal{U} \alg g)^{\otimes n}$ preserves the total Hodge degree as a filtration. The induced action on the associated graded complex $\mathit{gr}\, M\otimes (\mathcal{U} \alg g)^{\otimes n}$ factors through $\End(H_1)=\End({\mathbb{Z}}^n)$, as one has \[ \mathit{gr}\, M\otimes (\mathcal{U} \alg g)^{\otimes n}= M\otimes S(H^1\otimes \alg g).
\] \end{prop} This proposition is a particular case of Theorem~\ref{thm:endf_n_HP}. \begin{proof} The Hodge filtration is preserved because both the product and the coproduct of $\mathcal{U} \alg g$ preserve the Poincar\'e-Birkhoff-Witt filtration. Notice also that if we apply~\eqref{eq_Psi2} to define an $\End(F_n)$ action on $M\otimes (S \alg g)^{\otimes n}$, we get exactly $M\otimes (S \alg g)^{\otimes n}\simeq M\otimes S(H^1\otimes \alg g)$ as a right $\End(F_n)$ module. \end{proof} \begin{rem}\label{r:HH_suspension} It will be shown in Subsection~\ref{ss41} (see Remark~\ref{r:grCH_C_M}) that for any pointed space $Y_*$ of finite type, the Hochschild-Pirashvili homology $HH^{\Sigma Y_*}(\Chev(\alg g),M)$ is computed by the complex \[ M\otimes S(\tilde H^*Y_*\otimes\alg g), \] where $\tilde H^*(Y)$ is the reduced cohomology of $Y$ viewed as a negatively graded vector space. The differential has the same form~\eqref{eq:differential2}.\footnote{Unless certain convergence properties are satisfied, $S(-)$ should be understood as a completed symmetric algebra, i.e., a direct product $\prod_{m\geq 0} S^m(-)$ rather than a direct sum. Similarly, the tensor product should be understood as the completed tensor product with respect to the homological degree of $\alg g$.} \end{rem} \section{$\mathrm{Out}(F_n)$ representations. Proof of Theorem~\ref{thm:outfrrep}}\label{s:out_n_rep} Recall the isomorphism \eqref{eq:base}, which in particular implies that in the case $M=C$ the action of $\mathrm{Aut}(F_n)$ on $HH^{\vee_n S^1}(C,M)=HH^{\vee_n S^1}(C)$ descends to an $\mathrm{Out}(F_n)$ action. Recall also that according to Proposition~\ref{p:out_hodge_filtr} the higher Hochschild homology $HH^{\vee_n S^1}(C, M)$ carries a Hodge filtration such that the action of $\mathrm{Aut}(F_n)$ on the associated graded factors through $\mathrm{GL}(n,{\mathbb{Z}})$.
In other words, all $\mathrm{Aut}(F_n)$ and $\mathrm{Out}(F_n)$ modules obtained in this manner can be obtained by iterated extensions of $\mathrm{GL}(n,{\mathbb{Z}})$-modules by $\mathrm{GL}(n,{\mathbb{Z}})$-modules. \subsection{Example 1: Polynomial coalgebras} If $C={\mathbb{Q}}[x_1,\dots,x_n]$ is a cofree cocommutative coalgebra (in potentially odd generators), we have $\alg g=\xi_1{\mathbb{Q}} \oplus \cdots \oplus \xi_n{\mathbb{Q}}$ as an abelian Lie algebra, where the generators $\xi_j$ are shifted in degree by one with respect to the generators $x_j$. In this case the Hodge grading is preserved by the $\mathrm{Aut}(F_n)$ action (because $\mathcal{U} \alg g$ is commutative) and hence all representations obtained factor through $\mathrm{GL}(n,{\mathbb{Z}})$. Since the differential on $C\otimes(\mathcal{U} \alg g)^{\otimes n}$ vanishes, the higher Hochschild homology is simply \[ HH^{\vee_n S^1}(C)\cong C\otimes S( H^1 \otimes \alg g) \] with the $\mathrm{Out}(F_n)$ action factoring through $\mathrm{GL}(n,{\mathbb{Z}})=\mathrm{GL}(H_1)$, which acts by the standard action on ${\mathbb{Z}}^n=H^1$. \subsection{Example 2: Dual numbers}\label{sec:exdualnumbers} Consider the coalgebra of dual numbers ${\mathbb{Q}}\oplus x{\mathbb{Q}}$, where $x$ is a primitive cogenerator of even degree. The (Koszul) dual Lie algebra is the free Lie algebra on one odd generator $\xi$, i.e., $\alg g =\xi {\mathbb{Q}} \oplus [\xi,\xi] {\mathbb{Q}}$. Then the associated graded of $CH^{\vee_n S^1}(C\otimes C^{\otimes\bullet})$ may be identified with \[ \mathit{gr}\, CH^{\vee_n S^1}(C\otimes C^{\otimes\bullet})\cong C\otimes S(H^1\otimes \alg g)\cong C \otimes {\mathbb{Q}}[\xi_1,\dots, \xi_n, \eta_1,\dots, \eta_n]. \] Here $\xi_j$ corresponds to $\xi$ on the $j$-th circle and $\eta_j$ corresponds to $\eta=[\xi,\xi]=2\xi^2$ on the $j$-th circle. Notice that $ad_\xi(\xi)=\eta$ and $ad_\xi(\eta)=0$.
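The relations just stated can be sanity-checked in a minimal model: the free associative algebra on one generator is the polynomial ring, so $\mathcal{U}\alg g\cong{\mathbb{Q}}[\xi]$, with the graded commutator determined degreewise by parity. The following Python sketch (all names are ours, purely for illustration) verifies $[\xi,\xi]=2\xi^2$ and $ad_\xi(\eta)=0$.

```python
def mul(a, b):
    # product in Q[xi]; an element is a dict {degree: coefficient}
    out = {}
    for d1, c1 in a.items():
        for d2, c2 in b.items():
            out[d1 + d2] = out.get(d1 + d2, 0) + c1 * c2
    return {d: c for d, c in out.items() if c}

def bracket(a, b):
    # graded commutator [a,b] = ab - (-1)^{|a||b|} ba, degree by degree;
    # the parity of xi^d is d mod 2
    out = {}
    for d1, c1 in a.items():
        for d2, c2 in b.items():
            sign = (-1) ** (d1 * d2)
            out[d1 + d2] = out.get(d1 + d2, 0) + c1 * c2 - sign * c2 * c1
    return {d: c for d, c in out.items() if c}

xi = {1: 1}
eta = bracket(xi, xi)   # [xi, xi] = 2 xi^2, i.e. {2: 2}
```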
The complex has length~2: \[ 0\leftarrow 1\otimes {\mathbb{Q}}[\xi_1,\dots, \xi_n, \eta_1,\dots, \eta_n]{\stackrel d\longleftarrow} x\otimes {\mathbb{Q}}[\xi_1,\dots, \xi_n, \eta_1,\dots, \eta_n]\leftarrow 0. \] The differential is defined such that \begin{multline*} d(x\otimes P(\xi_1,\dots, \xi_n, \eta_1,\dots, \eta_n) )= \sum_{j=1}^n 1\otimes ad_{\xi_j} P(\xi_1,\dots, \xi_n, \eta_1,\dots, \eta_n)=\\ = \sum_{j=1}^n 1\otimes \eta_j \frac{\partial}{\partial \xi_j} P(\xi_1,\dots, \xi_n, \eta_1,\dots, \eta_n). \end{multline*} The differential can be identified with the de Rham differential on an $n$-dimensional odd vector space, identifying $\eta_j$ with $d_{dR}\xi_j$. One can identify the corresponding representations of $\mathrm{GL}(n,{\mathbb{Z}})$. Namely, if we fix the Hodge degree in the associated graded to be $m$, then the representations of $\mathrm{GL}(n,{\mathbb{Z}})$ one obtains correspond to partitions of the form $m=\ell + 1+\cdots +1$. To be precise, the homology is the sum $U^I\oplus U^{II}$, where $U^I=\mathrm{coker}\, d$, $U^{II}=\ker d$. The part of degree $k$ in $\xi$ and $\ell$ in $\eta$ is sent by $d$ to the part of degree $k-1$ in $\xi$ and $\ell+1$ in $\eta$: \[ 0\leftarrow \Lambda^{k-1} H^1\otimes S^{\ell+1} H^1{\stackrel d\longleftarrow}\Lambda^{k}H^1\otimes S^{\ell}H^1\leftarrow 0. \] The $\mathrm{GL}(n)$-module $\Lambda^k H^1\otimes S^{\ell} H^1$ is a direct sum of two representations encoded by the partitions $(\ell+k)=\ell+1+\ldots+1$ and $(\ell+k)=(\ell+1)+1+\ldots+1$. We conclude that the kernel of $d$ in this bigrading is $V_{(\ell,1^k)}$ and the cokernel of $d$ is $V_{(\ell+2,1^{k-2})}$. The bigrading by $\xi$ and $\eta$ is preserved in $C\otimes ({\mathcal{U}}\alg g)^{\otimes n}$ only as a filtration. Instead one can consider the {\it total $\xi$ grading} obtained by assigning 1 to each $\xi$ and 2 to each $\eta=[\xi,\xi]$.
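The two-term decomposition of $\Lambda^k H^1\otimes S^{\ell} H^1$ can be checked on the level of dimensions. The Python sketch below (the helper name `dim_hook` is ours) computes $\dim V_{(a,1^b)}$ for $\mathrm{GL}(n)$ hook shapes from the Weyl dimension formula and verifies $\binom{n}{k}\binom{n+\ell-1}{\ell}=\dim V_{(\ell,1^k)}+\dim V_{(\ell+1,1^{k-1})}$ for small values of the parameters.

```python
from math import comb, factorial

def dim_hook(a, b, n):
    # dimension of the irreducible GL(n) representation V_{(a,1^b)}
    # (hook shape) via the Weyl dimension formula; requires a >= 1, b >= 0
    num = 1
    for j in range(a):           # contents of the first row: 0, 1, ..., a-1
        num *= n + j
    for i in range(1, b + 1):    # contents of the first column: -1, ..., -b
        num *= n - i
    return num // ((a + b) * factorial(a - 1) * factorial(b))

# Lambda^k (x) S^l decomposes as V_{(l,1^k)} + V_{(l+1,1^{k-1})}
for n in range(3, 7):
    for k in range(1, 5):
        for l in range(1, 5):
            lhs = comb(n, k) * comb(n + l - 1, l)
            assert lhs == dim_hook(l, k, n) + dim_hook(l + 1, k - 1, n)
```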
The component $U^I_N\oplus U^{II}_N$ in the homology of total $\xi$ degree $N$ is a filtered space, whose associated graded is \[ \mathit{gr} U^I_N=\bigoplus_{2\ell+k=N+1}V_{(\ell,1^k)},\quad \mathit{gr} U^{II}_N=\bigoplus_{2\ell+k=N}V_{(\ell,1^k)}. \] For both $U^I$ and $U^{II}$ the Hodge degree of $V_{(\ell,1^k)}$ is $\ell+k$. \subsection{The lowest non-trivial example worked out}\label{sec:lowest} Let us consider the first $\mathrm{Out}(F_n)$ representation obtained by the above methods that does not factor through $\mathrm{GL}(n,{\mathbb{Z}})$. It is obtained as the cokernel of the differential in the dual numbers example above for $n=3$ and the total $\xi$ degree~3. It was denoted by $U^I_3$ in the previous subsection. The representation is 7-dimensional. As in Subsection \ref{sec:exdualnumbers} one sees that the associated graded representation splits into two $\mathrm{GL}(3,{\mathbb{Z}})$ representations \[ \mathit{gr} U_{3}^I = V_{(2)} \oplus V_{(1,1,1)}. \] In other words, $U_{3}^I$ is an extension \[ 0\to V_{(2)} \to U_{3}^I \to V_{(1,1,1)} \to 0. \] A representative of the cohomology class in $HH^{\vee_3 S^1}(C)$ spanning the $V_{(1,1,1)}$ part is \[ e := 1\otimes \xi\otimes \xi \otimes \xi. \] Representatives forming a basis of $V_{(2)}$ are \begin{align*} f_1&:= 1\otimes [\xi,\xi] \otimes \xi\otimes 1 \cong -1\otimes \xi \otimes [\xi,\xi] \otimes 1 & f_2&:= 1\otimes [\xi,\xi] \otimes 1\otimes \xi \\ f_3&:= 1\otimes 1 \otimes [\xi,\xi] \otimes \xi & f_4&:= 1\otimes [\xi,\xi] \xi \otimes 1 \otimes 1 \\ f_5&:= 1\otimes 1\otimes [\xi,\xi] \xi \otimes 1 & f_6&:= 1\otimes 1\otimes 1\otimes [\xi,\xi] \xi. \end{align*} \subsection{The proof of Theorem \ref{thm:outfrrep}} More generally, let us consider the representation $U^I_{3}$ of $\mathrm{Out}(F_n)$ for arbitrary $n\geq 3$.
We claim that this representation satisfies the requirements of Theorem \ref{thm:outfrrep}, i.e., it does not factor through $\mathrm{GL}(n,{\mathbb{Z}})$ and it has dimension $\frac{n(n^2+5)}{6}$. Indeed, as in Subsection~\ref{sec:lowest} we can identify the associated graded representation under the Hodge filtration with \[ \mathit{gr} U^I_{3} = V_{(2)}\oplus V_{(1,1,1)} \] where $V_{(2)}$ and $V_{(1,1,1)}$ are the irreducible representations of the linear group $\mathrm{GL}(n)$ corresponding to the partitions $(2)$ and $(1+1+1)$. Hence we find that indeed \[ \mathop{dim} U^I_{3} = \mathop{dim} V_{(2)} + \mathop{dim} V_{(1,1,1)} = \frac{n(n+1)}{2}+ {n \choose 3} = \frac{n(n^2+5)}{6}. \] Next, we check that the representation does not factor through $\mathrm{GL}(n)$. Consider $E_{12}, E_{\bar 1 \bar 2}\in \mathrm{Out}(F_n)$ that send \[ E_{12}(x_i)= \begin{cases} x_1x_2,& i=1;\\ x_i,& \textrm{otherwise;} \end{cases} \qquad E_{\bar 1 \bar 2}(x_i)= \begin{cases} x_2x_1,& i=1;\\ x_i,& \textrm{otherwise.} \end{cases} \] We will show that the action of $E_{12}$ differs from that of $E_{\bar 1 \bar 2}$ on the representation $U^I_{3}$ for $n\geq 3$. Indeed, choosing basis vectors as in Subsection \ref{sec:lowest} we find that \[ E_{12} \cdot (1, \xi, \xi, \xi,1,\dots,1) = (1, \xi, \xi, \xi,1,\dots,1)+ {\frac 12} (1, [\xi, \xi],1, \xi,1,\dots,1) \] while \[ E_{\bar 1 \bar 2} \cdot (1, \xi, \xi, \xi,1,\dots,1) = (1, \xi, \xi, \xi,1,\dots,1)- {\frac 12}(1, [\xi, \xi],1, \xi,1,\dots,1). \] Recall that $U^I_{3}$ is the cokernel of $d$. Thus we need to verify that $(1, [\xi, \xi],1, \xi,1,\dots,1)\in {\mathbb{Q}}\otimes (S\alg g)^{\otimes n}$ is not in the image of $d$.
As we have seen in Subsection~\ref{sec:exdualnumbers}, $d$ is the de Rham differential, which is acyclic on non-constant polynomials in the $\xi_i$ and $\eta_j$; thus we only have to check that the corresponding polynomial is not de Rham closed: \[ \sum_{j=1}^n \eta_j \frac{\partial}{\partial \xi_j} (\eta_1\xi_3) =\eta_1\eta_3\neq 0. \] \hfill\qed \subsection{Bead representations} Generalizing the dual numbers example, we may consider the coalgebra \[ C_N={\mathbb{Q}}\oplus x_1{\mathbb{Q}}\oplus x_2{\mathbb{Q}}\oplus\ldots\oplus x_N{\mathbb{Q}}, \] where the cogenerators $x_i$ are of even degrees and primitive. The Koszul dual Lie algebra is again free \[ \alg g=FreeLie(\xi_1,\dots, \xi_N). \] There is a ${\mathbb{Z}}^N$ grading on $C_N$ and a representation of $\mathbb{S}_N$, and hence a similar grading and action on the higher Hochschild homology $HH^{\vee_n S^1}(C_N)$. We may introduce a representation of $\mathrm{Out}(F_n)$ for every irreducible representation $V_\lambda$ of $\mathbb{S}_N$ labelled by a partition $\lambda$ of $N$: \[ U_\lambda = HH^{\vee_n S^1}(C_N)^{1,\dots, 1} \otimes_{\mathbb{S}_N} V_\lambda. \] Here the superscript $(\cdot)^{1,\dots, 1}$ means that we pick out the piece of ${\mathbb{Z}}^N$-degree $(1,\dots, 1)$. We will call $U_\lambda$ the \emph{bead representation}\footnote{The name stems from the fact that elements of $\Omega C_N$ can be understood as linear combinations of configurations of beads of $N$ colors arranged on a string.} of $\mathrm{Out}(F_n)$ associated to the partition $\lambda$. Notice that the complex obtained is again of length~2. Thus we again have $U_\lambda=U_\lambda^I\oplus U_\lambda^{II}$, where $U_\lambda^I$ is the cokernel of the differential and $U_\lambda^{II}$ is the kernel. We will call $U_\lambda^I$ the bead representation of first type and $U_\lambda^{II}$ the bead representation of second type.
\footnote{The representations $U_N^{I,II}$ considered in Subsection~\ref{sec:exdualnumbers} correspond to $U_{(N)}^{I,II}$ in the new notation.} {\bf Open problem:} Describe $U_\lambda$. In particular, what are the dimensions $\mathit{dim}(U_\lambda^{I,II})$? And if we decompose the associated graded $\mathit{gr}\, U_\lambda$ into irreducible representations of $\mathrm{GL}(n,{\mathbb{Z}})$ (in fact of $\mathrm{GL}(n,{\mathbb{R}})$), \[ \mathit{gr} U_\lambda \cong \bigoplus_\mu V_\mu, \] which partitions $\mu$ occur in the direct sum, and with what multiplicities? \section{Complexes $CH^{\vee_n S^1}(\L_*)$. Proof of Theorem~\ref{thm:endf_n_HP}}\label{s:CH_vee_n} Recall that in the case where the space $X$ (respectively, the pointed space $X_*$) is obtained as the realization of a (pointed) finite simplicial set ${\mathcal{X}}_\bullet\colon\Delta^{op}\to \mathrm{Fin}$ (respectively, ${\mathcal{X}}_\bullet\colon \Delta^{op}\to\Gamma$), the higher Hochschild homology $HH^X(\L)$ (respectively, $HH^{X_*}(\L_*)$) can be computed as the homology of the totalization of the cosimplicial chain complex $\L\circ {\mathcal{X}}\colon\Delta\to dgVect$ (respectively, $\L_*\circ {\mathcal{X}}_*\colon\Delta\to dgVect$). The same construction works for realizations of bisimplicial (and more generally multisimplicial) sets. Indeed, if ${\mathcal{X}}_{\bullet\bullet}$ is a bisimplicial set, then its realization $|{\mathcal{X}}_{\bullet\bullet}|$ is homeomorphic to the realization $|{\mathrm{diag}}\,({\mathcal{X}}_{\bullet\bullet})|$ of its diagonal simplicial set. On the other hand, one also has the Eilenberg-Zilber quasi-isomorphism \begin{equation}\label{eq_EZ} {\mathrm{Tot}}({\mathrm{diag}}\, \L\circ {\mathcal{X}}_{\bullet\bullet}){\stackrel {EZ}\longrightarrow}{\mathrm{Tot}}(\L\circ{\mathcal{X}}_{\bullet\bullet}). \end{equation} Since the first complex computes the Hochschild-Pirashvili homology of $|{\mathrm{diag}}\,({\mathcal{X}}_{\bullet\bullet})|= |{\mathcal{X}}_{\bullet\bullet}|$, so does the second.
Now notice that the complexes $M\otimes\left(\Omega C\right)^{\otimes n}$ can be obtained as the totalization of an $n$-multicosimplicial chain complex (rather than just a cosimplicial one). (In fact its diagonal totalization is $M\otimes\Omega\left(C^{\otimes n}\right)$.) The corresponding multicosimplicial complex is obtained as the composition of $M\otimes C^{\otimes\bullet}$ with an $n$-multisimplicial model of $\vee_n S^1$. Let $S_\bullet^1$ denote the standard simplicial model for $S^1$: its set of $k$-simplices consists of a basepoint $*$ together with all monotone non-constant sequences of $0$\rq{}s and $1$\rq{}s of length $k+1$. This set can be identified with $k_*$ (where $i\in k_*$ corresponds to the sequence with $i$ $1$\rq{}s). The $n$-multisimplicial model for $\vee_n S^1$, which we denote by $(\vee_n S^1)_{\underbrace{\bullet\ldots\bullet}_n}$, is obtained as a degreewise wedge of $n$ $n$-multisimplicial sets. The $i$-th summand of the wedge is the product of $S_\bullet^1$ and $(n-1)$ constant one-point simplicial sets, with $S_\bullet^1$ appearing in the $i$-th place of the product. Notice that the $(k_1,k_2,\ldots,k_n)$ component of $(\vee_n S^1)_{\underbrace{\bullet\ldots\bullet}_n}$ is the set $\bigvee_{i=1}^n (k_i)_*\simeq (k_1+\ldots +k_n)_*$.
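For illustration, a short enumeration (the function name is ours) of the $k$-simplices of this simplicial circle confirms the identification with $k_*$:

```python
from itertools import product

def circle_simplices(k):
    # k-simplices of the standard simplicial S^1: the basepoint '*'
    # together with all monotone non-constant 0/1 sequences of length k+1
    seqs = [s for s in product((0, 1), repeat=k + 1)
            if all(a <= b for a, b in zip(s, s[1:]))  # monotone
            and 0 in s and 1 in s]                    # non-constant
    return ['*'] + sorted(seqs, key=sum)

# the sequence with i ones plays the role of i in k_* = {*, 1, ..., k}
for k in range(7):
    assert len(circle_simplices(k)) == k + 1
```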
Thus the totalization of our multicosimplicial complex is \begin{equation}\label{eq_CH_vee_n} CH^{\vee_n S^1}(\L_*):={\mathrm{Tot}}(\L_*\circ (\vee_n S^1)_{\underbrace{\bullet\ldots\bullet}_n}) =\left(\prod_{(k_1,\ldots,k_n)} N\L_*\left(\sum_{i=1}^n k_i\right)\left[\sum_{i=1}^n k_i\right],\, d=d_1+\ldots+d_n\right), \end{equation} where \begin{equation}\label{eq_NL} N\L_*(k)=\bigcap_{i=1}^k \ker s_i^*, \end{equation} and $s_i^*\colon \L_*(k_*)\to\L_*(k_*\setminus\{i\})$ is the map induced by the inclusion $$s_i\colon k_*\setminus\{i\}\subset k_*.$$ The action of $\End(F_n)$ on $CH^{\vee_n S^1}(\L_*)$ is defined analogously to that on $CH^{\vee_n S^1}(M\otimes C^{\otimes\bullet})=M\otimes(\Omega C)^{\otimes n}$, see~\eqref{eq_Psi2}.\footnote{Recall that we assume that $C$ is simply connected. If we only assume that $C$ is connected, then the complex $CH^{\vee_n S^1}(M\otimes C^{\otimes\bullet})$ is $M\hat \otimes(\Omega C)^{\otimes n}$, where instead of the cobar construction we take the completed cobar construction, and instead of the tensor product the completed tensor product.} Notice that the coproduct on $\Omega C$ is the sum of coshuffles, and the product is just concatenation. Let $\gamma$ lie in the $(k_1,\ldots,k_n)$ component of~\eqref{eq_CH_vee_n}, and let $\Psi\in \End(F_n)$ be such that $x_j$ appears $r_j$ times in total in $\Psi(x_1),\,\Psi(x_2),\ldots,\,\Psi (x_n)$. Then $\Psi^*(\gamma)$ is the sum of $r_1^{k_1} \cdot r_2^{k_2}\cdot\ldots\cdot r_n^{k_n}$ elements, each of which is obtained from $\gamma$ by some permutation of its inputs. More concretely, $\Psi$ defines a map $\vee_n S^1\to \vee_n S^1$ such that any point on the $i$-th circle has exactly $r_i$ preimages. We put $k_1$ points on the first circle in the target wedge, $k_2$ on the second, $\ldots$, $k_n$ on the last one. These points correspond to the inputs of $\gamma$.
For every point in the target we choose a preimage point (thus for the $i$-th circle there are $r_i^{k_i}$ choices, making a total of $\prod_{i=1}^n r_i^{k_i}$ choices). For every such choice we get a collection of points on the source wedge, which contributes a summand in $\Psi^*(\gamma)$ that has to be taken with the sign of the permutation of the inputs of $\gamma$. Consider examples similar to those given in Example~\ref{ex_Psi}: (a) $n=1$; $\Psi(x_1)=x_1^2$. In this case, \[ \Psi^*(\gamma(x_{11},\ldots,x_{1k_1}))= \sum_{i=0}^{k_1}\sum_{\sigma\in {\mathit{Sh}}(i,k_1-i)}(-1)^\sigma \gamma(\sigma(x_{11},\ldots,x_{1k_1})). \] Here and below ${\mathit{Sh}}(i,j)$ denotes the set of shuffles of an $i$-element set with a $j$-element set. (b) $n=1$, $\Psi(x_1)=x_1^{-1}$. In this case \[ \Psi^*(\gamma(x_{11},\ldots,x_{1k_1}))= (-1)^{\frac{k_1(k_1-1)}2}\gamma(x_{1k_1},\ldots,x_{11}). \] (c) $n=2$; $\Psi(x_1)=x_1x_2$, $\Psi(x_2)=x_2$: \[ \Psi^*(\gamma(x_{11},\ldots,x_{1k_1},x_{21},\ldots,x_{2k_2}))= \sum_{i=0}^{k_2}\sum_{\sigma\in {\mathit{Sh}}(i,k_2-i)}(-1)^\sigma \gamma(x_{11},\ldots,x_{1k_1}, \sigma(x_{1k_1+1},\ldots,x_{1k_1+i},x_{21},\ldots,x_{2k_2-i})). \] \begin{prop}\label{pr:act_topol} The action of $\End(F_n)$ on $CH^{\vee_n S^1}(\L_*)$ defined above coincides on homology with the topological action. \end{prop} \begin{proof}[Idea of the proof] One can check that for every element $\Psi\in \End(F_n)$ its action $\Psi^*$ on $CH^{\vee_n S^1}(\L_*)$ can be decomposed into a composition of maps induced by multisimplicial maps, Eilenberg-Zilber maps~\eqref{eq_EZ}, and some natural chain homotopy inverses to those maps. \end{proof} This proposition is a special case of Theorem~\ref{t:HP_suspensions2}. For this reason we do not give a detailed proof of it, and only mention that there is a proof which goes through a careful study of multisimplicial maps. (This argument is similar to the explicit identification of the surface product studied in~\cite{GTZ0}.)
Indeed, Theorem~\ref{t:HP_suspensions2} among other things states that the complexes $CH^{\vee_n S^1}(\L_*)$ are identical to ${\mathrm{CH}}^{\vee_n S^1}(\L_*)$, where the latter ones are constructed using the definition of the Hochschild-Pirashvili homology in terms of derived maps of right $\Gamma$ modules. Moreover, Remark~\ref{rem:action_ident} asserts that the induced action of $\End(F_n)$ on ${\mathrm{CH}}^{\vee_n S^1}(\L_*)$ is identical to the one on $CH^{\vee_n S^1}(\L_*)$ defined in this section. We will also see in Subsection~\ref{ss44} that the reason that the $\End(F_n)$ action on $HH^{\vee_n S^1}(\L_*)$ can be lifted to the level of chains is the coformality of the induced $\End(F_n)$ action on the $\Omega$-module $C_*((\vee_n S^1)^{\wedge \bullet})$.\footnote{By this we mean that every induced map of the action is coformal, see Definition~\ref{d:coformal} and Proposition~\ref{p:coformal_susp}.} \begin{proof}[Proof of Theorem~\ref{thm:endf_n_HP}] At this point we only need to explain what the Hodge splitting in $CH^{\vee_n S^1}(\L_*)$ is, why it is preserved by the $\End(F_n)$ action as a filtration, and why on the associated graded complex $\mathit{gr}\, CH^{\vee_n S^1}(\L_*)$ this action factors through $\End({\mathbb{Z}}^n)$. In the case $n=1$, i.e., for the usual Hochschild complex $CH^{S^1}(\L_*)$, the Hodge splitting is obtained by noticing that the action of $\End(F_1)=({\mathbb{Z}},*)$ splits this complex into a direct product of spaces numbered by non-negative integers, such that on the $m$-th component $r\in({\mathbb{Z}},*)$ acts as multiplication by $r^m$~\cite{GerstSchack,Loday}. The projection onto the $m$-th component is called the $m$-th Euler idempotent $e_m$. Notice that each component $N\L_*(\ell)[\ell]$ of the complex \[ CH^{S^1}(\L_*)={\mathrm{Tot}}(\L_*\circ S^1_\bullet)= \left(\prod_{\ell\geq 0} N\L_*(\ell)[\ell],d\right) \] is acted on by $\mathbb{S}_\ell$ and thus by the group algebra ${\mathbb{Q}}[\mathbb{S}_\ell]$.
The Euler idempotent $e_m(\ell)$ is obtained via this action and is in fact an element of ${\mathbb{Q}}[\mathbb{S}_\ell]$. To give a bit more insight, one has an isomorphism of symmetric sequences: \[ \mathsf{Com}\circ\mathsf{Lie}\stackrel{\simeq}{\longrightarrow}\mathsf{Assoc}, \] induced by the Poincar\'e-Birkhoff-Witt map. The image of $e_m(\ell)$ is exactly \[ \left[\mathsf{Com}(m)\circ\mathsf{Lie}\right](\ell)\subset \mathsf{Assoc}(\ell)={\mathbb{Q}}[\mathbb{S}_\ell]. \] When $n\geq 2$, to obtain a similar splitting in Hochschild-Pirashvili homology one can use the action of the monoid $({\mathbb{Z}},*)^{\times n}\subset\End(F_n)$ consisting of the homotopy classes of maps $\vee_n S^1\to \vee_n S^1$ sending each circle into itself. The complex $CH^{\vee_n S^1}(\L_*)$ splits into a direct product of spaces numbered by $n$-tuples $(m_1,\ldots,m_n)$ of non-negative integers. The element $(r_1,\ldots,r_n)\in ({\mathbb{Z}},*)^{\times n}$ acts on the $(m_1,\ldots,m_n)$ component of the Hodge splitting as multiplication by $r_1^{m_1}\cdot\ldots\cdot r_n^{m_n}$. Each $(\ell_1,\ldots,\ell_n)$ component $N\L_*(\ell_1+\ldots+\ell_n)$ of ${\mathrm{Tot}}(\L_*\circ (\vee_n S^1)_{\underbrace{\bullet\ldots\bullet}_n})$ is acted on by $\mathbb{S}_{\ell_1}\times \ldots\times \mathbb{S}_{\ell_n}$. The projection onto the $(m_1,\ldots,m_n)$ Hodge component is given by $e_{m_1}(\ell_1)\otimes\ldots\otimes e_{m_n}(\ell_n)$. We define the {\it total Hodge degree} as $m=m_1+\ldots +m_n$. One can see that the action of $\End(F_n)$ preserves it as a filtration. To see that the $\End(F_n)$ action on $\mathit{gr}\, CH^{\vee_n S^1}(\L_*)$ factors through $\mathrm{GL}(n,{\mathbb{Z}})$, see equations~\eqref{eq:grCH_m1}, \eqref{eq:grCH_m2}, and Remark~\ref{r:grCH_wedge_n}, which describe $\mathit{gr}\, CH^{\vee_n S^1}(\L_*)$ in terms of $H^1(\vee_n S^1)$. \end{proof} \section{Hochschild-Pirashvili homology on suspensions.
Proof of Theorem~\ref{t:HP_suspensions1}}\label{s4} \subsection{Complexes $\mathit{gr}\, {\mathrm{CH}}^{\Sigma Y_*}(\L_*) $}\label{ss41} In this subsection we describe complexes computing the higher Hochschild homology on suspensions $HH^{\Sigma Y_*}(\L_*) $. These complexes depend only on $\tilde H_*(Y_*)$, and as we will later see in Subsection~\ref{ss44} they can be naturally identified with the associated graded of ${\mathrm{CH}}^{\Sigma Y_*}(\L_*) $. One of the two reasons for the Hodge splitting in the higher Hochschild homology (on a suspension) is the formality of the $\Gamma$-module $C_*(X_*^\bullet)$ in the case $X_*=\Sigma Y_*$. Recall that a $\Gamma$-module is said to be {\it formal} if it is quasi-isomorphic, via a zigzag of quasi-isomorphisms, to its homology $\Gamma$-module. Similarly, a map between $\Gamma$-modules is formal if it is quasi-isomorphic, via a zigzag of quasi-isomorphisms of maps of $\Gamma$-modules, to the induced map on homology. \begin{lemma}\label{l:formality1} If a pointed space $X_*$ is of finite type and is rationally formal, then the right $\Gamma$ module $C_*(X_*^\bullet)$ is also rationally formal. If a pointed map $X_*\to Y_*$ between spaces of finite type is rationally formal, then the induced map of $\Gamma$ modules $C_*(X_*^\bullet) \to C_*(Y_*^\bullet)$ is also formal. \end{lemma} \begin{proof} By formality of a space we understand formality of its Sullivan algebra $A_{X_*}$ as an augmented algebra, and similarly for a map between spaces. We prove the first statement explicitly; the second one follows from the functoriality of the construction. One has a quasi-isomorphism of $\Gamma$-modules: \[ C_*(X_*^\bullet)\simeq \left(A_{X_*^\bullet}\right)^\vee\simeq \left({\mathbb{Q}}\otimes A_{X_*}^{\otimes \bullet}\right)^\vee \simeq \left({\mathbb{Q}}\otimes H^*(X_*)^{\otimes \bullet}\right)^\vee \simeq H_*(X_*^\bullet).
\] \end{proof} \begin{lemma}\label{l:formality2} Any suspension of a space of finite type is rationally formal, and, moreover, any suspension of a map between spaces of finite type is rationally formal. \end{lemma} Recall that a map of pointed spaces is formal if the induced map of Sullivan augmented algebras is formal, i.e., quasi-isomorphic to the map of rational cohomology algebras (in the category of augmented algebras). In particular, this implies that each of the spaces is formal. \begin{proof} Let $Y_*$ be a space of finite type and let us show that $\Sigma Y_*$ is formal. The argument for a map between suspensions is similar. If $Y_*$ is connected, its suspension $\Sigma Y_*$ is simply connected. It is also a co-$H$-space; therefore it is coformal and its Quillen model is a free Lie algebra generated by $\tilde H_*(Y_*)$ with zero differential. The Koszul dual commutative algebra is generated by $\tilde H^*(\Sigma Y_*)$ with all products of generators being zero. If $Y_*=\coprod_{i=1}^k Y_i$ is a disjoint union of $k$ components, then $\Sigma Y_*=\left(\bigvee_{k-1}S^1\right)\vee\left(\bigvee_{i=1}^k\Sigma Y_i\right). $ The wedge of formal spaces is formal. \end{proof} Notice that from these two lemmas it follows that if $X_*$ is a suspension of finite type, then $C_*(X_*^\bullet)$ is a formal $\Gamma$-module, and that the same is true for a suspension of a map between spaces of finite type. Proposition~\ref{p:formal_susp} below implies that the finiteness condition can be removed. Let $\Omega$ be the category of finite sets with morphisms all surjective maps. In~\cite{pira00} Pirashvili defines an equivalence of categories \[ cr\colon \mathrm{mod}{-}\Gamma\to \mathrm{mod}{-}\Omega.
\] On objects \begin{equation}\label{eq_cr} cr\,\L_*(k)=\L_*(k_*)\Big/ \sum_{i=1}^k {\mathrm {Im}}\, r_i^*, \end{equation} where $r_i^*\colon \L_*(k_*\setminus\{i\})\to \L_*(k_*)$ is induced by the map $r_i\colon k_*\to k_*\setminus \{i\}$: \[ r_i(j)= \begin{cases} j,& j\neq i;\\ *,& j=i. \end{cases} \] On morphisms $cr\, \L_* $ is obtained as restriction with respect to the inclusion $i\colon \Omega\to\Gamma$ that adds the basepoint to any set: $i(k)=k_*$. Recall~\eqref{eq_NL}. The space $cr\,\L_*(k)$ is isomorphic to $N\L_*(k)$ via the obvious composition \begin{equation}\label{eq:q} q\colon N\L_*(k)\hookrightarrow\L_*(k_*)\to cr\,\L_*(k). \end{equation} One can show that $q$ is an isomorphism using the map $\prod_{i=1}^k(1-r_i^*s_i^*)$, which projects $\L_*(k_*)$ onto $N\L_*(k)$. (Notice that $r_i^*s_i^*$, $i=1\ldots k$, are pairwise commuting projectors, as are $(1-r_i^*s_i^*)$, $i=1\ldots k$.) For the complexes that we consider below it is sometimes convenient to use $N\L_*(\bullet)$ instead of $cr\,\L_*(\bullet)$. Let us describe the induced $\Omega$-module structure on $N\L_*(\bullet)$. The symmetric group action, as part of the $\Omega$-module structure on $N\L_*(\bullet)$, is the usual one. Denote by $m_i\colon (k+1)\to k$ the surjection $$ m_i(j)=\begin{cases} j,& 1\leq j\leq i;\\ j-1,& i+1\leq j\leq k+1. \end{cases} $$ Abusing notation, we denote by $m_i\colon (k+1)_*\to k_*$ the same map extended by $m_i\colon *\mapsto *$. For $\gamma\in cr\,\L_*(k)$, one has \begin{equation}\label{eq_right_inf_bim0} q^{-1}(m_i^*(\gamma))=(1-r_i^*s_i^*-r_{i+1}^*s_{i+1}^*)m_i^* (q^{-1}(\gamma)). \end{equation} One can write this formula slightly differently.
Recall that the structure of a right $\Omega$-module is equivalent to the structure of a right module over the commutative non-unital operad $\mathsf{Com}_+$, while the structure of a right $\Gamma$-module is equivalent to the structure of an infinitesimal bimodule over the commutative unital operad $\mathsf{Com}$, see~\cite[Proposition~4.9]{AroneTur2} or~\cite[Lemma~4.3]{Turchin1}. In these terms, equation~\eqref{eq_right_inf_bim0} is written as \begin{multline}\label{eq_right_inf_bim} q^{-1}\bigl(\gamma(x_1,\ldots,x_i\cdot x_{i+1},\ldots,x_{k+1})\bigr)= q^{-1}(\gamma)(x_1,\ldots,x_i\cdot x_{i+1},\ldots,x_{k+1}) \\ - x_i\cdot q^{-1}(\gamma)(x_1\ldots \hat x_i \ldots x_{k+1}) - x_{i+1}\cdot q^{-1}(\gamma)(x_1\ldots \hat x_{i+1} \ldots x_{k+1}). \end{multline} The last two summands in~\eqref{eq_right_inf_bim0} and~\eqref{eq_right_inf_bim} are correction terms necessary to make the right-hand side normalized. The higher Hochschild homology over a pointed space $X_*$ is computed as the homology of the derived space of maps of $\Gamma$-modules \[ HH^{X_*}(\L_*)= H_*\bigl( \operatorname{hRmod}_\Gamma\left(C_*(X_*^\bullet),\L_*\right)\bigr). \] For any pointed space $X_*$, the cross-effect of the $\Gamma$-module $C_*(X_*^\bullet)$ is equivalent to \begin{equation}\label{eq:cross} cr\, C_*(X_*^\bullet)\simeq \tilde C_*(X_*^{\wedge\bullet}), \end{equation} see~\cite{AroneTur1}, where the $\Omega$-module structure on $\tilde C_*(X_*^{\wedge\bullet})$ is induced by the diagonal maps. For any surjection $p\colon k\twoheadrightarrow \ell$, one gets a map $X_*^{\wedge\ell}\to X_*^{\wedge k}$ defined as \begin{equation}\label{eq:diagonal_map} (x_1,\ldots, x_\ell)\mapsto (x_{p(1)},\ldots,x_{p(k)}). \end{equation} It follows that the Hochschild-Pirashvili homology can also be described as \[ HH^{X_*}(\L_*)= H_*\left( \operatorname{hRmod}_\Omega\left(\tilde C_*(X_*^{\wedge\bullet}),cr\, \L_*\right)\right).
\] \begin{defi}\label{d:omega_trivial} We say that a right $\Omega$ module $M$ has a trivial $\Omega$ action if for any strict surjection $p\colon k\twoheadrightarrow \ell$ the induced map $M(\ell)\to M(k)$ is the zero map. \end{defi} \begin{prop}\label{p:formal_susp} For any pointed suspension $\Sigma Y_*$, the $\Omega$ module $\tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)$ is formal. For any pointed map $g\colon Y_*\to Z_*$, the induced map of $\Omega$ modules $(\Sigma g)_*\colon \tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)\to \tilde C_*\left((\Sigma Z_*)^{\wedge\bullet}\right)$ is also formal. \end{prop} \begin{proof} For the proof we will need that the $\Omega$ module $ \tilde C_*((S^1)^{\wedge\bullet}) $ is formal and has the trivial $\Omega$ action in homology. The first statement follows from the fact that the $\Gamma$ module $C_*((S^1)^\bullet)$ is formal (by Lemmas~\ref{l:formality1} and~\ref{l:formality2}) and thus so is its cross-effect $cr\, C_*((S^1)^\bullet) \simeq \tilde C_*((S^1)^{\wedge\bullet}) $. The second statement is straightforward, as any diagonal map $S^\ell\to S^k$ for $k>\ell$ induces the zero map in reduced homology. The following zigzag of quasi-isomorphisms of $\Omega$ modules proves the formality of $\tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)$:\footnote{This simple argument was provided to us by G.~Arone.} \begin{multline}\label{eq:formality_arone} \tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)\simeq \tilde C_*\left((S^1)^{\wedge\bullet}\right)\otimes \tilde C_* \left( Y_*^{\wedge\bullet}\right)\simeq \tilde H_*\left((S^1)^{\wedge\bullet}\right)\otimes \tilde C_* \left( Y_*^{\wedge\bullet}\right)\simeq \\ \simeq\tilde H_*\left((S^1)^{\wedge\bullet}\right)\otimes \tilde C_*( Y_*)^{\otimes\bullet}\simeq \tilde H_*\left((S^1)^{\wedge\bullet}\right)\otimes \tilde H_*( Y_*)^{\otimes\bullet}. \end{multline} By the tensor product above we understand an objectwise tensor product of right $\Omega$ modules.
The second quasi-isomorphism uses the formality of $ \tilde C_*((S^1)^{\wedge\bullet}) $. Notice that all the terms in this zigzag starting from the third one have the trivial $\Omega$ action. Notice also that all the quasi-isomorphisms are functorial in $Y_*$ except the last one, which uses a choice of a quasi-isomorphism $\tilde H_*Y_* \to \tilde C_*Y_*$. On the other hand, any morphism of complexes (in our case $\tilde C_*Y_*\to \tilde C_*Z_*$) is formal (i.e., is quasi-isomorphic to the induced map $\tilde H_*Y_*\to\tilde H_*Z_*$). This proves the formality of the induced map of $\Omega$ modules. \end{proof} \begin{rem}\label{r:triv_susp} It follows from~\eqref{eq:formality_arone} that for any suspension $\Sigma Y_*$, the right $\Omega$ module $\tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)$ has the trivial $\Omega$ action in homology. \end{rem} This property is in fact the second of the two reasons for the Hodge splitting. (The first one is the formality.) Indeed, as a consequence, the $\Omega$-module $\tilde H_*\left((\Sigma Y_*)^{\wedge \bullet}\right)$ splits into a direct sum of $\Omega$-modules: \begin{equation}\label{eq_suspens_split} cr\, \tilde C_*\left((\Sigma Y_*)^{\wedge \bullet}\right) \simeq \tilde H_*\left((\Sigma Y_*)^{\wedge \bullet}\right) \simeq \bigoplus_{m\geq 0} \tilde H_*(\Sigma Y_*)^{\otimes m}, \end{equation} where $\tilde H_*(\Sigma Y_*)^{\otimes m}$ denotes the $\Omega$-module which is $\tilde H_*(\Sigma Y_*)^{\otimes m}$ in arity $m$ and 0 in all others. Thus we get \begin{equation}\label{eq_HH_susp} HH^{\Sigma Y_*}(\L_*)\simeq \prod_{m\geq 0} H\left( \operatorname{hRmod}_\Omega \left(\tilde H_*(\Sigma Y_*)^{\otimes m}, cr\,\L_*\right)\right). \end{equation} As a corollary we see that the functor $HH^{(-)}(\L_*)$ factors through the reduced homology functor $\tilde H_*\colon {\mathrm{Top}}_*\to gVect$ when restricted to $\Sigma({\mathrm{Top}}_*)$. The splitting by $m$ in~\eqref{eq_HH_susp} is exactly the Hodge splitting.
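As a basic illustration (the classical case, which we state without proof): for $Y_*=S^0$ one has $\Sigma Y_*=S^1$, the reduced homology $\tilde H_*(S^1)$ is one-dimensional, and~\eqref{eq_HH_susp} specializes to a splitting \[ HH^{S^1}(\L_*)\simeq \prod_{m\geq 0} H\left( \operatorname{hRmod}_\Omega \left(\tilde H_*(S^1)^{\otimes m}, cr\,\L_*\right)\right) \] of the classical Hochschild homology. For $\L_*$ associated to a commutative algebra this recovers the classical Hodge ($\lambda$-)decomposition, which motivates the terminology.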
Now we want to make more explicit the right-hand side of~\eqref{eq_HH_susp}. Recall that a right $\Omega$-module is the same as a right $\mathsf{Com}_+$-module. By the Koszul duality between the $\mathsf{Lie}$ and $\mathsf{Com}_+$ operads, a cofibrant replacement of $\tilde H_*(\Sigma Y_*)^{\otimes m}$ as a right $\mathsf{Com}_+$-module is $\tilde H_*(\Sigma Y_*)^{\otimes m}\circ\mathsf{coLie}\{1\}\circ\mathsf{Com}_+$, where $\circ$ is the composition product of symmetric sequences; $\mathsf{coLie}$ is the Lie cooperad; $\{1\}$ denotes operadic suspension~\cite{Fresse1,AroneTur2,SongTur}. The differential in it is obtained by taking off one cobracket from the $\mathsf{coLie}\{1\}$ factor and by making it act from the left on the $\mathsf{Com}_+$ part as a product $x_1\cdot x_2$, see~\cite[Section~5]{AroneTur2}. For a general right $\mathsf{Com}_+$-module $M$, there is another term of the differential on its cofibrant replacement $M\circ\mathsf{coLie}\{1\}\circ\mathsf{Com}_+$, which takes off one cobracket from the $\mathsf{coLie}\{1\}$ part and makes it act from the right on $M$ also as a product $x_1\cdot x_2$. But in our case this action is trivial, so only the first part of the differential is present.
The product over $m\geq 0$ of the complexes below computes $HH^{\Sigma Y_*}(\L_*)$: \begin{multline}\label{eq:grCH_m1} \operatorname{Rmod}_{\mathsf{Com}_+}\left(\tilde H_*(\Sigma Y_*)^{\otimes m}\circ\mathsf{coLie}\{1\}\circ\mathsf{Com}_+, cr\,\L_*\right)= \left(\mathrm{Hom}_\mathbb{S}\Bigl(\tilde H_*(\Sigma Y_*)^{\otimes m}\circ\mathsf{coLie}\{1\}, cr\,\L_*\Bigr),d\right)= \\ \mathrm{Hom}_{\mathbb{S}_m}\left( \tilde H_*(Y_*)^{\otimes m},\left(\prod_{\ell\geq m}\,\,\bigoplus_{\ell_1+\ldots +\ell_m=\ell} \left(\mathsf{Lie}(\ell_1)\otimes\ldots\otimes\mathsf{Lie}(\ell_m) \otimes_{\mathbb{S}_{\ell_1}\times\ldots\times \mathbb{S}_{\ell_m}}\Bigl(sign\otimes cr\,\L_*(\ell)\Bigr)\right)[\ell], d\right)\right), \end{multline} which, assuming that the homology of $Y_*$ is of finite type, can also be written as \begin{equation}\label{eq:grCH_m2} \tilde H^*(Y_*)^{\otimes m}\hat\otimes_{\mathbb{S}_m} \left(\prod_{\ell\geq m}\,\,\bigoplus_{\ell_1+\ldots +\ell_m=\ell} \left(\mathsf{Lie}(\ell_1)\otimes\ldots\otimes\mathsf{Lie}(\ell_m) \otimes_{\mathbb{S}_{\ell_1}\times\ldots\times \mathbb{S}_{\ell_m}}\Bigl(sign\otimes cr\,\L_*(\ell)\Bigr)\right)[\ell] ,d\right). \end{equation} Here $sign$ denotes the sign representation of $\mathbb{S}_\ell$; the reduced cohomology of $Y_*$ is viewed as a negatively graded vector space. The differential in this complex is the sum of terms that simultaneously insert $[x_1,x_2]$ into one of the inputs of $\mathsf{Lie}(\ell_i)$ for some $i$ and act from the right by the product $x_1\cdot x_2$ on the corresponding input of $cr\,\L_*(\ell)$. Beware that if we replace $cr\,\L_*(\ell)$ by $N\L_*(\ell)$ additional summands in the differential appear due to the last two terms in~\eqref{eq_right_inf_bim0}-\eqref{eq_right_inf_bim}.
\begin{rem}\label{r:grCH_C_M} In case $Y_*$ is of finite type and $\L_*=M\otimes C^{\otimes\bullet}$, the complex obtained above, computing $HH^{\Sigma Y_*}(C,M)$, is \begin{equation}\label{eq:grCH_C_M} M\hat\otimes S\left(\tilde H^*(Y_*)\hat\otimes{\mathcal L}(C)\right), \end{equation} where the cohomology $\tilde H^*(Y_*)$ is non-positively graded; ${\mathcal L}(C)$ is the Harrison complex of $C$. The symmetric power and tensor products are the completed ones. The differential is \[ d=d_M+d_C+\delta, \] where $d_M$ and $d_C$ are induced by the differential on $M$ and ${\mathcal L}(C)$, and $\delta(m\otimes x)=m'\otimes [m'',x]$. The part $\delta$ in the differential appears due to the last two summands in~\eqref{eq_right_inf_bim0}-\eqref{eq_right_inf_bim}.\footnote{Recall that $C$ is assumed to be simply connected. If $C$ is not simply connected, the Harrison complex ${\mathcal L}(C)$ should be replaced by the completed Harrison complex $\hat {\mathcal L}(C)$.} \end{rem} \begin{rem}\label{r:grCH_wedge_n} For $Y_*=\vee_n S^0$ and any $\L_*$, the obtained complex is identical to $\mathit{gr}\, CH_*^{\vee_n S^1}(\L_*)$ considered in Section~\ref{s:CH_vee_n}. In the case $\L_*=M\otimes C^{\otimes\bullet}$ this follows from Proposition~\ref{p:out_hodge_filtr} and Remark~\ref{r:grCH_C_M}. For a general $\L_*$ one can construct this isomorphism analogously. The idea is that elements of $\mathsf{Lie}(\ell_i)$ in~\eqref{eq:grCH_m2} should be viewed as linear combinations of permutations in $\mathbb{S}_{\ell_i}$, which tells us in which order the elements should be put on the corresponding circle. \end{rem} \subsection{Hodge filtration. Proof of Theorem~\ref{t:HP_suspensions1}}\label{ss42} We define a functorial filtration on the space of homotopy maps of right $\Omega$-modules, which induces the desired filtration on $HH^{X_*}(\L_*)$ functorial in $X_*$ and $\L_*$.
For a right $\Omega$-module $K$ define its $m$-th truncation $tr_m K$ as \[ tr_m(K)(\ell) = \begin{cases} K(\ell),& \ell\leq m;\\ 0,& \ell>m. \end{cases} \] This symmetric sequence has an obvious $\Omega$-module structure, such that the projection $K\to tr_m K$ is a map of $\Omega$-modules. For any $\Omega$-module $L$, this morphism induces a map of complexes \[ \operatorname{hRmod}_{\Omega}(tr_m K,L)\to \operatorname{hRmod}_{\Omega}( K,L). \] Its image in homology is what we call the $m$-th term of the Hodge filtration in $H\left(\operatorname{hRmod}_{\Omega}(K,L)\right)$. For $K=\tilde C_*\left( (\Sigma Y_*)^{\wedge\bullet}\right)\simeq \tilde H_*(\Sigma Y_*)^{\otimes\bullet}$, the cofiltration $tr_\bullet$ splits. For any pointed map of suspensions $\Sigma Y_*\to \Sigma Z_*$, the induced map \[ \mathit{gr}\, HH^{\Sigma Z_*}(\L_*)\to \mathit{gr}\, HH^{\Sigma Y_*}(\L_*) \] can be recovered from the map of the layers of $tr_\bullet$ (and thus from the map in homology $\tilde H_*\Sigma Y_*\to \tilde H_*\Sigma Z_*$) by a spectral sequence argument. \subsection{Hodge filtration versus cardinality cofiltration}\label{ss42+} Denote by ${\mathcal{CH}}^{X_*}(\L_*)$ the higher Hochschild complex $$ {\mathcal{CH}}^{X_*}(\L_*):=\operatorname{hRmod}_\Omega \left( \tilde C_* \left( X_*^{\wedge\bullet}\right), cr\,\L_*\right). $$ The Hodge filtration \[ F_0 {\mathcal{CH}}^{X_*}(\L_*) \to F_1 {\mathcal{CH}}^{X_*}(\L_*) \to F_2 {\mathcal{CH}}^{X_*}(\L_*) \to \ldots \] should not be confused with the more widely used cardinality or rank cofiltration (depending on the context it is also called the Goodwillie-Weiss tower)~\cite{AyFr,IntJMc,WeissEmb}: \[ T_0 {\mathcal{CH}}^{X_*}(\L_*) \leftarrow T_1 {\mathcal{CH}}^{X_*}(\L_*) \leftarrow T_2 {\mathcal{CH}}^{X_*}(\L_*) \leftarrow \ldots . \] We have seen in the previous subsection that \[ F_m {\mathcal{CH}}^{X_*}(\L_*) \simeq \operatorname{hRmod}_\Omega \left( tr_m \tilde C_* \left( X_*^{\wedge\bullet}\right),cr\,\L_*\right).
\] \begin{prop}\label{p:cardinal} The $m$-th term of the cardinality cofiltration is \[ T_m {\mathcal{CH}}^{X_*}(\L_*) \simeq \operatorname{hRmod}_\Omega \left( \tilde C_* \left( X_*^{\wedge\bullet}\right),tr_m cr\,\L_*\right). \] \end{prop} \begin{proof} Denote by $\Gamma_m$ and $\Omega_m$ the full subcategories of $\Gamma$, respectively $\Omega$, consisting of objects of cardinality $\leq m+1$, respectively $\leq m$. One has obvious restriction functors \[ (-)|_{\leq m}\colon \mathrm{mod}{-}\Gamma\to \mathrm{mod}{-}\Gamma_m;\qquad (-)|_{\leq m}\colon \mathrm{mod}{-}\Omega\to \mathrm{mod}{-}\Omega_m. \] By definition \begin{equation}\label{eq:cardinal} T_m {\mathcal{CH}}^{X_*}(\L_*) \simeq \operatorname{hRmod}_{\Gamma_m} \left( C_* \left( X_*^{\bullet}\right)|_{\leq m}, \L_*|_{\leq m}\right). \end{equation} The cross-effect functor \[ cr\colon\mathrm{mod}{-}\Gamma_m\to \mathrm{mod}{-}\Omega_m \] defined by~\eqref{eq_cr} is also an equivalence in the truncated case. For a right $\Omega_m$ module $K$, denote by $triv_m (K)$ the $\Omega$ module obtained by extending $K$ trivially on sets of cardinality $>m$: \[ triv_m(K)(\ell) = \begin{cases} K(\ell),& \ell\leq m;\\ 0,& \ell>m. \end{cases} \] One has a Quillen adjunction \[ (-)|_{\leq m}\colon\mathrm{mod}{-}\Omega\rightleftarrows \mathrm{mod}{-}\Omega_m\colon triv_m. \] Notice that $triv_m\circ (-)|_{\leq m} = tr_m$. As a consequence we get \[ T_m {\mathcal{CH}}^{X_*}(\L_*) \simeq \operatorname{hRmod}_{\Omega_m} \left( \tilde C_* \left( X_*^{\wedge\bullet}\right)|_{\leq m}, cr\,\L_*|_{\leq m}\right)\simeq \operatorname{hRmod}_\Omega \left( \tilde C_* \left( X_*^{\wedge\bullet}\right), tr_m cr\,\L_*\right). \] \end{proof} Finally, let us compare the $T_m$ and $F_m$ terms in the case of a suspension to make sure that they are different.
\begin{gather*} F_m {\mathcal{CH}}^{\Sigma Y_*}(\L_*) = \prod_{i=0}^m\operatorname{hRmod}_\Omega\left( \tilde H_*(\Sigma Y_*)^{\otimes i},cr\, \L_*\right)= \prod_{i=0}^m\left(\prod_{j=i}^{+\infty} \mathrm{Hom}_{\mathbb{S}_j}\left( ( \tilde H_*(\Sigma Y_*)^{\otimes i}\circ \mathsf{coLie}\{1\} )(j), cr\, \L_* (j) \right), d\right);\\ T_m {\mathcal{CH}}^{\Sigma Y_*}(\L_*) = \prod_{i=0}^{+\infty}\operatorname{hRmod}_\Omega\left( \tilde H_*(\Sigma Y_*)^{\otimes i}, tr_m cr\, \L_*\right)= \prod_{i=0}^m\left(\prod_{j=i}^{m} \mathrm{Hom}_{\mathbb{S}_j}\left( ( \tilde H_*(\Sigma Y_*)^{\otimes i}\circ \mathsf{coLie}\{1\} )(j), cr\, \L_* (j) \right), d\right). \end{gather*} One can see that the terms $F_m$ and $T_m$ are not the same. \begin{rem}\label{r:cardin} The cardinality cofiltration induces a decreasing filtration in ${\mathcal{CH}}^{\Sigma Y_*}(\L_*)$: we define $F^m {\mathcal{CH}}^{\Sigma Y_*}(\L_*)$ as the kernel of the projection $p_m\colon{\mathcal{CH}}^{\Sigma Y_*}(\L_*)\to T_m {\mathcal{CH}}^{\Sigma Y_*}(\L_*)$. Notice that $p_m$ restricted to $F_m {\mathcal{CH}}^{\Sigma Y_*}(\L_*)$ is still surjective. As a consequence, one has that the Hodge filtration in the Hochschild-Pirashvili homology of a suspension is dense in the topology induced by this decreasing filtration. \end{rem} \begin{rem}\label{r:cardin2} The cardinality cofiltration in the higher Hochschild homology of suspensions, contrary to the Hodge filtration, does not split in general. \end{rem} \section{Coformality of $\tilde C_*\left( (\Sigma Y_*)^{\wedge\bullet} \right)$. Proof of Theorem~\ref{t:HP_suspensions2}}\label{ss43} We need to recall some theory of right modules over $\mathsf{Com}_+$~\cite{Fresse1}. As we briefly explained in Subsection~\ref{ss41}, a functorial cofibrant replacement of a right $\Omega$-module or equivalently a right $\mathsf{Com}_+$-module $M$ is $M\circ\mathsf{coLie}\{1\}\circ\mathsf{Com}_+$. The sequence $M\circ\mathsf{coLie}\{1\}$ is the {\it Koszul dual} of $M$.
Notice that it is naturally a right $\mathsf{coLie}\{1\}$-comodule. Given any other right $\mathsf{coLie}\{1\}$-comodule $N$, one can get a $\mathsf{Com}_+$-module $N\circ\mathsf{Com}_+$.\footnote{The differential in $N\circ\mathsf{Com}_+$ is the sum of two terms: the first one is induced by the differential on $N$; the second splits off one cobracket from $N$ and makes it act from the left as a product on $\mathsf{Com}_+$.} It is easy to see that $N\circ \mathsf{Com}_+$ is quasi-isomorphic to $M$ (as a $\mathsf{Com}_+$-module) if and only if $N$ is quasi-isomorphic to $M\circ\mathsf{coLie}\{1\}$ (as a $\mathsf{coLie}\{1\}$-comodule). If this happens we say that $N$ is a Koszul dual of $M$ and $M$ is a Koszul dual of $N$. This is part of a general homotopy theory of right modules~\cite{Fresse1}. For any right module $M$ over any doubly reduced operad ${\mathcal O}$ in chain complexes (${\mathcal O}(0)=0$, ${\mathcal O}(1)={\mathbb{Q}}$), the bar construction $B(M,{\mathcal O},I)$ is a right comodule over the cooperad $B(I,{\mathcal O},I)$. By $I$ we mean the unit object in symmetric sequences $$ I(k)= \begin{cases} {\mathbb{Q}},& k=1;\\ 0,& k\neq 1. \end{cases} $$ In our case the operad ${\mathcal O}=\mathsf{Com}_+$ is Koszul and the bar complexes can be replaced by the equivalent Koszul complexes~\cite{Fresse1}. It was shown in~\cite[Lemma~11.4]{AroneTur1} that for any pointed space $X_*$, the Koszul dual of $\tilde C_*(X_*^{\wedge\bullet})$ is $\tilde C_*(X_*^{\wedge\bullet}/\Delta^\bullet X_*)$, where by $\Delta^n X_*$ we understand the fat diagonal in $X_*^{\wedge n}$.
On homology the $\mathsf{coLie}\{1\}$ coaction \[ \circ_{i\sim j}\colon\tilde H_*(X_*^{\wedge n}/\Delta^n X_*)\to \tilde H_{*-1}(X_*^{\wedge n-1}/\Delta^{n-1} X_*) \otimes \mathsf{coLie}\{1\}(2) \] is induced by the connecting homomorphisms $\partial\colon H_*(X_*^{\wedge n},\Delta^n X_*)\to H_{*-1}(\Delta^n X_*, \Delta_{ij}^n X_*)$ of the long exact sequences of the triples \[ (X_*^{\wedge n},\Delta^n X_*,\Delta_{ij}^n X_*), \] where $\Delta_{ij}^n X_*$ is the union of all diagonals except the diagonal $x_i=x_j$. (One obviously has $\Delta^n X_*/\Delta_{ij}^n X_*\cong X_*^{\wedge n-1}/\Delta^{n-1} X_*$.) \begin{defi}\label{d:coformal} We say that a right $\mathsf{Com}_+$-module is coformal if its Koszul dual $\mathsf{coLie}\{1\}$-comodule is formal. A map of right $\mathsf{Com}_+$-modules is said to be coformal if the induced morphism of their Koszul duals is formal. \end{defi} \begin{prop}\label{p:coformal_susp} \sloppy For any pointed suspension $\Sigma Y_*$, the right $\mathsf{Com}_+$-module $\tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)$ is coformal. For any pointed map of suspensions $f\colon\Sigma Y_*\to\Sigma Z_*$, the induced map of $\mathsf{Com}_+$-modules $f_*\colon \tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)\to \tilde C_*\left((\Sigma Z_*)^{\wedge\bullet}\right)$ is coformal. \end{prop} \begin{proof} According to Proposition~\ref{p:formal_susp} both $\mathsf{Com}_+$-modules $\tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right)$ and $\tilde C_*\left((\Sigma Z_*)^{\wedge\bullet}\right)$ are formal. Their Koszul duals are $\tilde H_*(\Sigma Y_*)^{\otimes \bullet}\circ \mathsf{coLie}\{1\}$ and $\tilde H_*(\Sigma Z_*)^{\otimes \bullet}\circ \mathsf{coLie}\{1\}$, see Subsection~\ref{ss41}, which are formal and cofree. On the other hand it is easy to see that any map between right $\mathsf{coLie}\{1\}$-comodules whose homology is cofree is formal.
\end{proof} \begin{cor}\label{cor:conf_susp} One has a natural isomorphism of right $\mathsf{coLie}\{1\}$-comodules \begin{equation}\label{eq_PBW_general} \tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right){\stackrel \simeq\longrightarrow} \tilde H_*(\Sigma Y_*)^{\otimes\bullet}\circ\mathsf{coLie}\{1\}, \end{equation} functorial over the category $\Sigma({\mathrm{Top}}_*)$. \end{cor} One simply needs to apply the Koszul duality functor to the zigzag~\eqref{eq:formality_arone} and then take the homology. At the starting point we get the left-hand side of~\eqref{eq_PBW_general} and at the end we get the right-hand side. Notice that this corollary describes the rational homology of certain configuration spaces of points in suspensions. Now notice that the sequences $\tilde H_*(\Sigma Y_*)^{\otimes\bullet}$ and $\tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right)$ are naturally left modules over the commutative operad $\mathsf{Com}$. Indeed, the first one is freely generated by its arity one component $\tilde H_*(\Sigma Y_*)^{\otimes 1}$, while the left $\mathsf{Com}$-module structure on the second one is induced by the maps \[ \left( (\Sigma Y_*)^{\wedge m}/\Delta^m \Sigma Y_*\right)\wedge \left( (\Sigma Y_*)^{\wedge n}/\Delta^n \Sigma Y_*\right)\longrightarrow \left( (\Sigma Y_*)^{\wedge m+n}/\Delta^{m+n} \Sigma Y_*\right). \] (More generally, if a right $\mathsf{Com}_+$-module has a compatible left action by another operad $\mathcal O$, then its Koszul dual is also naturally a left $\mathcal O$-module.) \begin{prop}\label{pr:left_act} The isomorphism~\eqref{eq_PBW_general} respects the left $\mathsf{Com}$ action. \end{prop} \begin{proof} It is enough to check that each map in the zigzag~\eqref{eq:formality_arone} respects the left $\mathsf{Com}$ action. \end{proof} \subsection{Complexes ${\mathrm{CH}}^{\Sigma Y_*}(\L_*)$.
Proof of Theorem~\ref{t:HP_suspensions2}}\label{ss44} \sloppy We define the complexes ${\mathrm{CH}}^{\Sigma Y_*}(\L_*)$ as follows: \begin{multline}\label{eq_CH_general} \operatorname{Rmod}_{\mathsf{Com}_+}\left( \tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right)\circ\mathsf{Com}_+, cr\,\L_*\right)\simeq\\ \left( \prod_{n\geq 0} \mathrm{Hom}_{\mathbb{S}_n}\left( \tilde H_* \left((\Sigma Y_*)^{\wedge n}/\Delta^n \Sigma Y_*\right), cr\,\L_*(n)\right), d_{Y_*}+d_{\L_*}\right), \end{multline} where $d_{\L_*}$ is the part of the differential induced by the differential in $\L_*$, and $d_{Y_*}$ is induced by the differential in $\tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right)\circ\mathsf{Com}_+$, which is the Koszul dual $\mathsf{Com}_+$-module to the $\mathsf{coLie}\{1\}$-comodule $\tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right)$. Explicitly, if $f\in \mathrm{Hom}_{\mathbb{S}_n}\left( \tilde H_* \left((\Sigma Y_*)^{\wedge n}/\Delta^n \Sigma Y_*\right), cr\,\L_*(n)\right)$, then $d_{Y_*}f\in \mathrm{Hom}_{\mathbb{S}_{n+1}}\left( \tilde H_* \left((\Sigma Y_*)^{\wedge n+1}/\Delta^{n+1} \Sigma Y_*\right), cr\,\L_*(n+1)\right)$ is defined as follows: \[ (d_{Y_*}f)\bigl( \gamma(x_1\ldots x_{n+1})\bigr) = \sum_{1\leq i<j\leq n+1}f( \gamma_{ij}(x_1\ldots x_{i\sim j}\ldots x_{n+1}))\circ_{i\sim j}(x_i\cdot x_j), \] where $\gamma_{ij}$ is computed from the formula $\circ_{i\sim j} (\gamma)=\gamma_{ij}\otimes [x_i,x_j]^\vee$ of the $\mathsf{coLie}\{1\}$ coaction. Now we check that ${\mathrm{CH}}^{(-)}(\L_*)$ satisfies the properties from Theorem~\ref{t:HP_suspensions2}.
Firstly, ${\mathrm{CH}}^{(-)}(\L_*)\colon{\mathrm{Top}}_*|_{\Sigma}\to dgVect$ is a well defined functor: a pointed map $\Sigma Y_*\to \Sigma Z_*$ induces a map of $\mathsf{coLie}\{1\}$-comodules \[ \tilde H_* ((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*) \to \tilde H_* ((\Sigma Z_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Z_*). \] It computes the Hochschild-Pirashvili homology functor by the coformality property, see Proposition~\ref{p:coformal_susp}. Using the isomorphism~\eqref{eq_PBW_general} we can define the $m$-th truncation of $\tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right)$ as the cofree part cogenerated by $\tilde H_*(\Sigma Y_*)^{\otimes i}$, $i\leq m$. In the Hochschild homology this obviously corresponds to the Hodge filtration defined in Subsection~\ref{ss42}. The map of graded quotients is determined by the morphism in homology $f_*\colon\tilde H_*(\Sigma Y_*)\to \tilde H_*(\Sigma Z_*)$ due to Corollary~\ref{cor:conf_susp} and Proposition~\ref{pr:left_act} (see also the next section, where this is shown more explicitly). The splitting of the Hodge filtration over $\Sigma({\mathrm{Top}}_*)$ has been shown in the previous section. \sloppy Now let us check that the complexes ${\mathrm{CH}}^{\vee_n S^1}(\L_*)$ coincide with $CH^{\vee_n S^1}(\L_*)$ defined in Section~\ref{s:CH_vee_n}. To see this one needs to identify $cr\,\L_*(\bullet)$ with $N\L_*(\bullet)$ by means of the isomorphism~\eqref{eq:q}. For simplicity let us start with the case $n=1$. One has $(S^1)^{\wedge k}/\Delta^k S^1=\vee_{k!}S^k$. Thus, \[ \prod_{k\geq 0} \mathrm{Hom}_{\mathbb{S}_k}\left( \tilde H_* \left((S^1)^{\wedge k}/\Delta^k S^1 \right), N\L_*(k)\right)= \prod_{k=0}^{+\infty} N\L_*(k)[k] ={\mathrm{Tot}} \,\L_*\circ (S^1)_\bullet. \] One can check that the differentials agree.
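To illustrate the identification $(S^1)^{\wedge k}/\Delta^k S^1=\vee_{k!}S^k$ in the lowest nontrivial arity (a classical computation, recorded here for convenience): for $k=2$ one has $(S^1)^{\wedge 2}\cong S^2$, and the fat diagonal $\Delta^2 S^1$ is the image of the diagonal circle, so that \[ (S^1)^{\wedge 2}/\Delta^2 S^1 \cong S^2\vee S^2=\vee_{2!}S^2. \] Its reduced homology is two-dimensional, concentrated in degree two, and carries a (sign-twisted) regular $\mathbb{S}_2$-action, so the corresponding factor $\mathrm{Hom}_{\mathbb{S}_2}\left(\tilde H_*\left(\vee_{2!}S^2\right), N\L_*(2)\right)$ contributes the summand $N\L_*(2)[2]$ of the product above.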
In the case of arbitrary $n$, one has $\left(\vee_n S^1\right)^{\wedge k} /\Delta^k(\vee_n S^1)=\vee_{k_1+\ldots +k_n=k}\vee_{k!} S^k$, and one similarly gets \[ \prod_{k\geq 0} \mathrm{Hom}_{\mathbb{S}_k}\left( \tilde H_* \left((\vee_n S^1)^{\wedge k}/\Delta^k (\vee_n S^1) \right), N\L_*(k)\right)= \prod_{k=0}^{+\infty} \,\prod_{k_1+\ldots +k_n=k} N\L_*(k)[k] ={\mathrm{Tot}} (\,\L_*\circ (\vee_n S^1)_{\underbrace{\bullet\ldots\bullet}_n}). \] For the last identity, see equation~\eqref{eq_CH_vee_n}. \begin{rem}\label{rem:action_ident} The monoid $\End(F_n)$ is the monoid of homotopy classes of pointed self-maps $\vee_n S^1\to\vee_n S^1$ and thus acts on the $\mathsf{coLie}\{1\}$-comodule $ \tilde H_* \left((\vee_n S^1)^{\wedge \bullet}/\Delta^\bullet (\vee_n S^1) \right)$. One can check that the induced action on ${\mathrm{CH}}^{\vee_n S^1}(\L_*)$ coincides with the one on $CH^{\vee_n S^1}(\L_*)$ described explicitly in Section~\ref{s:CH_vee_n}. \end{rem} \section{Determining the map of Hochschild-Pirashvili homology from the rational homotopy type of a map}\label{s:rht_map} \sloppy It is clear from the definition that the rational homology type of a space determines its rational higher Hochschild homology. In other words, if $X_*\to W_*$ is a rational homology equivalence then the induced map $HH^{W_*}(\L_*)\to HH^{X_*}(\L_*)$ is an isomorphism. Similarly, the rational homology type of any map $X_*\to W_*$ determines the map in rational Hochschild-Pirashvili homology. In particular, the rational homotopy type of a map must determine the map of higher Hochschild homology. (In fact, for suspensions rational homology equivalences and rational homotopy equivalences coincide.) In this section we compute how exactly a map of suspensions induces the map of Hochschild complexes. For simplicity we will be assuming that the homology groups of the spaces that we consider are of finite type.
Many of the results hold without this restriction, but require more technical work involving careful colimit arguments. Since the goal is to make it applicable to concrete examples, which in practice always have this property, we concentrate on this case. \subsection{Determining the map of Koszul duals from the rational homotopy type of a map}\label{ss:determ1} First we need to understand how the map of Koszul duals \[ \tilde H_* ((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*) \to \tilde H_* ((\Sigma Z_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Z_*) \] is determined by the rational homotopy type of a map $f\colon\Sigma Y_*\to\Sigma Z_*$. \sloppy Any such map produces a commutative square of right $\mathsf{coLie}\{1\}$-comodules: \begin{equation}\label{eq_sq} \xymatrix{ \tilde H_* \left((\Sigma Y_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Y_*\right)\ar[r]^-\simeq\ar[d] &\tilde H_*(\Sigma Y_*)^{\otimes\bullet}\circ\mathsf{coLie}\{1\}\ar[d] \\ \tilde H_* \left((\Sigma Z_*)^{\wedge\bullet}/\Delta^\bullet \Sigma Z_*\right)\ar[r]^-\simeq &\tilde H_*(\Sigma Z_*)^{\otimes\bullet}\circ\mathsf{coLie}\{1\} } \end{equation} The horizontal arrows are the isomorphisms from Corollary~\ref{cor:conf_susp}. We are interested in the right vertical map. (Notice that since $f$ is arbitrary and not necessarily a suspension, this right vertical map is not determined by the induced map in homology $f_*\colon\tilde H_*(\Sigma Y_*)\to\tilde H_*(\Sigma Z_*)$.) According to Proposition~\ref{pr:left_act}, the horizontal maps respect the left $\mathsf{Com}$ action. It is quite obvious that the left vertical map does so as well. As a consequence, the right vertical map also respects this action. Its source is freely generated as a left $\mathsf{Com}$-module by $\tilde H_*(\Sigma Y_*)^{\otimes 1}\circ\mathsf{coLie}\{1\}$, and its target is cofreely cogenerated as a $\mathsf{coLie}\{1\}$ right comodule by $\tilde H_*(\Sigma Z_*)^{\otimes\bullet}$.
As a consequence this map is determined by a map of symmetric sequences \[ \tilde H_*(\Sigma Y_*)^{\otimes 1}\circ\mathsf{coLie}\{1\}\longrightarrow \tilde H_*(\Sigma Z_*)^{\otimes\bullet}, \] or equivalently by a map \begin{equation}\label{eq_rht} \tilde H_*(Y_*)\to \mathrm{FreeLie}\left(\tilde H_*Z_*\right), \end{equation} where $\mathrm{FreeLie}\left(\tilde H_*Z_*\right)$ denotes the free completed Lie algebra generated by $\tilde H_*Z_*$. The rational homotopy Lie algebra of a simply connected suspension is a free Lie algebra generated by its reduced homology. We claim that in the simply connected case the map obtained in~\eqref{eq_rht} describes exactly the map (of generators) of rational homotopy. More generally, when the suspensions are not necessarily simply connected, one can still assign a morphism~\eqref{eq_rht} to the rational homotopy type of a map $f\colon\Sigma Y_*\to\Sigma Z_*$. By Lemma~\ref{l:formality2} any suspension is rationally formal. Thus the induced map of their Sullivan models \[ A_{\Sigma Z_*}\to A_{\Sigma Y_*} \] is quasi-isomorphic to a map of dg algebras \begin{equation}\label{eq:rat_model_map} {\mathcal A}({\mathcal L}^c(\tilde H^*\Sigma Z_*))\to H^*\Sigma Y_*, \end{equation} where the left-hand side is the cofibrant replacement of $H^*\Sigma Z_*$ obtained as the Chevalley-Eilenberg complex ${\mathcal A}(-)$ of the Harrison complex ${\mathcal L}^c(-)$ of the (non-unital) algebra $\tilde H^*\Sigma Z_*$. Notice that ${\mathcal L}^c(\tilde H^*\Sigma Z_*)$ is the cofree Lie coalgebra cogenerated by $\tilde H^*Z_*$ (with zero differential). Its dual vector space is exactly $\mathrm{FreeLie}(\tilde H_*Z_*)$. The map of algebras~\eqref{eq:rat_model_map} is determined by its restriction to the space of generators \begin{equation}\label{eq:rat_model_gener} {\mathcal L}^c(\tilde H^*\Sigma Z_*)\to \tilde H^* \Sigma Y_*.
\end{equation} \begin{prop}\label{pr:thomas} For any map $f\colon\Sigma Y_*\to\Sigma Z_*$ of pointed suspensions of finite type, the map~\eqref{eq:rat_model_gener} encoding the rational homotopy type of $f$ is dual to the map~\eqref{eq_rht} encoding the homotopy type of the induced map of right $\mathsf{Com}_+$ modules \begin{equation}\label{eq:cy_cz} \tilde C_*\left((\Sigma Y_*)^{\wedge\bullet}\right) \to \tilde C_*\left((\Sigma Z_*)^{\wedge\bullet}\right). \end{equation} \end{prop} \begin{proof} Arguing as in the proof of Lemma~\ref{l:formality1}, the map of right $\mathsf{Com}_+$ modules~\eqref{eq:cy_cz} is equivalent to the map \begin{equation}\label{eq:thomas1} (\tilde H_*\Sigma Y_*)^{\otimes \bullet}\to \left(\tilde{\mathcal A}({\mathcal L}^c(\tilde H^*\Sigma Z_*))^{\otimes\bullet}\right)^\vee, \end{equation} where $\tilde{\mathcal A}(-)$ denotes the augmented part of ${\mathcal A}(-)$; \lq\lq{}$^\vee$\rq\rq{} denotes taking the dual of a graded vector space. The map~\eqref{eq:thomas1} in each arity is the dual of a tensor power of~\eqref{eq:rat_model_map}. The right-hand side of~\eqref{eq:thomas1} can also be expressed as $\left(\tilde \hChev(\mathrm{FreeLie}(\tilde H_* Z_*))\right)^{\hat\otimes\bullet}$, where $\tilde \hChev(-)$ denotes the completed augmented Chevalley-Eilenberg complex (of a completed Lie algebra $\mathrm{FreeLie}(\tilde H_* Z_*)$); \lq\lq{}$\hat\otimes$\rq\rq{} denotes the completed tensor product. One has a zigzag of right $\mathsf{Com}_+$-modules \[ (\tilde H_*\Sigma Y_*)^{\otimes \bullet}\to \left(\tilde \hChev (\mathrm{FreeLie}(\tilde H_* Z_*))\right)^{\hat\otimes\bullet} {\stackrel \simeq\longleftarrow} (\tilde H_*\Sigma Z_*)^{\otimes \bullet}, \] where the right arrow is an equivalence. 
We get a zigzag of their Koszul duals: \begin{equation}\label{eq:thomas2} (\tilde H_*\Sigma Y_*)^{\otimes \bullet}\circ\mathsf{coLie}\{1\}\to \left(\tilde \hChev(\mathrm{FreeLie}(\tilde H_* Z_*))\right)^{\hat\otimes\bullet}\circ\mathsf{coLie}\{1\}{\stackrel \simeq\longleftarrow} (\tilde H_*\Sigma Z_*)^{\otimes \bullet}\circ\mathsf{coLie}\{1\}. \end{equation} We claim that the right arrow has a natural left inverse. In order to construct this left inverse \[ \left(\tilde \hChev(\mathrm{FreeLie}(\tilde H_* Z_*))\right)^{\hat\otimes\bullet}\circ\mathsf{coLie}\{1\} {\stackrel \simeq\longrightarrow} (\tilde H_*\Sigma Z_*)^{\otimes \bullet}\circ\mathsf{coLie}\{1\} \] it is enough to define a map of their (co)generators \[ \left(\tilde \hChev(\mathrm{FreeLie}(\tilde H_* Z_*))\right)^{\hat\otimes 1}\circ\mathsf{coLie}\{1\} \longrightarrow (\tilde H_*\Sigma Z_*)^{\otimes \bullet}. \] In arity $n$ the latter map of symmetric sequences is defined as the following composition \begin{multline*} \tilde \hChev(\mathrm{FreeLie}(\tilde H_* Z_*))\otimes\mathsf{coLie}\{1\}(n) \to \mathrm{FreeLie}(\tilde H_* Z_*)[-1]\otimes\mathsf{coLie}\{1\} (n)\to \\ \mathsf{Lie}(n)\otimes_{\mathbb{S}_n}(\tilde H_*\Sigma Z_*)^{\otimes n}\otimes \mathsf{coLie}(n) \to (\tilde H_*\Sigma Z_*)^{\otimes n}. \end{multline*} The first map is induced by the projection on cogenerators $\tilde \hChev(\mathrm{FreeLie}(\tilde H_* Z_*))\to \mathrm{FreeLie}(\tilde H_* Z_*)[-1]$. The second map is obtained by projecting $\mathrm{FreeLie}(\tilde H_*Z_*)$ onto its subspace spanned by brackets of length $n$. The last map takes into account the duality between the spaces $\mathsf{Lie}(n)$ and $\mathsf{coLie}(n)$: \[ L\otimes h_1\otimes\ldots \otimes h_n\otimes L' \mapsto \sum_{\sigma\in\mathbb{S}_n}(\sigma L,L') h_{\sigma_1}\otimes\ldots\otimes h_{\sigma_n}.
\] To finish the proof we notice that the composite of the first arrow in~\eqref{eq:thomas2} and the constructed inverse is the map \[ (\tilde H_*\Sigma Y_*)^{\otimes \bullet}\circ\mathsf{coLie}\{1\}\to (\tilde H_*\Sigma Z_*)^{\otimes \bullet}\circ\mathsf{coLie}\{1\} \] (co)generated by the map dual to~\eqref{eq:rat_model_gener}. \end{proof} \subsection{Determining map of Hochschild-Pirashvili homology}\label{ss:determ2} In this subsection we describe how the map \begin{equation}\label{eq:rhmap} \tilde H_*Y_*\to \mathrm{FreeLie}(\tilde H_*Z) \end{equation} encoding the rational homotopy type of $f\colon\Sigma Y_*\to\Sigma Z_*$ determines the map of higher Hochschild complexes ${\mathrm{CH}}^{(-)}(-)$ (in fact we will work with $\mathit{gr}\, {\mathrm{CH}}^{(-)}(-)$ instead). For simplicity we assume that $Y_*$ and $Z_*$ are of finite type, and we only consider the case $\L_*=M\otimes \Chev(\alg g)^{\otimes \bullet}$, where $\alg g$ is strictly positively graded. Thus we need to describe the induced map \begin{equation}\label{eq:expl1} M\,\hat\otimes\, S\left(\tilde H^*Z_*\hat\otimes\alg g\right)\to M\,\hat\otimes \,S\left(\tilde H^*Y_*\hat\otimes\alg g\right). \end{equation} Firstly, this map is the tensor product of the identity on the first factor $M$ and a coalgebra homomorphism on the second one. Hence it is enough to describe its composition with the projection to the space of cogenerators \begin{equation}\label{eq:CH_map_cogener} S\left(\tilde H^*Z_*\hat\otimes\alg g\right)\to \tilde H^*Y_*\hat\otimes\alg g. \end{equation} The map~\eqref{eq:rhmap} is a product of maps \begin{equation}\label{eq:rhmap_n} \tilde H_*Y\to \mathsf{Lie}(n)\otimes_{\mathbb{S}_n}(\tilde H_* Z_*)^{\otimes n}. \end{equation} Its $n$-th component~\eqref{eq:rhmap_n} can be viewed as an element $\rho_n\in\tilde H^*Y_*\hat\otimes\mathsf{Lie}(n)\otimes_{\mathbb{S}_n}(\tilde H_*Z)^{\otimes n}$.
This element $\rho_n$ contributes only to the component \begin{equation}\label{eq:CH_map_cogener_n} S^n\left(\tilde H^*Z_*\otimes\alg g\right)\to \tilde H^*Y_*\otimes\alg g \end{equation} of~\eqref{eq:CH_map_cogener}. The element $\rho_n$ is a sum of elements of the form \[ h^0\otimes L\otimes h_1\otimes\ldots \otimes h_n\in\tilde H^*Y_*\otimes\mathsf{Lie}(n)\otimes_{\mathbb{S}_n}(\tilde H_*Z)^{\otimes n}. \] Each such summand contributes to~\eqref{eq:CH_map_cogener_n} as a map sending \[ (h^1\otimes g_1)\cdot \ldots\cdot (h^n\otimes g_n)\in S^n\left(\tilde H^*Z_*\otimes\alg g\right) \] to \[ \sum_{\sigma\in\mathbb{S}_n}\pm\left(\prod_{i=1}^n (h_i,h^{\sigma_i})\right) h^0\otimes L(g_{\sigma_1},\ldots,g_{\sigma_n}) \in \tilde H^*Y_*\hat\otimes\alg g, \] where the sign is as usual the Koszul one induced by the permutation of elements. In the examples below we will omit the hat sign over the tensor product, as the induced map~\eqref{eq:expl1} can always be restricted to the non-completed part $M\otimes S(\tilde H^*(-)\otimes \alg g)$ (where the symmetric power is also taken in the non-completed sense). \begin{ex} Consider the map $S^1\to S^1\vee S^1$ which sends the generator $x$ of $\pi_1S^1$ to the product $y_1y_2$ of generators of $\pi_1( S^1\vee S^1)$. The map~\eqref{eq:rhmap} becomes \[ x{\mathbb{Q}}\to \mathrm{FreeLie}(y_1,y_2), \] which encodes the map of the primitive part of the Malcev completions~\cite{FHT2} (all generators $x$, $y_1$, $y_2$ are of degree zero). The image of $x$ is described by the Baker-Campbell-Hausdorff formula \[ x\mapsto \ln(e^{y_1}\cdot e^{y_2}). \] The map~\eqref{eq:expl1} becomes \[ M\otimes S(\alg g)\otimes S(\alg g)\to M\otimes S(\alg g), \] which sends \[ m\otimes A\otimes B\mapsto m\otimes A\star B, \] where $\star$ is the associative (star) product on $S(\alg g)$ transported from ${\mathcal{U}}\alg g$ via the Poincar\'e-Birkhoff-Witt isomorphism.
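For concreteness, the first few terms of this Baker-Campbell-Hausdorff series read \[ x\mapsto y_1+y_2+\frac 12[y_1,y_2]+\frac 1{12}\left([y_1,[y_1,y_2]]+[y_2,[y_2,y_1]]\right)+\ldots, \] so the image of $x$ involves brackets of all lengths.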
\end{ex} \begin{ex} Consider the map $S^2\to S^1\vee S^2$ corresponding to the element $x\cdot y\in \pi_2 ( S^1\vee S^2)$, where $x$ is the generator of $\pi_1 S^1$ and $y$ is the generator of $\pi_2 S^2$. The map~\eqref{eq:rhmap} in our case is \[ y{\mathbb{Q}}\to \mathrm{FreeLie}(x,y), \] where $|x|=0$, $|y|=1$, \[ y\mapsto e^{ad_x}(y). \] The induced map~\eqref{eq:expl1} is \[ M\otimes S(\alg g)\otimes S(\alg g[1])\to M\otimes S(\alg g[1]), \] sending \[ m\otimes g_1\cdot\ldots \cdot g_k\otimes s^{-1}g'_1\cdot\ldots\cdot s^{-1}g'_{k'} \mapsto m\otimes\frac 1{k!}\sum_{\sigma\in\mathbb{S}_k} ad_{g_{\sigma_1}}\ldots ad_{g_{\sigma_k}}( s^{-1}g'_1\cdot\ldots\cdot s^{-1}g'_{k'}). \] \end{ex} \begin{ex} Consider the Hopf map $S^3\to S^2$. On the level of rational homotopy we get a map \[ y{\mathbb{Q}}\to \mathrm{FreeLie}(x), \] where $|x|=1$, $|y|=2$, and \[ y\mapsto \frac 12 [x,x]. \] The induced map of higher Hochschild complexes \[ M\otimes S(\alg g[1])\to M\otimes S(\alg g[2]) \] sends \begin{gather*} m\otimes s^{-1}g_1\cdot\ldots \cdot s^{-1}g_{2k-1}\mapsto 0,\\ m\otimes s^{-1}g_1\cdot\ldots \cdot s^{-1}g_{2k}\mapsto m\otimes \frac{1}{2^k k!}\sum_{\sigma\in\mathbb{S}_{2k}} \pm s^{-2}[g_{\sigma_1},g_{\sigma_2}]\cdot\ldots\cdot s^{-2}[g_{\sigma_{2k-1}},g_{\sigma_{2k}}]. \end{gather*} \end{ex} \section{Hochschild-Pirashvili homology for non-suspensions}\label{s:non_susp} Some of the techniques given in the present paper can also be applied to study the higher Hochschild homology for non-suspensions and maps between them. This section is a short note on how this works in the special case when $\L_*=M\otimes \Chev(\alg g)^{\otimes\bullet}$, where $\alg g$ as usual is a strictly positively graded dg Lie algebra, and the spaces are connected and of finite type. 
\begin{thm}\label{th:non_susp} Let $X_*$ be a connected pointed space of finite type, let $A$ be an augmented non-positively graded commutative dg algebra of finite type quasi-isomorphic to the Sullivan algebra $A_{X_*}$, and let $\tilde A$ be its augmentation ideal.\footnote{In our conventions all the complexes have differential of degree $-1$, for which reason the algebras we consider are non-positively graded.} Then the Hochschild-Pirashvili homology $HH^{X_*}(\Chev(\alg g),M)$ is computed by the complex $M\hat\otimes \hChev(\tilde A\hat\otimes\alg g)$, where $\hChev(\tilde A\hat\otimes\alg g)$ is the completed (with respect to the total homological degree of elements from $\alg g$) Chevalley-Eilenberg complex of the completed Lie algebra $\tilde A\hat\otimes\alg g$. The differential has the form \begin{equation}\label{eq:differential_any_space} d=d_M+d_{\alg g}+d_A+d_{CE}+\delta, \end{equation} where $d_M$, $d_{\alg g}$, $\delta$ are as in~\eqref{eq:differential2}, $d_A$ is induced by the differential in $A$, and $d_{CE}$ is the Chevalley-Eilenberg differential. \end{thm} \begin{proof} This complex is constructed in the same way as the higher Hochschild complexes for suspensions, see Subsection~\ref{ss41}. The extra term $d_{CE}$ in the differential appears due to the fact that the $\mathsf{Com}_+$ action on $(\tilde A^\vee)^{\otimes \bullet}$ is now non-trivial. \end{proof} The result of this theorem is partially known to experts. It appeared explicitly for spheres and surfaces respectively in~\cite[Theorem~3]{Ginot} and \cite[Theorem~4.3.3]{GTZ0}; see also~\cite{AyFr} for a similar implicit statement in case $X$ is a manifold. Notice also that in case $M=\Chev(\alg g)$ (i.e., when considering the unpointed version of higher Hochschild homology) the obtained higher Hochschild complex is the completed Chevalley-Eilenberg complex $\hChev(A\hat \otimes \alg g)$.
As an application of this example, in case the dimension of $X$ is less than the connectivity of $Y$, the space $Y^X$ of continuous maps $X\to Y$ has homology with any coefficients described as $H_*(Y^X)\simeq HH^X(C_*(Y^\bullet))$, see~\cite{PatrasThomas,pirash00}. On the other hand, the rational homotopy type of $Y^X$ is described by the dg Lie algebra $A\hat\otimes L$, where $A$ is a suitable Sullivan model for $X$ and $L$ is a suitable Quillen model for $Y$, see~\cite{BlLaz,BrSz,BuFeMu}. From this we also recover that $\hChev(A\hat\otimes L)$, i.e., our complex, computes the rational homology of $Y^X$. \begin{rem}\label{r:non_susp_Hodge} One can easily see that the $m$th term of the Hodge filtration in $ M\hat\otimes \hChev(\tilde A\hat\otimes\alg g) = \prod_{i=0}^{+\infty} M\hat\otimes S^i (\tilde A[-1] \hat\otimes \alg g) $ is $ F_m M\hat\otimes \hChev(\tilde A\hat\otimes\alg g) = \prod_{i=0}^{m} M\hat\otimes S^i (\tilde A[-1] \hat\otimes \alg g). $ \end{rem} Theorem~\ref{th:non_susp} applied to a suspension $\Sigma Y_*$ of finite type is exactly the statement of Remark~\ref{r:grCH_C_M}. Indeed, since $\Sigma Y_*$ is formal one can take $\tilde A=\tilde H^*\Sigma Y_*$, the cohomology algebra, whose product is trivial, so that the Chevalley-Eilenberg part of the differential vanishes, $d_{CE}=0$. The rational homotopy type of a map of suspensions of finite type $f\colon\Sigma Y_*\to \Sigma Z_*$ is encoded by a map~\eqref{eq:rat_model_gener}, which is essentially the same as a $\mathsf{Com}_\infty$ map of commutative algebras $f^*_\infty\colon \tilde H^*\Sigma Z_*\to \tilde H^*\Sigma Y_*$. In Subsection~\ref{ss:determ2} we show how this map determines a map of higher Hochschild complexes \[ M\hat\otimes\hChev(\tilde H^*\Sigma Z_*\hat\otimes\alg g)\to M\hat\otimes\hChev(\tilde H^*\Sigma Y_*\hat\otimes\alg g), \] which is the identity on the first factor $M$ and a map of completed coalgebras on the second factor.
The latter map can be regarded as a completed $L_\infty$ morphism \[ \tilde H^*\Sigma Z_*\hat\otimes {\alg g}\to \tilde H^*\Sigma Y_*\hat\otimes{\alg g} \] of (completed) abelian Lie algebras. More generally, the tensor product with a dg Lie algebra is in fact a functor from $\mathsf{Com}_\infty$ algebras to $L_\infty$ algebras. We will need a completed version of this construction. Let $\tilde A$ be a negatively graded $\mathsf{Com}_\infty$ algebra of finite type encoding the rational homotopy type of a connected pointed space $X_*$, and let $\alg g$ be a positively graded dg Lie algebra. The completed $L_\infty$ algebra structure on $\tilde A\hat \otimes \alg g$ is explicitly described by the structure maps $\mu_n$ defined as the composition \begin{equation}\label{eq_A_infty} \mu_n\colon S^n(\tilde A[-1]\hat\otimes {\alg g} ) \to \mathrm{FreeLie}^c(\tilde A[-1])\hat\otimes \mathrm{FreeLie}(\alg g) \to \tilde A\hat \otimes \alg g, \end{equation} where $\mathrm{FreeLie}^c(\tilde A[-1])$ is the free Lie coalgebra cogenerated by $\tilde A[-1]$ (in other words, it is the Harrison complex ${\mathcal L}^c(\tilde A)$). The first map is induced by the diagonal $\mathsf{Com}(n)\to \mathsf{coLie}(n)\otimes \mathsf{Lie}(n)$. The second map is the $\mathsf{Com}_\infty$ structure on the first factor and the $\mathsf{Lie}$ structure on the second. If $\tilde B\to\tilde A$ is a $\mathsf{Com}_\infty$ morphism encoding the rational homotopy type of a pointed map $X_*\to Y_*$, then the induced completed $L_\infty$ map $\tilde B\hat\otimes \alg g\to \tilde A\hat\otimes \alg g$ is described by essentially the same formulas as~\eqref{eq_A_infty}. Its $n$-th component is the composition \begin{equation}\label{eq_A_infty2} F_n\colon S^n(\tilde B[-1]\hat\otimes {\alg g} ) \to \mathrm{FreeLie}^c(\tilde B[-1])\hat\otimes \mathrm{FreeLie}(\alg g) \to \tilde A[-1]\hat \otimes \alg g, \end{equation} where the first map is the same as the first one in~\eqref{eq_A_infty}.
The second map is the tensor product of the $\mathsf{Com}_\infty$ map $\tilde B\to \tilde A$ and the Lie algebra structure map on $\alg g$. In Subsection~\ref{ss:determ2} the corresponding $L_\infty$ map is explained in full detail for the case of suspensions $\tilde A=\tilde H^*\Sigma Y_*$, $\tilde B=\tilde H^*\Sigma Z_*$. \begin{rem}\label{r:l_infty} For a $\mathsf{Com}_\infty$ algebra $\tilde A$ (non-positively graded and of finite type) consider its dual $\mathsf{Com}_\infty$ coalgebra $\tilde A^\vee$. Then the $L_\infty$ algebra $\tilde A\hat\otimes \alg g$ considered above is the $L_\infty$ algebra of derivations of the zero map of Lie algebras ${\mathcal L}(\tilde A^\vee)\to \alg g$. \end{rem} \begin{thm}\label{th:non_susp2} Let $\tilde A$ be a non-positively graded $\mathsf{Com}_\infty$ algebra of finite type encoding the rational homotopy type of a pointed space $X_*$. Then the Hochschild-Pirashvili homology $HH^{X_*}(\Chev(\alg g),M)$ is computed by the complex $M\hat\otimes \hChev(\tilde A\hat\otimes\alg g)$, where $\hChev(\tilde A\hat\otimes\alg g)$ is the completed Chevalley-Eilenberg complex of the completed $L_\infty$ algebra $\tilde A\hat \otimes \alg g$. The differential has the form~\eqref{eq:differential_any_space}. If $\tilde B\to \tilde A$ is a $\mathsf{Com}_\infty$ morphism (of non-positively graded $\mathsf{Com}_\infty$ algebras of finite type) encoding the rational homotopy type of a pointed map $X_*\to Y_*$, then the induced map in the Hochschild-Pirashvili homology \[ HH^{Y_*}(\Chev(\alg g),M)\to HH^{X_*}(\Chev(\alg g),M) \] is computed by the chain map \[ M\hat\otimes \hChev(\tilde B\hat\otimes\alg g) \to M\hat\otimes \hChev(\tilde A\hat\otimes\alg g), \] which is the identity on the first factor $M$ and a completed coalgebra map corresponding to the induced completed $L_\infty$ algebras map $\tilde B\hat \otimes \alg g \to \tilde A\hat \otimes \alg g$.
\end{thm} \begin{proof} First we check that the statement of the theorem holds when $\tilde B\to\tilde A$ is a map of dg commutative algebras, which is an easy refinement of Theorem~\ref{th:non_susp}. On the other hand, any $\mathsf{Com}_\infty$ algebra (and any $\mathsf{Com}_\infty$ morphism) is quasi-isomorphic to a dg commutative algebra (map of dg commutative algebras). This, together with the fact that a $\mathsf{Com}_\infty$ quasi-isomorphism $\tilde A_1\to \tilde A_2$ induces an $L_\infty$ quasi-isomorphism $\tilde A_1\hat\otimes \alg g\to \tilde A_2\hat\otimes \alg g$, proves the statement of the theorem. \end{proof} The above theorem has the following corollary. \begin{prop}\label{p:non_susp_no} For a pointed connected space $X_*$ of finite type, the Hodge filtration in the higher Hochschild complexes splits for any coefficient $\Gamma$-module $\L_*$ if and only if $X_*$ is rationally homology equivalent to a suspension. \end{prop} \begin{proof} In one direction the statement easily follows from the fact that a rational homology equivalence of spaces induces a quasi-isomorphism of higher Hochschild complexes. Now suppose $X_*$ is not rationally equivalent to a suspension. It is well known that any $\mathsf{Com}_\infty$ algebra is $\mathsf{Com}_\infty$ quasi-isomorphic to one with zero differential \cite[Theorem 10.4.5]{lodayval}. Let $\tilde A$ be such an algebra encoding the rational homotopy type of $X_*$. Since we assume $X_*$ is not rationally a suspension, $\tilde A$ must have non-trivial (higher) product(s). Let $k$ be the arity of the first non-trivial product. We choose $\L_*=M\otimes\Chev(\alg g)^{\otimes\bullet}$, where $M={\mathbb{Q}}$ is the comodule with the trivial coaction, and $\alg g$ is a free Lie algebra with $k$ generators. By construction $\tilde A\hat\otimes\alg g$ is an $L_\infty$ algebra with zero differential and whose first non-trivial (higher) bracket has arity $k$.
Applying Remark~\ref{r:non_susp_Hodge} we get that the $(k-1)$th differential in the spectral sequence associated with the Hodge filtration in $ M\hat\otimes \hChev(\tilde A\otimes\alg g)$ is non-zero. Therefore the filtration does not split. \end{proof} \subsection{Hochschild-Pirashvili homology as ``homotopy base change''} Let us conclude by remarking on a curious algebraic interpretation of the Hochschild-Pirashvili homology in the form described in Theorem \ref{th:non_susp}. First, recall that to any dg commutative algebra $A$ we may associate a functor \[ \Phi_A : (\text{Lie algebras}) \to (\text{Lie algebras}) \] by sending a dg Lie algebra $\alg g$ to the tensor product $\Phi_A(\alg g) := \alg g\otimes A$, with the Lie algebra structure $A$-linearly extended in the obvious manner. We may call this functor $\Phi_A$ ``base change'', even though this is a misnomer as we do not change the underlying ground ring. Similarly, if $\alg g$ is a dg Lie algebra and $K$ is an $A$-module, we may define a functor \[ \Psi_{A,K} : (\alg g-\text{modules}) \to (\Phi_A(\alg g)-\text{modules}) \] by sending a $\alg g$-module $\alg k$ to the $\Phi_A(\alg g)$-module $\Psi_{A,K}(\alg k):=\alg k \otimes K$, with the module structure defined in the obvious manner. We also call the functor $\Psi_{A,K}$ ``base change'', with the same caveat as above that this is a misnomer. There is also a topological variant: If the Lie algebra $\alg g$ carries in addition a complete topology compatible with the Lie algebra structure, then $\hat \Phi_A(\alg g):= \alg g \hat\otimes A$ is likewise equipped with a natural complete filtration. Similarly, if $\alg k$ is equipped with a complete filtration and the action of $A$ is continuous, then $\hat\Psi_{A,K}(\alg k):=\alg k \hat \otimes K$ is a complete (continuous) $\Phi_A(\alg g)$-module.
Now it is well known \cite[chapter 11.3]{lodayval} that there is an adjunction of categories \[ \Ha : (\text{conilpotent coaugmented dg cocommutative coalgebras}) \leftrightarrows (\text{dg Lie algebras}): \Chev \] given by the bar and cobar functors (i.e., the Harrison and Chevalley complex functors), such that for any conilpotent dg coalgebra $C$ the unit of the adjunction $C\to \Chev(\Ha (C))$ is a quasi-isomorphism, and such that for any dg Lie algebra $\alg g$ the counit of the adjunction $\Ha(\Chev(\alg g))\to \alg g$ is a quasi-isomorphism. Concretely, the functor $\Ha$ takes the Harrison complex (a free Lie algebra) of the cokernel of the coaugmentation, while the functor $\Chev$ takes the Chevalley complex. Similar functors exist on the level of comodules. If $C$ is a conilpotent dg cocommutative coalgebra then we have bar and cobar functors \begin{align*} \HaMod : (\text{conilpotent $C$-comodules}) \to (\Ha (C)-\text{modules}) \\ (\text{conilpotent $\Chev(\Ha (C))$-comodules}) \leftarrow (\Ha (C)-\text{modules}) : \Chev_{\rm mod} . \end{align*} Concretely, $\HaMod(M)={ \mathrm{Harr} }(C;M)$ is the Harrison complex with values in the module $M$, i.e., a free $\Ha (C)$-module generated by $M$ if we disregard the differential. Similarly, $\Chev_{\rm mod}(N)=\Chev(\Ha (C);N)$ is the Chevalley complex with values in $N$, i.e., a cofree $\Chev(\Ha (C))$-comodule cogenerated by $N$ with a natural differential. There exist versions of the above constructions for complete topological algebras and modules, obtained by replacing the tensor products appearing there by a completed version. We denote those completed versions by $\hHa$, $\hChev$ etc. The above adjunctions allow us to transport any endofunctor of the category of dg Lie algebras to an endofunctor of the category of conilpotent dg cocommutative coalgebras (and vice versa).
The point of this section is to remark that the Hochschild-Pirashvili homology functor is nothing but the (homology of the) well-known base change functors above, transported to the category of conilpotent coalgebras via the bar and cobar adjunctions. This gives an algebraically ``very simple'' interpretation of the Hochschild-Pirashvili homology. Concretely, let us assume that we are given the following data: \begin{itemize} \item A conilpotent complete cocommutative dg coalgebra $C$, for example $C=\Chev(\alg g)$, for a dg Lie algebra $\alg g$ as in Theorem~\ref{th:non_susp}, which we endow with the complete decreasing filtration by degree. \item A conilpotent complete $C$-comodule $M$. \item An augmented dg commutative algebra $A$. For example, we may take such an $A$ from Theorem~\ref{th:non_susp}. We will still denote by $\tilde A$ its augmentation ideal. \item We let $K=\mathbb{Q}$ be the one-dimensional $A$-module, with the action defined by the augmentation. \end{itemize} Then we define a complete cocommutative coalgebra \[ C_A := \hChev(\hat\Phi_A(\Ha(C)) ) = \hChev(\Ha(C)\hat \otimes A ) \] and the complete $C_A$-comodule \[ M_A := \hChev_{\rm mod}(\hat\Psi_{A,K}(\HaMod(M)) ) = \hChev_{\rm mod}(\HaMod(M)\hat \otimes K ). \] Clearly, these constructions are functorial in $A$, $C$ and $M$. We will abusively call these constructions ``homotopy base change''. The main statement of this section is then that the complex of Theorem \ref{th:non_susp} computing the Hochschild-Pirashvili homology may be interpreted as ``homotopy base change''. \begin{prop}\label{p:non_susp} For $C=\Chev(\alg g)$ the Chevalley complex of a dg Lie algebra, and $A,M$ as above, the complex $M_A$ and the complex $( M \hat \otimes \hChev(\tilde A \hat\otimes \alg g),d)$ of Theorem \ref{th:non_susp} are quasi-isomorphic.
\end{prop} \begin{proof} Explicitly, the complex $M_A$ has the form \[ \hChev(\Ha(C)\hat \otimes A; { \mathrm{Harr} }(C; M)\otimes K ) \] where $\hChev(-;-)$ denotes the (completed) Chevalley complex with values in the second argument, and ${ \mathrm{Harr} }(-; -)$ denotes the Harrison complex. Using the augmentation we may now split $A=\mathbb{Q}\oplus \tilde A$, where $\tilde A$ is the kernel of the augmentation. Using this splitting we find the identification of graded vector spaces (recall that $K=\mathbb{Q}$) \begin{equation}\label{equ:Cisomorphism} \hChev(\Ha(C)\hat\otimes A; { \mathrm{Harr} }(C; M)\otimes K ) \cong \hChev(\Ha(C)\hat \otimes \tilde A)\hat \otimes \Chev(\Ha(C); { \mathrm{Harr} }(C; M) ). \end{equation} Note however, that this identification is not an identification of complexes (yet). The differential on the right-hand side is composed of two terms: the differential $d_1$ of the left-hand tensor factor and the differential $d_2$ of the right-hand tensor factor. The differential on the left-hand side of \eqref{equ:Cisomorphism} on the other hand has an additional term $d_{mixed}$ from the Chevalley differential, which is obtained by taking the coaction of $\Chev(\Ha(C); { \mathrm{Harr} }(C; M) )$ followed by a Lie bracket. Note that this term resembles the term $\delta$ in Theorem \ref{th:non_susp}. Note that we have a quasi-isomorphism of $\Chev(\Ha( C))$-comodules \[ M \to \Chev(\Ha(C); { \mathrm{Harr} }(C; M) ). \] Hence we obtain a quasi-isomorphism of complexes \begin{equation}\label{equ:Cisomorphism1} (\hChev(\Ha(C)\hat\otimes \tilde A) \otimes M, d_1+d_M+d_{mixed}) \stackrel{\sim}{\to} (\hChev(\Ha(C)\hat\otimes \tilde A) \hat\otimes \Chev(\Ha(C); { \mathrm{Harr} }(C; M) ), d_1+d_2+d_{mixed}) \cong M_A , \end{equation} where the part of the differential $d_{mixed}$ on the left-hand complex is defined as before by taking the coaction on $M$ followed by a Lie bracket with a factor of $\hChev(\Ha(C)\hat\otimes \tilde A)$. 
Furthermore, since $C=\Chev(\alg g)$ we have a quasi-isomorphism of dg Lie algebras \[ \Ha(C) \to \alg g. \] Hence we obtain a quasi-isomorphism \begin{equation}\label{equ:Cisomorphism2} (\hChev(\Ha(C)\hat\otimes \tilde A)\hat \otimes M, d_1+d_M+d_{mixed}) \stackrel{\sim}{\to} (\hChev(\alg g\hat \otimes \tilde A) \hat\otimes M, d) \end{equation} with the complex considered in Theorem \ref{th:non_susp}. By \eqref{equ:Cisomorphism1} and \eqref{equ:Cisomorphism2} the proposition follows. \end{proof} \bibliographystyle{plain}
\section{Introduction} The question of whether there is one universe or a collection of different ones in a multiverse is an inherently inhomogeneous one, and therefore requires any quantum cosmological treatment to go beyond the common minisuperspace constructions. It remains extremely difficult to address in theories such as loop quantum gravity, which do not yet give rise to reliable intuitive and tractable phenomena in anything but the simplest models. Nevertheless, recent progress on effective descriptions of loop quantum gravity has revealed general, perhaps even universal, effects at high curvature, which can be used to test whether the theory makes it likely for the required structures of a multiverse to form. Quite surprisingly, the cosmological scenarios based on these new results are entirely unlike anything that has been imagined in most homogeneous models of cosmology. Some quantum-geometry effects can be so strong at high density that they trigger signature change, an implication which has been overlooked for several years because it can only be seen when inhomogeneity is implemented consistently. Consequences for a possible multiverse are discussed in this article. \section{Big-bang singularity} In isotropic loop quantum cosmology \cite{LivRev,Springer}, the wave function $\psi_{\mu}$, in terms of a geometrical variable $\mu$ quantizing the spatial volume or the scale factor, can be extended to a universe before the big bang, according to a difference equation of the form \begin{equation} \label{Diff} C_+(\mu) \psi_{\mu+1}- C_0(\mu)\psi_{\mu}+ C_-(\mu)\psi_{\mu-1} = \hat{H}_{{\rm matter}}(\mu)\psi_{\mu}\,. \end{equation} Following the recurrence, the wave function is evolved through $\mu=0$, the classical singularity \cite{Sing,IsoCosmo}.
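As a toy illustration, the three-term recurrence \eqref{Diff} can be iterated numerically. The coefficients used below are schematic placeholders, not the actual coefficients of the isotropic model (which vanish at isolated values of $\mu$ and encode quantum geometry); they serve only to show how two initial values on one side of the classical singularity determine the wave function on the other side.

```python
# Toy version of the difference equation (Diff):
#   C_+(mu) psi_{mu+1} - C_0(mu) psi_mu + C_-(mu) psi_{mu-1} = H_matter(mu) psi_mu
# All coefficients are schematic placeholders, not those of the actual model.

def C_plus(mu):
    return 1.0          # placeholder; must be nonzero to solve for psi_{mu+1}

def C_minus(mu):
    return 1.0          # placeholder

def C_zero(mu):
    return 2.0          # placeholder

def H_matter(mu):
    return 0.1          # toy matter-Hamiltonian eigenvalue

def evolve(psi_first, psi_second, mu_min, mu_max):
    """March the three-term recurrence upward from mu_min to mu_max."""
    psi = {mu_min: psi_first, mu_min + 1: psi_second}
    for mu in range(mu_min + 1, mu_max):
        psi[mu + 1] = ((C_zero(mu) + H_matter(mu)) * psi[mu]
                       - C_minus(mu) * psi[mu - 1]) / C_plus(mu)
    return psi

# Two initial values before the classical singularity determine the wave
# function at mu = 0 and beyond.
psi = evolve(1.0, 1.0, -10, 10)
```

In the actual model it is the vanishing of certain coefficients at specific $\mu$ that decouples the would-be singular state; the constant toy coefficients above sidestep this subtlety.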
Departures not only from classical dynamics but also from the continuous Wheeler--DeWitt equation arise because there are strong ``holonomy modifications'' at nearly Planckian density, to be discussed in more detail later in this article. These corrections provide the terms by which the difference operator on the left-hand side of (\ref{Diff}) differs from a second-order derivative with respect to $\mu$ as it appears in the Wheeler--DeWitt equation. If finite shifts in the difference operator are Taylor-expanded, a series of higher-order corrections in the momentum of $\mu$ (a curvature component) is obtained \cite{SemiClass}. Holonomy modifications therefore contribute to higher-curvature corrections expected in any quantum theory of gravity, but they lack higher time derivatives and therefore do not provide complete curvature terms. Motivated by the use of holonomies instead of connection components in the full theory of loop quantum gravity, holonomy modifications replace the Hubble parameter $H$ by a bounded function $\sin(\ell H)/\ell$ in a modified Friedmann equation \begin{equation} \label{ModFried} \frac{\sin(\ell H)^2}{\ell^2}= \frac{8\pi G}{3} \rho\,, \end{equation} with an ambiguity parameter $\ell$ of the dimension of length (possibly related to the Planck length). Using this modified Friedmann equation, one obtains in simple models an effective picture of singularity resolution given by a bounce \cite{APS}: Clearly, the energy density for solutions to (\ref{ModFried}) must always remain bounded. Note that higher-order corrections in an expansion of $\sin(\ell H)^2/\ell^2\sim H^2(1+O(\ell^2 H^2))$ are indeed of the same size as usually expected for higher-curvature corrections, given by the matter density divided by something close to the Planck density. However, (\ref{ModFried}) ignores higher time derivatives, which should be of similar size as they, too, contribute to higher curvature terms.
As long as these terms are ignored, one cannot use (\ref{ModFried}) at high density, near the maximum of the sine function, and it remains unclear whether loop quantum cosmology generically gives rise to bounces as an effective picture of its singularity resolution. In some models with specific matter content (a free massless scalar or kinetic domination), one can show that higher time derivatives are absent or small \cite{BouncePert}. There is therefore a class of models in loop quantum cosmology in which the mechanism of singularity resolution can effectively be described as a bounce, and we will explore this scenario in more detail here. Toward the end of the article, we will comment on the entropy problem \cite{Tolman} which must be addressed in any bounce model. A bounce, in general terms, may give rise to a multiverse picture, using the following line of arguments: Consider a collapsing (part of the) universe. Inhomogeneity builds up as the universe evolves. If space is viewed as a patchwork of nearly homogeneous pieces, their size must be decreased after some time interval to maintain a good approximation, a mathematical process which can be seen as describing the dynamical fractionalization of space. These statements are true in any collapsing universe. If one uses a theory that gives rise to a bounce mechanism, denser patches that reach Planckian density earlier bounce first (assuming that homogeneous models are good for the patch evolution). These bounced patches appear as expanding regions embedded within a still-contracting neighborhood. Given the opposite expansion behaviors, it is difficult to imagine that they can maintain causal contact with their neighborhood. A multiverse picture not unlike that of bubble nucleation in inflation results, except that there is no analog of ``bubbles within bubbles'' because a patch, once it has bounced, keeps expanding and diluting for a long time and would have to recollapse to trigger new bounces. 
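The bounded-density behavior underlying this bounce picture can be made concrete with a small numerical consistency check. It assumes the standard effective dynamics of the free-massless-scalar model, in which the holonomy modification leads to $H^2=(8\pi G/3)\rho(1-\rho/\rho_c)$ with critical density $\rho_c=3/(8\pi G\ell^2)$ and the exact solution $\rho(t)=\rho_c/(1+24\pi G\rho_c t^2)$; these formulas are quoted from the standard effective treatment, not derived here.

```python
import math

# Consistency check of the effective bounce solution for a free massless
# scalar (p = rho, so rho' = -6 H rho).  Assumed standard effective
# dynamics, in units G = l = 1:
#   H^2 = (8 pi G / 3) rho (1 - rho/rho_c),  rho_c = 3/(8 pi G l^2),
#   rho(t) = rho_c / (1 + 24 pi G rho_c t^2)   (bounce at t = 0).

G = 1.0
ell = 1.0
rho_c = 3.0 / (8.0 * math.pi * G * ell**2)

def rho(t):
    return rho_c / (1.0 + 24.0 * math.pi * G * rho_c * t**2)

def hubble(t):
    # H = -rho'/(6 rho), with rho' computed analytically
    u = 24.0 * math.pi * G * rho_c * t**2
    rho_dot = -48.0 * math.pi * G * rho_c**2 * t / (1.0 + u)**2
    return -rho_dot / (6.0 * rho(t))

# The density never exceeds rho_c, and the effective Friedmann equation
# holds on both sides of the bounce:
for t in (-3.0, -1.0, -0.1, 0.0, 0.1, 1.0, 3.0):
    assert rho(t) <= rho_c + 1e-12
    lhs = hubble(t)**2
    rhs = (8.0 * math.pi * G / 3.0) * rho(t) * (1.0 - rho(t) / rho_c)
    assert abs(lhs - rhs) < 1e-10
```

The contracting branch ($t<0$, $H<0$) joins the expanding branch smoothly at $t=0$, where $\rho=\rho_c$ and $H=0$; nothing in this check, however, addresses the neglected higher time derivatives emphasized above.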
Another, independent mechanism, which also starts from general properties of inhomogeneous collapse, uses features of black holes: As inhomogeneity increases, black holes may form and grow. If the singularities they contain classically are resolved in quantum gravity, the question is where this dense space-time region leads. The interior space-time within the horizon of a Schwarzschild black hole, Fig.~\ref{BlackHoleEng}, can be treated like cosmological models, and its singularity is resolved just like the big-bang singularity if modifications of loop quantum cosmology are used \cite{BHInt}. Also here, the presence of largely uncomputed higher-curvature corrections means that no effective space-time for a non-singular black hole is known. But one can try to see how a non-singular, possibly bouncing interior could be embedded within an inhomogeneous black-hole space-time. The non-singular interior may be embedded in a spherically symmetric exterior in different ways, shown in Fig.~\ref{Interior}. If space-time splits off into a baby universe, a causally disconnected region is obtained, as illustrated in Fig.~\ref{Branch}. Multiple such processes provide a multiverse. \begin{figure} \includegraphics[height=.2\textheight]{BlackHoleEng} \caption{A classical space-time diagram for black-hole collapse, culminating in a singularity covered by a horizon. If space-time is spherically symmetric, the vacuum part of the interior within the horizon takes on the form of a homogeneous cosmological model. (This figure as well as Figs.~\ref{Interior} and \ref{Branch} are taken from \cite{Once}.) \label{BlackHoleEng}} \end{figure} \begin{figure} \includegraphics[height=.4\textheight]{Interior} \caption{The vacuum interior region of Fig.~\ref{BlackHoleEng} can be quantized with methods of loop quantum cosmology, removing the classical singularity. The enlarged, non-singular interior may be embedded in spherically symmetric space-time in two causally different ways.
(It is not known at present how precisely to construct such embeddings.) First, the post-singularity interior may open up into a new space-time without causal contact with the original outside region. Secondly, the interior may re-open into the original space-time, spilling out its matter as some kind of black-hole explosion. \label{Interior}} \end{figure} \begin{figure} \includegraphics[height=.25\textheight]{Branch} \caption{The two alternatives shown in Fig.~\ref{Interior} give rise to different space-time models. A baby universe is obtained from every black hole if the post-singularity interior does not connect causally back to the original space-time. Many such processes would then give rise to a multiverse. If the interior reconnects to the original space-time, black holes are merely compact, extremely dense objects within a single universe. \label{Branch}} \end{figure} We need good control on inhomogeneity if we want to fill these scenarios with more details. This task is difficult to achieve in non-perturbative quantum gravity, but we can use effective theory to include the key effects in a tractable model. \section{Space-time structure} In order to develop ingredients for an effective description of quantum space-time in loop quantum gravity, we begin with a formulation and generalization of the relevant classical structures. Classical space-time has as symmetries Poincar\'e transformations, which one may view as linear deformations of spatial slices in space-time: we have deformations along the normal $\vec{n}$ by \[ N(\vec{x})=c\Delta t+\frac{\vec{v}\cdot \vec{x}}{c} \] with time translations and boosts, see Fig.~\ref{LorentzMink}, or within a slice along $\vec{w}(\vec{x})= \Delta \vec{x}+{\bf R}\vec{x}$ with spatial translations and rotations. \begin{figure} \includegraphics[height=.11\textheight]{LorentzMink} \caption{Lorentz boosts can be viewed as linear deformations of spatial slices in Minkowski space-time. 
Here, the standard Minkowski diagram is redrawn with a slightly different viewpoint, focussing on equal-time spatial slices in space-time. Tilted axes show how orthogonality in Minkowski geometry is to be represented after a boost to $(t',x')$. \label{LorentzMink}} \end{figure} \begin{figure} \includegraphics[height=.2\textheight]{HypDefLinMink} \caption{The Poincar\'e algebra follows geometrically from combinations of deformations as in Fig.~\ref{LorentzMink}. Shown here is the example of a (boost, time-translation) commutator, with the two orderings shown at the top and bottom. A normal deformation by $N_1(x)=v x/c$ (Lorentz boost) and one by $N_2(x)=c\Delta t-v x/c$ (reverse Lorentz boost and waiting $\Delta t$) commute up to a spatial displacement $w(x)=\Delta x=v\Delta t$, as computed using the small triangle with angle $v/c$. \label{HypDefLinMink}} \end{figure} Algebraic calculations of commutators can be replaced by geometrical pictures, such as the one shown in Fig.~\ref{HypDefLinMink}. In this way, it turns out, one is more open to potential modifications of the algebra due to quantum effects. It is also possible to extend the picture to general relativity without much effort. We simply view local Lorentz transformations or non-linear coordinate changes as non-linear deformations of space, as in Fig.~\ref{SurfaceDefMink}. Instead of the well-known commutators of the Poincar\'e algebra, we obtain the not-so-well-known hypersurface-deformation algebra of infinitely many generators $(S(\vec{w}(\vec{x})), T(N(\vec{x})))$, labeled by a vector field $\vec{w}(\vec{x})$ and a function $N(\vec{x})$ in space, with \cite{DiracHamGR} \begin{eqnarray} [S(\vec{w}_1),S(\vec{w}_2)]&=& S((\vec{w}_2\cdot\vec{\nabla})\vec{w}_1- (\vec{w}_1\cdot\vec{\nabla})\vec{w}_2) \label{DD}\\ {} [T(N),S(\vec{w})] &=& T(\vec{w}\cdot\vec{\nabla}N)\\ {} [T(N_1),T(N_2)] &=& S(N_1\vec{\nabla}N_2-N_2\vec{\nabla}N_1)\,. 
\label{HH} \end{eqnarray} \begin{figure} \includegraphics[height=.12\textheight]{SurfaceDefMink} \caption{The space-time structure of general relativity is obtained by allowing for non-linear deformations of spatial slices. The Poincar\'e algebra is replaced by the infinite-dimensional hypersurface-deformation algebra. Using classical space-time geometry, one can derive (\ref{DD})--(\ref{HH}); see e.g.\ \cite{Action}. \label{SurfaceDefMink}} \end{figure} Hypersurface deformations not only generalize the Poincar\'e algebra, they also geometrize the dynamics of space-time. As shown by \cite{Regained,LagrangianRegained}, second-order field equations for metrics, invariant under the hypersurface-deformation algebra, must equal Einstein's. Moreover, invariance under the hypersurface-deformation algebra implies general covariance. All this is classical physics. The problem of quantum gravity can be approached by asking: How does quantum physics change hypersurface deformations? \section{Canonical gravity} In order to see how quantum gravity may affect the relations (\ref{DD})--(\ref{HH}), we must find operators that quantize the classical expressions of $T(N)$ and $S(\vec{w})$. Posing the question in this way suggests that canonical quantum gravity might be closest to answering it, and loop quantum gravity is currently the best-developed canonical approach. In this framework, computing the quantum version, especially of (\ref{HH}), in complete detail remains challenging, but a diverse set of methods, including but not restricted to effective techniques, has, during the last few years, led to mutually consistent and apparently universal results in most of the model systems usually considered in general relativity \cite{ConstraintAlgebra,LTBII,JR,ThreeDeform,ScalarHol,ModCollapse,TwoPlusOneDef,TwoPlusOneDef2,AnoFreeWeak,AnoFreeWeakDiff}; see \cite{Action,ReviewEff} for a general discussion.
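The geometrical construction of Fig.~\ref{HypDefLinMink} can be cross-checked against the algebra itself. The following sketch (illustrative, using sympy; not part of the original derivation) evaluates the right-hand side of (\ref{HH}) in one spatial dimension for the lapse functions $N_1(x)=vx/c$ and $N_2(x)=c\Delta t-vx/c$ used there, recovering a spatial displacement of magnitude $v\Delta t$ as read off from the small triangle in the figure:

```python
import sympy as sp

x, v, c, Dt = sp.symbols('x v c Delta_t', real=True)

# Lapse functions of Fig. HypDefLinMink (one spatial dimension):
N1 = v * x / c            # Lorentz boost
N2 = c * Dt - v * x / c   # reverse boost plus waiting Delta t

# Right-hand side of the commutator [T(N1), T(N2)] = S(N1 dN2 - N2 dN1):
w = sp.simplify(N1 * sp.diff(N2, x) - N2 * sp.diff(N1, x))
print(w)  # a constant shift of magnitude v*Delta_t
```

The position-dependent pieces cancel, leaving a rigid spatial displacement, in agreement with the caption of Fig.~\ref{HypDefLinMink} (up to the orientation convention for the shift).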
Since we will be using methods of loop quantum gravity \cite{Rov,ThomasRev}, we describe space-time geometry by a canonical pair of an su(2)-valued ``electric field'' $\vec{E}_i$ and a ``vector potential'' $\uvec{A}_i$. (An under-arrow indicates a covariant vector, or a 1-form.) The gravitational electric field is a triad and determines spatial distances and angles by three orthonormal vectors $\vec{E}_i$, $i=1,2,3$, at each point in space. The gravitational vector potential $\uvec{A}_i$ is a combination of different measures of curvature of space: the Ashtekar--Barbero connection. In quantum field theory, one uses integrated (smeared) fields to construct creation operators by which one can generate all Fock states out of the vacuum. In loop quantum gravity, the geometrical fields offer a natural smearing of $\uvec{A}_i$ along curves (exponentiated to holonomies) and $\vec{E}_i$ over surfaces (fluxes). Loop quantum gravity uses holonomies as creation operators to construct a state space. In what follows, we illustrate these objects using a U(1)-connection $\uvec{A}$ for simplicity. Holonomies are then $h_e=\exp(i\int_e{\mathrm d} \lambda \uvec{A}\cdot\vec{t}_e)$ integrated along curves $e$ in space, with tangent $\vec{t}_e$. For every possible $e$, $\hat{h}_e$ provides excitations of geometry along this curve: As we will see momentarily, surfaces intersected by $e$ gain area as the excitation level on $e$ is increased. To construct the corresponding quantum theory, we start with a basic state $\psi_0$, $\psi_0(\uvec{A})=1$. Excited states are obtained by acting with holonomies: \begin{eqnarray} \psi_{e_1,k_1;\ldots;e_i,k_i}(\uvec{A})&=& \hat{h}_{e_1}^{k_1}\cdots \hat{h}_{e_i}^{k_i}\psi_0(\uvec{A})\\ &=&\prod_{e} h_e(\uvec{A})^{k_e}=\prod_{e} \exp(ik_e \smallint_e {\mathrm d} \lambda \uvec{A}\cdot\vec{t}_e)\,,\nonumber \end{eqnarray} written in the connection representation. 
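To make the U(1) holonomies concrete, the following numerical sketch (with a hypothetical abelian connection, chosen only for illustration) evaluates $h_e=\exp(i\int_e{\mathrm d}\lambda\, \uvec{A}\cdot\vec{t}_e)$ along a closed curve; for the chosen $\uvec{A}$, Stokes' theorem makes the phase equal to the enclosed area:

```python
import numpy as np

def holonomy(A, curve, n=4001):
    """U(1) holonomy h_e = exp(i * integral of A . t_e along the curve e)."""
    lam = np.linspace(0.0, 1.0, n)
    pts = np.array([curve(l) for l in lam])
    tangents = np.gradient(pts, lam, axis=0)   # dx/d(lambda)
    integrand = np.sum(np.array([A(p) for p in pts]) * tangents, axis=1)
    # trapezoidal rule for the line integral
    phase = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lam))
    return np.exp(1j * phase)

# Hypothetical abelian connection A = (-y/2, x/2): for a closed loop the
# phase equals the enclosed area (Stokes), i.e. pi for the unit circle.
A = lambda p: np.array([-0.5 * p[1], 0.5 * p[0]])
circle = lambda l: np.array([np.cos(2 * np.pi * l), np.sin(2 * np.pi * l)])

h = holonomy(A, circle)
print(h)  # close to exp(i*pi) = -1
```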
Many excitations along edges are needed to obtain a macroscopic, near-continuum space-time region. Quantum space-time is realized when only a small number of curves $e$ are geometrically excited, and the action of a single holonomy operator has strong implications for the overall state. Holonomy corrections to the classical equations, as already used in (\ref{ModFried}), are then significant. Derivative operators are quantized fluxes $\int_S{\mathrm d}^2y \uvec{n}\cdot\hat{\!\vec{E}}$ for surfaces $S$ in space, with co-normal $\uvec{n}$. They act by \begin{eqnarray} \label{Flux} \int_S{\mathrm d}^2y \uvec{n}\cdot\hat{\!\vec{E}}\psi_{g,k}&=& \frac{8\pi G\hbar}{i}\int_S{\mathrm d}^2y \uvec{n}\cdot \frac{\delta \psi_{g,k}}{\delta \uvec{A}(y)}\\ &=& 8\pi \ell_{\rm Pl}^2\sum_{e\in g} k_e {\rm Int}(S,e) \psi_{g,k}\nonumber \end{eqnarray} with the intersection number ${\rm Int}(S,e)$ and the Planck length $\ell_{\rm Pl}=\sqrt{G\hbar}$. On the right-hand side, we sum only integers, implying a discrete spectrum for fluxes. The same kind of discreteness is realized if we go back to SU(2)-valued fields, in which case we simply replace derivatives by angular-momentum operators, and integers $k_e$ by spin quantum numbers \cite{RS:Spinnet}. Geometry is discrete: for gravity, fluxes with discrete spectra represent the spatial metric. \subsection{Dynamics} For the discrete dynamics of cosmic expansion, one must quantize the gravitational Hamiltonian (constraint). Since it depends on the connection but only holonomies can be represented as operators in loop quantum gravity, modifications as in (\ref{ModFried}) are necessary, but now for the full theory \cite{RS:Ham,QSDI}. The modified dynamics then takes into account details of how discrete space grows by creating new lattice sites (atoms of space), changing the excitation level of geometry as measured by fluxes.
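The discrete flux spectrum (\ref{Flux}) is straightforward to evaluate for a given graph configuration. The following sketch (with hypothetical edge data, chosen only to illustrate the discreteness) returns eigenvalues in integer multiples of $8\pi\ell_{\rm Pl}^2$:

```python
import math

ELL_PL_SQ = 1.0  # Planck area, set to one (units with G*hbar = 1)

def flux_eigenvalue(edges):
    """Eigenvalue 8*pi*l_Pl^2 * sum_e k_e Int(S,e) of the flux operator,
    acting on a holonomy excitation with integer levels k_e and signed
    intersection numbers Int(S,e)."""
    return 8 * math.pi * ELL_PL_SQ * sum(k * i for k, i in edges)

# Hypothetical graph: (excitation level k_e, intersection number Int(S,e));
# the last edge does not intersect the surface S at all.
edges = [(2, +1), (1, +1), (3, +1), (7, 0)]
print(flux_eigenvalue(edges) / (8 * math.pi * ELL_PL_SQ))  # integer: 6.0
```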
The classical form of the Hamiltonian is somewhat analogous to that of Yang--Mills theory on Minkowski space-time, where \begin{equation} \label{HYM} H=\kappa \int{\rm d}^3x (|\vec{E}_i|^2+|\vec{B}_i|^2) \end{equation} for $\vec{B}_i= \uvec{\nabla}\times\uvec{A}_i+ C_{ijk}\uvec{A}_j\times \uvec{A}_k$ (and structure constants $C_{ijk}$). For gravity on any space-time, only showing the crucial terms, \begin{equation} \label{H} H(N)=\frac{1}{16\pi G}\int{\rm d}^3x N\frac{\sum_{ijk}\epsilon_{ijk}(\vec{B}_i\times \vec{E}_j)\cdot\vec{E}_k}{\sqrt{\frac{1}{6}|\sum_{ijk}\epsilon_{ijk} (\vec{E}_i\times \vec{E}_j)\cdot\vec{E}_k|}} +\cdots \end{equation} with $C_{ijk}=\epsilon_{ijk}$. The presence of a free function $N$ in (\ref{H}), as opposed to (\ref{HYM}), realizes the freedom of one's choice of time coordinate in generally covariant theories. Indeed, $H(N)$, plus matter Hamiltonians, plays the role of the time deformation generator $T(N)$ introduced earlier. If we can quantize $H(N)$ and compute commutators of the resulting operators, we can see if and how (\ref{HH}) and the space-time structure it encodes might change. Not surprisingly, the required calculations are rather complicated and remain incomplete, but some results are known. The form of the Hamiltonian together with properties of the loop representation implies characteristic corrections when quantized. We have already mentioned higher-order corrections resulting from an expansion of holonomies by $\uvec{A}_i$, used in place of the classical $\vec{B}_i$ in (\ref{H}). Holonomy corrections will modify any gravitational Hamiltonian in loop quantum gravity, and therefore indicate that (\ref{HH}) might be quantum corrected. However, holonomy corrections are not the only ones. There is also quantum back-reaction, which is present in any interacting quantum theory and gives rise to higher-time derivatives and curvature corrections. 
In canonical quantizations, effective techniques provide systematic methods to compute such terms \cite{EffAc,Karpacz,HigherTime}. Note that holonomy corrections and quantum back-reaction (higher-time derivatives) both depend on the curvature. The magnitudes of holonomy corrections and quantum back-reaction therefore cannot easily be distinguished, but their algebraic features are sufficiently different from each other to disentangle their implications for quantum space-time, using the substitutes of (\ref{HH}) they imply. There is a third type of correction in loop quantum gravity, which is easier to handle and which we will discuss first. We obtain inverse-triad corrections from quantizing \cite{QSDV} \begin{equation} \label{Inv} \left\{\uvec{A}^i,\int{\sqrt{|\det E|}}\mathrm{d}^3x\right\}= 2\pi G \epsilon^{ijk} \frac{\vec{E}_j\times\vec{E}_k}{{\sqrt{|\det E|}}} \end{equation} whose right-hand side appears in the Hamiltonian (\ref{H}) but cannot be quantized directly, owing to the non-existent inverses of flux operators (\ref{Flux}) with discrete spectra containing zero. The left-hand side of (\ref{Inv}), on the other hand, can be quantized and is regular, but implies quantum corrections especially for small flux eigenvalues; see Fig.~\ref{alpha}. These corrections can be computed explicitly in models and provide an automatic cut-off of the $1/E$-divergences \cite{InvScale,QuantCorrPert,InflTest}. They refer to flux eigenvalues in relation to the Planck scale, and are therefore independent of holonomy and higher-curvature corrections, which depend on the curvature scale or the energy density. Inverse-triad corrections are therefore more reliable than holonomy corrections, given that higher-curvature terms in loop quantum gravity remain largely unknown.
\begin{figure} \includegraphics[height=.23\textheight]{alpha} \caption{Inverse-triad corrections of loop quantum gravity imply a correction function $\alpha(\mu)$ that depends on the ratio $\mu$ of flux eigenvalues to the Planck area and multiplies any occurrence of inverse triads, for instance in Hamiltonians. These functions approach the classical limit $\alpha=1$ from above for large fluxes (coarser spatial lattices) and cut off classical divergences by strong quantum corrections at small flux values. The parameter $r$ describes a quantization ambiguity, which does not affect qualitative features such as $\alpha$ becoming small near $\mu=0$ and $\alpha$ approaching one from above for $\mu\to\infty$. \label{alpha}} \end{figure} \subsection{Inverse-triad corrections} For any type of correction, we can study dynamical implications by inserting the corresponding correction functions into the classical Hamiltonian. For inverse-triad corrections, for instance, we have \begin{equation} \frac{1}{16\pi G}\int{\rm d}^3xN \alpha(\vec{E}_l) \frac{\sum_{ijk}\epsilon_{ijk}(\vec{B}_i\times \vec{E}_j)\cdot\vec{E}_k}{\sqrt{\frac{1}{6}|\sum_{ijk}\epsilon_{ijk} (\vec{E}_i\times \vec{E}_j)\cdot\vec{E}_k|}} +\cdots \end{equation} with a correction function $\alpha(\vec{E}_l)$ as in Fig.~\ref{alpha}. The Hamiltonian generates time translations as part of the hypersurface-deformation algebra; when the Hamiltonian is modified, the Poisson-bracket algebra is therefore different from the classical one, (\ref{DD})--(\ref{HH}). By consistency conditions of gravity as a gauge theory, quantum corrections deform but do not violate covariance \cite{ConstraintAlgebra}. We have \begin{eqnarray} [S(\vec{w}_1),S(\vec{w}_2)]&=& S((\vec{w}_2\cdot\vec{\nabla})\vec{w}_1- (\vec{w}_1\cdot\vec{\nabla})\vec{w}_2) \label{DDalpha}\\ {} [T(N),S(\vec{w})] &=& T(\vec{w}\cdot\vec{\nabla}N)\\ {} [T(N_1),T(N_2)] &=& S(\alpha^2(N_1\vec{\nabla}N_2-N_2\vec{\nabla}N_1))\,.
\label{HHalpha} \end{eqnarray} The algebra of hypersurface deformations is deformed, and the laws of motion on quantum space-time change. \begin{figure} \includegraphics[height=.2\textheight]{HypDefLinDefMink} \caption{The hypersurface-deformation algebra (\ref{HHalpha}) in the presence of inverse-triad corrections modifies the classical laws of motion. With the constructions in Fig.~\ref{HypDefLinMink}, spatial displacements during a time $\Delta t$ at velocity $v$ differ from the classical value: $\Delta x=\alpha^2 v\Delta t$. Classical geometry can no longer be used to derive this relation, but it can be computed from the commutator (\ref{HHalpha}) with the functions $N_1$ and $N_2$ given in Fig.~\ref{HypDefLinMink}. Since $\alpha>1$ in mildly-modified, semiclassical regimes, the displacement is larger than expected classically and velocities seem to increase compared to the expected $v$. Even the speed of light is larger than classically, but there is still a consistent causal structure \cite{Tensor} thanks to the closed algebra (\ref{HHalpha}). \label{HypDefLinDefMink}} \end{figure} Repeating the construction in Fig.~\ref{HypDefLinMink}, but using the deformed algebra as in Fig.~\ref{HypDefLinDefMink}, the relation of a spatial displacement to the boost velocity is modified: $\Delta x=\alpha^2v\Delta t$. Discrete space speeds up propagation. (According to Fig.~\ref{alpha}, we have $\alpha>1$ unless we are in strong quantum regimes of small $\mu$ in which additional corrections have to be taken into account.) Modified velocities can also be seen in cosmological perturbation equations, in which the dynamics of density perturbations $u$ and gravitational waves $w$ now is \cite{LoopMuk} \begin{eqnarray} -u''+s(\alpha)^2\Delta u +(\tilde{z}''/\tilde{z})u&=&0\,, \label{u}\\ -w''+\alpha^2\Delta w +(\tilde{a}''/\tilde{a})w&=&0\,. 
\label{w} \end{eqnarray} The function $\alpha$ that modifies the speed of gravitational waves is shown in Fig.~\ref{alpha}, and all other quantum-corrected functions, $s(\alpha)$, $\tilde{a}$, and $\tilde{z}$, are known in terms of $\alpha$, although by rather involved equations. As one general consequence, $s(\alpha)\not=\alpha$. We therefore have different speeds for different modes, and obtain corresponding corrections to the tensor-to-scalar ratio as a characteristic signature of deformed space-time. For falsifiability of the theory, it is crucial that $\alpha-1$ becomes large for small lattice spacing (flux values), while large lattice spacing implies discretization effects and strong violations of continuum physics. Two-sided bounds on the discreteness scale are obtained, improving the common dimensional expectations (effects of the size of the average density divided by the Planck density) by several orders of magnitude \cite{InflConsist,InflTest}. Before we move on to the other types of quantum corrections, we discuss the conceptual nature of quantum space-time subject to (\ref{DDalpha})--(\ref{HHalpha}) or related modifications. No effective line element can exist because the metric and the coordinate differentials transform in non-matching ways: the metric --- a phase-space function --- transforms according to the modified gauge transformations generated by (\ref{HHalpha}), but any ${\rm d}x$ according to standard coordinate changes. One could try to find new differential-geometry structures, such as non-commutative \cite{Connes,NonCommST} or fractal ones \cite{Fractional}, that could provide an invariant line element together with metric coefficients subject to modified gauge transformations. But even without a concrete space-time model, quantum space-time is well-defined because all observables can be computed from (\ref{DDalpha})--(\ref{HHalpha}) by canonical methods. (See \cite{CUP} for canonical gravity.)
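The limiting behaviors of a correction function of the type shown in Fig.~\ref{alpha} can be illustrated numerically. The expression below is one representative, illustrative choice (not the precise function of the figure; quantization ambiguities such as the parameter $r$ change the details, but not the qualitative features), with $\mu$ the flux eigenvalue in units of the Planck area:

```python
import math

def alpha(mu):
    """A representative inverse-triad correction function (illustrative form).
    It vanishes at mu = 0, cutting off the classical 1/E divergence, and
    approaches the classical value 1 from above for large mu."""
    return math.sqrt(mu) * (math.sqrt(mu + 1.0) - math.sqrt(abs(mu - 1.0)))

print(alpha(0.0))    # 0: classical divergence is cut off
print(alpha(100.0))  # slightly above 1, approaching the classical limit from above
```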
Quantum space-time is also covariant since the full gauge algebra is realized, albeit in a deformed manner \cite{DeformedRel}. Quantum space-time is just not Riemannian space-time, but this is not to be expected anyway given the presence of discrete structures. \subsection{Quantum back-reaction} Higher-curvature terms in effective quantum gravity \cite{BurgessLivRev,EffectiveGR} modify the classical dynamics, but, unlike quantum-geometry corrections of loop quantum gravity, leave the algebra (\ref{HH}) unchanged \cite{HigherCurvHam}: they give theories in which the space-time structure is still classical. They can be derived by standard methods of low-energy effective actions, in which a derivative expansion expresses non-locality by higher time derivatives. Covariance of the theory, together with a Poincar\'e-invariant vacuum state around which the low-energy effective action expands the quantum dynamics, implies that corrections can only be space-time scalars, that is, curvature invariants of increasing order in the derivative expansion. Loop quantum gravity, as a canonical theory, does not allow easy applications of standard low-energy effective actions, which are often based on path integrals. Moreover, it is not clear whether it has a Poincar\'e-invariant vacuum (or other) state, or whether such a state would be the right basis to expand around for, say, quantum cosmological phenomena. It is therefore necessary to use a procedure which is canonical, and at the same time general enough to encompass different quantum states. Such a procedure \cite{EffAc,Karpacz} can be found by turning Ehrenfest's equations into systematic expansions. Ehrenfest's equations in quantum mechanics express the rates of change of expectation values of basic operators in terms of other, usually more complicated expectation values.
For instance, in the quantum mechanics of a particle of mass $m$ in a potential $V(x)$, we have \begin{equation} \label{x} \frac{{\rm d}\langle\hat{x}\rangle}{{\rm d}t}= \frac{\langle[\hat{x},\hat{H}]\rangle}{i\hbar}= \frac{\langle\hat{p}\rangle}{m} \end{equation} of the simple classical form. The expectation value of the momentum, however, changes to \begin{equation} \label{p} \frac{{\rm d}\langle\hat{p}\rangle}{{\rm d}t}=-\langle V'(\hat{x})\rangle \end{equation} which for a non-quadratic potential is not a simple function of $\langle\hat{x}\rangle$. One can formally expand \begin{eqnarray} \langle V'(\hat{x})\rangle&=& \langle V'(\langle\hat{x}\rangle+ (\hat{x}-\langle\hat{x}\rangle))\rangle\nonumber\\ &=& V'(\langle\hat{x}\rangle)+ \frac{1}{2} V'''(\langle\hat{x}\rangle) (\Delta x)^2+\cdots \label{V} \end{eqnarray} with the fluctuation $\Delta x=\sqrt{\langle(\hat{x}-\langle\hat{x}\rangle)^2\rangle}$ and additional terms that contain higher moments $\langle(\hat{x}-\langle\hat{x}\rangle)^n\rangle$. Since moments of a quantum-mechanical state are degrees of freedom independent of expectation values, the system of equations (\ref{x}) and (\ref{p}) is not closed. However, by the same procedure, computing expectation values of commutators as in (\ref{x}), one can derive new evolution equations for $\Delta x$ and all other moments. In a semiclassical expansion, only finitely many equations need be taken into account at any fixed order, making the system manageable. Moreover, one can combine this expansion with an adiabatic one in which moments are assumed to vary more slowly than expectation values. With these expansions, as shown in \cite{EffAc}, one reproduces the usual low-energy effective action \cite{EffAcQM} for anharmonic systems. Higher orders in the adiabatic expansion provide higher-derivative terms \cite{HigherTime}. The procedure sketched is the right basis to derive effective actions for canonical quantum systems.
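A minimal sketch of this moment expansion, truncated at second order, can be integrated numerically. The following example (illustrative; the quartic potential and all parameter values are hypothetical) evolves $\langle\hat{x}\rangle$, $\langle\hat{p}\rangle$ together with the three second-order moments, so that the back-reaction term of (\ref{V}) feeds into the Ehrenfest equation (\ref{p}):

```python
import numpy as np

# Illustrative anharmonic oscillator V = m w^2 x^2/2 + lam x^4 (hypothetical values)
m, omega, lam, hbar = 1.0, 1.0, 0.1, 1.0

def Vp(x):   return m*omega**2*x + 4*lam*x**3   # V'(x)
def Vpp(x):  return m*omega**2 + 12*lam*x**2    # V''(x)
def Vppp(x): return 24*lam*x                    # V'''(x)

def rhs(s):
    """Effective equations truncated at second moments:
    s = (<x>, <p>, (Dx)^2, D(xp), (Dp)^2)."""
    x, p, dxx, dxp, dpp = s
    return np.array([
        p/m,
        -Vp(x) - 0.5*Vppp(x)*dxx,   # Ehrenfest equation with quantum back-reaction
        2*dxp/m,
        dpp/m - Vpp(x)*dxx,
        -2*Vpp(x)*dxp,
    ])

# Initial data: peaked at <x>=1 with ground-state-like spread.
s = np.array([1.0, 0.0, hbar/(2*m*omega), 0.0, hbar*m*omega/2])
dt, steps = 1e-3, 5000
for _ in range(steps):  # classical fourth-order Runge-Kutta
    k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
    s = s + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

# At this order the uncertainty product (Dx)^2(Dp)^2 - D(xp)^2 = hbar^2/4
# is conserved, which serves as a check of the integration.
print(s[2]*s[4] - s[3]**2)
```

At this truncation the moment determinant is an exact constant of motion of the effective equations, so its numerical drift measures the integration error.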
However, an application to quantum gravity requires additional extensions, most importantly one to include constraints or gauge properties, and to address the problem of time in the absence of an absolute evolution parameter such as the one used in (\ref{x}). This extension, too, is available at the canonical level \cite{EffCons,EffConsRel,EffConsComp} and allows one to solve the problem of time, at least semiclassically \cite{EffTime,EffTimeLong,EffTimeCosmo}. One could apply these techniques to a quantum version of the algebra (\ref{DD})--(\ref{HH}), whose symmetry generators in the gauge theory of gravity are constraints. However, one last ingredient required for a successful implementation is still being developed: a generalization of canonical effective methods from quantum mechanics to quantum field theories. At present, it is therefore difficult to include quantum back-reaction in the algebra (\ref{HH}) to see how it could be corrected. One expects higher time derivatives to arise from these corrections, as has been established for quantum-mechanical systems \cite{HigherTime}, and therefore quantum back-reaction contributes to higher-curvature terms. If the corrections are purely higher curvature, they do not change the hypersurface-deformation algebra. But common low-energy arguments stating that effective quantum back-reaction in gravity is of higher-curvature form assume that there is a Poincar\'e-invariant vacuum state to be expanded around. In non-perturbative quantum gravity, especially one with a discrete spatial structure such as loop quantum gravity, it is unlikely that there is any exactly Poincar\'e-invariant state. Standard arguments for higher-curvature effective actions then break down, and in addition to curvature invariants there may well be other terms that modify the algebra (\ref{HH}), reflecting new space-time structures in the presence of discreteness.
It is possible for quantum back-reaction to modify (\ref{HH}) and compete with the quantum-geometry corrections of loop quantum gravity. Inverse-triad corrections do not refer directly to the density or curvature scale and are therefore safe from competition by quantum back-reaction; they can be discussed separately and bounded observationally. But holonomy corrections always compete with higher-curvature terms. Any holonomy modification of (\ref{HH}) could, in principle, be undone by modifications due to quantum back-reaction. However, a more detailed look at how quantum back-reaction in constrained systems arises shows that whatever modification may be present, it cannot remove all possible modifications by holonomy corrections. The reason for this is the form of the degrees of freedom involved. Holonomy corrections change even the dependence of the Hamiltonian (constraints) on expectation values; they are modifications of the classical dynamics motivated by quantum geometry. Quantum back-reaction gives rise to corrections that depend on moments of a state, as in (\ref{V}). If one computes Poisson brackets of constraints corrected by quantum back-reaction, all correction terms will still depend on moments after taking the Poisson bracket: Moments are based on polynomial expressions in $x$ and $p$ of at least second degree, and the Poisson bracket of two polynomials of degree at least two is always a polynomial of degree at least two. Quantum corrections of (\ref{HH}) due to quantum back-reaction will therefore contain moments, while those due to holonomy corrections do not. Quantum back-reaction can cancel holonomy modifications only for special states in which moments are strictly related to expectation values in a specific way. Generically, these corrections, even though their magnitudes are similar, provide different terms that do not cancel each other.
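The degree-counting argument can be verified symbolically: for polynomials on a single $(x,p)$ phase-space pair, the Poisson bracket lowers the total degree by exactly two, $\deg\{f,g\}=\deg f+\deg g-2$, so two factors of degree at least two always yield a (non-vanishing) bracket of degree at least two. A quick sympy check with two hypothetical moment-like polynomials:

```python
import sympy as sp

x, p = sp.symbols('x p')

def pbracket(f, g):
    """Poisson bracket {f, g} on a single (x, p) pair."""
    return sp.expand(sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x))

# Two hypothetical moment-like polynomials, each of degree >= 2:
f = x**2 * p + 3 * p**2
g = x * p**3

h = pbracket(f, g)
print(h, sp.total_degree(h))  # the bracket is again of degree >= 2
```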
Even in the absence of consistent versions of (\ref{HH}) that include quantum back-reaction, it remains meaningful to study holonomy corrections and use their implications for quantum space-time structure, in the same spirit as already discussed for inverse-triad corrections. \subsection{Holonomy corrections and signature change} Holonomy corrections give rise to a difference equation (\ref{Diff}) for the wave function, and imply strong modifications at high density. However, quantum back-reaction and higher-curvature corrections are both significant in the same regime, and therefore the high-curvature behavior of loop quantum cosmology remains uncertain. It is, however, clear that holonomy corrections imply drastic effects on quantum space-time. The hypersurface-deformation algebra with holonomy corrections is not completely known, but all cases that have been computed so far give the same structure \cite{JR,ThreeDeform,ScalarHol}: \begin{eqnarray} [S(\vec{w}_1),S(\vec{w}_2)]&=& S((\vec{w}_2\cdot\vec{\nabla})\vec{w}_1- (\vec{w}_1\cdot\vec{\nabla})\vec{w}_2) \label{DDbeta}\\ {} [T(N),S(\vec{w})] &=& T(\vec{w}\cdot\vec{\nabla}N)\\ {} [T(N_1),T(N_2)] &=& S(\beta(N_1\vec{\nabla}N_2-N_2\vec{\nabla}N_1)) \label{HHbeta} \end{eqnarray} with a correction function $\beta$ that satisfies $\beta<0$ at high density (the putative ``bounce'' of simple models). With modifications as in (\ref{ModFried}), we have $\beta=\cos(2\ell H)$ in terms of the Hubble parameter. Notice that inhomogeneities, although they must be present to have a non-trivial derivation of the algebra (\ref{HHbeta}), are not the reason for the modification by $\beta$. The reason is holonomy modifications, which already appear for homogeneous background evolution. Inhomogeneity is merely used to show non-trivial space-time effects, as the right-hand side of (\ref{DDbeta})--(\ref{HHbeta}) vanishes identically for homogeneous $N$ and $\vec{w}$. 
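The sign change of $\beta$ can be made quantitative in the simplest models. Assuming the effective relation $\sin^2(\ell H)=\rho/\rho_{\rm c}$ used in holonomy-modified Friedmann dynamics (an assumption of this sketch, with a critical density $\rho_{\rm c}$ set by the discreteness scale), $\beta=\cos(2\ell H)$ becomes $1-2\rho/\rho_{\rm c}$, turning negative already at half the critical density:

```python
import math

ELL = 1.0  # discreteness scale; illustrative units

def beta(H):
    """Deformation function beta = cos(2*l*H) of the corrected bracket (HHbeta)."""
    return math.cos(2 * ELL * H)

def beta_of_density(rho_over_rhoc):
    """Equivalent form 1 - 2*rho/rho_c, assuming sin^2(l*H) = rho/rho_c."""
    return 1.0 - 2.0 * rho_over_rhoc

# cos(2x) = 1 - 2*sin(x)**2 ties the two expressions together:
H = 0.3
assert abs(beta(H) - beta_of_density(math.sin(ELL * H)**2)) < 1e-12

print(beta_of_density(0.0))  # 1.0: Lorentzian signature
print(beta_of_density(0.5))  # 0.0: transition surface, beta = 0
print(beta_of_density(1.0))  # -1.0: Euclidean regime at maximal density
```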
A negative $\beta$, for instance $\beta=-1$ at high density, means that constructions such as those in Fig.~\ref{HypDefLinMink} lead to intransigent motion with $\Delta x=-v\Delta t$, a displacement opposite to the velocity. A geometrical interpretation is now more meaningful: A negative $\beta$ implies that the space-time signature becomes Euclidean \cite{Action}, as can be seen by comparing Fig.~\ref{HypDefLinMink} with the Euclidean version Fig.~\ref{HypDefLin1}. Indeed, if one computes cosmological perturbation equations analogous to (\ref{u}) and (\ref{w}), as done in \cite{ScalarHol,ScalarTensorHol}, the positive $\alpha^2$ is replaced by $\beta$, giving rise to an elliptic differential equation when $\beta<0$. (Holonomy corrections, so far, do not lead to different parameters $\beta$ and $s(\beta)$ for the independent modes of cosmological perturbation equations.) The same effect happens when inhomogeneity is treated non-perturbatively in spherically symmetric models \cite{JR}, where it can be shown to be largely insensitive to quantization ambiguities \cite{Action}. With signature change at high density, the cosmological scenario of loop quantum cosmology is not a bounce, but is reminiscent of the Hartle--Hawking picture, now derived as a consequence of quantum space-time structure in the presence of holonomy corrections. \begin{figure} \includegraphics[height=.2\textheight]{HypDefLin1} \caption{Holonomy corrections at high density imply a sign reversal in the correction function of (\ref{HHbeta}). The constructions of Fig.~\ref{HypDefLinMink} would indicate $\Delta x=-v\Delta t$, with reversed displacement arrows. A better geometrical explanation of the reversed sign and arrows is signature change: Redrawing Fig.~\ref{HypDefLinMink} with right angles according to Euclidean geometry, as shown here, implies the same reversal of spatial-displacement arrows as follows from a reversed sign in (\ref{HHbeta}). 
\label{HypDefLin1}} \end{figure} \section{Multiverse?} In view of these new results, we must revise the multiverse scenario sketched at the beginning of this article, which was based on cosmological bounces. (See also \cite{Silence}.) We observed that inhomogeneous collapse combined with a transition to expansion (a ``bounce'') may lead to causally disconnected regions, expanding within a larger multiverse. Inhomogeneity of this type is extremely hard to control with present-day non-perturbative quantum gravity, but good effective methods are now available to help us understand the relevant space-time structure. Loop quantum gravity, it turns out, implies radical modifications at Planckian densities, with a quantum version of 4-dimensional Euclidean space instead of space-time. In Euclidean space, initial-value problems are ill-posed and there is no propagation of structure from collapse to expansion, as assumed in bounce models. Expanding patches that may result are causally disconnected not just from their surrounding space-time, but also from their collapsing predecessor. Instead of a bounce, loop quantum cosmology, once inhomogeneity is taken into account consistently, gives rise to a non-singular beginning of the expanding Lorentzian phase we can observe. The transition from Euclidean to Lorentzian signature, when $\beta=0$, is a natural place to pose initial conditions, for instance for an inflaton state. These initial values are unaware of what happened in the collapse phase, so that the picture of dense collapsing patches bouncing first is not realized. The new signature-change scenario of loop quantum cosmology shares with bounces the combination of collapse with expansion, but the collapse phase does not deterministically affect the expansion phase. As a consequence, there is no entropy problem because no complete information is transmitted through high densities. And yet, the model is non-singular \cite{NoSing}.
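The loss of deterministic propagation through the Euclidean phase can be illustrated with a single Fourier mode of the corrected wave equations: with $\Delta\to-k^2$, the mode obeys $u''=-\beta k^2 u$, oscillatory for $\beta>0$ but exponentially unstable for $\beta<0$, the hallmark of an elliptic equation posed as an initial-value problem. A minimal numerical sketch (illustrative parameter values):

```python
def evolve_mode(beta, k=5.0, steps=4000, dt=1e-3):
    """Single Fourier mode u'' = -beta*k^2*u of the corrected wave equation:
    oscillatory (hyperbolic) for beta > 0, exponentially growing (elliptic,
    ill-posed as an initial-value problem) for beta < 0."""
    u, v = 1.0, 0.0
    for _ in range(steps):  # semi-implicit Euler, stable for beta > 0
        v += -beta * k**2 * u * dt
        u += v * dt
    return abs(u)

lorentzian = evolve_mode(+1.0)  # stays of order one
euclidean = evolve_mode(-1.0)   # grows roughly like exp(k*t)
print(lorentzian, euclidean)
```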
One may still view the possible collection of expanding universes within one space(-time), combining Euclidean and Lorentzian pieces, as a multiverse. However, any causal contact realized is even weaker than what is usually possible in multiverses, and it may seem more appropriate to talk of separate universes instead of one larger, however loosely connected, structure. Each of these expanding patches has its own beginning when space-time emerges by signature change from 4-dimensional space, giving it a clear status as a universe of its own. \section{Acknowledgements} I am grateful to Mariusz Dabrowski and the organizers of the conference Multicosmofun '12 for their invitation to give a talk on which this article is based. This work was supported in part by NSF grant PHY0748336.
\section{Introduction} \label{sec:intro} The Goldstone boson equivalence theorem \cite{Cornwall:1974km,Lee:1977eg,Chanowitz:1985hj} states that the dynamics of massive gauge bosons is greatly simplified at energies far above the bosons' mass---but below the symmetry breaking scale---when a description based purely in terms of the interactions of the Goldstone scalars becomes applicable. An analogous story takes place in the context of theories of massive gravity. Even though the equivalent of the Higgs mechanism for spin-2 particles is unknown,\footnote{The concept of Higgs-type mechanism is used here in a loose sense, since it is likely that massive gravity does not possess any simple analogue of the traditional Higgs mechanism (see \textit{e.g.}~\cite{Bonifacio:2019mgk}). This of course does not invalidate the formalism of symmetry breaking and non-linear realizations as a tool to investigate theories of massive particles.} describing the spontaneously broken phase can still be achieved by means of the St\"uckelberg formulation. For a massive graviton, this amounts to the introduction of additional vector and scalar fields responsible for restoring diffeomorphism invariance, which is done while remaining agnostic about the physics behind the symmetry breaking. In this setting, at energies much higher than the graviton mass, a picture is recovered in which the relevant degrees of freedom correspond to massless spin-2, spin-1 and spin-0 particles, in what is commonly referred to as the {\it decoupling limit} of massive gravity (see \cite{Hinterbichler:2011tt,deRham:2014zqa} for reviews). Generic interactions for a massive graviton give rise to a pathological degree of freedom besides the expected five polarizations of a massive spin-2 particle (in four dimensions)---the infamous Boulware--Deser ghost \cite{Boulware:1973my}.
In the language of effective field theory (EFT), this translates into a very low strong coupling scale that renders the model uninteresting from a phenomenological point of view. In the decoupling limit, the ghost manifests itself in the fact that the spin-0 equation of motion is of fourth order, producing an additional propagating mode that is unstable as a consequence of the Ostrogradsky theorem. As is well known, however, this issue can be solved by a judicious tuning of the graviton interactions, in what is known as the de Rham--Gabadadze--Tolley (dRGT) theory of massive gravity \cite{deRham:2010ik,deRham:2010kj,Hassan:2011hr,Hassan:2011tf,Hassan:2011ea}. The decoupling limit of this ghost-free theory is particularly enlightening \cite{Ondo:2013wka,Gao:2014ula}, with the spin-0 sector being described by a {\it galileon} theory \cite{Nicolis:2008in},\footnote{The galileon arises as an effective degree of freedom in several other settings besides that of massive gravity; see \cite{Trodden:2011xh,deRham:2012az,Deffayet:2013lga} for reviews.} a property that is at the heart of some of the virtues of the model, such as the Vainshtein mechanism \cite{Vainshtein:1972sx}.\footnote{Not only its virtues but also some of its vices, such as the problem of superluminal fluctuations; see {\it e.g.}~\cite{Padilla:2010tj,deFromont:2013iwa,Garcia-Saenz:2013gya}.} In this paper we address what may be thought of as the opposite story to the decoupling limit: starting from a galileon theory for a single scalar field $\phi$, is it possible to systematically derive massive gravity as an {\it infrared} completion? We will answer this question in the affirmative---with some caveats. 
Our approach is based on the gauging of the symmetries that define the galileon, namely Poincar\'e invariance, a shift symmetry $\phi\to\phi+c$, and a ``galileon shift'' symmetry $\phi\to\phi+b_{\mu}x^{\mu}$.\footnote{This approach differs from that of \cite{Zhou:2011ix,Goon:2012mu} which considered the gauging of some additional symmetries under which the galileons transform linearly. This is also the case of the ``covariant galileon'' \cite{Deffayet:2009wt}, which can be seen to arise from the gauging of the linearly realized Poincar\'e symmetry.} That this is a natural starting point can be motivated by noting that this procedure works correctly in the simple case of a massive spin-1 field. There the decoupling limit theory is given by a scalar field $\pi$ and a shift invariance, with an action that depends on Lorentz invariant combinations of $\partial_{\mu}\pi$ (at lowest order in derivatives). Conversely, regarding the field $\pi$ as a Goldstone boson that non-linearly realizes a shift symmetry, gauging this symmetry amounts to introducing a 1-form gauge field $A_\mu$ while the shift is promoted to a local $U(1)$. The gauge theory is then constructed out of the invariant combination $\partial_{\mu}\pi+A_{\mu}$ according to the derivative expansion, yielding the action of a massive spin-1 field with $\pi$ now playing the role of a St\"uckelberg field. Returning to the galileon, in addition to the constant shift we now also have the galileon shift as a non-linearly realized symmetry. Upon gauging, the latter gives rise to a Lorentz vector-valued 1-form $h^a_{\phantom{a}\mu}$, which can naturally be interpreted as a vielbein for a dynamical spin-2 field. 
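The spin-1 logic above can be made concrete with a short symbolic check: the St\"uckelberg combination $\partial_{\mu}\pi+A_{\mu}$ is exactly inert under the local transformation, so any Lagrangian built from it (in particular the Proca mass term) is automatically gauge invariant. A minimal sketch in 1+1 dimensions with sympy (the field names and setup are illustrative, not taken from any specific model):

```python
import sympy as sp

t, x = sp.symbols('t x')
coords = (t, x)

pi = sp.Function('pi')(t, x)          # Goldstone of the broken shift symmetry
eps = sp.Function('epsilon')(t, x)    # local U(1) gauge parameter
A = [sp.Function('A0')(t, x), sp.Function('A1')(t, x)]

# Stueckelberg combination B_mu = d_mu pi + A_mu
B = [sp.diff(pi, mu) + A[i] for i, mu in enumerate(coords)]

# Gauge transformation: pi -> pi + eps, A_mu -> A_mu - d_mu eps
B_gauged = [sp.diff(pi + eps, mu) + (A[i] - sp.diff(eps, mu))
            for i, mu in enumerate(coords)]

# The combination is exactly inert, so any Lagrangian built from it
# (e.g. the Proca mass term ~ m^2 B_mu B^mu) is gauge invariant.
deltas = [sp.simplify(Bg - Bi) for Bg, Bi in zip(B_gauged, B)]
print(deltas)  # [0, 0]
```

The invariance holds exactly, not just to leading order, which is why the massive spin-1 action can be organized as an unconstrained derivative expansion in $B_{\mu}$.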
Indeed, just like the gauging of spacetime translations produces a massless spin-2 degree of freedom---a procedure that may be used to derive general relativity \cite{Delacretaz:2014oxa}---gauging the galileon shift, which may be better thought of as an ``internal'' translation, gives rise to a spin-2 field that is now generically massive, not being constrained by general coordinate invariance. Together with the gauge field associated to shifts and the original Goldstone scalar, we are thus left with all the necessary ingredients to construct theories of massive gravity in the St\"uckelberg formulation. Now to the caveats. First, the galileon symmetry has a crucial difference relative to the usual constant shift; namely, it is a {\it spacetime} symmetry, in the sense that it does not commute with Poincar\'e transformations. Because of this, and the fact that gauge symmetries must form a subgroup, we will be forced to also gauge (at least part of) the unbroken Poincar\'e group, yielding in principle an {\it additional} local translation. In order to describe a massive spin-2 degree of freedom, we will have to explicitly break this symmetry by fixing the associated vielbein as a non-dynamical background field. Although this is a welcome extra ingredient that will allow us to formulate massive gravity in the general case of an arbitrary reference metric, it has the disadvantage that some of the symmetries must be broken by hand. Second, even though our formalism produces the correct degrees of freedom and symmetries, the interactions that we can construct are not limited to be the ghost-free ones of dRGT massive gravity. This is however an expected outcome given that the structure of the dRGT action is a result of a tuning of operators' coefficients in the derivative expansion, barring the existence of some hidden symmetry. 
Motivated by the search for additional symmetries that could provide a rationale for the particular structure of dRGT theory, we also consider the gauging of the {\it special galileon} \cite{Hinterbichler:2015pqa}. The special galileon theory is a one-parameter subset of the generic galileon that enjoys an extended shift symmetry that is responsible for an enhanced soft behavior of scattering amplitudes \cite{Cheung:2014dqa}, among other interesting properties \cite{Novotny:2016jkh,Bogers:2018zeg,Roest:2019oiw}. In our formalism, the fact that this symmetry is spontaneously broken will allow for the presence of an extra Goldstone mode in addition to the massive spin-2 degrees of freedom that we are interested in. Remarkably, by imposing the absence of this extra field in the action at zeroth order in derivatives we will prove that the resulting potential for the graviton is of the ghost-free type, and that it is precisely the one-parameter subset of the dRGT action that maps onto the special galileon in the decoupling limit. In addition, we show that the two-derivative kinetic terms can also be engineered so as to decouple the extra Goldstone when linearized about flat spacetime. However, the question of whether a decoupling at the fully nonlinear level can be achieved remains open at this stage. We emphasize that our method is completely general and systematic, and thus opens the door to several generalizations, even beyond the context of galileons and massive gravity. It is based on the extension of the coset construction to accommodate spontaneously broken gauge symmetries, which we review in Sec.\ \ref{sec:coset}. The application of the formalism to the galileon symmetry as a way to investigate theories of massive gravity is presented in Sec.\ \ref{sec:galileon}, and in Sec.\ \ref{sec:special galileon} we consider the extension of the symmetries to the case of the special galileon. We conclude with a discussion of our results in Sec.\ \ref{sec:discussion}.
Some technical calculations are given in Appendix \ref{sec:app ghost free pot}. For pedagogical reasons we also present in some detail in Appendix \ref{sec:app special gal} the coset construction of the special galileon algebra. \bigskip \noindent {\it Conventions:} We use the mostly-plus metric signature and work in $D$ spacetime dimensions, although we will specify some results to $D=4$. Coordinates of the spacetime manifold are labeled by greek indices (except in the general discussion of Sec.\ \ref{sec:coset}) and coordinates of the flat tangent space are labeled by latin indices. Symmetrization and antisymmetrization of indices are defined with unit weight. We will use anti-hermitian symmetry generators in order to avoid factors of $i$ in the algebras and in several other expressions. \section{Coset construction for gauge symmetries} \label{sec:coset} The coset construction \cite{Coleman:1969sm,Callan:1969sn} is a general and systematic method to derive the low-energy effective action for a set of Goldstone bosons associated with any given symmetry breaking pattern. This construction can also be used to describe the couplings between Goldstones and any additional field. While the Goldstones themselves transform nonlinearly under the broken symmetries, the coset construction furnishes a set of covariant building blocks which transform under {\it all} symmetries ({\it i.e.}, even the broken ones) according to linear representations of the unbroken group. Thus, by combining these elements in a way that is manifestly invariant under the unbroken symmetries, one can obtain the most general action that is invariant under all the symmetries. Moreover, the coset building blocks are usually organized in increasing order in derivatives, allowing one to produce invariant actions that systematically implement the derivative expansion. We refer the reader to \cite{Weinberg:1996kr} for a textbook treatment.
The generalization of the original technique to accommodate spacetime symmetries ({\it i.e.}~symmetries that do not commute with the Poincar\'e group\footnote{One may of course consider background spacetimes other than Minkowski, in which case the Poincar\'e group would be replaced by the relevant isometry group. For simplicity, in this work we restrict our attention to Poincar\'e invariant theories.}) was developed in \cite{Volkov:1973vd, ogievetsky:1974ab}; some recent works that provide good self-contained reviews are \cite{Goon:2012dy,Nicolis:2013sga}. Here we limit ourselves to outlining only the main elements of the coset construction, in order to establish our notation, and to review the method for including gauge symmetries in this set-up, following~\cite{Delacretaz:2014oxa}.\footnote{For a different but equivalent approach to gauge symmetries within the coset construction, see~\cite{Ivanov:1976pg,Goon:2014ika,Goon:2014paa}.} Let us consider a theory that is invariant under a symmetry group $G$, but with a ground state that spontaneously breaks $G$ down to a subgroup $H$. For simplicity, we will assume that spacetime translations are unbroken,\footnote{It would actually be sufficient for the discussion that follows to consider a static, homogeneous ground state. This weaker requirement amounts to demanding that there exists a set of generators $P_a$ that commute with each other and, if the system is also isotropic, have the correct transformation properties under rotations.
Notice that the $P_a$'s could be a linear combination of the original space-time translations and internal generators---as is for instance the case in many condensed matter systems~\cite{Nicolis:2015sra}.} so that the algebra of $H$ is spanned by the generators $P_a$ of translations and by a set of generators $T_A$ that include all the other unbroken symmetries, both internal and spacetime, while the remaining generators of the algebra of $G$, the spontaneously broken ones, are denoted by $Z_\alpha$. We introduce the coset representative $\Omega(x)$, which is a $G$-valued field with no components along the unbroken generators $T_A$, \begin{equation} \Omega(x)\equiv e^{x^aP_a}e^{\pi^{\alpha}(x)Z_{\alpha}}\,, \end{equation} where the $\pi^\alpha$'s are the Goldstone fields associated with the broken generators $Z_\alpha$'s. From this, one defines the Maurer--Cartan form $\Omega^{-1}\mathrm{d}\Omega$, which is an algebra-valued field and as such may be expanded as a linear combination of all the generators: \begin{equation} \Omega^{-1}\mathrm{d}\Omega=e^aP_a+\omega^{\alpha}Z_{\alpha}+\omega^AT_A\,. \end{equation} The 1-forms $e^a$ are interpreted as vielbeins due to the way they transform under Lorentz and general coordinate transformations; in particular, one can define a metric $g_{\mu\nu}=\eta_{ab}e^a_{\phantom{a}\mu}e^b_{\phantom{b}\nu}$ with the expected transformation properties. The 1-forms $\omega^\alpha$ transform covariantly under all the symmetries, as announced, and are the basic building blocks out of which invariant actions may be constructed, simply by contracting indices with $H$-invariant tensors. It is in fact often more convenient to work with the Goldstone covariant derivatives $\nabla_a\pi^\alpha$, defined by \begin{equation} \nabla_a\pi^\alpha\equiv (e^{-1})_a^{\phantom{a}\mu}(\omega^{\alpha})_{\mu}\,. 
\end{equation} Lastly, the 1-forms $\omega^A$ transform as connections and are necessary in order to couple the Goldstones to matter fields, or to write higher-order covariant derivatives of the Goldstone fields themselves. As is well known, in the presence of symmetry breaking patterns that involve spacetime symmetries, the standard counting of gapless modes dictated by Goldstone's theorem does not apply---see {\it e.g.}~\cite{Low:2001bw}. Instead, one finds in general that some of the Goldstones are redundant, in the sense that they are not necessary to achieve a non-trivial nonlinear realization of the symmetries under consideration. This usually happens either because these modes are not independent, or because they are gapped and may be integrated out upon restricting oneself to energy scales below the gap.\footnote{See however~\cite{Rothstein:2017twg} for an interesting exception, namely the {\it dynamical} Higgs mechanism.} These redundant Goldstones can usually be eliminated directly at the level of the coset construction, by imposing the so-called ``inverse Higgs constraints'' \cite{Ivanov:1975zq}: whenever the algebra contains a commutator of the form $\left[P_a,Z^{\beta}\right]=\lambda_{a\alpha}^{\phantom{a\alpha}\beta}Z^{\alpha}+({\rm unbroken})$, then the constraint \begin{equation} \label{eq:general ihc} \lambda_{a\alpha}^{\phantom{a\alpha}\beta}\nabla^a\pi^{\alpha}=0\,, \end{equation} may be imposed and solved for the Goldstone $\pi^\beta$, which is redundant.
The solution can then be substituted back into the components of the Maurer--Cartan form without affecting their transformation properties.\footnote{We refer the reader to \cite{Nicolis:2013sga,Brauner:2014aha,Klein:2017npd} for more complete discussions related to redundant Goldstones and inverse Higgs constraints, as well as their physical interpretation.} The discussion so far has been restricted to global symmetries, but the coset construction can be easily generalized to the case where some of the symmetries are gauged, be they broken or unbroken by the ground state \cite{Delacretaz:2014oxa}. Let $G_g\subseteq G$ be the gauged subgroup, and denote its generators by $V_I$. The inclusion of gauge fields $A^I(x)$ is done by modifying the Maurer--Cartan form as \begin{equation} \Theta\equiv \Omega^{-1}\left(\mathrm{d}+A^IV_I\right)\Omega\,. \end{equation} One can easily verify that $\Theta$ is invariant under local $G_g$ transformations, \begin{equation} \Omega\to g(x)\Omega\,,\qquad A^IV_I\to g(x)A^IV_I g^{-1}(x)+g(x)\mathrm{d} g^{-1}(x)\,, \end{equation} with $g(x)\in G_g$. As usual, the gauge invariance related to a spontaneously broken gauge symmetry can be used to completely eliminate the corresponding Goldstone field, by working in the so-called unitary gauge. To see this explicitly in our set-up, consider a subgroup $G_g'\subseteq G_g$ of gauged generators $V_I$ which are also broken, with associated Goldstone fields $\pi^I$. To leading order in the Goldstones we have \begin{equation} \omega^I=A^I+\mathrm{d}\pi^I+f^{I}_{\phantom{I}JK}A^J\pi^K+O(\pi^2)\,, \end{equation} with $f^{I}_{\phantom{I}JK}$ the structure constants of the algebra of $G_g'$. 
On the other hand, under an infinitesimal gauge transformation $g(x)=e^{\epsilon^I(x)V_I}\simeq 1+\epsilon^IV_I$, the gauge field changes as $A^I\to A^I-\mathrm{d}\epsilon^I-f^{I}_{\phantom{I}JK}A^J\epsilon^K$, so that the choice of gauge $\epsilon^I=\pi^I$ precisely eliminates the Goldstones $\pi^I$ from the Maurer--Cartan form. Although obviously more economical, the language of unitary gauge is not necessarily the most useful one, since depending on the context it may obscure the correct power counting of operators in the derivative expansion.\footnote{This is completely analogous to the treatment of massive field theories in the St\"uckelberg formulation, see {\it e.g.} \cite{deRham:2018qqo,Boulanger:2018dau}.} For this reason, it is in general good practice to carry out the coset construction keeping all the Goldstones and gauge fields, and only fix unitary gauge after building the action, if so desired. The Maurer--Cartan 1-form provides all the necessary ingredients to build kinetic terms for the Goldstones. On the other hand, kinetic terms for the gauge fields are furnished by the components of \begin{equation}\begin{aligned} \Theta_2&\equiv \Omega^{-1}\left(\mathrm{d}+A^IV_I\right)^2\Omega = \Omega^{-1}\left(2\mathrm{d} A^IV_I+A^I\wedge A^J[V_I,V_J]\right)\Omega\,, \end{aligned}\end{equation} which we will refer to as the Maurer--Cartan 2-form. Because the gauged generators form an algebra, with some structure constants $f^{I}_{\phantom{I}JK}$, we can write $\Theta_2$ as \begin{equation} \Theta_2=\Omega^{-1}\left(2\mathrm{d} A^I+f^{I}_{\phantom{I}JK}A^J\wedge A^K\right)V_I\Omega\,, \end{equation} and we recognize inside the parentheses the usual gauge field strength, which in the spontaneously broken case will in general receive corrections proportional to the Goldstones.
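The covariance of the field strength appearing in $\Theta_2$ can be illustrated numerically. The sketch below uses hypothetical $\mathfrak{su}(2)$-valued field profiles in two dimensions, finite differences for the derivatives, and the conventional normalization $F=\mathrm{d} A+A\wedge A$ (rather than the factor-of-2 convention above); it checks that $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+[A_{\mu},A_{\nu}]$ transforms as $F\to gFg^{-1}$:

```python
import numpy as np
from scipy.linalg import expm

# Anti-hermitian su(2) basis, matching the paper's anti-hermitian convention
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T = [0.5j * sx, 0.5j * sy, 0.5j * sz]

def A(x):
    """Hypothetical smooth su(2)-valued gauge field in 2 dimensions."""
    return [np.sin(x[0]) * T[0] + x[1] * T[1],
            np.cos(x[0] * x[1]) * T[2] + 0.3 * x[0] * T[0]]

def g(x):
    """Hypothetical local gauge transformation (unitary, so g^{-1} = g^dagger)."""
    return expm(x[0] * T[0] + np.sin(x[1]) * T[2])

def A_gauged(x, h=1e-6):
    """A -> g A g^{-1} + g d g^{-1}, with d g^{-1} by central differences."""
    gx = g(x)
    out = []
    for mu in range(2):
        dx = np.zeros(2); dx[mu] = h
        dginv = (g(x + dx).conj().T - g(x - dx).conj().T) / (2 * h)
        out.append(gx @ A(x)[mu] @ gx.conj().T + gx @ dginv)
    return out

def F01(Afun, x, h=1e-5):
    """F_{01} = d_0 A_1 - d_1 A_0 + [A_0, A_1]."""
    e0, e1 = np.array([h, 0.0]), np.array([0.0, h])
    d0A1 = (Afun(x + e0)[1] - Afun(x - e0)[1]) / (2 * h)
    d1A0 = (Afun(x + e1)[0] - Afun(x - e1)[0]) / (2 * h)
    A0, A1 = Afun(x)
    return d0A1 - d1A0 + A0 @ A1 - A1 @ A0

x = np.array([0.7, -0.4])
residual = np.max(np.abs(F01(A_gauged, x) - g(x) @ F01(A, x) @ g(x).conj().T))
print(residual)  # small, limited only by finite-difference accuracy
```

In the spontaneously broken case the same covariance holds, with the Goldstone-dependent corrections supplied by the conjugation with $\Omega$.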
\section{Gauged galileons and massive gravity} \label{sec:galileon} In this section we carry out the coset construction of the gauged galileon algebra and investigate its relation to theories of massive gravity. \subsection{Coset Construction} In addition to the generators of translations $P_a$ and Lorentz transformations $J_{ab}$, the galileon algebra contains a constant shift generator $C$ and some internal translations---or ``galileon shifts''---generated by $Q_a$. On a scalar field $\pi$, the latter correspond to symmetry transformations $\delta_C \pi=1$ and $\delta_{Q_a}\pi=x_a$. The non-vanishing commutators of the algebra are given by \begin{equation}\begin{aligned} \label{eq:gal algebra} [P_a,Q_b]&=\eta_{ab}C\,,\\ [J_{ab},Q_c]&=\eta_{bc}Q_a-\eta_{ac}Q_b\,,\\ [J_{ab},P_c]&= \eta_{bc} P_a - \eta_{ac} P_b\,,\\ [J_{ab},J_{cd}]&=\eta_{bc} J_{ad} - \eta_{ac} J_{bd} - \eta_{bd} J_{ac} + \eta_{ad} J_{bc}\,, \end{aligned}\end{equation} with the last two forming the usual Poincar\'e subalgebra. The galileon theory is defined around a state that breaks $C$ and $Q_a$, leaving the Poincar\'e subgroup unbroken \cite{Goon:2012dy}, so that the coset representative we consider is \begin{equation} \Omega=e^{x^aP_a}e^{\pi C}e^{\xi^bQ_b}\,, \end{equation} with Goldstones $\pi(x)$ and $\xi^a(x)$. Because we are interested in obtaining a spontaneously broken gauge theory, and because the gauged generators must form a subalgebra, we conclude that there are two possible ways to include gauge symmetries in this context: either by gauging $P_a$, $Q_a$ and $C$, or by gauging the whole algebra. However, the first option, in which $J_{ab}$ remains ungauged, would not give all the necessary ingredients that we expect in a sensible theory of gravity. Indeed, in the case of pure Poincar\'e it is necessary to gauge both $P_a$ and $J_{ab}$ in order to derive General Relativity \cite{Delacretaz:2014oxa}.
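Before proceeding, the commutators \eqref{eq:gal algebra} can be verified in an explicit finite-dimensional matrix representation acting on the column vector $(x^a,\pi,1)$, in which $P_a$ shifts $x^a$, $Q_a$ shifts $\pi$ by $x_a$, and $C$ shifts $\pi$ by a constant. A numerical sketch in $D=4$ (this matrix realization and its overall signs are choices made here so as to reproduce \eqref{eq:gal algebra}; they are not conventions taken from the text):

```python
import numpy as np

D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus signature
N = D + 2                              # representation acts on (x^a, pi, 1)

def P(a):   # translations: x^a -> x^a + eps^a
    M = np.zeros((N, N)); M[a, D + 1] = -1.0; return M

def Q(a):   # galileon shifts: pi -> pi + b^a x_a
    M = np.zeros((N, N)); M[D, :D] = eta[a]; return M

def C():    # constant shift: pi -> pi + c
    M = np.zeros((N, N)); M[D, D + 1] = 1.0; return M

def J(a, b):  # Lorentz: (J_ab)^c_d = delta^c_a eta_bd - delta^c_b eta_ad
    M = np.zeros((N, N))
    M[a, :D] += eta[b]; M[b, :D] -= eta[a]
    return M

def comm(X, Y):
    return X @ Y - Y @ X

ok = True
for a in range(D):
    for b in range(D):
        ok &= np.allclose(comm(P(a), Q(b)), eta[a, b] * C())
        ok &= np.allclose(comm(P(a), P(b)), 0) and np.allclose(comm(Q(a), Q(b)), 0)
        ok &= np.allclose(comm(P(a), C()), 0) and np.allclose(comm(Q(a), C()), 0)
        for c in range(D):
            ok &= np.allclose(comm(J(a, b), Q(c)), eta[b, c]*Q(a) - eta[a, c]*Q(b))
            ok &= np.allclose(comm(J(a, b), P(c)), eta[b, c]*P(a) - eta[a, c]*P(b))
            for d in range(D):
                ok &= np.allclose(comm(J(a, b), J(c, d)),
                                  eta[b, c]*J(a, d) - eta[a, c]*J(b, d)
                                  - eta[b, d]*J(a, c) + eta[a, d]*J(b, c))
print(ok)  # True
```

The check confirms in particular that $C$ is central and that $Q_a$ transforms as a Lorentz vector, the two facts used repeatedly below.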
We therefore focus on the case where all the symmetries are gauged, and introduce 1-form gauge fields $\tilde{e}^a$, ${\omega}^{ab}$, $\tilde{A}$ and ${h}^a$ respectively for translations, Lorentz transformations, shifts and galileon shifts. The Maurer--Cartan form is then \begin{equation}\begin{aligned} \Theta&\equiv\Omega^{-1}\left(\mathrm{d}+\tilde{e}^aP_a+\frac{1}{2}\,{\omega}^{ab}J_{ab}+{h}^aQ_a+\tilde{A}C\right)\Omega\\ &\equiv e^aP_a+\frac{1}{2}\,\omega^{ab}J_{ab}+\omega_Q^aQ_a+\omega_C C\,, \end{aligned}\end{equation} where the covariant 1-forms are given by \begin{equation}\begin{aligned} \label{eq:gal 1-forms} e^a&=\delta^a+\tilde{e}^a+{\omega}^{ab}x_b\,,\\ \omega_Q^a&=\mathrm{d}\xi^a+\omega^{ab}\xi_b+h^a\equiv e^b_{\phantom{b}\mu}\nabla_b\xi^a\,\mathrm{d} x^{\mu}\,,\\ \omega_C&=\mathrm{d}\pi+A+e^a\xi_a\equiv e^b_{\phantom{b}\mu}\nabla_b\pi\,\mathrm{d} x^{\mu}\,,\\ \end{aligned}\end{equation} with $\delta^a\equiv \delta^a_{\mu}\mathrm{d} x^{\mu}$ and $A\equiv \tilde{A}-{h}^ax_a$. By performing simple field redefinitions, we can now consider $e^a$, $\omega^{ab}$, $h^a$ and $A$ as the relevant variables for our gauge fields, and recall that $e^a$ and $\omega^{ab}$ define, respectively, the vielbein and spin connection of the spacetime manifold. It is convenient at this stage to eliminate the redundant Goldstones by identifying the inverse Higgs constraints. The algebra commutator $[P_a,Q_b]=\eta_{ab}C$ allows us to set (\textit{cf.}~\eqref{eq:general ihc}) \begin{equation} \nabla_a\pi=0\qquad \Rightarrow\qquad \xi_a=-\partial_a\pi-A_a\,, \end{equation} where as usual we employed the (inverse) vielbein to trade Lorentz for spacetime indices, {\it e.g.}~$A_a\equiv(e^{-1})_a^{\phantom{a}\mu}A_{\mu}$. Notice that, after solving the constraint, the choice of unitary gauge will allow us to eliminate both $\pi$ and $A$ from the action. 
The last ingredients we need are the gauge field strengths, which are necessary to build kinetic terms and derivative interactions for the gauge fields. The Maurer--Cartan 2-form is \begin{equation}\begin{aligned} \Theta_2&=\Omega^{-1}\left(\mathrm{d}+\tilde{e}^aP_a+\frac{1}{2}\,{\omega}^{ab}J_{ab}+{h}^aQ_a+\tilde{A}C\right)^2\Omega\\ &=T^aP_a+\frac{1}{2}\,R^{ab}J_{ab}+T_Q^aQ_a+T_CC\,, \end{aligned}\end{equation} and we find the components \begin{equation}\begin{aligned} \label{eq:gal 2-forms} T^a&=\mathrm{d} e^a-e^b\wedge\omega^a_{\phantom{a}b}\,,\\ T_Q^a&=\mathrm{d}\omega_Q^a-\omega_Q^b\wedge\omega^a_{\phantom{a}b}\,,\\ T_C&=\mathrm{d}\omega_C+e_a\wedge\omega_Q^a\,,\\ R^{ab}&=\mathrm{d}\omega^{ab}+\omega^{ac}\wedge\omega_c^{\phantom{c}b}\,. \end{aligned}\end{equation} The last term is the familiar Riemann 2-form for the spin connection. The 2-form associated with $C$ does not carry any new information, since it reduces to $T_C=e_a\wedge\omega_Q^a$ upon solving the inverse Higgs constraint, which we already knew to be a covariant object. Lastly, the 2-forms associated with $P_a$ and $Q_a$ correspond to torsion fields for $e^a$ and $\omega_Q^a$. \subsection{Background Vielbein and Torsion-free Condition} The 1-forms $e^a$ and $\omega_Q^a$ in \eqref{eq:gal 1-forms} and the 2-forms $T^a$, $T_Q^a$ and $R^{ab}$ in \eqref{eq:gal 2-forms} give all the building blocks required to construct invariant actions at lowest order in derivatives. At this stage we apply some physical input in view of the type of theories we seek, {\it i.e.}~theories with a massive spin-2 field. Although both $e^a$ and $h^a$---or equivalently $\omega_Q^a$---contain a rank-2 symmetric tensor (upon pulling back to the spacetime manifold), physically we cannot regard both as dynamical because with a single Riemann 2-form we can only write down one Einstein--Hilbert kinetic term. 
Recall that in general relativity and massive gravity this is done by expressing the spin connection $\omega^{ab}$ in terms of the vielbein after imposing that the torsion vanish. Here we have two torsions and one spin connection, so the best we can do is to set a particular combination of $T^a$ and $T_Q^a$ to zero, but not both. To identify this combination, the first step is to define the vielbein that is going to describe the spin-2 degree of freedom in our theory. A natural choice is \begin{equation} q^a\equiv e^a+\omega_Q^a\,, \end{equation} and we will interpret $e^a$ to be a {\it non-dynamical} background vielbein, and $\omega_Q^a$ as the fluctuation field about this background. That this choice makes sense can be seen by noting that, in unitary gauge where the Goldstones are set to zero, the ``vacuum'' configuration in which the gauge fields vanish corresponds to \begin{equation} e^a=\delta^a\,,\qquad \omega_Q^a=0\,, \end{equation} so that the above interpretation is consistent. Of course, since $e^a$ was originally the gauge field of local translations, by treating it as an externally prescribed field we are giving up diffeomorphism invariance, which is again consistent with our goal of constructing models of massive gravity. Another choice we will make on physical grounds is to eliminate the spin connection via some torsion-free condition. Having in mind that the kinetic interactions of $q^a$ should be given by the Einstein--Hilbert term,\footnote{Ghost-free massive spin-2 kinetic interactions that are not of the Einstein--Hilbert type have been ruled out in \cite{deRham:2013tfa,deRham:2015rxa}. This result is reminiscent of what happens for massive spin-1 fields, where the only ghost-free kinetic term is the gauge-invariant one. 
\label{footnote kinetic terms}} we impose the condition \begin{equation} T^a+T_Q^a=\mathrm{d} q^a-q^b\wedge\omega^a_{\phantom{a}b}=0\,, \end{equation} which has the standard solution for the spin connection, \begin{equation} \label{eq:spin connection sol} \omega^{ab}_{\phantom{ab}\mu}=(q^{-1})^{\rho[a}\partial_{\mu}q_{\rho}^{\phantom{\rho}b]}-(q^{-1})^{\rho[a}\partial_{\rho}q_{\mu}^{\phantom{\mu}b]}-(q^{-1})^{\rho[a}(q^{-1})^{b]\sigma}q_{\mu c}\partial_{\rho}q_{\sigma}^{\phantom{\sigma}c}\,. \end{equation} In summary, we have eliminated $e^a$ and $\omega^{ab}$ as physical variables by choosing to give up diffeomorphism invariance and by imposing the condition of vanishing torsion. In addition, and as mentioned before, the Goldstone $\pi$ and the gauge field $A$---which may be better thought of as a Goldstone or St\"uckelberg field after the inverse Higgs constraint is imposed---are pure gauge degrees of freedom and may be fully eliminated by choosing to work in unitary gauge. This leaves us with $h^a$ as the only propagating field, and with local Lorentz transformations as the only relevant symmetry. These are precisely the defining properties of a theory of massive gravity. \subsection{Effective Action} The most general action can now be straightforwardly constructed through a standard derivative expansion. However, it is well known that generic interactions for a massive spin-2 particle lead to an additional ghostly degree of freedom \cite{Boulware:1973my}, which can only be remedied by a special tuning of operator coefficients.\footnote{Of course, in an effective field theory the presence of a ghost simply signals an incorrect identification of the UV cutoff.
The requirement of ``having no ghosts'' is thus a phenomenological one, motivated by the need to enlarge the range of applicability of the theory.} This leads to the dRGT theory of massive gravity \cite{deRham:2010ik,deRham:2010kj}:\footnote{The prefactor has been chosen so that the Einstein--Hilbert term yields the standard normalization when switching to the metric formulation: $S_{\rm EH}=\frac{M_P^{D-2}}{2}\int d^Dx\sqrt{-g}\,R(g)$.} \begin{equation}\begin{aligned} \label{eq:drgt action} S_{\mathrm{dRGT}}&=\frac{M_P^{D-2}}{6(D-2)!}\Bigg[\int \epsilon_{a_1a_2\cdots a_D}R^{a_1a_2}(q)\wedge q^{a_3}\wedge\cdots\wedge q^{a_D}\\ &\quad +m^2\int\sum_{n=0}^{D-1}\frac{c_n}{n!(D-n)!}\,{\epsilon}_{a_1a_2\cdots a_D}e^{a_1}\wedge\cdots\wedge e^{a_n}\wedge q^{a_{n+1}}\wedge\cdots\wedge q^{a_D}\Bigg]\,, \end{aligned}\end{equation} with $M_P$ the Planck mass and $m$ the graviton mass. The Riemann form is now a function of $q^a$, so that the first term is the standard Einstein--Hilbert action in second-order form. Among the $D$ dimensionless coefficients $c_n$, one combination of them is fixed after $m$ is chosen as the physical graviton mass, and another combination can be set to zero by assuming the absence of a tadpole term. This leaves $D-2$ independent parameters in addition to $M_P$ and $m$. It is worth repeating that the action \eqref{eq:drgt action} is a result of a tuning of EFT coefficients in our formalism and does not follow from a derivative expansion. We have shown that the coset construction of the gauged galileon group provides all the necessary ingredients to build theories of massive gravity, but ultimately no additional symmetries are present that can be used to restrict the set of allowed interactions. This outcome was expected, as it is known that the form of the dRGT action is not protected against quantum corrections \cite{deRham:2013qqa}, suggesting that it is not protected by any symmetry.
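As an aside, the component form of the wedge-product potential terms in \eqref{eq:drgt action} is governed by the elementary symmetric polynomials of the matrix $e^{-1}q$, a standard fact that can be confirmed by brute force: contracting with two epsilon tensors gives $\epsilon_{a_1\cdots a_D}\epsilon^{\mu_1\cdots\mu_D}\,e^{a_1}_{\mu_1}\cdots e^{a_n}_{\mu_n}q^{a_{n+1}}_{\mu_{n+1}}\cdots q^{a_D}_{\mu_D}=n!(D-n)!\det(e)\,e_{D-n}(e^{-1}q)$. A numerical sketch in $D=3$ with random (hypothetical) vielbeins:

```python
import math
from itertools import permutations

import numpy as np

def sign(p):
    """Parity of a permutation given as a tuple."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def wedge_coeff(e, q, n):
    """eps_{a1..aD} eps^{m1..mD} e^{a1}_{m1}..e^{an}_{mn} q^{a_{n+1}}_{m_{n+1}}..q^{aD}_{mD}."""
    D = e.shape[0]
    perms = list(permutations(range(D)))
    total = 0.0
    for pa in perms:
        for pm in perms:
            term = sign(pa) * sign(pm)
            for i in range(D):
                M = e if i < n else q
                term *= M[pa[i], pm[i]]
            total += term
    return total

rng = np.random.default_rng(0)
D = 3
e = np.eye(D) + 0.2 * rng.standard_normal((D, D))  # hypothetical background vielbein
q = np.eye(D) + 0.2 * rng.standard_normal((D, D))  # hypothetical dynamical vielbein

X = np.linalg.inv(e) @ q
charpoly = np.poly(X)  # coefficients of det(lam*1 - X), highest power first

ok = True
for n in range(D + 1):
    k = D - n
    ek = (-1) ** k * np.real(charpoly[k])  # elementary symmetric polynomial e_k(X)
    lhs = wedge_coeff(e, q, n)
    rhs = math.factorial(n) * math.factorial(D - n) * np.linalg.det(e) * ek
    ok &= np.isclose(lhs, rhs)
print(ok)  # True
```

In particular $n=0$ and $n=D$ reproduce the cosmological-constant-like terms $\det(q)$ and $\det(e)$, consistent with the tadpole discussion above.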
An analogous conclusion was reached in~\cite{Goon:2014paa}, where a similar line of reasoning---although in a different set-up---was used to construct massive gravity as a spontaneously broken gauge theory. \section{Gauged special galileons} \label{sec:special galileon} The special galileon algebra extends the galileon algebra \eqref{eq:gal algebra} with a generator $S_{ab}$ that is symmetric and traceless in its Lorentz indices \cite{Hinterbichler:2015pqa}. It is realized on a scalar field $\pi$ as an extended shift symmetry, \begin{equation} \label{special Galileon symmetry} \delta_{S_{ab}}\pi=x_ax_b-\frac{1}{D}\,\eta_{ab}x^2-\alpha^2\left[\partial_a\pi\partial_b\pi-\frac{1}{D}\,\eta_{ab}(\partial\pi)^2\right]\,, \end{equation} where $\alpha$ is an arbitrary constant. In what follows, we will set $\alpha=1$ without loss of generality by a rescaling of generators. We will now repeat the analysis in the previous section for a galileon field endowed with the additional symmetry \eqref{special Galileon symmetry}. \subsection{Coset Construction} In addition to \eqref{eq:gal algebra}, the special galileon algebra contains the following non-trivial commutators: \begin{equation}\begin{aligned} \left[P_a,S_{bc}\right]&=\eta_{ab}Q_c+\eta_{ac}Q_b-\frac{2}{D}\,\eta_{bc}Q_a\,,\\ [Q_a,S_{bc}]&=\eta_{ab}P_c+\eta_{ac}P_b-\frac{2}{D}\,\eta_{bc}P_a\,,\\ [S_{ab},S_{cd}]&=\eta_{bc} J_{ad}+\eta_{ac} J_{bd} + \eta_{bd} J_{ac} + \eta_{ad} J_{bc}\,,\\ [J_{ab},S_{cd}]&=\eta_{bc}S_{ad}-\eta_{ac}S_{bd}+\eta_{bd}S_{ac}-\eta_{ad}S_{bc}\,. \end{aligned}\end{equation} The special galileon theory is defined by a state that breaks $C$, $Q_a$ and $S_{ab}$ down to the Poincar\'e subgroup. Our goal is to work out the coset construction for the gauged special galileon. Goldstone bosons are introduced via the coset parametrization \begin{equation} \Omega=e^{x^aP_a}e^{\pi C}e^{\xi^bQ_b}e^{\frac{1}{2}\sigma^{cd}S_{cd}}\,, \end{equation} where $\sigma^{ab}$ is symmetric and traceless.
The algebra contains three subalgebras: Poincar\'e, the subalgebra formed by $P_a$, $Q_a$ and $C$, and the galileon algebra \eqref{eq:gal algebra}. When it comes to choosing which symmetries to gauge, we ignore as before the options of gauging only Poincar\'e (because we are interested in broken gauge symmetries), or only $P_a$, $Q_a$ and $C$ (because of the absence of a spin connection). This leaves us with the possibilities of gauging either the whole group or the galileon subgroup. We make the simplest choice of gauging only the galileon subgroup, which as we will see will allow us to recover a subset of the dRGT potentials. Nevertheless, the option of gauging the extended symmetry $S_{ab}$ is interesting and we will comment on it in Sec.\ \ref{sec:discussion}. The Maurer--Cartan form is therefore given by \begin{equation}\begin{aligned} \Theta&=\Omega^{-1}\left(\mathrm{d}+\tilde{e}^aP_a+\frac{1}{2}\,{\omega}^{ab}J_{ab}+{h}^aQ_a+\tilde{A}C\right)\Omega\\ &=E^aP_a+\frac{1}{2}\,\Omega^{ab}J_{ab}+\Omega_Q^aQ_a+\Omega_CC+\frac{1}{2}\,\Omega_S^{ab}S_{ab}\,, \end{aligned}\end{equation} where $E^a$ and $\Omega^{ab}$ (not to be confused with the coset representative) are the vielbein and spin connection, and $\Omega^a_Q$, $\Omega_C$ and $\Omega_S^{ab}$ are the covariant 1-forms associated to the broken generators.
In terms of the redefined gauge fields $e^a\equiv \delta^a+\tilde{e}^a+{\omega}^{ab}x_b$ and $A\equiv\tilde{A}-{h}^ax_a$ introduced in the previous section, we find\footnote{The calculation of $\Omega^{ab}$ and $\Omega_S^{ab}$ is non-trivial; the interested reader can find more details in Appendix~\ref{sec:app special gal}.} \begin{equation} \label{MC 1 forms special galileon}\begin{aligned} E^a&=(\cosh\sigma)^{a}_{\phantom{a}b}e^b+(\sinh\sigma)^{a}_{\phantom{a}b}\omega_Q^b\,,\\ \Omega^{ab}&=\omega^{ab}+(\Lambda^{-1})^{ab}_{cd}\left(\cosh\Lambda-\mathbf{1}\right)^{cd}_{ef}\left(\mathrm{d}\sigma^{ef}+2\omega^{g(e}\sigma^{f)}_{\phantom{f)}g}\right) \,,\\ \Omega_Q^a&=(\sinh\sigma)^{a}_{\phantom{a}b}e^b+(\cosh\sigma)^{a}_{\phantom{a}b}\omega_Q^b\equiv E^b_{\phantom{b}\mu}\nabla_b\xi^a\,\mathrm{d} x^{\mu}\,,\\ \Omega_C&=\mathrm{d}\pi+A+e^a\xi_a\equiv E^a_{\phantom{a}\mu}\nabla_a\pi\,\mathrm{d} x^{\mu}\,,\\ \Omega_S^{ab}&=(\Lambda^{-1})^{ab}_{cd}\left(\sinh\Lambda\right)^{cd}_{ef}\left(\mathrm{d}\sigma^{ef}+2\omega^{g(e}\sigma^{f)}_{\phantom{f)}g}\right) \equiv E^c_{\phantom{c}\mu}\nabla_c\sigma^{ab}\,\mathrm{d} x^{\mu}\,. \end{aligned}\end{equation} where we defined $\Lambda^{ab}_{cd}\equiv \delta^a_c\sigma^b_{\phantom{b}d}-\delta^b_d\sigma^a_{\phantom{a}c}$. Notice that $E^a, \Omega^{a b}$ and $\Omega^a_Q$ respectively reduce to the quantities $e^a, \omega^{a b}$ and $\omega^a_Q$ introduced in the previous section when $\sigma^{ab} = 0$. The inverse Higgs constraint related to the galileon algebra commutator $[P_a,Q_b]=\eta_{ab}C$ is exactly the same as in Sec.\ \ref{sec:galileon}, {\it i.e.} \begin{equation} \nabla_a\pi=0\qquad \Rightarrow\qquad \xi_a=-\partial_a\pi-A_a\,. \end{equation} In the special galileon case there is also another constraint related to the commutator $[P_a,S_{bc}]=\eta_{ab}Q_c+\eta_{ac}Q_b-\frac{2}{D}\,Q_a\eta_{bc}$. 
This implies that we can set to zero the traceless symmetric part of the covariant derivative of $\xi^a$, \begin{equation} \label{eq:second ihc} \nabla_a\xi_b+\nabla_b\xi_a-\frac{2}{D}\,\eta_{ab}[\nabla\xi]=0\,, \end{equation} where $[\ldots]$ stands for the trace. This equation can in principle be solved to express $\sigma_{ab}$ in terms of derivatives of the other Goldstones and the gauge fields. Let us now define the rank-2 tensor $K_a^{\phantom{a}b}\equiv (e^{-1})_a^{\phantom{a}\mu}(\omega_Q^b)_\mu$. This is essentially the covariant derivative of the Goldstones $\xi^a$ in the absence of the field $\sigma^{a b}$---see Eq.\ \eqref{eq:gal 1-forms}. Switching for a moment to a matrix notation with contractions of Lorentz indices denoted by a dot, we can write $E=e\cdot(\cosh\sigma+K\cdot\sinh\sigma)$ and $\omega_Q=e\cdot(\sinh\sigma+K\cdot\cosh\sigma)$, and therefore \begin{equation} \nabla\xi=(\cosh\sigma+K\cdot\sinh\sigma)^{-1}\cdot (\sinh\sigma+K\cdot\cosh\sigma)\,. \end{equation} The inverse Higgs constraint \eqref{eq:second ihc} now implies a relation between $\sigma$ and $K$, which means that a solution exists only if $[\sigma,K]=0$. This observation allows us to rewrite $\nabla\xi$ as \begin{equation} \label{eq:nabla xi matrix} \nabla\xi=\tanh(\sigma+\tanh^{-1}K)\,. \end{equation} At this point we will make the additional assumption that $\nabla\xi$ is a symmetric matrix, {\it i.e.}~$\nabla_a\xi_b=\nabla_b\xi_a$. We will expand on this assumption below, where we will show that it is related to the so-called ``symmetric vielbein condition''~\cite{Deser:1974cy} commonly used in the context of massive gravity. Using this symmetry condition in \eqref{eq:second ihc} we find that $\nabla\xi=\frac{\mathbf{1}}{D}\,[\nabla\xi]$, which we substitute in \eqref{eq:nabla xi matrix} to obtain (reinstating indices) \begin{equation} \label{eq:second ihc solution} \sigma^{ab}=-(\tanh^{-1}K)^{ab}+\frac{1}{D}\,\eta^{ab}[\tanh^{-1}K]\,.
\end{equation} This is the desired solution of the second inverse Higgs constraint, which defines $\sigma^{ab}$ as a function of the Goldstone $\pi$ and the gauge fields. \subsection{Effective Action: potential terms} It is worth pausing at this stage to consider the ingredients we have so far, and the type of invariant operators we can build out of them. The relevant fields are again $e^a$ and $\omega_Q^a$ (which reduces to $h^a$ in unitary gauge), the spin connection (which due to a torsion-free condition will ultimately not be independent), and now possibly also $\sigma^{ab}$ (depending on whether or not we choose to apply the second inverse Higgs constraint to express $\sigma^{ab}$ in terms of $e^a$ and $\omega_Q^a$ as in Eq.\ \eqref{eq:second ihc solution}). Keeping in mind our goal of constructing theories of massive gravity, we will remove the diffeomorphism symmetry associated to the gauging of translations by setting $e^a$ to be a non-dynamical background field, and interpret $\omega_Q^a$ as the spin-2 fluctuation about this background---precisely as we did in the standard galileon case. The difference is that now $e^a$ and $\omega_Q^a$ do not transform covariantly under the special galileon group, but instead the correct building blocks are now $E^a$ and $\Omega_Q^a$ defined in Eq.\ \eqref{MC 1 forms special galileon}. In particular, the dRGT type potentials we are interested in, namely the terms of the form \begin{equation} \label{eq:generic drgt pot} S_{\rm pot}=\int\sum_{n=0}^D\frac{b_n}{n!(D-n)!}\,{\epsilon}_{a_1a_2\cdots a_D}\Omega_Q^{a_1}\wedge\cdots\wedge\Omega_Q^{a_n}\wedge E^{a_{n+1}}\wedge\cdots\wedge E^{a_D}\,, \end{equation} will generically involve interactions between $\omega_Q^a$ and the Goldstone $\sigma^{ab}$, as well as self-interactions for the latter. 
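As an aside, the matrix identity \eqref{eq:nabla xi matrix} is nothing but the addition formula for $\tanh$, valid once $[\sigma,K]=0$. A minimal numerical sanity check follows (the matrix size, scales and random seed are arbitrary illustrative choices; commutativity is enforced by building both matrices as polynomials in a single symmetric seed):

```python
import numpy as np

def sym_fun(M, f):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

rng = np.random.default_rng(0)
D = 4
A = 0.1 * rng.normal(size=(D, D))
A = A + A.T                                   # symmetric seed matrix
sigma = A - (np.trace(A) / D) * np.eye(D)     # traceless part, stands in for sigma^{ab}
K = 0.2 * A @ A                               # commutes with sigma (both polynomials in A)

# Left-hand side: nabla xi = (cosh s + K sinh s)^{-1} (sinh s + K cosh s)
lhs = np.linalg.inv(sym_fun(sigma, np.cosh) + K @ sym_fun(sigma, np.sinh)) \
      @ (sym_fun(sigma, np.sinh) + K @ sym_fun(sigma, np.cosh))
# Right-hand side: tanh(sigma + arctanh K), using [sigma, K] = 0
rhs = sym_fun(sigma + sym_fun(K, np.arctanh), np.tanh)
assert np.allclose(lhs, rhs)
```

Eigenvalue by eigenvalue, the check reduces to the scalar identity $(\cosh s + k\sinh s)^{-1}(\sinh s + k\cosh s)=\tanh(s+\tanh^{-1}k)$.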
If we choose to replace $\sigma^{ab}$ in terms of $\omega_Q^a$ by solving the inverse Higgs constraint, Eq.\ \eqref{eq:second ihc solution}, the resulting potential will clearly not be of the ghost-free type. If instead we decide not to apply the constraint and leave $\sigma^{ab}$ as an independent degree of freedom, which will generically be gapped, the action will now yield the correct self-interactions for $\omega_Q^a$, but also some extra operators involving the Goldstone $\sigma^{ab}$, and it is far from clear whether they will be free of pathologies. Remarkably, however, we can bypass these issues---at least as far as the potential is concerned---by observing that it is possible to choose the coefficients $b_n$ in \eqref{eq:generic drgt pot} in such a way that $\sigma^{ab}$ drops out completely from the expression in \eqref{eq:generic drgt pot}. In Appendix \ref{sec:app ghost free pot}, we show in fact that for the specific choice \begin{equation} b_n=\begin{cases} \beta_1 & \mbox{if $n$ is even}\,,\\ \beta_2 & \mbox{if $n$ is odd}\,, \end{cases} \end{equation} with $\beta_1$ and $\beta_2$ arbitrary constants, the action in \eqref{eq:generic drgt pot} reduces to \begin{equation}\begin{aligned} \label{eq:special drgt pot} S_{\rm pot}&=\int \,{\epsilon}_{a_1a_2\cdots a_D} \Bigg[ \sum_{n\,{\mathrm{even}}}\frac{\beta_1}{n! (D-n)!}\, e^{a_1}\wedge\cdots\wedge e^{a_n}\wedge \omega_Q^{a_{n+1}}\wedge\cdots\wedge \omega_Q^{a_D} \\ &\quad + \sum_{n\,{\mathrm{odd}}} \frac{\beta_2}{n! (D-n)!}\, e^{a_1}\wedge\cdots\wedge e^{a_n}\wedge \omega_Q^{a_{n+1}}\wedge\cdots\wedge \omega_Q^{a_D} \Bigg]\,. \end{aligned}\end{equation} This action has the desired ghost-free property, being a specific member of the dRGT family of potentials. However, the $n=1$ term in the above sum gives rise to a tadpole, and therefore we will set $\beta_2=0$ in what follows. The parameter $\beta_1$ instead is related to the graviton mass as explained in Sec.\ \ref{sec:galileon}. 
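It is instructive to see concretely why $\sigma^{ab}$ can drop out. One convenient repackaging (not necessarily the route taken in Appendix \ref{sec:app ghost free pot}) is to note that, for the alternating coefficients above, the wedge sum \eqref{eq:generic drgt pot} collapses to a combination of $\det(E+\Omega_Q)$ and $\det(E-\Omega_Q)$, while in the matrix notation introduced earlier $E\pm\Omega_Q=e\cdot(\mathbf{1}\pm K)\,e^{\pm\sigma}$ (using $\cosh\sigma\pm\sinh\sigma=e^{\pm\sigma}$, with no commutativity required), whose determinant is $\sigma$-independent because ${\rm tr}\,\sigma=0$. A quick numerical sketch, with all matrices random illustrative choices and Euclidean index contractions:

```python
import numpy as np

def sym_fun(M, f):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

rng = np.random.default_rng(1)
D = 4
e = np.eye(D) + 0.1 * rng.normal(size=(D, D))   # background vielbein as a matrix
K = 0.1 * rng.normal(size=(D, D))               # K^a_b; need not commute with sigma here
S = rng.normal(size=(D, D)); S = S + S.T
sigma = S - (np.trace(S) / D) * np.eye(D)       # traceless symmetric Goldstone

E = e @ (sym_fun(sigma, np.cosh) + K @ sym_fun(sigma, np.sinh))
W = e @ (sym_fun(sigma, np.sinh) + K @ sym_fun(sigma, np.cosh))  # Omega_Q as a matrix

# E +- W = e (1 +- K) exp(+-sigma), and det(exp(+-sigma)) = 1 since tr(sigma) = 0,
# so both determinants are independent of sigma.
assert np.isclose(np.linalg.det(E + W), np.linalg.det(e @ (np.eye(D) + K)))
assert np.isclose(np.linalg.det(E - W), np.linalg.det(e @ (np.eye(D) - K)))
```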
Thus, in this theory, the graviton mass is the unique parameter controlling the non-derivative spin-2 interactions. In relation to our original motivation of constructing IR completions of the special galileon, one can check that dRGT massive gravity with the ``special'' potential \eqref{eq:special drgt pot} indeed yields the special galileon theory in its decoupling limit.\footnote{Albeit with some additional interactions with helicity-2 modes.} Incidentally, although we have been able to engineer an action in which the Goldstone $\sigma^{ab}$ plays no role, it is worth going back to the assumption we made when solving the second inverse Higgs constraint, namely the symmetry condition $\nabla_a\xi_b=\nabla_b\xi_a$. Remembering that $\nabla_b\xi^a=(E^{-1})_b^{\phantom{b}\mu}(\Omega_Q^a)_{\mu}$, we observe that the symmetry of $\nabla_a\xi_b$ is equivalent to the so-called symmetric vielbein condition that is typically used in the context of ghost-free massive gravity, and which allows one to connect the vielbein and metric formulations. It has been shown that, for the dRGT type potentials \eqref{eq:generic drgt pot}, this condition is not an extra assumption but actually follows from the equations of motion \cite{Hinterbichler:2012cn}.\footnote{It was later pointed out in \cite{Deffayet:2012zc} that this is not strictly true in general, as there exist non-perturbative field configurations for which the argument may fail depending on the parameters.} Precisely the same reasoning can be used in our case if we decide to focus on ghost-free potentials. \subsection{Effective Action: kinetic terms} Lastly, we turn our attention to the kinetic terms which are generated by the coset construction.
The Maurer--Cartan 2-form is given by \begin{equation}\begin{aligned} \Theta_2&=\Omega^{-1}\left(\mathrm{d}+\tilde{e}^aP_a+\frac{1}{2}\,{\omega}^{ab}J_{ab}+{h}^aQ_a+\tilde{A}C \right)^2\Omega\\ &\equiv{\cal T}^aP_a+\frac{1}{2}\,{\cal R}^{ab}J_{ab}+{\cal T}_Q^aQ_a+{\cal T}_CC + \frac{1}{2}\,{\cal T}_S^{ab} S_{ab}\,, \end{aligned}\end{equation} and the various components are \begin{equation} \label{MC 2 forms special galileon}\begin{aligned} {\cal T}^a&=\mathrm{d} E^a-E^b \wedge\Omega^a_{\phantom{a} b} + \Omega_Q^b \wedge{\Omega_S}^a_{\phantom{a} b} \,,\\ {\cal R}^{ab}&=\mathrm{d}\Omega^{ab}+\Omega^a_{\phantom{a} c}\wedge\Omega^{cb}+{\Omega_S}^a_{\phantom{a} c}\wedge{\Omega_{S}}^{cb}\,,\\ {\cal T}_Q^a&=\mathrm{d}\Omega_Q^a-\Omega_Q^b \wedge\Omega^a_{\phantom{a} b} + E^b \wedge{\Omega_S}^a_{\phantom{a} b} \,,\\ {\cal T}_C&=\mathrm{d}\Omega_C+E_a \wedge\Omega_Q^a\,,\\ {\cal T}_S^{ab} &= \mathrm{d}\Omega_S^{ab}+\Omega^a_{\phantom{a} c}\wedge{\Omega_S}^{cb}+\Omega^b_{\phantom{a} c}\wedge{\Omega_S}^{ca}\, . \end{aligned}\end{equation} The tensors ${\cal T}^a$ and ${\cal R}^{ab}$ generalize the spacetime torsion and Riemann 2-forms to include additional contributions from the Goldstone $\sigma^{ab}$. The ``$Q$-torsion'' ${\cal T}^a_Q$ likewise involves extra terms proportional to $\sigma^{ab}$, while ${\cal T}_C$ is again redundant upon solving the first inverse Higgs constraint. At this stage we are faced again with the issue of constructing a non-pathological action for the spin-2 field $\omega_Q^a$ (or, in unitary gauge, $h^a$). When considering non-derivative operators in the previous subsection, we had a set of coefficients that could be tuned so as to yield a result that was independent of $\sigma^{ab}$, which was desirable in view of the complications related to this additional field. Now, the potential in Eq.\ \eqref{eq:special drgt pot} implies the absence of any mass terms for $\sigma^{ab}$ (which, being a redundant Goldstone, should generically be present). 
Therefore, the application of the inverse Higgs constraint, albeit still valid from a symmetry viewpoint, is no longer equivalent to integrating out $\sigma^{ab}$ at the level of the action. For this reason, we proceed by considering $\sigma^{ab}$ as an independent degree of freedom, in addition to $\omega^a_Q$. Focusing on the kinetic self-interactions of the spin-2 field $\omega_Q^a$, a natural guess for the kinetic term would be \begin{equation} \label{eq:special gal eh term} S_{\rm EH} = \frac{M_P^{D-2}}{6(D-2)!}\int {\epsilon}_{a_1\,a_2 \cdots a_D} {\cal R}^{a_1 a_2} \wedge q^{a_3} \wedge \cdots \wedge q^{a_D}\, , \end{equation} where $q^a\equiv E^a+\Omega_Q^a$. We would like this to reduce to the Einstein--Hilbert action for the vielbein $e^a+\omega_Q^a$ when $\sigma^{ab}$ is turned off, which implies that the correct torsion-free condition we should impose is \begin{equation} {\cal T}^a+{\cal T}^a_Q=\mathrm{d} q^a - q^b \wedge \Omega^a_{\phantom{a} b} + q^b \wedge {\Omega_S}^a_{\phantom{a} b} \equiv 0 \, . \end{equation} This can be solved for the spin connection as \begin{equation} \label{eq:special spin connection sol} \Omega^{ab}_{\phantom{ab}\mu}=\bar{\Omega}^{ab}_{\phantom{ab}\mu}(q) + 2 q_{\mu c} (q^{-1}) ^{\rho [a} \Omega_{S\phantom{ b] c}\rho}^{\phantom{S} b] c} \, , \end{equation} where $\bar{\Omega}^{ab}_{\phantom{ab}\mu}(q)$ is the usual solution for the spin connection as a function of $q^a$ ({\it cf.}\ \eqref{eq:spin connection sol}). While $S_{\rm EH}$ indeed generates the correct kinetic term for $\omega_Q^a$, it also induces self and mutual derivative interactions for the Goldstone $\sigma^{ab}$ (notice from Eq.\ \eqref{MC 1 forms special galileon} that $\Omega_S^{ab}\simeq\mathrm{d}\sigma^{ab}$ at lowest order in $\sigma^{ab}$). 
In fact, in the absence of an analogue of the Einstein--Hilbert term for $\sigma^{ab}$ (notice that the ``curvature'' ${\cal T}_S^{ab}$ in Eq.\ \eqref{MC 2 forms special galileon} is symmetric), we are led to consider the most general invariant two-derivative terms built out of the relevant covariant objects, namely the components of the Maurer--Cartan 2-form, the covariant derivative $\nabla_a\sigma^{bc}$, as well as the second covariant derivative of the Goldstone $\xi^a$. We should stress that, in restricting our attention to those terms that are invariant under all the symmetries, despite the fact that treating $e^a$ as non-dynamical breaks some of them, we are simply following the same strategy that yields ghost-free kinetic terms for massive spin-1 and spin-2 fields---{\it cf.}~comment in footnote \ref{footnote kinetic terms}. This strategy yields in principle a slew of possible combinations, and we are faced with the problem of either (1) finding a specific choice (or set of choices) of operator coefficients such that the field $\sigma^{ab}$ can be completely removed from the kinetic terms, as we did for the potential terms; or (2) in the absence of such a choice, determining whether an action exists for both the spin-2 field $\omega_Q^a$ and the Goldstone $\sigma^{ab}$ that is free of pathologies. Although a full analysis is beyond the scope of this paper, a positive initial observation is that one can achieve the decoupling of $\sigma^{ab}$ at linear level when expanding the fields in perturbations about flat space. To prove this, it is sufficient to note that the Goldstone covariant derivatives already furnish, at linear order about flat space, all the possible combinations of kinetic operators for $\sigma^{ab}$ and the metric fluctuation $\gamma^a$, the latter being defined by $q^a=\delta^a+\frac{1}{2}\,\gamma^a+O(\gamma^2)$.
Notice that, by expanding the original definition $q^a=E^a+\Omega_Q^a$, we have that $\gamma^a=2\left(\sigma^a_{\phantom{a}b}\delta^b+(\omega^a_Q)^{(1)}\right)$, where $(\omega^a_Q)^{(1)}$ denotes the first-order piece in $\omega^a_Q$. In terms of $\gamma^a$ we have \begin{equation} \nabla_b\xi^a=(E^{-1})_b^{\phantom{b}\mu}(\Omega^a_Q)_{\mu}= \frac{1}{2}\,\gamma^a_{\phantom{a}b}+\cdots\,, \end{equation} where $\gamma^a_{\phantom{a}b}\equiv (\gamma^a)_{\mu}\delta^{\mu}_b$, and the ellipses stand for non-linear terms (which involve both $\gamma^a$ and $\sigma^{ab}$). Thus, the Goldstone covariant derivatives \begin{equation} \nabla_a\nabla_b\xi_c=\frac{1}{2}\,\partial_a\gamma_{bc}+\cdots\,,\qquad \nabla_a\sigma^{bc}=\partial_a\sigma_{bc}+\cdots\,, \end{equation} are sufficient to construct {\it any} set of kinetic operators in the action at quadratic order. In particular, we are free to engineer an action in which the Goldstone $\sigma^{ab}$ disappears from the kinetic terms at the quadratic level (while keeping the standard Fierz--Pauli kinetic terms for $\gamma_{ab}$). Although this simple observation is encouraging, we have no reason to expect this decoupling to persist beyond linear order. Therefore, a more relevant question is whether the presence of $\sigma^{ab}$ necessarily leads to pathologies at the non-linear level. Notice that, if we choose to keep $\sigma^{ab}$ as a dynamical field, writing a healthy quadratic kinetic term for it is indeed possible, {\it i.e.} \begin{equation} S^{(2)}_{{\rm kin},\sigma}\propto \int d^Dx\bigg(-\frac{1}{2}\,\partial_a\sigma^{bc}\partial^a\sigma_{bc}+\partial_a\sigma^{bc}\partial_b\sigma^a_{\phantom{a}c}\bigg)\,. \end{equation} We have checked that this is the unique two-derivative quadratic action without ghosts for a traceless symmetric field. Unsurprisingly, this is nothing but the usual kinetic term for a spin-2 field, except that the trace of $\sigma$ is zero from the outset.
This latter fact however does not affect the possible choices of kinetic operators that are free of ghosts. To summarize, we have investigated how theories of massive gravity can be obtained from the gauging of the special galileon. The additional symmetry of the special galileon group, which by definition is spontaneously broken, gives rise to an extra Goldstone boson $\sigma^{ab}$. Interestingly, by restricting the spectrum of the gauge theory to contain only a massive spin-2 particle at zeroth order in derivatives, we showed that the symmetries single out a unique action (at least among the ghost-free class of potentials). We have also shown that the decoupling of the additional Goldstone can be achieved at the two-derivative level when expanding the action to quadratic order in perturbations about flat space. It remains to be seen whether such decoupling can be attained at the fully non-linear level, as was possible for the potentials, or at the very least whether it is possible to build an interacting action that is free of pathologies. \section{Discussion} \label{sec:discussion} In this paper we have discussed the gauging of non-linearly realized symmetries as a method to systematically construct spontaneously broken gauge theories. More specifically, we have addressed the question of how to derive a gauge theory in the Higgs phase from the knowledge of the Goldstone theory that it corresponds to in the decoupling limit. We have put forth the coset construction, along with its extension to include gauge symmetries, as a very general and systematic method to tackle the problem. Focusing on the particular case of the galileon shift symmetry, we have argued that its gauging may be used to investigate infrared completions of galileon theory, the goal being to better understand how massive gravity can be formulated when viewed as a gauge theory for the spontaneously broken diffeomorphism invariance. 
Our results followed from a number of assumptions made on physical grounds, and it may be interesting to modify or relax some of them. First there is the question of which symmetries one wishes to gauge, and we remarked that the choices we made were not unique. This requires some physical input, since formal consistency only demands that the gauge symmetries form a subgroup. For the standard galileon we were naturally led to consider the gauging of the whole group: we insisted on gauging the galileon symmetries because of our interest in broken gauge theories, and we insisted on gauging Lorentz because of our wish to have healthy spin-2 kinetic terms. For the special galileon there is, in addition, the option of gauging the extended shift symmetry generated by $S_{ab}$. Although we chose not to do so, it would certainly be intriguing to consider this possibility. In this setting the additional Goldstone $\sigma^{ab}$ associated to the breaking of the extra symmetry would not by itself pose a problem, being now a pure gauge degree of freedom that would disappear in unitary gauge. However, the issue would then be to understand the possible interactions induced by the extra gauge field, a non-trivial task given that this field---a tensor-valued 1-form---would a priori contain a spin-3 degree of freedom upon expanding in perturbations around flat space. A second assumption needed in the implementation of the coset construction concerns the inverse Higgs constraints. Typically, as we have explained, the fields one removes via such constraints are not restricted to be gapless by the symmetries. They are therefore massive in the absence of fine tuning, and one is allowed to integrate them out in order to focus on the gapless modes. This was the situation in our treatment of the standard galileon, where the Goldstone associated to the broken galileon shift could be removed---at low energies---without loss of generality.
On the other hand, in the case of the special galileon, our insisting on having a potential with the dRGT structure led us to a theory where the Goldstone $\sigma^{ab}$ was {\it gapless}, despite the fact that the symmetries allowed for mass terms and the option of removing $\sigma^{ab}$ via an inverse Higgs constraint. Although it is likely that our choice is the only one that leads to a ghost-free potential for the graviton, we cannot discard the logical possibility that other potentials may exist for a {\it gapped} Goldstone $\sigma^{ab}$ which, after integrating out the latter, could produce a ghost-free mass term for the graviton. Lastly, the derivation of our models relied on additional physical assumptions that are independent of the symmetry considerations dictated by the coset construction. One such assumption was the torsion-free condition that we chose to impose in both the standard and special galileon cases. Even though it was natural for us to avoid a dynamical torsion based on the spectrum we were interested in, it could be intriguing to eventually consider a Palatini-type formulation of our construction with the spin connection left unconstrained. Another assumption, based again on our wish to build a theory of massive gravity, was to set the vielbein associated to diffeomorphisms as a non-dynamical reference field. It would of course be natural to remove this constraint should one be interested in models of bi-gravity, but this would require some extra symmetries in order to produce the ingredients needed for building an additional spin-2 kinetic term. Given the generality of the method, it is clear that this work can be extended in several directions, some of which we hope to address in future investigations. For instance, our analysis may be extended to the case of multi-galileons \cite{Padilla:2010de,Padilla:2010ir,Hinterbichler:2010xn} with the goal of exploring theories of multi-gravity \cite{Hinterbichler:2012cn,Hassan:2012wt,Hassan:2018mcw}.
The extension to multiple scalars would also allow for the inclusion of additional internal symmetries \cite{Andrews:2010km,Allys:2016hfl}, thus serving as a starting point to investigate theories of massive spin-2 fields with extra symmetries of this type. Going beyond the standard galileon, it would be interesting to consider other related theories such as the conformal galileon and the DBI galileon \cite{deRham:2010eu}. Also intriguing would be the application of our techniques to Goldstone theories for which little intuition is available regarding the nature of the corresponding gauge theories, such as the $p$-form and tensor galileons \cite{Deffayet:2010zh,Deffayet:2016von,Deffayet:2017eqq,Chatzistavrakidis:2016dnj}. Lastly, relativistic versions of the galileon group and its symmetry breaking pattern---for example $ISO(3,1)\times ISO(3,1)\to ISO(3,1)$---could be helpful to address the problem of deriving ghost-free kinetic terms for multiple spin-2 fields in our formalism. \begin{acknowledgments} We would like to thank Luca Delacr\'etaz for important collaboration during the early stages of this project, and Alberto Nicolis for continuous encouragement during the long gestation period of this work. We are also grateful to Tomas Brauner, Mariana Carillo Gonz\'alez, Kurt Hinterbichler, Austin Joyce and Rachel A.\ Rosen for useful comments and discussions. SGS is supported by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013 Grant Agreement no.\ 307934, NIRG project); he also wishes to thank Carnegie Mellon University for generous hospitality. JK was supported by the Kwanjeong Educational Foundation. \end{acknowledgments}
\section*{Introduction} The Riemann zeta function $\zeta(s)$ is one of the most important and fascinating functions in mathematics. When the complex number $s$ has $\Re(s) > 1$, we have $$ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = \prod_{\text{primes} \; p} \Bigl(1 - \frac{1}{p^s} \Bigr)^{-1} , $$ and already from these equivalent expressions we see some of the key themes that dominate the study of $\zeta(s)$. Firstly, since $\zeta(s)$ is given by a {\em Dirichlet series} over all natural numbers $n$, without any difficult number theoretic coefficients, we can hope to use general analytic methods to obtain information about $\zeta(s)$. For example, one could hope to approximate $\sum_{n=1}^{\infty} \frac{1}{n^s}$ or its partial sums by an integral. In this way, one can extend the definition of $\zeta(s)$ to all $\Re(s) > 0$, and with more work to the entire complex plane. It turns out that this analytic continuation of $\zeta(s)$ is meromorphic, with only a simple pole at $s=1$. Furthermore, when $\Re(s) > 0$ the zeta function is the sum $\sum_{n \leq X} \frac{1}{n^s}$ plus some easily understood other terms, for suitable $X = X(s)$. Secondly, since $\zeta(s)$ is given by an {\em Euler product} over all primes $p$, we can hope to use results about the zeta function to draw conclusions about the distribution of primes. One can also go in the reverse direction, and hope to put in information about the primes to deduce things about the zeta function (from which, perhaps, we will later deduce other number theoretic information that we didn't have before). In this article we will discuss various results of this nature. Thirdly, note that the Euler product is absolutely convergent when $\Re(s) > 1$, and none of the individual factors $(1 - \frac{1}{p^s})^{-1}$ vanish, so we have $\zeta(s) \neq 0$ when $\Re(s) > 1$. 
It is well known that the zeros of the zeta function encode number theoretic information, and here one can glimpse why---if one knows that $\zeta(s)$ doesn't vanish in a certain part of the complex plane, this suggests that something like the Euler product formula persists there, which implies something about the regularity of the primes. Again there is a kind of duality, since not only does the non-vanishing of zeta imply results about primes and products, but our methods for proving non-vanishing tend to involve establishing the influence of some kind of product formula in the region under study. \medskip The most interesting subset of the complex plane on which to study $\zeta(s)$ is the {\em critical line} $\Re(s) = 1/2$. Thus the {\em Riemann Hypothesis} (RH), possibly the most famous unsolved problem in pure mathematics, conjectures that if $\zeta(s)=0$ then either $s=-2,-4,-6,\dots$ (the so-called trivial zeros), or else $\Re(s) = 1/2$. This is known to be equivalent to the estimate $$ \Bigl|\#\{p \leq x : p \; \text{prime}\} - \int_{2}^{x} \frac{dt}{\log t}\Bigr| \ll x^{1/2} \log x $$ for the counting function of the primes (RH holds if and only if this estimate holds for all large $x$). For any fixed $\sigma > 1/2$, it is believed (and to some extent known) that the values taken by $\zeta(\sigma+it)$ have a rather simple behaviour: for example, $\zeta(\sigma+it)$ can only attain unusually large values as a result of ``conspiracies'' in the behaviour of $p^{-it}$ for small primes $p$. As we shall discuss extensively later, the situation on the critical line is very different. All the appearances of 1/2 here reflect the fact that in a random process, the typical size of fluctuations is like the square root of the variance. The extent to which $\zeta(s)$ behaves like various random objects, especially random objects related to the Euler product, is another key theme that we are going to explore.
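To make the quantities in the prime-counting estimate concrete, one can compute both sides at a (very) modest height. Needless to say, checking the inequality at a single $x$ carries no information about RH; the value $x=10^6$ and the integration grid below are arbitrary illustrative choices:

```python
import numpy as np

x = 10**6

# pi(x) by a simple sieve of Eratosthenes
is_prime = np.ones(x + 1, dtype=bool)
is_prime[:2] = False
for n in range(2, int(x**0.5) + 1):
    if is_prime[n]:
        is_prime[n*n::n] = False
pi_x = int(is_prime.sum())            # pi(10^6) = 78498

# int_2^x dt/log t by the trapezoid rule on an equally spaced grid
t = np.linspace(2.0, float(x), 2_000_001)
f = 1.0 / np.log(t)
li_x = (f.sum() - 0.5 * (f[0] + f[-1])) * (t[1] - t[0])

assert abs(pi_x - li_x) < np.sqrt(x) * np.log(x)
```

At $x=10^6$ the left-hand side is about $1.3\times 10^{2}$, comfortably below $x^{1/2}\log x \approx 1.4\times 10^{4}$.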
\medskip Our goal in this paper is to survey some recent work on the behaviour of $\zeta(1/2+it)$ in short intervals of $t$. In particular, we shall describe a conjecture of Fyodorov, Hiary and Keating~\cite{fyodhiarykeat, fyodkeat} about the size of $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ as $t$ varies, and we shall explain some results that have been proved in the direction of this conjecture by Najnudel~\cite{najnudel} and by Arguin--Belius--Bourgade--Radziwi\l\l--Soundararajan~\cite{abbrs}. This paper is organised as follows. Firstly we shall set out some Basic Principles that will guide our thinking and arguments about the zeta function. Then, to illustrate the use of these principles and to compare with the later case of $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$, we shall describe what is known about the value distribution of $\zeta(1/2+it)$ (without any maximum) and what is known about the ``long range'' maximum $\max_{T \leq t \leq 2T} |\zeta(1/2 + it)|$. Next we shall introduce and motivate the conjecture of Fyodorov--Hiary--Keating, primarily using our Basic Principles rather than the random matrix theory/statistical mechanics arguments originally considered by those authors (although we shall mention those briefly). And then we shall discuss the statements and proofs of the results of Najnudel and of Arguin--Belius--Bourgade--Radziwi\l\l--Soundararajan, again seeing how these correspond to very nice implementations of those principles. \medskip {\em Notation.} We shall use various notation, of a fairly standard kind, to aid in the description of estimates and limiting processes. We write $f(x) = O(g(x))$, or $f(x) \ll g(x)$, to mean that there exists some constant $C$ such that $|f(x)| \leq Cg(x)$ for all $x$ of interest (the range of $x$ should always either be clear, or specified explicitly). 
We write $f(x) = \Theta(g(x))$, or $f(x) \asymp g(x)$, to mean that there exist constants $0 < c < C$ such that $cg(x) \leq f(x) \leq Cg(x)$ for all $x$ of interest. At some points we will give rough or heuristic descriptions of arguments, and in these we will use notation such as $\lesssim, \approx$. We do not try to assign a precise meaning to these symbols--- they mean that one quantity is smaller than another, or roughly the same size as another, up to terms that turn out to be unimportant in the rigorous implementation of the arguments. Much of this paper will involve the discussion of probabilistic issues. We will write $\mathbb{P}$ to denote a probability measure, and $\mathbb{E}$ to denote expectation (i.e. averaging, or more formally integration) with respect to the measure $\mathbb{P}$. \section{Basic principles} One can build up a great deal of understanding of the zeta function beginning from the following idea, which we first state in a heuristic way. \begin{principle}\label{bprin1} As $t$ varies, the numbers $(p^{-it})_{p \; \text{prime}}$ ``behave like'' a sequence of independent random variables, each distributed uniformly on the complex unit circle. \end{principle} It is clear that for any given $p$, as $t$ varies over an interval the quantity $p^{-it} = e^{-it\log p}$ rotates around the complex unit circle at ``speed'' $\log p$, behaving like a uniform random variable. Thus the interesting assertion in Principle~\ref{bprin1} is that we should think of the~$p^{-it}$ as being independent. That is because the primes are multiplicatively independent, or equivalently the speeds $\log p$ are linearly independent over the rationals. Both of these statements are just ways of expressing the uniqueness of prime factorisation. 
So as each of the $p^{-it}$ rotate around, there are no fixed relations between any combinations of them and so, heuristically, they shouldn't ``see one another's behaviour'' too much (unlike if we considered $2^{-it}, 3^{-it}, 6^{-it}$, say, for which always $6^{-it} = 2^{-it} 3^{-it}$). What rigorous statements can we make that would correspond to Principle~\ref{bprin1}? The following result, although easily proved, turns out to be a very powerful tool. \begin{lemma}\label{bcorrlem} Let $T \geq 1$, and let $p_{1}, \dots, p_{k}, p_{k+1}, \dots, p_{\ell}$ be any primes (not necessarily distinct). Let $(X_p)_{p \; \text{prime}}$ be a sequence of independent random variables, each distributed uniformly on the complex unit circle. Then $$ \frac{1}{T} \int_{T}^{2T} \prod_{j=1}^{k} p_{j}^{-it} \overline{\biggl(\prod_{j=k+1}^{\ell} p_{j}^{-it} \biggr)} dt = \mathbb{E} \prod_{j=1}^{k} X_{p_{j}} \prod_{j=k+1}^{\ell} \overline{X_{p_{j}}} + O\biggl(\frac{\min\{\prod_{j=1}^{k} p_{j}, \prod_{j=k+1}^{\ell} p_{j}\}}{T} \biggr) . $$ \end{lemma} \begin{proof}[Proof of Lemma~\ref{bcorrlem}] We can rewrite the integral on the left as $$ \frac{1}{T} \int_{T}^{2T} \exp\Bigl\{-it\Bigl(\sum_{j=1}^{k} \log p_{j} - \sum_{j=k+1}^{\ell} \log p_{j}\Bigr) \Bigr\} dt . $$ So if $\sum_{j=1}^{k} \log p_{j} = \sum_{j=k+1}^{\ell} \log p_{j}$ then the integral is exactly~$1$. And since this is equivalent (by uniqueness of prime factorisation) to saying that the $(p_j)_{j=k+1}^{\ell}$ are just some reordering, with the same multiplicities, of the $(p_j)_{j=1}^{k}$, we see that, in this case as well, $\mathbb{E} \prod_{j=1}^{k} X_{p_{j}} \prod_{j=k+1}^{\ell} \overline{X_{p_{j}}} = 1$ since every $X_p$ is paired with a conjugate copy. 
If $\sum_{j=1}^{k} \log p_{j} \neq \sum_{j=k+1}^{\ell} \log p_{j}$, then on the right some $X_p$ is not paired with a conjugate copy, so by independence and symmetry of the distributions of the $X_p$ we have $\mathbb{E} \prod_{j=1}^{k} X_{p_{j}} \prod_{j=k+1}^{\ell} \overline{X_{p_{j}}} = 0$. The integral on the left may be calculated explicitly as $$ \frac{1}{T} \left[ \frac{\exp\bigl\{-it(\sum_{j=1}^{k} \log p_{j} - \sum_{j=k+1}^{\ell} \log p_{j}) \bigr\}}{-i(\sum_{j=1}^{k} \log p_{j} - \sum_{j=k+1}^{\ell} \log p_{j})} \right]_{T}^{2T} \ll \frac{1}{T \Bigl|\log\Bigl(\frac{\prod_{j=1}^{k} p_{j}}{\prod_{j=k+1}^{\ell} p_{j}}\Bigr)\Bigr|} . $$ If $\prod_{j=1}^{k} p_{j} < (3/4)\prod_{j=k+1}^{\ell} p_{j}$ or if $\prod_{j=1}^{k} p_{j} > (4/3)\prod_{j=k+1}^{\ell} p_{j}$ then the logarithmic term here is $> \log 4/3$, so we get an acceptable error term $O(1/T)$. Otherwise, we can write $\log\Bigl(\frac{\prod_{j=1}^{k} p_{j}}{\prod_{j=k+1}^{\ell} p_{j}}\Bigr) = \log\Bigl(1 + \frac{\prod_{j=1}^{k} p_{j} - \prod_{j=k+1}^{\ell} p_{j}}{\prod_{j=k+1}^{\ell} p_{j}}\Bigr)$ and use the Taylor expansion of the logarithm. Since we know that $\prod_{j=1}^{k} p_{j} - \prod_{j=k+1}^{\ell} p_{j} \neq 0$, in fact it is $\geq 1$ and we get a lower bound $\gg \frac{1}{\prod_{j=k+1}^{\ell} p_{j}}$ from the Taylor expansion. Since we are in the case where $\prod_{j=1}^{k} p_{j}$ and $\prod_{j=k+1}^{\ell} p_{j}$ differ at most by a multiplicative factor 4/3, this can also be written as $\gg \frac{1}{\min\{\prod_{j=1}^{k} p_{j} , \prod_{j=k+1}^{\ell} p_{j}\}}$. \end{proof} Lemma~\ref{bcorrlem} implies that if we examine the $t$-average of some polynomial expression in the $p^{-it}$, this will be close to the corresponding average of the genuinely random $X_p$ provided that when we expand things out, the product of the primes involved is small compared with $T$. 
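The content of Lemma~\ref{bcorrlem} is easy to see numerically in its simplest instance $k=1$, $\ell=2$: off-diagonal averages are small, of size $O(\min\{p_1,p_2\}/T)$, while diagonal ones equal $1$ exactly. (The values of $T$, the primes, and the quadrature grid below are arbitrary illustrative choices.)

```python
import numpy as np

T = 1000.0
t = np.linspace(T, 2 * T, 400_001)
dt = t[1] - t[0]

def avg(f):
    """(1/T) * integral of f over [T, 2T], by the trapezoid rule."""
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dt / T

# Off-diagonal: p1 = 2 against p2 = 3; the random model gives E[X_2 conj(X_3)] = 0
off = avg(np.exp(-1j * t * np.log(2)) * np.conj(np.exp(-1j * t * np.log(3))))
assert abs(off) < 5 * min(2, 3) / T      # consistent with the O(min/T) error term

# Diagonal: p1 = p2 = 2; the random model gives E[X_2 conj(X_2)] = 1
diag = avg(np.exp(-1j * t * np.log(2)) * np.conj(np.exp(-1j * t * np.log(2))))
assert np.isclose(diag, 1.0)
```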
Since one can approximate quite general functions using polynomials (with the degree and coefficient size increasing as one looks for better approximations), one can hope to show rigorously that the distribution of sums of the $p^{-it}$ is often close to the distribution of sums of the $X_p$. A particular instance of this is the well known {\em method of moments} from probability theory. For example, if $P=P(T)$ is some large quantity, $(a_p)_{p \; \text{prime}} = (a_p(T))_{p \; \text{prime}}$ are complex numbers, and if one can show that for each $k \in \mathbb{N}$ one has $$ \frac{1}{T} \int_{T}^{2T} \Bigl(\Re \sum_{p \leq P} a_{p} p^{-it}\Bigr)^{k} dt \rightarrow \mathbb{E} N(0,1)^k \;\;\; \text{as} \; T \rightarrow \infty , $$ then it follows that the distribution of $\Re \sum_{p \leq P} a_{p} p^{-it}$ converges to the standard Normal distribution as $T \rightarrow \infty$. (Here we wrote $\mathbb{E} N(0,1)^k = (2\pi)^{-1/2} \int_{-\infty}^{\infty} {w^k e^{-w^{2}/2}} dw$ to denote the $k$-th power moment of the standard Normal distribution.) In view of the above discussion, if the size of the $a_p$ is under control then one could hope to prove such convergence (presuming it actually holds!) when $P(T) = T^{o(1)}$, so that the error terms in Lemma~\ref{bcorrlem} don't contribute too much. \medskip Our other basic principle is the following. \begin{principle}\label{bprin2} For many purposes (especially statistical questions not directly involving the zeta zeros), for any $\sigma \geq 1/2$ the Riemann zeta function $\zeta(\sigma+it)$ ``behaves like'' an Euler product $\prod_{\text{primes} \; p \leq P} (1 - \frac{1}{p^{\sigma+it}})^{-1}$ of ``suitable'' length $P=P(\sigma, t)$. 
\end{principle} As discussed in the Introduction, the reason for believing that something like Principle~\ref{bprin2} could prevail is that $\zeta(\sigma + it)$ is equal to an Euler product when $\sigma > 1$, and if the primes are well distributed then one expects this identity to continue to influence the behaviour of the zeta function for smaller $\sigma$. Indeed, the Riemann Hypothesis is the statement that it does continue to have an influence, at least to the extent that $\zeta(\sigma + it) \neq 0$ (like a finite product of non-vanishing terms) when $\sigma > 1/2$. It is much harder to prove rigorous statements corresponding to Principle~\ref{bprin2} than it was for Principle~\ref{bprin1}, and we shall discuss several examples of such statements in the sequel. One also needs to think carefully about the appropriate sense of ``behaves like'' here, especially when $\sigma = 1/2$, since the Riemann zeta function does have infinitely many zeros on the critical line which don't reflect Euler product type behaviour. But to fix ideas a little we state one nice result, which we will also come back to later. \begin{proposition}[Radziwi\l\l \, and Soundararajan, 2017]\label{radzsoundapprox} For all $T \leq t \leq 2T$, except for a set whose measure is $o(T)$ as $T \rightarrow \infty$, we have $$ \zeta\Bigl(1/2 + \frac{W}{\log T} + it\Bigr) = (1+o(1))\exp\Bigl\{\sum_{p^{k} \leq P} \frac{1}{k p^{k(1/2 + W/\log T + it)}} \Bigr\} , $$ where the sum is over prime powers $p^k$. Here $W= (\log\log\log T)^4$, and $P = T^{1/(\log\log\log T)^2}$, and the $o(1)$ term tends to $0$ as $T \rightarrow \infty$. \end{proposition} The reader needn't be too concerned about the exact choices of $W$ and $P$ here, and in any event there is some flexibility in those (they are related though, as $W$ increases one can take $P$ smaller). 
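Although Proposition~\ref{radzsoundapprox} itself concerns points very close to the critical line, the underlying comparison is easy to test a little further to the right, where both sides provably converge. The sketch below (the function names, and the choices $s = 1.1 + 20i$, $X = 10^5$, $P = 10^4$, are illustrative assumptions of ours, not taken from the proposition) approximates $\zeta(s)$ by the classical truncated sum $\sum_{n \leq X} n^{-s} + \frac{X^{1-s}}{s-1}$ and compares it with a finite Euler product:

```python
import cmath

def primes_up_to(P):
    # simple sieve of Eratosthenes
    sieve = bytearray(b"\x01") * (P + 1)
    sieve[0:2] = b"\x00\x00"
    for n in range(2, int(P ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n :: n] = bytearray(len(range(n * n, P + 1, n)))
    return [n for n in range(2, P + 1) if sieve[n]]

def zeta_truncated(s, X=10**5):
    # classical approximation zeta(s) ~ sum_{n <= X} n^{-s} + X^{1-s}/(s-1),
    # accurate when X is much larger than |Im(s)|
    return sum(n ** (-s) for n in range(1, X + 1)) + X ** (1 - s) / (s - 1)

def euler_product(s, P=10**4):
    prod = 1.0 + 0.0j
    for p in primes_up_to(P):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

s = 1.1 + 20.0j
print(abs(cmath.log(zeta_truncated(s) / euler_product(s))))  # small
```

At $\sigma = 1.1$ the logarithm of the discrepancy is bounded by $\sum_{p > P} p^{-1.1}$ plus tiny truncation errors, so close agreement here is guaranteed; the content of Proposition~\ref{radzsoundapprox} is that a statement of this flavour survives, for most $t$, much nearer to the critical line.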
Proposition~\ref{radzsoundapprox} says that $\zeta(s)$ behaves like an Euler product (or the exponential of a prime number sum) provided one shifts away from the critical line $\Re(s) = 1/2$ by a small amount $W/\log T$. As discussed earlier, such a statement cannot hold when $\Re(s) = 1/2$ because of the zeros of the zeta function. But knowing the result when $\Re(s)$ is slightly larger is sufficient for many purposes, since one can use derivative estimates (or more sophisticated statements of a similar character) to pass from knowing things just off the critical line to knowing things on the critical line. We give a brief sketch of Radziwi\l\l \, and Soundararajan's~\cite{radsoundclt} proof of Proposition~\ref{radzsoundapprox}. Using a mean square calculation, one can show that for most $T \leq t \leq 2T$ we have $\zeta(1/2 + \frac{W}{\log T} + it) M(1/2 + \frac{W}{\log T} + it) = 1+o(1)$, where $M(s) = \sum_{n} \frac{c(n)}{n^s}$ and the coefficients~$c(n)$ are a truncated version of the coefficients one gets by formally expanding the product $\prod_{\text{primes} \; p} (1 - \frac{1}{p^{\sigma+it}})$. Note that one can compute such mean square averages fairly easily using e.g.\ classical approximations $\zeta(s) \approx \sum_{n \leq X} \frac{1}{n^s}$ for the zeta function, provided the coefficients $c(n)$ are zero when $n$ is large (larger than $T^{\epsilon}$, say). Proposition~\ref{radzsoundapprox} follows by combining this with the observation that, for most $T \leq t \leq 2T$, we have $M(1/2 + \frac{W}{\log T} + it) = (1+o(1)) \exp\{-\sum_{p^{k} \leq P} \frac{1}{k p^{k(1/2 + W/\log T + it)}} \}$ (note the minus sign here), which follows from the construction of $c(n)$, the series expansion of the exponential, and (importantly) the fact that $\sum_{p^{k} \leq P} \frac{1}{k p^{k(1/2 + W/\log T + it)}}$ isn't too large for most $T \leq t \leq 2T$. \medskip We conclude this section with some small computations that will recur a number of times in the sequel. 
Firstly we have a number theoretic calculation, which the reader might wish to compare with our earlier discussion of the method of moments. \begin{lemma}\label{pvarlem} For any $T \geq 1$ and any $1 \leq x \leq y$, we have $$ \frac{1}{T} \int_{T}^{2T} \Re \sum_{x \leq p \leq y} \frac{1}{p^{1/2+it}} dt = O\Bigl(\frac{\sqrt{y}}{T}\Bigr) , $$ $$ \frac{1}{T} \int_{T}^{2T} \Bigl(\Re \sum_{x \leq p \leq y} \frac{1}{p^{1/2+it}}\Bigr)^2 dt = \frac{1}{2} \log\Bigl(\frac{\log y}{\log x}\Bigr) + O\Bigl(\frac{1}{\log^{100}(2x)} + \frac{y^2}{T}\Bigr) . $$ \end{lemma} \begin{proof} We rewrite $\Re \sum_{x \leq p \leq y} \frac{1}{p^{1/2+it}} = \frac{1}{2}\bigl(\sum_{x \leq p \leq y} \frac{1}{p^{1/2+it}} + \overline{\sum_{x \leq p \leq y} \frac{1}{p^{1/2+it}}}\bigr)$. The first statement in Lemma~\ref{pvarlem} follows directly by combining this with Lemma~\ref{bcorrlem}. For the second statement, we can rewrite the left hand side as $$ \frac{1}{4} \sum_{x \leq p,q \leq y} \frac{1}{\sqrt{pq}} \frac{1}{T} \int_{T}^{2T} \bigl(p^{-it}q^{-it} + p^{-it}\overline{q^{-it}} + \overline{p^{-it}}q^{-it} + \overline{p^{-it}}\overline{q^{-it}}\bigr) dt , $$ and using Lemma~\ref{bcorrlem} this is the same as $$ \frac{1}{4} \sum_{x \leq p,q \leq y} \frac{1}{\sqrt{pq}} \biggl(\mathbb{E} X_{p} X_{q} + \mathbb{E} X_{p} \overline{X_{q}} + \mathbb{E} \overline{X_{p}} X_{q} + \mathbb{E} \overline{X_{p}} \overline{X_{q}} + O\Bigl(\frac{\min\{p,q\}}{T}\Bigr) \biggr) . $$ It is easy to check that if $p \neq q$ then by independence and symmetry all of these expectations vanish, whereas when $p=q$ we have $\mathbb{E} X_{p}^2 = 0$ and $\mathbb{E} |X_p|^2 = 1$. Lemma~\ref{pvarlem} finally follows using the standard estimate $\sum_{x \leq p \leq y} \frac{1}{p} = \log\bigl(\frac{\log y}{\log x}\bigr) + O\bigl(\frac{1}{\log^{100}(2x)}\bigr)$, say, which follows from the Prime Number Theorem. 
\end{proof} Lemma~\ref{pvarlem} tells us that the mean value of $\Re \sum_{p \leq \sqrt{T}} \frac{1}{p^{1/2+it}}$ (say) is very small for large $T$, and the mean square (which is essentially also the variance, since the mean is small) is $\sim (1/2)\log\log T$. We will see the quantity $\log\log T$ appear in many places later, and this variance calculation is one of the key sources of it. Let us emphasise that $\log\log T \rightarrow \infty$ as $T \rightarrow \infty$, whereas if one attempted a similar calculation with $\Re \sum_{p \leq \sqrt{T}} \frac{1}{p^{\sigma+it}}$ for any fixed $\sigma > 1/2$ then the mean square would be convergent. This is one of the key sources of difficulty and interest on the critical line, as compared with elsewhere in the complex plane. On the other hand, $\log\log T$ is a very slowly growing function, which turns out to be key to the success of many of the arguments that we can implement. We also record a probabilistic calculation. \begin{lemma}\label{maxnormlem} Let $Z_1, \dots{}, Z_n$ be independent Gaussian random variables, each having mean zero and standard deviation $\sigma > 0$. Then for any $u \geq 0$, we have $$ 1 - e^{-\Theta\bigl(n \frac{e^{-u^{2}/2}}{1+u}\bigr)} \leq \mathbb{P}\Bigl(\max_{1 \leq i \leq n} Z_i > u\sigma\Bigr) \ll n \frac{e^{-u^{2}/2}}{1+u} . $$ In particular, for any $\epsilon > 0$ we have $$ \mathbb{P}\Bigl(1-\epsilon \leq \frac{\max_{1 \leq i \leq n} Z_i}{\sigma\sqrt{2\log n}} \leq 1\Bigr) \rightarrow 1 \;\;\;\;\; \text{as} \; n \rightarrow \infty . $$ \end{lemma} \begin{proof}[Proof of Lemma~\ref{maxnormlem}] Using the union bound, we have $\mathbb{P}(\max_{1 \leq i \leq n} Z_i > u\sigma) \leq \sum_{i=1}^{n} \mathbb{P}(Z_i > u\sigma)$. And $\mathbb{P}(Z_i > u\sigma)$ is just the probability that a $N(0,1)$ random variable is $> u$, which is $\asymp \frac{e^{-u^{2}/2}}{1+u}$. This proves the first upper bound. 
To prove the lower bound, it will suffice to show that $\mathbb{P}(\max_{1 \leq i \leq n} Z_i \leq u\sigma) \leq e^{-\Theta\bigl(n \frac{e^{-u^{2}/2}}{1+u}\bigr)}$. But by independence, this probability is equal to $\prod_{i=1}^{n} \mathbb{P}(Z_i \leq u\sigma)$. And again, $\mathbb{P}(Z_i \leq u\sigma)$ is $1 - \Theta\bigl(\frac{e^{-u^{2}/2}}{1+u}\bigr) = \exp\bigl\{-\Theta\bigl(\frac{e^{-u^{2}/2}}{1+u}\bigr)\bigr\}$, which gives the result. For the second statement, we just note that if we take $u = (1-\epsilon)\sqrt{2\log n}$, where $\epsilon > 0$ is small and $n$ is large, then $n \frac{e^{-u^{2}/2}}{1+u} \rightarrow \infty$ as $n \rightarrow \infty$ and so $\mathbb{P}(\max_{1 \leq i \leq n} Z_i > u\sigma) \geq 1 - o(1)$. Similarly, if we take $u = \sqrt{2\log n}$ then $n \frac{e^{-u^{2}/2}}{1+u} \rightarrow 0$ as $n \rightarrow \infty$, and so $\mathbb{P}(\max_{1 \leq i \leq n} Z_i > u\sigma) = o(1)$. \end{proof} Note that we made little use of the assumption that the $Z_i$ were Gaussian/Normal random variables. This just gave us a rather explicit form for the tail probabilities $\mathbb{P}(Z_i > u\sigma)$ that arose in the argument. It would also be easy to replace the second statement with something more precise, and later we shall extensively discuss the precise asymptotics of the maxima of Gaussian random variables. \section{General landscape of the values of zeta} To set the scene for our discussion of $\zeta(1/2+it)$ in short intervals of $t$, we now review some of the key information we have (unconditional, conditional and conjectural) when $t$ varies over a wide range. \medskip Firstly one might ask about the ``typical'' size of $\zeta(1/2+it)$. A natural way to make this precise is to ask about the distribution of $|\zeta(1/2+it)|$, where $T \leq t \leq 2T$ (say) is chosen uniformly at random. This situation is described by a beautiful classical result of Selberg.
\begin{theorem}[Selberg Central Limit Theorem, 1946]\label{selbergclt} For any $z \in \mathbb{R}$, we have $$ \frac{1}{T} \mathrm{meas}\Bigl\{T \leq t \leq 2T : \frac{\log|\zeta(1/2+it)|}{\sqrt{(1/2)\log\log T}} \leq z \Bigr\} \rightarrow \Phi(z) \;\;\; \text{as} \; T \rightarrow \infty , $$ where $\mathrm{meas}\{\cdot\}$ denotes Lebesgue measure, and where $\Phi(z) := \int_{-\infty}^{z} \frac{e^{-w^{2}/2}}{\sqrt{2\pi}} dw$ is the standard Normal cumulative distribution function. \end{theorem} Let us remark that although we will have $\zeta(1/2+it) = 0$ (and therefore $\log|\zeta(1/2+it)|$ will be undefined) for some points $T \leq t \leq 2T$ (in fact for $\asymp T\log T$ points), since these points form a discrete set they contribute nothing from the point of view of measure, so are irrelevant to the statement of Theorem~\ref{selbergclt}. The Selberg Central Limit Theorem is the prototypical manifestation of the Basic Principles discussed in the previous subsection. Looking at things heuristically, we have $$ \log|\zeta(1/2+it)| = \Re\log\zeta(1/2+it) \approx - \Re \sum_{p \leq P} \log\Bigl(1 - \frac{1}{p^{1/2+it}}\Bigr) \approx \Re \sum_{p \leq P} \frac{1}{p^{1/2+it}} , $$ for ``suitable'' $P = P(T)$. Then as $T \leq t \leq 2T$ varies, the terms $p^{-it}$ behave like independent random variables, and so $\log|\zeta(1/2+it)|$ behaves roughly like a sum of many independent random variables. This is exactly the situation where one expects to have convergence in distribution to a Normal random variable. The second part of the heuristic is rather easy to make rigorous to an acceptable level of precision in this setting, by computing moments of the sums $\Re \sum_{p \leq P} \frac{1}{p^{1/2+it}}$ and showing that they converge to the moments of a Normal distribution. The approximation $\log|\zeta(1/2+it)| \approx \Re \sum_{p \leq P} \frac{1}{p^{1/2+it}}$ has traditionally been more difficult to establish rigorously. 
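The easier half of this heuristic, the approximate Gaussianity of the prime sums, can be probed directly. The sketch below (with illustrative choices $T = 10^8$, $P = 10^3$ and $4000$ samples, all our own) draws random points $t \in [T, 2T]$ and compares the empirical mean and variance of $\Re \sum_{p \leq P} p^{-1/2-it}$ with the predictions $0$ and $\frac{1}{2}\sum_{p \leq P} \frac{1}{p}$ coming from Lemma~\ref{pvarlem}:

```python
import math
import random

def primes_up_to(P):
    # simple sieve of Eratosthenes
    sieve = bytearray(b"\x01") * (P + 1)
    sieve[0:2] = b"\x00\x00"
    for n in range(2, int(P ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n :: n] = bytearray(len(range(n * n, P + 1, n)))
    return [n for n in range(2, P + 1) if sieve[n]]

random.seed(0)
T, P, samples = 1e8, 1000, 4000
primes = primes_up_to(P)
logs = [math.log(p) for p in primes]
weights = [p ** -0.5 for p in primes]
vals = []
for _ in range(samples):
    t = random.uniform(T, 2 * T)
    # Re sum_{p <= P} p^{-1/2 - it} = sum_p p^{-1/2} cos(t log p)
    vals.append(sum(w * math.cos(t * lg) for w, lg in zip(weights, logs)))
mean = sum(vals) / samples
var = sum((v - mean) ** 2 for v in vals) / samples
predicted_var = 0.5 * sum(1.0 / p for p in primes)
print(mean, var, predicted_var)  # mean near 0, sample variance near (1/2) sum 1/p
```

Higher moments of these samples are similarly close to the Gaussian predictions, in line with the method of moments discussion following Lemma~\ref{bcorrlem}.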
As we already discussed, nothing like this can hold pointwise on the critical line because the left hand side will be undefined at some points $t$, so one wants to show that $\log|\zeta(1/2+it)| \approx \Re \sum_{p \leq P} \frac{1}{p^{1/2+it}}$ in some kind of average sense. The classical proofs of this entailed quite complicated manipulations to work around the zeros of zeta, but recently Radziwi\l\l \, and Soundararajan~\cite{radsoundclt} have given a very neat and conceptual proof using Proposition~\ref{radzsoundapprox}. \medskip Another key question is about the largest values attained by $|\zeta(1/2+it)|$ as $t$ varies. Unconditionally, our best upper bounds for the size of $\zeta(1/2+it)$ are rather weak despite the application of some very powerful methods to the problem. \begin{theorem}[Bourgain, 2017] For any $\epsilon > 0$ and all large $t$, we have the upper bound $|\zeta(1/2+it)| \ll_{\epsilon} t^{13/84 + \epsilon}$ (where the implicit constant may depend on $\epsilon$). \end{theorem} Bourgain~\cite{bourgainzeta} proved this result by combining the Hardy--Littlewood approximation $\zeta(1/2+it) \approx \sum_{n \leq t} \frac{1}{n^{1/2+it}}$, exponential sum methods of Bombieri--Iwaniec, and progress in the theory of ``decoupling'' from harmonic analysis. For comparison, general complex analysis arguments (``convexity'') can prove a bound $\ll_{\epsilon} t^{1/4 + \epsilon}$, and long ago Hardy and Littlewood proved the bound $\ll_{\epsilon} t^{1/6 + \epsilon}$. Bourgain's exponent $13/84 \approx 0.155$ is the latest in a long line of improvements. Meanwhile the classical {\em Lindel\"of Hypothesis} (the truth of which follows from the Riemann Hypothesis) conjectures that $|\zeta(1/2+it)| \ll_{\epsilon} t^{\epsilon}$ for any $\epsilon > 0$ and all large $t$. 
The bound $t^{\epsilon}$ proposed by the Lindel\"of Hypothesis is still rather soft, so what upper bound should we really expect, in other words what is the true size of $\max_{T \leq t \leq 2T} |\zeta(1/2+it)|$? There isn't a universal consensus about this, but the following results set some limits on where the truth can lie. \begin{theorem}[Littlewood, 1924]\label{littlewoodupper} If the Riemann Hypothesis is true, then for all large $t$ we have $$ |\zeta(1/2+it)| \leq \exp\left\{C\frac{\log t}{\log\log t}\right\} , $$ for a certain absolute constant $C > 0$. \end{theorem} \begin{theorem}[Bondarenko and Seip, 2018]\label{bondseiplower} For all large $T$, we have $$ \max_{1 \leq t \leq T} |\zeta(1/2+it)| \geq \exp\left\{(1 + o(1))\sqrt{\frac{\log T \log\log\log T}{\log\log T}}\right\} , $$ where the $o(1)$ term tends to $0$ as $T \rightarrow \infty$. \end{theorem} Apart from a sequence of improvements to the value of $C$, Littlewood's~\cite{littlewood} result in Theorem~\ref{littlewoodupper} hasn't been improved for almost a century. Theorem~\ref{bondseiplower} is a recent breakthrough of Bondarenko and Seip~\cite{bondseip, bondseip2}, improving on earlier lower bounds of a similar shape but without the $\log\log\log T$ factor inside the square root. By further elaboration of their method, the constant $1+o(1)$ has even more recently been improved to $\sqrt{2}+o(1)$ by La Bret\`eche and Tenenbaum~\cite{dlBtenGalSums}. \medskip To appreciate these bounds, and contemplate where the truth might lie between them, it is instructive to consider a rough outline of the proofs. Assuming the truth of the Riemann Hypothesis, one can prove upper bounds of roughly the following shape: for any large $t$ and any parameter $x \leq t$, we have \begin{equation}\label{soundupper} \log|\zeta(1/2+it)| \lesssim \Re \sum_{p \leq x} \frac{1}{p^{1/2+it}} + O\left(\frac{\log t}{\log x}\right) . \end{equation} See, for example, the Main Proposition of Soundararajan~\cite{soundmoments}. 
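To get a quantitative feel for the gap between Theorems~\ref{littlewoodupper} and~\ref{bondseiplower}, one can simply evaluate the two exponents for a range of sizes of $\log T$. In the sketch below we take $C = 1$ in Littlewood's bound purely for illustration:

```python
import math

# exponents of exp{...} in Littlewood's conditional upper bound (with C = 1,
# an illustrative choice) and in the Bondarenko--Seip lower bound
for log_T in (1e2, 1e4, 1e6):
    L2 = math.log(log_T)                 # log log T
    L3 = math.log(L2)                    # log log log T
    upper = log_T / L2
    lower = math.sqrt(log_T * L3 / L2)
    print(log_T, lower, upper, upper / lower)
```

Even for astronomically large $T$ the two exponents remain far apart, and their ratio grows, which gives some sense of how open the question of the true order of the maximum still is.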
Note that this is another very nice manifestation of Principle~\ref{bprin2}: if we are only interested in upper bounds, we can control $\log|\zeta(1/2+it)|$ by sums over primes at {\em every} point $t$, even on the critical line. As noted previously, one cannot hope for a similar {\em lower} bound at every point, since when $\zeta(1/2+it) = 0$ the left hand side will be undefined (equal to $-\infty$, informally). It is difficult to give a pointwise bound for this sum over primes except in a trivial way (especially when $x$ is small), namely $\Re \sum_{p \leq x} \frac{1}{p^{1/2+it}} \leq \sum_{p \leq x} \frac{1}{\sqrt{p}} \sim \frac{2x^{1/2}}{\log x}$. So to obtain the best possible upper bound for $\log|\zeta(1/2+it)|$, we choose $x$ to balance the size of this term and the ``big Oh'' term. Choosing $x \asymp \log^{2}t$ is optimal, and yields the claimed bound $\log|\zeta(1/2+it)| \ll \frac{\log t}{\log\log t}$ assuming the Riemann Hypothesis. To prove their lower bound, Bondarenko and Seip~\cite{bondseip} work to compare the sizes (roughly speaking) of $\int_{1}^{T} \zeta(1/2+it) |R(t)|^2 dt$ and $\int_{1}^{T} |R(t)|^2 dt$, where $R(t)$ is an auxiliary ``resonator'' function that is chosen to concentrate its mass at points where $\zeta(1/2+it)$ should be large. For any choice of $R(t)$, upper bounding $|\zeta(1/2+it)|$ by $\max_{1 \leq t \leq T} |\zeta(1/2+it)|$ implies that $$ \max_{1 \leq t \leq T} |\zeta(1/2+it)| \geq \frac{|\int_{1}^{T} \zeta(1/2+it) |R(t)|^2 dt|}{\int_{1}^{T} |R(t)|^2 dt} , $$ and if $R(t)$ is well chosen one can hope for this lower bound to be fairly efficient. One of Bondarenko and Seip's main innovations, as compared with previous arguments, is to choose $R(t) = \sum_{m \in \mathcal{M}} r(m) m^{-it}$ for certain intricately constructed coefficients $r(m)$ whose support is {\em not} constrained to the interval $[1,T]$ (as would be usual to allow one to control error terms when evaluating the integrals). 
Instead they allow $r(m) \neq 0$ even when $m$ is extremely large, although only on a very sparse sequence of $m$ so that the error terms remain under control. Very roughly speaking, Bondarenko and Seip's resonator $R(t)$ concentrates its mass on those $t$ for which $\sum_{Pe < p < Pe^{(\log\log T)^{c}}} \frac{1}{p^{1/2 + it}}$ is very large, where $P = C\log T \log\log T$ and $c < 1$ and $C$ are suitable constants, and with some penalisation (something like $\frac{1}{\log(p/P)}$) of the larger $p$ in the interval that are harder to control. For a typical $t$, one expects this sum to have order roughly its standard deviation, namely $$ \approx \sqrt{\sum_{e < \frac{p}{P} < e^{(\log\log T)^{c}}} \frac{1}{p \log(\frac{p}{P})} } \asymp \sqrt{\frac{1}{\log\log T} \sum_{e < \frac{n}{P} < e^{(\log\log T)^{c}}} \frac{1}{n \log(\frac{n}{P})} } \asymp \sqrt{\frac{\log\log\log T}{\log\log T} } , $$ using e.g.\ Chebychev's bounds for the density of the primes. So if we look for the largest values attained as $1 \leq t \leq T$ varies, then motivated by the second part of Lemma~\ref{maxnormlem} we could expect this to have size $\asymp \sqrt{\log T} \sqrt{\frac{\log\log\log T}{\log\log T} }$, and this is precisely the lower bound that Bondarenko and Seip are able to prove for zeta by computing the integrals with their choice of $R(t)$. Note that when applying Lemma~\ref{maxnormlem} to get an idea of what to expect here, it is unimportant whether we assume that varying over $1 \leq t \leq T$ corresponds to taking about $T$ independent samples (as would be the usual heuristic, see below), or $T^2$ or $\sqrt{T}$ samples (say), as the logarithms of all these quantities have the same order of magnitude. \medskip If we compare these upper and lower bound arguments, we see that both of them come down to analysing contributions from fairly small primes, of size $\log^{2}T$ at most. 
In the upper bound arguments, one would like to show some cancellation but is forced to resort to trivial estimates, whereas in the lower bounds one wants to show large values {\em are} attained. But considering the problem heuristically, there is no reason to believe that the extreme behaviour of $|\zeta(1/2+it)|$ should be dominated by the behaviour of these very small primes that we are forced to focus on due to methodological limitations. We have the following conjecture of Farmer--Gonek--Hughes~\cite{farmergonekhughes} about the true size of $\max_{0 \leq t \leq T} |\zeta(1/2+it)|$. \begin{conjecture}[Farmer, Gonek and Hughes, 2007]\label{fghconj} We have $$ \max_{0 \leq t \leq T} |\zeta(1/2+it)| = \exp\Bigl\{\bigl(\frac{1}{\sqrt{2}} + o(1)\bigr)\sqrt{\log T \log\log T}\Bigr\} , $$ where the $o(1)$ term tends to $0$ as $T \rightarrow \infty$. \end{conjecture} Farmer, Gonek and Hughes supply various arguments in support of this conjecture, including a random matrix model, a random primes model (essentially Principle~\ref{bprin1}), and a combination of these. If we assume that something like the Selberg Central Limit Theorem remains valid in a very large deviations regime (so that $\log|\zeta(1/2+it)| \approx N(0,(1/2)\log\log T)$), and further assume that varying over $0 \leq t \leq T$ corresponds to taking about $T$ independent samples, then Lemma~\ref{maxnormlem} would suggest that $$ \max_{0 \leq t \leq T} \log|\zeta(1/2+it)| \approx \sqrt{2\log T} \sqrt{(1/2)\log\log T} = \sqrt{\log T \log\log T} . $$ So this rather simple approach gives a conjecture of the same shape as Conjecture~\ref{fghconj}, although with a constant 1 instead of $1/\sqrt{2}$ in the exponent. Farmer, Gonek and Hughes~\cite{farmergonekhughes} credit this observation to Montgomery. If one combines this simple line of argument with the bound \eqref{soundupper}, one can in fact recover Conjecture~\ref{fghconj} exactly. 
Thus if $T \leq t \leq 2T$, and we take $x = e^{\sqrt{\log T}}$ in \eqref{soundupper}, we get (assuming RH) $$ \log|\zeta(1/2+it)| \lesssim \Re \sum_{p \leq e^{\sqrt{\log T}}} \frac{1}{p^{1/2+it}} + O\bigl(\sqrt{\log T} \bigr) . $$ This choice of $x$ is basically the smallest possible such that the ``big Oh'' term will be of small order compared with the lower bounds we have. (If the reader prefers, he or she could take $x = e^{\sqrt{\log T \log\log T}}$ so the ``big Oh'' term would really be smaller than the lower bound we know from Theorem~\ref{bondseiplower}. This will make no difference to the conjecture we shall derive, it would just be messier to write!) Then using Lemma~\ref{pvarlem}, the mean square of $\Re \sum_{p \leq e^{\sqrt{\log T}}} \frac{1}{p^{1/2+it}}$ is $\sim (1/4)\log\log T$. So now if we assume that this sum will behave like a $N(0,(1/4)\log\log T)$ random variable, then Lemma~\ref{maxnormlem} suggests that \begin{align*} \max_{0 \leq t \leq T} \log|\zeta(1/2+it)| \approx & \sqrt{2\log T} \sqrt{\frac{1}{4} \log\log T} +O(\sqrt{\log T}) \\ =& \sqrt{\frac{1}{2} \log T \log\log T} + O(\sqrt{\log T}) . \end{align*} \medskip The key thing that links all the heuristic arguments leading to Conjecture~\ref{fghconj}, and other conjectures of the same shape, is the assumption of some {\em independence} somewhere. One generally assumes independence in the values of $\zeta(1/2+it_1), \zeta(1/2+it_2)$ when $t_1, t_2$ are sufficiently far apart (e.g.\ $|t_1 - t_2| > 1$, although as noted above the analysis isn't very sensitive to the details of this). One also assumes some independence in modelling the value distribution of zeta at a single point (this is explicit in the random primes model/Principle~\ref{bprin1}, in the random matrix models it is less explicit but there is still much independence in the definition of the random matrices and in their behaviour). 
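The two ingredients of this back-of-envelope computation, namely the identity $\sqrt{2\log T}\sqrt{\frac{1}{4}\log\log T} = \sqrt{\frac{1}{2}\log T \log\log T}$ and the prediction of Lemma~\ref{maxnormlem} that the maximum of $n$ independent centred Gaussians sits near $\sigma\sqrt{2\log n}$, can both be checked directly. In the sketch below the sizes $L = 10^6$, $n = 10^5$ and the number of trials are arbitrary illustrative choices:

```python
import math
import random

# the elementary identity behind the constant 1/sqrt(2): for any L > 1,
# sqrt(2L) * sqrt((1/4) log L) = sqrt((1/2) L log L)
L = 1e6  # stands in for log T
assert math.isclose(math.sqrt(2 * L) * math.sqrt(0.25 * math.log(L)),
                    math.sqrt(0.5 * L * math.log(L)))

# maximum of n independent N(0, sigma^2) samples versus sigma * sqrt(2 log n)
random.seed(2)
n, sigma, trials = 10 ** 5, 1.0, 20
ratios = [max(random.gauss(0.0, sigma) for _ in range(n)) / (sigma * math.sqrt(2 * math.log(n)))
          for _ in range(trials)]
print(sum(ratios) / trials)  # a little below 1, as the second part of the lemma predicts
```

The simulated ratio sits slightly below $1$ because of lower order corrections to the maximum, which decay only slowly as $n$ grows. Of course, the whole computation leans on modelling the samples as independent.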
In contrast, if one believes that instead of independence there could be an {\em extreme conspiracy} in the values of the $p^{-it}$ for $p$ small, then one might reasonably believe that the upper bound in Theorem~\ref{littlewoodupper} is closer to the truth. Farmer, Gonek and Hughes~\cite{farmergonekhughes} discuss this in the final section of their paper. The author tends to prefer the independence to the conspiracy assumption, but it is hard to see how one can really distinguish between these possibilities short of actually determining the size of $\max_{0 \leq t \leq T} |\zeta(1/2+it)|$, which we are probably still far from doing. \section{The conjecture of Fyodorov--Hiary--Keating} Whereas the Selberg Central Limit Theorem gives, unconditionally, a full description of the typical behaviour of $\log|\zeta(1/2+it)|$ (at least to an initial level of precision), we have seen that our understanding of the largest values attained by $\log|\zeta(1/2+it)|$ is far less complete. Why is this? One answer is that the largest values attained by $\log|\zeta(1/2+it)|$ correspond to very low probability events (i.e.\ sets of $t$ with measure much smaller than $T$), far in the tails of the distribution. Even in a purely probabilistic setting, such problems can present considerable difficulties. For example, the quantitative error terms in probabilistic central limit theorems are often relatively large, so they become much less useful when directly applied to rare events. Fyodorov and Keating~\cite{fyodkeat} and Fyodorov, Hiary and Keating~\cite{fyodhiarykeat} recently initiated study of a problem that is intermediate between these two regimes (although rather closer to the typical behaviour than the largest values). \begin{problem}\label{shortintervalsprob} As $T \leq t \leq 2T$ varies, how is $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ distributed? 
\end{problem} Note that for some $T \leq t \leq 2T$, the interval $[t,t+1]$ will contain a point $t^{*}$ for which $|\zeta(1/2 + it^{*})| = \max_{T \leq t \leq 2T} |\zeta(1/2 + it)|$, whose size we don't understand well. But since Problem~\ref{shortintervalsprob} is a distributional question, this small subset of $t$ can be ignored (just as in the statement of the Selberg Central Limit Theorem one needn't worry about the zeros of zeta) and one can hope to have a tractable yet interesting question. \medskip The next obvious query is whether we should expect the behaviour of the short interval maximum $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ to be much different than the behaviour of $|\zeta(1/2 + it)|$? At first glance, taking the maximum over an interval of bounded length might not be expected to alter things too significantly, in which case the answer to Problem~\ref{shortintervalsprob} might be a result of a similar shape to the Selberg Central Limit Theorem. Here is a heuristic line of argument that suggests the distributional behaviour of $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ could actually be a lot different than the behaviour at a single point. For the sake of this argument we shall make three temporary assumptions, then later we will examine how reasonable these are. \begin{itemize} \item(Assumption 1) The Selberg Central Limit Theorem remains valid even some way into the tails of the probability distribution, in other words the left hand side of Theorem~\ref{selbergclt} is still well approximated by $\Phi(z)$ even when $z$ grows with $T$ ``at a suitable rate''. \item(Assumption 2) As $T \leq t \leq 2T$ varies, the values $|\zeta(1/2+it+ih_1)|, |\zeta(1/2+it+ih_2)|$ are ``roughly the same'' when $|h_1 - h_2| \leq 1/\log T$. \item(Assumption 3) As $T \leq t \leq 2T$ varies, the values $|\zeta(1/2+it+ih_1)|, |\zeta(1/2+it+ih_2)|$ behave ``roughly independently'' when $|h_1 - h_2| > 1/\log T$. 
\end{itemize} Much of our analysis will duplicate steps from the proof of Lemma~\ref{maxnormlem}, but we will write it out explicitly for ease of reference and to attain greater precision at some points. If Assumption 2 is correct, then we have $$ \max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)| \approx \max_{1 \leq j \leq \log T} \Bigl|\zeta\Bigl(1/2 + it + i\frac{j}{\log T}\Bigr)\Bigr| . $$ Then for any real $u$, we have the simple union upper bound: \begin{align*} & \frac{1}{T} \mathrm{meas}\Bigl\{T \leq t \leq 2T : \max_{1 \leq j \leq \log T} \Bigl|\zeta\Bigl(1/2+it + i\frac{j}{\log T}\Bigr)\Bigr| \geq e^{u} \Bigr\} \\ & \leq \sum_{1 \leq j \leq \log T} \frac{1}{T} \mathrm{meas}\Bigl\{T \leq t \leq 2T : \log\Bigl|\zeta\Bigl(1/2+it + i\frac{j}{\log T}\Bigr)\Bigr| \geq u \Bigr\} \end{align*} If Assumption 1 is correct, and if we assume to simplify the writing that $u \geq \sqrt{\log\log T}$, then each summand here will be $$ \approx \mathbb{P}\biggl(N(0,1) \geq \frac{u}{\sqrt{(1/2)\log\log T}}\biggr) \approx \frac{\sqrt{\log\log T}}{u} e^{-u^{2}/\log\log T} . $$ In particular, if $u = \log\log T - (1/4)\log\log\log T + U$ for some $U \geq 0$ then the right hand side is $$ \ll \frac{1}{\sqrt{\log\log T}} e^{-(\log\log T - (1/4)\log\log\log T + U)^{2}/\log\log T} \ll \frac{1}{\log T} e^{-2U} e^{-\Theta(U^{2}/\log\log T)} . $$ Summing over $1 \leq j \leq \log T$, we find that if $U$ is large then the sum will be small, in other words we can expect that for most $t$, the maximum $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ has size {\em at most} $e^{\log\log T - (1/4)\log\log\log T + O(1)}$. 
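One can check numerically that the crossover point in this union bound does sit at $u \approx \log\log T - (1/4)\log\log\log T$. In the sketch below we take $\log T = 10^6$ (an arbitrary illustrative size, written $N$), write $L$ for $\log\log T$, and solve by bisection for the $u$ at which the union bound estimate drops to $1$:

```python
import math

N = 1e6            # an illustrative stand-in for log T, the number of grid points j
L = math.log(N)    # then L plays the role of log log T

def log_expected_count(u):
    # log of: (log T) * (sqrt(log log T)/u) * exp(-u^2 / log log T),
    # the union bound estimate for the number of j with log|zeta| >= u
    return math.log(N) + 0.5 * math.log(L) - math.log(u) - u * u / L

lo, hi = math.sqrt(L), 3 * L   # the estimate is large at lo and tiny at hi
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if log_expected_count(mid) > 0:
        lo = mid
    else:
        hi = mid
predicted = L - 0.25 * math.log(L)   # log log T - (1/4) log log log T
print(lo, predicted)                 # the crossover point and the prediction agree closely
```

The small discrepancy comes from the neglected quadratic term in the exponent, of size $(\log\log\log T)^2/\log\log T$, which tends to zero (albeit slowly) as $T$ grows.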
For a lower bound, we note that if Assumption 3 is correct then for any $u \in \mathbb{R}$, \begin{align*} & \frac{1}{T} \mathrm{meas}\Bigl\{T \leq t \leq 2T : \max_{1 \leq j \leq \log T} \Bigl|\zeta\Bigl(1/2+it + i\frac{j}{\log T}\Bigr)\Bigr| \leq e^{u} \Bigr\} \\ \approx & \prod_{1 \leq j \leq \log T} \frac{1}{T} \mathrm{meas}\Bigl\{T \leq t \leq 2T : \log\Bigl|\zeta\Bigl(1/2+it + i\frac{j}{\log T}\Bigr)\Bigr| \leq u\Bigr\} . \end{align*} And using Assumption 1 as before to estimate each term in the product, we find the above is $$ \approx \left(1 - \frac{\sqrt{\log\log T}}{u} e^{-u^{2}/\log\log T} \right)^{\lfloor \log T \rfloor} . $$ In particular, if we take $u = \log\log T - (1/4)\log\log\log T - U$ for some fixed $U \geq 0$ (note that we have $-U$ here, not $U$) then each bracket will be $\approx (1 - \frac{e^{2U}}{\log T})$, and the product of $\lfloor \log T \rfloor$ copies of this is $\approx e^{-e^{2U}}$, which becomes small once $U$ is large. So we can expect that for most $t$, the maximum $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ has size {\em at least} $e^{\log\log T - (1/4)\log\log\log T + O(1)}$ as well. \medskip We shall revisit this heuristic argument later, but record a few immediate observations. Firstly, the typical size of the maximum derived above is close to $e^{\log\log T}$, as opposed to the size $e^{\Theta(\sqrt{\log\log T})}$ at a typical point $t$ provided by the Selberg Central Limit Theorem. So, if the above heuristic is roughly accurate, there should be a real difference between these situations. Note, however, that this size is still much smaller than the regime considered in Theorems~\ref{littlewoodupper} and~\ref{bondseiplower}, so we are much less far into the tails of the distribution and can have hopes of a good rigorous analysis of the situation. Another striking contrast is that in the Selberg Central Limit Theorem, the distribution of $\log|\zeta(1/2+it)|$ is shown to have mean zero and to vary around this on a scale of $\sqrt{\log\log T}$.
In our heuristic for the short interval maximum of log zeta, the random variation occurs on a {\em smaller} scale $O(1)$, whilst one has a deterministic main term of size $\sim \log\log T$. Let us also note that Assumption 3, the independence assumption, was only required for the proof of the lower bound. Thus one might suspect, and it will turn out to be the case, that it should be easier to make our heuristic argument rigorous for the upper bound than for the lower bound. \medskip As well as proposing the study of Problem~\ref{shortintervalsprob}, Fyodorov, Hiary and Keating~\cite{fyodhiarykeat, fyodkeat} also made a precise conjecture about the answer. \begin{conjecture}[Fyodorov--Hiary--Keating, 2012]\label{fhkconj} For any real function $g(T)$ that tends to infinity with $T$, we have that $$ \frac{1}{T} \mathrm{meas}\bigl\{0 \leq t \leq T : \bigl|\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)| - (\log\log T - (3/4)\log\log\log T)\bigr| \leq g(T) \bigr\} $$ tends to 1 as $T \rightarrow \infty$. \end{conjecture} In fact, Fyodorov--Hiary--Keating make an even more precise conjecture than this, about the {\em distribution} of the difference between $\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)|$ and $(\log\log T - (3/4)\log\log\log T)$. But this seems far beyond anything that is rigorously attackable at present, so we shall not discuss it further here. We also note that the choice of the interval $|h| \leq 1$ is rather arbitrary, and in fact Fyodorov--Hiary--Keating looked primarily at the interval $0 \leq h \leq 2\pi$, which corresponds more naturally with the random matrix setting. But one will have an analogous conjecture and results for any interval of fixed non-zero length. Fyodorov, Hiary and Keating were led to their conjecture via a two step process, which we shall briefly explain. 
For given $t$, in order to understand the behaviour of $\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)|$ one might try to compute quantities such as $$ \int_{|h| \leq 1} e^{2\beta \log|\zeta(1/2+it+ih)|} dh = \int_{t-1}^{t+1} |\zeta(1/2+ iw)|^{2\beta} dw , $$ for varying $\beta > 0$. The idea is that as $\beta$ becomes larger, the size of the integral will be increasingly dominated by the largest values attained by $\log|\zeta(1/2+it+ih)|$. In the language of mathematical physics, this kind of integral is the {\em partition function} associated with $\log|\zeta(1/2+it+ih)|$. Since we are interested in what happens as $t$ varies, we could further try to understand this by computing quantities such as $$ \int_{0}^{T} \left( \int_{t-1}^{t+1} |\zeta(1/2+ iw)|^{2\beta} dw \right)^{q} dt , $$ where now $q > 0$ is a further parameter. For given $\beta$, if we can understand the size of these integrals for all (or many) $q$ we might hope to get a good understanding of the distribution of $\int_{t-1}^{t+1} |\zeta(1/2+ iw)|^{2\beta} dw$. And in turn, if one can understand this for suitable $\beta$ one might hope to get a good understanding of $\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)|$. To understand how all these objects might behave, Fyodorov, Hiary and Keating turned to the well known idea that $\zeta(1/2+it)$ behaves like the characteristic polynomial of suitable random matrices. In the random matrix setting, they were able to compute the analogous integrals for a certain range of $q \in \mathbb{N}$ (depending on $\beta$), when $\beta < 1$. Although this amount of information is {\em not} sufficient to rigorously draw conclusions about the maximum, even in the random matrix setting, they noticed that the quantities computed agreed with some analogous integrals arising in statistical mechanics. 
The Fyodorov--Hiary--Keating conjecture then arises from supposing that characteristic polynomials of random matrices, and further the Riemann zeta function, behave in the way suggested by those statistical mechanics models. We shall not say more about Fyodorov, Hiary and Keating's motivation for their conjecture, referring the reader instead to the original papers~\cite{fyodhiarykeat, fyodkeat}, which also describe some interesting numerical evidence. We just note that one of the important features of their statistical mechanics problem is a {\em logarithmic correlation structure}, which we shall discuss much further below. We also note that some parts of Fyodorov, Hiary and Keating's conjectures in the random matrix setting, and about the partition function $\int_{t-1}^{t+1} |\zeta(1/2+ iw)|^{2\beta} dw$, have recently been proved using ideas related to those we shall describe here. See the papers~\cite{abbrandmat, argouiradz, paqzeit}, for example. \medskip Conjecture~\ref{fhkconj} suggests that our earlier heuristic analysis isn't quite right, but almost, since the first order term $\log\log T$ that we obtained was the same. But this suggestion is a little misleading. As we shall now explain, it is possible to modify the heuristic to give another supporting heuristic for Conjecture~\ref{fhkconj} (and possible to prove some of this rigorously, as we shall come to later), but this requires quite careful thought about our Assumptions 2 and 3. Recall that we assumed earlier that $|\zeta(1/2+it+ih_1)|, |\zeta(1/2+it+ih_2)|$ are ``roughly the same'' when $|h_1 - h_2| \leq 1/\log T$, and ``roughly independent'' when $|h_1 - h_2| > 1/\log T$. 
The reason for these starting assumptions is that when $T \leq t \leq 2T$ is large, we have rigorously (the Hardy--Littlewood approximation) that $$ \zeta(1/2 + it) = \sum_{n \leq T} \frac{1}{n^{1/2+it}} + O\Bigl(\frac{1}{\sqrt{T}}\Bigr) , $$ and we have heuristically (as in Principle~\ref{bprin2}) that $$ \log|\zeta(1/2+it)| \approx \Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}} , $$ say. In both of these expressions, the most rapidly varying terms are of the form $e^{-it\log n}$ with $\log n \asymp \log T$. Thus if $t$ varies by less than $1/\log T$, we can expect the sums not to change much, but if $t$ varies by more one starts to see significant variation. (Another possible justification is that the average spacing between imaginary parts of zeta zeros around $T$ is $\asymp 1/\log T$.) The assumption that $\zeta(1/2+it)$ doesn't usually change much when $t$ varies by less than $1/\log T$ is actually very reasonable, at least if one replaces $1/\log T$ by something slightly smaller such as $1/\log^{1.01}T$. But if we look at the sum $\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}}$, although it is true that the terms with $p \approx T^{1/3}$ start to vary when $t$ shifts by more than $1/\log T$, the smaller terms in the sum don't change until $t$ shifts by much more. In some situations (e.g.\ if we looked at $\sum_{p \leq T^{1/3}} p^{-it}$), the size of a sum is dominated by the final terms and so this effect wouldn't matter, but in $\sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}}$ the contributions from different parts of the sum are typically much more equal. So $|\zeta(1/2+it+ih_1)|, |\zeta(1/2+it+ih_2)|$ will {\em not} behave entirely independently just because $|h_1 - h_2| > 1/\log T$. \medskip To explain this more precisely, note that we can decompose \begin{equation}\label{decompeq} \Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}} = \sum_{0 \leq k \leq \log\log T} \Re \sum_{e^{e^{k-1}} < p \leq \min\{e^{e^{k}}, T^{1/3}\}} \frac{1}{p^{1/2+it}} . 
\end{equation} By Principle~\ref{bprin1}, since the inner sums here involve disjoint sets of primes we expect them to behave independently of one another as $t$ varies. The reason for decomposing into sums on these ranges is that, by Lemma~\ref{pvarlem}, each inner sum has very small mean value and has mean square $$ \frac{1}{2} \log\Bigl(\frac{e^k}{e^{k-1}}\Bigr) + O\Bigl(\frac{1}{e^{100k}} + \frac{T^{2/3}}{T}\Bigr) = \frac{1}{2} + O\Bigl(\frac{1}{e^{100k}}\Bigr) . $$ In other words, we have split up into pieces whose typical orders of magnitude are comparable. And for all the terms in the $k$-th sum we have $\log p \asymp e^k$, so the scale of $t$ on which this sum doesn't change much is {\em not} $1/\log T$, but the wider scale $1/e^k$. Another way to capture this phenomenon is to calculate the {\em correlation} between $\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}}$ and $\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it+ih}}$. By exactly the same kind of argument as in Lemma~\ref{pvarlem}, one can show that \begin{multline*} \frac{1}{T} \int_{T}^{2T} \Bigl(\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}}\Bigr) \Bigl(\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it+ih}}\Bigr) dt \approx\\ \left\{ \begin{array}{ll} (1/2)\log\log T & \text{if} \; |h| \leq 1/\log T \\ (1/2)\log(1/|h|) & \text{if} \; 1/\log T < |h| \leq 1. \end{array} \right. \end{multline*} Thus if $|h| \leq 1/\log T$, this average is roughly the same size as the mean square of $\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}}$, in other words the sums are almost perfectly correlated and behave in the same way. As $|h|$ increases, so more and more sums at $t$ and $t+h$ in the decomposition \eqref{decompeq} become decoupled, the correlation goes down and the behaviour of $\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it}}$ and $\Re \sum_{p \leq T^{1/3}} \frac{1}{p^{1/2+it+ih}}$ becomes increasingly different. 
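The stated correlation can also be seen exactly in the randomised model: if the $X_p$ are independent and uniform on the unit circle then, writing $S(h) = \Re \sum_{p \leq P} X_p p^{-1/2 - ih}$, a short computation gives $\mathbb{E}[S(0)S(h)] = \frac{1}{2}\sum_{p \leq P} \cos(h\log p)/p$. The following sketch (illustrative code, not from the original discussion) evaluates this exact covariance and compares it with $\frac{1}{2}\log(1/|h|)$.

```python
import math

# primes up to P by a simple sieve of Eratosthenes
P = 10**6
is_comp = bytearray(P + 1)
primes = []
for n in range(2, P + 1):
    if not is_comp[n]:
        primes.append(n)
        for k in range(n * n, P + 1, n):
            is_comp[k] = 1

log_p = [math.log(p) for p in primes]

def correlation(h):
    # exact covariance of S(0) and S(h) in the randomised model,
    # S(h) = Re sum_{p <= P} X_p p^{-1/2 - ih}, X_p uniform on the circle
    return 0.5 * sum(math.cos(h * lp) / p for p, lp in zip(primes, log_p))

vals = {h: correlation(h) for h in (0.001, 0.1, 0.2, 0.5)}
for h, c in vals.items():
    print(h, c, 0.5 * math.log(1 / h))

# decreasing in |h|, and close to (1/2) log(1/|h|) once 1/log P < |h| <= 1
assert vals[0.001] > vals[0.1] > vals[0.2] > vals[0.5]
assert abs(vals[0.2] - 0.5 * math.log(5)) < 0.3
```

For $|h| \leq 1/\log P$ the sum saturates at roughly its maximal value $\frac{1}{2}\sum_{p \leq P} 1/p \approx \frac{1}{2}\log\log P$, and for larger $|h|$ it decays like $\frac{1}{2}\log(1/|h|)$, which is the logarithmic correlation structure described above.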
It is a general principle that, given Gaussian random variables with mean zero and equal variances, if they are positively correlated then the maximum is smaller (in a distributional sense) than if they were independent. For example, this follows from a very useful probabilistic result called Slepian's Lemma. This is somewhat intuitive, since positive correlations mean that the random variables tend to be big or small together, so we have fewer ``genuinely independent'' tries at obtaining a very large value. Thus we can see, in quite a soft way, that if $\log|\zeta(1/2+it+ih_1)|, \log|\zeta(1/2+it+ih_2)|$ are positively correlated (rather than independent) when $|h_1 - h_2| > 1/\log T$, then $\max_{0 \leq h \leq 1} |\zeta(1/2 + it + ih)|$ should be smaller than our initial analysis predicted. This is fully consistent with Conjecture~\ref{fhkconj}. There is no such soft argument for determining exactly {\em how much smaller} we should expect the maximum to be in the presence of positive correlation, but in recent years the probabilistic tools to do this have become available. \begin{theorem}[Harper, 2013]\label{harperthm} Let $(X_p)_{p \; \textup{prime}}$ be a sequence of independent random variables, each distributed uniformly on the complex unit circle. Then with probability tending to 1 as $T \rightarrow \infty$, we have \begin{align*} & \max_{|h| \leq 1} \Re \sum_{p \leq T} \frac{X_p}{p^{1/2+ih}} \geq \log\log T - 2\log\log\log T - C(\log\log\log T)^{3/4} \\ \text{and}\quad & \max_{|h| \leq 1} \Re \sum_{p \leq T} \frac{X_p}{p^{1/2+ih}} \leq \log\log T - (1/4)\log\log\log T + C\sqrt{\log\log\log T} , \end{align*} where $C > 0$ is a certain absolute constant. \end{theorem} \begin{theorem}[Arguin, Belius and Harper, 2017]\label{abharpthm} Let $(X_p)_{p \; \textup{prime}}$ be a sequence of independent random variables, each distributed uniformly on the complex unit circle. 
Then for any $\epsilon > 0$, with probability tending to 1 as $T \rightarrow \infty$ we have \begin{align*} & \max_{|h| \leq 1} \Re \sum_{p \leq T} \frac{X_p}{p^{1/2+ih}} \geq \log\log T - (3/4 + \epsilon)\log\log\log T \\ \text{and}\quad & \max_{|h| \leq 1} \Re \sum_{p \leq T} \frac{X_p}{p^{1/2+ih}} \leq \log\log T - (3/4 - \epsilon)\log\log\log T . \end{align*} \end{theorem} Strictly speaking, Theorem~\ref{harperthm} is proved for a slightly different sum (with a smooth weight), and both Theorems are proved for slightly different ranges of $h$. But the methods would certainly yield the stated results. Harper~\cite{harperlcz} proves the upper bound in Theorem~\ref{harperthm} using a union bound argument, and proves the lower bound by substituting the logarithmic correlation structure of these sums into general lower bound results for random processes from~\cite{harpergp}. Arguin, Belius and Harper~\cite{abh} prove Theorem~\ref{abharpthm} by working explicitly with a decomposition like \eqref{decompeq}, using methods from the theory of branching random walks. See e.g. Kistler's survey~\cite{kistler} for a description of such methods as applied in many different contexts. Note that the conclusion of Theorem~\ref{abharpthm} exactly agrees, for these randomised prime number sums, with Conjecture~\ref{fhkconj} (although Theorem~\ref{abharpthm} is less precise). So now {\em if} we believe in suitably strong versions of Principle~\ref{bprin1} (so that $\Re \sum_{p \leq T} \frac{1}{p^{1/2+it+ih}}$ behaves like $\Re \sum_{p \leq T} \frac{X_p}{p^{1/2+ih}}$ as $t$ varies) and Principle~\ref{bprin2} (so that $\log|\zeta(1/2+it+ih)|$ is typically close to $\Re \sum_{p \leq T} \frac{1}{p^{1/2+it+ih}}$ as $t$ varies), then we have another strong reason for believing Conjecture~\ref{fhkconj}. \section{Progress towards the conjecture} In this section we describe some rigorous theorems about the zeta function that make progress towards Conjecture~\ref{fhkconj}. 
\begin{theorem}[Najnudel, 2018]\label{thmnajnudel} For any real function $g(T)$ that tends to infinity with $T$, we have $$ \frac{1}{T} \mathrm{meas}\bigl\{0 \leq t \leq T : \max_{|h| \leq 1} \log|\zeta(1/2+it+ih)| \leq \log\log T + g(T) \bigr\} \rightarrow 1 \;\;\; \text{as} \; T \rightarrow \infty . $$ Furthermore, if the Riemann Hypothesis is true then for any $\epsilon > 0$ we have $$ \frac{1}{T} \mathrm{meas}\bigl\{0 \leq t \leq T : \max_{|h| \leq 1} \log|\zeta(1/2+it+ih)| \geq (1-\epsilon)\log\log T \bigr\} \rightarrow 1 \;\;\; \text{as} \; T \rightarrow \infty . $$ \end{theorem} \begin{theorem}[Arguin, Belius, Bourgade, Radziwi\l\l, Soundararajan, 2019]\label{thmabbrs} Najnudel's Theorem is true without the need to assume the Riemann Hypothesis. \end{theorem} Before we turn to the proofs of these results, we make a few explanatory remarks. Najnudel's paper~\cite{najnudel} appeared in preprint form on the arXiv in November 2016, and the independent paper of Arguin, Belius, Bourgade, Radziwi\l\l \; and Soundararajan~\cite{abbrs}, which didn't require the assumption of the Riemann Hypothesis, was posted to the arXiv in December 2016. Najnudel proves analogous results (assuming RH) for the imaginary part of $\log\zeta(1/2+it+ih)$ as well. It is possible, but not certain, that some of these could also be made unconditional using the methods of Arguin, Belius, Bourgade, Radziwi\l\l \; and Soundararajan. \medskip The upper bounds in Theorems~\ref{thmnajnudel} and~\ref{thmabbrs} are much easier than the lower bounds, and aside from differences in detail are proved in similar ways. Essentially the same argument was also sketched at the end of the introduction to the author's preprint~\cite{harperlcz}. 
If we looked at a discrete maximum over points $h = j/\log T$ with $|j| \leq \log T$, instead of the maximum over a continuous interval $|h| \leq 1$, we could argue that \begin{align*} & \frac{1}{T} \mathrm{meas}\Bigl\{0 \leq t \leq T : \max_{|j| \leq \log T} \log\Bigl|\zeta\Bigl(1/2+it+i\frac{j}{\log T}\Bigr)\Bigr| > \log\log T + g(T) \Bigr\} \\ & \leq \sum_{|j| \leq \log T} \frac{1}{T} \mathrm{meas}\Bigl\{0 \leq t \leq T : \log\Bigl|\zeta\Bigl(1/2+it+i\frac{j}{\log T}\Bigr)\Bigr| > \log\log T + g(T) \Bigr\} \\ & \leq \sum_{|j| \leq \log T} \frac{1}{T} \int_{0}^{T} \frac{|\zeta(1/2+it)|^2}{e^{2(\log\log T + g(T))}} dt . \end{align*} It is a classical result of Hardy and Littlewood that $\int_{0}^{T} |\zeta(1/2+it)|^2 dt \sim T\log T$ as $T \rightarrow \infty$, so the right hand side is $\ll e^{-2g(T)}$, which indeed tends to $0$ as $T \rightarrow \infty$. To pass from the continuous maximum to the discrete maximum, one can just use classical analytic techniques such as the Sobolev--Gallagher inequality (essentially estimating the average size of the derivative of $\zeta(1/2+it)$). See e.g.\ the paper of Arguin--Belius--Bourgade--Radziwi\l\l--Soundararajan~\cite{abbrs}. Note that this argument is really quite similar to the heuristic one we gave before, with the second moment asymptotic for the zeta function (which is an exponential moment calculation for $\log|\zeta(1/2+it)|$) providing the necessary large deviation estimate for $\log|\zeta(1/2+it)|$. The fact that we don't get the extra subtracted term $-(1/4)\log\log\log T$ in the rigorous argument reflects a standard inefficiency when bounding large deviation probabilities/measures using exponential moments. 
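The "standard inefficiency" just mentioned is easy to quantify for a single Gaussian. The sketch below (illustrative code, not from the original discussion) takes $X \sim N(0, \frac{1}{2}L)$ with $L$ playing the role of $\log\log T$, and compares the true tail $\mathbb{P}(X \geq u)$ at $u = L$ with the exponential moment bound $\mathbb{E} e^{2X}/e^{2u} = e^{L - 2u}$; here $t = 2$ mirrors the use of the second moment $\int |\zeta|^2$, and the bound loses a polynomial factor $\asymp 1/\sqrt{L}$, which is exactly what produces the missing subtracted term $\frac{1}{4}\log\log\log T$.

```python
import math

L = 10.0                      # plays the role of log log T
sigma = math.sqrt(L / 2.0)    # X ~ N(0, L/2), the Selberg CLT variance
u = L                         # threshold at the first-order maximum size

# true tail P(X >= u), via the complementary error function
true_tail = 0.5 * math.erfc(u / (sigma * math.sqrt(2.0)))

# Markov/exponential-moment bound: E[e^{2X}] = e^{L}, so
# P(X >= u) <= e^{L - 2u}
markov_bound = math.exp(L - 2.0 * u)

print(true_tail, markov_bound, true_tail / markov_bound)

# the bound is valid, but lossy by roughly the factor 1/sqrt(4*pi*L)
assert true_tail < markov_bound
assert 0.01 < true_tail / markov_bound < 0.2
```

Shifting the threshold $u$ down by $\frac{1}{4}\log L$ changes the true tail by a factor $\approx \sqrt{L}$, which is how the polynomial loss in the Markov bound translates into the absent $\frac{1}{4}\log\log\log T$ term.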
\medskip To prove the lower bound in Theorem~\ref{thmnajnudel}, Najnudel's main number theoretic input is a striking estimate of the following shape: if the Riemann Hypothesis is true, and if $t$ is large and $1 \leq x \ll t$ is a parameter, then \begin{equation}\label{najlower} \max_{|h| \leq 1} \log|\zeta(1/2+it+ih)| \gtrsim \max_{|h| \leq 1/2} \Re \sum_{p \leq x} \frac{1}{p^{1/2+it+ih}} + O\Bigl(\frac{\log t}{(\log x)^{C}} + \frac{x^C}{t} \Bigr) . \end{equation} The reader should compare this with Soundararajan's upper bound \eqref{soundupper}. The correct statement of this lower bound is a bit more complicated, in particular the sum $\sum_{p \leq x} \frac{1}{p^{1/2+it+ih}}$ should really be an infinite sum with a smooth cutoff that decays when $p > x$, and there is some contribution from prime squares as well. But to get an idea of the argument one can just think of \eqref{najlower}. As in many similar situations (e.g.\ Soundararajan's~\cite{soundmoments} proof of \eqref{soundupper}), Najnudel assumes the Riemann Hypothesis when proving \eqref{najlower} to avoid the appearance of other large terms corresponding to possible zeros of the zeta function off the critical line. This reflects the general duality between prime numbers being well distributed, Euler product type formulae roughly holding, and the zeros of the zeta function being well behaved, as discussed at the very beginning of this paper. The other important thing to note here is the role played by the maximum over $h$. We have remarked several times that it would be impossible to prove a pointwise lower bound comparable to \eqref{soundupper} or \eqref{najlower}, because at a zero of the zeta function the prime number sum is finite but log zeta becomes undefined. Roughly speaking, in the course of proving \eqref{najlower} Najnudel exploits the fact that $$ \max_{|h| \leq 1/\log^{0.99}x} \log|\zeta(1/2+it+ih)| \geq \frac{\log^{0.99}x}{2} \int_{|h| \leq 1/\log^{0.99}x} \log|\zeta(1/2+it+ih)| dh . 
$$ On the one hand, one can cover the interval $|h| \leq 1$ by small intervals of length $2/\log^{0.99}x$ (with a small error at the ends, hence the change to the interval $|h| \leq 1/2$ on the right hand side of \eqref{najlower}), and hope that replacing the maximum in each small interval by its average (whilst still taking the maximum {\em over} all the intervals) won't reduce the size too much. On the other hand, since an interval of length $2/\log^{0.99}x$ is large compared with the average spacing $\asymp 1/\log t$ of zeta zeros with imaginary part around $t$, by integrating over such an interval one smooths out (and removes the effect of) the blow-up at the zeros. The inequality \eqref{najlower} is the manifestation of Principle~\ref{bprin2} in Najnudel's argument. Having passed to prime number sums, with some flexibility in the choice of the length $x$, Najnudel shows that they behave like sums of independent random variables (Principle~\ref{bprin1}) by moment calculations, similarly as discussed following Lemma~\ref{bcorrlem}. Thus he can argue about the size of $\max_{|h| \leq 1/2} \Re \sum_{p \leq x} \frac{1}{p^{1/2+it+ih}}$ with a similar style of argument, motivated by branching random walk, as Arguin, Belius and Harper~\cite{abh} used for their randomised model of zeta. \medskip For their unconditional lower bound, Arguin, Belius, Bourgade, Radziwi\l\l \; and Soundararajan use a result like Proposition~\ref{radzsoundapprox} to serve as their realisation of Principle~\ref{bprin2}. The choices of $W$ and $P$ are a bit different than in Proposition~\ref{radzsoundapprox}, but the proof is essentially the same as the one we sketched for that proposition. To apply this to give lower bounds for $\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)|$, a couple of other auxiliary manoeuvres are required. 
Since Proposition~\ref{radzsoundapprox} concerns points slightly off the critical line, one wants to know that for most $t$, if there is a large value slightly off the critical line there will also be one nearby on the critical line. This is swiftly proved using, essentially, an average bound for the size of the derivative of zeta, obtained by manipulating the Hardy--Littlewood approximation $\zeta(1/2 + it) = \sum_{n \leq T} \frac{1}{n^{1/2+it}} + O(\frac{1}{\sqrt{T}})$. Also, whereas Proposition~\ref{radzsoundapprox} supplies information at most individual points $T \leq t \leq 2T$, Arguin--Belius--Bourgade--Radziwi\l\l--Soundararajan need results that hold for most {\em intervals} $[t-1,t+1]$. This extension is obtained by noting that in the proof of Proposition~\ref{radzsoundapprox}, the individual steps (such as the approximation $\zeta(s) M(s) = 1+o(1)$) hold uniformly for most intervals $[t-1,t+1]$, thanks again to classical Sobolev--Gallagher type manipulations. By shifting a little off the critical line, and only seeking to approximate (the shifted version of) $\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)|$ by prime number sums for {\em most} $t$, Arguin--Belius--Bourgade--Radziwi\l\l--Soundararajan can avoid Najnudel's appeal to the Riemann Hypothesis. Having reached this stage, moment calculations with the prime number sums again show that they behave like sums of independent random variables (Principle~\ref{bprin1}), and one can conclude with a branching random walk style argument. \medskip We finish with a glance at what remains to be done to prove Conjecture~\ref{fhkconj}. Both Theorems~\ref{thmnajnudel} and~\ref{thmabbrs} are less precise than the conjecture, but it seems quite reasonable to think that the methods have not yet been fully perfected, so that more precise results could be extracted. 
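To give a flavour of the branching random walk arguments invoked above (this is a toy illustration, not the construction used in any of the proofs), consider a binary branching random walk with standard Gaussian increments: $2^n$ leaves whose values are logarithmically correlated, in the same way as the decomposition \eqref{decompeq}. Its maximum is known to concentrate around $n\sqrt{2\log 2} - \frac{3}{2\sqrt{2\log 2}}\log n$, a deterministic first-order term minus a logarithmic correction, exactly the shape appearing in Conjecture~\ref{fhkconj}.

```python
import math
import random

rng = random.Random(2)

def brw_max(n_levels):
    """Maximum over the 2**n_levels leaves of a binary branching random
    walk: each edge of the binary tree carries an independent N(0,1)
    increment, and a leaf's value is the sum along its root-to-leaf path."""
    values = [0.0]
    for _ in range(n_levels):
        # each parent value v spawns two children with fresh increments
        values = [v + rng.gauss(0.0, 1.0) for v in values for _ in range(2)]
    return max(values)

n = 14
trials = 5
avg_max = sum(brw_max(n) for _ in range(trials)) / trials

leading = n * math.sqrt(2 * math.log(2))
corrected = leading - 1.5 / math.sqrt(2 * math.log(2)) * math.log(n)
print(avg_max, leading, corrected)

# the maximum falls visibly below the first-order prediction, and lies
# near the log-corrected value (up to O(1) finite-size effects)
assert avg_max < leading
assert abs(avg_max - corrected) < 2.5
```

The branching structure here is only an analogy for the scale decomposition of the prime sums, but it is the source of the probabilistic technology used by Arguin--Belius--Harper, Najnudel, and Arguin--Belius--Bourgade--Radziwi\l\l--Soundararajan.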
On the other hand, to increase the precision in these methods one needs to approximate the zeta function by prime number sums that are longer, and at points that are closer to the critical line. At a certain point the influence of the zeta zeros, and (more technically) of off-diagonal terms that would start to appear in the analysis, obstructs progress. Because the scale $\log\log T$ on which one is working grows so slowly with $T$, one has quite a lot of flexibility in truncating sums, etc. if one just wants to get close to the answer, but this starts to disappear if one wants a precise answer. One particular landmark en route to proving Conjecture~\ref{fhkconj}, which might be achievable, would be to prove that usually $\max_{|h| \leq 1} \log|\zeta(1/2+it+ih)| \leq \log\log T - c\log\log\log T$ for some $c > 1/4$. The Conjecture predicts that one can take $c = 3/4 + o(1)$, whereas we have seen (in our initial heuristic argument) that one would get $c = 1/4 + o(1)$ if $|\zeta(1/2+it+ih_1)|, |\zeta(1/2+it+ih_2)|$ behaved ``roughly independently'' when $|h_1 - h_2| > 1/\log T$. We saw in our later analysis that this shouldn't really be the case, and proving an upper bound with some fixed $c > 1/4$ would give a concrete (if rather subtle) manifestation of this failure of independence. \medskip {\em Acknowledgements.} The author would like to thank Louis-Pierre Arguin, Paul Bourgade, and N.~Bourbaki for their comments and suggestions on a draft of this paper.
\section{Preliminaries} \newcommand{\V}{\mathcal{V}} \newcommand{\U}{\mathcal{U}} Let $K$ be a finite field of size $\card{K}= q$ and let $W$ be a vector space over $K$ of dimension greater than one. Let $m$ be a positive integer and let $V_i, U_i \subseteq W$ be vector spaces, where $i \in \myset{m}$. Recall that for a pair of sets $X \subseteq Y$ the indicator function $\id_X : Y \rightarrow \{0,1\}$ is defined as $\id_X(x) = 1$ for $x \in X$ and $\id_X(x) = 0$ otherwise. The following equation is called an \emph{isometry equation}, \begin{equation}\label{eq-main-counting-space} \sum_{i=1}^m \frac{1}{\card{V_i}} \id_{V_i}= \sum_{i=1}^m \frac{1}{\card{U_i}} \id_{U_i}\;. \end{equation} Denote $\V = (V_1, \dots, V_m)$, $\U = (U_1,\dots,U_m)$ and call these \emph{tuples of spaces}. A pair of tuples $(\U, \V)$ is called a \emph{solution} if it satisfies \cref{eq-main-counting-space}. The easiest way to find a solution is to choose any spaces $V_1, \dots, V_m \subseteq W$ and define $U_i = V_{\pi(i)}$, for some permutation $\pi \in S_m$, where $i \in \myset{m}$. We say that tuples $\V$ and $\U$ are \emph{equivalent} ($\U \sim \V$) if there exists a permutation $\pi \in S_m$ such that $V_i = U_{\pi(i)}$, for all $i \in \myset{m}$. Such a solution $(\U,\V)$, where the tuples $\U$ and $\V$ are equivalent, is called \emph{trivial}. Note that the defined equivalence of tuples is indeed an equivalence relation. We say that two pairs $(\U,\V)$ and $(\U',\V')$ are equivalent (denoted $(\U,\V) \sim (\U',\V')$) if $\U \sim \U'$, $\V \sim \V'$ or $\V \sim \U'$, $\U \sim \V'$. The defined equivalence of pairs is also an equivalence relation on the set of all pairs of tuples of spaces. A pair $(\U,\V)$ is a solution if and only if any equivalent pair is a solution. Moreover, $(\U,\V)$ is a trivial solution if and only if any equivalent pair is a trivial solution. In general, not all the solutions are trivial. Denote by $\mathbb{P}_1(K)$ the projective space of dimension one over $K$. 
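Such nontrivial solutions can be verified mechanically in small cases. The sketch below (illustrative code, not part of the paper) checks \cref{eq-main-counting-space} pointwise over $K = \mathbb{F}_2$ and $W = K^2$, with $m = q + 1 = 3$, taking $\V = (W, W, \{0\})$ and for $\U$ the three lines of $W$; exact rational arithmetic avoids any rounding issues.

```python
from fractions import Fraction

W = [(a, b) for a in (0, 1) for b in (0, 1)]      # the vector space F_2^2
zero = frozenset({(0, 0)})
line = lambda v: frozenset({(0, 0), v})           # one-dimensional subspace
whole = frozenset(W)

# q = 2 copies of the whole space, then the zero subspace
V_tuple = [whole, whole, zero]
# the q + 1 = 3 hyperplanes (lines) of W containing {0}
U_tuple = [line((0, 1)), line((1, 0)), line((1, 1))]

def side(spaces, x):
    """Value at x of sum_i |space_i|^{-1} * indicator of space_i."""
    return sum(Fraction(1, len(s)) for s in spaces if x in s)

# the isometry equation holds at every point of W ...
assert all(side(V_tuple, x) == side(U_tuple, x) for x in W)
# ... yet the two tuples are not permutations of one another
assert sorted(map(sorted, V_tuple)) != sorted(map(sorted, U_tuple))
```

This is exactly the shape of the Type A construction defined next, with $S = \{0\}$ and $k = 0$.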
Note that $\card{\mathbb{P}_1(K)} = \card{K} + 1 = q + 1$. For $m = q + 1$ there exists an example of a nontrivial solution. \begin{definition} A pair $(\U,\V)$ is called a pair of \emph{Type A}, if there exist a subspace $S \subseteq W$ of dimension $k$ and two linearly independent vectors $a,b \in W$, with $S \cap \gen{a,b} = \{0\}$, such that $V_1 = \dots = V_{q} = \gen{S,a,b}$, $V_{q+1} = S$ and $U_i = \gen{S, \alpha_i a + \beta_i b}$, for $i \in \myset{q+1}$, where $[\alpha_i: \beta_i]$ is the $i$th element of $\mathbb{P}_1(K)$ under some fixed enumeration. \end{definition} In fact, the spaces $U_1,\dots, U_m$ from a pair of Type A are exactly the $q+1$ different hyperplanes in $\gen{S,a,b}$ that contain the subspace $S$. In \cite{d1} it was proved that a pair of Type A is a nontrivial solution. Indeed, denote $V = \gen{S, a, b}$, then \begin{equation*} \sum_{i=1}^{q+1} \term{V_i} = q \summand{q^{k+2}}{V} + \summand{q^{k}}{S} = \frac{1}{q^{k+1}} \left( \id_V + q \id_S \right) \;, \end{equation*} \begin{equation*} \sum_{i=1}^{q+1} \term{U_i} = \frac{1}{q^{k+1}}\sum_{i=1}^{q+1} \id_{U_i} = \frac{1}{q^{k+1}} \left( \id_V + q \id_S \right)\;. \end{equation*} Evidently, a solution of Type A is nontrivial. The inclusion diagram of spaces from a pair of Type A is presented in \Cref{fig-type-a}. \begin{figure}[!ht] \centering \begin{tikzpicture} \node (top) at (0,2) {$V_1$}; \node (topr) at (1.5,2) {$ = \dots = V_{m-1}$}; \node (l) at (-1, 1) {$U_1$}; \node (c) at (0, 1) {$\dots$}; \node (r) at (1, 1) {$U_m$}; \node (down) at (0, 0) {$V_m$}; \draw (top) -- (l) -- (down) -- (r) -- (top); \end{tikzpicture} \caption{Solution of Type A} \label{fig-type-a} \end{figure} \comment{In this paper we are concerned with the nontrivial solutions. The study of nontrivial solutions is important because it is related to the description of code isometries. In \cite{d1} it was proved that there exists a nontrivial solution if and only if there exists an unextendible additive isometry of an additive code. 
Some properties of nontrivial solutions, and minimal requirements for their existence, were also proven there. } To classify all the solutions, up to equivalence, for some $m$, we have to describe all trivial and all nontrivial solutions. For trivial solutions the task is easy --- all such solutions are parametrized by tuples of length $m$ whose entries are subspaces of $W$. The case of nontrivial solutions is more complicated. We introduce several important properties of nontrivial solutions that we will use later. \begin{lemma}\label{lemma-minimum-covering-number-of-one-space} Let $V$ be a nonzero vector space over $K$ and let $U_i \subset V$ be proper subspaces, for $i \in \myset{m}$. If $V = \bigcup_{i =1}^m U_i$, then $m$ is greater than the cardinality of ${K}$. \end{lemma} \begin{proof} For any $i \in \myset{m}$, $\dim_K U_i \leq \dim_K V - 1$ and hence $\card{U_i} \leq \frac{\card{V}}{\card{K}}$. Since the zero vector belongs to every $U_i$, the union is not disjoint, and thus \begin{equation*} \card{V} < \sum_{i =1}^m \card{U_i} \leq m \frac{\card{V}}{\card{K}} \,\, , \end{equation*} which implies $m > \card{K}$. \end{proof} \begin{lemma}\label{lemma-minimum-size-of-space-equation} Let $U_1,\dots, U_r, V_1, \dots, V_s$ be pairwise different vector spaces over $K$. Assume that $a_1,\dots, a_r, b_1,\dots, b_s > 0$ and \begin{equation*} \sum_{i = 1}^{r} a_i \id_{U_i} = \sum_{i = 1}^{s} b_i \id_{V_i} \,\, . \end{equation*} Then $\max\{r,s\}$ is greater than the cardinality of ${K}$. \end{lemma} \begin{proof} Among the spaces $V_1, \dots, V_s, U_1, \dots, U_r$ choose one that is maximal under inclusion. It is either $V_i$ for some $i \in \myset{s}$, or $U_j$ for some $j \in \myset{r}$. In the first case $V_i = \bigcup_{j = 1}^{r} (V_i \cap U_j)$, where for all $j \in \myset{r}$ the intersection $V_i \cap U_j \subset V_i$ is proper, since the spaces are pairwise different and $V_i$ is maximal. From Lemma \ref{lemma-minimum-covering-number-of-one-space}, $r > \card{K}$. Similarly, in the second case $s > \card{K}$. 
\end{proof} \begin{proposition}\label{thm-nontrivial-solution-minimal-length} There exists a nontrivial solution if and only if $m \geq q +1$. \end{proposition} \begin{proof} Let $(\U,\V)$ be a nontrivial solution. Simplify \cref{eq-main-counting-space} by combining equal terms and cancelling the spaces that appear on both sides. The resulting equation has the form of the equation from \Cref{lemma-minimum-size-of-space-equation} and therefore $m > q$. Conversely, let $(\U,\V)$ be of Type A. If $m = q+ 1$ we have already shown that $(\U,\V)$ is a nontrivial solution. If $m > q + 1$, let $X_1,\dots, X_{m-q-1}$ be arbitrary subspaces of $W$. Define tuples $\U',\V'$ by adding all these spaces to both tuples $\U$ and $\V$. The pair $(\U',\V')$ is a nontrivial solution. \end{proof} In this paper our objective is to describe, up to equivalence, all solutions of \cref{eq-main-counting-space} for $m = q + 1$, and to study their properties. As we mentioned above, this task reduces to the description of nontrivial solutions. From \Cref{thm-nontrivial-solution-minimal-length} we see the importance of coverings of a vector space by proper subspaces. In the case of exactly $q + 1$ covering subspaces there is a complete description of all such coverings. \begin{lemma}[see \cite{d1}]\label{lemma-minimum-dense-covering-desc} Let $V$ be a nonzero space over $K$. Let $W_i \subset V$, for $i \in \myset{q+1}$, be proper subspaces and $V = \bigcup_{i = 1}^{q+1} W_i$. Then there exists a subspace $S \subset V$ of codimension $2$ such that $\{W_1, \dots, W_{q+1}\}$ is the set of all hyperplanes in $V$ that contain $S$. Moreover, the following equality holds, \begin{equation}\label{eq-dense-covering} \sum_{i=1}^{q+1} \id_{W_i} = \id_V + q \id_S \;. \end{equation} \end{lemma} \begin{proof} The proof is based on the fact, established in \Cref{lemma-minimum-covering-number-of-one-space}, that a finite $K$-linear space cannot be covered by fewer than $q+1$ proper subspaces. 
Using the sieve (inclusion--exclusion) formula for the size of the covering, it is easy to prove that the only possible covering by the minimum number of proper subspaces is the one presented in the statement. \Cref{eq-dense-covering} then follows directly. \end{proof} \section{Properties of minimal nontrivial solutions} In \cite{d1} we completely described the solutions in which the two tuples have different maximum dimensions of spaces, that is, $\max_{1 \leq i \leq m} \dim_K V_i \neq \max_{1 \leq i \leq m} \dim_K U_i$. \begin{proposition}[see \cite{d1}]\label{thm-type-a} Let $(\U,\V)$ be a nontrivial solution and $$\max_{1 \leq i \leq q+1} \dim_K V_i > \max_{1 \leq i \leq q+1} \dim_K U_i\;.$$ Then $(\U,\V)$ is equivalent to a solution of Type A. \end{proposition} \begin{proof} The proof is based on \Cref{lemma-minimum-dense-covering-desc}. Due to the different maximum dimensions, without loss of generality, the space $V_1$ has dimension greater than one and is covered by the spaces $U_1,\dots, U_{q+1}$. \end{proof} \emph{From now on in this section we suppose that $(\U, \V)$ is a nontrivial solution, $m = q+1$, $\max_{1 \leq i \leq m} \dim_K V_i = \max_{1 \leq i \leq m} \dim_K U_i = n > 1$ and the maximum is achieved on the spaces $V_1$ and $U_1$}. \begin{lemma}\label{lemma-properties} For all $i,j \in \myset{m}$ the following hold, \begin{itemize} \item[(a)] $U_i \neq V_j$, \item[(b)] $V_i \subseteq V_j$ implies $i = j$, \item[(c)] $\dim_K V_i \geq n-1$, $\dim_K U_j \geq n - 1$, \item[(d)] if $\dim_K U_j > \dim_K V_i$, then $V_i \subset U_j$, \item[(e)] if $\dim_K U_j = n$, then there exists a subspace $S \subset U_j$ of dimension $n - 2$ such that $S \subset V_i$ for all $i \in \myset{m}$. \end{itemize} \end{lemma} \begin{proof} Assume that (a) does not hold. Without loss of generality, we can assume that $U_m = V_m$. Reduce \cref{eq-main-counting-space} to $\sum_{i = 1}^{m-1} \frac{1}{\card{V_i}} \id_{V_i} = \sum_{i=1}^{m-1} \frac{1}{\card{U_i}} \id_{U_i}$. 
The pair $\big( (V_1,\dots,V_{m-1}), (U_1,\dots,U_{m-1}) \big)$ is therefore a nontrivial solution of the new equation and, by \Cref{thm-nontrivial-solution-minimal-length}, $m - 1 \geq q +1$, which contradicts the fact that $m = q+1$. Let $l \in \myset{m}$ be such that $\dim_K U_l = n$. Assume that for some $i,j \in \myset{m}$, $V_i \subseteq V_j$. Then $V_i \cap U_l \subseteq V_j \cap U_l$. Since $U_l$ has the largest dimension among all the spaces, according to (a), for all $t \in \myset{m}$, $V_t \cap U_l \neq U_l$. Also $U_l = \bigcup_{t = 1}^{m} V_t \cap U_l$. From \Cref{lemma-minimum-dense-covering-desc}, since $m = q+1$, the spaces $V_t \cap U_l$, for $t \in \myset{m}$, are all different hyperplanes in $U_l$. Thus $V_i \cap U_l \subseteq V_j \cap U_l$ implies $i = j$ and $\dim_K V_i \cap U_l = n - 1$, for $i,j \in \myset{m}$. Therefore we have (b) and (c). If $\dim_K V_t = n - 1$, then $V_t = V_t \cap U_l \subset U_l$, which proves (d). Also, from \Cref{lemma-minimum-dense-covering-desc} there exists a space $S \subset U_l$ of dimension $n-2$, such that $S \subset V_t \cap U_l \subseteq V_t$, for all $t \in \myset{m}$. This proves (e). \end{proof} \begin{lemma}\label{lemma-common-subspace} There exists a space $S$ of dimension $n-2$ such that for all $i \in \myset{m}$, $S \subset U_i, V_i$. \end{lemma} \begin{proof} From \Cref{lemma-properties} (e), there exists a subspace $S \subset U_1$ of dimension $n-2$ such that for all $i \in \myset{m}$, $S \subset V_i$. Restrict both sides of \cref{eq-main-counting-space} to the space $S$. As a result we get, \begin{equation*} \sum_{i = 1}^{m} \frac{1}{\card{V_i}} \id_{S} = \frac{1}{\card{U_1}}\id_{S} + \sum_{i = 2}^m \frac{1}{\card{U_i}} \id_{U_i \cap S}\; \iff \end{equation*} \begin{equation}\label{t3e2} \iff \left(\sum_{i = 1}^{m} \frac{1}{\card{V_i}} - \frac{1}{\card{U_1}}\right) \id_{S} = \sum_{i = 2}^m \frac{1}{\card{U_i}} \id_{U_i \cap S}\;.
\end{equation} Evaluating \cref{eq-main-counting-space} at zero, we get the equality $\sum_{i = 1}^{m} \frac{1}{\card{V_i}} = \sum_{i = 1}^{m} \frac{1}{\card{U_i}}$. Thus the coefficient on the left side of \cref{t3e2} is positive. On the right side of \cref{t3e2} there are $m-1$ terms and therefore, by \Cref{lemma-minimum-size-of-space-equation}, there exists $i \in \{2,\dots, m\}$, without loss of generality assume $i = 2$, such that $S \subset U_2$. Continuing this procedure of elimination for all $i \in \{3,\dots,m\}$, we get $S \subset U_i$ for all $i \in \myset{m}$. \end{proof} \Cref{lemma-properties} (c) states that in a nontrivial solution the only possible dimensions of the spaces are $n-1$ and $n$. Denote $X = \{ i \in \myset{m} \mid \dim_K V_i = n - 1 \}$ and $Y = \{ i \in \myset{m} \mid \dim_K U_i = n - 1 \}$. \begin{lemma}\label{lemma-two-types} The cardinalities of $X$ and $Y$ are equal and they are not greater than one. \end{lemma} \begin{proof} First we verify that $\card{X} = \card{Y}$. Evaluate \cref{eq-main-counting-space} at the point $0$. Since all the spaces contain zero, we have $\sum_{i = 1}^{m} \frac{1}{\card{V_i}} = \sum_{i = 1}^{m} \frac{1}{\card{U_i}}$ or, equivalently, $\card{X} \frac{1}{q^{n-1}} + (m - \card{X}) \frac{1}{q^n} = \card{Y} \frac{1}{q^{n-1}} + (m - \card{Y}) \frac{1}{q^n}$. Hence $\card{X} = \card{Y}$. From \Cref{lemma-properties} (d), we have the inclusions, \begin{equation}\label{eq-xy-cap-cup} \bigcup_{i \in X} V_i \subseteq \bigcap_{i \notin Y} U_i \text{ and } \bigcup_{i \in Y} U_i \subseteq \bigcap_{i \notin X} V_i \;. \end{equation} We now prove that $\card{X} > 1$ implies $\card{Y} \geq m - 1$. Assume, for the sake of contradiction, that $\card{Y} < m - 1$. The inequality $\card{X}>1$ implies that there exist $i \neq j \in \myset{m}$ such that $\dim_K V_i = \dim_K V_j = n - 1$. By \Cref{lemma-properties} (b), $V_i \neq V_j$ and, by \Cref{lemma-properties} (e), $\dim_K V_i \cap V_j = n-2$.
From \cref{eq-xy-cap-cup}, $V_i \cup V_j \subseteq \bigcap_{t \notin Y} U_t$ and, using the fact that $\card{ \myset{m} \setminus Y } \geq 2$, we have, \begin{equation*} 2q^{n-1} - q^{n-2} = \card{V_i} + \card{V_j} - \card{V_i \cap V_j} = \card{V_i \cup V_j} \leq \card{\bigcap_{t \notin Y} U_t} \leq q^{n - 1}\;. \end{equation*} This inequality does not hold and hence, by contradiction, $\card{Y} \geq m - 1$. The general assumption in this section is $\dim_K V_1 = n$, hence $1 \notin X$ and $\card{X} \leq m - 1$. Combining this with the result above, $\card{X} > 1$ implies $\card{Y} = \card{X} = m - 1$. We now prove that $\card{X} = m - 1$ is impossible. Assume that $\card{X} = m - 1$. This means that $\dim_K V_1 = \dim_K U_1 = n$ and $\dim_K U_i = \dim_K V_i = n -1$, for $i \in \{2,\dots,m\}$. Using \Cref{lemma-properties}, it is easy to see that $U_1 \cap V_i = V_i$ and $U_1 \cap U_i = S$, for $i \in \{2,\dots,m\}$, where $S \subset V_1$ is a space from \Cref{lemma-properties} (e) with $\dim_K S = n - 2$ and for all $i \in \myset{m}$, $S \subset U_i$. Calculate the restriction of \cref{eq-main-counting-space} to $U_1$, \begin{equation}\label{teq} \frac{1}{q^n}\id_{U_1} + \frac{1}{q^{n-1}}\sum_{i=2}^m \id_{S} = \frac{1}{q^n} \id_{V_1 \cap U_1} + \frac{1}{q^{n-1}}\sum_{i = 2}^{m} \id_{V_i}\,\,, \end{equation} \Cref{teq} implies $U_1 = (V_1 \cap U_1) \cup \bigcup_{i=2}^m V_i$, where the spaces $V_1 \cap U_1$, $V_2, \dots, V_m$ do not equal $U_1$. From \Cref{lemma-minimum-dense-covering-desc}, $\id_{U_1} + q \id_S = \sum_{i=2}^m \id_{V_i} + \id_{V_1 \cap U_1}$ on $U_1$. Substituting this into \cref{teq}, we get $\sum_{i=2}^m \id_{V_i} = q \id_S$, which is false. By contradiction, $\card{X} < m - 1$. As a result, there are only two possibilities, $\card{X} = \card{Y} = 0$ and $\card{X} = \card{Y} = 1$. \end{proof} So, there exist at most two possible dimension vectors for nontrivial solutions, with $\card{X} = \card{Y} = 1$ and $\card{X} = \card{Y} = 0$.
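This dichotomy can be confirmed by exhaustive search in the smallest case $K = \mathbb{F}_2$, $m = 3$. The following Python sketch is an illustration added for this exposition (it is not part of the original argument): it encodes vectors of $\mathbb{F}_2^3$ as bitmasks, treats a solution as nontrivial when the two multisets of subspaces differ, enumerates all pairs of $3$-tuples of subspaces satisfying \cref{eq-main-counting-space}, and checks that every nontrivial solution with equal maximal dimension $n > 1$ indeed has $\card{X} = \card{Y} \leq 1$.

```python
from itertools import product
from collections import Counter, defaultdict

# Exhaustive check of the dichotomy over K = F_2 (q = 2, m = q + 1 = 3).
# Vectors of W = F_2^3 are encoded as bitmasks 0..7; vector addition is XOR.

def span(gens):
    """F_2-linear span: close {0} together with the generators under XOR."""
    s = {0}
    changed = True
    while changed:
        changed = False
        for x in list(s):
            for g in gens:
                if x ^ g not in s:
                    s.add(x ^ g)
                    changed = True
    return frozenset(s)

subspaces = {span(g) for g in product(range(8), repeat=3)}
assert len(subspaces) == 16          # 1 + 7 + 7 + 1 subspaces of F_2^3

dim = lambda V: len(V).bit_length() - 1

# Group all 3-tuples by the function 8 * sum_i (1/|V_i|) 1_{V_i}; two tuples
# solve the main counting equation iff they share this integer signature.
buckets = defaultdict(list)
for T in product(sorted(subspaces, key=sorted), repeat=3):
    sig = tuple(sum(8 // len(V) for V in T if x in V) for x in range(8))
    buckets[sig].append(T)

found = 0
for group in buckets.values():
    for Vt in group:
        for Ut in group:
            if Counter(Vt) == Counter(Ut):   # same multiset: trivial solution
                continue
            n = max(map(dim, Vt))
            if n != max(map(dim, Ut)) or n < 2:
                continue                     # only the equal-max-dimension case
            X = sum(1 for V in Vt if dim(V) == n - 1)
            Y = sum(1 for U in Ut if dim(U) == n - 1)
            assert X == Y and X <= 1
            found += 1
assert found > 0
print("all", found, "nontrivial equal-max-dim solutions have |X| = |Y| <= 1")
```

The search confirms that, in this smallest case, the only nontrivial solutions with equal maximal dimensions are those with $\card{X} = \card{Y} = 1$, matching the lemma.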
Let $S$ be the space with $\dim_K S = n - 2$ such that for all $i \in \myset{m}$, $S \subset U_i, V_i$. Let $Z_{ij}$ denote the space $U_i \cap V_j$ for $i, j \in \myset{m}$. Without loss of generality, assume that if $\card{X} = \card{Y} = 1$, then $\dim_K V_m = \dim_K U_m = n - 1$. \begin{lemma}\label{lemma-zij-properties} For all $i, j \in \myset{m}$ the following statements hold: \begin{itemize} \item [(a)] if $i \neq j$, then $\dim_K Z_{ij} = n -1$, \item [(b)] if $\dim_K V_j = n$, then \begin{equation}\label{teq0} \sum_{i = 1}^m \id_{Z_{ij}} = \id_{V_j} + q\id_{S}\;, \end{equation} \item [(c)] for all $k,l \in \myset{m}$, such that $i \neq k$ or $j \neq l$, $Z_{ij} \cap Z_{kl} = S$. \end{itemize} \end{lemma} \begin{proof} Note that, for all $j \in \myset{m}$, we have $\bigcup_{i = 1}^m Z_{ij} = V_j$. Also, by \Cref{lemma-properties} (a) and (c), $\dim_K Z_{ij} = \dim_K U_i \cap V_j \leq n - 1$. If $\dim_K V_j = n$, then $Z_{ij}\subset V_j$ and, by \Cref{lemma-minimum-dense-covering-desc}, the spaces $Z_{ij}$ for $i \in \myset{m}$ form a covering of $V_j$ by hyperplanes that intersect in $S$. Therefore, for all $j \in \myset{m}$ such that $\dim_K V_j = n$, and for all $i \in \myset{m}$, $\dim_K Z_{ij} = n -1$ and \cref{teq0} holds. If $j = m$, $\dim_K V_m = n - 1$ and $i \neq m$ then, by \Cref{lemma-properties} (d), $Z_{im} = V_m$ and $Z_{mi} = U_m$, hence $\dim_K Z_{im} = \dim_K Z_{mi} = n - 1$. Consider again the equality $\bigcup_{i = 1}^m Z_{ij} = V_j$, for all $j \in \myset{m}$, and calculate $Z_{ij} \cap Z_{kl} = U_i \cap V_j \cap U_k \cap V_l = U_i \cap U_k \cap V_j \cap V_l$. By \Cref{lemma-properties}, $n -2 \leq \dim_K U_i \cap U_k, \dim_K V_j \cap V_l \leq n - 1$. If one of the two spaces $U_i \cap U_k$ or $V_j \cap V_l$ has dimension $n - 2$, then it is equal to $S$ and therefore $Z_{ij} \cap Z_{kl} = S$. Consider the case $\dim_K U_i \cap U_k = \dim_K V_j \cap V_l = n - 1$. This implies $\dim_K U_i = \dim_K U_k = \dim_K V_j = \dim_K V_l = n$.
In this case, if $i \neq k$, $S \subseteq Z_{ij} \cap Z_{kl} \subseteq U_i \cap U_k \cap V_j = U_i \cap V_j \cap U_k \cap V_j = Z_{ij} \cap Z_{kj} = S$, since $Z_{ij}$ and $Z_{kj}$ are different hyperplanes in $V_j$. The same holds if $j \neq l$. \end{proof} \section{Detailed description of minimal nontrivial solutions} In this section we keep all the assumptions and notations from the previous section. \begin{proposition}\label{thm-factor-equation} Let $W$ be a $K$-space, $V_1, \dots, V_m, U_1, \dots, U_m\subseteq W$ be $K$-spaces that have a common subspace $S$. The equality $\sum_{i=1}^m \frac{1}{\card{V_i}} \id_{V_i} = \sum_{i=1}^m \frac{1}{\card{U_i}} \id_{U_i}$ of the functions on $W$ is equivalent to the equality \begin{equation}\label{eq-factor-main-by-common} \sum_{i=1}^m \frac{1}{\card{V_i/S}} \id_{V_i/S} = \sum_{i=1}^m \frac{1}{\card{U_i/S}} \id_{U_i/S} \end{equation} of the functions on $W/S$. \end{proposition} \begin{proof} Let $S, V \subseteq W$ be spaces such that $S \subseteq V$. Let $\pi_S: W \rightarrow W/S$, $x \mapsto \bar{x}$, be the canonical projection, where $\bar{x} = x + S$. The equality $\id_V(x) = \id_{V/S}(\bar{x})$ holds. Indeed, $x \in V$ implies $\bar{x} \in V/S$, and conversely, $\bar{x} \in V/S$ implies that there exists $x' \in V$ such that $\bar{x} = \bar{x'}$, which is equivalent to $x - x' \in S$, and thus $x \in V$. Let $W_1, \dots, W_r \subseteq W$ be spaces with a common subspace $S$. Consider the function $F: W \rightarrow \mathbb{R}$, $a \mapsto \sum_{i=1}^r x_i \id_{W_i}(a)$, where $x_i \in \mathbb{R}$ for $i \in \myset{r}$. Let $\bar{F} = \sum_{i=1}^r x_i \id_{W_i/S}: W/S \rightarrow \mathbb{R}$. For each $x \in W$ the equality $F(x) = \bar{F}(\pi_S(x))$ holds: $\bar{F}(\pi_S(x)) = \sum_{i=1}^r x_i \id_{W_i/S}(\bar{x}) = \sum_{i=1}^r x_i \id_{W_i}(x) = F(x)$. Since $\pi_S$ is surjective, the identity $F \equiv 0$ on $W$ is equivalent to the identity $\bar{F} \equiv 0$ on $W/S$.
Using the arguments above, the equality $\sum_{i=1}^m \frac{1}{\card{V_i}} \id_{V_i} = \sum_{i=1}^m \frac{1}{\card{U_i}} \id_{U_i}$ of the functions on $W$ is equivalent to the equality $\sum_{i=1}^m \frac{1}{\card{V_i}} \id_{V_i/S} = \sum_{i=1}^m \frac{1}{\card{U_i}} \id_{U_i/S}$ of the functions on $W/S$. Since $\card{V_i/S} = \card{V_i}/\card{S}$ and $\card{U_i/S} = \card{U_i}/\card{S}$, for all $i \in \myset{m}$, this is the same as $\card{S}\sum_{i=1}^m \frac{1}{\card{V_i/S}} \id_{V_i/S} = \card{S}\sum_{i=1}^m \frac{1}{\card{U_i/S}} \id_{U_i/S}$. The set $S$ contains zero, so $\card{S} > 0$ and we divide both sides of the equality by $\card{S}$, obtaining the required equality. \end{proof} Since the spaces in the nontrivial solution $(\U,\V)$ have a common subspace $S$ of dimension $n - 2$, we can factor all the spaces by $S$ and describe nontrivial solutions in the case $S = \{0\}$. In this way we can sometimes simplify a proof without losing any properties of nontrivial solutions. Therefore, we can assume that $n = 2$; we will use this assumption when we need it. \comment{ We say that two solutions $(\V,\U)$ and $(\V',\U')$ are equivalent if $\V \sim \V'$ and $\U \sim \U'$ or $\V \sim \U'$ and $\U \sim \V'$. To simplify the formulation of the following proposition, let $\V/S$ denote the tuple $(V_1/S, \dots, V_m/S)$. \begin{lemma}\label{lemma-factor-solution-iff-original} Two solutions $(\U, \V)$ and $(\U', \V')$ are equivalent if and only if the corresponding two solutions of \cref{eq-factor-main-by-common} $(\U/S, \V/S)$ and $(\U'/S, \V'/S)$ are equivalent. \end{lemma} \begin{proof} Obviously, if $S$ is a common subspace of $V_1, V_2$ in $W$, then $V_1 = V_2$ in $W$ if and only if $V_1/S = V_2/S$ in $W/S$.
\end{proof} } \begin{definition} We say that a pair of tuples $(\U,\V)$ is of Type B if there exist a subspace $S \subset W$ and linearly independent vectors $a,b,c \in W$, where $S \cap \gen{a,b,c} = \{0\}$, such that $V_m = \gen{S, a}$, $V_i = \gen{S, b, \alpha_i a + c}$, $U_m = \gen{S, b}$ and $U_i = \gen{S, a, \alpha_i b + c}$, where $\alpha_i$ is the $i$th element of the field $K$, for $i \in \myset{q}$. \end{definition} \Cref{fig-typeB} presents the inclusion diagram of the spaces in a pair of Type B, along with their intersections. \begin{figure}[!ht] \centering \begin{tikzpicture} \node (topU1) at (-5,2) {$U_1$}; \node (udotsl) at (-4,2) {$\dots$}; \node (udots) at (-3,2) {$U_i$}; \node (udotsr) at (-2,2) {$\dots$}; \node (topUq) at (-1,2) {$U_{m-1}$}; \node (midVm) at (-3,1) {$V_m$}; \node (S) at (0,0) {$S$}; \node (topV1) at (1,2) {$V_1$}; \node (vdotsl) at (4,2) {$\dots$}; \node (vdots) at (3,2) {$V_j$}; \node (vdotsr) at (2,2) {$\dots$}; \node (topVq) at (5,2) {$V_{m-1}$}; \node (midUm) at (3,1) {$U_m$}; \node (zij) at (0,1) {$Z_{ij}$}; \draw (topU1) -- (midVm) -- (S) -- (midVm) -- (topUq); \draw (topV1) -- (midUm) -- (S) -- (midUm) -- (topVq); \draw (udots) -- (midVm); \draw (vdots) -- (midUm); \draw (udots) -- (zij) -- (S); \draw (vdots) -- (zij) -- (S); \end{tikzpicture} \caption{A pair of Type B} \label{fig-typeB} \end{figure} \begin{proposition}\label{thm-typeB-detailed-description} A pair $(\U,\V)$ of Type B is a nontrivial solution with $\card{X} = \card{Y} = 1$. If a pair $(\U,\V)$ is a nontrivial solution with $\card{X} = \card{Y} = 1$, then $(\U,\V)$ is equivalent to a solution of Type B. \end{proposition} \begin{proof} To simplify both parts of the proof, according to \Cref{lemma-common-subspace} and \Cref{thm-factor-equation}, we can assume $S = \{0\}$ and $n = 2$. We prove the first part.
Calculate the intersection of the spaces, $Z_{ij} = U_i \cap V_j = \gen{a, \alpha_i b + c} \cap \gen{b, \alpha_j a + c}$, for $i,j \in \myset{q}$. After computations we get $Z_{ij} = \gen{\alpha_j a + \alpha_i b + c}$. All the spaces $Z_{ij}$ for $i,j \in \myset{q}$, $V_m$ and $U_m$ are different. From \Cref{lemma-zij-properties} (b), $\sum_{i=1}^q \id_{Z_{ij}} + \id_{U_m} = \id_{V_j} + q\id_S$, for any $j \in \myset{q}$. Note that $V_i \cap V_j = \gen{b} = U_m$, for $i \neq j\in \myset{q}$. For each $j \in \myset{q}$ we calculate the restriction to $V_j$ of both sides of \cref{eq-main-counting-space}, multiplied by $q^n$, \begin{equation}\label{teq1} \left.\left(\sum_{i=1}^{q} \id_{V_i} + q \id_{V_m}\right) \right\vert_{V_j} = \sum_{i=1}^{q} \id_{V_i \cap V_j} + q \id_{V_m \cap V_j} = \id_{V_j} + (q-1) \id_{U_m} + q\id_{V_m \cap V_j}\;, \end{equation} \begin{equation}\label{teq2} \left.\left(\sum_{i=1}^{q} \id_{U_i} + q\id_{U_m}\right) \right\vert_{V_j} = \sum_{i=1}^{q} \id_{U_i \cap V_j} + q \id_{U_m \cap V_j} = \sum_{i=1}^{q} \id_{Z_{ij}} + q \id_{U_m}\;. \end{equation} Considering the fact that $V_m \cap V_j = S$, the restrictions of the left and the right side of \cref{eq-main-counting-space} are equal, and therefore the pair $(\U,\V)$ of Type B is a nontrivial solution. It is easy to see that $\card{X} = \card{Y} = 1$; the corresponding spaces of dimension $n-1$ are $V_m$ and $U_m$. We prove the second part. Let $(\U,\V)$ be a nontrivial solution with $\card{X} = \card{Y} = 1$. First, we note some properties of the spaces in $\V$ and $\U$. From \Cref{lemma-properties} (d), $U_m \subset V_i$ and $V_m \subset U_i$ for all $i \in \myset{q}$. As a result, using \Cref{lemma-properties} (b), $V_i \cap V_j = U_m$ and $U_i \cap U_j = V_m$ for all $i \neq j \in \myset{q}$. Also, $V_m \cap V_i = U_m \cap U_i = S$ for $i \in \myset{q}$.
From \Cref{lemma-zij-properties} (c), all the spaces $V_m$, $U_m$ and $Z_{ij}$, for $i,j \in \myset{q}$, are different. Let $a,b,c \in W$ be three vectors such that $V_m = \langle a \rangle_K$, $U_m = \langle b \rangle_K$, $Z_{11} = \langle c \rangle_K$. From the properties that we mentioned above, the spaces $V_m$, $U_m$ and $Z_{11}$ are all different, so the vectors $a,b,c$ are pairwise linearly independent. Obviously, $V_1 =\langle b,c \rangle_K$ and $U_1 = \langle a, c\rangle_K$. \Cref{lemma-properties} (a) states that $V_1 \neq U_1$ and thus all three vectors $a,b,c \in W$ are linearly independent. The plane $U_1$ is covered by $m$ different lines $V_m, Z_{11}, \dots, Z_{1q}$. Let $v_2, \dots, v_q$ be such that $Z_{1i} = \gen{v_i}$ for $i \in \{2, \dots, q\}$. In the same way, the plane $V_1$ is covered by the lines $U_m, Z_{11}, \dots, Z_{q1}$. Let $Z_{i1} = \langle u_i \rangle_K$, for some $u_i \in W$, where $i \in \{2, \dots, q\}$. \Cref{tab-type-b} illustrates the intersections of the spaces $V_1, \dots, V_m$, $U_1, \dots, U_m$. Note that if we calculate the union of all spaces in a row of the table, we get the space that corresponds to that row; the same holds for the columns. To satisfy this requirement, for all $i \in \{ 2, \dots, q \}$, $v_i \in \langle a, c \rangle_K$, $u_i \in \langle b, c \rangle_K$, $V_i = \langle b, v_i \rangle_K$ and $U_i = \langle a, u_i \rangle_K$. Hence $v_i = \alpha_i a + \beta_i c$, $u_i = \gamma_i b + \delta_i c$ for some $\alpha_i, \beta_i, \gamma_i, \delta_i \in K$, $i \in \{2,\dots,q\}$. Then $Z_{ij} = U_i \cap V_j = \langle a, u_i \rangle_K \cap \langle b, v_j \rangle_K= \langle a, \gamma_i b + \delta_i c \rangle_K \cap \langle b, \alpha_j a + \beta_j c \rangle_K$.
After computations we get $Z_{ij} = \langle \delta_i \alpha_j a + \gamma_i \beta_j b + \delta_i \beta_j c \rangle_K$ or, equivalently, since $\beta_j \neq 0$ and $\delta_i \neq 0$, $Z_{ij} = \langle \frac{\alpha_j}{\beta_j} a + \frac{\gamma_i}{\delta_i} b + c \rangle_K$. As we have mentioned before, all the spaces $Z_{ij}$ for $i,j \in \myset{q}$ must be different, thus the values $\frac{\alpha_i}{\beta_i}$ and $\frac{\gamma_i}{\delta_i}$ both run through $K$ while $i$ runs through $\myset{q}$. As a result, we have shown that the pair $(\U,\V)$ is, up to the order of the spaces in the tuples, exactly of Type B. \end{proof} \begin{table}[!ht] \centering \begin{tabular}{c|c|c|c|c|c|c|} $\cap$ & $V_m$ & $V_1$ & \dots & $V_i$ & \dots & $V_q$ \\ \hline $U_m$ & $\{0\}$ & $\gen{b}$ & \dots & $\gen{b}$ & \dots & $\gen{b}$ \\ \hline $U_1$ & $\gen{a}$ & $\gen{c}$ & \dots & $\gen{v_i}$ & \dots & $\gen{v_q}$ \\ \hline $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ \hline $U_j$ & $\gen{a}$ & $\gen{u_j}$ & $\dots$ & $Z_{ji}$ & $\dots$ & $Z_{jq}$ \\ \hline $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$& $\ddots$ & $\vdots$ \\ \hline $U_q$ & $\gen{a}$ & $\gen{u_q}$ & $\dots$ & $Z_{qi}$ & $\dots$ & $Z_{qq}$ \\ \hline \end{tabular} \caption{Intersection table for a solution of Type B} \label{tab-type-b} \end{table} \begin{definition} We say that a pair $(\U,\V)$ is of Type C if there exist a subspace $S \subset W$ and linearly independent vectors $a,b,c,d \in W$, where $S \cap \gen{a,b,c,d} = \{0\}$, such that $V_i = \gen{S, \alpha_i a + \beta_i b, \alpha_i c + \beta_i d}$ and $U_i = \gen{S, \alpha_i a + \beta_i c, \alpha_i b + \beta_i d}$, where $[\alpha_i : \beta_i]$ is the $i$th element in $\mathbb{P}_1(K)$, for $i \in \myset{m}$. \end{definition} \Cref{fig-typeC} presents the inclusion diagram of the spaces in a pair of Type C, along with their intersections.
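Both types can be verified directly in the smallest case $q = 2$ with $S = \{0\}$. The following Python sketch is an illustration added for this exposition (not part of the original argument): it encodes vectors as bitmasks, mapping the basis vectors $a,b,c,d$ to $1,2,4,8$, builds a pair of Type B in $\mathbb{F}_2^3$ and a pair of Type C in $\mathbb{F}_2^4$, and checks \cref{eq-main-counting-space} pointwise.

```python
from fractions import Fraction

def span(gens):
    """F_2-linear span of bitmask vectors: close {0} with gens under XOR."""
    s = {0}
    changed = True
    while changed:
        changed = False
        for x in list(s):
            for g in gens:
                if x ^ g not in s:
                    s.add(x ^ g)
                    changed = True
    return frozenset(s)

def is_solution(Vs, Us, ambient_dim):
    """Check sum_i 1/|V_i| 1_{V_i} == sum_i 1/|U_i| 1_{U_i} pointwise."""
    return all(
        sum(Fraction(1, len(V)) for V in Vs if x in V)
        == sum(Fraction(1, len(U)) for U in Us if x in U)
        for x in range(1 << ambient_dim)
    )

different = lambda Vs, Us: sorted(map(sorted, Vs)) != sorted(map(sorted, Us))

# Type B in W = F_2^3 with basis a, b, c; alpha runs over K = {0, 1}.
a, b, c = 1, 2, 4
V_B = [span([b, c]), span([b, a ^ c]), span([a])]   # V_1, V_2, V_m
U_B = [span([a, c]), span([a, b ^ c]), span([b])]   # U_1, U_2, U_m
assert is_solution(V_B, U_B, 3) and different(V_B, U_B)

# Type C in W = F_2^4 with basis a, b, c, d; [alpha : beta] runs over P_1(F_2).
a, b, c, d = 1, 2, 4, 8
scale = lambda s, v: v if s else 0                  # scalar action of F_2
P1 = [(1, 0), (0, 1), (1, 1)]
V_C = [span([scale(al, a) ^ scale(be, b), scale(al, c) ^ scale(be, d)])
       for al, be in P1]
U_C = [span([scale(al, a) ^ scale(be, c), scale(al, b) ^ scale(be, d)])
       for al, be in P1]
assert is_solution(V_C, U_C, 4) and different(V_C, U_C)
print("Type B and Type C pairs satisfy the counting equation over F_2")
```

In both cases the two tuples are genuinely different as multisets, so the pairs are nontrivial solutions, as the propositions assert.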
\begin{figure}[!ht] \centering \begin{tikzpicture} \node (topU1) at (-5,2) {$U_1$}; \node (udotsl) at (-4,2) {$\dots$}; \node (udots) at (-3,2) {$U_i$}; \node (udotsr) at (-2,2) {$\dots$}; \node (topUq) at (-1,2) {$U_{m}$}; \node (S) at (0,0) {$S$}; \node (topV1) at (1,2) {$V_1$}; \node (vdotsl) at (4,2) {$\dots$}; \node (vdots) at (3,2) {$V_j$}; \node (vdotsr) at (2,2) {$\dots$}; \node (topVq) at (5,2) {$V_{m}$}; \node (zij) at (0,1) {$Z_{ij}$}; \draw (topU1) -- (S) -- (udots) -- (S) -- (topUq); \draw (topV1) -- (S) -- (vdots) -- (S) -- (topVq); \draw (udots) -- (zij) -- (S); \draw (vdots) -- (zij) -- (S); \end{tikzpicture} \caption{A pair of Type C} \label{fig-typeC} \end{figure} \begin{proposition}\label{thm-typeC-detailed-description} A pair $(\U,\V)$ of Type C is a nontrivial solution with $\card{X} = \card{Y} = 0$. If a pair $(\U,\V)$ is a nontrivial solution with $\card{X} = \card{Y} = 0$, then $(\U,\V)$ is equivalent to a solution of Type C. \end{proposition} \begin{proof} To simplify both parts of the proof, according to \Cref{lemma-common-subspace} and \Cref{thm-factor-equation}, we can assume $S = \{0\}$ and $n = 2$. We prove the first part. Calculate the intersection of the spaces, $Z_{ij} = U_i \cap V_j = \gen{\alpha_i a + \beta_i c, \alpha_i b + \beta_i d} \cap \gen{\alpha_j a + \beta_j b, \alpha_j c + \beta_j d}$, for $i,j \in \myset{m}$. After computations we get $Z_{ij} = \gen{\alpha_i \alpha_j a + \alpha_i \beta_j b + \beta_i \alpha_j c + \beta_i \beta_j d}$. All the spaces $Z_{ij}$ for $i,j \in \myset{m}$ are different. From \Cref{lemma-minimum-dense-covering-desc}, $\sum_{i=1}^m \id_{Z_{ij}} = \id_{V_j} + q\id_S$, for any $j \in \myset{m}$. Note that $V_i \cap V_j = \{0\}$, for $i \neq j \in \myset{m}$.
We calculate for $j \in \myset{m}$ the restriction of both sides of \cref{eq-main-counting-space}, multiplied by $q^n$, to the space $V_j$, \begin{equation}\label{t2eq1} \left.\left(\sum_{i=1}^m \id_{V_i} \right)\right\vert_{V_j} = \id_{V_j} + \sum_{i \neq j} \id_{V_i \cap V_j}\,\,, \end{equation} \begin{equation}\label{t2eq2} \left.\left(\sum_{i=1}^m \id_{U_i}\right) \right\vert_{V_j} = \sum_{i=1}^m \id_{U_i \cap V_j} = \sum_{i=1}^m \id_{Z_{ij}}\,\,. \end{equation} Obviously, the pair $(\U,\V)$ of Type C satisfies these equations for any $j \in \myset{m}$, and therefore is a nontrivial solution with $\card{X}= \card{Y} = 0$. We prove the second part. Let $(\U,\V)$ be a nontrivial solution with $\card{X} = \card{Y} = 0$. Using \cref{teq0} and the fact that the right-hand sides of \cref{t2eq1} and \cref{t2eq2} are equal, we get, for each fixed $j$, $\sum_{i \neq j} \id_{V_i \cap V_j} = q \id_{S}$. From \Cref{lemma-minimum-dense-covering-desc}, for all $i \neq j \in \myset{m}$, $V_i \cap V_j = S$. In the same way, for all $i \neq j \in \myset{m}$, $U_i \cap U_j = S$. From \Cref{lemma-zij-properties}, for all $i,j \in \myset{m}$, $\dim_K Z_{ij} = n-1$ and the spaces $Z_{ij}$, $i,j \in \myset{m}$, are all different. Let $a, b, c, d \in W$ be such that $Z_{11} = \langle a \rangle_K$, $Z_{12} = \langle b \rangle_K$, $Z_{21} = \langle c \rangle_K$ and $Z_{22} = \langle d \rangle_K$. We deduce that $V_1 = \langle a, c \rangle_K$, $V_2 = \langle b, d \rangle_K$, $U_1 = \langle a, b \rangle_K$ and $U_2 = \langle c,d \rangle_K$. The intersection $V_1 \cap V_2 = \{0\}$ and $\dim_K V_1 = \dim_K V_2 = 2$, which implies the linear independence of the vectors $a,b,c,d$. Consider \Cref{tab-type-c}, whose cells contain the one-dimensional spaces $Z_{ij} = U_i \cap V_j$. The union of the lines in each row gives the space that represents the row, and the union of the lines in each column gives the space that represents the column.
Let $\alpha_i, \beta_i, \alpha'_i, \beta'_i, \gamma_i, \delta_i, \gamma'_i, \delta'_i \in K$ be such that $Z_{1i} = \langle \alpha_i a + \beta_i b \rangle_K$, $Z_{i1} = \langle \gamma_i a + \delta_i c \rangle_K$, $Z_{i2} = \langle \gamma'_i b + \delta'_i d \rangle_K$ and $Z_{2i} = \langle \alpha'_i c + \beta'_i d \rangle_K$ for $i \in \myset{m}$. With the defined coefficients we get $V_i = \langle \alpha_i a + \beta_i b, \alpha'_i c + \beta'_i d \rangle_K$ and $U_i = \langle \gamma_i a + \delta_i c, \gamma'_i b + \delta'_i d \rangle_K$. Since all the spaces $Z_{ji}$, for $i,j \in \myset{m}$, must be different, the equalities $\alpha_i'\beta_i = \beta_i' \alpha_i$, $\gamma'_i \delta_i = \delta_i' \gamma_i$ hold for all $i \in \myset{m}$, and the intersection space is $Z_{ji} = \gen{ \alpha_i\gamma_j a + \gamma_j \beta_i b + \alpha_i \delta_j c + \delta_j \beta_i d}$. It is easy to verify that $(\U,\V)$ is of Type C. \end{proof} \begin{table}[!ht] \centering \begin{tabular}{c|c|c|c|c|c|c|} $\cap$& $V_1$ & $V_2$ & \dots & $V_i$ & \dots & $V_m$ \\ \hline $U_1$ & $\gen{a}$ & $\gen{b}$ & \dots & & \dots & \\ \hline $U_2$ & $\gen{c}$ & $\gen{d}$ & \dots & & \dots & \\ \hline $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ \hline $U_j$ & & & $\dots$ & $Z_{ji}$ & $\dots$ & $Z_{jm}$ \\ \hline $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$& $\ddots$ & $\vdots$ \\ \hline $U_m$ & & & $\dots$ & $Z_{mi}$ & $\dots$ & $Z_{mm}$ \\ \hline \end{tabular} \caption{Intersection table for a solution of Type C} \label{tab-type-c} \end{table} \begin{theorem}\label{thm-main-abc} Let $(\U,\V)$ be a nontrivial solution of \cref{eq-main-counting-space} with $m = \card{K} + 1$. Up to equivalence, the pair $(\U,\V)$ is of one of the following types: Type A, Type B or Type C. \end{theorem} \begin{proof} If $\max_{1 \leq i \leq q+1} \dim_K V_i \neq \max_{1 \leq i \leq q+1} \dim_K U_i$, by \Cref{thm-type-a}, $(\U,\V)$ is equivalent to a pair of Type A.
If $\max_{1 \leq i \leq q+1} \dim_K V_i = \max_{1 \leq i \leq q+1} \dim_K U_i$, by \Cref{lemma-two-types}, either $\card{X} = \card{Y} = 1$ or $\card{X} = \card{Y} = 0$. In the first case, using \Cref{thm-typeB-detailed-description}, $(\U,\V)$ is equivalent to a pair of Type B. In the second case, using \Cref{thm-typeC-detailed-description}, $(\U,\V)$ is equivalent to a pair of Type C. \end{proof} Having fully described and classified all the minimal nontrivial solutions, we can now prove some interesting facts about their properties. \begin{proposition}\label{thm-number-solutions-unique-nontrivial} Let $m = q+1$. For any tuple of spaces $\V$ there exists at most one tuple of spaces $\U$, up to equivalence, such that $(\U,\V)$ is a nontrivial solution. \end{proposition} \begin{proof} The statement is obvious for a solution of Type A. By \Cref{lemma-common-subspace} and \Cref{thm-factor-equation}, we can assume $S = \{0\}$. Consider a solution $(\U,\V)$ of Type B. Having the tuple $\U$, we can uniquely, up to equivalence, recover the tuple $\V$. Indeed, first we recover the space $V_m = \gen{a}$ as the intersection of any two two-dimensional spaces in $\U$, and we already have $U_m = \gen{b}$. Assume that there exists another solution $(\U, \V')$, where $\V' = (V_1',\dots, V_m')$. But then $V'_i = \gen{b, x_i}$, $x_i \in U_1$ for $i \in \myset{q}$, where $\{\gen{x_i}\}_{i \in \myset{q}}$ is the set of all lines in $U_1$ different from $V_m$. Therefore $\V \sim \V'$. Consider a solution $(\U,\V)$ of Type C. Using the notations from \Cref{tab-type-c}, the vector $a+c$ is in $V_1$ and in $U_i$, for some $i \in \myset{m}$. Let $(\U,\V')$ be another solution. For this solution let $i,j \in \myset{m}$ be such that $a \in V'_i \cap U_1$ and $b \in V'_j \cap U_1$. Also, let $c',d' \in W$ be such that $\gen{c'} = V'_i \cap U_2$ and $\gen{d'} = V'_j \cap U_2$. Then $V'_i = \gen{a,c'}$, $V'_j = \gen{b,d'}$.
The vector $a + c$ must appear in some intersection $U_i \cap V'_k$, for $k \in \myset{m}$. This is only possible if $\gen{c} = \gen{c'}$. In the same way, considering the vector $b +d$, we deduce $\gen{d'} = \gen{d}$ and thus $\V \sim \V'$. \end{proof} \begin{proposition}\label{thm-sum-of-two-spaces} Let $(\U,\V)$ be a nontrivial solution with $m = q+1$. For any $i\neq j \in \myset{m}$ and any $k \in \myset{m}$, $V_k,U_k \subseteq V_i + V_j$. For any $i\neq j \in \myset{m}$, $\dim_K (V_i + V_j) \leq 2 + \max_{k \in \myset{m}} \dim_K V_k$. \end{proposition} \begin{proof} The description of all the nontrivial solutions for codes of length $m = q + 1$ is given in \Cref{thm-main-abc}. Let $n = \max_{k \in \myset{m}} \dim_K V_k$ and fix $i,j \in \myset{m}$, $i\neq j$. If the solution is of Type B, then the space $V_i+V_j$ is of dimension $\dim_K V_1 + 1 = n + 1$ and contains all the spaces $V_1, \dots, V_m$. If the solution is of Type C, then the space $V_i+ V_j$ has dimension $\dim_K V_1 +2 = n + 2$ and contains all the spaces $V_1,\dots, V_m$. Regarding a solution of Type A, depending on which tuple of spaces we observe, the space $V_i + V_j$ contains all the spaces from the tuple and has dimension $n$ or $n+1$. Combining these three cases, all the spaces $V_1, \dots, V_m$ are in the space $V_i + V_j$ and therefore the spaces $U_1, \dots, U_m$ are all in $V_i + V_j$. Also, $\dim_K (V_i + V_j) \leq n + 2$. \end{proof} \comment{ Let $(\cdot, \cdot) : W \times W \rightarrow K$ be a $K$-bilinear non-degenerate form. For a $K$-linear space $V \subseteq W$ let $V^{\perp} = \{w \in W \mid \forall v \in V: (w,v) = 0 \}$ be the orthogonal space. Let $f: W \rightarrow \mathbb{C}$ be a function. The Fourier transform $\mathcal{F}$ of $f$ is defined as $\mathcal{F}(f)(s) = \sum_{w \in W} f(w) \chi_s(w)$, for $s \in W$, where $\chi_s(w) = \xi^{\tr_{K/\mathbb{F}_p}(sw)}$, $s,w \in K$.
The inverse Fourier transform is given by the following formula: $\mathcal{F}^{-1}(f)(s) = \frac{1}{\card{W}} \sum_{w \in W} f(w) \overline{\chi_s(w)}$. \begin{proposition} The $K$-linear map $f: C \rightarrow L^m$ is an isometry if and only if \begin{equation}\label{eq-main-dual} \sum_{i = 1}^{m} \id_{V_i^{\perp}} = \sum_{i = 1}^{m} \id_{U_i^{\perp}}\;. \end{equation} \end{proposition} \begin{proof} Evidently, $\chi_w(v) = 1$ if and only if $(w,v) = 0$, for $w,v \in W$. Then $\mathcal{F}(\id_V)(u) = \sum_{w \in W} \id_V(w) \chi_w(u) = \sum_{w \in V} \chi_w(u) = \card{V} \id_{V^{\perp}}(u)$. Thus $\mathcal{F}\left( \sum_{i= 1}^m \frac{1}{\card{V_i}} \id_{V_i} \right) = \sum_{i= 1}^m \id_{V_i^{\perp}}$. This means that any solution of \cref{eq-main-counting-space} is a solution of \cref{eq-main-dual}. Taking the inverse Fourier transform of both sides of \cref{eq-main-dual}, we get the statement in the other direction. In this way, using \Cref{thm-isometry-criterium}, we prove the proposition. \end{proof} } \footnotesize
\section{Introduction} While synchronous stochastic gradient descent (SSGD) remarkably reduces the training time of large-scale DNNs on complex datasets by allocating the overall workload to multiple workers, it additionally requires synchronizing the workers' local gradients to preserve the convergence of the models~\cite{Dean2012Large}. Hence, the introduced gradient-synchronizing phase in the cluster consumes much time, making the acceleration effect of distributed training non-linear and deteriorating the cluster's scalability. Thus, the communication cost caused by the network I/O and transmission during synchronization generally becomes the most significant bottleneck of distributed DNN training as the number of workers and model parameters increases~\cite{COS}, especially when the communication-to-computation ratio is high (e.g., the Gated Recurrent Unit~(GRU)~\cite{cho2014learning}). Flourishing developments have been made to overcome this problem, including batch-size enlarging~\cite{jia2018highly}, periodic synchronization~\cite{stich2018local} and data compression~\cite{karimireddy2019error, lin2017deep, yu2018gradiveq, xu2020compressed}. However, although these methods considerably reduce the communication load, they also bring many side effects to the distributed DNN training process, be it degradation of generalization ability~\cite{hoffer2017train}, an added performance-critical hyper-parameter $\gamma$ (i.e., the configuration of the interval between synchronizations), or time-consuming extra phases introduced during training (e.g., sampling, compressing, decompressing, etc.). Moreover, they all focus on reducing either the worker-to-worker communication rounds or the data transfer size, which limits the results they can achieve, since neither the rounds nor the size can be reduced to $0$.
In this paper, we propose the Hierarchical Parallel SGD (HPSGD) algorithm, which not only fully overlaps the synchronization phase with the local training phase through hierarchical computation but also mitigates the gradient staleness problem, and therefore achieves high performance. The desired timeline of HPSGD is illustrated in Fig~\ref{fig:HPSGD-timeline}, which shows that it also keeps training synchronous across workers (i.e., workers start to feed forward at the same time). The main challenge of all algorithms that separate the local training phase from the synchronization phase, including HPSGD, is the gradient staleness problem, meaning that the model is updated using stale gradients, which is detrimental to model convergence. However, unlike previous literature that tries to counteract the effects of stale gradients~\cite{zhang2015staleness}, HPSGD treats these gradients as features of the unknown global optimization surface and thus uses these features to optimize the global training function. In this scenario, the local training phase that overlaps with the synchronization phase helps the global training function collect valuable gradient information and optimize. As a result, the HPSGD algorithm fully utilizes the available computational resources while maintaining model convergence. \begin{figure}[htb] \centering \includegraphics[width=0.65\linewidth]{PIC/hpsgd.pdf} \caption{In HPSGD, every worker runs two processes: a local training process that continuously trains the model and a synchronizing process that continuously exchanges data. These two processes run in parallel.} \label{fig:HPSGD-timeline} \end{figure} The contributions of this paper are summarized as follows: \begin{itemize} \item We entirely overlap the synchronization phase with the local training phase by utilizing hierarchical computation, which significantly boosts the distributed training process.
\item We utilize an optimized model-updating algorithm based on hierarchical computation to address the gradient staleness problem, thereby improving the training speed, stability, and model accuracy of distributed DNN training. \item We demonstrate and verify the reliability and effectiveness of HPSGD through extensive experiments comparing various methods on multiple models. The source code and parameters of all experiments are open-sourced for reproducibility\footnote{\url{https://github.com/Soptq/Hierarchical_Local_SGD}}. \end{itemize} The rest of this paper is organized as follows. The literature review is presented in Section~\ref{section:two}, where some background information is introduced. In Section~\ref{section:three}, the structure and implementation of the proposed HPSGD algorithm are presented. The experimental design and result analysis are then documented in detail in Section~\ref{section:four}. Finally, the conclusions of this paper are drawn in Section~\ref{section:five}. \section{Literature review} \label{section:two} \textbf{Synchronous and asynchronous SGD}: Synchronous SGD (SSGD) is a model-updating strategy for distributed training that evenly distributes the workload among multiple workers. It updates the model parameters by applying the SGD algorithm with global gradients obtained by averaging the local gradients of the different workers. Notably, model convergence is unaffected by SSGD since it ensures the synchronized gradients are always the latest. SSGD can be employed in both centralized\cite{li2014scaling, li2014communication} and decentralized\cite{lian2017can} architectures, and the timeline of decentralized SSGD is drawn in Fig~\ref{fig:ssgd-timeline}: before synchronization starts, there is a waiting phase in which some workers may have already finished local training and wait for the slower workers to catch up, which wastes resources.
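The per-iteration gradient averaging that SSGD performs can be sketched in a few lines. The following is a minimal single-process simulation, not the paper's implementation: the quadratic loss, learning rate and per-worker shards are illustrative placeholders, and \texttt{allreduce\_mean} stands in for a real all-reduce.

```python
# Minimal single-process simulation of SSGD's per-iteration gradient averaging.
# Loss function, shards, and learning rate are illustrative placeholders.

def local_gradient(w, shard):
    # Gradient of the mean of 0.5 * (w - x)^2 over one worker's sub-dataset.
    return sum(w - x for x in shard) / len(shard)

def allreduce_mean(values):
    # Every worker ends up with the global average of the local gradients.
    return sum(values) / len(values)

def ssgd_step(w, shards, lr):
    grads = [local_gradient(w, s) for s in shards]  # computed in parallel
    g = allreduce_mean(grads)                       # synchronization barrier
    return w - lr * g                               # identical update everywhere

shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]       # per-worker sub-datasets
w = 0.0
for _ in range(100):
    w = ssgd_step(w, shards, lr=0.5)
print(round(w, 3))  # converges to the global mean, 3.5
```

Because every worker applies the same averaged gradient, all replicas stay bitwise identical, which is why SSGD preserves the convergence behavior of single-node SGD.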
Asynchronous SGD (ASGD) overcomes this problem by allowing workers to work independently. Specifically, fast workers instantly $push$ their calculated local gradients to the parameter servers once they finish training. The timeline of ASGD is shown in Fig~\ref{fig:asgd-timeline}. Although ASGD eliminates the waiting time before synchronization, it can be utilized only in the centralized architecture, meaning the cluster is more likely to incur communication overload. Moreover, as workers are not aware of other workers' status, the gradient staleness problem is easily triggered. For example, $worker_i$ uses $W_0$ to compute local gradients $\nabla_0$ and synchronizes $\nabla_0$ to the parameter servers to start a global model update. However, the global model is updated to $W_1$ during its synchronization phase because another worker synchronizes faster. Thus $worker_i$ eventually updates the global model $W_1$ to $W_2$ using $\nabla_0$ computed from $W_0$, which considerably impacts the convergence of the model. \begin{figure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth]{PIC/ssgd.pdf} \caption{} \label{fig:ssgd-timeline} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth]{PIC/asgd.pdf} \caption{} \label{fig:asgd-timeline} \end{subfigure}% \caption{(a) A worker waits for other workers to finish local training before starting data exchanging. (b) A worker instantly exchanges data with the parameter server and then steps into the next epoch.} \end{figure} It is worth noting that in both centralized and decentralized architectures, synchronization is processed in the worker's main thread, implying that the next epoch's training is blocked until the current epoch's synchronization phase is completed.
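The staleness scenario above can be replayed with toy numbers. The objective, learning rate, and update order below are illustrative assumptions, not taken from the paper; the point is only that applying a gradient computed at $W_0$ after the model has moved to $W_1$ deviates from the exact-gradient trajectory.

```python
# Toy replay of the staleness scenario: worker_i computes its gradient at W0,
# but a faster worker moves the global model to W1 before worker_i's update
# lands. Objective 0.5*(w-4)^2; lr and all numbers are illustrative.

def grad(w):
    return w - 4.0          # true gradient at the current parameters

lr = 0.1
W = 0.0                     # W0 on the parameter server

g_stale = grad(W)           # worker_i computes its gradient at W0 ...
W = W - lr * grad(W)        # ... a faster worker updates W0 -> W1 first
W = W - lr * g_stale        # worker_i applies the stale gradient: W1 -> W2

fresh = 0.0                 # for comparison: two fully synchronous steps
fresh = fresh - lr * grad(fresh)
fresh = fresh - lr * grad(fresh)

print(round(W, 2), round(fresh, 2))  # prints 0.8 0.76
```

With a single stale step the deviation is small (0.8 vs. 0.76), but with many workers and deep queues such errors accumulate, which is the convergence impact described above.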
Consequently, in both SSGD and ASGD, the processing units of the worker (e.g., CPU, GPU) are idle during the synchronization phase, which is generally a far greater waste of the worker's performance than the waiting phase in SSGD, considering that synchronization usually takes much more time than waiting in practice. \textbf{Local SGD}: Local SGD~\cite{stich2018local} is a well-known algorithm that utilizes periodic model averaging to reduce the number of synchronization rounds. It is capable of achieving good performance both theoretically and practically. Specifically, it introduces a new hyper-parameter $\gamma$ that configures the frequency of model synchronization. When synchronizing, workers synchronize the model parameters in place of gradients. However, Local SGD and its variants~\cite{haddadpour2019local, haddadpour2019trading} have several drawbacks: 1) Local SGD delivers a relatively slow convergence rate per epoch, and the introduced hyper-parameter $\gamma$ must be configured manually to achieve the model's best performance. 2) Although Local SGD reduces the number of synchronization rounds, the processing units are still idle and not fully utilized during synchronization. The pseudo-code of Local SGD is presented in Algorithm~\ref{alg:LSGD}. \begin{algorithm}[h] \caption{Local SGD} \label{alg:LSGD} \begin{algorithmic}[1] \State Initialization: Cluster size $n$. Learning rate $\mu \geq 0$. Max training $epoch$. Local gradient $\widehat{\nabla}^{e}$.
Synchronous period $\gamma$; \ForAllP {$i \in {1,...,n}$} \For {$e \in {1,...,epoch}$} \State Update local model: $w_i^{e+1} = w_i^{e}- \mu\widehat{\nabla}^{e}$; \If {$e~mod~\gamma == 0 $} \State Average model: $w_i^{e+1} = AllReduce(w_i^{e+1})$; \EndIf \EndFor \EndFAP \end{algorithmic} \end{algorithm} \section{Methodology} \label{section:three} In this section, we present the implementation of the proposed HPSGD in detail, which includes: 1) spawning two processes, $P_s$ and $P_t$, to perform data synchronization and local training, respectively; 2) applying a model-updating algorithm to alleviate the gradient staleness problem. \subsection{Implementation of hierarchical computation} Hierarchical computation enables the synchronization phase to be fully overlapped with the local training phase, and is achieved by spawning a dedicated process $P_s$ responsible for data synchronization. $P_t$ and $P_s$ are located in the same worker and share the same rank in the distributed system. These two processes connect and communicate via shared memory, and typically the following variables need to be shared. \begin{itemize} \item \textbf{$status$}: The variable that indicates the status of $P_s$. It has two states: $synchronizing$ and $idling$. \item \textbf{$replica$}: The replica of the latest global model, which will be discussed in detail in Section~\ref{gradienst_utilization}. \item \textbf{$\nabla_{i}^{a}$}: The $i$-th worker's accumulated gradients while workers are performing local training. \item \textbf{$\widehat{\nabla}_{i}^{e}$}: The local gradients calculated by the $i$-th worker at epoch $e$. \item \textbf{$counter$}: The integer that represents how many times $P_t$ has trained locally.
\end{itemize} Specifically, when $status$ is $synchronizing$, $P_t$ first makes a replica of the global model if $counter$ equals $0$, then accumulates the calculated gradients into $\nabla_{i}^{a}$, updates $replica$ and finally increases $counter$ by $1$. On the other hand, when $status$ is $idling$, $P_t$ activates $P_s$ to start synchronizing and marks $status$ as $synchronizing$; $P_s$ then first updates the global model using the global gradients synchronized in the previous round, then $AllReduce$s $\nabla_{i}^{a}$, resets $counter$ to $0$, and finally marks $status$ as $idling$. The workflow of the HPSGD algorithm is demonstrated in Algorithm~\ref{alg:HPSGD-hierarchical}. \begin{algorithm}[h] \caption{Hierarchical Parallel SGD Algorithm} \label{alg:HPSGD-hierarchical} \begin{algorithmic}[1] \State Initialization: Cluster size $n$. Learning rate $\mu \geq 0$. Max training $epoch$; \ForAllP {$i \in {0,...,n-1}$} \For {$e \in {0,...,{epoch-1}}$} \If {$state == synchronizing$} \If {$counter == 0$} \State Make replica: $r_i = clone(w_i^e)$; \EndIf \State Update replica: $r_i = r_i- \mu\widehat{\nabla}_{i}^{e}$; \State Accumulate gradients: $\nabla_{i}^{a} = \nabla_{i}^{a} + \widehat{\nabla}_{i}^{e}$; \State Increase counter: $counter = counter + 1$; \Else \State Accumulate gradients: $\nabla_{i}^{a} = \nabla_{i}^{a} + \widehat{\nabla}_{i}^{e}$ \State Mark $status$: $status = synchronizing$; \State Instruct $P_s$: $StartSync(P_s)$; \label{HPSGD-instrut} \EndIf \EndFor \EndFAP \end{algorithmic} \end{algorithm} In step \ref{HPSGD-instrut}, $StartSync$ is a function that activates $P_s$ to start synchronizing. It is an operation processed in $P_s$ and does not block the training process in $P_t$, so that $P_t$ can keep training instead of idling. Furthermore, the $StartSync$ function also reduces the effects of gradient staleness, as explained in detail below.
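As a rough sketch of this hand-off, the following deterministic single-worker simulation interleaves the roles of $P_t$ and $P_s$ in one loop. The toy gradient, learning rate, fixed synchronization length, $n=1$ (so the all-reduce is an identity), and the choice to commit gradients as soon as the simulated all-reduce finishes (rather than with the delayed-update strategy of $P_s$ described later) are all simplifying assumptions.

```python
# Deterministic single-worker walk-through of the P_t / P_s hand-off above.
# Assumptions: toy gradient of 0.5*w**2, one synchronization spans 3 local
# epochs, n = 1 worker, and gradients are committed when the all-reduce ends.

lr = 0.1
SYNC_LEN = 3                 # epochs one synchronization is assumed to last

def toy_grad(w):
    return w                 # gradient of 0.5 * w**2

w = 8.0                      # global model
replica = None               # P_t's clone of the global model
acc = 0.0                    # accumulated local gradients
counter = 0
status = "idling"
sync_timer = 0
in_flight = None             # gradients currently being all-reduced by P_s

for epoch in range(12):
    if status == "synchronizing":
        if counter == 0:
            replica = w                  # make a replica of the global model
        g = toy_grad(replica)
        replica -= lr * g                # keep training on the replica
        acc += g                         # accumulate local gradients
        counter += 1
        sync_timer -= 1
        if sync_timer == 0:              # P_s finishes its all-reduce
            w -= lr * in_flight          # commit synchronized gradients
            in_flight, counter, status = None, 0, "idling"
    else:
        acc += toy_grad(w)               # one local step on the global model
        in_flight, acc = acc, 0.0        # hand the gradients to P_s
        status, sync_timer = "synchronizing", SYNC_LEN

print(round(w, 4))  # prints 1.9296: the model keeps moving toward the optimum
```

Note that local training never stalls: while the timer counts down (the stand-in for a running all-reduce), $P_t$ continues producing gradients, which is exactly the overlap that removes the idle synchronization phase.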
\subsection{Gradients utilization} \label{gradienst_utilization} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{PIC/optimization_surface.png} \caption{Intersections on the surface represent sample points. The local optimization surface, with a massively reduced number of sampling points, still retains features similar to the global optimization surface, including the coordinates of the extrema, the variations of partial derivatives over an interval, etc. } \label{fig:optimization_surface} \end{figure} HPSGD treats stale gradients as advanced rather than stale, and lets workers make replicas of the current global model when synchronization starts. When performing local training, the calculated $\widehat{\nabla}_{i}^{e}$ is applied to each worker's respective replica and added to $\nabla_{i}^{a}$. Finally, when the synchronization completes, the $\nabla_{i}^{a}$ accumulated during local training is committed to the previous global model. The key idea of HPSGD's model-updating algorithm is that, as the dataset size increases, the local optimization surface modeled by a worker with a sub-dataset becomes more similar to the global optimization surface. Therefore, the local gradients of different workers on their own sub-datasets can be utilized to help optimize the global training function, as illustrated in Fig~\ref{fig:optimization_surface}. HPSGD's model-updating function can be formulated as: \begin{equation} \label{equ:hpsgd} W^{e_2+1} = W^{e_1} - \mu\frac{\sum_{i=0}^{n-1}\sum_{e=e_1}^{e_2}\widehat{\nabla}_{i}^{e}}{n} \end{equation} where $W^{e}$ denotes the model parameters at epoch $e$, $n$ refers to the number of workers in the distributed system and $e_1$, $e_2$ denote the epoch range of local training.
The equation can be further rewritten as: \begin{equation} \label{equ:hpsgd-simp} W^{e_1} - \mu\frac{\sum_{i=0}^{n-1}\sum_{e=e_1}^{e_2}\widehat{\nabla}_{i}^{e}}{n} = \frac{\sum_{i=0}^{n-1}W_{i}^{e_2+1}}{n} \end{equation} which suggests that HPSGD is essentially a reformulation of Local SGD in terms of gradients; thus the convergence of HPSGD follows from \cite{yu2019parallel, stich2018local}. Although the formula is essentially the same for HPSGD and Local SGD, HPSGD focuses on achieving a lock-free and highly parallel model-updating algorithm with minimal influence from the gradient staleness problem, while Local SGD specializes in reducing synchronization rounds. In addition, since the synchronization phase is overlapped with the local training phase, it is impossible to synchronize and update the model parameters at the same time, because the model parameters would then be written simultaneously by $P_s$ and $P_t$ at the next epoch. Thus the simplified model-updating function Equ~\ref{equ:hpsgd-simp} cannot be used in the real scenario, and therefore gradients are synchronized between workers instead of model parameters, updating the model with Equ~\ref{equ:hpsgd}, since gradients are intermediate data in the model-update process and do not need to be persisted. Furthermore, HPSGD eliminates the hyper-parameter $\gamma$. Instead, it continuously performs synchronization in another process (i.e., synchronizes whenever possible), making the synchronization phase highly flexible and bringing two benefits to distributed DNN training: 1) Improved robustness: methods like Local SGD fix $\gamma$, assuming the synchronization time is stable throughout the training process, which is impractical in real scenarios where exceptions are unpredictable. 2) A maximal number of synchronizations, which improves the convergence rate and stability of distributed DNN training.
For example, assume the local training time is $t_{train}$ seconds and the synchronization time is $t_{sync}$ seconds, where $t_{sync} = k \cdot t_{train}$. In Local SGD, a complete loop that contains $\gamma$ local trainings and one data synchronization takes $((\gamma + 1) \cdot t_{train} + t_{sync})$ seconds. In the same period of time, HPSGD can synchronize $(\frac{\gamma + 1}{k} + 1)$ times on average and train locally $(\gamma + k + 1)$ times, suggesting that in a given fixed time, HPSGD samples more features of the optimization surface and performs data synchronization more often. Consequently, HPSGD eliminates the hyper-parameter $\gamma$ while making the global model iterate faster, sample more features and achieve better accuracy. The pseudo-code of these behaviors is presented in Algorithm \ref{alg:HPSGD-synchronizing}. \begin{algorithm}[h] \caption{$P_s$'s behavior} \label{alg:HPSGD-synchronizing} \begin{algorithmic}[1] \If {$e-counter > 0$} \label{DPG:update-condition} \State Update global model: $w_i^{e+1} = w_i^{e - counter}- \mu\widehat{\nabla}_{i}^{e - counter}$ \EndIf \State $AllReduce$ gradients: $\widehat{\nabla}_{i}^{e} = AllReduce({\nabla_{i}^{a}})$ \State Reset the counter: $counter = 0$ \State Mark the $status$: $status = idling$ \end{algorithmic} \end{algorithm} As the pseudo-code shows, $P_s$ $AllReduce$s the gradients and performs the global model update with these synchronized gradients at the beginning of the next synchronization. Thus, step \ref{DPG:update-condition} ensures that synchronized gradients exist when performing the first global model update, since in practice the first epoch is almost certainly used for local training instead of synchronizing. Consequently, due to this updating strategy of $P_s$, the global model update is always delayed by $1$ synchronization relative to the corresponding $AllReduce$ operation.
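The algebraic equivalence behind Equ~\ref{equ:hpsgd-simp} (committing the averaged accumulated gradients to the old global model gives the same result as averaging the workers' final local models) can be checked numerically. The gradient sequences below are arbitrary placeholders; the identity treats each worker's accumulated gradients as given, regardless of how the evolving replica produced them.

```python
# Numeric check of the equivalence behind HPSGD's update rule: applying the
# averaged accumulated gradients to the old global model equals averaging the
# workers' final local models. Gradient sequences are arbitrary placeholders.

lr = 0.05
w0 = 2.0                                 # shared global model at epoch e1
grads = [[0.3, -0.1, 0.4],               # worker 0's gradients over [e1, e2]
         [0.2,  0.5, -0.2],              # worker 1
         [-0.4, 0.1,  0.3]]              # worker 2
n = len(grads)

# Left-hand side: commit the averaged accumulated gradients to w0.
lhs = w0 - lr * sum(sum(g_seq) for g_seq in grads) / n

# Right-hand side: each worker trains its own replica, then average the models.
finals = []
for g_seq in grads:
    w = w0
    for g in g_seq:
        w -= lr * g                      # local SGD step on the replica
    finals.append(w)
rhs = sum(finals) / n

print(abs(lhs - rhs) < 1e-12)  # True: the two update rules coincide
```

This is why synchronizing gradients instead of model parameters loses nothing mathematically while avoiding concurrent writes to the model by $P_s$ and $P_t$.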
\section{Experiments} \label{section:four} \subsection{Experimental setup} \textbf{Hardware}: An Nvidia DGX-Station is employed to set up the experimental environment, with 4 Nvidia Tesla V100 32G GPUs and an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz. \textbf{Software}: All experiments are run in an nvidia-docker environment with CUDA 9.0.176. PyTorch 1.6.0\footnote{\url{https://pytorch.org}} is utilized to simulate the distributed training process of the cluster by spawning multiple processes, each of which stands for an individual worker. \textbf{Methods}: SSGD, HPSGD, Local SGD~\cite{stich2018local}, and purely offline training (PSGD, no communication between workers during training). PSGD is only presented in the convergence-rate comparison, since it merely serves as a reference for the fastest training speed. \textbf{Models}: ResNet~\cite{he2016deep}, DenseNet~\cite{huang1608densely}, MobileNet~\cite{sandler2018mobilenetv2} and GoogLeNet~\cite{szegedy2015going}. \textbf{Datasets}: The Cifar-10~\cite{krizhevsky2009learning} dataset, which consists of $60,000$ $32 \times 32$ RGB images in total. \textbf{Other settings}: Learning rate: $0.01$, batch size: $128$, $\gamma$ of Local SGD: $8$, epoch size: $100$, loss function: cross entropy, optimizer: SGD. \subsection{Experiment Design and analysis} \textbf{Convergence and training loss}: Various models are trained with different methods to verify the convergence of HPSGD. The accuracy curves of 4 workers with different models, and of different cluster sizes with the ResNet-101 model, are presented in Figs~\ref{fig:all-acc} and \ref{fig:resnet-scale-acc}, respectively.
\begin{figure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth]{PIC/all_acc.pdf} \caption{} \label{fig:all-acc} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth]{PIC/resnet_scale_acc.pdf} \caption{} \label{fig:resnet-scale-acc} \end{subfigure}% \caption{(a) Accuracy comparisons of various models, each with different methods. (b) The accuracy comparison with 4, 8, 12 and 16 workers.} \end{figure} As shown in Fig~\ref{fig:all-acc}, the accuracy reached by HPSGD at epoch 100 is generally identical to SSGD's across all experiments, suggesting that HPSGD maintains model convergence by utilizing local gradients to help optimize the global training function. Moreover, the accuracy curve of HPSGD is significantly smoother than those of the other methods, especially on GoogLeNet. This is because the global model in HPSGD is updated by gradients that are repeatedly sampled on the sub-datasets between synchronizations, which considerably reduces the instability of mini-batch SGD. Furthermore, in some cases the accuracy of HPSGD even outperforms SSGD (e.g., DenseNet-121). We believe this is mainly due to a characteristic of the HPSGD algorithm: it is less likely to be trapped in a local optimum. Notably, since HPSGD lets workers compute their solutions independently and finally applies them to the global model, the global model can be seen as being optimized simultaneously toward multiple directions on the global optimization surface. Thus, the probability that multiple models are simultaneously trapped in their local optima is significantly lower than for a single model updating toward one direction. \textbf{Scale efficiency}: The scale efficiency of the different methods with ResNet-101 is shown in Fig~\ref{fig:all-scale}.
As the figure demonstrates, both HPSGD and Local SGD achieve much higher scale efficiency than SSGD when the number of workers is relatively small, thanks to significantly reduced network traffic. However, as the cluster size increases, Local SGD's scale efficiency drops drastically, indicating that a communication jam is triggered and its synchronization interval $\gamma$ needs to be enlarged; yet enlarging $\gamma$ leads to a slower convergence rate. Meanwhile, the impact on HPSGD is considerably smaller than on the other methods as the number of workers increases. Specifically, HPSGD obtains the same performance as Local SGD and 133\% more performance than SSGD when four workers participate in the training. With 16 workers, HPSGD obtains 75\% more performance than Local SGD and 250\% more than SSGD. This is mainly because, theoretically, HPSGD incurs no extra synchronization time at all during the distributed DNN training procedure, as it is entirely overlapped with the local training phase. Thus, the main reason for HPSGD's decreasing scale efficiency and performance loss is the increasing time spent in the waiting phase, caused by the differing computational performance of workers (gray part in Fig~\ref{fig:HPSGD-timeline}). We leave this as a future optimization direction. \begin{figure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{PIC/resnet_scale.pdf} \caption{} \label{fig:all-scale} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{PIC/resnet_train_time_anaysis.pdf} \caption{} \label{fig:resnet-realtime} \end{subfigure}% \caption{(a) The scale efficiency of different cluster sizes with the ResNet-101 model.
(b) The total time of different methods training Cifar-10 with ResNet-101.} \end{figure} \textbf{Convergence rate}: Fig~\ref{fig:resnet-realtime} illustrates the time cost of each epoch for the different methods. Here LSGD refers to Local SGD, due to limited space. PSGD serves as the lower bound on the time cost of distributed DNN training. The time difference between HPSGD and PSGD is mainly due to the impact of limited CPU time and performance differences between workers. The breakdown of the total training time is presented in Fig~\ref{fig:resnet-realtime-breakdown}. It shows that the computation time of SSGD and HPSGD is roughly the same when reaching either 80\% or 40\% accuracy, suggesting that while HPSGD shares the same convergence rate as SSGD, it drastically reduces the non-computation time and thereby boosts the distributed training process. On the other hand, although the total time of Local SGD is shorter than SSGD's when reaching either 80\% or 40\% accuracy, its computation time is relatively longer, indicating that the convergence rate of Local SGD is lower than that of SSGD and HPSGD. This phenomenon matches and verifies the explanation in Section~\ref{gradienst_utilization}. To rule out chance, we performed further experiments on the total training time (wall time) of four different models with the same configuration, illustrated in Fig~\ref{fig:all-realtime}. \begin{figure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{PIC/resnet_realtime_anaysis.pdf} \caption{} \label{fig:resnet-realtime-breakdown} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.63\textwidth} \includegraphics[width=\linewidth]{PIC/all_realtime.pdf} \caption{} \label{fig:all-realtime} \end{subfigure}% \caption{a) Upper: Time breakdown for reaching 80\% accuracy. Lower: Time breakdown for reaching 40\% accuracy.
b) Total training time used to reach 80\% and 40\% accuracy on different models with different methods.} \end{figure} \section{Conclusions and future work} \label{section:five} In this paper, we propose a novel Hierarchical Parallel SGD (HPSGD) algorithm. It first overlaps the time-consuming synchronization phase with the local training phase by deploying hierarchical computation across two processes, which significantly boosts distributed training. It then alleviates the stale gradients problem by utilizing the sub-gradients calculated by different workers to help update the global model. In detail, workers train independently on replicas of the global model, record local gradients, and finally commit these gradients to the global model. In such circumstances, the sub-gradients of different workers are not stale but advanced, and can be taken advantage of. Extensive experiments and comparisons show that the performance of HPSGD surpasses SSGD and Local SGD, which verifies its effectiveness and high efficiency. However, although HPSGD drastically reduces the synchronization time of the distributed training process, the waiting phase remains, caused by workers' imbalanced performance. In future work, we would like to investigate methods capable of reducing such waiting costs and thereby further improving the scalability of the cluster in distributed training. \bibliographystyle{splncs04}
\section{Introduction} The notion of a $\Gamma$-set is a fundamental constituent in mathematics: it is the most embracing generalization of the datum given on a set by a commutative addition with a zero element and it provides a common framework for many of the present efforts to understand the ``field with one element". In \cite{CCprel} we defined on the Arakelov compactification $\overline{{\rm Spec\,}{\mathbb Z}}$ of the algebraic spectrum of the integers a structure sheaf of $\Gamma$-rings which agrees with the classical structure sheaf when restricted to ${\rm Spec\,} {\mathbb Z}$, but whose stalk at the archimedean place uses in a crucial way the new freedom of moving from the category of abelian groups to that of $\Gamma$-sets. To define $\Gamma$-sets one first introduces the small, full subcategory $\Gamma^{\rm op}$ of the category $\frak{Fin}_*$ of pointed finite sets, whose objects are pointed sets $k_+:=\{0,\ldots ,k\}$, for each integer $k\geq 0$ ($0$ is the base point) and with morphism the sets ${\Gamma^{\rm op}}(k_+,m_+)=\{f: \{0,1,\ldots,k\}\to\{0,1,\ldots,m\}\mid f(0)=0\}$. A $\Gamma$-set is then defined as a (covariant) functor ${\Gamma^{\rm op}}\longrightarrow{\Se_*}$ between pointed categories and the morphisms in this category are natural transformations. The closed structure of the category $\Gamma{\Se_*}$ of $\Gamma$-sets is defined by setting \begin{equation}\label{closedstructure0} \underline{\Gamma\Se}_*(M,N)=\{k_+\mapsto \Gamma{\Se_*}(M,N(k_+\wedge -))\}, \end{equation} where $\wedge$ is the smash product of pointed sets. This formula uniquely defines the smash product of $\Gamma$-sets by applying the adjunction $$ \underline{\Gamma\Se}_*(M_1\wedge M_2,N)=\underline{\Gamma\Se}_*(M_1,\underline{\Gamma\Se}_*(M_2,N)). $$ The notions of rings and modules then acquire a meaning in this symmetric monoidal closed category. 
In particular, $\Gamma$-sets can equivalently be viewed as modules over the simplest $\Gamma$-ring ${\mathfrak s}: {\Gamma^{\rm op}}\longrightarrow{\Se_*}$ whose underlying $\Gamma$-set is the identity functor, whence the name ${\mathfrak s}$-module to denote a $\Gamma$-set, and the more suggestive notation for morphisms in $\Gamma{\Se_*}$ $$ {\mbox{Hom}}_{\mathfrak s}(M,N):=\Gamma{\Se_*}(M,N), \ \ \underline{\mbox{Hom}}_{\mathfrak s}(M,N):= \underline{\Gamma\Se}_*(M,N). $$ Abelian groups form a full subcategory of the category ${\sss-{\rm Mod}}$ of ${\mathfrak s}$-modules: the inclusion functor associates to an abelian group $A$ the functor (Eilenberg-Mac Lane object) $HA:{\Gamma^{\rm op}}\longrightarrow{\Se_*}$ which assigns to a finite pointed set $X$ the pointed set of $A$-valued maps on $X$ vanishing at the base point of $X$ (\!\!\cite{DGM}, 2.1.2). \newline At the conceptual level, it is important to make as explicit as possible the link between the category ${\sss-{\rm Mod}}$ and the naive interpretation of vector spaces over ${\mathbb F}_1$ as pointed sets (see \cite{KS}). This link can be understood by viewing ${\mathfrak s}$-modules as {\em pointed objects} in the topos $\widehat{\Gamma}$ of covariant functors ${\Gamma^{\rm op}}\longrightarrow\frak{Sets}$. Thus, provided one works in $\widehat{\Gamma}$, one may think of our basic objects as ``pointed sets". The reason for this choice of topos is to provide room for the identity functor ${\mbox{Id}}: {\Gamma^{\rm op}}\longrightarrow {\Gamma^{\rm op}}$ which defines the simplest $\Gamma$-ring: ${\mathfrak s}$. In other words both ${\Gamma^{\rm op}}$ and objects in $\widehat{\Gamma}$ are based on the idea of pointed sets which underlies the naive interpretation of ${\mathbb F}_1$. In this way one reaches a workable framework that extends strictly the category of ${\mathbb Z}$-modules. 
To perform homological algebra one needs, guided by the Dold-Kan correspondence, to move from the basic category ${\sss-{\rm Mod}}$ to its simplicial version, namely the category ${\Gamma\cS_*}$ of $\Gamma$-spaces, where ${\mathcal S_*}$ denote the category of pointed simplicial sets, {\it i.e.\/}\ contravariant functors $\Delta \longrightarrow \frak{Sets}_* $ where $\Delta$ is the ordinal number category and $\frak{Sets}_*$ is the category of pointed sets. The category ${\Gamma\cS_*}$ plays a central role in \cite{DGM}. We denote by $\underline{\mbox{Hom}}_{\mathcal S_*}$ the internal hom functor in ${\mathcal S_*}$. As explained in {\it op.cit.\/}\ , one can use the closed structure of ${\mathcal S_*}$ to endow ${\Gamma\cS_*}$ with the structure of a symmetric monoidal closed category. The closed structure is defined as follows \begin{equation}\label{closedstructure} \underline{\mbox{Hom}}_{\Gamma\cS_*}(M,N):=\{(k_+,[q])\mapsto {\mbox{Hom}}_{\Gamma\cS_*}(M\wedge \Delta[q]_+,N(k_+\wedge -))\}. \end{equation} The monoidal structure is given by the smash product where $M\wedge N$ is defined using the closed structure and can be described as a Day's product (see {\it op.cit.\/}\ 2.1.2.1) $$ (M\wedge N)(Z)=\int^{(X,Y)}\left( M(X)\wedge N(Y)\right)\wedge {\Gamma^{\rm op}}(X\wedge Y,Z). $$ The key result of Lydakis ({\it op.cit.\/}\ Theorem 2.1.2.4) states that there are choices of coherency isomorphisms so that the triple $({\Gamma\cS_*}, \wedge,{\mathfrak s})$ is a symmetric monoidal closed category. Our goal is to use $\Gamma$-spaces to perform homological algebra in the category ${\sss-{\rm Mod}}$ by applying an analogue of the Dold-Kan correspondence. For our arithmetic applications it is crucial to work with {\em non-fibrant} $\Gamma$-spaces and define a suitable substitute for the homotopy groups. 
In homotopy theory the Kan extension property is used in two ways:\newline -~to show that the relation of homotopy is an equivalence relation,\newline -~to define the group structure on $\pi_n$ for $n\geq 1$. \newline To define the homology $H_n(X,F)$ of a pointed simplicial set $X$ with coefficients in an ${\mathfrak s}$-module $F$, the problem to obtain the substitute of the group structure does not arise since, already in the classical case where $F=HA$ corresponds to an abelian group, the interchange law shows that the group structure in homology is the same as that inherited from the underlying $\Gamma$-set (see Remark \ref{homotopgroup}). Thus the issue created by the lack of the Kan extension property occurs mainly at the level of pointed (non fibrant) simplicial sets $X$, and ${\Gamma^{\rm op}}$ is not involved there. One thus needs, as an intermediate step, to extend the combinatorial construction of the homotopy $\pi_n(X,\star)$ for a pointed simplicial set which is not fibrant. This step is described in Section \ref{sectcombinatorial1} of the present paper. The main difficulty to obtain a meaningful combinatorial notion is that the relation of homotopy between $n$-simplices $x,y\in X_n$ as in \cite{May} Definition 3.1 is no longer an equivalence relation. By definition (see {\it op.cit.\/}\ ) \begin{equation}\label{Maydefn} R=\{(x,y)\in X_n\times X_n\mid \partial_j x=\partial_jy\, \forall j \, \&\, \exists z \mid \partial_jz=s_{n-1}\partial_j x\, \forall j<n, \ \partial_nz=x,\, \partial_{n+1}z=y\} \end{equation} The simplices involved in the definition of $\pi_n$ correspond to the elements of ${\mbox{Hom}}_{\mathcal S_*}(S^n,X)$, {\it i.e.\/}\ by Yoneda's lemma to $x\in X_n$ with $ \partial_j x=*\, \forall j$. Here $S^n$ is the combinatorial sphere, {\it i.e.\/}\ the pointed simplicial set $(\Delta[n],\partial \Delta[n])$ obtained by collapsing the boundary $\partial \Delta[n]$ of the standard simplex to a single base point. 
The relation $R$ on ${\mbox{Hom}}_{\mathcal S_*}(S^n,X)\subset X_n$ coincides with the relation on the $0$-skeleton $Y_0$ associated to the two boundary maps $\partial_j:Y_1\to Y_0$, where $Y:=\Omega^n(X)$ is obtained from $X$ by iterating $n$-times the endofunctor $\Omega:{\mathcal S_*}\longrightarrow {\mathcal S_*}$ of \cite{Moore} (Definition 1.6). In this way one reduces the problem to the definition of $\pi_0Y$ for $Y=\Omega^n(X)$. Then one can simply define $\pi_0Y$ as the quotient $\pi^{\rm comb}_0Y$ of $Y_0$ by the equivalence relation generated by $R$. \begin{defn}\label{defnpij} Let $n\geq 0$ be an integer and $X$ a pointed simplicial set. Define \begin{equation}\label{pins00} \pi^{\rm comb}_n(X):=\pi^{\rm comb}_0(\Omega^n(X))={\mbox{Hom}}_{\mathcal S_*}(S^n,X)/\tilde R \end{equation} where $\tilde R$ is the equivalence relation generated by the restriction of the relation $R$ of \eqref{Maydefn}. \end{defn} This notion developed in Section \ref{sectcombinatorial1} suffices for the goals of the present paper, but for future applications we also wish to keep the finer information contained in the relation $R$. This is achieved by introducing the topos $\frak{Sets}^{(2)}$ in which the finer notion, denoted $\pi^{(2)}_n(X,\star)$, takes its value : {\it i.e.\/}\ $\pi^{(2)}_n(X,\star)$ is a $2$-set, {\it i.e.\/}\ an object of $\frak{Sets}^{(2)}$. This construction is described in Section \ref{sectcombinatorial1bis} where we also show that the topos $\frak{Sets}^{(2)}$ is related to the topos of quivers. In Section \ref{secthomotop} we then obtain a general definition of homology of $\Gamma$-spaces, considered as simplicial $\Gamma$-sets. This homology is not, in general, a group but is a {$\Gamma$-2-set } {\it i.e.\/}\ a pointed covariant functor ${\Gamma^{\rm op}} \longrightarrow \frak{Sets}^{(2)}_*$. 
In Section \ref{secthomology} we construct, given an integer $n\geq 0$, an arbitrary pointed simplicial set $X$, and an arbitrary ${\mathfrak s}$-module ($\Gamma$-set) $F$, the homology $H_n(X,F)$ as follows. \begin{defn}\label{defnhomol} Let $n\geq 0$ be an integer, $X$ a pointed simplicial set, and $F$ an ${\mathfrak s}$-module. Define \begin{equation}\label{pins1} H_n(X,F):=\{k\mapsto \pi^{\rm comb}_n(F\circ ( X\wedge k_+ ))\} \end{equation} as an ${\mathfrak s}$-module. \end{defn} As in \cite{DGM}, we extend the $\Gamma$-set $F$ to an endofunctor of the category of pointed sets. When $F=HA$ for an abelian group $A$, $H_n(X,F)$ coincides with the standard definition of homology: \begin{thm}\label{propcompareintro} Let $A$ be an abelian group, and $X$ a pointed simplicial set. For any integer $n\geq 0$ one has the equality of ${\mathfrak s}$-modules \begin{equation}\label{equbasic0} H_n(X,HA)=H(H_n(X,A)) \end{equation} where $H_n(X,A)$ is the (reduced) abelian group homology of $X$ with coefficients in $A$. \end{thm} Again, we stress the fact that we apply Definition \ref{defnhomol} in cases where the pointed simplicial set $F\circ ( X\wedge k_+ )$ is {\em not fibrant}. In particular, in our applications the ${\mathfrak s}$-modules $H_n(X,F)$ are rarely groups. In Section \ref{sectgromov} we apply Definition \ref{defnhomol} to the ${\mathfrak s}$-modules we introduced in \cite{CCprel}, at the archimedean place of $\overline{{\rm Spec\,}{\mathbb Z}}$. We show that these coefficients yield a semi-norm on the ordinary singular homology $H_n(X,{\mathbb R})$ of a topological space $X$, and our goal is to compare this semi-norm with the Gromov norm, whose definition is recalled in Section \ref{simplicialvol}. In Section \ref{sectspecZ} we review our construction (see \cite{CCprel}) of the structure sheaf ${{\cO\subset H\Q}}$ of ${\mathfrak s}$-algebras on $\overline{{\rm Spec\,}{\mathbb Z}}$. 
The sheaves ${\mathcal O}(D)$ associated to Arakelov divisors $D=D_{\rm finite}+D_\infty$, as in {\it op.cit.\/}, provide a one-parameter family of ${\mathfrak s}$-modules $\Vert H{\mathbb R} \Vert_\lambda$ ($\lambda\in{\mathbb R}_+$) which we can use as coefficients in formula \eqref{pins1}. In Section \ref{sectequivalence}, Proposition \ref{conj}, we prove that for any topological space $X$ the filtration of the singular homology group $H_n(X,{\mathbb R})$ by the $H_n(X,\Vert H{\mathbb R} \Vert_\lambda)$ defines a semi-norm which is equivalent to the Gromov norm. The final Section \ref{sectequal} is entirely devoted to showing that the two norms on $H_n(X,{\mathbb R})$, the Gromov norm and our new norm, are in fact equal when $X=\Sigma$ is a compact Riemann surface. The difficulty in the proof of this result is due to the fact that in order to obtain elements of the homology $H_2(\Sigma,\Vert H{\mathbb R} \Vert_\lambda)$ one needs to get singular chains which are not only cycles but are such that all their simplicial boundaries actually vanish. While one knows that this Moore normalization is possible, the problem is to effect it without increasing the $\ell^1$-norm of the chain: this requires delicate geometric work, described in Sections \ref{block} and \ref{block1}. One then obtains the desired equality in the form of the following \begin{thm}\label{chainthmintro} Let $\Sigma$ be a compact Riemann surface and $[\Sigma]$ its fundamental class in homology. Then $[\Sigma]$ belongs to the range of the canonical map $H_2(\Sigma,\Vert H{\mathbb R} \Vert_\lambda)\to H_2(\Sigma,{\mathbb R})$ if and only if $\lambda$ is larger than the Gromov norm of $[\Sigma]$. \end{thm} We expect that a similar statement holds in hyperbolic geometry in any dimension. The natural testing ground for the homology $H_n(X,\Vert H{\mathbb R} \Vert_\lambda)$ is in hyperbolic spaces since the Gromov norm does not vanish there for $n>1$ while it vanishes identically on all spheres. 
This is in contrast with the construction of the spectra associated to $\Gamma$-spaces $M$, where the associated endofunctor $X \mapsto M\circ X$ is only tested on spheres. \section{Homology of a simplicial set with coefficients in an ${\mathfrak s}$-module}\label{sectcombinatorial} Our goal in this section is to reach a good definition of the homology of a pointed simplicial set with coefficients in an ${\mathfrak s}$-module and to show that it generalizes the standard notion in algebraic topology. This is achieved in Definition \ref{fact1} and Theorem \ref{propcompare}. As a preliminary step we need to refine the definition of the homotopy groups $\pi_n$ by remaining at the combinatorial level and ignoring the group structure. Classically (see {\it e.g.\/}\ \cite{DGM} Appendix A.2.3), the function space of maps between pointed simplicial sets $X$ and $Y$ is defined as the pointed simplicial set: \begin{equation}\label{functionspace} {\rm Map}_*(X,Y):=\underline{\mbox{Hom}}_{\mathcal S_*}(X,{{\rm sin}}\vert Y\vert) \end{equation} This amounts to replacing $Y$ with the {\em fibrant} simplicial set ${{\rm sin}}\vert Y\vert$, and it entails that the $\pi_n$, defined using such a fibrant replacement (see also {\it op.cit.\/}\ A.2.5.1), are then groups for $n\geq 1$ (abelian for $n>1$). Thus in the definition of the homotopy groups of a $\Gamma$-space $M:{\Gamma^{\rm op}}\longrightarrow{\mathcal S_*}$ (see {\it op.cit.\/}\ Definition 2.2.1.2) \begin{equation}\label{functionspace1} \pi_qM:=\varinjlim_k \pi_{k+q}M(S^k) \end{equation} the terms involved in the colimit are groups, hence $\pi_qM$ is an abelian group. \newline For our applications, however, the simplification effected by the definition \eqref{functionspace} hides certain finer features of $\Gamma$-spaces which become relevant for arithmetic constructions. We shall thus work directly in the category ${\Gamma\cS_*}$ without performing this fibrant replacement. 
\subsection{Homotopy for pointed simplicial sets}\label{sectcombinatorial1} In order to define the new homotopy $\pi^{\rm new}_n(X,\star)$ for a general pointed simplicial set $(X,\star)$, we shall first reduce to the case of $\pi^{\rm new}_0$, following \cite{Moore} Definition 1.9. One defines an endofunctor $\Omega$ of ${\mathcal S_*}$ which associates to a pointed simplicial set $(X,*)$ the pointed simplicial set $\Omega(X,*)$ defined as follows (with $k\geq 0$ an integer) \begin{equation}\label{omegadef} \Omega(X,*)_k:=\{x\in X_{k+1}\mid \partial_0(x)=*, \ \partial_{i_0}\ldots \partial_{i_k}x=*\,,\,~\forall i_j\in \{0,\ldots ,k+1\}\} \end{equation} with the simplicial structure given by faces \begin{equation}\label{omegadef1} \partial_j:\Omega(X,*)_k\to \Omega(X,*)_{k-1}, \ \ \partial_j(x)=\partial_{j+1}^X(x) \end{equation} and degeneracies \begin{equation}\label{omegadef2} s_j:\Omega(X,*)_k\to \Omega(X,*)_{k+1}, \ \ s_j(x)=s_{j+1}^X(x). \end{equation} The definition of the homotopy $\pi^{\rm new}_n(X,\star)$ is then reduced to that of $\pi^{\rm new}_0$ for the simplicial set $\Omega^n(X)$ obtained by iterating the endofunctor $\Omega$ $n$ times: \begin{equation}\label{pindef} \pi^{\rm new}_n(X,\star):=\pi^{\rm new}_0(\Omega^n(X)). \end{equation} One shows by induction\footnote{Note that a product $\partial_{i_0}\ldots \partial_{i_k}$ can be reordered using the simplicial rules so that the indices fulfill $i_0\geq i_1\geq \ldots \geq i_k$.} on $n$ that \begin{equation}\label{omegadef3} \Omega^n(X,*)_k =\{x\in X_{n+k}\mid \partial_j(x)=*, \ \forall j<n, \ \partial_{i_0}\ldots \partial_{i_k}x=*\,,\,~\forall i_j\in \{0,\ldots ,k+n\}\} \end{equation} while the faces and degeneracies are obtained as in \eqref{omegadef1} and \eqref{omegadef2} but using $\partial_{j+n}^X$ and $s_{j+n}^X$. 
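The reordering asserted in the footnote rests on the simplicial identity $\partial_i\partial_j=\partial_{j-1}\partial_i$ for $i<j$. The following Python sketch (our own encoding, purely illustrative: the faces of a simplex are modelled by deleting a vertex from its vertex tuple) implements this rewriting and checks both that the normal form has non-increasing indices and that the rewriting does not change the composite map.

```python
def normalize_faces(word):
    """Reorder a composite of face operators d_{i0} d_{i1} ... d_{ik}
    (leftmost applied last) so that i0 >= i1 >= ... >= ik, using the
    simplicial identity d_i d_j = d_{j-1} d_i for i < j.
    Each rewrite strictly decreases the sum of the indices, so the
    loop terminates."""
    w = list(word)
    done = False
    while not done:
        done = True
        for p in range(len(w) - 1):
            if w[p] < w[p + 1]:
                # rewrite the adjacent pair d_a d_b (a < b) as d_{b-1} d_a
                w[p], w[p + 1] = w[p + 1] - 1, w[p]
                done = False
    return w

def apply_word(word, simplex):
    """Apply the composite to a simplex given by its vertex tuple:
    d_j deletes the j-th vertex, and the rightmost operator acts first."""
    s = list(simplex)
    for j in reversed(word):
        del s[j]
    return tuple(s)

w = [0, 2, 1, 3]
print(normalize_faces(w))           # -> [1, 1, 0, 0], indices non-increasing
assert apply_word(w, range(10)) == apply_word(normalize_faces(w), range(10))
```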
One describes directly the first levels of $\Omega^n(X)$ as follows \begin{lem}\label{levels} Let $(X,*)$ be a pointed simplicial set.\newline $(i)$~The $0$-skeleton $(\Omega^n(X))_0$ is the set of simplices $x\in X_n$ with all $\partial_j(x)$ equal to the base point.\newline $(ii)$~$(\Omega^n(X))_0$ coincides with ${{\mbox{Hom}}}_{\mathcal S_*}(S^n,X)\subset X_n$ where $S^n$ is obtained by collapsing the boundary $\partial \Delta[n]$ of the standard simplex to a single base point\footnote{This is not the definition used in \cite{DGM}, where $S^n$ is defined as the $n$-fold smash product $S^1\wedge \dots \wedge S^1 $ of $S^1=\Delta[1]/ \partial \Delta[1]$. This distinction in the definition of the homotopy groups is irrelevant in the fibrant case since the geometric realizations are homeomorphic, but as in \cite{Moore} our choice is more convenient to compute the set of maps using Yoneda's Lemma.}.\newline $(iii)$~The $1$-skeleton $(\Omega^n(X))_1$ is the set of $x\in X_{n+1}$ which fulfill the conditions $$ \partial_i\partial_j(x)=*\,,\,~\forall i,j, \ \ \partial_j(x)=*\,,\,~\forall j\in \{0,\ldots, n-1\}. $$ $(iv)$~The boundaries $\partial_i:(\Omega^n(X))_1\to (\Omega^n(X))_0$ for $i=0,1$ are given by $\partial_n$ and $\partial_{n+1}$.\newline $(v)$~The relation $R$ on $(\Omega^n(X))_0={{\mbox{Hom}}}_{\mathcal S_*}(S^n,X)\subset X_n$ given by $$ x R y \iff \exists z\in (\Omega^n(X))_1~\text{s.t.}~ \partial_0 z=x~\text{and}~ \partial_1 z=y $$ coincides with the relation of homotopy between $n$-simplices as in \eqref{Maydefn}. 
\end{lem} \proof $(i)$~Follows from \eqref{omegadef3} for $k=0$.\newline $(ii)$~By Yoneda's lemma one checks that the morphisms $y\in{{\mbox{Hom}}}_{\mathcal S_*}(S^n,X)$, {\it i.e.\/}\ the elements of ${{\mbox{Hom}}}_{\mathcal S_*}(\Delta[n],X)$ which send $\partial \Delta[n]$ to the base point, are the same as the elements of the $0$-skeleton $(\Omega^n(X))_0$.\newline $(iii)$~Follows from \eqref{omegadef3} for $k=1$.\newline $(iv)$~Follows from $\partial_j=\partial_{j+n}^X$ for $j=0,1$. \newline $(v)$~This follows from the previous part of the lemma since the relation \eqref{Maydefn} restricts to \begin{equation}\label{Maydefnbis} R=\{(x,y)\in X_n\times X_n\mid \partial_j x=\partial_jy=*\, \forall j \, \&\, \exists z \mid \partial_jz=*\, \forall j<n, \ \partial_nz=x,\, \partial_{n+1}z=y\} \end{equation} \endproof \begin{rem}\label{geomrem} The geometric meaning of the endofunctor $\Omega$ can be understood as explained to us by B. Dundas: first, as in \cite{DGM} A.2.7, one has a combinatorial model $PX$ mimicking the path space of a simplicial set $X$ by precomposing the functor $X$ with the endofunctor $[0]\coprod \bullet: \Delta\longrightarrow \Delta$. This simply shifts the indices, {\it i.e.\/}\ one has $(PX)_k=X_{k+1}$ and the indices of faces and degeneracies are shifted by $1$. The link with ordinary paths is given by precomposing with the morphism of simplicial sets $ \gamma: \Delta[1]\times \Delta[q]\to \Delta[q+1] $, associated as $\gamma:=N(p)$ by the nerve functor $N$ to $$ p: [1]\times [q]\to [q+1], \ \ p(0,j):=0 \ \forall j, \ \ p(1,j):=j+1 \ \forall j $$ Requiring that the two end points of the path associated to $x\in X_{q+1}={\mbox{Hom}}(\Delta[q+1],X)$ are equal to the base point $*$ (when $X$ is pointed) gives exactly the conditions of \eqref{omegadef} defining $\Omega(X)$. When $X$ is fibrant one obtains in this way a model for its loop space. 
\end{rem} For a {\em fibrant} simplicial pointed set $X$, the relation \eqref{Maydefnbis} is an {\em equivalence} relation and the quotient by this relation defines $\pi_0(\Omega^n(X))$ which is known to be a group, for $n\geq 1$ (see \cite{May,Moore}, or Theorem 7.2 in Chapter III of \cite{GJ}). Note also that when $X$ is fibrant the above equivalence relation on ${{\mbox{Hom}}}_{\mathcal S_*}(S^n,X)\subset X_n$ coincides with the one defined by the two boundary maps from the $1$-skeleton of the simplicial set $\underline{\mbox{Hom}}_{\mathcal S_*}(S^n,X)$ (see \cite{Moore} Lemma 1B.3). \newline On the other hand, the simplicial sets $X$ we consider here are {\em not necessarily fibrant} and the relation $R$ is not in general transitive (nor symmetric). The easy solution to bypass this problem is to define $\pi^{\rm comb}_n(X)$ as the quotient by the equivalence relation generated by the relation $R$ in agreement with Definition \ref{defnpij}. This provides a first notion of homotopy which suffices for the goal of the present paper. One has by construction \begin{equation}\label{pij} \pi^{\rm comb}_n(X,\star):=\pi^{\rm comb}_0(\Omega^n(X)). \end{equation} We state simple properties of this combinatorial notion \begin{prop}\label{propprepa} $(i)$~Let $X$ be a pointed simplicial set and $k>0$ an integer. Then for any $n$ $$ \pi^{\rm comb}_n(X \wedge k_+)=\pi^{\rm comb}_n(X)\wedge k_+. $$ $(ii)$~Let $X,Y$ be pointed simplicial sets, one has for any $n$ $$ \pi^{\rm comb}_n(X \times Y)=\pi^{\rm comb}_n(X)\times \pi^{\rm comb}_n(Y). $$ \end{prop} \proof $(i)$~An element $x\in (X \wedge k_+)_n$, $x\neq *$ is of the form $x=(a,j)$ with $a\in X_n$ and $0<j\leq k$. Two elements $x=(a,j)$ and $x'=(a',j')$ fulfill $(x,x')\in R$ as in \eqref{Maydefnbis} if and only if $j=j'$ and $(a,a')\in R_X$ since the boundaries preserve the index $j$. \newline $(ii)$~This follows since $(X\times Y)_n=X_n\times Y_n$ and the boundaries act componentwise. 
\endproof \subsection{The finer notion $\pi^{(2)}_n(X)$ and the topos $\frak{Sets}^{(2)}$}\label{sectcombinatorial1bis} For later applications to Arakelov divisors, Definition \ref{defnpij} is too coarse and one would like to \begin{itemize} \item keep all the information about the relation $R$ and \item still think of $\pi^{\rm new}_0$ as a set. \end{itemize} The idea of ``topos'' of Grothendieck \cite{SGA} comes to the rescue, providing a satisfactory answer. We consider the topos $\frak{Sets}^{(2)}$ of contravariant functors to the category of sets from the small category obtained by restricting the objects of $\Delta$ to $[0]$ and $[1]$ and keeping the same morphisms, as in the following definition \begin{defn}\label{defnpin} $(i)$~Let $X:{\Delta^{\rm op}}\longrightarrow \frak{Sets}$ be a simplicial set. We define $\pi^{(2)}_0(X)$ as the object of $\frak{Sets}^{(2)}$ which is the restriction of the functor $X$ to the full subcategory of $\Delta$ with objects $[0],[1]$ and the same morphisms as $\Delta$. \newline $(ii)$~Let $X$ be a pointed simplicial set; then we define \[\pi^{(2)}_n(X):=\pi^{(2)}_0(\Omega^n(X)). \] \end{defn} It turns out that the topos $\frak{Sets}^{(2)}$ can also be described as the dual of the small category with a single object whose morphisms form the monoid ${\mathcal M}$ with three elements $1,m_0,m_1$ and the multiplication table specified by the rule $m_j x=m_j$ for all $x\in {\mathcal M}$ and $j\in \{0,1\}$. \begin{prop}\label{propquivers} The topos $\frak{Sets}^{(2)}$ is the same as the dual of the monoid ${\mathcal M}$. \end{prop} \proof By definition an object $F$ of the topos $\frak{Sets}^{(2)}$ is a pair of sets $F(0),F(1)$, with two maps $\partial_j:F(1)\to F(0)$, $j\in \{0,1\}$ and a map $s:F(0)\to F(1)$ such that $\partial_j\circ s={\mbox{Id}}$. This implies that $s:F(0)\to F(1)$ is an injection and one can thus view $F(0)$ as a subset of $F(1)$ and consider the two self-maps $T_j=s\circ \partial_j: F(1)\to F(1)$. 
They fulfill the rule $$ T_i\circ T_j=T_j \,,\,~\forall i,j\in \{0,1\} $$ since $s \circ (\partial_i\circ s) \circ \partial_j=s \circ ({\mbox{Id}}) \circ \partial_j=s\circ \partial_j$. Thus one obtains an object in the dual $\hat {\mathcal M}$ of the monoid defined by the opposite of the above rules. Conversely given an object $X$ of $\hat {\mathcal M}$, {\it i.e.\/}\ a set $X$ endowed with a right action of ${\mathcal M}$ one defines an object of $\frak{Sets}^{(2)}$ by setting $F(1):=X$, $F(0):={\rm Range}( T_j)$ which does not depend on the choice of $j\in \{0,1\}$. One lets $s:F(0)\to F(1)$ be the inclusion as a subset, and $\partial_j:F(1)\to F(0)$ is given by $T_j$. One checks that $\partial_j\circ s={\mbox{Id}}$. One obtains in this way two functors $\frak{Sets}^{(2)}\longrightarrow \hat {\mathcal M}$ and $\hat {\mathcal M}\longrightarrow \frak{Sets}^{(2)}$ which are inverse of each other. \endproof \begin{rem} The topos $\frak{Sets}^{(2)}$ is closely related to the topos of quivers but is not the same. By definition a quiver is a directed graph where loops and multiple arrows between two vertices are allowed, {\it i.e.\/}\ a multidigraph. It is described in a precise manner by two sets $(V,E)$ which represent the vertices and the edges of the graph and two maps $d_j:E\to V$, $j\in \{0,1\}$, which give the source and the target of an edge. These two maps do not fulfill any condition. One obtains an object of $\frak{Sets}^{(2)}$ by setting $F(0):=V$ and $F(1):=V \coprod E$, the disjoint union of $V$ and $E$. One then lets $s:F(0)\to F(1)$ be the inclusion while the two maps $\partial_j:F(1)\to F(0)$, $j\in \{0,1\}$ are given by $\partial_j=({\mbox{Id}}, d_j):V \coprod E\to V$. One has $\partial_j\circ s={\mbox{Id}}$ by construction, and $F$ is an object of $\frak{Sets}^{(2)}$. 
Conversely, given an object $F$ of $\frak{Sets}^{(2)}$ one obtains a quiver by setting $$ V=F(0), \ \ E=F(1)\setminus s(F(0)), \ \ d_j=\partial_j \vert E $$ However, this second construction is not functorial. In fact the topos of quivers has two points, given by the functor to the set of vertices and the functor to the set of edges. Similarly, these two functors give the two points of the topos $\frak{Sets}^{(2)}$, but in the latter case the functor to the set of edges never takes the value $\emptyset$ when the functor to the set of vertices takes a non-empty value. This shows that the topos of quivers is not the same as the topos $\frak{Sets}^{(2)}$. \end{rem} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.6]{subobject1.pdf} \end{center} \caption{Subobject classifier for $\frak{Sets}^{(2)}$. \label{subobject} } \end{figure} \begin{lem}\label{omega} The subobject classifier $\Omega$ of the topos $\frak{Sets}^{(2)}$ is the object with two vertices ``False'' and ``True'' and five edges which, besides the two degenerate ones, form the graph of Figure \ref{subobject}. \end{lem} \proof It is a general fact (see \cite{MM}, \S I.4) that for a topos which is the dual of a monoid ${\mathcal M}$ (viewed as a category with a single object), {\it i.e.\/}\ the topos of sets with a right action of ${\mathcal M}$, the subobject classifier is given by the set ${\mathcal J}$ of right ideals of ${\mathcal M}$ on which the right action of ${\mathcal M}$ is defined by $$ J.m:=\{n\in {\mathcal M}\mid mn\in J\}\,,\,~\forall J\in {\mathcal J}, \ m \in {\mathcal M}. 
$$ Taking the above ${\mathcal M}$ with three elements $1,m_0,m_1$ and the multiplication table specified by the rule $m_j x=m_j$ for all $x$ and $j\in \{0,1\}$, one finds that ${\mathcal J}$ contains five elements $$ {\mathcal J}=\{\emptyset, \{m_0\}, \{m_1\}, \{m_0,m_1\}, {\mathcal M}\} $$ and that the right action $T_j$ of $m_j\in{\mathcal M}$ fixes $\emptyset$ and ${\mathcal M}$ (which are hence degenerate edges, {\it i.e.\/}\ vertices) while $T_j\{m_j\}={\mathcal M}$ and $T_i\{m_j\}=\emptyset$ for $i\neq j$. Thus the set $V$ of vertices contains two elements $\emptyset$ and ${\mathcal M}$ and the non-degenerate edges are the three edges shown in Figure \ref{subobject}.\newline The reason for renaming the vertices $\emptyset$ as ``False'' and ${\mathcal M}$ as ``True'' and for the choice of the labels of the edges comes from the construction of the classifying map associated to a subobject $G'$ of an object $G$ in $\frak{Sets}^{(2)}$. One finds that the classifying map $f$ is obtained as follows as a map from $G$ to $\Omega$: \begin{enumerate} \item $\epsilon\in G'\Rightarrow f(\epsilon)={\rm True}$ \item $\epsilon\notin G' \, \text{and} \ \partial_j \epsilon\notin G'\, \forall j\Rightarrow f(\epsilon)={\rm False}$ \item $\epsilon\notin G', \ \partial_0 \epsilon\notin G' \, \text{and} \ \partial_1 \epsilon\in G'\Rightarrow f(\epsilon)={\rm Repair}$ \item $\epsilon\notin G', \ \partial_0 \epsilon\in G' \, \text{and} \ \partial_1 \epsilon\notin G'\Rightarrow f(\epsilon)={\rm Doubt}$ \item $\epsilon\notin G', \ \partial_0 \epsilon\in G' \, \text{and} \ \partial_1 \epsilon\in G'\Rightarrow f(\epsilon)={\rm Check}$ . \end{enumerate} The terminology ``False'' and ``True'' is the standard one for the two extremes in subobject classifiers; the notations for the edges are suggestive but somewhat arbitrary. \endproof This determination of the subobject classifier shows that the topos $\frak{Sets}^{(2)}$ is two-valued and not Boolean (see \cite{MM}, VI). 
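The count of right ideals above can be verified mechanically. The following Python sketch (the encoding of ${\mathcal M}$ as strings is our own, purely illustrative) enumerates the right ideals of ${\mathcal M}$ and checks the action $J.m=\{n\mid mn\in J\}$ described in the proof: $\emptyset$ and ${\mathcal M}$ are fixed, while $T_j\{m_j\}={\mathcal M}$ and $T_i\{m_j\}=\emptyset$ for $i\neq j$.

```python
from itertools import combinations

# The monoid M = {1, m0, m1} with unit 1 and rule m_j * x = m_j for all x.
M = ["1", "m0", "m1"]

def mul(a, b):
    return b if a == "1" else a     # m_j absorbs on the left

def is_right_ideal(J):
    return all(mul(a, m) in J for a in J for m in M)

subsets = [frozenset(c) for r in range(len(M) + 1) for c in combinations(M, r)]
ideals = [J for J in subsets if is_right_ideal(J)]
print(len(ideals))                  # 5 right ideals, as claimed

def act(J, m):
    """The right action J.m = {n in M | m*n in J} on right ideals."""
    return frozenset(n for n in M if mul(m, n) in J)

# "False" (empty ideal) and "True" (all of M) are fixed: degenerate edges.
for m in ["m0", "m1"]:
    assert act(frozenset(), m) == frozenset()
    assert act(frozenset(M), m) == frozenset(M)
# The edge {m0} is sent to M by T_0 and to the empty ideal by T_1.
assert act(frozenset(["m0"]), "m0") == frozenset(M)
assert act(frozenset(["m0"]), "m1") == frozenset()
```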
\subsection{Homotopy of $\Gamma$-spaces}\label{secthomotop} If ${\mathcal C}$ is a pointed category with initial and final object denoted $*$, one defines (see \cite{DGM}) the category of $\Gamma$-objects of ${\mathcal C}$ as the category $\Gamma{\mathcal C}$ of pointed covariant functors ${\Gamma^{\rm op}} \longrightarrow{\mathcal C}$. This construction applies to the category ${\mathcal S}_*$ of pointed simplicial sets to yield the category ${\Gamma\cS_*}$ of $\Gamma$-spaces. It also applies to the category $\frak{Sets}^{(2)}_*$ of pointed objects in $\frak{Sets}^{(2)}$. We shall call {$\Gamma$-2-sets } the objects of $\Gamma\frak{Sets}^{(2)}_*$. \begin{prop}\label{defnforgamma} $(i)$~Let $X$ be a $\Gamma$-space and $n\in {\mathbb N}$. Then the map $ k\mapsto \pi^{(2)}_n(X(k_+)) $ (resp. $k\mapsto \pi^{\rm comb}_n(X(k_+))$) extends to a pointed covariant functor $\pi^{(2)}_n(X):{\Gamma^{\rm op}} \longrightarrow\frak{Sets}^{(2)}_*$ (resp. to a $\Gamma$-set).\newline $(ii)$~For $n\in {\mathbb N}$, $\pi^{(2)}_n$ defines a functor $\pi^{(2)}_n:{\Gamma\cS_*}\longrightarrow\Gamma\frak{Sets}^{(2)}_*$ from $\Gamma$-spaces to {$\Gamma$-2-sets }\!.\newline $(iii)$~For $n\in {\mathbb N}$, $\pi^{\rm comb}_n$ defines a functor $\pi^{\rm comb}_n:{\Gamma\cS_*}\longrightarrow\Gamma\frak{Sets}_*$ from $\Gamma$-spaces to ${\sss-{\rm Mod}}$. \end{prop} \proof This follows from the naturality of Definitions \ref{defnpij} and \ref{defnpin}. \endproof The relation between $\pi^{(2)}_n$ and $\pi^{\rm comb}_n$ is given by \begin{equation}\label{defnpinbis} \pi^{\rm comb}_n:=\ell \circ \pi^{(2)}_n \end{equation} {\it i.e.\/}\ composition with the functor \[\ell:\frak{Sets}^{(2)}\longrightarrow \frak{Sets}, \ \ \ell(X):=\varinjlim_{{\mathcal C}^{o}} X(c)\] which assigns to a $2$-set its set of components. 
Here, ${\mathcal C}$ is any of the small categories defining $\frak{Sets}^{(2)}$ by duality as $\hat {\mathcal C}$ as in Proposition \ref{propquivers} and ${\mathcal C}^{o}$ is its opposite. Note that the functor $\ell$ does not correspond to a point of the topos $\frak{Sets}^{(2)}$. \subsection{$\Gamma$-sets as endofunctors}\label{sectendofunct} In this section we recall the construction of \cite{DGM} of the endofunctor of the category ${\mathcal S_*}$ associated to a $\Gamma$-space, in the case of discrete $\Gamma$-spaces, {\it i.e.\/}\ ${\mathfrak s}$-modules. By construction an ${\mathfrak s}$-module is a covariant functor $M:{\Gamma^{\rm op}}\longrightarrow \frak{Sets}_*$ and, as in Section 2.1.2.1 of {\it op.cit.\/}\ , we view pointed sets as discrete pointed simplicial sets, {\it i.e.\/}\ as constant functors ${\Delta^{\rm op}}\longrightarrow{\Se_*}$. \begin{lem}\label{EilMac} Let $M:{\Gamma^{\rm op}}\longrightarrow {\Se_*}$ be an ${\mathfrak s}$-module. Then the associated endofunctor of the category ${\mathcal S_*}$ of pointed simplicial sets is obtained by composition with $M$ viewed as an endofunctor of the category ${\Se_*}$ of pointed sets. \end{lem} \proof One first extends the functor $M:{\Gamma^{\rm op}}\longrightarrow {\Se_*}$ to an endofunctor ${\Se_*}\longrightarrow {\Se_*}$ of pointed sets. This is done by taking a colimit over the finite subsets as explained in \S 2.2.1.1 of \cite{DGM}. Then, one applies the technique described in {\it op.cit.\/}\ that uses, for a simplicial set $X=\{[q]\mapsto X_q\}$, the diagonal $$ M(X):=\{[q]\mapsto M(X_q)_q\}. $$ Since by construction $M(X_q)$ is a discrete simplicial set, it is the same in all degrees, so the index $q$ in $M(X_q)_q$ plays no role and we simply write $M(X_q)$. 
Hence, starting with the pointed simplicial set $X:{\Delta^{\rm op}}\longrightarrow {\Se_*}$, we obtain a new pointed simplicial set by composition, {\it i.e.\/}\ \begin{equation}\label{commp} X\mapsto M(X)= M\circ X: {\Delta^{\rm op}} \longrightarrow {\Se_*}. \end{equation} In summary, the result follows from \S 2.2.1.1 of {\it op.cit.\/}\ . \endproof The basic example of an ${\mathfrak s}$-module is given in 2.1.2.1 of \cite{DGM}, where one associates to an abelian monoid $A$ with a zero element the functor $M=HA$ \begin{equation}\label{hadefn} HA(k_+)=A^k, \ \ Hf:HA(k_+)\to HA(n_+), \ Hf(m)(j):=\sum_{f(\ell)=j} m_\ell \end{equation} where $f:k_+\to n_+$ is a map of pointed sets and $m=(m_1,\ldots ,m_k)\in HA(k_+)$. The zero element of $A$ gives meaning to the empty sum. In the special case when the monoid $A$ is an abelian group, the composition \eqref{commp}, {\it i.e.\/}\ the functor $HA\circ X: {\Delta^{\rm op}} \longrightarrow {\Se_*}$, factors through simplicial abelian groups (the functor $HA$ is the composite of a more precise functor $A\mapsto {\rm Ab}A$ to abelian groups with the forgetful functor from abelian groups to pointed sets, the base point being $0$) and always fulfills the Kan extension property. The geometric realization $\vert HA\circ X\vert$ only uses the underlying pointed simplicial set, but the finer structure of simplicial abelian group, together with the Dold-Kan correspondence in the form of Corollary 2.5 of \cite{GJ}, Chapter III, shows that the homotopy groups of the geometric realization $\vert HA\circ X\vert$ are given by the (reduced) homology\footnote{For a pointed simplicial set $(X,*)$ we use the notation $H_n(X,A)$ for the reduced homology $H_n((X,*),A)$.} of the associated complex of abelian groups, {\it i.e.\/}\ $\pi_n(\vert HA\circ X\vert)=H_n(X,A)$. This suffices to conclude for instance that $\vert HA\circ S^n\vert$ is an Eilenberg-MacLane space $K(A,n)$. 
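For $A=({\mathbb Z},+)$, formula \eqref{hadefn} is easily made explicit. In the following Python sketch (the dictionary encoding of pointed maps $k_+\to n_+$, with $0$ standing for the base point, is our own convention, purely for illustration) the fold map $3_+\to 1_+$ recovers addition, and empty fibre sums give $0$.

```python
def HZ(f, n, m):
    """HA(f): A^k -> A^n for A = (Z, +), where the pointed map
    f: k_+ -> n_+ is encoded as a dict {1,...,k} -> {0,...,n} with
    0 standing for the base point.  Component j of the image is the
    sum of m_l over the fibre f(l) = j; an empty fibre gives 0."""
    return tuple(sum(m[l - 1] for l in f if f[l] == j)
                 for j in range(1, n + 1))

fold = {1: 1, 2: 1, 3: 1}          # the fold map 3_+ -> 1_+
print(HZ(fold, 1, (2, 3, 5)))      # (10,): folding becomes addition in A

g = {1: 2, 2: 0, 3: 2}             # the element 2 is sent to the base point
print(HZ(g, 2, (2, 3, 5)))         # (0, 7): m_2 is dropped, fibre of 1 is empty
```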
\subsection{The homology with coefficients in an ${\mathfrak s}$-module}\label{secthomology} In our arithmetic context we are interested in ${\mathfrak s}$-modules $M$ which are no longer of the form $HA$ where $A$ is an abelian group. In a first class of examples $M$ is still of the form $HA$, where $A$ is a monoid. A second class of examples is given by those constructed in \cite{CCprel} to specify the geometric structure of $\overline{{\rm Spec\,}{\mathbb Z}}$ at the archimedean place. In all these cases it is no longer true that the composite $M\circ X$ is fibrant, even when the simplicial set $X$ itself is fibrant. We shall use the equality $\pi_n(\vert HA\circ X\vert)=H_n(X,A)$, which holds for abelian groups, as the motivation to extend the definition of the homology of a pointed simplicial set with coefficients in an arbitrary ${\mathfrak s}$-module as follows \begin{defn}\label{fact1} Let $M$ be an ${\mathfrak s}$-module, and $X$ a pointed simplicial set. For any integer $n\geq 0$ one defines the homology $H_n(X,M)$ as the ${\mathfrak s}$-module \begin{equation}\label{pins1bis} H_n(X,M)(k_+):=\pi^{\rm comb}_n ( M\circ(X\wedge k_+) ). \end{equation} Here, $k_+$ is viewed as a discrete pointed simplicial set, {\it i.e.\/}\ constant in all degrees. \end{defn} As in Lemma \ref{EilMac}, $M$ is viewed as an endofunctor of the category ${\Se_*}$ and $\pi^{\rm comb}_n$ is defined in Definition \ref{defnpij}. There is in fact a refined version of homology $H_n^{(2)}(X,M)$ using $\pi^{(2)}_n$ instead of $\pi^{\rm comb}_n$, but we shall not need it in the present paper. The following result establishes several basic properties of the new homology. \begin{prop}\label{propprel} $(i)$~For any $n\geq 0$, $H_n(X,M)$ is a covariant bifunctor $$H_n:{\mathcal S_*}\times {\sss-{\rm Mod}}\longrightarrow {\sss-{\rm Mod}}. $$ $(ii)$~Let $M_1,M_2$ be ${\mathfrak s}$-modules. 
One has a natural transformation $$ H_n(M_1\circ X,M_2)\to H_n(X,M_2\circ M_1) $$ which is an isomorphism when evaluated on $1_+$.\newline $(iii)$~For any pointed simplicial set $X$ one has $H_n(X,{\mathfrak s})=\pi^{\rm comb}_n(X)\wedge {\mathfrak s}$.\newline $(iv)$~For $n\neq m$ : $H_m(S^n,{\mathfrak s})=\{*\}$ while for $n=m$ one has $H_m(S^n,{\mathfrak s})={\mathfrak s}$.\newline $(v)$~For $n\neq m$ : $H_m(S^n,H{\mathbb B})=\{*\}$ while for $n=m$ one has $H_m(S^n,H{\mathbb B})=H{\mathbb B}$. \end{prop} \proof $(i)$~By construction, $H_n(X,M)$ is a covariant functor of $X$ for fixed $M$, and of $M$ for fixed $X$. To prove that it is a bifunctor it suffices, using the bifunctor lemma (see \cite{ML} Proposition 1 Chapter II, \S 3), to show that it satisfies the interchange law which states that given morphisms $f \in {\mbox{Hom}}_{\mathcal S_*}(X,Y)$ and $h\in {\mbox{Hom}}_{\mathfrak s}(M,N)$ one has the equality \begin{equation}\label{biflem} H_n(f,N)\circ H_n(X,h)=H_n(Y,h)\circ H_n(f,M)\in {\mbox{Hom}}_{\mathfrak s}(H_n(X,M),H_n(Y,N)). \end{equation} Both sides of this formula are ${\mathfrak s}$-modules {\it i.e.\/}\ functors ${\Gamma^{\rm op}}\longrightarrow \frak{Sets}_*$ thus it is enough to check the equality pointwise {\it i.e.\/}\ by evaluating both sides on $k_+$ for fixed $k$. Since $\pi^{\rm comb}_n:{\mathcal S_*}\longrightarrow \frak{Sets}_*$ is a functor the equality follows provided one shows that the same equality holds if one replaces $H_n(X,M)$ by $F(X,M):=M\circ X$ which is a separately covariant functor to ${\mathcal S_*}$ with arguments in ${\mathcal S_*}$ and ${\sss-{\rm Mod}}$. Again it is enough to check this equality pointwise {\it i.e.\/}\ replacing ${\mathcal S_*}$ by $\frak{Sets}_*$ and $F(X,M):=M\circ X$ by $G(X,M):=M(X)$ which is a separately covariant functor to $\frak{Sets}_*$ with arguments in $\frak{Sets}_*$ and ${\sss-{\rm Mod}}$. 
Since $M$ and $N$ are endofunctors of $\frak{Sets}_*$ and the morphism $h\in {\mbox{Hom}}_{\mathfrak s}(M,N)$ is a natural transformation from $M$ to $N$, one has, for any $f \in {\mbox{Hom}}_{\frak{Sets}_*}(X,Y)$, the equality $$ N(f)\circ h_X=h_Y\circ M(f)\in {\mbox{Hom}}_{\frak{Sets}_*}(M(X),N(Y)) $$ which gives \eqref{biflem}.\newline $(ii)$~As in \cite{DGM}, (2.2.1.2 equation (2.2)), one has natural maps $M(X)\wedge Y\to M(X\wedge Y)$. We apply this with $Y=k_+$ and thus obtain natural maps $$ \eta_k:M_1(X)\wedge k_+\to M_1(X\wedge k_+) \,,\,~\forall k. $$ This yields a natural morphism $$ M_2(\eta_k):M_2(M_1(X)\wedge k_+)\to M_2(M_1(X\wedge k_+)) $$ and by composition with $\pi^{\rm comb}_n$ one gets the natural transformation $$ \pi^{\rm comb}_n(M_2(\eta_k)):H_n(M_1\circ X,M_2)(k_+)\to H_n(X,M_2\circ M_1)(k_+) $$ which is, by construction, an isomorphism for $k=1$.\newline $(iii)$~Since the endofunctor of $\frak{Sets}_*$ associated to ${\mathfrak s}$ is the identity, the result follows from Proposition \ref{propprepa} $(i)$. \newline $(iv)$~Using $(iii)$ it is enough to determine $\pi^{\rm comb}_n(S^m)$. The pointed simplicial set $S^n$ is obtained by collapsing $\partial \Delta[n]$ to a base point. This means that one considers the sub-functor $[q]\mapsto \partial \Delta([q])\subset {\mbox{Hom}}_\Delta([q],[n])$ given by the maps $[q]\to [n]$ which are not surjective, and one identifies all the elements of $\partial \Delta([q])$ with the base point. For $h\in {\mbox{Hom}}_\Delta([q'],[q])$ one has $\partial \Delta([q])\circ h\subset \partial \Delta([q'])$ so that the collapsing gives a pointed simplicial set. An element of ${\mbox{Hom}}_{\mathcal S_*}(S^n,X)$ is an element of ${\mbox{Hom}}_{\mathcal S_*}(\Delta[n],X)$ which maps $\partial \Delta[n]$ to the base point. This means, by Yoneda's lemma, an element $x\in X_n$ such that $\partial_j(x)=*$ for all $j$ (since any map $[q]\to [n]$ which is not surjective factors through a $d_j:[n-1]\to [n]$). 
For $X=S^m$, such an $x\in X_n$ is, if it is not the base point, an element $\phi \in {\mbox{Hom}}_\Delta([n],[m])$ which is surjective and such that $\phi\circ d_j$ fails to be surjective for every $j$. This latter condition implies that $\phi$ is also injective and one concludes that $n=m$ and $\phi$ is the identity map. This gives $\pi^{\rm comb}_n (S^m)=\{*\}$ for $n\neq m$. To prove that $\pi^{\rm comb}_n (S^n)=\{*,{\mbox{Id}}\}$ one just needs to show that the element ${\mbox{Id}}$ does not get identified with the base point under the equivalence relation generated by the relation \eqref{Maydefnbis}, {\it i.e.\/}\ $$ xRy\iff \exists z \mid \partial_jz=*\, \forall j<n, \ \partial_nz=x,\, \partial_{n+1}z=y. $$ Any $z\neq *$ in $S^n_{n+1}$ is given by a surjective map $s_i \in {\mbox{Hom}}_\Delta([n+1],[n])$ such that $s_i (i)=s_i(i+1)$, and the condition $\partial_jz=*\, \forall j<n$ shows that the index $i$ is equal to $n$. It follows that $\partial_nz=\partial_{n+1}z$ and that the relation $R$ is the diagonal. \newline $(v)$~The endofunctor $H{\mathbb B}$ associates to a pointed set $E$ the (pointed) set of all finite subsets of $E$ which contain the base point $*$, and to a map $f:E\to F$ the direct image map $Z\mapsto f(Z)$. Note the equivalence \begin{equation}\label{charoneequi} f(Z)=\{*\}\iff f(x)=*\,,\,~\forall x\in Z \end{equation} It follows that there are only two elements $u\in (H{\mathbb B}\circ S^n)_n=H{\mathbb B}(S^n_n)=H{\mathbb B}(\{*,{\mbox{Id}}\})$, namely the base point $*$ and the subset $u=\{*,{\mbox{Id}}\}$. Let us show that these two elements are not equivalent under the equivalence relation generated by the relation \eqref{Maydefnbis} for the pointed simplicial set $H{\mathbb B}\circ S^n$. An element of $(H{\mathbb B}\circ S^n)_{n+1}=H{\mathbb B}(S^n_{n+1})$ is a subset $z=\{*,s_{i_1}, \ldots , s_{i_k}\}$ of the set $S^n_{n+1}$ described in the proof of $(iv)$. 
The condition $\partial_jz=*\, \forall j<n$ shows that all indices $i_j$ are equal to $n$ so that either $z=*$ or $z=\{*,s_n\}$. This shows, as in the proof of $(iv)$, that the relation $R$ is diagonal and $\pi^{\rm comb}_n(H{\mathbb B}\circ S^n)=H{\mathbb B}(1_+)={\mathbb B}$. Let then $k>0$ be an integer and $E$ a pointed set. One has a natural isomorphism $H{\mathbb B}\circ (E \wedge k_+)\simeq (H{\mathbb B}\circ E)^k$. It follows that for any pointed simplicial set $X$ one has a natural isomorphism $H{\mathbb B}\circ (X\wedge k_+)\to (H{\mathbb B}\circ X)^k$. Then by Proposition \ref{propprepa} $(ii)$ one gets $$ \pi^{\rm comb}_n(H{\mathbb B}\circ (S^n \wedge k_+))=(\pi^{\rm comb}_n(H{\mathbb B}\circ S^n))^k=H{\mathbb B}(k_+). $$ By construction the natural identifications are compatible with the structures of $\Gamma$-sets. This shows that $H_n(S^n,H{\mathbb B})=H{\mathbb B}$. The proof of $(iv)$ together with \eqref{charoneequi} shows that $H_n(S^m,H{\mathbb B})=\{*\}$ for $m\neq n$. \endproof Definition \ref{fact1} provides a meaning to the following equality \eqref{equbasic} whose two sides are ${\mathfrak s}$-modules. \begin{thm}\label{propcompare} Let $A$ be an abelian group, and $X$ a pointed simplicial set. For any integer $n\geq 0$ one has the equality of ${\mathfrak s}$-modules \begin{equation}\label{equbasic} H_n(X,HA)=H(H_n(X,A)) \end{equation} where $H_n(X,A)$ is the (reduced) abelian group homology of $X$ with coefficients in $A$. \end{thm} \proof For any simplicial set $Y$ the composite $HA\circ Y$ is a simplicial abelian group and hence has the Kan extension property (see \cite{Moore}, Theorem 2.2). It follows that the combinatorial homotopy $\pi^{\rm comb}_n(HA\circ Y)$ coincides with the usual homotopy $\pi_n(\vert HA\circ Y\vert)$ of the geometric realization \begin{equation}\label{usualhomot} \pi^{\rm comb}_n(HA\circ Y)=\pi_n(\vert HA\circ Y\vert). 
\end{equation} Moreover the group law of these homotopy groups coincides with the abelian group law inherited from the simplicial abelian group structure (see {\it op.cit.\/}\ Proposition 2.4). The Dold-Kan correspondence (see \cite{GJ}, Chapter III Corollary 2.5) gives a canonical bijection \begin{equation}\label{usualhomot1} \delta_Y: H_n(Y,A)\to \pi^{\rm comb}_n(HA\circ Y). \end{equation} Furthermore this bijection is a natural transformation of covariant functors from pointed simplicial sets to pointed sets. More precisely given a morphism $\psi:Y\to Y'$ of pointed simplicial sets, one obtains the equality \begin{equation}\label{usualhomot2} \pi^{\rm comb}_n(HA(\psi))\circ \delta_Y=\delta_{Y'}\circ H_n(\psi,A). \end{equation} Indeed, it is enough to check this equality on cycles $c\in Z_n(Y,A)$ which are Moore normalized, {\it i.e.\/}\ $\partial_j c=0$ $\forall j$. The element $\delta_Y(c)$ is then given by the combinatorial class directly associated to $c$ viewed as an element of ${\mbox{Hom}}_{\mathcal S_*}(S^n,HA\circ Y)={\mbox{Hom}}_{\mathcal S_*}((\Delta^n,\partial \Delta^n),HA\circ Y)$ which is a subset of $(HA\circ Y)_n=HA(Y_n)$. Let $c=\sum a_j y_j$ where $a_j \in A$ and $y_j\in Y_n$. Then $\pi^{\rm comb}_n(HA(\psi))\circ \delta_Y(c)$ is represented by the combinatorial class obtained from $\delta_Y(c)$ by applying the functor $HA(\psi)$. This gives $HA(\psi)(\sum a_j y_j)=\sum a_j \psi(y_j)$, as one sees using the definition \eqref{hadefn} of the functor $HA$. But one has similarly $H_n(\psi,A)(c)=\sum a_j \psi(y_j)$, thus one gets the required equality \eqref{usualhomot2}. From \eqref{usualhomot1} one gets the bijection $\delta_X: H_n(X,A)\to \pi^{\rm comb}_n(HA\circ X)$. Let then $k>0$ be an integer and $E$ a pointed set. 
One has a natural isomorphism $HA\circ (E \wedge k_+)\simeq (HA\circ E)^k$ since both sides consist of maps $(x,j)\mapsto \phi(x,j)\in A$, $x\in E$, $j\in \{1, \ldots, k\}$ with finite support and such that $\phi(*,j)=0$ for all $j$. Thus one obtains a natural isomorphism of simplicial sets $$ HA\circ (X\wedge k_+)=(HA\circ X)^k. $$ The same equality holds for the geometric realizations, and using \eqref{usualhomot} one derives $$ \pi^{\rm comb}_n(HA\circ (X\wedge k_+))=\pi_n(\vert HA\circ (X\wedge k_+) \vert)= \pi_n((\vert HA\circ X\vert)^k)= H_n(X,A)^k. $$ At the set-theoretic level this coincides with $H(H_n(X,A))(k_+)$. In fact one can obtain the same result more directly as a consequence of \eqref{usualhomot1} and of the equality of (reduced) homology groups $H_n(X_1\vee X_2,A)=H_n(X_1,A)\oplus H_n(X_2,A)$.\newline It remains to show that given a morphism $\phi:k_+\to m_+$ in ${\Gamma^{\rm op}}$ the associated map $$\pi^{\rm comb}_n(HA\circ (X\wedge k_+))\to \pi^{\rm comb}_n(HA\circ (X\wedge m_+)), $$ is the same as the map $HK(\phi): H_n(X,A)^k\to H_n(X,A)^m$ associated to the group law of $K=H_n(X,A)$ and the functor $HK$. Using \eqref{usualhomot2} it is enough to show that $HK(\phi)$ equals the homology map $$ H_n({\mbox{Id}}_X \wedge \phi,A): H_n(X\wedge k_+,A)\to H_n(X\wedge m_+,A). $$ With $e_j$, $j\in \{1,\ldots, k\}$, the canonical basis of $H_0(k_+)$, and $e_0:=0$, the above map is given by $$ H_n({\mbox{Id}}_X \wedge \phi,A)(\sum c_j\otimes e_j)=\sum c_j\otimes e_{\phi(j)} $$ and using the definition \eqref{hadefn} of the functor $HK$ one gets the required equality.\endproof \begin{rem} \label{homotopgroup} In homotopy theory the homotopy groups $\pi_n$ are abelian groups for $n>1$. The group operation arises, at the combinatorial level, from the Kan extension property of fibrant simplicial sets together with combinatorial constructions involving simplices. 
Definition \ref{fact1} does not involve any of these constructions and yet Theorem \ref{propcompare} shows that one recovers the same group law on the homotopy groups $\pi_n$ from the $\Gamma$-set (${\mathfrak s}$-module) obtained using the functorial nature of the map $k_+\mapsto X\wedge k_+$. The reason behind this equality of structures is the interchange law which is fulfilled by the group law of the homotopy group $\pi_n$ and the group law induced by the abelian coefficients. In that sense, Definition \ref{fact1} takes into account the ${\mathfrak s}$-module structure of the coefficients to obtain a replacement of the group structure of homotopy groups. We shall see in the next sections a striking example where this additional structure is put to work. \end{rem} \section{The archimedean place and the Gromov norm}\label{sectgromov} In this section we show that the singular homology $H_*(X, {\mathbb R})$ of a topological space inherits a natural semi-norm from the filtration of the ${\mathfrak s}$-module $H{\mathbb R}$ by the sub-${\mathfrak s}$-modules $\Vert H{\mathbb R} \Vert_\lambda$ associated to the archimedean place of $\overline{{\rm Spec\,}{\mathbb Z}}$ as constructed in \cite{CCprel}. Moreover we prove that this semi-norm is equivalent to the Gromov semi-norm on singular homology. \subsection{$\overline{{\rm Spec\,}{\mathbb Z}}$ at the archimedean place}\label{sectspecZ} In \cite{CCprel} we showed how to endow the Arakelov compactification $\overline{{\rm Spec\,}{\mathbb Z}}$ with a structure sheaf of ${\mathfrak s}$-algebras, which coincides with the standard structure sheaf of ${\rm Spec\,}{\mathbb Z}$ on the dense open set ${\rm Spec\,}{\mathbb Z}\subset\overline{{\rm Spec\,}{\mathbb Z}}$ using the fully faithful functor $H$ from rings to ${\mathfrak s}$-algebras. 
The new feature is the structure of this sheaf at the archimedean place which is obtained using the following proposition\footnote{With the nuance that in \eqref{subvert1} we use the strict inequality.} of \cite{CCprel}. \begin{prop}\label{sssalg2} $(i)$~Let $R$ be a semiring, and ${\Vert}\ \ {\Vert}$ a sub-multiplicative seminorm on $R$. Then $HR$ is naturally endowed with a structure of ${\mathfrak s}$-subalgebra ${\Vert} HR{\Vert}_1\subset HR$ defined as follows \begin{equation}\label{subvert} {\Vert} HR{\Vert}_1: \Gamma^{{\rm op}}\longrightarrow \frak{Sets}_*\qquad {\Vert} HR{\Vert}_1(X):=\{\phi\in HR(X)\mid \sum_{X\setminus \{*\}} {\Vert}\phi(x){\Vert}\leq 1\}. \end{equation} $(ii)$~Let $E$ be an $R$-semimodule and ${\Vert}\ \ {\Vert}^E$ a seminorm on $E$ such that ${\Vert} a\xi{\Vert}^E\leq {\Vert} a{\Vert}\, {\Vert} \xi{\Vert}^E$, $\forall a\in R$, $\forall \xi \in E$, then for any $\lambda\in {\mathbb R}_+$ the following defines a module ${\Vert} HE{\Vert}^E_\lambda$ over ${\Vert} HR{\Vert}_1$ \begin{equation}\label{subvert1} {\Vert} HE{\Vert}^E_\lambda: \Gamma^{{\rm op}}\longrightarrow \frak{Sets}_*\qquad {\Vert} HE{\Vert}^E_\lambda(X):=\{\phi\in HE(X)\mid \sum_{X\setminus \{*\}} {\Vert}\phi(x){\Vert}^E< \lambda\}. \end{equation} \end{prop} The first statement of Proposition \ref{sssalg2} is applied for the ring $R={\mathbb Q}$ of rational numbers and its archimedean absolute value to construct the stalk at $\infty$ of the structure sheaf. One obtains in this way a sheaf ${{\cO\subset H\Q}}$ of ${\mathfrak s}$-algebras over $\overline{{\rm Spec\,}{\mathbb Z}}$. 
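For instance, let us check directly the stability asserted in $(i)$: a morphism of pointed sets $f\colon X\to Y$ acts on $\phi\in HR(X)$ by summation over the fibers, $(HR(f)\phi)(y)=\sum_{f(x)=y}\phi(x)$ for $y\neq *$, and the subadditivity of the seminorm gives $$ \sum_{Y\setminus \{*\}} {\Vert}(HR(f)\phi)(y){\Vert}\leq \sum_{X\setminus \{*\}} {\Vert}\phi(x){\Vert}\leq 1 \,,\,~\forall \phi\in {\Vert} HR{\Vert}_1(X), $$ so that the condition in \eqref{subvert} is preserved by the $\Gamma$-structure; the sub-multiplicativity of the seminorm ensures in the same way the stability under the product of $HR$.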
The second statement of Proposition \ref{sssalg2} is then applied to the one-dimensional real vector space ${\mathbb R}$ to obtain, given an Arakelov divisor $D=D_{\rm finite}+D_\infty$, the sheaf ${\mathcal O}(D)$ of ${\mathcal O}$-modules over $\overline{{\rm Spec\,}{\mathbb Z}}$ \begin{equation}\label{sheafsmod} {\mathcal O}(D)(\Omega):={\Vert} H{\mathcal O}(D_{\rm finite})(\Omega\setminus\{\infty\}){\Vert}_{e^a}, \ \ D_\infty=a\{\infty\}. \end{equation} Thus the ${\mathfrak s}$-modules at work at the archimedean place depend on a positive real parameter $\lambda>0$ and are implemented by the functor $\Vert H{\mathbb R} \Vert_\lambda:{\Gamma^{\rm op}} \longrightarrow \frak{Sets}_*$ which associates to a pointed set $X$ the pointed set \begin{equation}\label{hrlam} \Vert H{\mathbb R} \Vert_\lambda(X)=\{x:X\to {\mathbb R}\mid x(*)=0, \ \#\{j\mid x(j)\neq 0\}<\infty, \ \sum \vert x(j)\vert < \lambda\}. \end{equation} In fact \eqref{hrlam} describes also the extension of $\Vert H{\mathbb R} \Vert_\lambda$ as an endofunctor of $\frak{Sets}_*$. There is an obvious analogue of \eqref{hrlam} when ${\mathbb R}$ is replaced by ${\mathbb Q}$ and this analogue is what is needed in \eqref{sheafsmod}; on the other hand it is more natural to work with the local field ${\mathbb R}$ associated to the archimedean place of ${\mathbb Q}$. \subsection{Simplicial volume}\label{simplicialvol} We recall briefly the notion of simplicial volume introduced by M. Gromov \cite{gromov82}. Let first $X$ be a topological space and $C_*(X,{\mathbb R})$ the associated singular chain complex with real coefficients. One defines the $\ell^1$-norm on singular chains as follows \begin{equation}\label{ell1} \Vert c\Vert_1:=\sum \vert a_j\vert \,,\,~\forall c= \sum a_j \sigma_j, \ \sigma_j\in {\rm Top}(\Delta^*,X). \end{equation} The induced semi-norm on the singular homology $H_*(X, {\mathbb R})$ is the quotient semi-norm \begin{equation}\label{ell1bis} \Vert \alpha\Vert_1:=\inf_{c\in \alpha} \Vert c\Vert_1. 
\end{equation} The Gromov norm $\vert M\vert$ of an oriented closed connected manifold of dimension $n$ is then defined as the semi-norm of its fundamental class $\vert M\vert:=\Vert [M]\Vert_1$. A fundamental result of the theory (\cite{gromov82}, \cite{thurston} Thm 6.2) is the proportionality principle: \begin{thm}\label{gromov} (M. Gromov) Let $\Sigma$ be any compact oriented hyperbolic manifold of dimension $n>1$, then one has $$\vert \Sigma\vert= \frac{v(\Sigma)}{v_n}$$ where $v(\Sigma)$ is the volume of $\Sigma$ and $v_n$ is the maximal volume of straight simplices in hyperbolic space. \end{thm} We refer to \cite{thurston} chapter 6 for the description of the straightening of singular simplices and singular chains. The constant $v_2$ is equal to $\pi$ and one thus has \begin{cor}\label{gromov1} Let $\Sigma$ be a Riemann surface of genus $g>1$, then $\vert \Sigma\vert=4(g-1)$. \end{cor} The fact that the norm does not vanish is dual to the boundedness of cohomology and this holds in the hyperbolic case, thus for $k>1$ the semi-norm \eqref{ell1bis} is in fact a norm on the homology $H_k(M,{\mathbb R})$ when $M$ is a hyperbolic manifold (see {\it op.cit.\/}\ ). \subsection{Moore normalization}\label{sectmoore} Let $A$ be a simplicial abelian group. The standard complex (still denoted $A$ for simplicity) of abelian groups associated to $A$ is defined using the boundary map \begin{equation}\label{standard} \partial:=\sum_{0}^{ n}\,(-1)^j d_j: A_n \to A_{n-1}. \end{equation} The associated normalized complex $NA$ is defined as follows \begin{equation}\label{normal} NA_n:=\cap_{0}^{ n-1} \,{\mbox{Ker}}\, d_j\subset A_n,\ \ d:=d_n:NA_n\to NA_{n-1} \end{equation} (the simplicial identity $d_{n-1} d_{n}=d_{n-1}d_{n-1}$ shows that it defines a complex). For each $n$ one lets $D_n\subset A_n$ be the subgroup generated by the ranges of the degeneracies. 
The boundary map $\partial$ of \eqref{standard} fulfills $ \partial(D_n)\subset D_{n-1}$ and induces a map $$ \partial: A_n/D_n\to A_{n-1}/D_{n-1}. $$ The corresponding quotient complex $A/D$ is the complex modulo degeneracies. By construction one has two morphisms of complexes $i:NA\to A$ and $p:A\to A/D$. Moreover (see \cite{GJ}, Theorem 2.1), the morphism $p\circ i: NA\to A/D$ is an isomorphism of chain complexes. It follows that the composite morphism $\nu:=(p\circ i)^{-1}\circ p:A\to NA$ is a projection. As in the proof of Theorem 2.4 of \cite{GJ}, one constructs explicitly a chain map $f^{(n)}:A_n\to NA_n$ as the composition \begin{equation}\label{normal1} f^{(n)}:=f^{(n)}_{n-2}\circ \ldots \circ f^{(n)}_j\circ \ldots \circ f^{(n)}_0 \end{equation} where $f^{(n)}_j:A_n\to A_n$ is defined as $f^{(n)}_j={\mbox{Id}} -s_{j+1}d_{j+1}$. Moreover one also constructs explicitly a chain homotopy $T_k:A_k\to A_{k+1}$ such that \begin{equation}\label{normal2} {\mbox{Id}}-f^{(n)}=T\circ \partial +\partial\circ T. \end{equation} Since each $f^{(n)}_j$ acts as the identity in the quotient $A_n/D_n$ the same holds for $f^{(n)}$ and one obtains the equality $\nu_n=f^{(n)}$. \begin{lem}\label{seminorm} Let $X$ be a pointed simplicial set. Let the simplicial vector space $A=H{\mathbb R}\circ X$ be endowed with the norm \begin{equation}\label{norm1} \Vert \phi\Vert := \sum \vert \phi(x)\vert \,,\,~\forall \phi \in A_n=H{\mathbb R}(X_n). \end{equation} $(i)$~The linear map $\nu_n=f^{(n)}:A_n\to NA_n$ is of norm $\leq 2^{n-1}$.\newline $(ii)$~Let $c\in Z_n(A)$ be a cycle. Then $\nu_n(c)$ is a homologous normalized cycle and $\Vert \nu_n(c)\Vert\leq 2^{n-1}\Vert c\Vert $. \end{lem} \proof $(i)$~The statement follows from \eqref{normal1}, which exhibits $f^{(n)}$ as the composition of the $n-1$ maps $f^{(n)}_j$, $0\leq j\leq n-2$, and the inequalities $$ \Vert f^{(n)}_j\Vert \leq \Vert {\mbox{Id}}\Vert+\Vert s_{j+1}d_{j+1}\Vert\leq 2, \ \ \Vert f\circ g\Vert\leq \Vert f\Vert\Vert g\Vert. $$ $(ii)$~This follows from $(i)$ and \eqref{normal2}. 
\endproof On the real vector space $H_n(X,{\mathbb R})$ we consider the following semi-norm which is induced by the $\ell^1$-norm \eqref{norm1} on the normalized complex $NA_n$: \begin{equation}\label{normN} \Vert c\Vert^{\rm nor}:=\inf \{\Vert \phi\Vert \mid \phi \in NA_n, \ \phi \sim c\} \end{equation} where $\phi \sim c$ means that $\phi \in NA_n$ is homologous to the cycle $c\in Z_n(A)$. By applying Lemma \ref{seminorm} one obtains the basic inequalities \begin{equation}\label{normN1} \Vert c\Vert_1\leq \Vert c\Vert^{\rm nor}\leq 2^{n-1} \Vert c\Vert_1\,,\,~\forall c\in H_n(X,{\mathbb R}). \end{equation} \subsection{Equivalence with the Gromov norm}\label{sectequivalence} The filtration of the ${\mathfrak s}$-module $H{\mathbb R}$ by the sub-${\mathfrak s}$-modules $\Vert H{\mathbb R} \Vert_\lambda\subset H{\mathbb R} $, $\lambda\in {\mathbb R}_+$, of \eqref{hrlam} provides, for any pointed simplicial set $X$ and integer $n\geq 0$, natural morphisms of ${\mathfrak s}$-modules \begin{equation}\label{filtr1} \rho_{n,\lambda}:H_n(X,\Vert H{\mathbb R} \Vert_\lambda)\to H_n(X, H{\mathbb R}), \end{equation} and a filtration by the ranges of the $\rho_{n,\lambda}$. Theorem \ref{propcompare} gives a natural isomorphism $ H_n(X,H{\mathbb R})=H(H_n(X,{\mathbb R}))$. \begin{thm}\label{propcompare1} Let $X$ be a pointed simplicial set. For any integer $n\geq 0$ and $\lambda\in {\mathbb R}_+$, the range of the natural morphism of ${\mathfrak s}$-modules is \begin{equation}\label{equbasic2} \rho_{n,\lambda}\left(H_n(X,\Vert H{\mathbb R} \Vert_\lambda)\right) =\Vert H(H_n(X,{\mathbb R}))\Vert_\lambda^{\rm nor} \end{equation} where $\Vert c\Vert^{\rm nor}$ is the semi-norm defined in \eqref{normN}. 
\end{thm} \proof It follows from \eqref{hrlam} that the endofunctor $\Vert H{\mathbb R} \Vert_\lambda$ assigns to a pointed set $(X,*)$ the set of maps with finite support $$ \Vert H{\mathbb R} \Vert_\lambda(X):=\{\phi:X\to {\mathbb R}\mid \phi(*)=0, \ \#\{x\mid \phi(x)\neq 0\}<\infty, \ \sum \vert \phi(x)\vert < \lambda\}. $$ By construction the range $\rho_{n,\lambda}\left(H_n(X,\Vert H{\mathbb R} \Vert_\lambda)\right)$ is a sub-functor of $H(H_n(X,{\mathbb R}))$ thus to show \eqref{equbasic2} it is enough to prove that for any integer $k>0$ one has \begin{equation}\label{equbasic3} \rho_{n,\lambda}\left(H_n(X,\Vert H{\mathbb R} \Vert_\lambda)\right)(k_+) =\Vert H(H_n(X,{\mathbb R}))\Vert_\lambda^{\rm nor}(k_+). \end{equation} The right hand side of \eqref{equbasic3} is given by $k$-tuples $(\gamma_j)_{1\leq j\leq k}$, $\gamma_j\in H_n(X,{\mathbb R})$ such that $\sum \Vert \gamma_j\Vert^{\rm nor}< \lambda$, {\it i.e.\/}\ $$ \exists \phi_j\in H{\mathbb R}(X_n)\mid d_i\phi_j=0 \, \forall i, \ \phi_j\sim \gamma_j, \ \sum_{X_n\times \{1,\ldots,k\}} \vert \phi_j(x)\vert < \lambda. $$ For the left hand side of \eqref{equbasic3} one has $$ \Vert H{\mathbb R} \Vert_\lambda(X_n\wedge k_+)=\{(\psi_j)_{j\in \{1,\ldots,k\} }\mid \psi_j:X_n\to {\mathbb R}, \, \psi_j(*)=0, \ \#\{x\mid \psi_j(x)\neq 0\}<\infty, \ \sum \vert \psi_j(x)\vert < \lambda\} $$ and moreover the simplicial structure satisfies $$ d_i\left((\psi_j)_{j\in \{1,\ldots,k\} }\right)=(d_i\psi_j)_{j\in \{1,\ldots,k\}}. $$ Thus the elements of $ {\mbox{Hom}}_{\mathcal S_*}(S^n,\Vert H{\mathbb R} \Vert_\lambda\circ(X\wedge k_+) ) $ are exactly the same as the ones involved in the right hand side of \eqref{equbasic3} and one gets \eqref{equbasic2}. 
\endproof For a topological space $X$ one lets ${{\rm sin}} X$ be the associated simplicial set of singular simplices $$ {{\rm sin}} X=\{[n]\mapsto {\rm Top}(\Delta^n,X)\} $$ where the standard simplex $\Delta^n$ of dimension $n$ is given concretely as $$ \Delta^n=\{(\lambda_0,\ldots,\lambda_n)\mid \lambda_j\geq 0 , \ \sum \lambda_j=1\}. $$ Then, Definition \ref{fact1} extends to topological spaces and arbitrary ${\mathfrak s}$-modules as \begin{equation}\label{pins2} H_n(X,M):=H_n ({{\rm sin}} X,M ). \end{equation} \begin{cor}\label{conj} Let $X$ be a topological space. The filtration of the singular homology group $H_n(X,{\mathbb R})$ by the ${\mathfrak s}$-modules $H_n(X,\Vert H{\mathbb R} \Vert_\lambda)$ defines a semi-norm which is equivalent to the Gromov norm. \end{cor} \proof This follows from Theorem \ref{propcompare1} and the basic inequalities \eqref{normN1}.\endproof \vspace{0.1cm} \section{Equality for Riemann surfaces of genus $g>1$}\label{sectequal} We show that for a compact oriented $2$-dimensional manifold $\Sigma$ of genus $g>1$, the normalized norm \eqref{normN} on singular homology agrees with the Gromov norm. Since these two norms are equivalent and the Gromov norm vanishes except on $H_2(\Sigma,{\mathbb R})$ it is enough to prove the equality for the fundamental class $[\Sigma]\in H_2(\Sigma,{\mathbb R})$. The difficulty is to construct singular cycles $c$ in the homology class $[\Sigma]$ which not only have $\ell^1$-norm $\Vert c\Vert_1$ close to the expected value $4(g-1)$ but are also {\em normalized}, {\it i.e.\/}\ such that all boundaries vanish $\partial_j(c)=0$. This is achieved in three steps. In section \ref{block} we deal with the relative situation of the building block $K$ and construct a normalized cycle relative to its boundary $\partial K$. In section \ref{block1} we assemble together $g$ copies of $K$ and obtain a surface of genus $g$, together with a normalized cycle of $\ell^1$-norm at most $32g$ representing eight times the fundamental class; this bounds the normalized semi-norm of the fundamental class by $4g$. 
The third step is standard and uses cyclic covers to improve the estimate to the expected value $4(g-1)$. \subsection{Moore normalization for the building block}\label{block} A compact oriented $2$-dimensional manifold $\Sigma$ of genus $g > 1$ is obtained by gluing together $g$ copies of a building block $K$ which we now describe. This building block is the quotient of the convex polygon ${\rm Conv}(0,1,2,3,4,5)$ of Figure \ref{gromovnorm1} by the equivalence relation $R$ generated by $$ \Delta(\{1,2\})(x)\sim_R \Delta(\{4,3\})(x), \ \ \Delta(\{2,3\})(x)\sim_R\Delta(\{5,4\})(x) \,,\,~\forall x\in \Delta^1 $$ where given $n+1$ points $(P_0,\ldots,P_n)$ in the real affine plane $E={\mathbb R}^2$, one denotes $$ \Delta(\{P_0,\ldots,P_n\})\in {\rm Top}(\Delta^n,E), \quad (\lambda_0,\ldots,\lambda_n)\mapsto \sum \lambda_j P_j. $$ By transitivity one finds that the five vertices $(1,2,3,4,5)$ are equal modulo $R$, since $1 \sim_R 4 \sim_R 3\sim_R 2\sim_R 5$. Thus one has by construction a continuous map \begin{equation}\label{gamma} \gamma: {\rm Conv}(0,1,2,3,4,5)\to K. \end{equation} The building block $K$ thus consists of $4$ triangles with the common vertex $0$ and where the external sides are identified following the rules \begin{equation}\label{rule} \Delta(\{1,2\})\sim \Delta(\{4,3\}), \ \Delta(\{2,1\})\sim\Delta(\{3,4\}), \ \Delta(\{2,3\})\sim\Delta(\{5,4\}), \ \Delta(\{3,2\})\sim\Delta(\{4,5\}). \end{equation} It is shown geometrically in Figure \ref{flattriang1} as a subset of the $2$-torus (before the identifications of the edges) and in Figures \ref{cuttorus1} and \ref{triangulationtor} after these identifications have been performed. These Figures keep track of the natural triangulations. 
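As a sanity check, one may verify from this description that $K$ is a one-holed torus by computing the Euler characteristic of the triangulation of Figure \ref{cuttorus1}: the quotient has two vertices (the center $0$ and the common image $P$ of the vertices $1,\ldots,5$), seven edges (the five radial edges $\gamma\circ\Delta(\{0,i\})$, $1\leq i\leq 5$, and the two classes of external edges given by \eqref{rule}), and four triangles, so that $$ \chi(K)=2-7+4=-1, $$ in agreement with the realization of $K$ inside the $2$-torus shown in Figure \ref{flattriang1}.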
\begin{figure}[!htb] \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.6\linewidth]{octogon.pdf} \caption{Basic polygon in $E={\mathbb R}^2$.}\label{gromovnorm1} \end{minipage}\hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.6\linewidth]{flattriang.pdf} \caption{Basic polygon inside the fundamental domain for ${\mathbb Z}^2$ acting on ${\mathbb R}^2$.}\label{flattriang1} \end{minipage} \end{figure} By construction one has $\Delta(\{P_0,\ldots,P_n\})\in {\rm Top}(\Delta^n,{\rm Conv}(P_j))$ where ${\rm Conv}(P_j)$ is the convex hull of the points $P_j$. The composition \begin{equation}\label{ruleprime} {\Delta'}(\{P_0,\ldots,P_n\}):=\gamma\circ \Delta(\{P_0,\ldots,P_n\}) \in {\rm Top}(\Delta^n,K) \end{equation} defines singular simplices {\it i.e.\/}\ elements of ${{\rm sin}}\, K$. From \eqref{rule} one obtains the equalities \begin{equation}\label{rulepri} {\Delta'}(\{1,2\})= {\Delta'}(\{4,3\}), \ {\Delta'}(\{2,1\})={\Delta'}(\{3,4\}), \ {\Delta'}(\{2,3\})={\Delta'}(\{5,4\}), \ {\Delta'}(\{3,2\})={\Delta'}(\{4,5\}) \end{equation} \begin{figure}[t!] 
\begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.8\linewidth]{cuttorus1.pdf} \caption{Triangulated building block $K$.}\label{cuttorus1} \end{minipage}\hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.9\linewidth]{neighborr.pdf} \caption{Neighborhood of point $P$.}\label{triangulationtor} \end{minipage} \end{figure} To each pair $(i,j)$ of positive integer indices we associate the simplicial chain $c(0,i,j)$ \begin{equation}\label{chaingood} c(0,i,j):= {\Delta'} (\{0,i,j\})+{\Delta'} (\{i,j,0\})+2 {\Delta'} (\{j,0,i\})-{\Delta'} (\{j,i,0\})-{\Delta'} (\{0,j,i\})-2 {\Delta'} (\{i,0,j\}). \end{equation} The boundaries of $c(0,i,j)$ are described as follows. \begin{lem}\label{chaingood1} The following equalities hold \begin{align*} \partial_0 (c(0,i,j))&=2 {\Delta'} (\{0,i\})-2 {\Delta'} (\{0,j\})-{\Delta'} (\{i,0\})+{\Delta'} (\{j,0\})+{\Delta'} (\{i,j\})-{\Delta'} (\{j,i\}) \\ \partial_1 (c(0,i,j))&=-{\Delta'} (\{0,i\})+{\Delta'} (\{0,j\})+{\Delta'} (\{i,0\})-{\Delta'} (\{j,0\})+2 {\Delta'} (\{j,i\})-2 {\Delta'} (\{i,j\}) \\ \partial_2 (c(0,i,j))&={\Delta'} (\{0,i\})-{\Delta'} (\{0,j\})+2 {\Delta'} (\{j,0\})-2 {\Delta'} (\{i,0\})+{\Delta'} (\{i,j\})-{\Delta'} (\{j,i\}) \end{align*} \end{lem} \proof The result follows by linearity of the $\partial_j$ and the equalities $$ \partial_0 {\Delta'} (\{a,b,c\})={\Delta'} (\{b,c\}), \ \partial_1 {\Delta'} (\{a,b,c\})={\Delta'} (\{a,c\}), \ \partial_2 {\Delta'} (\{a,b,c\})={\Delta'} (\{a,b\}). $$ \endproof We now combine the above simplicial chains and use the rules \eqref{rulepri} to get a chain which is normalized relative to the boundary $\partial K$ of $K$. \begin{lem}\label{chaingood2} Let $c_{(1,5)}:=\sum_{1\leq i\leq 4} c(0,i,i+1)$. 
One has \begin{align*} \partial_0 c_{(1,5)}&=2 {\Delta'} (\{0,1\})-2 {\Delta'} (\{0,5\})-{\Delta'} (\{1,0\})+{\Delta'} (\{5,0\}) \\ \partial_1 c_{(1,5)}&=-{\Delta'} (\{0,1\})+{\Delta'} (\{0,5\})+{\Delta'} (\{1,0\})-{\Delta'} (\{5,0\}) \\ \partial_2 c_{(1,5)}&= {\Delta'} (\{0,1\})-{\Delta'} (\{0,5\})-2 {\Delta'} (\{1,0\})+2 {\Delta'} (\{5,0\}) \end{align*} \proof The cancellations follow from the equalities \begin{align*} \sum_{1\leq i\leq 4} \left({\Delta'} (\{0,i\})-{\Delta'} (\{0,i+1\}) \right)&={\Delta'} (\{0,1\})-{\Delta'} (\{0,5\}) \\ \sum_{1\leq i\leq 4} \left({\Delta'} (\{i,0\})-{\Delta'} (\{i+1,0\}) \right)&={\Delta'} (\{1,0\})-{\Delta'} (\{5,0\}) \end{align*} and from the following one which uses the rules \eqref{rulepri} $$ \sum_{1\leq i\leq 4}\left({\Delta'} (\{i,i+1\})-{\Delta'} (\{i+1,i\}) \right)=0 $$ since \eqref{rulepri} shows that the following terms all vanish $$ {\Delta'}(\{1,2\})- {\Delta'}(\{4,3\}), \ {\Delta'}(\{2,3\})-{\Delta'}(\{5,4\}),\ {\Delta'}(\{3,4\})-{\Delta'}(\{2,1\}), \ {\Delta'}(\{4,5\})-{\Delta'}(\{3,2\}). $$ We thus get the required formulas. \endproof \begin{figure}[t!] \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.9\linewidth]{octodisk.pdf} \caption{Surface of genus $2$.}\label{genus2} \end{minipage}\hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.8\linewidth]{genus3.pdf} \caption{Domain for surface of genus $3$.}\label{octoexample} \end{minipage} \end{figure} \subsection{Moore normalization for a Riemann surface of genus $g>1$}\label{block1} Let $g>1$ and $P$ be obtained (see Figure \ref{octoexample}) as the union of $g$ copies $P(w)=P_{(1+4w,5+4w)}$ for $0\leq w<g$, of the basic polygon ${\rm Conv}(0,1,2,3,4,5)$ of Figure \ref{gromovnorm1}, where the side $(0, 5+4w)$ is common to $P(w)$ and $P(w+1)$ for $w<g-1$ and is common to $P(g-1)$ and $P(0)$ for $w=g-1$, while the external sides are identified pairwise as in $P$. The quotient of $P$ by these identifications is a surface $\Sigma(g)$ of genus $g$. 
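As a consistency check, the Euler characteristic confirms that the quotient has genus $g$: all the external vertices of $P$ are identified to a single point, so $\Sigma(g)$ has two vertices; it has $4g$ radial edges $(0,j)$, $1\leq j\leq 4g$ (the common sides $(0,5+4w)=(0,1+4(w+1))$ being counted once), $2g$ classes of external edges, and $4g$ triangles, whence $$ \chi(\Sigma(g))=2-(4g+2g)+4g=2-2g. $$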
\begin{lem}\label{chaingood3} The singular chain $$c=c_{(1,5)}+c_{(5,9)}+\ldots +c_{(1+4w,5+4w)}+\ldots +c_{(1+4(g-1),1)}$$ is closed and normalized, {\it i.e.\/}\ one has $\partial_j c=0$ for $j\in \{0,1,2\}$. \end{lem} \proof By Lemma \ref{chaingood2} one gets for $0\leq w\leq g-1$, and with $5+4(g-1)\sim 1$, $$ \partial_0 c_{(1+4w,5+4w)}=2 {\Delta'} (\{0,1+4w\})-2 {\Delta'} (\{0,5+4w\})-{\Delta'} (\{1+4w,0\})+{\Delta'} (\{5+4w,0\}) $$ which gives $$ \partial_0 c=\partial_0 \sum_{0\leq w\leq g-1} c_{(1+4w,5+4w)}=\sum_{0\leq w\leq g-1} \partial_0 c_{(1+4w,5+4w)}=0. $$ The same reasoning applies to show that $\partial_j c=0$ for $j\in \{1,2\}$. \endproof \begin{lem}\label{chaingood4} The singular chain $c$ of Lemma \ref{chaingood3} represents the singular homology class $8[\Sigma]$ ($[\Sigma]=$ fundamental class of $\Sigma$). \end{lem} \proof The result follows since each chain $c(0,i,i+1)$ as in \eqref{chaingood} is homologous to $8{\Delta'} (\{0,i,i+1\})$ while the $4g$ triangles ${\Delta'} (\{0,i,i+1\})$ for $1\leq i\leq 4g$ give a triangulation of $\Sigma$.\endproof \begin{lem}\label{chaingood5} The $\ell^1$-norm of the singular chain $c$ of Lemma \ref{chaingood3} is $\leq 32 g$. \end{lem} \proof This follows from the triangle inequality and the definition \eqref{chaingood} of the chain $c(0,i,j)$ whose $\ell^1$-norm is $\leq 8$. \endproof \begin{thm}\label{chainthm} Let $\Sigma$ be a compact Riemann surface of genus $g>1$ and $[\Sigma]$ its fundamental class in homology. Then $[\Sigma]$ belongs to the range of the canonical map $H_2(\Sigma,\Vert H{\mathbb R} \Vert_\lambda)\to H_2(\Sigma,{\mathbb R})$ if and only if $\lambda$ is larger than the Gromov norm of $[\Sigma]$. \end{thm} \proof The result follows from Theorem \ref{propcompare1} if one shows that the fundamental class $[\Sigma]$ fulfills the equality $$ \Vert [\Sigma]\Vert^{\rm nor}=\Vert [\Sigma]\Vert_1. $$ The inequality $\geq$ follows from \eqref{normN1}. 
Moreover, as recalled in section \ref{simplicialvol}, for a surface of genus $g$ the Gromov norm $\Vert [\Sigma]\Vert_1$ is equal to $4(g-1)$. Thus it remains to show that $\Vert [\Sigma]\Vert^{\rm nor}\leq 4(g-1)$. By applying Lemmas \ref{chaingood4} and \ref{chaingood5} one obtains the inequality $\Vert [\Sigma]\Vert^{\rm nor}\leq 4g$. One then applies a standard technique which is to use the same inequality for the covering space $\Sigma'$ of $\Sigma$ associated to an infinite cyclic subgroup of the fundamental group $\pi_1(\Sigma)$. The genus of a cyclic cover $\Sigma'$ of degree $n$ is $g'=n(g-1)+1$ since the Euler characteristic is multiplied by $n$. The covering projection $p\colon \Sigma'\to \Sigma$ maps normalized cycles to normalized chains, does not increase the $\ell^1$-norm, and satisfies $p_*[\Sigma']=n[\Sigma]$, so that $n\Vert [\Sigma]\Vert^{\rm nor}\leq \Vert [\Sigma']\Vert^{\rm nor}$. Thus the inequality $\Vert [\Sigma']\Vert^{\rm nor}\leq 4g'$ entails $$ n\Vert [\Sigma]\Vert^{\rm nor}\leq 4g'=4(n(g-1)+1). $$ By passing to the limit when $n\to \infty$ one obtains the desired inequality $\Vert [\Sigma]\Vert^{\rm nor}\leq 4(g-1)$.\endproof
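For instance, for a surface $\Sigma$ of genus $g=2$ (Figure \ref{genus2}), the normalized cycle $c$ of Lemma \ref{chaingood3} is a sum of $8$ chains $c(0,i,i+1)$, has $\ell^1$-norm $\leq 64$ and represents $8[\Sigma]$, giving the a priori bound $\Vert [\Sigma]\Vert^{\rm nor}\leq 8$. The cyclic cover of degree $n$ has genus $n+1$, so the above argument improves this to $\Vert [\Sigma]\Vert^{\rm nor}\leq 4(n+1)/n$ and, in the limit, to the value $\Vert [\Sigma]\Vert_1=4$ of Corollary \ref{gromov1}. By Theorem \ref{chainthm}, the fundamental class $[\Sigma]$ thus belongs to the range of the canonical map $H_2(\Sigma,\Vert H{\mathbb R} \Vert_\lambda)\to H_2(\Sigma,{\mathbb R})$ precisely when $\lambda>4$.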
\section{Introduction} \paragraph{Motivation} A problem of fundamental importance in the study of harmonic analysis is the classification of irreducible complex admissible representations of $G(F)$ where $F$ is a non-archimedean local field, and $G$ is a reductive group over $F$. The local Langlands correspondence, a guiding principle for many areas of number theory in the last 40 years, posits a parameterization of such admissible representations in terms of equivalence classes of parameters related to the Galois theory of $F$. These parameters come in several forms. Chief amongst these are the \emph{complex $L$-parameters} which are homomorphisms $\psi\colon W_F\times \SL_2(\mathbb{C})\to {^L}\!G(\mathbb{C})$ satisfying certain properties (cf.\@ \cite[\S3]{SilbergerZink}), and \emph{complex Weil--Deligne parameters} which are pairs $(\varphi,N)$ where $\varphi\colon W_F\to {^L}\!G(\mathbb{C})$ is a homomorphism and $N$ is a nilpotent element of the Lie algebra of $\widehat{G}(\mathbb{C})$, satisfying certain properties (cf.\@ \cite[\S2.1]{GRAinv}). The notion of equivalence in both cases is that of $\widehat{G}(\mathbb{C})$-conjugacy. The classical theorem of Jacobson--Morozov (cf.\@ \cite[\S III.11, Theorem 17]{Jacobson}) asserts that the \emph{Jacobson--Morozov map} $\theta \mapsto d\theta\left(\left(\begin{smallmatrix} 0 & 1\\ 0 & 0\end{smallmatrix}\right)\right)$ gives a surjection \begin{equation*} \mathsf{JM}\colon \left\{\begin{matrix}\text{Algebraic homomorphisms}\\ \theta\colon \SL_2(\mathbb{C})\to \widehat{G}(\mathbb{C})\end{matrix}\right\}\to \left\{\begin{matrix}\text{Nilpotent elements}\\ N\in\mathrm{Lie}(\widehat{G}(\mathbb{C}))\end{matrix}\right\}, \end{equation*} which becomes a bijection on the level of $\widehat{G}(\mathbb{C})$-quotients. 
One may extend this to a \emph{Jacobson--Morozov map} \begin{equation*} \mathsf{JM}\colon \left\{\begin{matrix}\text{Complex }L \text{-parameters}\\ \psi\colon W_F\times \SL_2(\mathbb{C})\to {^L}\!G(\mathbb{C})\end{matrix}\right\}\to \left\{\begin{matrix}\text{Complex Weil--Deligne parameters}\\ (\varphi,N)\end{matrix}\right\}. \end{equation*} This map is not a bijection, even up to equivalence and, in fact, is not even surjective (see Example \ref{eg:JM-not-surj}). That said, the Jacobson--Morozov map \emph{does} give a bijection between equivalence classes of Frobenius semi-simple parameters (see \cite[Proposition 2.2]{GRAinv} or \cite[Proposition 1.13]{ImaLLCell}), those which feature most prominently in the local Langlands correspondence. Therefore, in practice the Jacobson--Morozov map allows one to pass fairly freely between these two notions of parameter and to treat them as essentially equivalent. This is useful as each of these perspectives has its own advantages (e.g.\@ as illustrated quite well in \cite{GRAinv}). The goal of this article is to put the above results on a \emph{moduli-theoretic footing}. Namely we define and study a moduli space of $L$-parameters, and construct a \emph{Jacobson--Morozov morphism} \begin{equation*} \mathsf{JM}\colon \mathsf{LP}_G\to\mathsf{WDP}_G \end{equation*} between the moduli space of $L$-parameters and the moduli space of Weil--Deligne parameters. We then show that there is a natural stratification of the moduli space of Weil--Deligne parameters with the property that over each stratum the Jacobson--Morozov morphism takes a particularly simple form. 
Using this, we show that the Jacobson--Morozov morphism satisfies some birational-like properties, is an isomorphism over the discrete locus, and that a version of the above bijection between equivalence classes of complex Frobenius semi-simple parameters has an analogue over an arbitrary $\mathbb{Q}$-algebra.\footnote{The reason we do not restrict our attention to semi-simple parameters is that they do not form a representable presheaf. Thus, to do geometry we are required to work with arbitrary parameters.} \medskip \paragraph{Statement of main results} Let $F$ be a non-archimedean local field and $G$ a reductive group over $F$. In \S\ref{ss:L-param-def} we define the \emph{moduli space of $L$-parameters} for $G$ which we denote $\mathsf{LP}_G$. \begin{propi}[{see Corollary \ref{cor:LP-pi0}}]\label{propi:L-nice} The moduli space $\mathsf{LP}_G$ is smooth over $\mathbb{Q}$ and has explicitly parameterized affine connected components. \end{propi} On the other hand, let $\mathsf{WDP}_G$ denote the moduli space of Weil--Deligne parameters (e.g.\@ as in \cite[\S3.1]{ZhuCohLp}). In \S\ref{ss:JM-mor} we define the \emph{Jacobson--Morozov} morphism \begin{equation*} \mathsf{JM}\colon \mathsf{LP}_G\to \mathsf{WDP}_G. \end{equation*} Our major result may then be stated as follows. \begin{thmi}[{see Theorem \ref{thm:JM-omnibus} and Theorem \ref{thm:JM-isom-disc-locus}}]\label{thmi:JM-weakly-bir} The Jacobson--Morozov morphism is weakly birational and induces an isomorphism $\mathsf{LP}^\mathrm{disc}_G\isomto\mathsf{WDP}_G^\mathrm{disc}$ over the discrete loci. \end{thmi} Here we say a morphism of schemes $f\colon Y\to X$ is \emph{weakly birational} if there exists a dense open subset $U\subseteq X$ such that $f\colon f^{-1}(U)\to U$ is an isomorphism. A weakly birational map $f$ is birational if and only if $f$ induces a bijection at the level of irreducible components. 
Also, the discrete loci inside of $\mathsf{LP}_G$ and $\mathsf{WDP}_G$ are defined, at least when $G$ is semi-simple, as the locus of points where the centralizer of the universal parameter is quasi-finite over the base (see Definition \ref{defn:locus-of-red} and Definition \ref{defn:disc-locus-L} for general definitions). To prove Theorem \ref{thmi:JM-weakly-bir} we stratify $\mathsf{WDP}_G$ by its nilpotent orbits. Denote by $\widehat{\mc{N}}$ the nilpotent variety for $\widehat{G}$ and form the stratification $\widehat{\mc{N}}^\sqcup\vcentcolon= \bigsqcup_N \mc{O}_N$ by its $\widehat{G}$-orbits which we treat as a disconnected scheme over $\mathbb{Q}$. We then obtain a stratification $\mathsf{WDP}^\sqcup_G$ by pulling back $\widehat{\mc{N}}^\sqcup$ along the natural map $\mathsf{WDP}_G\to \widehat{\mc{N}}$. We give an explicit description of the structure of this variety. \begin{propi}[{see Corollary \ref{cor:WDP-pi0}}]\label{propi:sqcup-nice} The moduli space $\mathsf{WDP}^\sqcup_G$ is smooth over $\mathbb{Q}$ and has explicitly parameterized connected components. \end{propi} The Jacobson--Morozov morphism factorizes through $\mathsf{WDP}_G^\sqcup$ and interacts well with the explicit decompositions indicated in Proposition \ref{propi:L-nice} and Proposition \ref{propi:sqcup-nice}. Utilizing this we show the following, which implies the weakly birational portion of Theorem \ref{thmi:JM-weakly-bir}. \begin{propi}[{see Theorem \ref{thm:JM-omnibus}}]\label{propi:JM-bir} The morphism $\mathsf{JM}\colon \mathsf{LP}_G\to\mathsf{WDP}_G^\sqcup$ is birational. \end{propi} A key component of our proof of Proposition \ref{propi:JM-bir} is a relative version of the bijection between equivalence classes of complex Frobenius semi-simple parameters. Here, Frobenius semi-simplicity is somewhat delicate and defined in Definition \ref{defn:Frob-ss-WD-param} and Definition \ref{defn:Frob-ss-L-param}. 
\begin{thmi}[{see Theorem \ref{thm:rel-JM-param}}]\label{thmi:disc-isom}For any $\mathbb{Q}$-algebra $A$ the map \begin{equation*} \mathsf{JM}\colon \mathsf{LP}_G(A)/\widehat{G}(A)\, \to \,\mathsf{WDP}_G^{\sqcup}(A)/\widehat{G}(A) \end{equation*} is a bijection on Frobenius semi-simple elements. \end{thmi} We finally mention that another important ingredient in our proof of Proposition \ref{propi:JM-bir} is a result which may be interpreted as a stronger version of the isomorphy of the Jacobson--Morozov morphism over the discrete loci, as stated in Theorem \ref{thmi:JM-weakly-bir}. Namely, in Proposition \ref{prop:JM-isom-over-red-locus} we show that the Jacobson--Morozov morphism is an isomorphism over the locus of points of $\mathsf{WDP}_G$ whose centralizer has reductive identity component. The relationship to birationality comes from Proposition \ref{prop:dense-tor-cent} which shows that the locus of such points is dense in $\mathsf{WDP}_G$ and thus, a fortiori, dense in $\mathsf{WDP}^\sqcup_G$ (the same holds true for $\mathsf{LP}_G$). As the moduli space of Weil--Deligne parameters has featured quite prominently in recent developments in the Langlands program and adjacent fields (e.g.\@ see \cite{BeGeGdef}, \cite{DHKMModLp}, \cite{ZhuCohLp} and \cite{FaScGeomLLC}) we feel that these results will be valuable in the study of the fine structure of the space $\mathsf{WDP}_G$. In particular, one may in theory reduce many questions involving `generic' geometric structure of $\mathsf{WDP}_G$ to the study of $\mathsf{LP}_G$. More specifically, we have stratified the geometric space $\mathsf{WDP}_G$ into pieces such that each stratum is smooth and (essentially) like a homogeneous space for a group, and thus simple geometrically (cf.\@ Theorem \ref{thm:WD-const-decomp}). Moreover, each of these strata is birational to similarly defined strata in the representation-theoretically simpler space $\mathsf{LP}_G$.
In fact, such ideas have already implicitly appeared in several important geometric results concerning $\mathsf{WDP}_G$ (e.g.\@ see \cite[\S2.3]{BeGeGdef}). In addition to its potential uses to study the geometry of $\mathsf{WDP}_G$, we believe that these moduli-theoretic results are clarifying in several other ways. Namely, the weak birationality of the Jacobson--Morozov morphism makes precise, in the classical setting, the sense in which almost every complex Weil--Deligne parameter is in the image of the Jacobson--Morozov map. Moreover, the isomorphy over the discrete locus may also be used to deduce results of interest even in this classical case (e.g.\@ see Proposition \ref{prop:disc-ss-prop}). Finally, we feel that our explicit description of the moduli space of $L$-parameters (e.g.\@ its set of connected components) helps explain some phenomena differentiating $\mathsf{LP}_G$ from $\mathsf{WDP}_G$ as previously observed by others (cf.\@ the introduction to \cite{DHKMModLp}). \medskip \paragraph{Future directions} While our results are written over $\mathbb{Q}$, it is clear that they extend over $\mathbb{Z}[\frac{1}{N}]$ for sufficiently divisible $N$. Evidently one cannot hope to extend our results over all of $\mathbb{Z}[\frac{1}{p}]$ as currently written. But, as in op.\@ cit.\@ (and \cite{Helm}), the correct analogue of $\mathsf{WDP}_G$ over $\mathbb{Z}[\frac{1}{p}]$ does not directly involve Weil--Deligne parameters but, instead, involves a scheme of $1$-cocycles for the discretization $W^0_F$ of the tame inertia group. One may then ask whether there is an analogous description of $\mathsf{LP}_G$ which allows our results to work over $\mathbb{Z}[\frac{1}{p}]$. Also, as the morphism $\mathsf{JM}\colon \mathsf{LP}_G\to\mathsf{WDP}_G$ is weakly birational there exists a dense open subset $U$ of $\mathsf{WDP}_G$ such that $\mathsf{JM}\colon \mathsf{JM}^{-1}(U)\to U$ is an isomorphism.
In Proposition \ref{prop:temp-cent-equal} below, we essentially show that the analytification $\mathsf{JM}^{-1}(U)_\mathbb{C}^\mathrm{an}$ contains all (essentially) tempered $L$-parameters. From a geometric perspective (e.g.\@ from the perspective of \cite{FaScGeomLLC}) it is more natural to consider $\ell$-adic $L$-parameters instead of complex ones. One is then naturally led to ask whether $\mathsf{JM}^{-1}(U)_{\mathbb{Q}_\ell}^\mathrm{an}$ contains the analogue of (essentially) tempered representations, which are the (essentially) $\nu$-tempered representations of Dat (see \cite{DatNu}). \medskip \paragraph{Notation and conventions} \begin{itemize} \item $F$ is a non-archimedean local field with residue field of characteristic $p$ and size $q$, \item $W_F$ is the Weil group of $F$, \item for a Galois extension of fields $k'/k$, we write the Galois group as $\Gamma_{k'/k}$ and we write $\Gamma_k$ for the absolute Galois group of $k$, \item for a ring $R$ we shall denote by $\cat{Alg}_R$ the category of $R$-algebras, \item we shall frequently abuse terminology and call a covariant functor $\cat{Alg}_R\to\mc{C}$ a $\mc{C}$-valued presheaf, \item a reductive group $S$-scheme $H$ will always have connected fibers, \item for a set $X$ we shall denote by $\underline{X}$ the associated constant scheme over $\mathbb{Q}$. \end{itemize} \medskip \paragraph{Acknowledgements} We thank Brian Conrad, Rahul Dalal, Ildar Gaisin, Tasho Kaletha, Marcin Lara, Rachel (Nakyung) Lee, and Alexander Sherman for fruitful discussions. The last named author would particularly like to thank Piotr Achinger for several helpful discussions related to algebraic geometry. The first named author was partially supported by NSF RTG grant DMS-1840234. The second named author was supported by JSPS KAKENHI Grant Number 18H01109. The last named author completed this work under the auspices of the JSPS fellowship.
\section{Some group theoretic preliminaries} In this section we establish some notation, definitions, and basic well-known results that we shall often use without comment in the sequel. We encourage the reader to skip this section on first reading, referring back only when necessary. \subsection{The nilpotent variety, unipotent variety, and exponential map} Let us fix $k$ to be a field of characteristic $0$ and $H$ to be a reductive group over $k$. We denote by $\mf{h}$ the Lie algebra of $H$ thought of both as a $k$-vector space and as a $k$-scheme. Let $A$ be a $k$-algebra and $x$ an element of $\mf{h}_A$. Recall then that as in \cite[II, \S6, \textnumero 3]{DemazureGabriel} one may associate an element $\exp(Tx)$ in $H(A\llbracket T\rrbracket)$ to $x$. We then say that $x$ is \emph{nilpotent} if it satisfies any of the following equivalent conditions. \begin{prop}\label{prop:nilp-equiv} The following are equivalent: \begin{enumerate} \item for all finite-dimensional representations $\rho\colon H\to\GL(V)$ the endomorphism $d\rho(x)$ of $V_A$ is nilpotent, \item there exists a faithful finite-dimensional representation $\rho\colon H\to\GL(V)$ such that the endomorphism $d\rho(x)$ of $V_A$ is nilpotent, \item $\exp(Tx)$ belongs to $H(A[T])$, \item there exists a morphism of group $A$-schemes $\alpha\colon \mathbb{G}_{a,A}\to H_A$ such that $x=d\alpha(1)$, \end{enumerate} if $A$ is in addition reduced, then (1)-(4) are equivalent to \begin{enumerate} \item[(5)] $x$ belongs to $\mf{h}^\mathrm{der}_A$ and $\ad(x)$ is a nilpotent transformation of $\mf{h}^\mathrm{der}_A$. \end{enumerate} \end{prop} \begin{proof} The equivalence of (1)-(4) is given by \cite[II, \S6, \textnumero 3, Corollaire 3.5]{DemazureGabriel}. To see the equivalence of (1) and (5), in the case when $A$ is reduced, we may assume that $A$ is a field.
Let $\sigma\colon H/Z(H^\mathrm{der})\to \GL(W)$ be the faithful representation given by taking a direct sum of $\mathrm{Ad} \colon H \to \mathrm{GL}(\mathfrak{h}^{\mathrm{der}})$ and the composition of $H \to H^{\mathrm{ab}}$ with a faithful representation of $H^{\mathrm{ab}}$. It is clear that applying (1) to $\sigma$ shows that (5) holds. Conversely, suppose that (5) holds, so then $d\sigma (x)$ is nilpotent. Let $\rho$ be as in (1). We may assume that $\rho$ is irreducible. We put $n=\lvert Z(H^{\mathrm{der}}) \rvert$. Then $\rho^{\otimes n} \colon H \to \mathrm{GL}(V^{\otimes n})$ factors through $H/Z(H^{\mathrm{der}})$. Hence by \cite[Proposition 3.1 (a)]{DeligneHodge} $d\rho^{\otimes n} (x)$ is nilpotent. This implies that $d\rho(x)$ is nilpotent. \end{proof} Let us consider the symmetric algebra on $\mf{h}^\ast$ (resp.\@ the graded ideal of positive degree tensors) \begin{equation*} S(\mf{h}^\ast)=\bigoplus_{d\geqslant 0}S^d(\mf{h}^\ast)=\Hom(\mf{h},\mathbb{A}^1_k),\qquad \bigg(\mathrm{resp.}\,\,S^+(\mf{h}^\ast)\vcentcolon=\bigoplus_{d>0}S^d(\mf{h}^\ast)\bigg). \end{equation*} Let $S(\mf{h}^\ast)^H$ be the $k$-subalgebra of $S(\mf{h}^\ast)$ which is invariant for the adjoint action of $H$ on $\mf{h}$ (in the sense of \cite[Definition 0.5 i)]{Mumford}). Let us then consider the radical ideal \begin{equation*} S^+(\mf{h}^\ast)^H\vcentcolon= S^+(\mf{h}^\ast)\cap S(\mf{h}^\ast)^H. \end{equation*} The \emph{nilpotent variety} of $H$ is the closed subscheme of $\mf{h}$ given by $\mc{N}\vcentcolon= V\left(S^+(\mf{h}^\ast)^H\right)$ (or $\mc{N}_H$ when we want to emphasize $H$). This is not a misnomer as for any extension $k'$ of $k$ we have \begin{equation*} \mc{N}(k')=\left\{x\in \mf{h}_{k'}: x\text{ is nilpotent}\right\} \end{equation*} (cf.\@ \cite[\S 6.1, Lemma]{Jantzen}). In particular, $\mc{N}$ is the unique reduced subscheme of $\mf{h}$ whose $\ov{k}$-points consist of the nilpotent elements of $\mf{h}_{\ov{k}}$.
The nilpotent variety $\mc{N}$ is an integral (cf.\@ \cite[\S6.2, Lemma]{Jantzen}) finite type affine $k$-scheme of dimension $\dim(H)-r$ where $r$ is the geometric rank of $H$ (see \cite[\S6.4]{Jantzen}). In fact, as $k$ is of characteristic $0$, it is normal by the results of \cite{KostantLieGroupReps}. Observe that the nilpotent variety is stable under the adjoint action of $H$. Also observe that if $f\colon H\to H'$ is a morphism of reductive groups over $k$ it induces a morphism $df\colon \mc{N}_H\to \mc{N}_{H'}$ and satisfies $df(\Ad(h)(x))=\Ad(f(h))(df(x))$. \begin{eg}\label{eg:gln-nilp} Let $\mathrm{Mat}_{n,k}$ be the scheme of $n$-by-$n$ matrices over $k$, and let $I\subseteq \mc{O}(\mathrm{Mat}_{n,k})$ be generated by those polynomials corresponding to $(a_{ij})^n=0$. Then, $\mathcal{N}_{\GL_{n,k}}=V(\sqrt{I})$. \end{eg} From this example, and the functoriality of the nilpotent variety, it is easy to see that if $A$ is a $k$-algebra, then one has the containment \begin{equation*} \mc{N}(A)\subseteq \{x\in \mf{h}_A : x\text{ is nilpotent}\}, \end{equation*} which is an equality if $A$ is reduced, but can be strict otherwise. That said, from this containment we see that for any element $x$ of $\mathcal{N}(A)$ we may define an element $\exp(x)$ of $H(A)$ as in \cite[II, \S6, \textnumero 3, 3.7]{DemazureGabriel}. As this association is functorial we obtain an $H$-equivariant morphism of schemes $\mc{N}\to H$ called the \emph{exponential morphism} and denoted by $\exp$ (or $\exp_H$ when we want to emphasize $H$) which is functorial in $H$. We would now like to describe the image of $\exp$. To this end, note that there exists a unique reduced closed subscheme $\mc{U}$ (or $\mc{U}_H$ when we want to emphasize $H$) of $H$ such that \begin{equation*} \mc{U}(k')=\left\{h\in H(k'): h\text{ is unipotent}\right\}, \end{equation*} for all extensions $k'$ of $k$ (see \cite[Proposition 1.1]{SpringerUnipotent}). We call $\mc{U}$ the \emph{unipotent variety} associated to $H$.
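To illustrate that the containment $\mc{N}(A)\subseteq \{x\in \mf{h}_A : x\text{ is nilpotent}\}$ can indeed be strict when $A$ is non-reduced, we record the following elementary (standard) example.
\begin{eg}
Let $H=\GL_{2,\mathbb{Q}}$ and $A=\mathbb{Q}[\epsilon]/(\epsilon^2)$, and consider $x=\epsilon\left(\begin{smallmatrix}1 & 0\\ 0 & 1\end{smallmatrix}\right)$ in $\mf{gl}_2(A)$. Then $x^2=0$, so $x$ is a nilpotent element of $\mf{gl}_2(A)$. On the other hand, the trace is an element of $S^+(\mf{h}^\ast)^H$ and $\mathrm{tr}(x)=2\epsilon\neq 0$, so $x$ does not lie in $\mc{N}(A)$.
\end{eg}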
It is an integral finite type affine $k$-scheme of dimension $\dim(H)-r$ which is stable under the conjugation action of $H$ (see loc.\@ cit.\@). Moreover, as $k$ is of characteristic $0$, it is normal (see \cite[Proposition 1.3]{SpringerUnipotent}). Observe that $\exp$ factorizes through $\mc{U}$, as both are reduced, and so this may be checked on the level of $\ov{k}$-points. We have the following omnibus result concerning the exponential morphism. \begin{prop}\label{prop:exp-omnibus}Let $H$ be a reductive group over a characteristic $0$ field $k$. Then, \begin{enumerate} \item the exponential map $\exp\colon \mc{N}_H\to \mc{U}_H$ is an $H$-equivariant isomorphism, \item for any $k$-algebra $A$ and any $x$ in $\mc{N}_H(A)$, $\Ad(\exp(x))$ is equal to $\sum_{i=0}^\infty \frac{1}{i!}\ad(x)^i$, \item for any $k$-algebra $A$ and any nilpotent Lie subalgebra $\mf{n}$ of $\mf{h}_A$ contained in $\mc{N}(A)$ the subset $\exp(\mf{n})\subseteq H(A)$ is a subgroup. If the functor $B\mapsto \mf{n}\otimes_A B$ is representable by a closed subgroup scheme of $\mc{N}_A$ then $\exp(\mf{n})$ is actually a closed subgroup scheme of $H_A$ such that $\exp(\mf{n})_x$ is unipotent for all $x$ in $\Spec(A)$. \end{enumerate} \end{prop} \begin{proof} For (1), as $\mc{N}_H$ and $\mc{U}_H$ are connected and normal, and $\exp$ may be checked to be a bijection on $\ov{k}$-points, this follows from Zariski's main theorem as $k$ is of characteristic $0$. Claim (2) follows from the functoriality of the exponential map (cf.\@ \cite[II, \S6, \textnumero 3, 3.7]{DemazureGabriel}). Finally, (3) may be deduced from the Campbell--Hausdorff series (see \cite[II, \S6, \textnumero 4, Th\'eor\`eme 2]{Bourbaki}). \end{proof} \subsection{The $L$-group and $C$-group}\label{ss:L-and-C} Fix $F$ to be a non-archimedean local field, and let $G$ be a reductive group over $F$.
In this subsection we define the $C$-group of $G$, which is a modification of the $L$-group of $G$ that is better suited to the theory of parameters over a general $\mathbb{Q}$-algebra. To begin, let $\Psi(G)$ denote the canonical based root datum of $G_{\ov{F}}$ (see \cite[\S1.1]{KotStfcus} and \cite[\S21.42]{MilneGroups}) which comes equipped with an action of $\Gamma_F$. We fix once and for all a \emph{Langlands dual group of} $G$ by which we mean a pinned reductive group $(\widehat{G},\widehat{B},\widehat{T},\{x_\alpha\})$ over $\mathbb{Q}$ (see \cite[\S23.d]{MilneGroups}) together with an isomorphism between $\Psi(\widehat{G},\widehat{B},\widehat{T})$ and $\Psi(G)^\vee$. We denote by $\widehat{\mf{g}}$ the Lie algebra of $\widehat{G}$, and by $\widehat{\mc{N}}$ the nilpotent variety of $\widehat{G}$. Next, let $\mathcal{W}_F$ denote the \emph{Weil group scheme} over $\mathbb{Q}$ associated to $F$ as in \cite[(4.1)]{TatNtb}. For a $\mathbb{Q}$-algebra $A$, one may identify $\mc{W}_F(A)$ with the set of continuous maps $f\colon \pi_0(\Spec(A))\to W_F$ where here $\pi_0(\Spec(A))$ is thought of as a profinite space (cf.\@ \stacks{0906}) and $W_F$ is given its usual topology. In particular, $\mc{W}_F(A)=\underline{W_F}(A)$ when $\pi_0(\Spec(A))$ is discrete (e.g.\@ if $A$ is connected or Noetherian), but can differ otherwise. For $w$ in $W_F$ we shall occasionally abuse notation and use $w$ to also denote its image in $\mc{W}_F(A)$. Note that if $d\colon W_F\to \mathbb{Z}$ is the degree map sending a lift of arithmetic Frobenius to $-1$, then there is a morphism of $\mathbb{Q}$-group schemes $d\colon \mc{W}_F\to \underline{\mathbb{Z}}$ which takes a map $f$ to $d\circ f$. Observe that $\underline{\mathbb{Z}}$ admits an embedding of group $\mathbb{Q}$-schemes into $\bb{G}_{m,\mathbb{Q}}$ corresponding to $1\mapsto q^{-1}$ and we denote the composition of $d$ with this map by $\|\cdot\|\colon \mc{W}_F\to \bb{G}_{m,\mathbb{Q}}$.
We define $\mathcal{I}_F =\ker( \| \cdot \|)$, which is an \emph{affine scheme} equal to $\varprojlim \underline{I_F/I_K}$ as $K$ travels over all finite extensions of $F$. Note that if $A$ is a $\mathbb{Q}$-algebra and $X$ an $A$-scheme locally of finite presentation then any morphism of $A$-schemes $\mc{I}_{F,A}\to X$ must factorize through $\underline{I_F/I_K}$ for some $K$ (cf.\@ \stacks{01ZC}). \begin{rem} One reason to prefer $\mc{W}_F$ over the constant group scheme $\underline{W_F}$ is that the topological group $\pi_0(\mc{W}_F)$ is equal to $W_F$ with its usual topology, and similarly for $\mc{I}_F$. \end{rem} Returning to $G$, note that the action of $\Gamma_F$ on $\Psi(G)$ gives rise to an action of $\Gamma_F$ on $(\widehat{G},\widehat{B},\widehat{T},\{x_\alpha\})$ and, in particular, on $\widehat{G}$ as a group $\mathbb{Q}$-scheme. We define a finite Galois extension $F^\ast$ of $F$ characterized by the equality $\Gamma_{F^\ast}=\ker (W_F \to \Aut (\widehat{G}))$. Equivalently, $F^\ast$ is the minimal field splitting $G^\ast$, the quasi-split inner form of $G$. We write $\Gamma_\ast$ for $\Gamma_{F^\ast/F}$. As $\underline{\Gamma_\ast}$ acts on $\widehat{G}$ and $\mc{W}_F$ admits $\underline{\Gamma_\ast}$ as a quotient, we obtain an action of $\mathcal{W}_F$ on $\widehat{G}$. Define the \emph{$L$-group scheme} of $G$ to be the group $\mathbb{Q}$-scheme ${^L}\!G =\widehat{G} \rtimes \mathcal{W}_F$. Observe that there is a natural inclusion $\widehat{G}\hookrightarrow {^L}\!G$ which identifies $\widehat{G}$ as a normal subgroup scheme of ${^L}\!G$. In particular, there is a natural conjugation action of ${^L}\!G$ on $\widehat{G}$, which in turn induces an adjoint action of ${^L}\!G$ on $\widehat{\mf{g}}$. 
As the action of $\mc{W}_F$ on $\widehat{G}$ factorizes through a finite quotient, we see by Lemma \ref{lem:fixed-points-reductive} below that the group presheaf associating to a $\mathbb{Q}$-algebra $A$ the group $Z_0(\widehat{G})(A)\vcentcolon= Z(\widehat{G})(A)^{\mc{W}_F(A)}$ is representable. \begin{lem}\label{lem:fixed-points-reductive} Let $A$ be a $\mathbb{Q}$-algebra, $H$ a reductive group over $A$, and $\Sigma$ a finite group acting on $H$ by group $A$-scheme automorphisms. Then, the group functor \begin{equation*} H^\Sigma\colon\cat{Alg}_A\to \cat{Grp},\quad B\mapsto H(B)^\Sigma \end{equation*} is represented by a subgroup scheme of $H$ smooth over $A$, with $(H^\Sigma)^\circ$ reductive over $A$, and such that for all $A$-algebras $B$ one has the equality $\mathrm{Lie}(H^\Sigma)(B)=\mathrm{Lie}(H)(B)^\Sigma$. \end{lem} \begin{proof} Write $H=\Spec(R)$; then one easily verifies that $\Spec(R_\Sigma)$, where $R_\Sigma$ is the ring of coinvariants, represents $H^\Sigma$. As $A$ is a $\mathbb{Q}$-algebra, it is evident that $R_\Sigma$ is a direct summand of $R$ and thus $H^\Sigma$ is flat over $A$, and thus smooth. By \cite[Exposé VIB, Corollaire 4.4]{SGA3-1} we know that $(H^\Sigma)^\circ$ is representable and smooth over $A$, and it is then reductive by \cite[Theorem 2.1]{PrasadYu}. The claim about Lie algebras is clear as the functor of $\Sigma$-invariants preserves kernels. \end{proof} Let $X^\ast$ denote the character component of $\Psi(G)$ and $R^+$ the positive root component, and define $\delta$ to be the element of $X^\ast$ given by the sum over the elements of $R^+$. By our identification between $\Psi(\widehat{G},\widehat{B},\widehat{T})$ and $\Psi(G)^\vee$ we see that $\delta$ corresponds to an element of $X_\ast(\widehat{T})$ which we also denote by $\delta$. Let us set $z_G\vcentcolon=\delta(-1)\in \widehat{T}(\mathbb{Q})[2]$. By the proof of \cite[Proposition 5.39]{BuGeconjc}, $z_G$ lies in $Z_0(\widehat{G})(\mathbb{Q})$.
Thus, the action of $\mc{W}_F$ on $\widehat{G}\times \bb{G}_{m,\mathbb{Q}}$ (with trivial action on the second component) fixes the pair $(z_G,-1)$. Therefore, $\mc{W}_F$ acts on $\check{G}\vcentcolon= (\widehat{G}\times \bb{G}_{m,\mathbb{Q}})/\langle (z_G,-1)\rangle$. We then define the \emph{$C$-group scheme} of $G$ to be ${^C}\!G =\check{G} \rtimes \mathcal{W}_F$. Note that by \cite[Proposition 5.39]{BuGeconjc} there exists a central extension $\widetilde{G}$ of $G$ such that ${^C}\!G$ is naturally isomorphic to ${^L}\!\widetilde{G}$. The group $\widehat{G}$ admits a natural embedding into $\check{G}$, with normal image, via the first factor, and therefore we obtain a conjugation action of ${^C}\!G$ on $\widehat{G}$, and thus an adjoint action of ${^C}\!G$ on $\widehat{\mf{g}}$. Also, the morphism \begin{equation*} (\widehat{G}\times \bb{G}_{m,\mathbb{Q}})\rtimes \mathcal{W}_F\to \bb{G}_{m,\mathbb{Q}} \times \mathcal{W}_F,\qquad (g,z,w)\mapsto (z^2,w) \end{equation*} annihilates $\langle (z_G,-1)\rangle$, and thus induces a morphism \begin{equation*} p_C =(p_{\mathbb{G}_m},p_{\mathcal{W}_F})\colon {^C}\!G \to \bb{G}_{m,\mathbb{Q}} \times \mathcal{W}_F . \end{equation*} Finally, we observe that if $k$ is an extension of $\mathbb{Q}$, and $c$ is any element of $k$ such that $c^2=q$, then there is a morphism $i_c\colon {^L}\!G_k\to {^C}\!G_k$ obtained as the composition \begin{equation*} {^L}\!G_k \xrightarrow{(g,w)\mapsto (g,c^{-d(w)},w)}(\widehat{G}_k \times \bb{G}_{m,k} )\rtimes \mathcal{W}_{F,k}\to {^C}\!G_k . \end{equation*} \subsection{Scheme of homomorphisms and cross-section homomorphisms} We establish here some terminology and basic results concerning the scheme of homomorphisms as well as the scheme of cross-section homomorphisms (in the sense of \cite[Appendix A]{DHKMModLp}). Throughout the following we fix $k$ to be a field of characteristic $0$.
\medskip \paragraph{Scheme of homomorphisms} Let $H$ and $H'$ be reductive groups over $k$ with Lie algebras $\mf{h}$ and $\mf{h'}$. For a $k$-algebra $A$ denote by $\Hom(H_A,H'_A)$ the set of group $A$-scheme morphisms $H_A\to H'_A$. Consider the following functor \begin{equation*} \underline{\Hom}(H,H')\colon \cat{Alg}_k\to \cat{Set},\qquad A\mapsto \Hom(H_A,H'_A), \end{equation*} and define the functor $\underline{\Hom}(\mf{h},\mf{h}')$ similarly, both of which carry a natural $H'$-conjugation action. \begin{prop}\label{prop:hom-schem-omnibus} The following statements hold true. \begin{enumerate} \item The functor $\underline{\Hom}(H,H')$ is representable by a smooth $k$-scheme for which the action map \begin{equation*} \mu \colon H'\times\underline{\Hom}(H,H')\to\underline{\Hom}(H,H') \end{equation*} is smooth, \item if $H$ is semi-simple then $\underline{\Hom}(H,H')$ is affine, and if $H$ is furthermore simply connected then the map \begin{equation*} \underline{\Hom}(H,H')\to\underline{\Hom}(\mf{h},\mf{h}'),\qquad f\mapsto df, \end{equation*} is an $H'$-equivariant isomorphism, \item for any $k$-algebra $A$ the natural map \begin{equation*} \Hom(H_A,H'_A)\to \Hom(H(A),H'(A)) \end{equation*} is injective. \end{enumerate} \end{prop} \begin{proof} Statements (1) and (2) follow from \cite[Exp.\@ XXIV, Proposition 7.3.1]{SGA3-3new} and \cite[Theorem 2]{Brion} respectively. Statement (3) follows from Proposition \ref{prop:unirational} below as $H$ and $H'$ are integral and unirational (see \cite[Summary 1.36, Theorem 3.23, and Theorem 17.93]{MilneGroups}). \end{proof} \begin{prop}\label{prop:unirational} Suppose that $X$ and $Y$ are finite type integral $k$-schemes with $X$ unirational. Then for any $k$-algebra $A$, the natural map \begin{equation*} \Hom(X_A,Y_A)\to \Hom(X(A),Y(A)) \end{equation*} is injective.
\end{prop} \begin{proof} One quickly reduces to the case when $X=D(w)\subseteq \mathbb{A}^n_k$ for $w$ in $k[x_1,\ldots,x_n]$, $Y=\mathbb{A}^1_k$, and the two morphisms in question are given by some non-zero $f$ in $A[x_1,\ldots,x_n]$ and the zero map. As $X(k)\to X(A)$ is injective, we will be done if we can show that $f$ does not vanish on $D(w)(k)$. If $\{ a_i \}_{i \in I}$ is a basis of $A$ as a $k$-vector space then we may write $f=\sum_{i \in I} a_i f_i$ where $f_i \in k[x_1,\ldots, x_n]$. As $f$ is non-zero there exists some $i$ such that $f_i$ is non-zero. Since $k$ is infinite, $D(w)(k)$ is Zariski dense in $\mathbb{A}^n_k$, so there exists some $x$ in $D(w)(k)$ such that $f_i(x)\ne 0$. Then, by setup, $f(x)\ne 0$. \end{proof} In what follows, we call a homomorphism of groups $H(A)\to H'(A)$ \emph{algebraic} if it is the map on $A$-points of a morphism (necessarily unique) of group $A$-schemes $H_A\to H'_A$. \medskip \paragraph{Schemes of cross-section homomorphisms} Fix an abstract group $\Sigma$ and a reductive group $H$ over $k$. We then consider the presheaf \begin{equation*} \underline{\Hom}(\Sigma,H)\colon\cat{Alg}_k\to \cat{Set},\qquad A\mapsto \Hom(\Sigma,H(A))=\Hom(\underline{\Sigma}_A,H_A). \end{equation*} This presheaf clearly carries an $H$-conjugation action. If, in addition, $\Sigma$ acts on $H$ by group $k$-scheme morphisms then for a $k$-algebra $A$ we say a homomorphism $f\colon \underline{\Sigma}_A\to H_A\rtimes \underline{\Sigma}_A$ is a \emph{cross-section homomorphism} over $A$ if $p_2(f(\sigma))=\sigma$ for all $\sigma$, where $p_2\colon H_A\rtimes \underline{\Sigma}_A\to\underline{\Sigma}_A$ is the scheme-theoretic projection. We denote by $\underline{Z}^1(\Sigma,H)(A)$ the set of cross-section homomorphisms over $A$; this is clearly a presheaf on $k$-algebras carrying an $H$-conjugation action\footnote{The notation $\underline{Z}^1(\Sigma,H)$ is used as this object is equal to the scheme of $1$-cocycles in \cite[Appendix A]{DHKMModLp}.}.
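To unpack the definition of a cross-section homomorphism, we record the following elementary observation (included for the reader's convenience). Writing $f(\sigma)=(c(\sigma),\sigma)$ on $A$-points, with $c\colon \Sigma\to H(A)$ and with ${^\sigma}h$ denoting the action of $\sigma$ on an element $h$ of $H(A)$, the multiplication rule in $H_A\rtimes \underline{\Sigma}_A$ gives
\begin{equation*}
f(\sigma)f(\tau)=\left(c(\sigma)\cdot {^\sigma}c(\tau),\,\sigma\tau\right),
\end{equation*}
so that $f$ is a homomorphism if and only if $c(\sigma\tau)=c(\sigma)\cdot {^\sigma}c(\tau)$ for all $\sigma$ and $\tau$ in $\Sigma$, i.e.\@ if and only if $c$ is a $1$-cocycle. In particular, if $\Sigma$ acts trivially on $H$ then cross-section homomorphisms over $A$ are precisely the maps $\sigma\mapsto (c(\sigma),\sigma)$ for homomorphisms $c\colon \Sigma\to H(A)$.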
\begin{prop}[{\cite[Lemma A.1 and Corollary A.2]{DHKMModLp}}]\label{prop:cocycle-scheme} Suppose that $\Sigma$ is finite. Then, $\underline{\Hom}(\Sigma,H)$ (resp.\@ $\underline{Z}^1(\Sigma,H)$) is represented by a finite type smooth affine $k$-scheme. Moreover, for all $k$-algebras $A$, and all $f$ in $\underline{\Hom}(\Sigma,H)(A)$ (resp.\@ $\underline{Z}^1(\Sigma,H)(A)$) the orbit map \begin{equation*} \mu_f\colon H_A\to \underline{\Hom}(\Sigma,H)_A,\qquad \bigg(\text{resp.}\,\, \mu_f\colon H_A\to\underline{Z}^1(\Sigma,H)_A\bigg) \end{equation*} is smooth. \end{prop} \subsection{Transporter and centralizer schemes} Let $R$ be a ring, $H$ a group-valued functor on $\cat{Alg}_R$, and $X$ a set-valued functor on $\cat{Alg}_R$ equipped with an action of $H$. Then, for an $R$-algebra $S$ and two elements $\alpha$ and $\beta$ of $X(S)$ we define the \emph{transporter set} to be \begin{equation*} \mathrm{Transp}_H(\alpha,\beta)\vcentcolon= \left\{h\in H(S): h\cdot \alpha=\beta\right\}. \end{equation*} We then define the \emph{transporter presheaf} to be the presheaf \begin{equation*} \underline{\mathrm{Transp}}_H(\alpha,\beta)\colon \cat{Alg}_S\to \cat{Set},\qquad T\mapsto \mathrm{Transp}_H(\alpha_T,\beta_T). \end{equation*} We abbreviate $\underline{\mathrm{Transp}}_H(\beta,\beta)$ to $Z_H(\beta)$ and call it the \emph{centralizer presheaf}, which is clearly a group presheaf. We then have the following obvious proposition. \begin{prop}\label{prop:transp-rep} Suppose that $H$ is a group $R$-scheme and that $X$ is a separated $R$-scheme of finite presentation. Then, for any $R$-algebra $S$ and any elements $\alpha$ and $\beta$ of $X(S)$, the presheaves $\underline{\mathrm{Transp}}_H(\alpha,\beta)$ and $Z_H(\beta)$ are representable by closed finitely presented subschemes of $H_S$. Moreover, for any $S$-algebra $T$ one has the natural equalities \begin{equation*} \underline{\mathrm{Transp}}_H(\alpha,\beta)_T=\underline{\mathrm{Transp}}_H(\alpha_T,\beta_T),\qquad Z_H(\beta)_T=Z_H(\beta_T).
\end{equation*} \end{prop} \section{The classical setting} In this section we recall the Jacobson--Morozov theorem and the Jacobson--Morozov theorem for parameters in their classical settings. This will not only serve to emphasize the results we wish to geometrize, but will play an important role in the proof of these more general results. \subsection{The Jacobson--Morozov theorem} Let $k$ be a field of characteristic $0$ and $H$ an algebraic group over $k$ such that $H^{\circ}$ is reductive. It will be useful to explicitly name the matrices \begin{equation*} e_0=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix},\quad h_0=\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},\quad f_0=\begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix}, \end{equation*} which form a $k$-basis of the Lie algebra $\mf{sl}_{2,k}$. We then have the Jacobson--Morozov Theorem as follows. \begin{thm}[{cf.\@ \cite[VIII, \S 11, \textnumero 2, Proposition 2 and Corollaire]{BourLie78}}]\label{thm:JM-classical} The map \begin{equation*} \mathsf{JM}\colon \Hom(\SL_{2,k},H)\to \mc{N}(k),\qquad \theta\mapsto d\theta(e_0) \end{equation*} is an $H(k)$-equivariant surjection, and induces a bijection \begin{equation*} \Hom(\SL_{2,k},H)/H(k)\to \mc{N}(k)/H(k). \end{equation*} \end{thm} Let us call a triple $(e,h,f)$ of elements of $\mf{h}$ an \emph{$\mf{sl}_2$-triple} if the following equalities hold \begin{equation*} [h,e]=2e,\quad [h,f]=-2f,\quad [e,f]=h. \end{equation*} Let us denote by $\mc{T}(k)$ (or $\mc{T}_H(k)$ when we want to emphasize $H$), the set of $\mf{sl}_2$-triples in $\mf{h}$. The natural adjoint action of $H(k)$ on $\mf{h}$ induces an action of $H(k)$ on $\mc{T}(k)$.
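As a quick sanity check, the basis matrices named above themselves form an $\mf{sl}_2$-triple in $\mf{sl}_{2,k}$: a direct computation gives
\begin{equation*}
[h_0,e_0]=h_0e_0-e_0h_0=2e_0,\qquad [h_0,f_0]=h_0f_0-f_0h_0=-2f_0,\qquad [e_0,f_0]=e_0f_0-f_0e_0=h_0.
\end{equation*}
For $H=\SL_{2,k}$ this is the triple arising from the identity homomorphism $\SL_{2,k}\to\SL_{2,k}$, whose differential is the identity map $\mf{sl}_{2,k}\to\mf{sl}_{2,k}$.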
\begin{thm}\label{thm:rel-JM-triples-classical} The following diagram is commutative and each arrow is a bijection \begin{equation*} \xymatrixrowsep{3pc}\xymatrixcolsep{5pc}\xymatrix{\Hom(\SL_{2,k},H)/H(k)\ar[r]^{\theta\,\longmapsto \,d\theta}\ar[d]^{\mathsf{JM}} & \Hom(\mf{sl}_{2,k},\mf{h})/H(k)\ar[d]^{\nu\mapsto (\nu(e_0),\nu(h_0),\nu(f_0))}\\ \mathcal{N}(k)/H(k) & \mathcal{T}(k)/H(k). \ar[l]_{e\,\longmapsfrom \,(e,h,f)}} \end{equation*} \end{thm} We end this subsection by explaining the relationship between the centralizers of $\theta$ and $N=\mathsf{JM}(\theta)$. Namely, let us set \begin{equation*} \mf{u}^N = \mathrm{im}(\ad (N)) \cap \ker (\ad (N)),\qquad U^N=\exp(\mf{u}^N). \end{equation*} Then, we have the following Levi decomposition statement. \begin{prop}\label{prop:Zudec} The equality $Z_{H}(N) = U^N\rtimes Z_{H}(\theta)$ holds. Further we have \begin{equation*} \Lie (Z_{H}(\theta))=\Lie (Z_{H}(N))_0,\qquad \Lie (U^N)=\bigoplus_{i >0} \Lie (Z_{H}(N))_i, \end{equation*} where for an integer $i$ we set \begin{equation*} \Lie (Z_{H}(N))_i=\{ x \in \Lie Z_{H}(N) : \Ad \left( \theta \left( \left( \begin{smallmatrix}z & 0\\ 0 & z^{-1} \end{smallmatrix} \right) \right) \right)x=z^i x \}. \end{equation*} \end{prop} \begin{proof} The first claim is proved in the same way as \cite[Proposition 2.4]{BaVoUnipss}. The second follows from \cite[Lemma 5.1]{Elkington} by taking the derived group of $H^{\circ}$. \end{proof} \subsection{The Jacobson--Morozov theorem for parameters}\label{ss:JM-for-params-classical} We now recall the analogue of the Jacobson--Morozov theorem for parameters. We use the notation from \S\ref{ss:L-and-C}. \begin{defn} Topologize ${^L}\!G(\mathbb{C})$ by giving $\widehat{G}(\mathbb{C})$ the classical topology. 
\begin{enumerate} \item A \emph{(complex) Weil--Deligne parameter} for $G$ is a pair $(\varphi,N)$ where \begin{itemize} \item $\varphi\colon W_F\to {^L}\!G(\mathbb{C})$ is a continuous cross-section homomorphism, \item $N\in\widehat{\mc{N}}(\mathbb{C})$ is such that $\mathrm{Ad}(\varphi(w))(N)=\|w\|N$ for all $w\in W_F$. \end{itemize} \item A \emph{(complex) $L$-parameter} for $G$ is a map \begin{equation*} \psi\colon W_F\times \SL_2(\mathbb{C})\to {^L}\!G(\mathbb{C}), \end{equation*} such that \begin{itemize} \item $\psi|_{W_F}\colon W_F\to {^L}\!G(\mathbb{C})$ is a continuous cross-section homomorphism, \item $\psi|_{\SL_2(\mathbb{C})}\colon \SL_2(\mathbb{C})\to {^L}\!G(\mathbb{C})$ takes values in $\widehat{G}(\mathbb{C})$ and is algebraic. \end{itemize} \end{enumerate} \end{defn} For $\tau\in \{L,\mathrm{WD}\}$ let us denote by $\Phi^{\tau,\square}_G$ the set of complex $\tau$-parameters for $G$. Recall that a Weil--Deligne parameter $(\varphi,N)$ (resp.\@ an $L$-parameter $\psi$) is called \emph{Frobenius semi-simple} if for one (equiv.\@ for any) lift $w_0$ of arithmetic Frobenius the element $\varphi(w_0)$ (resp.\@ $\psi(w_0)$) is semi-simple (in the sense of \cite[\S8.2]{BorelCorvallis}). We denote by $\Phi^{\tau,\ss,\square}_G$ the subset of Frobenius semi-simple $\tau$-parameters. For each $\tau$ there is a natural action of $\widehat{G}(\mathbb{C})$ on $\Phi^{\tau,\square}_G$ which stabilizes the subset $\Phi^{\tau,\ss,\square}_G$. We then define $\Phi^\tau_G\vcentcolon= \Phi^{\tau,\square}_G/\widehat{G}(\mathbb{C})$ and $\Phi^{\tau,\ss}_G\vcentcolon= \Phi^{\tau,\ss,\square}_G/\widehat{G}(\mathbb{C})$. For an element $\psi$ of $\Phi^{L,\square}_G$ we denote by $\theta$ (or $\theta_\psi$ when we want to emphasize $\psi$) the morphism $\psi|_{\SL_2(\mathbb{C})}\colon \SL_2(\mathbb{C})\to \widehat{G}(\mathbb{C})$. To upgrade Theorem \ref{thm:JM-classical} to the parameter setting, we need to associate a Weil--Deligne parameter to any $L$-parameter. 
To this end, let us define a morphism of groups \begin{equation*} i=(i_1,i_2)\colon W_F\to W_F\times \SL_2(\mathbb{C}),\qquad w\mapsto \left(w,\left(\begin{smallmatrix}\|w\|^{\frac{1}{2}} & 0\\ 0 & \|w\|^{-\frac{1}{2}}\end{smallmatrix}\right)\right). \end{equation*} We then define the \emph{Jacobson--Morozov map} to be the $\widehat{G}(\mathbb{C})$-equivariant map \begin{equation*} \mathsf{JM}\colon \Phi^{L,\square}_G\to \Phi^{\mathrm{WD},\square}_G,\qquad \psi \mapsto (\psi\circ i,d\theta(e_0)). \end{equation*} It is easy to check that $\mathsf{JM}^{-1}(\Phi^{\mathrm{WD},\ss,\square}_G)$ is precisely $\Phi^{L,\ss,\square}_G$. As the Jacobson--Morozov map is $\widehat{G}(\mathbb{C})$-equivariant, it induces maps $\Phi^{L}_G\to \Phi^{\mathrm{WD}}_G$ and $\Phi^{L,\ss}_G\to \Phi^{\mathrm{WD},\ss}_G$. The Jacobson--Morozov map is not a bijection, as the following example illustrates. \begin{eg}\label{eg:JM-not-surj} Set $G=\GL_4$; as $G$ is split we may replace ${^L}\!G(\mathbb{C})$ with $\widehat{G}(\mathbb{C})=\GL_4(\mathbb{C})$. Consider the Weil--Deligne parameter $(\varphi,N)$ given as follows: \begin{equation*} \varphi\colon w\mapsto \begin{pmatrix} q^2 & 0 & 0 & 0 \\ 0 & q & 1 & 0 \\ 0 & 0 & q & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}^{d(w)},\qquad N=\begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \end{equation*} Suppose that $(\varphi,N)=\mathsf{JM}(\psi)$. Then, $\psi$ is of the form $\rho \boxtimes \mathrm{Std}$, where $\mathrm{Std}$ is the standard representation of $\SL_2(\mathbb{C})$. Indeed, from the Jacobson--Morozov theorem one sees that, as an $\SL_2(\mathbb{C})$-representation, $\mathbb{C}^4$ is isomorphic to $\mathrm{Std}^{\oplus 2}$. One may then check that the morphism \begin{equation*} \Hom_{\SL_2(\mathbb{C})}(\mathrm{Std},\mathbb{C}^4)\boxtimes \mathrm{Std}\to \mathbb{C}^4 \end{equation*} is an isomorphism of $W_F\times\SL_2(\mathbb{C})$-representations.
That said, note that the twist of $\rho$ by the unramified character $w \mapsto \|w\|^{-1/2}$ must be isomorphic to the representation on $\ker(N)$ induced by $\varphi$. In particular, $\rho$ is semi-simple. Hence the Weil--Deligne parameter attached to $\psi$ must be Frobenius semi-simple, but the original $(\varphi,N)$ is not Frobenius semi-simple. \end{eg} However, we have the following Jacobson--Morozov theorem for parameters. \begin{thm}[{see \cite[Proposition 2.2]{GRAinv} or \cite[Proposition 1.13]{ImaLLCell}}]\label{thm:JM-params-classical} The Jacobson--Morozov map $\mathsf{JM}\colon\Phi^{L,\ss,\square}_G\to \Phi^{\mathrm{WD},\ss,\square}_G$ is a surjection and induces a bijection $\Phi^{L,\ss}_G\to \Phi^{\mathrm{WD},\ss}_G$. \end{thm} \subsection{Bijection over reductive centralizer locus and applications}\label{ss:red-loc-classical} The Jacobson--Morozov theorem for parameters is stated at the level of $\widehat{G}(\mathbb{C})$-orbits. While this is harmless for now, it becomes problematic when we attempt to geometrize this result, due to the subtle nature of quotients in algebraic geometry. So, we wish to upgrade the Jacobson--Morozov theorem for parameters to a bijectivity statement before quotienting by $\widehat{G}(\mathbb{C})$. To begin, we give an analogue of Proposition \ref{prop:Zudec} for parameters. To state it, let $(\varphi,N)$ be an element of $\Phi^{\mathrm{WD},\square}_G$ and set $U^N(\varphi) \vcentcolon= U^N(\mathbb{C}) \cap Z_{\widehat{G}(\mathbb{C})}(\varphi)$. \begin{prop}\label{prop:Zphidec} Let $\psi$ be an element of $\Phi^{L,\square}_G$ and set $(\varphi, N)=\mathsf{JM}(\psi)$. Then, the equality $Z_{\widehat{G}(\mathbb{C})}(\varphi, N)=U^N(\varphi)\rtimes Z_{\widehat{G}(\mathbb{C})}(\psi)$ holds.
\end{prop} \begin{proof} Given Proposition \ref{prop:Zudec} it suffices to show that if $ua$ belongs to $Z_{\widehat{G}(\mathbb{C})}(\varphi, N)$, where $u$ is in $U^N(\mathbb{C})$ and $a$ is in $Z_{\widehat{G}(\mathbb{C})}(\theta)$, then in fact $u$ belongs to $U^N(\varphi)$ and $a$ belongs to $Z_{\widehat{G}(\mathbb{C})}(\psi)$. To prove this, we note that conjugation by an element in the image of $\varphi$ stabilizes both $U^N(\mathbb{C})$ and $Z_{\widehat{G}(\mathbb{C})}(\theta)$. Indeed, since $\Ad(\varphi(w))(N) = \|w\| N$, we have that conjugation by $\varphi(w)$ stabilizes $Z_{\widehat{G}(\mathbb{C})}(N)$ and hence its unipotent radical $U^N$. On the other hand, as $\varphi(w)$ equals $\psi(w,1)\theta(i_2(w))$, and $\psi(w,1)$ commutes with $\theta$, one may easily check the claim that $\varphi(w)$ normalizes $Z_{\widehat{G}(\mathbb{C})}(\theta)$. Now for each $w \in W_F$, $ua$ equals $\Int(\varphi(w))(u)\Int(\varphi(w))(a)$. Therefore, $ \Int(\varphi(w))(a)a^{-1} $ equals $\Int(\varphi(w))(u)^{-1}u$. By what we have proven, the former is an element of $Z_{\widehat{G}(\mathbb{C})}(\theta)$ and the latter is an element of $U^N(\mathbb{C})$. Since $U^N(\mathbb{C})$ and $Z_{\widehat{G}(\mathbb{C})}(\theta)$ have trivial intersection, we have that both sides are trivial and so $a$ and $u$ commute with $\varphi(w)$ as desired. \end{proof} We may use this decomposition to exhibit an example of a semi-simple $L$-parameter $\psi$ whose associated Weil--Deligne parameter has strictly larger centralizer. \begin{eg} Let $G=\GL_3$ and consider the element $\psi$ in $\Phi^{L,\ss,\square}_G$ given by the following \begin{equation*} \psi \left(w, \begin{pmatrix} a & b \\ c & d \end{pmatrix} \right) = \left( \begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \| w \|^{-\frac{1}{2}} & 0 & 0 \\ 0 & \| w \|^{-\frac{1}{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix} , w \right) . \end{equation*} and set $(\varphi,N)=\mathsf{JM}(\psi)$. 
In this case, we have \[ \mathfrak{u}^N =\left\{ \begin{pmatrix} 0 & * & * \\ 0 & 0 & 0 \\ 0 & * & 0 \end{pmatrix} \right\}. \] Hence \[ \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \in Z_{\widehat{G} (\mathbb{C})}(\varphi,N) \cap U^N(\mathbb{C}), \] but it does not belong to $Z_{\widehat{G} (\mathbb{C})}(\psi)$ by Proposition \ref{prop:Zphidec}. \end{eg} \begin{rem} We remark that although $Z_{\widehat{G}(\mathbb{C})}(\psi)$ need not equal $Z_{\widehat{G}(\mathbb{C})}(\mathsf{JM}(\psi))$, these groups are the same for the purposes of parametrizing $L$-packets as in \cite{KalLLCnqs}, since they have the same component groups by Proposition \ref{prop:Zphidec}. More generally, one can consider the group $S^{\natural}_{\psi}$ (resp.\@ $S^{\natural}_{\mathsf{JM}(\psi)}$) that is related to \cite[Conjecture F]{KalLLCnqs} and is defined by \begin{equation*} Z_{\widehat{G}(\mathbb{C})}(\psi)/[Z_{\widehat{G}(\mathbb{C})}(\psi)\cap \widehat{G}(\mathbb{C})^{\mathrm{der}}]^{\circ},\qquad \bigg(\text{resp.}\,\,Z_{\widehat{G}(\mathbb{C})}(\mathsf{JM}(\psi))/[Z_{\widehat{G}(\mathbb{C})}(\mathsf{JM}(\psi)) \cap \widehat{G}(\mathbb{C})^{\mathrm{der}}]^{\circ}\bigg). \end{equation*} These groups are equal by Proposition \ref{prop:Zphidec}, since $U^N(\varphi)$ is contained in $[Z_{\widehat{G}(\mathbb{C})}(\mathsf{JM}(\psi)) \cap \widehat{G}(\mathbb{C})^{\mathrm{der}}]^{\circ}$. \end{rem} This decomposition also allows us to give an algebraic condition for when a Weil--Deligne parameter is the image under the Jacobson--Morozov map of a semi-simple $L$-parameter with the same centralizer. In the rest of this section we use Proposition \ref{prop:red-cent-ss}; note, however, that the proof of that proposition does not depend on the rest of this section.
\begin{prop}\label{prop:red-cent-equiv} The group $Z_{\widehat{G}(\mathbb{C})}(\varphi,N)^\circ$ is reductive if and only if $(\varphi,N)=\mathsf{JM}(\psi)$ for a Frobenius semi-simple $L$-parameter $\psi$ such that $Z_{\widehat{G}(\mathbb{C})}(\psi)=Z_{\widehat{G}(\mathbb{C})}(\varphi,N)$. \end{prop} \begin{proof} Suppose first that $Z_{\widehat{G}(\mathbb{C})}(\varphi,N)^\circ$ is reductive. We shall show in Proposition \ref{prop:red-cent-ss} that this implies that $(\varphi,N)$ is Frobenius semi-simple. Let $\psi$ be any element of $\Phi^{L,\ss,\square}_G$ such that $\mathsf{JM}(\psi)=(\varphi,N)$. By Proposition \ref{prop:Zphidec} the reductivity of $Z_{\widehat{G}(\mathbb{C})}(\varphi,N)^\circ$ implies that $U^N(\varphi)$ is trivial, and thus $Z_{\widehat{G}(\mathbb{C})}(\psi)=Z_{\widehat{G}(\mathbb{C})}(\varphi,N)$ as desired. Conversely, if $(\varphi,N)=\mathsf{JM}(\psi)$ for an element $\psi$ of $\Phi^{L,\ss,\square}_G$ with $Z_{\widehat{G}(\mathbb{C})}(\psi)=Z_{\widehat{G}(\mathbb{C})}(\varphi,N)$, then $Z_{\widehat{G}(\mathbb{C})}(\varphi,N)^\circ$ is reductive by \cite[Proposition 3.2]{SilbergerZink}. \end{proof} Let $\Phi^{\mathrm{WD},\mathrm{rc},\square}_G$ consist of those $(\varphi,N)$ with $Z_{\widehat{G}(\mathbb{C})}(\varphi,N)^\circ$ reductive. We call this the \emph{reductive centralizer locus} of $\Phi^{\mathrm{WD},\square}_G$. \begin{cor}\label{cor:JM-rd-bij-classical} The map $\mathsf{JM}\colon \mathsf{JM}^{-1}\left(\Phi^{\mathrm{WD},\mathrm{rc},\square}_G\right)\to \Phi^{\mathrm{WD},\mathrm{rc},\square}_G$ is a $\widehat{G}(\mathbb{C})$-equivariant bijection. \end{cor} \begin{proof} This follows from Theorem \ref{thm:JM-params-classical}, Proposition \ref{prop:red-cent-equiv}, and the fact that $\psi$ is Frobenius semi-simple if and only if $\mathsf{JM}(\psi)$ is, for $\psi \in \Phi^{L,\square}_G$.
\end{proof} \subsection{Essentially tempered parameters} To make Corollary \ref{cor:JM-rd-bij-classical} useful, we now show that $\mathsf{JM}^{-1}(\Phi^{\mathrm{WD},\mathrm{rc},\square}_G)$ contains a large class of important $L$-parameters. To this end, let us call an element $\psi$ of $\Phi^{L,\square}_G$ \emph{essentially tempered} if the projection of $\psi(W_F)$ to $\widehat{G}(\mathbb{C})/Z_0(\widehat{G})(\mathbb{C})$ is relatively compact. Let $\Phi^{L,\mathrm{est},\square}_G$ be the set consisting of essentially tempered $L$-parameters. We will soon show that every essentially tempered $L$-parameter maps into the reductive centralizer locus, but first we must establish some results concerning Frobenius semi-simple parameters. \begin{prop}\label{prop:ess-temp-Fss} Any element $\psi$ of $\Phi^{L,\mathrm{est},\square}_G$ is Frobenius semi-simple. \end{prop} \begin{proof} The map $\psi'$ obtained by composing $\psi|_{W_{F^\ast}}$ with the projection to $\widehat{G}(\mathbb{C})/Z_0(\widehat{G})(\mathbb{C})$ is a homomorphism. By Lemma \ref{lem:L-group-ss} below it suffices to show that if $w_0$ is an arithmetic Frobenius lift and $m$ is divisible by $[F^\ast:F]$, then $\psi'(w_0^m)$ is semi-simple. But, by essential temperedness we know that $\psi'(w_0^m)$ is contained in a maximal compact subgroup $K$ of $\widehat{G}(\mathbb{C})/Z_0(\widehat{G})(\mathbb{C})$. Up to conjugation, we may then assume that $K=H(\mathbb{R})$ for $H$ a compact form of $\widehat{G}(\mathbb{C})/Z_0(\widehat{G})(\mathbb{C})$ (see \cite[Theorem D.2.8 and Proposition D.3.2]{ConRgrsch}). But, as $H(\mathbb{R})$ consists only of semi-simple elements, the claim follows. \end{proof} \begin{lem}\label{lem:L-group-ss} Let $(s,w)$ be an element of ${^L}\!G(\mathbb{C})$ and write $(s,w)^m= (s_m,w^m)$.
Then, $(s,w)$ is semi-simple if and only if $s_m$ is semi-simple for some non-zero integer $m$ divisible by $[F^\ast:F]$. \end{lem} \begin{proof} Fix any representation $r\colon {^L}\!G\to \mathrm{GL}_n$. As $r((s,w)^k)=r(s,w)^k$ we see that $(s,w)$ is semi-simple if and only if $(s,w)^k$ is for some $k>0$. But, if $m$ is divisible by $[F^\ast:F]$ then, as $r((s,w)^{mk})=r(s_m^k,1)$ for some $k>0$, the conclusion follows. \end{proof} The following shows that the terminology ``essentially tempered'' is reasonable. \begin{prop}\label{prop:ess-temp-twist} For $\psi \in \Phi^{L,\square}_G$, the following conditions are equivalent: \begin{enumerate} \item $\psi \in \Phi^{L,\mathrm{est},\square}_G$, \item there is a continuous character $\chi \colon W_F\times \SL_2(\mathbb{C}) \to Z_0(\widehat{G})(\mathbb{C})$ such that the projection of $(\chi \psi)(W_F)$ to $\widehat{G}(\mathbb{C})$ is relatively compact. \end{enumerate} \end{prop} \begin{proof} It is clear that (2) implies (1). We show that (1) implies (2). Fix a Frobenius lift $w_0 \in W_F$. Set $H=Z_{\widehat{G}(\mathbb{C})}(\psi)$, which has reductive identity component by Proposition \ref{prop:ess-temp-Fss} and \cite[Proposition 3.2]{SilbergerZink}. Let $\widehat{\psi}$ be the $\widehat{G}$-component of $\psi$. Taking a positive integer $m$ to be divisible by $|\Aut (\psi(I_F))|$ and $[F^\ast:F]$ we see that $\widehat{\psi}(w_0^m) \in H$, and thus in fact $\widehat{\psi}(w_0^m) \in Z(H)$. By replacing $m$ by a power, we may assume that $\widehat{\psi}(w_0^m) \in Z(H)^{\circ}$. Since $\psi \in \Phi^{L,\mathrm{est},\square}_G$, there is a compact subgroup $C \subseteq Z(H)^{\circ}$ such that $\widehat{\psi}(w_0^m) \in C \cdot (Z(H)^{\circ} \cap Z(\widehat{G})(\mathbb{C}))$. We write $\widehat{\psi}(w_0^m)=c z$ for $c \in C$ and $z \in Z(H)^{\circ} \cap Z(\widehat{G})(\mathbb{C})$.
Since elements of $Z(H)^{\circ} \cap Z(\widehat{G})(\mathbb{C})$ commute with $\psi(W_F)$, we have $Z(H)^{\circ} \cap Z(\widehat{G})(\mathbb{C})=Z(H)^{\circ} \cap Z_0(\widehat{G})(\mathbb{C})$. Replacing $m$ again, we may assume that $z \in (Z(H)^{\circ} \cap Z_0(\widehat{G})^\circ(\mathbb{C}))$. We take $z_0 \in (Z(H)^{\circ} \cap Z_0(\widehat{G})^\circ(\mathbb{C}))$ such that $z_0^m =z$. Further, we define $\chi$ as the unramified character sending $w_0$ to $z_0^{-1}$. Then the image of $(\chi \psi)(W_F)$ in $\widehat{G}(\mathbb{C})$ is contained in the image of $\bigcup_{i=0}^{m-1} \psi (I_F) (\chi\psi)(w_0^i) C$ in $\widehat{G}(\mathbb{C})$, which is compact. \end{proof} We now relate $\Phi^{L,\mathrm{est},\square}_G$ to the reductive centralizer locus of $\Phi^{\mathrm{WD},\square}_G$. \begin{prop}\label{prop:temp-cent-equal} The containment $\Phi^{L,\mathrm{est},\square}_G\subseteq \mathsf{JM}^{-1}(\Phi^{\mathrm{WD},\mathrm{rc},\square}_G)$ holds. \end{prop} \begin{proof} Let $\psi$ be an element of $\Phi^{L,\mathrm{est},\square}_G$ and set $(\varphi,N)=\mathsf{JM}(\psi)$. Then $\psi$ is Frobenius semi-simple by Proposition \ref{prop:ess-temp-Fss}. We claim that $Z_{\widehat{G}(\mathbb{C})}(\psi)=Z_{\widehat{G}(\mathbb{C})}(\varphi,N)$, from which we will be done by Proposition \ref{prop:red-cent-equiv}. By Proposition \ref{prop:Zphidec}, it suffices to show that $U^N(\varphi)$ is trivial. Suppose for contradiction that $U^N(\varphi)$ is non-trivial, and take a non-zero weight vector $v$ of $\Lie (U^N(\varphi))$ with respect to the adjoint action of $\theta|_{T_2}$, where $T_2$ is the standard maximal torus of $\SL_{2,\mathbb{C}}$. We put $u= \exp(v)$. For each $w \in W_F$ we have that $\varphi(w)= \psi(w,1)\theta(i_2(w))$. Since $\varphi(w)$ commutes with $u$, we see that $\Int(\psi(w,1)^{-1})(u)$ is equal to $\Int(\theta(i_2(w)))(u)$, and therefore \begin{equation*} \Ad(\psi(w,1)^{-1})(v) = \Ad(\theta(i_2(w)))(v).
\end{equation*} But, observe that if $w_0$ is a lift of arithmetic Frobenius in $W_F$ then $i_2(w_0^{2n})=\left(\begin{smallmatrix}q^n & 0\\ 0 & q^{-n}\end{smallmatrix}\right)$. By Proposition \ref{prop:Zudec}, we deduce that $\Ad(\theta(i_2(w_0^{2n})))(v)=q^{jn} v$ for some $j\geqslant 1$. Letting $n$ tend to infinity, and using the fact that $v$ is non-zero, we deduce that the adjoint orbit of $W_F$ on $v$ is non-compact, which is a contradiction. \end{proof} We now state a corollary to Proposition \ref{prop:temp-cent-equal}. Before doing so, we recall an even smaller subset of $\Phi^{L,\mathrm{est},\square}_G$ that will feature prominently below. Namely, recall that $(\varphi,N)$ in $\Phi^{\mathrm{WD},\square}_G$ (resp.\@ $\psi$ in $\Phi^{L,\square}_G$) is called \emph{discrete} if the quotient \begin{equation*} Z_{\widehat{G}(\mathbb{C})}(\varphi,N)/Z_0(\widehat{G})(\mathbb{C})\qquad \bigg(\text{resp.}\,\,Z_{\widehat{G}(\mathbb{C})}(\psi)/Z_0(\widehat{G})(\mathbb{C})\bigg) \end{equation*} is finite. Denote by $\Phi^{\mathrm{WD},\mathrm{disc},\square}_G$ (resp.\@ $\Phi^{L,\mathrm{disc},\square}_G$) the set of discrete parameters and $\Phi^{\mathrm{WD},\mathrm{disc}}_G$ (resp.\@ $\Phi^{L,\mathrm{disc}}_G$) its $\widehat{G}(\mathbb{C})$-quotient. Note that $\Phi^{L,\mathrm{disc},\square}_G$ is contained in $\Phi^{L,\mathrm{est},\square}_G$ (cf.\@ \cite[Lemma 3.1]{GRAinv} and \cite[Lemma 5.2]{SilbergerZink}), and thus $\psi$ is discrete if and only if $\mathsf{JM}(\psi)$ is discrete, as they have the same centralizers by Proposition \ref{prop:temp-cent-equal} and its proof. \begin{cor}\label{cor:bij-et-disc} The map \begin{equation*} \mathsf{JM}\colon \Phi^{L,\mathrm{est},\square}_G\to \Phi^{\mathrm{WD},\square}_G,\qquad \bigg(\text{resp.}\,\, \mathsf{JM}\colon \Phi^{L,\mathrm{disc},\square}_G\to\Phi^{\mathrm{WD},\mathrm{disc},\square}_G\bigg) \end{equation*} is a $\widehat{G}(\mathbb{C})$-equivariant injection (resp.\@ bijection).
\end{cor} Note that implicit in the above is the following result of independent interest. \begin{prop}\label{prop:disc-Frob-ss-classical} Any element of $\Phi^{\mathrm{WD},\mathrm{disc},\square}_G$ (resp.\@ $\Phi^{L,\mathrm{disc},\square}_G$) is Frobenius semi-simple. \end{prop} \begin{proof} The first claim is a special case of Proposition \ref{prop:red-cent-ss}. The second claim follows from $\Phi^{L,\mathrm{disc},\square}_G \subseteq \Phi^{L,\mathrm{est},\square}_G$ and Proposition \ref{prop:ess-temp-Fss}. \end{proof} We end this subsection by applying Corollary \ref{cor:bij-et-disc} to show that the association of $\psi\circ i$ to $\psi$ is injective when restricted to the set of discrete $L$-parameters. This result plays an important technical role in \cite{Characterization}. \begin{prop}\label{prop:disc-ss-prop} The maps \begin{equation*} \Phi^{\mathrm{WD},\mathrm{disc}}_G\xrightarrow{(\varphi,N)\mapsto \varphi} \Hom(W_F,{^L}\!G(\mathbb{C}))/\widehat{G}(\mathbb{C}),\qquad \Phi^{L,\mathrm{disc}}_G\xrightarrow{\psi\mapsto \psi\circ i} \Hom(W_F,{^L}\!G(\mathbb{C}))/\widehat{G}(\mathbb{C}) \end{equation*} are injective. \end{prop} \begin{proof} By Corollary \ref{cor:bij-et-disc} it suffices to show that the former map is injective. Fix $\lambda$ in the set $\Hom(W_F,{^L}\!G(\mathbb{C}))$. By Proposition \ref{prop:disc-Frob-ss-classical} it then suffices to show that (if non-empty) the set \begin{equation*} P(G,\lambda)\vcentcolon=\left\{(\varphi,N)\in \Phi^{\mathrm{WD},\ss,\square}_G: \varphi=\lambda\right\} \end{equation*} intersects at most one $\widehat{G}(\mathbb{C})$-orbit of discrete parameters.
As in \cite[\S4]{VoganLL}, set $\widehat{G}(\mathbb{C})^\lambda$ to be $Z_{\widehat{G}(\mathbb{C})}(\lambda)$, and \begin{equation*} \widehat{\mf{g}}^{\,\lambda(I_F)}_q\vcentcolon=\left\{x\in \widehat{\mf{g}}_\mathbb{C}: \begin{aligned}(1)&\quad \mathrm{Ad}(\lambda(w))(x)=x\text{ for all }w\in I_F\\ (2)&\quad \mathrm{Ad}(\lambda(w_0))(x)=qx\end{aligned}\right\} \end{equation*} where $w_0$ is any lift of arithmetic Frobenius. Both $P(G,\lambda)$ and $\widehat{\mf{g}}^{\,\lambda(I_F)}_q$ carry an action of $\widehat{G}(\mathbb{C})^\lambda$, and \cite[Proposition 4.5]{VoganLL} establishes a $\widehat{G}(\mathbb{C})^\lambda$-equivariant bijection $P(G,\lambda)\to \widehat{\mf{g}}^{\,\lambda(I_F)}_{q}$ and shows that the latter space has only finitely many $\widehat{G}(\mathbb{C})^\lambda$-orbits. Therefore, $P(G,\lambda)$ carries the structure of a vector space on which $\widehat{G}(\mathbb{C})^\lambda$ acts algebraically and with only finitely many orbits. Suppose then that $(\lambda,N)$ is a discrete element of $P(G,\lambda)$ and let $\mc{O}\subseteq P(G,\lambda)$ denote its $\widehat{G}(\mathbb{C})^\lambda$-orbit. Now, $\mc{O}$ is a locally closed subscheme of $P(G,\lambda)$ (see \cite[Proposition 1.65 (2)]{MilneGroups}) of dimension $\dim(\widehat{G}(\mathbb{C})^\lambda)-\dim(H)$, where $H$ is the isotropy subgroup of $(\lambda,N)$ in $\widehat{G}(\mathbb{C})^\lambda$ (\cite[Proposition 5.23 and Proposition 7.12]{MilneGroups}). But, note that $H=Z_{\widehat{G}(\mathbb{C})}(\lambda,N)$ and so contains $Z_0(\widehat{G})(\mathbb{C})$ as a finite-index subgroup. We deduce that $\dim(\mc{O})$ is equal to $\dim(\widehat{G}(\mathbb{C})^\lambda)-\dim(Z_0(\widehat{G})(\mathbb{C}))$. But, as $\widehat{G}(\mathbb{C})^\lambda$ acts through $\widehat{G}(\mathbb{C})^\lambda/Z_0(\widehat{G})(\mathbb{C})$, and has finitely many (locally closed) orbits, we see that $\dim P(G,\lambda)$ is at most $\dim(\widehat{G}(\mathbb{C})^\lambda)-\dim(Z_0(\widehat{G})(\mathbb{C}))$. Thus, we deduce that $\dim(\mc{O})=\dim(P(G,\lambda))$.
As $\mc{O}$ is locally closed in $P(G,\lambda)$ and of full dimension, we deduce that $\mc{O}$ is open. As $P(G,\lambda)$ is a vector space it is irreducible, so any two non-empty open subsets intersect; since distinct orbits are disjoint, $\mc{O}$ is the unique open orbit, and the conclusion follows. \end{proof} \section{The geometric and relative Jacobson--Morozov theorems} Before we can geometrize the Jacobson--Morozov theorem for parameters, we first geometrize the Jacobson--Morozov theorem itself. After doing so, we derive a version of the Jacobson--Morozov theorem at the level of $A$-points. We fix for the remainder of this section a field $k$ of characteristic $0$ and $H$ a reductive group over $k$. \begin{rem} In this section we often assume that $H$ is split. This will be sufficient for us as $\widehat{G}$ is a split group. That said, most of these statements admit obvious generalizations to arbitrary reductive $H$, with similar proofs. The exception is Theorem \ref{thm:relative-jm}, but we suspect that the statement is still true and that one can employ a similar strategy to prove it. \end{rem} \subsection{The orbit separation space} Pivotal to our formulation of a geometric version of the Jacobson--Morozov theorem is a certain construction which, in a precise sense, replaces a variety equipped with a group action by the disjoint union of its orbits. Throughout this subsection we fix a reduced quasi-projective scheme $X$ over $k$ equipped with an action of $H$. We also assume that the map \begin{equation*} X(k)/H(k)\to X(\ov{k})/H(\ov{k}) \end{equation*} is surjective (although one may deal with the general case by Galois descent). Whenever we speak of the class of $x$ in $X(\ov{k})/H(\ov{k})$ we assume without loss of generality that $x$ is in $X(k)$. For each element $x$ of $X(k)$ let us denote by $\mathcal{O}_x$ the \emph{orbit scheme} given as the fppf sheafification of the presheaf \begin{equation*} \cat{Alg}_{k}\to\cat{Set},\qquad A\mapsto \left\{g\cdot x: g\in H(A)\right\}\subseteq X(A).
\end{equation*} Since $X$ is itself an fppf sheaf, we see that $\mathcal{O}_x$ is an $H$-stable subsheaf of $X$. \begin{prop} The orbit scheme is representable by a reduced locally closed subscheme of $X$ smooth over $k$. Moreover, the orbit map $\mu_x\colon H\to \mc{O}_x$ is smooth and surjective and identifies $\mc{O}_x$ as the fppf sheaf quotient $H/Z_H(x)$. \end{prop} \begin{proof} Clearly the orbit map identifies $\mc{O}_x$ as the fppf sheaf quotient $H/Z_H(x)$. In \cite[Proposition 1.65]{MilneGroups} it is shown that $\mu_x(H)$ is a locally closed subset of $X$, which one may endow with the reduced scheme structure. In \cite[Proposition 7.17]{MilneGroups} it is shown that $\mu_x(H)$ represents $\mathcal{O}_x$. The smoothness of the orbit map is then confirmed by \cite[Proposition 7.15]{MilneGroups}, and the smoothness of $\mc{O}_x$ over $k$ is handled by \cite[Corollary 5.26]{MilneGroups}. \end{proof} It will be useful to have a more explicit description of the $A$-points of $\mc{O}_x$ for a $k$-algebra $A$. \begin{prop}\label{prop:orbit-desc-gen} For any $k$-algebra $A$, there are identifications \begin{equation*} \mc{O}_x(A)=\left\{\mathbf{x}\in X(A):x\emph{ and }\mathbf{x}\emph{ lie in the same }H(A)\emph{-orbit \'etale locally on }A\right\}, \end{equation*} and \begin{equation*} \mc{O}_x(A)/H(A)=\ker\bigg(H^1_\mathrm{\acute{e}t}(\Spec(A),Z_{H}(x))\to H^1_\mathrm{\acute{e}t}(\Spec(A),H)\bigg). \end{equation*} \end{prop} \begin{proof} The first claim follows from the fact that the orbit map $\mu_x\colon H\to \mc{O}_x$ is a smooth surjection and \cite[Corollaire 17.16.3.(ii)]{EGA4-4}. The second claim follows by combining \cite[Chapitre III, Corollaire 3.2.3]{Giraud} with the fact that as $H_A$ and $Z_H(x)_A$ are smooth over $A$, their \'etale cohomology functorially agrees with their fppf cohomology (cf.\@ \cite[Th\'{e}or\`{e}me 11.7]{GrothendieckBrauerIII}). \end{proof} When $A$ is a reduced $k$-algebra, one may give a simpler description.
Say an element $\mb{x}$ of $X(A)$ is \emph{everywhere geometrically conjugate (egc)} to $x$ if for all geometric points $\Spec(k')\to\Spec(A)$ one has that $x$ and $\mb{x}$ have images in $X(k')$ belonging to the same $H(k')$-orbit. \begin{prop}\label{prop:ever-geom-conj} For a reduced $k$-algebra $A$ there is a functorial identification \begin{equation*} \mathcal{O}_x(A)=\left\{\mb{x}\in X(A):\mb{x}\emph{ is egc to }x\right\}. \end{equation*} \end{prop} \begin{proof} Evidently any element of $\mathcal{O}_x(A)$ is egc to $x$. If $\mb{x}$ is egc to $x$ then the morphism $\mb{x}\colon \Spec(A)\to X$ has the property that $\mb{x}(|\Spec(A)|)\subseteq |\mathcal{O}_x|$. As $\Spec(A)$ is reduced this implies that $\mb{x}$ factorizes through $\mathcal{O}_x$ as desired. \end{proof} We then assemble the spaces $\mc{O}_x$ into one as follows. \begin{defn} We define the \emph{orbit separation} of $X$, denoted by $X^\sqcup$, to be the space \begin{equation*} X^\sqcup\vcentcolon= \bigsqcup_{x\in X(\ov{k})/H(\ov{k})}\mc{O}_x. \end{equation*} \end{defn} We have a tautological map $X^\sqcup\to X$, and we have the following omnibus result concerning its properties in the case when $X(\ov{k})/H(\ov{k})$ is finite, which is the case of most interest to us. Below, and in the sequel, we call a morphism of schemes $f\colon Y\to X$ \emph{weakly birational} if there exists a dense open subset $U$ of $X$ such that $f^{-1}(U)\to U$ is an isomorphism. \begin{prop}\label{prop:sqcup-omnibus} Suppose that $X(\ov{k})/H(\ov{k})$ is finite. Then, the map $X^\sqcup\to X$ is a weakly birational surjective monomorphism, and it is an isomorphism if and only if the morphism \begin{equation*} \mu\colon H\times X\to X\times X,\qquad (h,x)\mapsto (h\cdot x,x) \end{equation*} is smooth. \end{prop} As the last condition is equivalent to the claim that $\mc{O}_x$ is open for each $x$ in $X(\ov{k})$ (cf.\@ \cite[Lemma 3.5]{Brion} and \stacks{05VJ}), this is a special case of Lemma \ref{lem:stratification-isom} below.
\begin{lem}\label{lem:stratification-isom} Let $f\colon Y\to X$ be a morphism of reduced schemes of finite type over $k$. Suppose that $Y_{\ov{k}}$ admits a scheme-theoretic decomposition $\bigsqcup_i Y_i$ such that $f|_{Y_i}$ is a locally closed immersion, and $f(Y_i(\ov{k}))\cap f(Y_j(\ov{k}))$ is empty for $i\ne j$. Then, \begin{enumerate} \item $f$ is a monomorphism, \item $f$ is weakly birational if and only if $f(Y(\ov{k}))$ is dense in $X$, \item $f$ is an isomorphism if and only if $f(Y(\ov{k}))=X(\ov{k})$ and each $f(Y_i)$ is open in $X_{\ov{k}}$. \end{enumerate} \end{lem} \begin{proof} As all of these claims may be checked over $\ov{k}$, we may assume without loss of generality that $k$ is algebraically closed. The final claim is clear, so we focus on the first two claims. For the first claim, as each $f|_{Y_i}$ is a monomorphism, it suffices to show that $f(Y_i)$ and $f(Y_j)$ are disjoint for $i\ne j$. But, as $f(Y_i)\cap f(Y_j)$ is locally closed, if it were non-empty it would contain a $k$-point, which is a contradiction. To see the second claim, it suffices to show the if direction. For each irreducible component $Z$ of $X$ note that $\{f(Y_i)\cap Z\}$ is a finite set of locally closed subsets with dense union. This implies that there exists some $i_0$ such that $f(Y_{i_0})\cap Z$ is open in $Z$. Let $C$ be the union of irreducible components of $X$ which intersect $Z$ at a proper non-empty subset. Set $U_Z\vcentcolon= (f(Y_{i_0})\cap Z)-C$. Then, if $U$ is the union of the $U_Z$, it is clear that $U$ is a dense open subset of $X$ and, as $X$ is reduced, that $f\colon f^{-1}(U)\to U$ is an isomorphism. \end{proof} Finally, observe that the orbit separation space is a functorial construction. Namely, if $Y$ is another quasi-projective scheme over $k$ equipped with an action of $H$ with the same properties, then for any $H$-equivariant morphism $X\to Y$, the composition $X^\sqcup\to X\to Y$ factorizes uniquely through $Y^\sqcup\to Y$.
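\begin{eg} To illustrate the preceding notions in the simplest non-trivial case, consider $H=\mathbb{G}_{m,k}$ acting on $X=\mathbb{A}^1_k$ by scaling. There are exactly two orbits of $\ov{k}$-points, represented by $0$ and $1$, with orbit schemes $\mc{O}_0=\{0\}$ and $\mc{O}_1=\mathbb{G}_{m,k}$, the latter an open subscheme of $\mathbb{A}^1_k$. Thus \begin{equation*} X^\sqcup=\{0\}\sqcup\mathbb{G}_{m,k}\to \mathbb{A}^1_k \end{equation*} is a surjective monomorphism which is weakly birational, being an isomorphism over the dense open subset $\mathbb{G}_{m,k}$, but it is not an isomorphism, as the orbit $\{0\}$ is not open in $\mathbb{A}^1_k$. \end{eg}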
\subsection{The geometric Jacobson--Morozov theorem} We now move to the geometrization of the Jacobson--Morozov theorem. From now on we assume that $H$ is split. To begin, observe that one has a \emph{Jacobson--Morozov morphism} \begin{equation*} \mathsf{JM}\colon \underline{\Hom}(\SL_{2,k},H)\to \mc{N},\qquad \theta\mapsto d\theta(e_0). \end{equation*} We would like to apply the orbit separation construction from the last subsection to this map, but before we do so, we should first observe that the actions of $H$ on $\underline{\Hom}(\SL_{2,k},H)$ and $\mc{N}$ satisfy the properties used in the last section. \begin{prop}\label{prop:split-nilp-desc} The maps \begin{equation*} \mc{N}(k)/H(k)\to \mc{N}(\ov{k})/H(\ov{k}),\qquad \Hom(\SL_{2,k},H)/H(k)\to \Hom(\SL_{2,\ov{k}},H_{\ov{k}})/H(\ov{k}) \end{equation*} are surjections. \end{prop} \begin{proof} By Theorem \ref{thm:JM-classical} it suffices to show that the first map is a surjection. Let $N$ be an element of $\mc{N}(\ov{k})$. Bala--Carter theory (see \cite[\S4]{Jantzen}) says that there exists a Levi subgroup $\ov{L}$ of $H_{\ov{k}}$ and a parabolic subgroup $\ov{P}$ of $\ov{L}$ such that $N$ is conjugate to an element contained in the unique open orbit of $\ov{P}$ acting on $\Lie(R_u(\ov{P}))$. Now, as $H$ is split, we may assume up to conjugacy that $\ov{L}=L_{\ov{k}}$ for a Levi subgroup $L$ of $H$ (see \cite{Solleveld}). As $L$ is also split we may also assume, up to conjugacy, that $\ov{P}=P_{\ov{k}}$ for a parabolic subgroup $P$ of $L$. As the unique open orbit of $P$ acting on $\Lie(R_u(P))$ is a non-empty Zariski open subset of an affine space over the infinite field $k$, it has a $k$-point, and we are done. \end{proof} \begin{rem} The map $\mc{N}(k)/H(k)\to \mc{N}(\ov{k})/H(\ov{k})$ is rarely injective.
As a concrete example, if $H=\SL_{2,\mathbb{Q}}$ then $\left(\begin{smallmatrix}0 & 1\\ 0 & 0\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}0 & -1\\ 0 & 0\end{smallmatrix}\right)$ are $H(\ov{\mathbb{Q}})$-conjugate, but not $H(\mathbb{Q})$-conjugate. \end{rem} Before we show that our two spaces with $H$-action have finitely many $H(\ov{k})$-orbits, we observe the following. \begin{prop}\label{prop:SL2-Hom-open-orbits} The morphism $\underline{\Hom}(\SL_{2,k},H)^\sqcup\to\underline{\Hom}(\SL_{2,k},H)$ is an isomorphism. \end{prop} \begin{proof} It suffices to assume that $k$ is algebraically closed. Then, by Proposition \ref{prop:hom-schem-omnibus} the orbits of $k$-points of $\underline{\Hom}(\SL_{2,k},H)$ are open. By Proposition \ref{prop:sqcup-omnibus} we then deduce that the morphism under consideration is a monomorphism which is locally on the source an open embedding, and so is itself an open embedding. As its image contains every $k$-point, it is an isomorphism. \end{proof} \begin{prop} The sets $\Hom(\SL_{2,\ov{k}},H_{\ov{k}})/H(\ov{k})$ and $\mc{N}(\ov{k})/H(\ov{k})$ are finite. \end{prop} \begin{proof} By Theorem \ref{thm:JM-classical} these two sets are in bijection, so it suffices to prove the finiteness of either. The finiteness of the latter set is a classical result (e.g.\@ see \cite[\S2.8, Theorem 1]{Jantzen}). Alternatively, one may prove the finiteness of the former set by observing that by Proposition \ref{prop:SL2-Hom-open-orbits} the sets $\Hom(\SL_{2,\ov{k}},H_{\ov{k}})/H(\ov{k})$ and $\pi_0(\underline{\Hom}(\SL_{2,{\ov{k}}},H_{\ov{k}}))$ are in bijection. But, by Proposition \ref{prop:hom-schem-omnibus} the scheme $\underline{\Hom}(\SL_{2,{\ov{k}}},H_{\ov{k}})$ is of finite type over $\ov{k}$ and thus $\pi_0(\underline{\Hom}(\SL_{2,{\ov{k}}},H_{\ov{k}}))$ is finite.
\end{proof} By the functoriality of the orbit separation construction the Jacobson--Morozov morphism factors uniquely through $\mc{N}^\sqcup$, and we also denote the resulting map $\underline{\Hom}(\SL_{2,k},H)\to \mc{N}^\sqcup$ by $\mathsf{JM}$. But, unlike $\underline{\Hom}(\SL_{2,k},H)$, the orbit separation space $\mc{N}^\sqcup$ is essentially never equal to $\mc{N}$. \begin{prop} The morphism $\mc{N}^\sqcup\to\mc{N}$ is an isomorphism if and only if $H$ is abelian. \end{prop} \begin{proof} If $H$ is abelian then $\mc{N}$ is a single point. Conversely, if $\mc{N}^\sqcup\to\mc{N}$ is an isomorphism then by Proposition \ref{prop:sqcup-omnibus} the orbit of $0$ is open, but as it is also closed and $\mc{N}$ is connected we deduce that it is equal to $\mc{N}$. As $\dim(\mc{N})$ is equal to $\dim(H)-r(H)$, we see that $H$ is a torus, and in particular abelian, as desired. \end{proof} \begin{eg}\label{eg:naive-JM-fails} The element $\mathbf{N}=\left(\begin{smallmatrix}0 & t\\ 0 & 0\end{smallmatrix}\right)$ defines a point of $\mc{N}_{\GL_{2,k}}(k[t])$ not in $\mc{N}_{\GL_{2,k}}^\sqcup(k[t])$. Indeed, the specialization of $\mathbf{N}$ at $t=0$ is zero while its specialization at $t=1$ is a regular nilpotent, so these lie in different $\GL_2(\ov{k})$-orbits and $\mathbf{N}$ cannot factor through a single orbit $\mc{O}_x$. \end{eg} To state our geometric Jacobson--Morozov theorem, note that by Theorem \ref{thm:JM-classical} the map \begin{equation*} \mathsf{JM}\colon \Hom(\SL_{2,k},H)/H(k)\to \mc{N}(k)/H(k), \end{equation*} is a bijection. For each $\theta$, writing $N=\mathsf{JM}(\theta)$, define $\mathsf{JM}_\theta$ to be the map $\mc{O}_\theta \to\mc{O}_N$ which may be described as the quotient map $H/Z_H(\theta)\to H/Z_H(N)$. \begin{thm}[Geometric Jacobson--Morozov]\label{thm:geom-JM-split} Suppose that $H$ is split. The morphism $\mathsf{JM}\colon \underline{\Hom}(\SL_{2,k},H)\to \mc{N}$ factorizes through $\mc{N}^\sqcup$, where it may be described as $\bigsqcup_\theta \mathsf{JM}_\theta$. \end{thm} \subsection{The relative Jacobson--Morozov theorem} We now apply the geometric Jacobson--Morozov theorem to obtain a more concrete result on the level of $A$-points.
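As a sanity check, let us first note how the failure in Example \ref{eg:naive-JM-fails} is remedied after inverting $t$; the computation below is only meant as an illustration.

\begin{eg}
With notation as in Example \ref{eg:naive-JM-fails}, set $A=k[t,t^{-1}]$. Then $\mathbf{N}$ does lie in $\mc{N}^\sqcup_{\GL_{2,k}}(A)$: for $g=\mathrm{diag}(t,1)$ one has $\Ad(g)\left(\begin{smallmatrix}0 & 1\\ 0 & 0\end{smallmatrix}\right)=\mathbf{N}$, so $\mathbf{N}$ factors through the orbit of the regular nilpotent. Moreover, $\mathbf{N}=d\theta(e_0)$ where $\theta=\Int(g)\circ\theta_0$ and $\theta_0\colon \SL_{2,A}\to\GL_{2,A}$ is the standard inclusion.
\end{eg}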
\begin{thm}[Relative Jacobson--Morozov]\label{thm:relative-jm} Let $A$ be a $k$-algebra. Then, the map \begin{equation*} \mathsf{JM}\colon \Hom(\SL_{2,A},H_A)/H(A)\to \mc{N}(A)/H(A) \end{equation*} is a bijection onto $\mc{N}^\sqcup(A)/H(A)$. \end{thm} \begin{proof} Assume first that $\Spec(A)$ is connected. By Theorem \ref{thm:geom-JM-split}, it suffices to show that for each $\theta$ the map $\mathsf{JM}_\theta$ induces a bijection $\mathcal{O}_\theta(A)/H(A)\to \mathcal{O}_N(A)/H(A)$. By Proposition \ref{prop:orbit-desc-gen} it suffices in turn to show that the natural map $H^1_\mathrm{\acute{e}t}(\Spec(A),Z_H(\theta))\to H^1_\mathrm{\acute{e}t}(\Spec(A),Z_H(N))$ is a bijection, and this follows from Proposition \ref{prop:Zudec} and \cite[Lemma 4.14]{GillePianzola}. For the general case we reduce to the Noetherian case by standard approximation arguments, and then, by working on each connected component, to the case where $\Spec(A)$ is connected. \end{proof} We now pursue the analogue of Theorem \ref{thm:rel-JM-triples-classical} in the relative setting. \begin{defn} Let $A$ be a $k$-algebra and $\mf{a}$ a Lie algebra over $A$. We call a triple of elements $(e,h,f)$ in $\mf{a}^3$ such that \begin{equation*} [h,e]=2e,\quad [h,f]=-2f,\quad [e,f]=h, \end{equation*} an \emph{$\mf{sl}_2$-triple} in $\mf{a}$. \end{defn} Denote by $\mc{T}(A)$ (or $\mc{T}_H(A)$ when we want to emphasize $H$) the set of $\mf{sl}_2$-triples in $\mf{h}_A$. Evidently $\mc{T}(A)$ carries a natural conjugation action by $H(A)$. \begin{thm}\label{thm:rel-JM-triples} The following diagram is commutative and each arrow is a bijection: \begin{equation*} \xymatrixrowsep{3pc}\xymatrixcolsep{5pc}\xymatrix{\Hom(\SL_{2,A},H_A)/H(A)\ar[r]^{\theta\,\longmapsto \,d\theta}\ar[d]^{\mathsf{JM}} & \Hom(\mf{sl}_{2,A},\mf{h}_A)/H(A)\ar[d]^{\nu\mapsto (\nu(e_0),\nu(h_0),\nu(f_0))}\\ \mc{N}^\sqcup(A)/H(A) & \mathcal{T}(A)/H(A).
\ar[l]_{e\,\longmapsfrom \,(e,h,f)}} \end{equation*} \end{thm} \begin{proof} By Theorem \ref{thm:relative-jm} the left vertical arrow is a bijection. The right vertical arrow is clearly a bijection, and the top horizontal arrow is a bijection by Proposition \ref{prop:hom-schem-omnibus}. We thus deduce that the bottom horizontal arrow is well-defined (i.e.\@ takes values in $\mc{N}^\sqcup(A)$) and is bijective. \end{proof} \subsection{A relative version of Kostant's characterization of $\mf{sl}_2$-triples} This final subsection is dedicated to giving a proof of the following relative version of \cite[Corollary 3.5]{KostdsB}. \begin{prop}\label{prop:Kostant-triples-prop} Let $A$ be a $k$-algebra and $\mf{a}$ a Lie subalgebra of $\mf{h}_A$. Then, for a pair $(e,h)$ in $\mf{a}^2$, there exists an $\mf{sl}_2$-triple of the form $(e,h,f)$ in $\mf{a}$ if and only if the following conditions hold: \begin{enumerate} \item $e\in\mc{N}^\sqcup(A)$, \item $h$ is in the image of $\mathrm{ad}(e)\colon \mf{a}\to \mf{a}$, \item $[h,e]=2e$. \end{enumerate} \end{prop} Let us set \begin{equation*} \mf{h}^e_A\vcentcolon=\ker\left(\ad(e)\colon \mf{h}_A\to\mf{h}_A\right),\qquad \mf{a}^e\vcentcolon=\ker\left(\ad(e)\colon \mf{a}\to\mf{a}\right). \end{equation*} If $\ad(e)(x)$ is zero then $\ad(e)(\ad(h)(x))$ is also zero. Thus, $\ad(h)$ stabilizes $\mf{h}_A^e$ and $\mf{a}^e$. \begin{lem}\label{lem:ad(x)+2-invertible} The $A$-linear map $\ad(h)+2\colon \mf{a}^e\to\mf{a}^e$ is an isomorphism. \end{lem} \begin{proof} It suffices to show this result after passing to an \'etale cover $\Spec(B)\to\Spec(A)$. Indeed, since $A\to B$ is faithfully flat we have that $(\mf{a}^e)_B=\mf{a}_B^e$, and moreover that $\ker(\ad(h)+2)$ and $\mathrm{coker}(\ad(h)+2)$ are trivial if and only if they are so after tensoring with $B$. Thus, we may assume without loss of generality that $e$ is an element of $\mc{N}(k)$.
With notation as in Lemma \ref{lem:eigenvalues-ad(x)+2} below, the $A$-algebra map $A[T]\to \End_A(\mf{a}^e)$ sending $T$ to $\ad(h)$ factorizes through $A[T]/(p(T))$. But, by the Chinese remainder theorem (the factors $T-i$ being pairwise comaximal, as their differences are non-zero integers), $T+2$ is a unit in this ring, its image in each factor $A[T]/(T-i)$ being the unit $i+2$. \end{proof} \begin{lem}[{cf.\@ \cite[Lemma 3.4]{KostdsB}}]\label{lem:eigenvalues-ad(x)+2} Suppose that $e$ is an element of $\mc{N}(k)$. Let $m$ be the smallest integer such that $\ad(e)^{m+1}$ is trivial on $\mf{h}$. Then, $p(\ad(h)|_{\mf{h}^e_A})=0$ where \begin{equation*} p(T)=\prod_{i=0}^m\left(T-i \right). \end{equation*} Thus, a fortiori, we see that $p\left(\ad(h)|_{\mf{a}^e}\right)=0$. \end{lem} \begin{proof} For each $i=0,\ldots,m+1$ let us set \begin{equation*} \mf{d}_i\vcentcolon=(\ad(e)^i(\mf{h})\cap \mf{h}^{e})\otimes_k A. \end{equation*} Observe that \begin{equation*} \mf{h}^{e}_A=\mf{d}_0\supseteq\cdots\supseteq \mf{d}_{m+1}=0. \end{equation*} We claim then that $(\ad(h)-i)(\mf{d}_i)\subseteq \mf{d}_{i+1}$. Note that $\mf{d}_i$ is generated as an $A$-module by the elements of the form $\ad(e)^i(z)$, for $z$ in $\mf{h}$, which lie in $\mf{h}^e$. The exact same computation as in \cite[Lemma 3.4]{KostdsB} then shows the desired containment, from which the claim is clear. \end{proof} Returning to the proof of Proposition \ref{prop:Kostant-triples-prop}, note first that the conditions are necessary: (3) is part of the definition of an $\mf{sl}_2$-triple, (2) holds since $h=[e,f]=\ad(e)(f)$, and (1) follows from Theorem \ref{thm:rel-JM-triples}. Conversely, suppose the conditions hold and, using condition (2), write $h=\ad(e)(f)$ with $f$ in $\mf{a}$. Note that $[[h,f]+2f,e]$ vanishes and thus $[h,f]+2f$ is in $\mf{a}^e$. By Lemma \ref{lem:ad(x)+2-invertible} we may write $[h,f]+2f=[h,g]+2g$ for some $g$ in $\mf{a}^e$. Then, taking $f''=f-g$, we have \begin{equation*} [h,e]=2e,\qquad [h,f'']=[h,f]-[h,g]=-2f'',\qquad [e,f'']=[e,f]-[e,g]=h-0=h, \end{equation*} as desired. \section{Moduli spaces of Weil--Deligne parameters} To give a geometrization of the results of \S\ref{ss:JM-for-params-classical} it is useful to first develop a space intermediate between the moduli space of $L$-parameters (see \S\ref{s:L-param}) and the moduli space of Weil--Deligne parameters.
We give such a space in this section which, in short, parameterizes Weil--Deligne parameters whose monodromy operator lies in $\mathcal{N}^\sqcup$. \subsection{The moduli space of Weil--Deligne parameters}\label{ss:WD-params} We first recall the moduli space of Weil--Deligne parameters, roughly following the presentation in \cite{ZhuCohLp}. \medskip \paragraph{Initial definitions} We begin by defining the relative analogue of a Weil--Deligne parameter. \begin{defn} For a $\mathbb{Q}$-algebra $A$, we define a \emph{Weil--Deligne parameter over} $A$ to be a pair $(\varphi,N)$ where \begin{enumerate}[leftmargin=2cm,widest=iiiiii] \item[\textbf{(WDP1)}] $\varphi\colon \mathcal{W}_{F,A}\to {^C}\!G_A$ is a morphism of group $A$-schemes such that $p_C \circ \varphi=(\| \cdot \|,\id)$, \item[\textbf{(WDP2)}] $N$ is an element of $\widehat{\mc{N}}(A)$ such that $\Ad(\varphi(w))(N)=\|w\| N$ for all $w \in \mathcal{W}_F (A)$. \end{enumerate} \end{defn} We denote the set of Weil--Deligne parameters over $A$ by $\mathsf{WDP}_G(A)$; this clearly constitutes a presheaf $\mathsf{WDP}_G$ on $\mathbb{Q}$-algebras. The presheaf $\mathsf{WDP}_G$ has a natural action by $\widehat{G}$ given by \begin{equation*} g(\varphi,N)g^{-1}\vcentcolon= (\Int(g)\circ \varphi,\Ad(g)(N)). \end{equation*} So, for a Weil--Deligne parameter $(\varphi,N)$ we may consider the centralizer group presheaf $Z_{\widehat{G}}(\varphi,N)$. We define the morphism $\check{\varphi}\colon \mathcal{W}_{F,A}\to \check{G}_A$ of schemes as the composition of $\varphi$ with the projection to $\check{G}_A$. We denote by $\ov{\varphi}$ the homomorphism $\mathcal{W}_{F,A}\to (\widehat{G}\rtimes \underline{\Gamma_\ast})_A$ obtained by composing $\varphi$ with the quotient map ${^C}\!G_A\to (\widehat{G}\rtimes \underline{\Gamma_\ast})_A$. Observe that while $\check{\varphi}$ may not be a homomorphism, it becomes so after restriction to $\mathcal{W}_{F^\ast,A}$.
In particular, for any $w \in \mathcal{W}_F (A)$ the restriction of $\check{\varphi}$ to $\langle w^m\rangle$ is a homomorphism whenever $[F^\ast:F]$ divides $m$. Let $K$ be a finite extension of $F^\ast$ Galois over $F$, and let us define for a $\mathbb{Q}$-algebra $A$ the set \begin{equation*} \mathsf{WDP}_G^K(A)\vcentcolon= \left\{(\varphi,N)\in \mathsf{WDP}_G(A): \mathcal{I}_{K,A}\subseteq \ker(\check{\varphi}|_{\mathcal{W}_{F^\ast,A}})\right\}. \end{equation*} We observe that $\mathsf{WDP}_G^K$ forms a $\widehat{G}$-stable subfunctor of $\mathsf{WDP}_{G}$. In fact, one sees that there is an equality of functors $\mathsf{WDP}_G=\varinjlim \mathsf{WDP}_G^K$ as $K$ travels over all such extensions. We finally observe that $\mathsf{WDP}_G$ has a more familiar form over an extension $k$ of $\mathbb{Q}$ containing an element $c$ such that $c^2=q$. More precisely, for a $k$-algebra $A$, we equip $\widehat{G}(A)$ with the discrete topology and put \begin{equation*} \mathsf{WDP}_{G,k}'(A)\vcentcolon= \left\{(\varphi,N):\begin{aligned} (1)&\quad \varphi\colon W_F\to \widehat{G}(A) \rtimes W_F \text{ is a continuous cross-section homomorphism}, \\ (2)&\quad N\in\widehat{\mathcal{N}}(A)\text{ is such that }\Ad(\varphi(w))(N)=\|w\|N\text{ for all }w\in W_F\end{aligned}\right\}. \end{equation*} It is clear that $\mathsf{WDP}'_{G,k}$ is a functor on the category of $k$-algebras and comes equipped with a natural action of $\widehat{G}_k$. Let us also observe that if $i_c$ is the map from \S\ref{ss:L-and-C} then there is a morphism $i_c^{\mathrm{WD}} \colon \mathsf{WDP}_{G,k}'\to \mathsf{WDP}_{G,k}$ which on $A$-points is given by sending $(\varphi',N)$ to the unique element of $\mathsf{WDP}_G(A)$ of the form $(\varphi,N)$ which is equal to $(i_c\circ \varphi',N)$ on $A$-points. \begin{prop}\label{prop:WD-C-L-comparison} The morphism of functors $i_c^{\mathrm{WD}} \colon \mathsf{WDP}'_{G,k}\to \mathsf{WDP}_{G,k}$ is an isomorphism.
\end{prop} \begin{proof} This follows from the cartesian diagram \begin{equation*} \xymatrixcolsep{4pc} \xymatrix{{}^L G_k\ar[r]^{i_c}\ar[d] & {}^C G_k \ar[d]^{p_C}\\ \mathcal{W}_{F,k}\ar[r]^-{(\| \cdot \|,\id )} & \mathbb{G}_{m,k} \times \mathcal{W}_{F,k} } \end{equation*} and that any morphism $\mathcal{W}_{F,A}\to \check{G}_A$ of schemes over $A$ factors through $(\mathcal{W}_{F}/\mathcal{I}_{K})_A$ for a finite extension $K$ of $F$. \end{proof} \paragraph{Representability} We now establish the representability of the functor $\mathsf{WDP}_{G}$. To this end, let us fix $K$ a finite extension of $F^\ast$ Galois over $F$. Note that for a $\mathbb{Q}$-algebra $A$ and an element $(\varphi,N)$ of $\mathsf{WDP}_G^K(A)$ we may define an element $\phi$ of $\underline{Z}^1(I_F/I_K,\widehat{G})(A)$ as follows. First observe that condition \textbf{(WDP1)} implies that $\varphi|_{\mathcal{I}_{F,A}}$ takes values in $\widehat{G}_A \rtimes \mathcal{I}_{F,A}$. Then, as $(\varphi,N)$ is in $\mathsf{WDP}_G^K(A)$, the composition of $\varphi|_{\mathcal{I}_{F,A}}$ with the projection $\widehat{G}_A\rtimes \mathcal{I}_{F,A}\to \widehat{G}_A\rtimes (\mathcal{I}_F/\mathcal{I}_K)_A$ factorizes through a cross-section homomorphism $(\mathcal{I}_F/\mathcal{I}_K)_A \to \widehat{G}_A\rtimes (\mathcal{I}_F/\mathcal{I}_K)_A$. This gives an element $\phi$ of $\underline{Z}^1(I_F/I_K,\widehat{G})(A)$ since $\mathcal{I}_F/\mathcal{I}_K \cong \underline{I_F/I_K}$. This association defines a morphism of presheaves $\mathsf{WDP}_G^K\to \underline{Z}^1(I_F/I_K,\widehat{G})$. Let us now fix a lift $w_0$ of arithmetic Frobenius in $W_F$. Define a morphism of presheaves \begin{equation*} j_{w_0}\colon \mathsf{WDP}_G^K\to \check{G}\times \underline{Z}^1(I_F/I_K,\widehat{G})\times \widehat{\mc{N}},\qquad (\varphi,N)\mapsto (\check{\varphi}(w_0),\phi,N). 
\end{equation*} On the other hand, we have a diagram \begin{equation*} \mc{D}^\mathrm{WD}\colon \xymatrix{\check{G}\times \underline{Z}^1(I_F/I_K,\widehat{G})\times \widehat{\mc{N}} \ar@<-.5ex>[r] \ar@<.5ex>[r]& \underline{\Hom}(I_F/I_K,\widehat{G})\times \bb{G}_{m,\mathbb{Q}}\times \widehat{\mc{N}}^{[I_F:I_K]+1}} \end{equation*} given by \begin{equation*} \begin{aligned} (g,f,M)\mapsto & \bigg(\Int(g,w_0)\circ f,p_{\mathbb{G}_m}(g), (\mathrm{Ad}(f(i))(M))_{i\in I_F/I_K},\Ad(g,w_0)(M)\bigg)\\ (g,f,M) \mapsto &\bigg(f\circ \Int(w_0),q,(M)_{i\in I_F/I_K},qM\bigg).\end{aligned} \end{equation*} We then have the following explicit description of $\mathsf{WDP}_G^K$. \begin{prop}\label{prop:wd-finite-level-rep} The morphism $j_{w_0}$ identifies $\mathsf{WDP}_G^K$ with the equalizer $\mathsf{Eq}(\mc{D}^\mathrm{WD})$. Thus, $\mathsf{WDP}_G^K$ is representable by a finite type affine $\mathbb{Q}$-scheme and $j_{w_0}$ is a closed embedding. \end{prop} Observe that for an extension $K\subseteq K'$ of Galois extensions of $F$ containing $F^\ast$ there is a restriction morphism $\underline{Z}^1(I_F/I_{K'},\widehat{G})\to \underline{Z}^1(I_K/I_{K'},\widehat{G})$. By Proposition \ref{prop:cocycle-scheme} and Proposition \ref{prop:sqcup-omnibus} the subspace consisting of only the trivial homomorphism is a clopen subset of the target, and thus so is its preimage in $\underline{Z}^1(I_F/I_{K'},\widehat{G})$, but this is precisely $\underline{Z}^1(I_F/I_K,\widehat{G})$. We deduce that the natural inclusion of functors $\mathsf{WDP}_G^K\to \mathsf{WDP}_G^{K'}$ is a clopen embedding. From the identification $\mathsf{WDP}_G=\varinjlim_K \mathsf{WDP}_G^K$ we deduce from Proposition \ref{prop:wd-finite-level-rep} that $\mathsf{WDP}_G$ is representable by a scheme locally of finite type over $\mathbb{Q}$, all of whose connected components are affine. The following non-trivial result will play an important technical role below. 
\begin{thm}[{\cite[Corollary 2.3.7]{BeGeGdef} and \cite[Corollary 3.1.10]{ZhuCohLp}}]\label{thm:WD-reduced} The schemes $\mathsf{WDP}_G^K$ are reduced for all $K$, and thus, a fortiori, $\mathsf{WDP}_G$ is reduced. \end{thm} \subsection{Semi-simplicity of parameters} As in Theorem \ref{thm:JM-params-classical}, one requires Frobenius semi-simplicity conditions to get a Jacobson--Morozov result in the relative setting. Therefore, we now develop a sufficient notion of Frobenius semi-simplicity for Weil--Deligne and $L$-parameters over a $\mathbb{Q}$-algebra $A$. \begin{defn}\label{defn:s-s-elem} Let $R$ be a $\mathbb{Q}$-algebra and $H$ a smooth group $R$-scheme such that $H^\circ$ is reductive. We then say that an element $h$ of $H(R)$ is \emph{semi-simple} if there exists some $m\geqslant 1$, an \'etale cover $\Spec(S)\to\Spec(R)$, and a torus $T$ of $H^\circ_S$ such that $h^m$ is in $T(S)$. \end{defn} By \cite[Exposé VIB, Corollaire 4.4]{SGA3-1} $H^\circ$ is representable so the above makes sense. Moreover, by \cite[Proposition B.3.4]{ConRgrsch} we may assume that $T$ is split in the above definition. \begin{prop}\label{prop:ss-properties} Let $R$ be a $\mathbb{Q}$-algebra, let $H$ be a smooth group $R$-scheme such that $H^\circ$ is reductive, and let $h$ be an element of $H(R)$. Then, the following statements are true. \begin{enumerate} \item If $h$ is semi-simple, there exists an \'etale cover $\Spec(S)\to\Spec(R)$, an integer $m\geqslant 1$, and a split maximal torus $T$ of $H_S^\circ$ such that $h^m$ is in $T(S)$. \item If $Z$ is a closed subgroup $R$-scheme of $Z(H^\circ)$ which is flat over $R$, then $h$ is semi-simple if and only if its image in $(H/Z)(R)$ is semi-simple. \end{enumerate} \end{prop} \begin{proof} To show (1) let $\Spec(S')\to \Spec(R)$ be an \'etale cover and $T'$ a torus of $H^\circ_{S'}$ such that $h^m$ is in $T'(S')$.
Note that $Z_{H^\circ}(T')$ is a reductive group (combine \cite[Lemma 2.2.4]{ConRgrsch} and \cite[Corollary 17.59]{MilneGroups}). By \cite[Corollary 3.2.7]{ConRgrsch} there exists an \'etale cover $\Spec(S)\to\Spec(S')$ and a maximal torus $T$ of $Z_{H^\circ}(T')_S$. Observe that $T$ is also a maximal torus of $H^\circ_S$. Indeed, it is evidently a torus, and its maximality can be checked over each point $x$ of $\Spec(S)$, where it is clear. As $T'_S$ is central in $Z_{H_{S'}^\circ}(T')_S$ it is clear that $T$ contains $T'_S$ and thus $h^m$ is contained in $T(S)$. As we may pass to a further \'etale extension to split $T$, the claim follows. Let $f\colon H^\circ\to H^\circ/Z$ be the tautological map. To prove (2) it is sufficient to note that for any $R$-algebra $S$ the maps $T\mapsto T/Z$ and $T'\mapsto f^{-1}(T')$ are mutually inverse bijections between the maximal tori of $H^\circ_S$ and those of $(H^\circ/Z)_S$ by \cite[Corollary 3.3.5]{ConRgrsch}. \end{proof} Consider a representation $\rho\colon H\to \GL(M)$ where $M$ is a finitely generated $R$-module. Let $h$ be an element of $H(R)$ and $I$ a finite subgroup of $H(R)$ that is stable under conjugation by $h$. For any $R$-algebra $S$ and any $\lambda$ in $S^\times$ let us set \begin{equation*} M_S^I(h,\lambda)\vcentcolon=\ker\left(\rho(h)-\lambda\colon M_S^{\rho(I)}\to M_S^{\rho(I)}\right). \end{equation*} Abbreviate $M_R^I(h,\lambda)$ to $M^I(h,\lambda)$, and further abbreviate to $M^I(\lambda)$ if $h$ is clear from context. Finally, we omit $I$ from the notation if $I$ is trivial. Evidently $M_S^I(h,\lambda)\otimes_S S'$ is equal to $M_{S'}^I(h,\lambda)$ for any flat map of $R$-algebras $S\to S'$. \begin{prop}\label{prop:eigen-decomp} Assume that $h$ is semi-simple.
Then, there exists a unique decomposition \begin{equation*} M^I=\bigoplus_{\lambda \in R^\times}M^I(h,\lambda)\oplus M' \end{equation*} such that for any flat map $R\to S$ one has that \begin{equation*} \bigoplus_{\lambda\in S^\times-R^\times}M_S^I(h,\lambda) \end{equation*} is a direct summand of $M'_S$, and such that this is an equality if for some $m\geqslant 1$: \begin{enumerate} \item $h^m$ is contained in a split torus of $H^\circ_S$ and commutes with $I$, \item $S$ is a $\mathbb{Q}(\zeta_r)$-algebra, where $r\vcentcolon= [\langle h\rangle : \langle h^m\rangle]$ and $\zeta_r$ is a primitive $r^\text{th}$-root of unity, \item and $S$ contains an $r^\text{th}$-root of all $\lambda$ such that $M(h^r,\lambda)\ne 0$. \end{enumerate} \end{prop} \begin{proof} Take an \'etale cover $\Spec(S)\to \Spec(R)$ and $m\geqslant 1$ such that $h^m$ is contained in a split torus $T$ of $H^\circ_S$ and $h^m$ commutes with $I$. Then $h^r \in \langle h^m\rangle$ is contained in $T$ and commutes with $I$. By \cite[Lemma A.8.8]{CGP} one may decompose $M_S$ into character spaces $M_S(\chi)$. One then observes that $M_S(h^r,\lambda)$ is precisely the direct sum of those character spaces $M_S(\chi)$ such that $\chi(h^r)=\lambda$. So, $M_S$ admits a direct sum decomposition with respect to the spaces $M_S(h^r,\lambda)$. As $M_S$ is finitely generated, we know that $M_S(h^r,\lambda)$ is trivial for all $\lambda$ outside a finite set $\lambda_1,\ldots,\lambda_e$. In particular, we may further pass to the \'etale extension $S'\vcentcolon= S[\lambda_1^{\nicefrac{1}{r}},\ldots,\lambda_e^{\nicefrac{1}{r}},\zeta_r]$. We extend the action of $I$ on each nontrivial $M_{S'}(h^r,\lambda)$ by $\rho$ to the action of the finite group $I \rtimes (\langle h\rangle/\langle h^m\rangle)$, letting $h$ act by $\lambda^{-\nicefrac{1}{r}}\rho (h)$.
As $S'$ is a $\mathbb{Q} (\zeta_r)$-algebra, we have a decomposition of $M_{S'}^I(h^r,\lambda)$ into character spaces $M_{S'}(h^r,\lambda)[\nu]$ where $\nu$ travels over the characters $I \rtimes (\langle h\rangle/\langle h^m\rangle) \to \langle h\rangle/\langle h^m\rangle\to (S')^\times$. We then see that for each $\tau \in (S')^\times$ such that $\tau^r=\lambda$ the space $M_{S'}^I(h,\tau)$ admits a direct sum decomposition into the spaces $M_{S'}(h^r,\lambda)[\nu]$ as $\nu$ ranges over those characters with $\nu(h)=\lambda^{-\nicefrac{1}{r}}\tau$. One may then check that the module $\bigoplus_{\tau}M_{S'}^I(h,\tau)$ as $\tau$ ranges over those elements of $(S')^\times-R^\times$ is stabilized under the \'etale descent data associated to $M_{S'}^I$, and therefore (see \stacks{023N}) descends to a submodule $M'$ of $M^I$. One sees that $M'$ is a complement of $\bigoplus_\lambda M^I(h,\lambda)$ as $\lambda$ travels over the elements of $R^\times$, as this may be checked over the faithfully flat extension $S'$. One may then check that $M'$ is independent of all choices, and satisfies the desired conditions. \end{proof} The following proposition will be helpful to define Frobenius semi-simplicity in a way that does not require the choice of an explicit arithmetic Frobenius lift. \begin{prop}\label{prop:Frob-factor} Let $\varphi\colon \mathcal{W}_{F,A}\to {^C}\!G_A$ be a morphism of group schemes over a $\mathbb{Q}$-algebra $A$. Then there is a positive integer $m$ divisible by $[F^\ast:F]$ such that the morphism $\mathcal{W}_{F,A}\to \check{G}_A$ given by $w \mapsto \check{\varphi}(w^m)$ admits a factorization \begin{equation*} \mathcal{W}_{F,A} \stackrel{d}{\longrightarrow} \underline{\mathbb{Z}}_A \stackrel{\check{\varphi}_m}{\longrightarrow} \check{G}_A \end{equation*} and $\check{\varphi}_m$ takes values in $Z_{\check{G}}(\varphi)$. \end{prop} \begin{proof} Take a finite extension $K$ of $F^\ast$ Galois over $F$ such that $\check{\varphi}|_{\mathcal{I}_{K,A}}$ is trivial.
Take a lift $w_0 \in W_F$ of arithmetic Frobenius and choose $m_0$ such that the image of $w_0^{m_0}$ in $W_F/I_K$ is central. Let $m$ be the order of $W_F/I_K \langle w_0^{m_0} \rangle$. Then for any $w \in W_F$, since $w^m$ is trivial in $W_F/I_K\langle w^{m_0}_0 \rangle$, we have that $w^m=iw^{d(w)m}_0$ for some $i \in I_K$. Hence, the images of $w^m$ and $w_0^{md(w)}$ in $W_{F^\ast}/I_K$ are the same. Since $\check{\varphi}|_{\mathcal{W}_{F^\ast,A}}$ factors through $(\mathcal{W}_{F^\ast}/\mathcal{I}_{K})_A$, we have $\check{\varphi}(w^m)=\check{\varphi}(w_0^m)^{d(w)}$ for any point $w$ of $\mathcal{W}_{F,A}$. Hence we have the factorization $\check{\varphi}_m \colon \underline{\mathbb{Z}}_A \to \check{G}_A$. The composition \begin{equation*} \mathcal{W}_{F,A} \stackrel{\varphi}{\longrightarrow} {^C}\!G \longrightarrow \check{G}_A \rtimes (\mathcal{W}_{F}/\mathcal{I}_{K})_A \end{equation*} factors through $\varphi_K \colon (\mathcal{W}_{F}/\mathcal{I}_{K})_A \to \check{G}_A \rtimes (\mathcal{W}_{F}/\mathcal{I}_{K})_A$. To show that $\check{\varphi}_m$ factors through $Z_{\check{G}}(\varphi)$, it suffices to show $\check{\varphi}(w_0^m) \in Z_{\check{G}}(\varphi_K)$. Since the image of $w_0^m$ in $W_F/I_K$ is central, we have $\varphi_K(w_0^m) \in Z_{\check{G}_A \rtimes (\mathcal{W}_{F}/\mathcal{I}_{K})_A}(\varphi_K)$. Since the image of $(1,w_0^m)$ in $\check{G}_A \rtimes (\mathcal{W}_{F}/\mathcal{I}_{K})_A$ is central, we obtain $\check{\varphi}(w_0^m) \in Z_{\check{G}_A}(\varphi_K)$. \end{proof} To define the notion of Frobenius semi-simple parameters, it is useful to have the following analogue of Lemma \ref{lem:L-group-ss}. \begin{prop}\label{prop:Frob-ss-equiv}Let $(\varphi,N)$ be an element of $\mathsf{WDP}_G(A)$. 
Then, the following are equivalent: \begin{enumerate} \item for any (equiv.\@ one) lift $w_0\in W_F$ of arithmetic Frobenius, $\ov{\varphi}(w_0)$ is semi-simple, \item for some $m$ as in Proposition \ref{prop:Frob-factor}, the morphism $\check{\varphi}_{m}$ \'etale locally factorizes through a torus of $\check{G}_A$. \end{enumerate} \end{prop} \begin{proof} By definition, (1) holds if and only if $\ov{\varphi}(w_0)$ has the property that $\ov{\varphi}(w_0)^{m}$ \'etale locally lies in a torus of $(\check{G}\rtimes \underline{\Gamma_\ast})_A^\circ=\check{G}_A$ for some $m$ as in Proposition \ref{prop:Frob-factor}. But, as an element of $\check{G}_A$, one easily sees that $\ov{\varphi}(w_0)^{m}$ is precisely $\check{\varphi}_{m}(1)$. As it is clear that (2) is equivalent to the claim that \'etale locally on $A$ there exists a torus containing $\check{\varphi}_{m}(1)$, the claim follows. \end{proof} \begin{defn}\label{defn:Frob-ss-WD-param} For a $\mathbb{Q}$-algebra $A$, we call an element $(\varphi,N)$ of $\mathsf{WDP}_G(A)$ \emph{Frobenius semi-simple} if it satisfies any of the equivalent conditions of Proposition \ref{prop:Frob-ss-equiv}. \end{defn} For each $\mathbb{Q}$-algebra $A$, let us denote by $\mathsf{WDP}^\ss_G(A)$ (resp.\@ $\mathsf{WDP}_G^{K,\ss}(A)$) the subset of $\mathsf{WDP}_G(A)$ (resp.\@ $\mathsf{WDP}_G^K(A)$) consisting of Frobenius semi-simple parameters. It is clear that this forms a $\widehat{G}$-stable subpresheaf\footnote{Note that one does not expect this presheaf to be representable, as the semi-simple elements in an algebraic group form a constructible, but not locally closed, subset.} of $\mathsf{WDP}_G$ (resp.\@ $\mathsf{WDP}_G^K$). Note also that by Proposition \ref{prop:ss-properties}, under the bijection of $\mathsf{WDP}_G(\mathbb{C})$ with $\Phi^{\mathrm{WD},\square}_G$ the set $\mathsf{WDP}_G^\ss(\mathbb{C})$ corresponds to $\Phi^{\mathrm{WD},\ss,\square}_G$. The following technical result will play an important role later in the paper.
\begin{prop}\label{prop:red-cent-ss} If $A$ is a reduced $\mathbb{Q}$-algebra and $(\varphi,N)$ is an element of $\mathsf{WDP}_G(A)$ such that $Z_{\widehat{G}}(\varphi,N)^\circ_x$ is reductive of dimension $n$ for all $x$ in $\Spec(A)$, then $(\varphi,N)$ is Frobenius semi-simple. \end{prop} \begin{proof} Define $S(N)$ to be the closed subgroup scheme of $\check{G}_A$ cut out by the closed condition $gNg^{-1} =p_{\mathbb{G}_m}(g)N$. We have the equality $Z_{\widehat{G}}(\varphi,N)=\ker(p_{\mathbb{G}_m}|_{Z_{S(N)}(\varphi)})$. Note that for all $x$ in $\Spec(A)$ one has a short exact sequence \begin{equation*} 1\to Z_{\widehat{G}}(\varphi,N)_x\to Z_{S(N)}(\varphi)_x\to \bb{G}_{m,x}\to 1, \end{equation*} and as $Z_{\widehat{G}}(\varphi,N)^\circ_x$ is assumed to be reductive of dimension $n$ for all $x$ in $\Spec(A)$, that $Z_{S(N)}(\varphi)^\circ_x$ is reductive of dimension $n+1$, and thus $Z_{S(N)}(\varphi)^\circ$ is representable and smooth over $A$, and thus reductive over $A$, by \cite[Exposé VIB, Corollaire 4.4]{SGA3-1} and \cite[Theorem 3.23]{MilneGroups}. We take $m$ as in Proposition \ref{prop:Frob-factor}. Then $\check{\varphi}_m$ factors through $Z_{S(N)}(\varphi)$. Further, it factors through $Z(Z_{S(N)}(\varphi))$, since $\varphi(w^m)$ and $(1,w^m)$ commute with $Z_{S(N)}(\varphi)$ for any point $w$ of $\mathcal{W}_{F,A}$. Then there is an $m'$ such that $\check{\varphi}_m^{m'}=\check{\varphi}_{mm'}$ factors through $Z(Z_{S(N)}(\varphi)^{\circ})^{\circ}$. As $Z_{S(N)}(\varphi)^\circ$ is reductive, $Z(Z_{S(N)}(\varphi)^\circ)^\circ$ is a torus. Hence $(\varphi,N)$ is Frobenius semi-simple. \end{proof} \subsection{\texorpdfstring{The space $\mathsf{WDP}^\sqcup_G$}{The space WDPsqcup}} In this subsection we study the moduli space of Weil--Deligne parameters $(\varphi,N)$ where $N$ lies in $\mc{N}^\sqcup$ and show that this moduli space has an exceedingly simple structure.
\begin{defn} We denote by $\mathsf{WDP}_G^{K,\sqcup}$ (resp.\@ $\mathsf{WDP}^\sqcup_{G}$) the space $\mathsf{WDP}_G^K\times_{\widehat{\mc{N}}}\widehat{\mc{N}}^\sqcup$ (resp.\@ $ \mathsf{WDP}_{G}\times_{\widehat{\mc{N}}}\widehat{\mc{N}}^\sqcup=\varinjlim_K \mathsf{WDP}_G^{K,\sqcup}$). \end{defn} Now, let us fix a finite extension $K$ of $F^\ast$ Galois over $F$ and a lift $w_0$ of arithmetic Frobenius. Then, by Proposition \ref{prop:wd-finite-level-rep} we have an identification $j_{w_0}$ of $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ with \vspace*{1 pt} \begin{equation*} \left\{(\gamma,\phi,N)\in \check{G}(\ov{\mathbb{Q}})\times \underline{Z}^1(I_F/I_K,\widehat{G})(\ov{\mathbb{Q}})\times \widehat{\mc{N}}(\ov{\mathbb{Q}}):\begin{aligned}(1)&\quad \Int(\gamma,w_0)\circ \phi=\phi\circ\Int(w_0),\\ (2)&\quad p_{\mathbb{G}_m}(\gamma)=q,\\ (3)&\quad \mathrm{Ad}(\phi(i))(N)=N\text{ for all }i\in I_F/I_K,\\ (4)&\quad \Ad(\gamma,w_0)(N)=qN\end{aligned}\right\}. \end{equation*} \vspace*{1 pt} Now, for $(\gamma,\phi,N)$ in $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ let us define $Z_{\phi,N}\vcentcolon= Z_{\widehat{G}}(\phi,N)$. \begin{defn} An element $(\gamma',\phi',N')$ in $\mathsf{WDP}_G^K(A)$, for a $\ov{\mathbb{Q}}$-algebra $A$, is \emph{locally movable to $(\gamma,\phi,N)$} if there exists an \'etale cover $\Spec(A')\to\Spec(A)$ and $(g,h)\in(\widehat{G}\times Z_{\phi,N}^\circ)(A')$ such that $(\gamma',\phi',N')=g(h\gamma,\phi,N)g^{-1}$. \end{defn} As this definition is clearly functorial, we observe that we may define a subpresheaf $U(\gamma,\phi,N)$ of $\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$ whose $A$-points are given by \begin{equation*} U(\gamma,\phi,N)(A)\vcentcolon= \left\{(\gamma',\phi',N')\in\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}(A): (\gamma',\phi',N')\text{ is locally movable to }(\gamma,\phi,N)\right\}. \end{equation*} We then have the following. 
\begin{prop}\label{prop:wd-loc-mov-is-open} The morphism of presheaves $U(\gamma,\phi,N)\to \mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$ is representable by an open immersion. Moreover, the $\ov{\mathbb{Q}}$-scheme $U(\gamma,\phi,N)$ is smooth and irreducible. \end{prop} Before we prove this proposition, we observe its major consequence. To this end, let us define an equivalence relation on $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ by declaring that $(\gamma,\phi,N)$ is equivalent to $(\gamma',\phi',N')$ if there exists some $(g,h)\in (\widehat{G}\times Z_{\phi,N})(\ov{\mathbb{Q}})$ such that $(\gamma',\phi',N')$ is equal to $g(h\gamma,\phi,N)g^{-1}$. Let us denote an equivalence class under this relation by $[(\gamma,\phi,N)]$. Observe that, as we do not require $h$ to lie in $Z^\circ_{\phi,N}(\ov{\mathbb{Q}})$, the class $[(\gamma,\phi,N)]$ may differ from $U(\gamma,\phi,N)(\ov{\mathbb{Q}})$. For each such equivalence class, let us choose an element $(\gamma,\phi,N)$. We consider $\pi_0(Z_{\phi,N})$ as a finite abstract group, and we define an equivalence relation on it by declaring that $c$ is equivalent to $c_1 c \gamma c_1^{-1} \gamma^{-1}$ for any $c_1$ in $\pi_0(Z_{\phi,N})$. We denote by $[c]$ an equivalence class for this relation. \begin{rem} The group $\langle \gamma \rangle$ acts on $\pi_0(Z_{\phi,N})$ by $\gamma \cdot c = \gamma c \gamma^{-1}$. Note that $\langle \gamma \rangle \cong \mathbb{Z}$ since $p_{\mathbb{G}_m}(\gamma)=q$. Hence, the map $z \mapsto z(\gamma)$ for $z \in Z^1(\langle \gamma \rangle, \pi_0(Z_{\phi,N}))$ induces a bijection between $H^1( \langle \gamma \rangle, \pi_0(Z_{\phi,N}))$ and equivalence classes in $\pi_0(Z_{\phi, N})$. \end{rem} We then have the following decomposition of $\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$ into explicit connected components. 
\begin{thm}\label{thm:WD-const-decomp} The choice of $(\gamma,\phi,N)$ in each class $[(\gamma,\phi,N)]$ of $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ gives a scheme-theoretic decomposition \begin{equation*} \mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}=\bigsqcup_{[(\gamma,\phi,N)]}\,\bigsqcup_{[c]}\,\,U(c\gamma,\phi,N). \end{equation*} \end{thm} \begin{proof} From Proposition \ref{prop:wd-loc-mov-is-open} we know that each $U(c\gamma,\phi,N)$ is an open subset of $\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$. As $\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$ is a finite type $\ov{\mathbb{Q}}$-scheme, it thus suffices to prove this claim at the level of $\ov{\mathbb{Q}}$-points. But, note that by Proposition \ref{prop:wd-finite-level-rep}, if $(\gamma,\phi,N)$ satisfies the conditions to be in $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ then $(\gamma',\phi,N)$ does if and only if $\gamma'=h\gamma$ for $h$ in $Z_{\phi,N}(\ov{\mathbb{Q}})$. Thus, we have a decomposition \begin{equation*} \mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}=\bigsqcup_{[(\gamma,\phi,N)]} \,\bigcup_{c\in \pi_0(Z_{\phi,N})}U(c\gamma,\phi,N). \end{equation*} Next observe that an element $(h\gamma,\phi,N)$ may be written in the form $g(h'\gamma,\phi,N)g^{-1}$ if and only if $g$ is in $Z_{\phi,N}(\ov{\mathbb{Q}})$ and $h\gamma=gh'\gamma g^{-1}$, which implies that $h=gh'\gamma g^{-1}\gamma^{-1}$. With this, it is easy to see that \begin{equation*} \bigcup_{c\in \pi_0(Z_{\phi,N})}U(c\gamma,\phi,N)=\bigsqcup_{[c]}\,\,U(c\gamma,\phi,N) \end{equation*} from which the desired equality follows. \end{proof} From this we deduce the following non-trivial result. Let us denote the set of equivalence classes for $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ (resp.\@ $\pi_0(Z_{\phi,N})$) by $[\mathsf{WDP}_G^K(\ov{\mathbb{Q}})]$ (resp.\@ $[\pi_0(Z_{\phi,N})]$). 
\begin{cor}\label{cor:WDP-pi0} The $\mathbb{Q}$-scheme $\mathsf{WDP}_G^{K,\sqcup}$ is smooth, and there is a non-canonical $\Gamma_\mathbb{Q}$-equivariant bijection \begin{equation*} \pi_0\left(\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}\right)\isomto \left\{([(\gamma,\phi,N)],[c]):\begin{aligned}(1)&\quad [(\gamma,\phi,N)]\in [\mathsf{WDP}_G^K(\ov{\mathbb{Q}})]\\ (2)&\quad [c]\in [\pi_0(Z_{\phi,N})] \end{aligned}\right\} \end{equation*} where the $\Gamma_\mathbb{Q}$ action on the target is inherited from $\mathsf{WDP}_G^{K,\sqcup}$ and $\widehat{G}$. \end{cor} \medskip \paragraph{The proof of Proposition \ref{prop:wd-loc-mov-is-open}} Define the morphism $\pi_K\colon \mathsf{WDP}_G^{K,\sqcup}\to \underline{Z}^1(I_F/I_K,\widehat{G})\times\widehat{\mc{N}}^\sqcup$ by $\pi_K(\varphi,N)= (\phi,N)$. This morphism is $\widehat{G}$-equivariant when the target is endowed with the diagonal $\widehat{G}$-action. Now, by Proposition \ref{prop:cocycle-scheme} there is a decomposition \begin{equation*} \underline{Z}^1(I_F/I_K,\widehat{G})_{\ov{\mathbb{Q}}}\times\widehat{\mc{N}}^\sqcup_{\ov{\mathbb{Q}}}=\bigsqcup_{[(\phi_0,N_0)]\in\mc{J}}\mc{O}_{\phi_0}\times \mc{O}_{N_0} \end{equation*} where $\mc{J}$ is the set of $\widehat{G}(\ov{\mathbb{Q}})^2$-orbits of $(\underline{Z}^1(I_F/I_K,\widehat{G})\times\widehat{\mc{N}}^\sqcup)(\ov{\mathbb{Q}})$. Observe though that if $(\varphi,N)$ is in $\mathsf{WDP}_G^{K,\sqcup}(\ov{\mathbb{Q}})$ with $\pi_K(\varphi,N)=(\phi,N)$ then $\phi$ centralizes $N$. So, if we set $\mc{J}'$ to be the subset of $\mc{J}$ consisting of those $[(\phi_0,N_0)]$ with $\phi_0$ centralizing $N_0$ then we may produce a factorization \begin{equation*} \pi_K\colon \mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}\longrightarrow \bigsqcup_{[(\phi_0,N_0)]\in\mc{J}'}\mc{O}_{\phi_0}\times \mc{O}_{N_0} \end{equation*} which is $\widehat{G}$-equivariant. 
For each $[(\phi_0,N_0)]$ in $\mc{J}'$ let us set $X(\phi_0,N_0)\vcentcolon= \pi_K^{-1}(\mathcal{O}_{\phi_0}\times \mathcal{O}_{N_0})$, which is a clopen subset of $\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$. Set $L\vcentcolon= Z_{\widehat{G}}(\phi)$ which, by Lemma \ref{lem:fixed-points-reductive}, is a closed subgroup scheme of $\widehat{G}_{\ov{\mathbb{Q}}}$ with reductive identity component. Let $\mf{l}$ be the Lie algebra of $L$. Define $\mc{O}_N\cap\mc{N}_{L}\vcentcolon= \mc{O}_N\times_{\widehat{\mc{N}}}\mc{N}_L$. For each $M$ in $(\mathcal{O}_N\cap \mc{N}_L)(\ov{\mathbb{Q}})$ we denote by $\mc{O}_{L,M}$ the locally closed $L$-orbit subscheme of $(\mc{O}_N\cap\mc{N}_L)_\mathrm{red}$. \begin{lem}\label{lem:intersection-decomp} There exists a finite set $\{N=N_1,N_2,\ldots,N_m\}$ in $(\mc{O}_N\cap \mc{N}_L)(\ov{\mathbb{Q}})$ such that one has an equality of schemes $\mathcal{O}_N\cap \mathcal{N}_L=\bigsqcup_i \mathcal{O}_{L,N_i}$. In particular, $\mc{O}_N\cap\mc{N}_L$ is reduced. \end{lem} \begin{proof} We first show that the claimed decomposition holds for $(\mc{O}_N\cap\mc{N}_L)_\mathrm{red}$. Now, there are only finitely many $L(\ov{\mathbb{Q}})$-orbits in $(\mc{O}_N\cap\mc{N}_L)(\ov{\mathbb{Q}})$ as there are only finitely many $L^\circ$-orbits in $\mathcal{N}_L(\ov{\mathbb{Q}})$. Let $N=N_1,\ldots,N_m$ represent these orbits. By Proposition \ref{prop:sqcup-omnibus} it suffices to show that each $\mc{O}_{L,N_i}$ is open or, as they form a set-theoretic partition of $(\mc{O}_N\cap \mc{N}_L)_\mathrm{red}$, that each is closed. Then, by the Noetherian valuative criterion for properness (see \stacks{0208}) it suffices to show that if $R$ is a discrete valuation ring and $f\colon \Spec(R)\to (\mathcal{O}_N\cap\mc{N}_L)_\mathrm{red}$ is a morphism with $f(\eta)\in \mc{O}_{L,N_i}$ then $f(\Spec(R))\subseteq \mc{O}_{L,N_i}$. 
Assume not, and let $f\colon\Spec(R)\to (\mc{O}_N\cap\mc{N}_L)_\mathrm{red}$ be a morphism such that $f(\eta)\in \mathcal{O}_{L,N_i}(k(\eta))$ and $f(s)\in\mathcal{O}_{L,N_j}(k(s))$ with $i\ne j$. Note that $f$ corresponds to an element $\mb{N}$ in $\mc{N}_L(R)$ which, as an element of $\widehat{\mc{N}}(R)$, lies in $\mc{O}_N(R)$. Let us consider $Z_{L}(\mb{N})$. On the one hand, $Z_{L}(\mb{N})$ cannot be flat over $R$. Indeed, its generic fiber (resp.\@ special fiber) is a twisted form of $Z_L(N_i)$ (resp.\@ $Z_L(N_j)$), which has dimension $\dim(L)-\dim(\mc{O}_{L,N_i})$ (resp.\@ $\dim(L)-\dim(\mc{O}_{L,N_j})$). Moreover, as $f(s)$ lies in $\ov{\mathcal{O}_{L,N_i}}$, whose $\ov{\mathbb{Q}}$-points are unions of $\ov{\mathbb{Q}}$-points of orbits of smaller dimension (cf.\@ \cite[Proposition 1.66]{MilneGroups}), $\dim(\mc{O}_{L,N_j})$ is strictly less than $\dim(\mc{O}_{L,N_i})$, and thus the fibers of $Z_{L}(\mb{N})$ have different dimensions, so it cannot be flat over $R$ (see \cite[Corollary 14.95]{GortzWedhorn}). On the other hand, $Z_{\widehat{G}}(\mb{N})$ is flat, as it is \'etale locally isomorphic to the base change $Z_{\widehat{G}}(N)_R$. But, by Lemma \ref{lem:fixed-points-reductive} this implies that $Z_{\widehat{G}}(\mb{N})^{\phi(I_F)}=Z_{L}(\mb{N})$ is flat, which is a contradiction. As $(\mc{O}_N\cap \mc{N}_L)_\mathrm{red}\to \mc{O}_N\cap\mc{N}_L$ is a homeomorphism, there is a scheme-theoretic decomposition $\mc{O}_N\cap\mc{N}_L=\bigsqcup_i U_i$ where $U_i$ is the open subscheme of $\mc{O}_N\cap\mc{N}_L$ with underlying space $\mc{O}_{L,N_i}$. As these schemes are Noetherian, to finish it suffices to show that for all $i$ and all Noetherian $\ov{\mathbb{Q}}$-algebras $A$ every morphism $\Spec(A)\to U_i$ factorizes through $\mc{O}_{L,N_i}$. As $\mc{O}_N=\mc{O}_{N_i}$ we may assume without loss of generality that $i=1$, and so $N_i=N$. Let $\mb{N}$ be the element of $\mf{l}_A$ corresponding to $\Spec(A)\to U_i$. 
We must then show that \'etale locally on $A$, $\mb{N}$ is conjugate to $N$. Let $I$ denote the nilradical of $A$, and write $A_0=A/I$. As $A$ is Noetherian, $I^m=(0)$ for some $m$, and thus by induction we may assume that $I^2=(0)$. Now, as $A_0$ is reduced the map $\Spec(A_0)\to U_i$ factorizes through $\mc{O}_{L,N}$ and thus $\mb{N}_{A_0}$ is \'etale locally conjugate to $N$. As the \'etale covers of $A$ and $A_0$ are equivalent (see \stacks{04DY}), and we are free to work \'etale locally on $A$, we may assume without loss of generality that $\Ad(l_0)(\mb{N}_{A_0})=N$ for some $l_0$ in $L(A_0)$. As $L$ is smooth, we may apply the infinitesimal lifting criterion to find a lift $l$ in $L(A)$ of $l_0$. Replacing $\mb{N}$ by $\Ad(l)(\mb{N})$ we may assume without loss of generality that $\mb{N}_{A_0}=N$. Now, as $\underline{\mathrm{Transp}}_{\widehat{G}}(\mb{N},N)\to \Spec(A)$ is a $Z_{\widehat{G}}(N)$-torsor, and thus smooth, we know by the infinitesimal lifting criterion that there exists some $g$ in $\underline{\mathrm{Transp}}_{\widehat{G}}(\mb{N},N)(A)$ lifting the identity. Using the notation of \cite[II, \S4, \textnumero 3, 3.7]{DemazureGabriel}, we may write $g=e^x$ for $x$ in $I\widehat{\mf{g}}_A$. Then, by \cite[II, \S4, \textnumero 4, 4.2]{DemazureGabriel} we have \begin{equation*} N=\Ad(g)(\mb{N})=\mb{N}+\ad(x)(\mb{N}). \end{equation*} As $N$ and $\mb{N}$ lie in $\mf{l}_A$, they are invariant for the action of the finite group $\phi(I_F/I_K)$, and so if $y$ denotes the average of $x$ over the action of $\phi(I_F/I_K)$ then \begin{equation*} N=\mb{N}+\ad(y)(\mb{N}). \end{equation*} But, by loc.\@ cit.\@ this right-hand side is equal to $\Ad(e^y)(\mb{N})$. By Lemma \ref{lem:fixed-points-reductive} we see that $e^y$ lies in $L(A)$, from which the claim follows.\end{proof} Let us now denote by $(\gamma^\mathrm{univ},\phi^\mathrm{univ},N^\mathrm{univ})$ the universal object over $X(\phi,N)$. 
Consider the transporter scheme $ \underline{\mathrm{Transp}}_{\widehat{G}}(\phi^\mathrm{univ},\phi)\to \underline{Z}^1(I_F/I_K,\widehat{G})$ and set $T$ to be the pullback to $X(\phi,N)$. Set $b\colon T\to X(\phi,N)$ to be the tautological map, which is smooth as $T$ is visibly an $L$-torsor. Note that we have a morphism $a\colon T\to \mathcal{O}_N\cap \mathcal{N}_L$ given by $a(g)= \mathrm{Ad}(g)(N^\mathrm{univ})$ and observe then that we have a scheme-theoretic decomposition $T=\bigsqcup_i a^{-1}(\mathcal{O}_{L,N_i})$. But, for each $i$ we also have a map $\kappa_i\colon a^{-1}(\mathcal{O}_{L,N_i})\to \pi_0(Z_{\widehat{G}}(\phi,N))$ given by sending $g$ to the component containing $\mathrm{Int}(g)(\gamma^\mathrm{univ})\gamma^{-1}$, and we define for each $i$ and each $c\in \pi_0(Z_{\widehat{G}}(\phi,N))$ the open subscheme $U_{i,c}\vcentcolon=\kappa_i^{-1}(c)$ of $a^{-1}(\mathcal{O}_{L,N_i})$. We then obtain a decomposition $T=\bigsqcup_{i,c}U_{i,c}$. As $b\colon T\to X(\phi,N)$ is smooth, we see that $b(U_{1,\mathrm{id}})$ is an open subset of $X(\phi,N)$ whose $A$-points are precisely (by \cite[Corollaire 17.16.3.(ii)]{EGA4-4}) the set of $A$-points $(\gamma',\phi',N')$ of $X(\phi,N)$ which are \'etale locally in the image of $b$. It is simple to see that this implies that $U(\gamma,\phi,N)=b(U_{1,\mathrm{id}})$, which implies that $U(\gamma,\phi,N)\to \mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$ is representable by an open immersion. Finally, to show that $U(\gamma,\phi,N)$ is smooth and irreducible consider the natural morphism $\widehat{G}\times Z_{\phi,N}^\circ\to U(\gamma,\phi,N)$. To simplify notation let us write $S=\widehat{G}\times Z_{\phi,N}^\circ$. Note that, by definition, $S\to U(\gamma,\phi,N)$ is surjective as a map of \'etale sheaves and thus a fortiori surjective on underlying topological spaces; as $S$ is irreducible, $U(\gamma,\phi,N)$ is irreducible. 
To see that $U(\gamma,\phi,N)$ is smooth, note that as $S\to U(\gamma,\phi,N)$ is surjective as \'etale sheaves there exists an \'etale cover $V\to U(\gamma,\phi,N)$ such that $p\colon S_V\to V$ admits a section. Note though that as $S_V\to S$ is \'etale and the target is reduced, so is the source (see \stacks{025O}). But, as $p$ has a section, this implies that $V$ is reduced as the morphism of sheaves of rings $\mc{O}_V\to p_\ast \mc{O}_{S_V}$ has a section and thus is injective. This implies that $U(\gamma,\phi,N)$ is reduced by \stacks{033F}. But, as we are in characteristic $0$, this implies that $U(\gamma,\phi,N)$ is generically smooth over $\ov{\mathbb{Q}}$ (see \stacks{056V}). But, as $S(\ov{\mathbb{Q}})$ acts on $U(\gamma,\phi,N)$ by scheme automorphisms acting transitively on $U(\gamma,\phi,N)(\ov{\mathbb{Q}})$ we deduce that every point of $U(\gamma,\phi,N)(\ov{\mathbb{Q}})$ has a regular local ring, and thus $U(\gamma,\phi,N)$ is smooth over $\ov{\mathbb{Q}}$ as desired (see \stacks{0B8X}). This completes the proof of Proposition \ref{prop:wd-loc-mov-is-open}. \section{The moduli space of $L$-parameters and the Jacobson--Morozov morphism}\label{s:L-param} In this section we define the moduli space $\mathsf{LP}_G^K$ of $L$-parameters for $G$, show it has favorable geometric properties, construct the Jacobson--Morozov morphism $\mathsf{LP}_G^K\to \mathsf{WDP}_G^{K,\sqcup}$, and show that an analogue of Theorem \ref{thm:JM-params-classical} holds for any $\mathbb{Q}$-algebra $A$. \subsection{The moduli space of $L$-parameters}\label{ss:L-param-def} We begin with a slight modification of the Langlands group scheme $\mc{W}_F\times \SL_{2,\mathbb{Q}}$ better suited to arithmetic discussions over $\mathbb{Q}$. 
\begin{defn}We call the $\mathbb{Q}$-scheme representing the functor \begin{equation*} \cat{Alg}_\mathbb{Q}\to\cat{Grp},\qquad A\mapsto \left\{(w,g)\in \mc{W}_F(A)\times \GL_2(A) : \|w\|=\det(g)\right\} \end{equation*} the \emph{twisted Langlands group scheme} and denote it $\mc{L}^\mathrm{tw}_F$. \end{defn} To justify the naming of $\mc{L}^\mathrm{tw}_F$, note that if $k$ is any extension of $\mathbb{Q}$ and $c$ is any element of $k$ such that $c^2=q$, then the morphism \begin{equation*} \eta_c\colon \mc{W}_{F,k}\times \SL_{2,k}\to \mc{L}^\mathrm{tw}_{F,k},\qquad (w,g)\mapsto \left(w,g\left(\begin{smallmatrix}c^{-d(w)} & 0 \\ 0 & c^{-d(w)}\end{smallmatrix}\right)\right), \end{equation*} is an isomorphism. For future reference, we observe that we have a morphism \begin{equation*} p_{\mathrm{tw}} \colon \mc{L}^\mathrm{tw}_F \to \mathbb{G}_{m,\mathbb{Q}}\times \mathcal{W}_F , \qquad (w,g) \mapsto (\|w\|,w) . \end{equation*} Let us also observe that there is a natural embedding of group schemes $\SL_{2,\mathbb{Q}}\to \mc{L}^\mathrm{tw}_F$ given by sending $g$ to $(1,g)$, as well as an embedding \begin{equation*} \iota\colon \mc{W}_F\to \mc{L}^\mathrm{tw}_F,\qquad w\mapsto\left(w,\left(\begin{smallmatrix}\|w\| & 0\\ 0 & 1\end{smallmatrix}\right)\right). \end{equation*} With these embeddings, we shall implicitly think of $\SL_{2,\mathbb{Q}}$ and $\mc{W}_F$ as subfunctors of $\mc{L}^\mathrm{tw}_F$. Finally, we observe that the embedding of $\mc{W}_{K}$ into $\mc{W}_F$ for any finite extension $K$ of $F$ gives rise to an embedding $\mc{L}^\mathrm{tw}_{K}\to \mc{L}^\mathrm{tw}_F$, which we implicitly use to think of $\mc{L}^\mathrm{tw}_{K}$ as a subgroup scheme of $\mc{L}^\mathrm{tw}_F$. \begin{defn} For a $\mathbb{Q}$-algebra $A$ we define an \emph{$L$-parameter over $A$} to be a homomorphism of group $A$-schemes $\psi\colon \mc{L}^\mathrm{tw}_{F,A}\to {^C}\!G_A$ such that $p_C \circ \psi=p_{\mathrm{tw}}$. 
\end{defn} Denote by $\mathsf{LP}_G(A)$ the set of $L$-parameters over $A$, which is functorial in $A$. Note that $\mathsf{LP}_G$ has a natural conjugation action by $\widehat{G}$ and so one has the centralizer group presheaf $Z_{\widehat{G}}(\psi)$. For an $L$-parameter $\psi$ over $A$ we define the morphism $\check{\psi}\colon \mc{L}^\mathrm{tw}_{F,A}\to \check{G}_A$ as the composition of $\psi$ with the projection ${^C}\!G_A\to \check{G}_A$. We denote by $\ov{\psi}$ the homomorphism of group $A$-schemes $\mc{L}^\mathrm{tw}_{F,A}\to (\widehat{G}\rtimes\underline{\Gamma_\ast})_A$ obtained by composing $\psi$ with the quotient homomorphism $\check{G}_A\to (\widehat{G}\rtimes\underline{\Gamma_\ast})_A$. Let us observe that while $\check{\psi}$ may not be a homomorphism, it becomes so after restriction to $\mc{L}^\mathrm{tw}_{F^\ast,A}$. Finally, by our assumptions on $\psi$ the restriction to $\SL_{2,A}$ takes values in $\widehat{G}_A$ and we denote this resulting morphism $\SL_{2,A}\to\widehat{G}_A$ by $\theta$ (or $\theta_\psi$ when we want to emphasize $\psi$). To relate this to more familiar objects, fix $k$ to be an extension of $\mathbb{Q}$ containing an element $c$ such that $c^2=q$. For a $k$-algebra $A$, we endow $\widehat{G}(A)$ with the discrete topology and set \begin{equation*} \mathsf{LP}'_{G,k}(A)\vcentcolon= \left\{W_F\times \SL_2(A) \xrightarrow{\psi} \widehat{G}(A) \rtimes W_F : \begin{aligned}(1)&\,\, \psi\text{ is a homomorphism over $W_F$},\\ (2)&\,\, W_F \stackrel{\psi|_{W_F}}{\to} \widehat{G}(A) \rtimes W_F \to \widehat{G}(A) \textrm{ is continuous,}\\ (3)&\,\, \psi|_{\SL_2(A)}\colon \SL_2(A) \to \widehat{G}(A) \text{ is algebraic}\end{aligned}\right\}. 
\end{equation*} There is a morphism $i_c^{\mathrm{L}} \colon \mathsf{LP}'_{G,k}\to \mathsf{LP}_{G,k}$ which on $A$-points is given by sending $\psi'$ to the element $\psi$ of $\mathsf{LP}_{G,k}(A)$ that is equal to $i_c \circ \psi' \circ \eta_c^{-1}$ on $A$-points, where $\psi$ is uniquely determined by Proposition \ref{prop:hom-schem-omnibus}. We can show the following proposition in the same way as Proposition \ref{prop:WD-C-L-comparison}. \begin{prop}\label{L-C-L-comparison} The morphism $i_c^{\mathrm{L}} \colon \mathsf{LP}'_{G,k}\to \mathsf{LP}_{G,k}$ is an isomorphism. \end{prop} For a finite extension $K$ of $F^\ast$ Galois over $F$ define \begin{equation*} \mathsf{LP}_G^K(A)\vcentcolon= \left\{\psi\in\mathsf{LP}_G(A): \mc{I}_K\subseteq \ker\left(\check{\psi}|_{\mc{L}^\mathrm{tw}_{F^\ast,A}}\right)\right\}, \end{equation*} which clearly forms a subpresheaf of $\mathsf{LP}_G$. We have the equality of presheaves $\mathsf{LP}_G=\varinjlim_K \mathsf{LP}_G^K$. As in the case of Weil--Deligne parameters, we may associate to an $L$-parameter $\psi$ in $\mathsf{LP}_G^K(A)$ an element $\phi$ of $\underline{Z}^1(I_F/I_K,\widehat{G})(A)$ and thus obtain a morphism of presheaves $\mathsf{LP}_G^K\to \underline{Z}^1(I_F/I_K,\widehat{G})$. Fix a lift $w_0$ of arithmetic Frobenius in $W_F$ and define a morphism of presheaves \begin{equation*} j_{w_0}\colon \mathsf{LP}_G^K\to \check{G}\times \underline{Z}^1(I_F/I_K,\widehat{G})\times \underline{\Hom}(\SL_{2,\mathbb{Q}},\widehat{G}),\qquad \psi\mapsto \left(\check{\psi}\left(w_0,\left(\begin{smallmatrix}q & 0\\ 0 & 1\end{smallmatrix}\right)\right),\phi,\theta\right). 
\end{equation*} On the other hand, we have a diagram \begin{equation*} \mc{D}^L\colon \xymatrix{\check{G}\times \underline{Z}^1(I_F/I_K,\widehat{G})\times \underline{\Hom}(\SL_{2,\mathbb{Q}},\widehat{G}) \ar@<-.5ex>[r] \ar@<.5ex>[r]& \underline{\Hom}(I_F/I_K,\widehat{G})\times \bb{G}_{m,\mathbb{Q}}\times \underline{\Hom}(\SL_{2,\mathbb{Q}},\widehat{G})^{[I_F:I_K]+1}} \end{equation*} given by the two maps \begin{equation*} \begin{aligned} (g,f,\nu)\mapsto & \bigg(\Int(g,w_0)\circ f,p_{\mathbb{G}_m}(g), (\mathrm{Int}(f(i))\circ \nu)_{i\in I_F/I_K},\Int(g,w_0)\circ \nu\bigg)\\ (g,f,\nu) \mapsto &\bigg(f\circ \Int(w_0),q,(\nu)_{i\in I_F/I_K},\nu\circ \Int\left(\left( w_0,\left(\begin{smallmatrix}q & 0\\ 0 & 1\end{smallmatrix}\right)\right)\right)\bigg).\end{aligned} \end{equation*} We then have the following explicit description of $\mathsf{LP}_G^K$. \begin{prop}\label{prop:L-finite-level-rep} The morphism $j_{w_0}$ gives an identification of $\mathsf{LP}_G^K$ with $\mathrm{Eq}(\mc{D}^L)$. In particular, $\mathsf{LP}_G^K$ is representable by a finite type affine $\mathbb{Q}$-scheme and $j_{w_0}$ is a closed embedding. \end{prop} As already observed, for an extension $K\subseteq K'$ of finite extensions of $F^\ast$ Galois over $F$, there is a restriction morphism $\underline{Z}^1(I_F/I_{K'},\widehat{G})\to \underline{Z}^1(I_K/I_{K'},\widehat{G})$ which is a clopen embedding, and thus $\mathsf{LP}_G^K\to\mathsf{LP}_{G}^{K'}$ is also a clopen embedding. As we have the identification of presheaves $\mathsf{LP}_G=\varinjlim_K \mathsf{LP}_G^K$ we deduce from Proposition \ref{prop:L-finite-level-rep} that $\mathsf{LP}_G$ is representable by a scheme locally of finite type over $\mathbb{Q}$, all of whose connected components are affine. \subsection{Decomposition into connected components}\label{ss:L-param-conn-comp-decomp} We now establish the analogue of Theorem \ref{thm:WD-const-decomp} for $\mathsf{LP}_G$. 
Let us fix $K$ a finite extension of $F^\ast$ Galois over $F$, and a lift $w_0$ of arithmetic Frobenius. Then, by Proposition \ref{prop:L-finite-level-rep} we have an identification $j_{w_0}$ of $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ with \begin{equation*} \left\{(\gamma,\phi,\theta)\in \check{G}(\ov{\mathbb{Q}})\times \underline{Z}^1(I_F/I_K,\widehat{G})(\ov{\mathbb{Q}})\times \underline{\Hom}(\SL_{2,\mathbb{Q}},\widehat{G})(\ov{\mathbb{Q}}):\begin{aligned}(1)&\quad \Int(\gamma,w_0)\circ \phi=\phi\circ\Int(w_0),\\ (2)&\quad p_{\mathbb{G}_m}(\gamma)=q,\\ (3)&\quad \Int(\phi(i))\circ \theta=\theta\text{ for all }i\in I_F/I_K,\\ (4)&\quad \Int((\gamma,w_0))\circ \theta = \theta\circ \Int\left(\left( w_0, \left(\begin{smallmatrix}q & 0\\ 0 & 1\end{smallmatrix}\right)\right)\right) \end{aligned}\right\}. \end{equation*} Now, for $(\gamma,\phi,\theta)$ in $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ let us define $Z_{\phi,\theta}$ to be $Z_{\widehat{G}}(\phi,\theta)$. This is a linear algebraic group over $\ov{\mathbb{Q}}$ whose identity component is reductive. Let us then say that an element $(\gamma',\phi',\theta')$ in $\mathsf{LP}_G^K(A)$, for a $\ov{\mathbb{Q}}$-algebra $A$, is \emph{locally movable to $(\gamma,\phi,\theta)$} if there exists an \'etale cover $\Spec(A')\to\Spec(A)$ and $(g,h)\in(\widehat{G}\times Z_{\phi,\theta}^\circ)(A')$ such that $(\gamma',\phi',\theta')=g(h\gamma,\phi,\theta)g^{-1}$. As this definition is clearly functorial, we obtain a subpresheaf of $\mathsf{LP}_{G,\ov{\mathbb{Q}}}^K$ as follows: \begin{equation*} U(\gamma,\phi,\theta)(A)\vcentcolon= \left\{(\gamma',\phi',\theta')\in\mathsf{LP}_{G,\ov{\mathbb{Q}}}^K(A): (\gamma',\phi',\theta')\text{ is locally movable to }(\gamma,\phi,\theta)\right\}. \end{equation*} We then have the following, whose proof is identical to that of Proposition \ref{prop:wd-loc-mov-is-open}, except that the analogue of Lemma \ref{lem:intersection-decomp} is simpler in view of Proposition \ref{prop:SL2-Hom-open-orbits}. 
\begin{prop}\label{prop:l-loc-mov-is-open} The morphism of presheaves $U(\gamma,\phi,\theta)\to \mathsf{LP}_{G,\ov{\mathbb{Q}}}^K$ is representable by an open immersion. Moreover, the $\ov{\mathbb{Q}}$-scheme $U(\gamma,\phi,\theta)$ is smooth and irreducible. \end{prop} Define an equivalence relation on $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ by declaring that $(\gamma,\phi,\theta)$ is equivalent to $(\gamma',\phi',\theta')$ if there exists some $(g,h)\in (\widehat{G}\times Z_{\phi,\theta})(\ov{\mathbb{Q}})$ such that $(\gamma',\phi',\theta')=g(h\gamma,\phi,\theta)g^{-1}$. Let us denote an equivalence class under this relation by $[(\gamma,\phi,\theta)]$. Observe that here we do not require $h$ to lie in $Z^\circ_{\phi,\theta}(\ov{\mathbb{Q}})$, so that these equivalence classes differ from $U(\gamma,\phi,\theta)(\ov{\mathbb{Q}})$. For each such equivalence class, let us choose an element $(\gamma,\phi,\theta)$. We consider $\pi_0(Z_{\phi,\theta})$ as a finite abstract group, and we define an equivalence relation on it by declaring that $c$ is equivalent to $c_1 c \gamma c_1^{-1} \gamma^{-1}$ for any $c_1$ in $\pi_0(Z_{\phi,\theta})$. We denote by $[c]$ an equivalence class for this relation. We then have the following decomposition of $\mathsf{LP}_{G,\ov{\mathbb{Q}}}^K$ into explicit connected components, whose proof is exactly the same as that of Theorem \ref{thm:WD-const-decomp}. \begin{thm}\label{thm:L-const-decomp} The choice of $(\gamma,\phi,\theta)$ in each class $[(\gamma,\phi,\theta)]$ of $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ gives an identification \begin{equation*} \mathsf{LP}_{G,\ov{\mathbb{Q}}}^K=\bigsqcup_{[(\gamma,\phi,\theta)]}\bigsqcup_{[c]}\,\,U(c\gamma,\phi,\theta). \end{equation*} \end{thm} We derive from this two corollaries neither of which is a priori obvious. \begin{cor} For all $(\gamma,\phi,\theta)$ in $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ the $\ov{\mathbb{Q}}$-scheme $U(\gamma,\phi,\theta)$ is affine. 
\end{cor} Denote the set of equivalence classes for $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ (resp.\@ $\pi_0(Z_{\phi,\theta})$) by $[\mathsf{LP}_G^K(\ov{\mathbb{Q}})]$ (resp.\@ $[\pi_0(Z_{\phi,\theta})]$). \begin{cor}\label{cor:LP-pi0} The affine $\mathbb{Q}$-scheme $\mathsf{LP}_G^K$ is smooth, and there is a non-canonical $\Gamma_\mathbb{Q}$-equivariant bijection \begin{equation*} \pi_0\left(\mathsf{LP}_{G,\ov{\mathbb{Q}}}^K\right)\isomto \left\{([(\gamma,\phi,\theta)],[c]):\begin{aligned}(1)&\quad [(\gamma,\phi,\theta)]\in \left[\mathsf{LP}_G^K(\ov{\mathbb{Q}})\right]\\ (2)&\quad [c]\in [\pi_0(Z_{\phi,\theta})] \end{aligned}\right\} \end{equation*} where the $\Gamma_\mathbb{Q}$ action on the target is inherited from $\mathsf{LP}_G^K$ and $\widehat{G}$. \end{cor} \subsection{The Jacobson--Morozov morphism}\label{ss:JM-mor} We now come to the definition of the Jacobson--Morozov map in the geometric setting. \begin{defn} The morphism $\mathsf{JM}\colon \mathsf{LP}_G \to \mathsf{WDP}_G$ given by sending $\psi$ to $(\psi\circ \iota,d\theta_\psi(e_0))$ is called the \emph{Jacobson--Morozov morphism}. \end{defn} It is clear that $\mathsf{JM}$ is $\widehat{G}$-equivariant. By Theorem \ref{thm:geom-JM-split} it is also clear that $\mathsf{JM}$ factorizes uniquely through $\mathsf{WDP}^\sqcup_G$. Moreover, for any finite extension $K$ of $F^\ast$ Galois over $F$, one sees that $\mathsf{JM}^{-1}(\mathsf{WDP}_G^K)$ is precisely $\mathsf{LP}_G^K$ and so we get factorizations $\mathsf{LP}_G^K\to \mathsf{WDP}_G^K$ and $\mathsf{LP}_G^K\to\mathsf{WDP}_G^{K,\sqcup}$. We denote all these factorizations also by $\mathsf{JM}$. Observe that over $\ov{\mathbb{Q}}$ we may give a simpler description of the Jacobson--Morozov morphism on each connected component. Namely, let us fix $(\gamma,\phi,\theta)$ in $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ as in the notation of \S\ref{ss:L-param-conn-comp-decomp}. 
Then, first observe that $\mathsf{JM}(\gamma,\phi,\theta)$ is equal to $\left(\gamma,\phi,N\right)$ where $N= \mathsf{JM}(\theta)$. We may then observe that $\mathsf{JM}$ restricted to $U(\gamma,\phi,\theta)$ maps into $U(\gamma,\phi,N)$ and is the \'etale sheafification of the map which on $A$-points is the map \begin{equation*} \left\{g(h\gamma,\phi,\theta)g^{-1}:(g,h)\in \widehat{G}(A)\times Z_{\phi,\theta}^\circ(A)\right\}\to \left\{g(h'\gamma,\phi,N)g^{-1}:(g,h')\in \widehat{G}(A)\times Z_{\phi,N}^\circ(A)\right\} \end{equation*} given by sending $g(h\gamma,\phi,\theta)g^{-1}$ to $g(h\gamma,\phi,N)g^{-1}$. We also observe that if $k$ is an extension of $\mathbb{Q}$ and $c$ is an element of $k$ such that $c^2=q$, then under the isomorphisms described in Proposition \ref{prop:WD-C-L-comparison} and Proposition \ref{L-C-L-comparison} the Jacobson--Morozov morphism corresponds to the morphism $\mathsf{LP}'_{G,k}\to \mathsf{WDP}'_{G,k}$ sending $\psi$ to the map on $A$-points of $(\psi\circ \iota',d\theta_\psi(e_0))$ where \begin{equation*} \iota'\colon \mc{W}_{F,k}\to \mc{W}_{F,k}\times \SL_{2,k} ,\qquad w\mapsto \left(w,\left(\begin{smallmatrix} c^{-d(w)} & 0\\ 0 & c^{d(w)}\end{smallmatrix}\right) \right). \end{equation*} So, on the level of $\mathbb{C}$-points we see that our Jacobson--Morozov map agrees with that from \S\ref{ss:JM-for-params-classical}. We now move towards stating the analogue of Theorem \ref{thm:JM-params-classical} at the level of $A$-points. To begin, we must define the notion of semi-simplicity for $L$-parameters in the relative setting. \begin{prop}\label{prop:Frob-factor-psi} Let $\psi$ be an $L$-parameter over a $\mathbb{Q}$-algebra $A$. 
Then there is a positive integer $m$ divisible by $[F^\ast:F]$ such that the morphism \begin{equation*} \mathcal{W}_{F,A}\to \check{G}_A,\qquad w \mapsto \check{\psi} \left(w^{2m},\left(\begin{smallmatrix}q^{-md(w)} & 0\\ 0 & q^{-md(w)}\end{smallmatrix}\right)\right) \end{equation*} admits a factorization \begin{equation*} \mathcal{W}_{F,A} \stackrel{d}{\longrightarrow} \underline{\mathbb{Z}}_A \stackrel{\check{\psi}_m}{\longrightarrow} \check{G}_A . \end{equation*} \end{prop} \begin{proof} This is proved in the same way as Proposition \ref{prop:Frob-factor}. \end{proof} \begin{defn}\label{defn:Frob-ss-L-param} For $A$ a $\mathbb{Q}$-algebra, we call an element $\psi$ of $\mathsf{LP}_G(A)$ \emph{Frobenius semi-simple} if there exists an integer $m$ as in Proposition \ref{prop:Frob-factor-psi} such that $\check{\psi}_m$ factors through a subtorus of $\check{G}_A$ \'etale locally on $A$. \end{defn} Let us denote by $\mathsf{LP}^\ss_G(A)$ (resp.\@ $\mathsf{LP}_G^{K,\ss}(A)$) the subset of Frobenius semi-simple elements of $\mathsf{LP}_G(A)$ (resp.\@ $\mathsf{LP}_G^K(A)$). This evidently forms a $\widehat{G}$-stable subfunctor of $\mathsf{LP}_G$ (resp.\@ $\mathsf{LP}_G^K$). \begin{rem}To understand the reasoning for this definition, observe that under the isomorphism in Proposition \ref{L-C-L-comparison}, this condition corresponds to an element $\psi'$ of $\mathsf{LP}'_{G,k}(A)$ satisfying the property that the projection of $\psi'(w_0^{2m},1)$ to $\widehat{G}(A)$ is semi-simple for some $m$ as in Proposition \ref{prop:Frob-factor-psi}. In particular, this notion of semi-simple agrees with that from \S\ref{ss:JM-for-params-classical} for $\mathbb{C}$-points by Lemma \ref{lem:L-group-ss}. \end{rem} We now prove the following surprisingly subtle semi-simplicity preservation property for the Jacobson--Morozov morphism. \begin{prop}\label{prop:JM-preserves-ss} Let $A$ be a $\mathbb{Q}$-algebra and $\psi$ an element of $\mathsf{LP}_G(A)$. 
Then, $\psi$ is Frobenius semi-simple if and only if $\mathsf{JM}(\psi)$ is. \end{prop} \begin{proof}Suppose that $\psi$ is Frobenius semi-simple. As the conclusion is insensitive to passing to an \'etale extension and conjugating, we do so freely. Take $m$ as in Proposition \ref{prop:Frob-factor-psi} and a split maximal torus $T$ of $\check{G}_A$ such that $\check{\psi}_m$ factors through $T$. Note that the eigenspace $\check{\mf{g}}_A(1)$ with respect to $\check{\psi}_m(1)$ is the Lie algebra of a Levi subgroup $L$ of $\check{G}_A$ such that $\check{\psi}_m$ factors through $Z(L)$. Indeed, we may assume that $T=(T_0)_A$ for a maximal torus $T_0$ of $\check{G}$. Let $L'$ be the Levi subgroup of $\check{G}$ generated by $T_0$ and the root groups for the roots $\alpha$ which annihilate $\check{\psi}_m(1)$. Then, we may take $L= L'_A$, where $\check{\psi}_m$ factors through $Z(L)$ by \cite[Corollary 3.3.6]{ConRgrsch}. Note that $\theta$ factorizes through $L$: indeed, by Proposition \ref{prop:hom-schem-omnibus} it suffices to check this on the level of Lie algebras, where it is clear. Let $T_2$ denote the standard diagonal subtorus of $\SL_{2,A}$. Since $\theta$ factorizes through $L$, by \cite[Lemma 5.3.6]{ConRgrsch} we may assume that the map $\theta|_{T_2}$ factorizes through a maximal torus $T'$ of $L$. But, as $Z(L)\subseteq T'$ both $\theta|_{T_2}$ and $\check{\psi}_m$ factorize through $T'$. Hence, if we write $\mathsf{JM}(\psi)=(\varphi,N)$ then the morphism $\mc{W}_{F,A} \to \check{G}_A$ given by $w \mapsto \varphi(w^m)$ factors through $T'$. This implies that $\mathsf{JM}(\psi)$ is Frobenius semi-simple. Conversely, suppose that $\mathsf{JM}(\psi)=(\varphi,N)$ is Frobenius semi-simple. Let $m$ be any integer as in Proposition \ref{prop:Frob-factor}. As above, we may build a reductive subgroup $L_m$ of $\check{G}_A$ such that $\mathrm{Lie}(L_m)$ is identified with $\check{\mf{g}}_A(1)$ with respect to $\check{\varphi}_m(1)$. 
We claim that the group $L_{km}$ stabilizes for $k$ sufficiently large. Indeed, the roots $\alpha$ of $\check{G}$ relative to $T_0$ that annihilate $\check{\varphi}_{km}(1)=\check{\varphi}_m(1)^k$ stabilize for $k$ sufficiently large, from where the claim follows by the construction. Denote by $L$ the group $L_{km}$ for $k$ sufficiently large, say for $k\geqslant k_0$. Let us write $Z$ for the torus $Z(L)^{\circ}$ (see \cite[Theorem 3.3.4]{ConRgrsch}). Observe that, as $\check{\varphi}_{km}$ for $k\geqslant k_0$ centralizes $\Lie(L)$, the morphism $\check{\varphi}_{km}$ factors through $Z(L)$. So then, for some $k_1\geqslant k_0$ we have that $\check{\varphi}_{k_1m}$ factors through $Z$. We put $m_1=k_1 m$. We will be done if we can show that $\theta|_{T_2}$ factorizes through the reductive group $A$-scheme $Z_{\check{G}}(Z)$ (see \cite[Lemma 2.2.4]{ConRgrsch} and \cite[Corollary 17.59]{MilneGroups}). Indeed, in this case by \cite[Lemma 5.3.6]{ConRgrsch}, we know that after passing to an \'etale extension, $\theta|_{T_2}$ factorizes through a maximal torus $T'$ of $Z_{\check{G}}(Z)$. Then $\theta|_{T_2}$ and $\check{\varphi}_{m_1}$ factor through $T'$. Hence \begin{equation*} \mc{W}_{F,A} \to \check{G}_A,\qquad w \mapsto \check{\psi} \left(w^{2m_1},\left(\begin{smallmatrix}q^{-m_1d(w)} & 0\\ 0 & q^{-m_1d(w)}\end{smallmatrix}\right)\right) \end{equation*} factors through $T'$. This implies that $\psi$ is Frobenius semi-simple. Working \'etale locally, and by passing to a $\check{G}(A)$-conjugate, we may assume that $Z$ is equal to $Z'_A$ for a split subtorus $Z'$ of $\check{G}$. Let $R_0$ be the set of nontrivial characters of $Z'$ appearing in the adjoint action of $Z'$ on $\check{\mf{g}}_A$. Note that these characters are already defined over $\mathbb{Q}$.
Consider the functor on $\cat{Alg}_\mathbb{Q}$ with \begin{equation*} Y(B)\vcentcolon= \left\{z\in Z'(B):\begin{aligned}(1)&\quad \chi(z)\ne 1\text{ for all }\chi\in R_0,\\ (2)&\quad \chi(z)=q^{m_1}\text{ for all }\chi\in R_0\text{ such that }\chi(\check{\varphi}_{m_1}(1))=q^{m_1}\end{aligned}\right\}. \end{equation*} Clearly $Y$ defines a locally closed subscheme of $Z'$ which is non-empty as $\check{\varphi}_{m_1}(1)$ is an element of $Y(A)$. Take $y \in Y(E)$ for a finite extension $E$ of $\mathbb{Q}$. By passing to an \'etale extension, we may assume that $A$ contains $E$. We claim that the inclusion $Z_{\check{G}}(Z)\subseteq Z_{\check{G}}(y)^\circ_A$ is an equality. As $Z_{\check{G}}(Z)$ is flat over $\Spec(A)$, we know from the fibral criterion for isomorphism (see \cite[Corollaire 17.9.5]{EGA4-4}), that it suffices to check this after base change to every point of $\Spec(A)$. But, as $A$ is a $\mathbb{Q}$-algebra, and $Z_{\check{G}}(Z)$ and $Z_{\check{G}}(y)^\circ_A$ are both connected, it then suffices to check that they have the same Lie algebra (e.g.\@ see \cite[Corollary 10.16]{MilneGroups}), but this is true by construction. In the following, we use the notation $\check{\mathfrak{g}}_A(\lambda)$ for $\lambda \in A^{\times}$ with respect to $\check{\varphi}_{m_1}(1)$. By construction, we know that $\mathrm{Int}(y)$ acts on $\check{\mathfrak{g}}_A(q^{\pm m_1})$ by multiplication by $q^{\pm m_1}$. Moreover, the $\mathfrak{sl}_2$-triple $(N,f,h)$ associated to $\theta$ by Theorem \ref{thm:rel-JM-triples} satisfies $N \in \check{\mathfrak{g}}_A(q^{m_1})$, $f \in \check{\mathfrak{g}}_A(q^{-m_1})$ and $h \in \check{\mathfrak{g}}_A(1)$. Therefore, the $\mathfrak{sl}_2$-triple attached to $\mathrm{Int}(y) \circ \theta$ is $(q^{m_1}N,q^{-m_1}f,h)$.
Thus, the $\mathfrak{sl}_2$-triple attached to $\Int(y)\circ \theta\circ \mu$ is $(N,f,h)$ where \begin{equation*} \mu\colon \SL_{2,A}\isomto\SL_{2,A},\qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \begin{pmatrix} a & q^{-m_1}b \\ q^{m_1}c & d \end{pmatrix}. \end{equation*} By Theorem \ref{thm:rel-JM-triples} $\mathrm{Int}(y) \circ \theta\circ \mu=\theta$, so $\theta|_{T_2}$ factorizes through $Z_{\check{G}}(y)^{\circ}_A =Z_{\check{G}}(Z)$ as desired. \end{proof} We end this section by proving a relative version of Proposition \ref{prop:Zphidec}. Fix a $\mathbb{Q}$-algebra $A$ and let $N$ be an element of $\mc{N}^\sqcup(A)$. Let us denote by $\mf{u}^N$ the $A$-submodule $\mathrm{im}(\ad (N)) \cap \Ker (\ad (N))$ of $\widehat{\mf{g}}_A$, which we also treat as a subfunctor of $\widehat{\mf{g}}_A$ in the obvious way. Note that $\mf{u}^N$ is in fact a closed subscheme of $\widehat{\mc{N}}_A$ and for all $A$-algebras $B$ there is an equality \begin{equation*} \mf{u}^N(B) = \mathrm{im}(\ad (N\otimes 1)) \cap \Ker (\ad (N\otimes 1)). \end{equation*} As these claims are \'etale local, we may assume that $N=gN_0g^{-1}$ for some $N_0$ in $\widehat{\mc{N}}(\mathbb{Q})$ and $g$ in $\widehat{G}(A)$. Observe then that $\mf{u}^N$ is equal to $g(\mf{u}^{N_0})_A g^{-1}$ where $\mf{u}^{N_0} \subseteq \widehat{\mf{g}}$ is defined in the same way as $\mf{u}^N$. As $\widehat{\mc{N}}_A$ is $\widehat{G}(A)$-equivariant, it suffices to show that $\mf{u}^{N_0}$ factorizes through $\widehat{\mc{N}}$, which may be checked on $\ov{\mathbb{Q}}$-points, where it is clear. One similarly proves the claimed equality. As $\mf{u}^N$ is a closed subscheme of $\widehat{\mc{N}}_A$, we obtain a closed subscheme $U^N\vcentcolon= \exp(\mf{u}^N)$ of $\widehat{G}_A$. We claim that $U^N$ is a closed subgroup scheme of $\widehat{G}_A$ flat over $A$.
As this may be checked \'etale locally we are again reduced to checking that $\exp(\mf{u}^{N_0})$ is a closed subgroup $\mathbb{Q}$-scheme of $\widehat{G}$ (automatically flat over $\mathbb{Q}$), but this is true by Proposition \ref{prop:exp-omnibus}. For an element $(\varphi,N)$ of $\mathsf{WDP}^{\sqcup}_G(A)$ we set \begin{equation*} U^N(\varphi)\vcentcolon= U^N\times_{\widehat{G}_A}Z_{\widehat{G}}(\varphi). \end{equation*} Concretely this means that for every $A$-algebra $B$ one has an identification of $U^N(\varphi)(B)$ with $U^N(B)\cap Z_{\widehat{G}}(\varphi)(B)$ where this intersection is taken in $\widehat{G}(B)$. Let us first establish the following relative version of Proposition \ref{prop:Zudec}, which follows easily (using the same reduction arguments as already used above) from Proposition \ref{prop:Zudec}. \begin{lem}\label{lem:zudec-rel} Let $\theta$ be an element of $\underline{\Hom}(\SL_{2,\mathbb{Q}},\widehat{G})(A)$ and define $N=\mathsf{JM}(\theta)$. Then, $Z_{\widehat{G}}(N)=U^N\rtimes Z_{\widehat{G}}(\theta) $. \end{lem} \begin{prop}\label{prop:Zphidesc-rel} Let $A$ be a $\mathbb{Q}$-algebra, $\psi$ an element of $\mathsf{LP}_G(A)$, and set $(\varphi,N)=\mathsf{JM}(\psi)$. Then, $Z_{\widehat{G}}(\varphi, N) = U^N(\varphi)\rtimes Z_{\widehat{G}}(\psi)$. \end{prop} \begin{proof} Let $B$ be an $A$-algebra. Given Lemma \ref{lem:zudec-rel} it clearly suffices to show that conjugation by an element in the image of $\varphi$ stabilizes $U^N$, as the rest of the argument for Proposition \ref{prop:Zphidec} then goes through verbatim. Let $u=\exp(n)$ be an element of $U^N(B)$ and observe that $\Int(\varphi(w))(u)$ is equal to $\exp(\Ad(\varphi(w))(n))$, and so we are done as clearly $\Ad(\varphi(w))(n)\in \mf{u}^N(B)$. \end{proof} \subsection{The relative Jacobson--Morozov theorem for parameters} We now arrive at the relative analogue of Theorem \ref{thm:JM-params-classical}.
Let us set $\mathsf{WDP}^{\sqcup,\ss}_G$ to be the presheaf whose $A$-points consist of Frobenius semi-simple Weil--Deligne parameters $(\varphi,N)$ such that $N$ lies in $\mc{N}^\sqcup(A)$. \begin{thm}[Relative Jacobson--Morozov theorem for parameters]\label{thm:rel-JM-param} The Jacobson--Morozov morphism $ \mathsf{JM}\colon \mathsf{LP}^\ss_G\to \mathsf{WDP}^{\sqcup,\ss}_G$ is surjective, and induces an isomorphism of quotient presheaves \begin{equation*} \mathsf{JM}\colon \mathsf{LP}^\ss_G/\widehat{G}\isomto \mathsf{WDP}^{\sqcup,\ss}_G/\widehat{G}. \end{equation*} \end{thm} Let us fix a $\mathbb{Q}$-algebra $A$, an element $(\varphi,N)$ of $\mathsf{WDP}^{\sqcup,\ss}_G(A)$, and an arithmetic Frobenius lift $w_0 \in \mathcal{W}_{F,A}$. In the notation from Proposition \ref{prop:eigen-decomp}, with $\rho\colon (\check{G}\rtimes \underline{\Gamma_\ast})_A\to \GL(\widehat{\mf{g}}_A)$ the adjoint action, $h=\ov{\varphi}(w_0)$, and $I=\phi(I_F/I_K)$, let $\mf{h}$ and $\mf{h}(\lambda)$ be $\widehat{\mf{g}}_A^I$ and $\widehat{\mf{g}}_A^I(\lambda)$ respectively. \begin{prop}[{cf.\@ \cite[Lemma 2.1]{GRAinv}}]\label{prop:gr-prop} There exists an $\mf{sl}_2$-triple in $\widehat{\mf{g}}_A$ of the form $(N,f,h)$ where $N\in\mf{h}(q)$, $f\in\mf{h}(q^{-1})$, and $h\in\mf{h}(1)$. Moreover, any two such $\mf{sl}_2$-triples are conjugate by an element of $Z_{\widehat{G}}(\varphi,N)$ \'etale locally on $A$. \end{prop} \begin{proof} By Theorem \ref{thm:rel-JM-triples} there exists an $\mf{sl}_2$-triple $(N,h_{-1},f_{-1})$ in $\widehat{\mf{g}}_A$. We take a finite extension $K$ of $F^\ast$ Galois over $F$ such that $\mathcal{I}_{K,A} \subseteq \ker (\check{\varphi}|_{\mathcal{W}_{F^\ast,A}})$. Observe that $N$ is in $\mf{h}$ by definition and if we set $h_0$ to be the average of $h_{-1}$ over the action of $\phi(I_F/I_K)$, then $h_0$ is also in $\mf{h}$ and $(N,h_0)$ satisfies the conditions of Proposition \ref{prop:Kostant-triples-prop} for $\mf{h}$.
Therefore there exists an $\mf{sl}_2$-triple in $\mf{h}$ of the form $(N,h_0,f_0)$. Given this, the decomposition result from Proposition \ref{prop:eigen-decomp}, and Proposition \ref{prop:Kostant-triples-prop}, the existence argument as in \cite[Lemma 2.1]{GRAinv} goes through without further comment. To show the uniqueness part of the statement, let $(N,h_1,f_1)$ be another $\mf{sl}_2$-triple satisfying the same conditions. We shall pass to an \'etale extension freely in the following. By Proposition \ref{prop:SL2-Hom-open-orbits}, we may assume that there exists a morphism $\theta\colon \SL_{2,\mathbb{Q}}\to\widehat{G}$ such that $(N,h,f)$ is the associated $\mf{sl}_2$-triple. Set $\mf{m}\vcentcolon=\mf{h}^N\cap \mf{h}(1)$, and for each $i\in\mathbb{N}$ set $\mf{m}_i$ to be $\left\{x\in\mf{m}:[h,x]=ix\right\}$. We can check that $\mf{m}=\bigoplus_{i}\mf{m}_i$ by using the adjoint action of $\theta|_{T_2}$ and Lemma \ref{lem:Gm-Ad-ad} below, where $T_2$ is the diagonal subtorus of $\SL_{2,\mathbb{Q}}$. Let us now set $\mf{u}\vcentcolon=\bigoplus_{i>0}\mf{m}_i$. Then $\mf{u}$ is a Lie subalgebra of $\widehat{\mf{g}}_A$ contained in $\widehat{\mc{N}}(A)$ as it is contained in $\bigoplus_{i>0}\widehat{\mf{g}}_{i,A}$, the base change to $A$ of $\bigoplus_{i>0}\widehat{\mf{g}}_i$ where $\widehat{\mf{g}}_i=\{x\in \widehat{\mf{g}}: [h,x]=ix\}$, and $\bigoplus_{i>0}\widehat{\mf{g}}_i$ is quickly checked to be contained in $\widehat{\mc{N}}(\mathbb{Q})$. Consider $U\vcentcolon=\exp(\mf{u})$, which is a subgroup of $H(A)$ by (3) of Proposition \ref{prop:exp-omnibus}. We claim that $\left\{\mathrm{Ad}(u)(h):u\in U\right\}$ is equal to $h+\mf{u}$. To see this, we note that if we write $u=\exp(x)$ for $x\in\mf{u}$ then by (2) of Proposition \ref{prop:exp-omnibus} $\mathrm{Ad}(u)(h)$ is equal to $\sum_{n \geq 0} \frac{1}{n!}\mathrm{ad}(x)^n(h)$. We need to show that for any $x_0 \in \mf{u}$ there is $x \in \mf{u}$ such that $x_0=\sum_{n \geq 1} \frac{1}{n!}\mathrm{ad}(x)^n(h)$.
We define a filtration $\mathrm{Fil}^i(\mathfrak{u})=\bigoplus_{j\geqslant i}\mathfrak{m}_j$ for $i \geq 1$. It suffices to prove that there is $x_i \in \mf{u}$ such that \begin{equation*} x_0 \equiv \sum_{n \geq 1} \frac{1}{n!}\mathrm{ad}(x_i)^n(h) \mod \mathrm{Fil}^i(\mathfrak{u}) \end{equation*} by induction on $i$. This is trivial for $i=1$. We assume that it is proved for $i$. We take $x_i' \in \mathrm{Fil}^i(\mathfrak{u})$ such that $[x_i',h]=x_0 - \sum_{n \geq 1} \frac{1}{n!}\mathrm{ad}(x_i)^n(h)$. Then $x_{i+1}=x_i +x_i'$ is seen to satisfy \begin{equation*} x_0 \equiv \sum_{n \geq 1} \frac{1}{n!}\mathrm{ad}(x_{i+1})^n(h) \mod \mathrm{Fil}^{i+1}(\mathfrak{u}) \end{equation*} since $[\mathfrak{u},\mathrm{Fil}^{i}(\mathfrak{u})] \subseteq \mathrm{Fil}^{i+1}(\mathfrak{u})$. Note now that $y=h_1-h=[N,f_1-f]$ is in $\mf{u}$. Indeed, by inspection $[N,y]=0$ so that $y$ is in $\mf{h}^N$, and since $h_1$ and $h$ are both in $\mf{h}(1)$, so is their difference $y$; thus $y$ lies in $\mf{m}$. Moreover, as $y=[N,f_1-f]$, it suffices to show that $\widehat{\mf{g}}_A^N\cap [N,\widehat{\mf{g}}_A]$ is contained in $\bigoplus_{i>0}\widehat{\mf{g}}_{i,A}$ which, again, may be verified over $\mathbb{Q}$ in which case it is classical (cf. \cite[Proposition 2.2]{GRAinv}). Thus, we know that there exists some $u$ in $U$ such that $\mathrm{Ad}(u)(h)=h+y=h_1$. One then verifies that $\Ad(u)(f)=f_1$ as in loc.\@ cit. Finally, we now observe that the inclusion $U\subseteq Z_{\widehat{G}}(\varphi,N)(A)$ holds. Indeed, writing $u=\exp(x)$ we see that $\Ad(u)(N)=N$ since $x$ is in $\mf{h}^N$ and using the formula from (2) of Proposition \ref{prop:exp-omnibus}. Similarly, as $\Int(\varphi(w))(\exp(x))$ is equal to $\exp(\Ad(\varphi(w))(x))$, this is just $\exp(x)$ as $x$ is in $\mf{h}(1)$. \end{proof} \begin{lem}\label{lem:Gm-Ad-ad} Let $S$ be a scheme and $H$ a smooth group $S$-scheme with Lie algebra $\mf{h}$.
Let $\rho \colon \mathbb{G}_{m,S} \to H$ be a morphism of group $S$-schemes. Set $h=d\rho(1)$, and for an integer $i$ we set \begin{equation*} \mf{h}_{\rho,i}=\{ x \in \mf{h} : \Ad (\rho (z)) x=z^i x\text{ for all }z\},\qquad \mf{h}_{h,i}=\{ x \in \mf{h} : \ad (h)(x)=ix \}. \end{equation*} Then we have $\mf{h}_{\rho,i} \subseteq \mf{h}_{h,i}$. This is an equality if $S$ is a $\mathbb{Q}$-scheme. \end{lem} \begin{proof} We have $d(\Ad \circ \rho)(1)=\ad (h)$ under the identification of the Lie algebra of $\GL (\mf{h})$ with $\End (\mf{h})$. By taking the weight decomposition of $\mf{h}$ under $\Ad \circ \rho$ (cf.\@ \cite[Lemma A.8.8]{CGP}), we obtain the claim from the fact that the derivative of the $i^\text{th}$-power map $\mathbb{G}_{m,S} \to \mathbb{G}_{m,S}$ is the multiplication-by-$i$ map. The last claim follows from $\mf{h}=\bigoplus_{i \in \mathbb{Z}} \mf{h}_{\rho,i}$ and the fact that the subspaces $\mf{h}_{h,i}$ for $i \in \mathbb{Z}$ are linearly independent if $S$ is a $\mathbb{Q}$-scheme. \end{proof} To show the surjectivity claim in Theorem \ref{thm:rel-JM-param} let $(N,f,h)$ be as in Proposition \ref{prop:gr-prop}, and consider the morphism $\theta\colon \SL_{2,A}\to \widehat{G}_A$ associated by Theorem \ref{thm:relative-jm}. We then consider the morphism of schemes \begin{equation*} \psi\colon \mc{L}^\mathrm{tw}_{F,A}\to {^C}\!G_A,\quad (w,g)\mapsto \theta\left(g\left(\begin{smallmatrix}\|w\| & 0\\ 0 & 1\end{smallmatrix}\right)^{-1}\right)\varphi(w). \end{equation*} We claim that this is a morphism of group $A$-schemes. To prove this, it suffices to show \begin{equation*} \Ad(\varphi(w))(\theta (g))=\theta \left( \Ad \left(\left(\begin{smallmatrix}\|w\| & 0\\ 0 & 1\end{smallmatrix}\right)\right)(g)\right) \end{equation*} for $w \in \mc{W}_{F,A}(B)$ and $g \in \SL_2 (B)$, where $B$ is any $A$-algebra. This follows from Proposition \ref{prop:hom-schem-omnibus} and the construction of $\theta$.
One then easily checks that $\psi$ is an element of $\mathsf{LP}_G(A)$ such that $\mathsf{JM}(\psi)=(\varphi,N)$ as desired. We now show that $\mathsf{JM}$ induces a bijection $\mathsf{LP}^\ss_G(A)/\widehat{G}(A)\isomto \mathsf{WDP}^{\sqcup,\ss}_G(A)/\widehat{G}(A)$, which now only requires the demonstration of injectivity. By the $\widehat{G}(A)$-equivariance of $\mathsf{JM}$ it suffices to show that if $\psi_1$ and $\psi_2$ are elements of $\mathsf{LP}^\ss_G(A)$ such that $\mathsf{JM}(\psi_1)$ and $\mathsf{JM}(\psi_2)$ both equal $(\varphi,N)$, then $\psi_1$ and $\psi_2$ are $\widehat{G}(A)$-conjugate. Note that the $\mf{sl}_2$-triples associated to $\theta_{\psi_i}$ for $i=1,2$ both satisfy the conditions of Proposition \ref{prop:gr-prop} for $(\varphi,N)$. Therefore, \'etale locally on $A$ the $\mf{sl}_2$-triples associated to $\psi_1$ and $\psi_2$ are conjugate in a way that centralizes $(\varphi,N)$ and so $\psi_1$ and $\psi_2$ are \'etale locally conjugate. From this we deduce that $\psi_2$ defines a class in $H^1_\mathrm{\acute{e}t}(\Spec(A),Z_{\widehat{G}}(\psi_1))$ given by $\underline{\mathrm{Transp}}_{\widehat{G}}(\psi_1,\psi_2)$. Note though that we have a natural map \begin{equation*} H^1_\mathrm{\acute{e}t}(\Spec(A),Z_{\widehat{G}}(\psi_1))\to H^1_\mathrm{\acute{e}t}(\Spec(A),Z_{\widehat{G}}(\varphi,N)) \end{equation*} which maps $\underline{\mathrm{Transp}}_{\widehat{G}}(\psi_1,\psi_2)$ to the trivial element, and so $\underline{\mathrm{Transp}}(\psi_1,\psi_2)$ belongs to \begin{equation*} \ker\bigg(H^1_\mathrm{\acute{e}t}(\Spec(A),Z_{\widehat{G}}(\psi_1))\to H^1_\mathrm{\acute{e}t}(\Spec(A),Z_{\widehat{G}}(\varphi,N))\bigg), \end{equation*} and so we are done if this kernel is trivial. But, this follows from Proposition \ref{prop:Zphidesc-rel}. \section{Geometric properties of the Jacobson--Morozov map} In this final section we use the material developed so far to prove that the Jacobson--Morozov morphism satisfies favorable geometric properties.
Namely, we show that $\mathsf{JM}\colon \mathsf{LP}_G^K\to \mathsf{WDP}_G^{K,\sqcup}$ (resp.\@ $\mathsf{JM}\colon \mathsf{LP}_G^K\to \mathsf{WDP}_G^K$) is birational (resp.\@ weakly birational). We do this by exhibiting a more explicit space which embeds into all three moduli spaces weakly birationally. This is the geometric analogue of the reductive centralizer locus from \S\ref{ss:red-loc-classical}. We then finally show that as a particular application of these ideas one may prove that the Jacobson--Morozov map is an isomorphism between the discrete loci in $\mathsf{LP}_G^K$ and $\mathsf{WDP}_G^K$. \subsection{Birationality properties} To begin, note that as the morphism $\mc{N}^\sqcup\to\mc{N}$ is surjective and satisfies the conditions of Lemma \ref{lem:stratification-isom}, $\mathsf{WDP}_G^{K,\sqcup}\to \mathsf{WDP}_G^K$ is then also surjective and satisfies the same conditions. We therefore deduce from Lemma \ref{lem:stratification-isom} the following. \begin{prop}\label{prop:sqcup-to-square-bir} The morphism $\mathsf{WDP}_G^{K,\sqcup}\to \mathsf{WDP}_G^K$ is weakly birational. \end{prop} We now give a more explicit effective version of this result. To start, we observe the following where we denote by $(\varphi^\mathrm{univ},N^\mathrm{univ})$ the universal pair over $\mathsf{WDP}_G^K$. \begin{prop}\label{prop:red-equi-dim-loc-closed} For each $n\geqslant 0$, the subset \begin{equation*} \mathsf{WDP}_G^{K,n}\vcentcolon= \left\{x\in \mathsf{WDP}_G^K:Z_{\widehat{G}}(\varphi^\mathrm{univ},N^\mathrm{univ})_x^\circ \emph{ is reductive of dimension }n+\dim(Z_0(\widehat{G}))\right\} \end{equation*} of $\mathsf{WDP}_G^K$ is locally closed, is open if $n=0$, and is empty if $n>\dim(\widehat{G}/Z_0(\widehat{G}))$. \end{prop} \begin{proof} Consider the quotient $Q\vcentcolon= Z_{\widehat{G}}(\varphi^\mathrm{univ},N^\mathrm{univ})/Z_0(\widehat{G})_{\mathsf{WDP}_G^K}$. 
By \cite[Exposé VIB, Proposition 4.1]{SGA3-1}, the function $f\colon \mathsf{WDP}_G^K\to \mathbb{N}$ given by $f(x)=\dim(Q_x)$ is upper semi-continuous. In particular the set $D_n=f^{-1}([0,n+1))\cap f^{-1}([n,\infty))$ of points where $Q_x$ is of dimension $n$ is locally closed, and as $D_0=f^{-1}([0,1))$, $D_0$ is open. Let us endow $D_n$ with the reduced substructure. Let us then note that by \cite[Exposé VIB, Corollaire 4.4]{SGA3-1} for all $n\geqslant 0$ the identity component functor $Q_{D_n}^\circ$ is representable and is smooth over $D_n$. Thus, by \cite[Proposition 3.1.9]{ConRgrsch}, we deduce that the locus of $x$ in $D_n$ where $Q_x^\circ$ is reductive is open, and thus locally closed in $\mathsf{WDP}_G^K$ and open if $n=0$. But, evidently this locus is equal to $\mathsf{WDP}_G^{K,n}$. \end{proof} \begin{defn}\label{defn:locus-of-red} We define the \emph{reductive centralizer locus} in $\mathsf{WDP}_G^K$ to be the $\mathbb{Q}$-scheme $\mathsf{WDP}_G^{K,\mathrm{rc}}\vcentcolon= \bigsqcup_n \mathsf{WDP}_G^{K,n}$ (where each $\mathsf{WDP}_G^{K,n}$ is given the reduced subscheme structure). We call the open subset $\mathsf{WDP}_G^{K,0}$ the \emph{discrete locus} and denote it by $\mathsf{WDP}_G^{K,\mathrm{disc}}$. \end{defn} Let us observe that by the proof of Proposition \ref{prop:red-equi-dim-loc-closed}, if $A$ is a reduced $\mathbb{Q}$-algebra and $(\varphi,N)$ is a Weil--Deligne parameter over $A$ such that the corresponding morphism $\Spec(A)\to \mathsf{WDP}_G^K$ factorizes through $\mathsf{WDP}_G^{K,\mathrm{rc}}$, then $Z_{\widehat{G}}(\varphi,N)^\circ$ is representable and reductive over $A$. While a priori unclear, we now show that the reducedness of $\mathsf{WDP}_G^K$ implies that $N^\mathrm{univ}$ pulled back to the reductive centralizer locus lies in $\mc{N}^\sqcup$. More precisely, we have the following.
\begin{prop}\label{prop:red-cent-constant-N} The morphism $\mathsf{WDP}_G^{K,\mathrm{rc}}\to\mathsf{WDP}_G^K$ factorizes through $\mathsf{WDP}_G^{K,\sqcup}$. \end{prop} Indeed, as $\mathsf{WDP}_G^{K}$ is reduced by Theorem \ref{thm:WD-reduced} this follows from the following proposition. \begin{prop}\label{prop:reductive-cent-constant-N} If $A$ is a reduced $\mathbb{Q}$-algebra, and $(\varphi,N)$ is an element of $\mathsf{WDP}_G(A)$ such that $Z_{\widehat{G}}(\varphi,N)_x^\circ$ is a reductive group scheme of dimension $n$ for all $x$ in $\Spec(A)$, then $(\varphi,N)$ is an element of $\mathsf{WDP}^\sqcup_G(A)$. \end{prop} \begin{proof} We break the argument into several steps to make the structure clear. \medskip \noindent\textbf{Step 1:} It suffices to prove that if $A$ is a strictly Henselian discrete valuation ring, then $N$ is egc to some $N_0$ in $\widehat{\mc{N}}(\mathbb{Q})$. Indeed, we must show that the map $\Spec(A)\to \mathcal{N}$ induced by $(\varphi,N)$ factorizes through $\mc{N}^\sqcup$. By standard Noetherian approximation arguments we may assume that $A$ is Noetherian. We may then assume that $A$ is connected, in which case we must show that this morphism factorizes through some $\mc{O}_N$. As $A$ is reduced, it suffices to show that $\Spec(A)\to \mc{N}$ factorizes through some $\mc{O}_N$ set-theoretically. As $A$ is connected, any two points of $\Spec(A)$ may be connected by a finite chain of specializations and generalizations. This reduces us to showing that if $x$ is a generalization of $y$ in $\Spec(A)$ then these points map into a common $\mc{O}_N$. We are then reduced to the case of a discrete valuation ring by \stacks{054F}, and then trivially to the case of a strictly Henselian discrete valuation ring. \medskip \noindent\textbf{Step 2:} We claim that we may assume that $(\varphi,N)$ is in $\mathsf{WDP}^{K,\mathrm{disc}}_G(A)$. Write $\eta$ (resp.\@ $s$) for the generic point (resp.\@ special point) of $\Spec(A)$.
As $Z_{\widehat{G}}(\varphi,N)$ has constant fiber dimension, the same is true for $Z_{\widehat{G}^{\mathrm{der}}}(\varphi,N)$ and so again \cite[Exposé VIB, Corollaire 4.4]{SGA3-1} shows that $Z_{\widehat{G}^{\mathrm{der}}}(\varphi,N)^\circ$ is representable and reductive over $A$. As $A$ is strictly Henselian, for any reductive group over $A$, all its tori are split, all its maximal tori are conjugate, and all its Borel subgroups are conjugate. Then, as ${^C}\!G$ is equal to ${^L}\!\widetilde{G}$, the arguments in \cite[Lemma 3.5]{BorelCorvallis} show that if $T$ is a maximal torus of $Z_{\widehat{G}^\mathrm{der}}(\varphi,N)^\circ$ there exists some $g\in \check{G}(A)$ and a Levi subgroup $H$ of $G^\ast$ (where $G^\ast$ is the quasi-split inner form of $G$) such that $g Z_{{^C}\!G_A}(T)g^{-1}={^C}\!H_A$. Therefore $g^{-1}(\varphi,N)g$ factorizes through ${^C}\!H_A$. We claim then that $(\varphi,N)$ is in $\mathsf{WDP}^{K,\mathrm{disc}}_H(A)$. By Proposition \ref{prop:red-cent-ss} $g^{-1}(\varphi_\eta,N_\eta)g$ and $g^{-1}(\varphi_s,N_s)g$ are Frobenius semi-simple. Moreover, the argument given in \cite[Proposition 3.6]{BorelCorvallis} shows that neither $g^{-1}(\varphi_\eta,N_\eta)g$ nor $g^{-1}(\varphi_s,N_s)g$ factorizes through a proper Levi (in the sense of loc.\@ cit.\@) which, as they are both Frobenius semi-simple, implies by the usual arguments (cf.\@ \cite[Lemma 10.3.1]{KotStfcus}) that they are discrete. As $N$ is in $\mc{N}^\sqcup(A)$ if and only if $g^{-1}Ng$ is, the claimed reduction follows. \medskip \noindent\textbf{Step 3:} We now show that we may assume $N_s\ne 0$. If both $N_s$ and $N_\eta$ are zero we are done, and so it suffices to show that if $N_\eta\ne 0$ then $N_s\ne 0$. To see this, assume otherwise. But the inequality $\dim Z_{\check{G}}(\varphi_\eta)\leqslant \dim Z_{\check{G}}(\varphi_s)=\dim Z_{\check{G}}(\varphi_s,N_s)$ holds by \cite[Exposé VIB, Proposition 4.1]{SGA3-1}.
That said, $\dim Z_{\check{G}}(\varphi_\eta,N_\eta)<\dim Z_{\check{G}}(\varphi_\eta)$. Indeed, it suffices to note that if $w_0$ is any lift of arithmetic Frobenius then (as in Proposition \ref{prop:Frob-factor}) for $m$ sufficiently large $\check{\varphi}_\eta(w_0^m)$ defines a point of $Z_{\check{G}}(\varphi_\eta)$ but, as $N_\eta \ne0$, does not define a point of $Z_{\check{G}}(\varphi_\eta,N_\eta)$ and thus $ Z_{\check{G}}(\varphi_\eta,N_\eta)^\circ \subsetneq Z_{\check{G}}(\varphi_\eta)^\circ$ from where the claim follows. But, observe that $\dim(Z_{\check{G}}(\varphi_\eta,N_\eta))$ (resp.\@ $\dim(Z_{\check{G}}(\varphi_s,N_s))$) is equal to $\dim(Z_{\widehat{G}}(\varphi_\eta,N_\eta))+1$ (resp.\@ $\dim(Z_{\widehat{G}}(\varphi_s,N_s))+1$) and so we arrive at a contradiction. \medskip \noindent\textbf{Step 4:} Replacing $G$ with $G^\mathrm{der}$ we may assume that $Z_0(\widehat{G})$ is finite. Proposition \ref{prop:red-cent-ss} together with Theorem \ref{thm:rel-JM-param} imply that $(\varphi_\eta,N_\eta)$ (resp.\@ $(\varphi_s,N_s)$) comes from an $L$-parameter $\psi_1$ (resp.\@ $\psi_2$). Write $\mu_i$ for the restriction of $\theta_{\psi_i}$ to the diagonal maximal torus. Fix $w_0$ to be an arithmetic Frobenius lift. By Frobenius semi-simplicity and the fact that $A$ is strictly Henselian, after conjugating there is a positive integer $m_0$ divisible by $[F^\ast:F]$ such that $\check{\varphi} (w_0^{m_0})$ is contained in the $A$-points of a maximal torus $T$ of $\check{G}_{\ov{\mathbb{Q}}}$. By the relationship between $\psi_i$ and $\varphi_i$ and the argument of \cite[Lemma 3.1]{GRAinv}, we see that up to replacing $m_0$ by a power, we may further assume that $\check{\varphi}_\eta (w_0^{2m_0})=\mu_1 (q^{m_0})$ and $\check{\varphi}_s(w_0^{2m_0})=\mu_2 (q^{m_0})$. From the first equality it is simple to see that $\mu_1$ factorizes through $T_\eta$, and thus there exists a unique cocharacter $\mu$ of $T$ whose base change $\mu_A$ to $T_A$ lifts $\mu_1$.
We note, as $N_s\ne 0$, that $\mu_2$ is characterized by the property that the image of $\mu_2$ contains $\check{\varphi}_s (w_0^{2m_0})$ and $\mathrm{Ad}(\mu_2 (q^{m_0}))(N_s)=q^{2m_0}N_s$. As $\check{G}_A$ and $\widehat{\mf{g}}_A$ are separated over $A$, we have that the image of $\mu$ contains $\varphi (w_0^{2m_0})$ and $\mathrm{Ad}(\mu(q^{m_0}))(N)=q^{2m_0}N$. Hence, $\mu_s$ satisfies the above characterization of $\mu_2$, so $\mu_s=\mu_2$. Let $P(\mu)$ be the parabolic subgroup of $\widehat{G}_{\ov{\mathbb{Q}}}$ associated to $\mu$. Define $\widehat{\mathfrak{g}}_\eta(i)$ (resp.\@ $\widehat{\mathfrak{g}}_s(i)$) using $\mu_\eta$ (resp.\@ $\mu_s$) as in \cite[\S5.7]{CarFinLie}. Then by \cite[Proposition 5.7.3]{CarFinLie} $N_\eta$ (resp.\@ $N_s$) is in the unique open $P(\mu)_\eta$-orbit (resp.\@ $P(\mu)_s$-orbit) of $\bigoplus_{i \geq 2} \widehat{\mathfrak{g}}_\eta(i)$ (resp.\@ $\bigoplus_{i \geq 2} \widehat{\mathfrak{g}}_s(i)$). But, by the uniqueness of this open orbit, we then see that $N_\eta$ and $N_s$ are both conjugate to any $\ov{\mathbb{Q}}$-point of the unique open orbit of $P(\mu)$ on $\bigoplus_{i \geq 2} \widehat{\mathfrak{g}}(i)$, from where the conclusion follows. We are then done by Proposition \ref{prop:split-nilp-desc}. \end{proof} We next show the pleasant property that $\mathsf{WDP}_G^{K,\mathrm{rc}}$ actually has dense image in $\mathsf{WDP}_G^{K,\sqcup}$. \begin{lem}\label{lem:stab-dim} Let $k$ be a field, $X$ an irreducible finite type $k$-scheme equipped with an action of an algebraic $k$-group $H$, and $Y$ an irreducible locally closed subscheme of $X$. Assume that the action morphism $\mu \colon H \times Y \to X$ is dominant. Then there is a dense open subset $U$ of $Y$ such that $\dim Z_H(y)\leqslant \dim(H) +\dim(Y) -\dim(X)$ for all $y \in U$. \end{lem} \begin{proof} By \cite[Corollary 14.116]{GortzWedhorn} there exists a dense open subset $V$ of $X$ with the property that $\dim \mu^{-1}(y)=\dim H +\dim Y -\dim X$ for all $y \in V$.
As $\mu$ is $H$-equivariant when $H$ is made to act on the first component of $H\times Y$, we may assume that $V$ is $H$-stable. We put $U=V \cap Y$, which is non-empty as $\mu$ is dominant and $V$ is $H$-stable. As $Z_H(y) \times \{ y \} \subseteq \mu^{-1}(y)$ for $y \in U$, we obtain the claim. \end{proof} \begin{prop}\label{prop:dense-tor-cent} The set \[ \{ x \in \mathsf{WDP}_G^{K,\sqcup} : Z_{\widehat{G}}(\varphi^\mathrm{univ},N^\mathrm{univ})^\circ_x \textrm{ is a torus}\} \] contains an open dense subset of $\mathsf{WDP}_G^{K,\sqcup}$. \end{prop} \begin{proof} Observe that this may be checked over $\ov{\mathbb{Q}}$, as the morphism $\Spec(\ov{\mathbb{Q}})\to\Spec(\mathbb{Q})$ is surjective and universally open (see \stacks{0383}). Thus, from Theorem \ref{thm:WD-const-decomp} it suffices to show that for each $(\gamma,\phi,N)$ in $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$ corresponding to $(\varphi,N)$, one has that the set of points $x$ in $U(\gamma,\phi,N)$ such that $Z_{\widehat{G}}(\varphi^\mathrm{univ},N^\mathrm{univ})^\circ_x$ is a torus contains a dense open subset. Let $H$ be the normalizer of $\phi$ in $(\widehat{G} \rtimes \underline{\Gamma_\ast})_{\overline{\mathbb{Q}}}$. Then $H^\circ=Z_{\widehat{G}}(\phi)^\circ$ which is a reductive group by Lemma \ref{lem:fixed-points-reductive}. Consider the linear algebraic $\ov{\mathbb{Q}}$-group $S'_H(N)$ representing the functor \begin{equation*} \cat{Alg}_{\overline{\mathbb{Q}}}\to \cat{Grp},\qquad A\mapsto \left\{(h,z)\in H(A)\times A^\times: \Ad(h)(N)=z^2N \right\}, \end{equation*} which is clearly seen to be a closed subgroup scheme of $((\widehat{G}\times \bb{G}_{m}) \rtimes \underline{\Gamma_\ast})_{\ov{\mathbb{Q}}}$ by changing the order of the components. Let $S_H(N)$ be the image of $S'_H(N)$ in $(\check{G} \rtimes \underline{\Gamma_\ast})_{\ov{\mathbb{Q}}}$. Let $s_0 u_0$ be the Jordan decomposition of $\ov{\varphi}(w_0)$ in $S_H(N)$. 
Then the image of $u_0$ in $\mathbb{G}_{m,\overline{\mathbb{Q}}}$ is trivial. Hence $u_0$ is an element of $Z_{\phi,N}^{\circ}$. Replacing $\gamma$ by $u_0^{-1} \gamma$, we may assume that $\varphi$ is Frobenius semi-simple from the beginning. Let $\psi$ be an element of $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ such that $\mathsf{JM}(\psi)=(\varphi,N)$ and write $\theta=\theta_\psi$. Let $U_{H}(N)$ be the unipotent radical of $Z_{H}(N)$. Then, as in Proposition \ref{prop:Zudec}, we have $Z_{H}(N)=U_{H}(N)\rtimes Z_{H}(\theta)$. We take a maximal quasi-torus $T$ of $Z_{H}(\theta)$ in the sense of \cite[Definition 8.6]{HaPaCryChe}. Set $s_1$ to be the image of $\left( \theta \left( \left(\begin{smallmatrix} q^{1/2} & 0 \\ 0 & q^{-1/2} \end{smallmatrix}\right)\right),q^{1/2}\right)$ in $\check{G}(\ov{\mathbb{Q}})$. Then $Z_{\phi,N}^{\circ} \gamma s_1^{-1} \subseteq Z_H(N)$. So we can write $T \cap Z_{\phi,N}^{\circ} \gamma s_1^{-1}=t_1 T^{\circ}$ for some $t_1 \in T(\ov{\mathbb{Q}})$ by \cite[Theorem 8.10 (d)]{HaPaCryChe}. Then, we have $Z_{\phi,N}^{\circ} \gamma=t_1 Z_{\phi,N}^{\circ} s_1$. We let $T^{t_1}$ be the closed subgroup scheme of $T$ of elements commuting with $t_1$. For $t_0$ in $(T^{t_1})^\circ(\ov{\mathbb{Q}})$, we consider the morphism \[ \Lambda_{t_0} \colon Z_{H}(N)^{\circ} \times (T^{t_1})^\circ \to Z_{H}(N)^{\circ},\qquad (h,t) \mapsto (t_1t_0)^{-1}ht_1t_0 t s_1 h^{-1} s_1^{-1}. \] This induces \[ \Lie (\Lambda_{t_0}) \colon \mathrm{Lie}(Z_{H}(N)^{\circ}) \times \mathrm{Lie}((T^{t_1})^\circ) \to \mathrm{Lie}(Z_{H}(N)^{\circ}),\qquad (x,y) \mapsto \mathrm{ad}((t_1t_0)^{-1})x + y - \mathrm{ad}(s_1)x. 
\] This is identified with the direct sum of \begin{align*} &\Lie (\Lambda_{t_0})_1 \colon \mathrm{Lie}(Z_{H}(\theta)^{\circ}) \times \mathrm{Lie}((T^{t_1})^\circ) \to \mathrm{Lie}(Z_{H}(\theta)^{\circ}),\qquad (x,y) \mapsto \mathrm{ad}((t_1t_0)^{-1})x + y - x,\\ &\Lie (\Lambda_{t_0})_2 \colon \mathrm{Lie}(U_{H}(N)^{\circ}) \to \mathrm{Lie}(U_{H}(N)^{\circ}),\qquad z \mapsto \mathrm{ad}((t_1t_0)^{-1})z - \mathrm{ad}(s_1)z. \end{align*} In the proof of \cite[Theorem 8.9 (c)]{HaPaCryChe}, it is shown that the morphism \begin{equation*} Z_{H}(\theta)^{\circ} \times t_1 (T^{t_1})^\circ \to t_1 Z_{H}(\theta)^{\circ},\qquad (g,t) \mapsto g t g^{-1} \end{equation*} is dominant. Therefore, by Lemma \ref{lem:stab-dim} and the fact that $(T^{t_1})^\circ \subseteq Z_{Z_{H}(\theta)^{\circ}}(t_1t_0)^{\circ}$ for any $t_0 \in (T^{t_1})^\circ$, there is an open dense subset $U_{t_1,1} \subseteq (T^{t_1})^\circ$ such that $Z_{Z_{H}(\theta)^{\circ}}(t_1t_0)^{\circ}=(T^{t_1})^\circ$ for $t_0 \in U_{t_1,1}$. This implies that $\Lie (\Lambda_{t_0})_1$ is surjective for $t_0 \in U_{t_1,1}$. The eigenvalues of the diagonalizable operator $\mathrm{ad}(s_1)$ on $\mathrm{Lie}(U_{H}(N))$ are contained in $\{ q^{i/2} \}_{1 \leq i \leq n_0}$ for some $n_0$ by Proposition \ref{prop:Zudec}. Let $m_1$ be the order of $t_1$ in $\pi_0(T)$. Then there is a positive integer $m$ such that the eigenvalues of the diagonalizable operator $\mathrm{ad}(t_1^{-1-mm_1})$ on $\mathrm{Lie}(U_{H}(N))$ are disjoint from $\{ q^{i/2} \}_{1 \leq i \leq n_0}$. Since $t_1^{-1-mm_1}$ and $s_1$ commute, $\mathrm{ad}(t_1^{-1-mm_1})$ and $\mathrm{ad}(s_1)$ are simultaneously diagonalizable. Hence we have the surjectivity of $\Lie (\Lambda_{t_1^{mm_1}})_2$. Since the surjectivity of $\Lie (\Lambda_{t_0})_2$ defines an open subset of $(T^{t_1})^\circ$, which we now know is non-empty, there is an open dense subset $U_{t_1,2} \subseteq (T^{t_1})^\circ$ such that $\Lie (\Lambda_{t_0})_2$ is surjective for $t_0 \in U_{t_1,2}$.
We put $U_{t_1}=U_{t_1,1} \cap U_{t_1,2}$. Then, for $t_0 \in U_{t_1}$, the map $\Lie (\Lambda_{t_0})$ is surjective, hence $\Lambda_{t_0}$ is dominant. This implies that \begin{equation*} Z_{H}(N)^{\circ} \times t_1 (T^{t_1})^\circ s_1 \to t_1 Z_{H}(N)^{\circ} s_1,\qquad (g,t) \mapsto gtg^{-1} \end{equation*} is dominant. Further, for $t_0 \in U_{t_1}$, the surjectivity of $\Lie (\Lambda_{t_0})$ implies that the kernel of \begin{equation*} \mathrm{Lie}(Z_{H}(N)^{\circ}) \to \mathrm{Lie}(Z_{H}(N)^{\circ}),\qquad x \mapsto \mathrm{ad}((t_1t_0)^{-1})x - \mathrm{ad}(s_1)x \end{equation*} is equal to $\mathrm{Lie}((T^{t_1})^\circ)$. This means that for $t_0 \in U_{t_1}$, we have $Z_{Z_{H}(N)} (t_1 t_0 s_1)^{\circ} =(T^{t_1})^\circ$. So the centralizer is a torus at every point in the image of the dominant map \begin{equation*} Z_{H}(N)^{\circ} \times t_1 U_{t_1} s_1 \to t_1 Z_{H}(N)^{\circ} s_1,\qquad (g,t) \mapsto gtg^{-1}, \end{equation*} whose target is equal to $Z_{\phi,N}^{\circ} \gamma$, and so the conclusion follows from Chevalley's theorem (see \cite[Theorem 10.19]{GortzWedhorn}). \end{proof} From this, together with Proposition \ref{prop:sqcup-to-square-bir} and Lemma \ref{lem:stratification-isom}, we deduce that the two maps $\mathsf{WDP}_G^{K,\mathrm{rc}}\to \mathsf{WDP}_G^{K,\sqcup}$ and $\mathsf{WDP}_G^{K,\mathrm{rc}}\to \mathsf{WDP}_G^{K}$ are weakly birational. To connect this discussion to the Jacobson--Morozov map, we now show that $\mathsf{JM}$ is an isomorphism over $\mathsf{WDP}_G^{K,\mathrm{rc}}$. \begin{prop}\label{prop:JM-isom-over-red-locus} The morphism $\mathsf{JM}\colon \mathsf{JM}^{-1}(\mathsf{WDP}_G^{K,\mathrm{rc}})\to \mathsf{WDP}_G^{K,\mathrm{rc}}$ is an isomorphism. \end{prop} \begin{proof} Let $A$ be a $\mathbb{Q}$-algebra.
As $\mathsf{JM}$ is $\widehat{G}(A)$-equivariant, to show that this map is a bijection on $A$-points it suffices to prove that the map on $A$-points is a bijection upon quotienting both sides by $\widehat{G}(A)$, and that for all $\psi$ in $\mathsf{JM}^{-1}(\mathsf{WDP}_G^{K,\mathrm{rc}}(A))$ the equality $Z_{\widehat{G}}(\psi)=Z_{\widehat{G}}(\varphi,N)$ holds where $(\varphi,N)=\mathsf{JM}(\psi)$. For the bijectivity on quotient sets, it suffices by Theorem \ref{thm:rel-JM-param} to show that every element of $\mathsf{WDP}_G^{K,\mathrm{rc}}(A)$ belongs to $\mathsf{WDP}_G^{K,\sqcup,\ss}(A)$. But this follows from Proposition \ref{prop:red-cent-ss} and Proposition \ref{prop:red-cent-constant-N}. Suppose now that $\psi$ is an element of $\mathsf{JM}^{-1}(\mathsf{WDP}_G^{K,\mathrm{rc}}(A))$. To show that $Z_{\widehat{G}}(\psi)=Z_{\widehat{G}}(\varphi,N)$ it suffices by Proposition \ref{prop:Zphidesc-rel} to show that $U^N(\varphi)$ is trivial. Applying the fiberwise criterion for isomorphism (see \cite[Lemma B.3.1]{ConRgrsch}) to the identity section of $U^N(\varphi)$, it suffices to show that $U^N(\varphi)_x$ is trivial for all $x$ in $\Spec(A)$. But, as $U^N(\varphi)_x$ is unipotent, it is contained in $Z(\varphi,N)^\circ_x$, and as it is also normal it must be trivial by our assumption that $Z(\varphi,N)^\circ_x$ is reductive. \end{proof} We deduce that $\mathsf{WDP}_G^{K,\mathrm{rc}}$ also admits a weakly birational monomorphism to $\mathsf{LP}_G^K$. So, we now come to our main geometric result concerning the Jacobson--Morozov morphism. \begin{thm}\label{thm:JM-omnibus} The morphism $\mathsf{JM}\colon \mathsf{LP}_G^K\to \mathsf{WDP}_G^{K,\sqcup}$ (resp. $\mathsf{JM}\colon \mathsf{LP}_G^K\to \mathsf{WDP}_G^K$) is birational (resp.\@ weakly birational).
\end{thm} \begin{proof} The weak birationality of both maps is clear from the above discussion, and therefore it suffices to show that the map $\mathsf{JM}\colon \mathsf{LP}_G^K\to \mathsf{WDP}_G^{K,\sqcup}$ induces a bijection on irreducible components. It clearly suffices to check this after base changing to $\ov{\mathbb{Q}}$. By Theorem \ref{thm:WD-const-decomp} and Theorem \ref{thm:L-const-decomp} the connected components of $\mathsf{LP}_{G,\ov{\mathbb{Q}}}^K$ and $\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup}$ are irreducible, so it suffices to show that the map $\mathsf{JM}\colon \pi_0(\mathsf{LP}_{G,\ov{\mathbb{Q}}}^K)\to \pi_0(\mathsf{WDP}_{G,\ov{\mathbb{Q}}}^{K,\sqcup})$ is bijective. To do this we first show that the Jacobson--Morozov map induces a bijection $[\mathsf{LP}_G^K(\overline{\mathbb{Q}})]\to [\mathsf{WDP}_G^K(\overline{\mathbb{Q}})]$. By Proposition \ref{prop:dense-tor-cent} and Proposition \ref{prop:red-cent-ss} every equivalence class of the target contains a Frobenius semi-simple element and thus surjectivity follows from Theorem \ref{thm:rel-JM-param}. To show injectivity suppose that $(\gamma_i,\phi_i,\theta_i)$ for $i=1,2$ are elements of $\mathsf{LP}_G^K(\overline{\mathbb{Q}})$ such that $(\gamma_i,\phi_i,N_i)$ are equivalent in $\mathsf{WDP}_G^K(\ov{\mathbb{Q}})$. Without loss of generality, we may assume that $\phi_1=\phi_2=:\phi$ and $N_1=N_2=:N$ and that $\gamma_2=h\gamma_1$ with $h$ in $Z_{\phi,N}(\overline{\mathbb{Q}})$. By Proposition \ref{prop:gr-prop} there exists $z$ in $Z_{\phi,N}(\ov{\mathbb{Q}})$ such that $z\theta_1 z^{-1}=\theta_2$. Note then that $(\gamma_2,\phi,\theta_2)=z(s\gamma_1,\phi,\theta_1)z^{-1}$ where $s=z^{-1}\gamma_2z\gamma_1^{-1}$. Writing $s=z^{-1}h\gamma_1 z\gamma_1^{-1}$ one sees from the fact that $z^{-1}$ and $h$ both centralize $\phi$ and $\gamma_1$ normalizes $\phi$ that $s$ centralizes $\phi$. 
On the other hand, one can just as easily check that, as $\gamma_1$ centralizes $\theta_1$ and $\gamma_2$ centralizes $\theta_2$, the element $s=z^{-1}\gamma_2z\gamma_1^{-1}$ also centralizes $\theta_1$. Therefore, as $(\gamma_2,\phi,\theta_2)=z(s\gamma_1,\phi,\theta_1)z^{-1}$, we deduce that $(\gamma_2,\phi,\theta_2)$ and $(\gamma_1,\phi,\theta_1)$ are equivalent in $\mathsf{LP}_G^K(\overline{\mathbb{Q}})$ as desired. But, for $(\gamma,\phi,\theta)$ with image $(\gamma',\phi,N)$ under the Jacobson--Morozov map, one has $\pi_0(Z_{\phi,N})=\pi_0(Z_{\theta,N})$, as follows quickly from Proposition \ref{prop:Zphidesc-rel}. These observations together with Corollary \ref{cor:WDP-pi0} and Corollary \ref{cor:LP-pi0} give the desired conclusion. \end{proof} Let us finally note that, as a possibly useful corollary of the above results, we also obtain the density of Frobenius semi-simple parameters in all three of these moduli spaces. \begin{cor} The subsets \begin{equation*} \mathsf{LP}_G^\ss(\ov{\mathbb{Q}})\subseteq \mathsf{LP}_G,\qquad \mathsf{WDP}_G^{\sqcup,\ss}(\ov{\mathbb{Q}}) \subseteq \mathsf{WDP}_G^{\sqcup},\qquad \mathsf{WDP}_G^\ss(\ov{\mathbb{Q}})\subseteq \mathsf{WDP}_G \end{equation*} are dense. \end{cor} \subsection{Isomorphism over the discrete locus} In this final section we apply the above material to give a geometric analogue of Corollary \ref{cor:bij-et-disc} or, in other words, we show that the Jacobson--Morozov morphism is an isomorphism over the discrete loci in $\mathsf{LP}_G^K$ and $\mathsf{WDP}_G^K$. We have defined the discrete locus $\mathsf{WDP}_G^{K,\mathrm{disc}}$ in Definition \ref{defn:locus-of-red}, and we now do so for $\mathsf{LP}_G^K$. \begin{defn}\label{defn:disc-locus-L} Let $\psi^\mathrm{univ}$ be the universal $L$-parameter over $\mathsf{LP}_G^K$.
Then, the \emph{discrete locus} in $\mathsf{LP}_G^K$ is the subset \begin{equation*} \mathsf{LP}^{K,\mathrm{disc}}_G\vcentcolon= \left\{x\in \mathsf{LP}_G^K: Z_{\widehat{G}}(\psi^\mathrm{univ})_x/Z_0(\widehat{G})_x\to \Spec(k(x))\text{ is finite}\right\}. \end{equation*} \end{defn} The same argument as in the proof of Proposition \ref{prop:red-equi-dim-loc-closed} shows that $\mathsf{LP}_G^{K,\mathrm{disc}}$ is an open subset of $\mathsf{LP}_G^K$ and we endow it with the open subscheme structure. The following relates the discrete loci in $\mathsf{WDP}_G^K$ and $\mathsf{LP}_G^K$, giving a geometrization of Corollary \ref{cor:bij-et-disc}. \begin{prop} The equality $\mathsf{JM}^{-1}(\mathsf{WDP}_G^{K,\mathrm{disc}})=\mathsf{LP}_G^{K,\mathrm{disc}}$ holds. \end{prop} \begin{proof} As these are both open subsets of the finite type affine $\mathbb{Q}$-scheme $\mathsf{LP}_G^K$, it suffices to show that they have the same $\ov{\mathbb{Q}}$-points. In other words, we must show that for an element $\psi$ of $\mathsf{LP}_G^K(\ov{\mathbb{Q}})$ one has that $Z_{\widehat{G}}(\psi)$ is finite (as a set) if and only if $Z_{\widehat{G}}(\mathsf{JM}(\psi))$ is finite. Choosing an embedding $\ov{\mathbb{Q}}\to \bb{C}$, one then quickly deduces this from Proposition \ref{prop:temp-cent-equal} and its proof. \end{proof} From this, and Proposition \ref{prop:JM-isom-over-red-locus}, we deduce the following. \begin{thm}\label{thm:JM-isom-disc-locus} The morphism $\mathsf{JM}\colon \mathsf{LP}_G^{K,\mathrm{disc}}\to\mathsf{WDP}_G^{K,\mathrm{disc}}$ is an isomorphism. \end{thm} \bibliographystyle{test2}
\section{Introduction} Let $G=(V,E)$ be a graph. We call $|V|$ the {\it order} of $G$ and $|E|$ its {\it size}. If $|V|=n$, we call $G$ an $n$-vertex graph. Given a family $\mathcal{F}$ of graphs, a graph $G$ is said to be {\em $\mathcal{F}$-saturated} if $G$ does not contain a subgraph isomorphic to any member $F\in\mathcal{F}$ but $G+e$ contains at least one copy of some $F\in\mathcal{F}$ for any edge $e\notin E(G)$. The {\it Tur\'{a}n number} $\mbox{ex}(n,\mathcal{F})$ of $\mathcal{F}$ is the maximum number of edges in an $n$-vertex $\mathcal{F}$-saturated graph. The minimum number of edges in an $n$-vertex $\mathcal{F}$-saturated graph is called the {\it saturation number}, denoted by $\mbox{sat}(n,\mathcal{F})$, i.e. $$\mbox{sat}(n,\mathcal{F})=\min\{|E(G)| : G\mbox{ is an $n$-vertex }\mathcal{F}\mbox{-saturated graph}\}\mbox{.}$$ We call an $n$-vertex $\mathcal{F}$-saturated graph of size $\mbox{sat}(n,\mathcal{F})$ a {\it minimum extremal graph} for $\mathcal{F}$ and let $\mbox{Sat}(n,\mathcal{F})$ be the family of all $n$-vertex minimum extremal graphs for $\mathcal{F}$. Let $C_r$ denote the cycle of length $r$ and $\mathcal{C}_{\ge r}$ be the family of cycles of length at least $r$. Erd\H{o}s and Gallai (1959) proved the following theorem on the Tur\'an number of $\mathcal{C}_{\ge r}$. \begin{thm}[The Erd\H{o}s-Gallai Theorem, \cite{EG59}]\label{ex} Let $n\ge r$. Then $$\mbox{ex}(n,\mathcal{C}_{\ge r})\le\frac{(r-1)(n-1)}{2}.$$ \end{thm} For a single cycle $C_r$, many results on $\mbox{ex}(n, C_r)$ and $\mbox{sat}(n, C_r)$ are known; we review some of them in the following.
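The Erd\H{o}s--Gallai bound above can be confirmed by exhaustive search in small cases. The sketch below is not part of the paper (the helper names are our own; standard library only); it checks the bound for every graph on five labelled vertices and $r\in\{4,5\}$:

```python
from itertools import combinations

def longest_cycle(n, edges):
    """Length of a longest cycle in the graph on {0,...,n-1}; 0 if acyclic."""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    best = 0
    def dfs(start, v, visited):
        nonlocal best
        for w in adj[v]:
            if w == start and len(visited) >= 3:
                best = max(best, len(visited))
            elif w not in visited and w > start:  # force start = smallest cycle vertex
                dfs(start, w, visited | {w})
    for s in range(n):
        dfs(s, s, {s})
    return best

def check_erdos_gallai(n, r):
    """Verify ex(n, C_{>=r}) <= (r-1)(n-1)/2 over all graphs on n labelled vertices."""
    pairs = list(combinations(range(n), 2))
    bound = (r - 1) * (n - 1) / 2
    for k in range(len(pairs) + 1):
        for edges in combinations(pairs, k):
            if longest_cycle(n, edges) < r and k > bound:
                return False  # a C_{>=r}-free graph beating the bound
    return True
```

For $n=5$ and $r=4$ the bound is $6$, so the search confirms that every $5$-vertex graph with at least $7$ edges contains a cycle of length at least $4$.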
\begin{itemize} \item (Simonovits~\cite{Simo74}) $\mbox{ex}(n, C_{2k+1})=\lfloor\frac{n^2}4\rfloor$ for sufficiently large $n$; \item (Erd\H{o}s-Bondy-Simonovits~\cite{Bon-Sim74}, The Even Cycle Theorem) $\mbox{ex}(n, C_{2k})=O(n^{1+\frac1k})$; \item (Erd\H{o}s, Hajnal, and Moon~\cite{EHM64}) $\mbox{sat}(n,C_3)=n-1$ for $n\ge 3$; \item (Ollmann~\cite{Oll72}, Tuza~\cite{Tuz89}, Fisher et al.~\cite{FFL97}) $\mbox{sat}(n, C_4)=\lfloor\frac{3n-5}{2}\rfloor$ for $n\ge 5$; \item (Chen~\cite{Che09,Che11}) $\mbox{sat}(n, C_5)=\lceil\frac{10}{7}(n-1)\rceil$ for $n\ge 21$; \item (Barefoot et al.~\cite{BCE96} and Zhang et al.~\cite{Zhang15}) $\mbox{sat}(n,C_6)\le\lfloor\frac{3n-3}{2}\rfloor$ for $n\ge 9$; \item (F\"{u}redi and Kim~\cite{FK13}) $(1+\frac{1}{r+2})n-1<\mbox{sat}(n,C_r)<(1+\frac{1}{r-4})n+\binom{r-4}{2}$ for all $r\ge 7$ and $n\ge 2r-5$; \item (Clark, Entringer, and Shapiro~\cite{Clark83, Clark92}, Lin et al.~\cite{LJZY97}) $\mbox{sat}(n,C_n)=\lceil\frac{3n}{2}\rceil$ for $n=17$ or $n\ge 19$. \end{itemize} A natural question is to determine $\mbox{sat}(n,\mathcal{C}_{\ge r})$ for $n\ge r\ge 3$. It is trivial that $\mbox{sat}(n,\mathcal{C}_{\ge 3})=n-1$ and $\mbox{Sat}(n,\mathcal{C}_{\ge 3})=\{\mbox{tree on } n \mbox{ vertices}\}$. Ferrara et al.~\cite{Subdivision12} proved that \begin{thm}[Ferrara et al., Theorems 2.1, 2.13 and 2.17 in~\cite{Subdivision12}]\label{THM: subdivision} (1) For $r\ge 3$ and $n\ge n(r)$, there exists an absolute constant $c$ such that $$\frac{5n}{4}\le\mbox{sat}(n, \mathcal{C}_{\ge r})\le\left(\frac 54+\frac cr\right)n.$$ In particular, if $r\ge 36$, $c=8$ will suffice. (2) For $n\ge 1$, $\mbox{sat}(n, \mathcal{C}_{\ge4})=n +\lfloor\frac{n-3}4\rfloor$. (3) For $n\ge 5$, $\mbox{sat}(n, \mathcal{C}_{\ge5})=\lfloor\frac{10(n-1)}7\rfloor$.
\end{thm} In this paper, we determine the exact values of $\mbox{sat}(n,\mathcal{C}_{\ge r})$ for $r=6$ and for $r$ with $56\le r\le n\le 2r$, and give new lower and upper bounds of $\mbox{sat}(n,\mathcal{C}_{\ge r})$. The main results of the paper are the following. \begin{thm}\label{THM: 6} For $n\ge6$, \begin{equation*} \mbox{sat}(n,\mathcal{C}_{\ge6})=\begin{cases} 9 & n=6;\\ 11 & n=7;\\ 12 & n=8;\\ 13 & n=9;\\ \left\lceil \frac{3(n-1)}{2}\right\rceil & n\ge10. \end{cases} \end{equation*} \end{thm} \begin{thm}\label{THM: lower} $\mbox{sat}(n,\mathcal{C}_{\ge r})\ge n+\frac{r}{2}$ for $2r \ge n\ge r\ge 6$. \end{thm} To give the new upper bound of $\mbox{sat}(n,\mathcal{C}_{\ge r})$ and the exact value of $\mbox{sat}(n,\mathcal{C}_{\ge r})$ for $\frac n2\le r\le n$, we define a function $g(x)$ (see Figure~\ref{fuc}) on $x\in(0,1]\cap\mathbb{Q}$: \begin{equation} g(x)=\left\{\begin{array}{ll} 1+\frac{1}{2}x, & \mbox{if } x\in[\frac{1}{2},1],\\ 1+\frac{k}{2}x, & \mbox{if } x\in (\frac{1}{2k},\frac{2}{4k-3}],\\ 2-\frac{3k-3}{2}x, & \mbox{if } x\in (\frac{2}{4k-3},\frac{1}{2k-2}], \end{array} \right. \mbox{ for $k\ge 2$}. \end{equation} \begin{figure}[h] \centering \includegraphics[width=5.5in]{fuc.jpg} \caption{The image of $g(x)$}\label{fuc} \end{figure} \begin{thm}\label{THM: upper} (i) $\mbox{sat}(n,\mathcal{C}_{\ge r})\le g(\frac{r}{n})n+O(\frac{n}{r})$ for $n\ge r\ge 56$. (ii) $\mbox{sat}(n,\mathcal{C}_{\ge r})=n+\lceil\frac{r}{2}\rceil$ for $28\le\frac{n}2\le r\le n$. \end{thm} \noindent{\bf Remark:} (1) In fact, we have proved that $O(\frac nr)<\frac{2n}r$ in the proof of Theorem~\ref{THM: upper}. So, for $n\ge r\ge 56$, \begin{equation*} \mbox{sat}(n,\mathcal{C}_{\ge r})\begin{cases} =n+\lfloor\frac r2\rfloor & r\le n\le 2r;\\ \le (\frac {5k-3}{4k-3}+\frac 2r)n & 2(k-1)r-2(k-2)\le n<\frac{4k-3}2r;\\ \le (\frac{5k-3}{4k-3}+\frac 2r)n & \frac{4k-3}2r\le n < 2kr-2(k-1).
\end{cases} \end{equation*} The new upper bound is better than the one given in Theorem~\ref{THM: subdivision} for the first case, and for the other cases when $n$ is large enough, such as $n>r^2$. (2) Theorem~\ref{THM: upper} (ii) gives the exact value of $\mbox{sat}(n,\mathcal{C}_{\ge r})$ when $28\le\frac n2\le r\le n$; however, the lower bound given in Theorem~\ref{THM: lower} holds for $2r\ge n\ge r\ge 6$. (3) Ferrara et al.~\cite{Subdivision12} also observed that for large $n$, $\mbox{sat}(n,C_r)$ and $\mbox{sat}(n, \mathcal{C}_{\ge r})$ agree for $r=3$ and 5 and differ for all other values of $r$, save perhaps for $r=6$. From Barefoot et al.~\cite{BCE96}, Zhang et al.~\cite{Zhang15} and Theorem~\ref{THM: 6}, we know that $\mbox{sat}(n,C_6)<\mbox{sat}(n, \mathcal{C}_{\ge 6})$ when $3n-3$ is odd and $n\ge 10$. The rest of the article is arranged as follows. We give the proof of Theorem~\ref{THM: 6} in Section 2. In Section 3, we will give a structural theorem for $\mathcal{C}_{\ge r}$-saturated graphs and the proof of Theorem~\ref{THM: lower}. We prove Theorem~\ref{THM: upper} in Section 4 and give some remarks in the last section. \section{Proof of Theorem~\ref{THM: 6}} The following result is due to Dirac. \begin{thm}[Dirac 1952, Theorem 4 in~\cite{Dirac52}]\label{THM: Dirac52} Let $G$ be a connected graph of order $n$ with $\delta(G)\ge d$. If $n\ge 2d$ then $G$ contains a path of length at least $2d$. \end{thm} Note that a cycle is 2-connected. From the definition of the $\mathcal{C}_{\ge r}$-saturated graph, we have the following two facts. \begin{fact}\label{FACT: sm} A $\mathcal{C}_{\ge r}$-saturated graph $G$ on $n$ vertices must be connected. \end{fact} \begin{fact}\label{FACT: f2} Let $G$ be a $\mathcal{C}_{\ge r}$-saturated graph. Then any pair of nonadjacent vertices in $G$ must be connected by a path of length at least $r-1$ in $G$.
\end{fact} Given integers $n\ge k\ge 2r$, let $H(n,k,r)$ be the graph obtained from the complete graph $K_{k-r}$ by connecting each vertex of the empty graph $\overline{K_{n-k+r}}$ to the same $r$ vertices of $K_{k-r}$; we call these $r$ vertices of $K_{k-r}$ the {\it center} of $H(n,k,r)$. The following result is due to Kopylov~\cite{Kopylov77}. \begin{thm}[Kopylov~\cite{Kopylov77}]\label{THM:Koplov} Let $n\ge k\ge 5$ and let $r=\lfloor\frac{k-1}2\rfloor$. If $G$ is a 2-connected $n$-vertex graph with $e(G)\ge\max\{e(H(n, k, 2)), e(H(n, k,r))\}$, then either $G$ has a cycle of length at least $k$, or $G=H(n,k,2)$ or $G=H(n,k,r)$. \end{thm} Note that when $k=6$, $r=\lfloor\frac{k-1}2\rfloor=2$. So we have the following corollary. \begin{cor}\label{COR: c_5} $H(n,6,2)$ is $\mathcal{C}_{\ge 6}$-saturated for $n\ge 6$. \end{cor} The following theorem due to Whitney~\cite{Whitney32} characterizes the structure of 2-connected graphs. Given a graph $H$, we call $P$ an {\em $H$-path} if $P$ is nontrivial and meets $H$ exactly in its ends. We call a path connecting vertices $u$ and $v$ a {\it $(u,v)$-path}. \begin{thm}[Whitney, 1932]\label{THM: EAR-DECOM} A graph is 2-connected if and only if it can be constructed from a cycle by successively adding $H$-paths to the graph $H$ already constructed. \end{thm} Let $D(a,b)$ be the graph on $a+b+3$ vertices whose vertex set is $\{t_1,t_2,t_3\}\cup A\cup B$ with $|A|=a,|B|=b$ and $$E(D(a,b))=\{t_1t_2,t_1t_3,t_2t_3\}\cup\{ut_1, ut_2 : u\in A\}\cup \{vt_1, vt_3 : v\in B\}\mbox{.}$$ We call $\{t_1, t_2, t_3\}$ the {\it center} of $D(a,b)$. Clearly, the center vertices have degree $a+b+2$, $a+2$ and $b+2$, respectively. It is easy to check that $D(a,b)$ is $\mathcal{C}_{\geq6}$-saturated if $a,b\ge 2$. \begin{lem}\label{61} Let $G$ be a $2$-connected graph on $n\geq6$ vertices. If $G$ is $\mathcal{C}_{\geq6}$-saturated, then $G$ is isomorphic to $H(n,6,2)$ for $n\ge 6$ or $D(a,b)$ for some $a,b\ge 2$ with $a+b+3=n$.
\end{lem} \begin{proof} Since $G$ is $2$-connected, $\delta(G)\ge 2$. If $\delta(G)\ge 3$, then by Theorem~\ref{THM: Dirac52}, $G$ has a cycle of length at least $6$, a contradiction. Hence $\delta(G)=2$. \begin{claim}\label{CL: 2-vertex} Every vertex of degree two is contained in a triangle. \end{claim} Otherwise, suppose there is a vertex $v$ with $N_{G}(v)=\{u_1,u_2\}$ and $u_1u_2\notin E(G)$. By Fact~\ref{FACT: f2}, there is a $(u_1,u_2)$-path $P$ of length at least $5$ in $G$. Clearly, $v\notin V(P)$. So $C=P+u_1vu_2$ is a cycle in $G$ of length greater than $6$, a contradiction. \medskip If $G$ contains a subgraph $H$ isomorphic to $K_4$, we claim that $V(G)\setminus V(H)$ is an independent set in $G$. If not, let $G'=G-V(H)$ and let $e=uv$ be an edge in $G'$; then there are two vertex disjoint paths $P_1, P_2$ connecting $u,v$ and $V(H)$ by the well-known Menger Theorem. But any pair of vertices in $H$ is connected by a path $P_3$ of length three. So $P_1+P_3+P_2+e$ is a cycle of length at least six, a contradiction. Therefore, all vertices of $V(G)\setminus V(H)$ must be of degree 2 and have common neighbors in $V(H)$ because $G$ is 2-connected and $\mathcal{C}_{\ge 6}$-free. This implies that $G\cong H(n,6,2)$, as desired. Now suppose $G$ contains no $K_4$. By Claim~\ref{CL: 2-vertex} and $\delta(G)=2$, $G$ must contain a triangle with a vertex of degree $2$ and two vertices of degree at least $3$, say $u$ and $v$. Since $G$ is $2$-connected and $\mathcal{C}_{\ge 6}$-saturated, all $(u,v)$-paths must have length $2$ or $3$. Further, by Claim~\ref{CL: 2-vertex} and the 2-connectivity of $G$, it is easy to show that $G$ contains $K_4^-$ (a copy of $K_4$ minus an edge) with a vertex of degree 2 as a subgraph. \begin{claim}\label{CL: K_4-} There is such a copy $H$ of $K_4^-$ with the property that $V(G)\setminus V(H)$ is an independent set. \end{claim} Let $V(H)=\{v_1, v_2, v_3, v_4\}$ and $H=K_4-\{v_2v_4\}$. Without loss of generality, assume $d_G(v_4)=2$. Let $S=V(G)\setminus V(H)$.
If $G[S]$ is not empty, since $G$ is $2$-connected, there is an $H$-path $P$ of length at least 3 connecting $v_i$ and $v_j$ for some $i,j\in\{1,2,3\}$. If there is a $(v_i,v_j)$-path $P'$ of length 3 in $H$, then $P+P'$ is a cycle of length at least $6$, a contradiction. So $P$ has to be of length three and $\{v_i, v_j\}=\{v_1, v_3\}$. Assume $P=v_1w_1w_2v_3$. Now it can be checked that every pair of vertices in $H\cup P$ is connected by a path of length at least three in $H\cup P$. So there is no $H\cup P$-path of length at least three in $G$. Therefore, $S'=V(G)\setminus(V(H\cup P))$ is an independent set in $G$. If $d_G(w_i)\ge 3$ for $i=1,2$, let $w_i'\in N_G(w_i)\cap S'$. By the Menger Theorem, there are two internal vertex disjoint paths $P_1$ and $P_2$ connecting $w_1'$ and $w_2'$. Since $G[S']$ is empty, $P_i$ ($i=1,2$) must contain edges in $H\cup P$, which implies that $P_i$ ($i=1,2$) has length at least three and so $P_1\cup P_2$ is a cycle of length at least 6, a contradiction. Thus, at least one of $w_1, w_2$, say $w_1$, is of degree two. By Claim~\ref{CL: 2-vertex}, $v_1w_2\in E(G)$. Hence $\{v_1, w_1, w_2, v_3\}$ induces a copy $H'$ of $K_4^-$ with $d_G(w_1)=2$. If $d_G(v_2)=2$ then $H'$ is a copy of $K_4^-$ as claimed. Now suppose $d_G(v_2)\ge 3$ and $v_2'\in N_G(v_2)\cap S'$. Then $d_G(v_2')=2$ and $N_G(v_2')\setminus\{v_2\}\subset\{v_1, v_3, w_2\}$. But this is impossible, since, otherwise, we can find a cycle of length at least 6 because there is a path of length at least 4 connecting $v_2$ and any one of $\{v_1, v_3, w_2\}$ in $H\cup P$. The claim is true.
Again since $G$ is $\mathcal{C}_{\ge 6}$-free, at least one of $A, B, C$ is empty. Without loss of generality, assume $C=\emptyset$, $|A|=a$ and $|B|=b$, i.e. $G\cong D(a,b)$ for $a\ge 0$, $b\ge 0$. We claim that $a\ge 2$ and $b\ge 2$. Clearly, $v_4\in B$. If $a=0$, then $G\cong H(n, 5, 2)$, which is not $\mathcal{C}_{\ge 6}$-saturated. Without loss of generality, assume $a\ge b$. If $B=\{v_4\}$, then the longest path connecting $v_2, v_4$ has length at most $4$, a contradiction, too. So we have $a\ge b\ge 2$. We are done. \end{proof} Let $B_2(G)$ be the set of blocks of $G$ isomorphic to $K_2$ and $b_2(G)=|B_2(G)|$. \begin{lem}\label{LEM:cut} Let $G$ be a $\mathcal{C}_{\ge r}$-saturated graph for $r\ge 4$. Then the following hold. (a) Every block $B$ of $G$ is $\mathcal{C}_{\ge r}$-saturated. Specifically, each block $B$ with $|V(B)|<r$ is a complete graph. (b) $B_2(G)$ forms a matching of $G$. \end{lem} \begin{proof} (a) Let $B$ be a block of $G$. Since $B$ is a maximal 2-connected subgraph of $G$, any cycle containing edges of $B$ and any path connecting two nonadjacent vertices in $B$ must be totally contained in $B$. Since $G$ is $\mathcal{C}_{\ge r}$-saturated, $B$ contains no cycle of length at least $r$, and any pair of nonadjacent vertices in $B$ is connected by a path of length at least $r-1$ in $B$, i.e. $B$ is $\mathcal{C}_{\ge r}$-saturated too. Specifically, if $|V(B)|<r$ then the longest path in $B$ has length at most $r-2$. Hence $B$ contains no nonadjacent vertices, i.e., $B$ is a complete graph. (b) Suppose there is a vertex $u$ incident with two blocks of $B_2(G)$, say $uv_1,uv_2$. Then $v_1v_2\notin E(G)$, otherwise $uv_1,uv_2$ is contained in the triangle $uv_1v_2u$, a contradiction to the fact that $uv_1, uv_2\in B_2(G)$. So there exists a $(v_1,v_2)$-path $P$ of length at least $r-1$ in $G$. However, both $uv_1$ and $uv_2$ are cut edges, which forces that $uv_1,uv_2\in E(P)$, i.e., $P=v_1uv_2$, a contradiction to $|V(P)|\ge r\ge 4$.
\end{proof} An {\it$(a,b,c,d,f)$-cactus}, denoted by $T(a,b,c,d,f)$, is a connected graph whose blocks consist of $a$ copies of $K_3$, $b$ copies of $K_4$, $c$ copies of $K_5$, $d$ members in $\{D(r,s) : r,s\ge 2\}$ and $f$ members in $\{H(t,6,2) : t\ge 6\}$. \begin{lem}\label{62} A graph $G$ is $\mathcal{C}_{\ge6}$-saturated if and only if (i) $G$ is connected and $B_2(G)$ forms a matching of $G$; (ii) $G$ contains no $T(a, 0,0,0,0)$ with $a\ge 2$; (iii) the center vertices of $D(r,s)$, $H(t,6,2)$ and the vertices of blocks $K_3, K_4$ cannot be incident with a cut edge; (iv) each component of $G-B_2(G)$ is isomorphic to $K_1$ or $T(a,b,c,d,f)$ with $c+d+f\ge 1$ whenever $a+b\le 1$. \end{lem} \begin{proof} {\bf Necessity:} (i) The connectivity of $G$ comes from the definition of $\mathcal{C}_{\ge 6}$-saturation of $G$ and, by Lemma~\ref{LEM:cut} (b), $B_2(G)$ forms a matching. (ii) Otherwise, there are two triangles $B_1$ and $B_2$ such that $|V(B_1)\cap V(B_2)|=1$. Suppose $V(B_1)=\{v_1, v_2, x\}$ and $V(B_2)=\{u_1,u_2, x\}$. Then $G+v_1u_1$ contains a cycle of length at least $6$. However, $v_1u_1$ is in a block of size $5$ in $G+v_1u_1$, which is a contradiction. (iii) Suppose to the contrary that there is a cut edge $xy$ with $x$ a center vertex of $D(r,s)$, $H(t,6,2)$ or a vertex of a block isomorphic to $K_3, K_4$. Choose $z$ to be a center vertex other than $x$ in $D(r,s)$ (if $x$ is of degree $r+2$ or $s+2$ then choose $z$ to be the center vertex of degree $r+s+2$), $H(t,6,2)$ or a vertex other than $x$ of $K_3, K_4$. Note that $y$ is a cut vertex not contained in the same block as $z$. So $yz\notin E(G)$ and $G+yz$ should contain a cycle of length at least $6$. However, the edge $yz$ is in a block of $G+yz$ isomorphic to $D(r+1,s), D(r,s+1), H(t+1, 6,2)$ or a block of size at most $5$, which contains no cycle of length at least $6$ by Lemma~\ref{61}, a contradiction.
(iv) We first show that every block of $G$ is isomorphic to one of $\{K_t : 1\le t\le5\}\cup\{D(r,s): r,s\ge 2\}\cup\{H(t,6,2) : t\ge 6\}$. Let $B$ be a block of $G$. If $|V(B)|\le 5$, then $B\cong K_t$ with $1\le t\le 5$ by Lemma~\ref{LEM:cut} (a) and we are done. Now suppose $|V(B)|\ge 6$. Then $B$ is a $2$-connected $\mathcal{C}_{\ge6}$-saturated graph on at least $6$ vertices. By Lemma~\ref{61}, either $B\cong D(r,s)$ with $r,s\ge 2$ or $B\cong H(t,6,2)$ with $t\ge 6$. We are done, too. If a component of $G-B_2(G)$ is $T(a,b, 0,0,0)$ then $a+b\ge 1$. If $a+b=1$, then, by (iii), $G$ must be $T(a,b,0,0,0)$, which is of order at most 4, a contradiction to $n\ge 6$. \medskip \noindent {\bf Sufficiency:} By Lemma~\ref{61}, $G$ is $\mathcal{C}_{\ge 6}$-free. It is sufficient to show that the addition of any non-edge to $G$ induces a cycle of length at least 6. It can be checked that each component $T(a,b,c,d,f)$ of $G-B_2(G)$ is $\mathcal{C}_{\ge 6}$-saturated for $c+d+f\ge 1$ if $a+b\le 1$, and the addition of any non-edge between two components also gives a cycle of length at least $6$ by (iii). \end{proof} The following lemma gives the lower bound of $\mbox{sat}(n, \mathcal{C}_{\ge 6})$. \begin{lem}\label{63} For $n\ge6$, \begin{equation*} \mbox{sat}(n,\mathcal{C}_{\ge6})\begin{cases} =9 & n=6;\\ =11 & n=7;\\ =12 & n=8;\\ =13 & n=9;\\ \ge\left\lceil \frac{3(n-1)}{2}\right\rceil & n\ge10. \end{cases} \end{equation*} \end{lem} \begin{proof} Let $G$ be a minimum $\mathcal{C}_{\ge 6}$-saturated graph on $n$ vertices. Let $b_2(G)$, $b_3(G)$, $b_4(G)$, $b_5(G)$, $b(G)$ and $b^*(G)$ denote the numbers of blocks isomorphic to $K_2$, $K_3$, $K_4$, $K_5$, members in $\{D(r,s) : r,s\ge 2\}$ and members in $\{H(t,6,2) : t\ge 6\}$, respectively. Suppose all the blocks of the form $D(r,s)$ are $\{D(r_G^i, s_G^i): i\in[b(G)]\}$ and all the blocks of the form $H(t,6,2)$ are $\{H(t_G^j, 6,2) : j\in[b^*(G)]\}$. In the following, we write $r^i, s^i$ and $t^j$ for $r_G^i, s_G^i$ and $t^j_G$.
By Lemma~\ref{62}, each component of $G-B_2(G)$ is isomorphic to $K_1$ or $T(a,b,c,d,f)$. Let $C(G)$ be the set of all components of $G-B_2(G)$. For each component $H\in C(G)$, we have $|V(H)|=1+2a+3b+4c+\sum_{i=1}^d(r_H^i+s_H^i+2)+\sum_{j=1}^{f}(t_H^j-1)$. Since $B_2(G)$ is a matching, the number of components in $G-B_2(G)$ is $b_2(G)+1$. So $$|V(G)|=\sum_{H\in C(G)}|V(H)|=(b_2+1)+2b_3+3b_4+4b_5+\sum_{i=1}^{b(G)}(r^i+s^i+2)+\sum_{j=1}^{b^*(G)}(t^j-1)\mbox{.}$$ By (i) and (iii) of Lemma~\ref{62}, $$b_2\le 5b_5+\sum_{i=1}^{b(G)}(r^i+s^i)+\sum_{j=1}^{b^*(G)}(t^j-2)\mbox{.}$$ Therefore, \begin{equation*} \begin{split} |E(G)|&=b_2+3b_3+6b_4+10b_5+\sum_{i=1}^{b(G)}(2r^i+2s^i+3)+\sum_{j=1}^{b^*(G)}(2t^j-2)\\ &=\frac{3}{2}\left(b_2+2b_3+3b_4+4b_5+\sum_{i=1}^{b(G)}(r^i+s^i+2)+\sum_{j=1}^{b^*(G)}(t^j-1)\right)\\ &+\frac{1}{2}\left(-b_2+3b_4+8b_5+\sum_{i=1}^{b(G)}(r^i+s^i)+\sum_{j=1}^{b^*(G)}(t^j-1)\right)\\ &\ge \frac{3}{2}(|V(G)|-1)+\frac{1}{2}\left(3b_4+3b_5+b^*(G)\right)\\ &\ge\frac{3}{2}(n-1). \end{split} \end{equation*} Thus we have $|E(G)|\ge\lceil\frac{3}{2}(n-1)\rceil$ for $n\ge 10$. For $6\le n\le 9$, since $r^i, s^i\ge 2$ and $t^j\ge 6$, we have $$1+b_2+2b_3+3b_4+4b_5+6b(G)+5b^*(G)\le n\le 9.$$ Hence we can list all of the $\mathcal{C}_{\ge 6}$-saturated graphs of order $n=6,7,8,9$ and compare their numbers of edges. All the minimum $\mathcal{C}_{\ge 6}$-saturated graphs of order $n$ with $6\le n\le 11$ are listed in Figure~\ref{sat6}. This completes the proof. \begin{figure}[h] \centering \includegraphics[width=5.5in]{sat6.jpg} \caption{All minimum saturated graphs for $n=6,7,8,9,10,11$.}\label{sat6} \end{figure} \end{proof} To complete the proof of Theorem~\ref{THM: 6}, it is sufficient to construct a $\mathcal{C}_{\ge 6}$-saturated graph of order $n\ge 10$ with $\lceil\frac{3}{2}(n-1)\rceil$ edges. For odd $n\ge 10$, let $M_{6,n}$ be the graph obtained from $D(\frac{n-7}{2},2)$ by appending a leaf to each of its vertices except the center vertices.
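The construction $M_{6,n}$ can be checked mechanically for a concrete odd $n$. The following sketch is not part of the paper (the helper names are our own; standard library only); it builds $M_{6,n}$, counts vertices and edges, and verifies by exhaustive search that no cycle of length at least $6$ occurs:

```python
def build_D(a, b):
    """D(a,b): triangle t1,t2,t3; a vertices joined to t1,t2; b vertices joined to t1,t3."""
    V = ['t1', 't2', 't3'] + ['a%d' % i for i in range(a)] + ['b%d' % i for i in range(b)]
    E = {frozenset(p) for p in (('t1', 't2'), ('t1', 't3'), ('t2', 't3'))}
    E |= {frozenset(('a%d' % i, t)) for i in range(a) for t in ('t1', 't2')}
    E |= {frozenset(('b%d' % i, t)) for i in range(b) for t in ('t1', 't3')}
    return V, E

def build_M6(n):
    """M_{6,n} for odd n >= 11: D((n-7)/2, 2) plus a pendant leaf at every non-center vertex."""
    V, E = build_D((n - 7) // 2, 2)
    for v in [u for u in V if not u.startswith('t')]:
        V.append('L' + v)
        E.add(frozenset((v, 'L' + v)))
    return V, E

def longest_cycle(V, E):
    """Longest cycle length via exhaustive DFS (fine for these small graphs)."""
    adj = {v: set() for v in V}
    for e in E:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    best = 0
    def dfs(start, v, visited):
        nonlocal best
        for w in adj[v]:
            if w == start and len(visited) >= 3:
                best = max(best, len(visited))
            elif w not in visited and w > start:  # start is the smallest cycle vertex
                dfs(start, w, visited | {w})
    for s in V:
        dfs(s, s, {s})
    return best
```

For $n=11$ this gives $11$ vertices, $15=\lceil 3(11-1)/2\rceil$ edges, and longest cycle length $5$, consistent with the claimed counts and $\mathcal{C}_{\ge 6}$-freeness.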
It is easy to check that $|V(M_{6,n})|=n$ and $|E(M_{6,n})|=\lceil\frac{3}{2}(n-1)\rceil$. By Lemma~\ref{62}, $M_{6,n}$ is a $\mathcal{C}_{\ge 6}$-saturated graph and we are done. For even $n\ge 10$, let $M_{6,n}$ be obtained by deleting one leaf from $M_{6,n+1}$. Again by Lemma~\ref{62}, $M_{6,n}$ is $\mathcal{C}_{\ge 6}$-saturated, and it can be checked that $|E(M_{6,n})|=\lceil\frac{3}{2}(n-1)\rceil$. \section{Structural theorem for $\mathcal{C}_{\ge r}$-saturated graphs and a new lower bound} For a graph $G$ and a subset $X\subseteq V(G)$, let $\delta_{G}(X)=\min\{d_{G}(v) : v\in X\}$ and $\Delta_{G}(X)=\max\{d_{G}(v): v\in X\}$. We write $d_{G}(X)=d$ for short if $\delta_{G}(X)=\Delta_{G}(X)=d$. Let $\overline{d}_{G}(X)=\frac{1}{|X|}\sum_{v\in X}d_{G}(v)$ be the average degree of $X$. Let $N_G(X)$ be the set of neighbors of $X$ outside of $X$. For a graph $G$ and two disjoint vertex sets $U, W\subset V(G)$, let $G[U]$ be the subgraph induced by $U$, and $G[U, W]$ be the bipartite subgraph of $G$ with vertex classes $U, W$ and edge set $$E_G[U,W]=\{uv\in E(G):u\in U\mbox{ and }v\in W\}\mbox{.}$$ The following lemma characterizes the $\mathcal{C}_{\ge 4}$-saturated graphs and will be used in the proofs of this section. \begin{lem}[Proposition 2.12 in~\cite{Subdivision12}]\label{LEM: 4str} A graph $G$ is $\mathcal{C}_{\ge 4}$-saturated if and only if (1) $B_2(G)$ forms a matching of $G$; (2) every component of $G-B_2(G)$ is isomorphic to $K_1$ or $T(t,0,0,0,0)$ for some $t\ge 1$. \end{lem} The following lemma gives the structure of a $\mathcal{C}_{\ge r}$-saturated graph for $r\ge 6$. \begin{figure}[h] \centering \includegraphics[width=3in]{strc.jpg} \caption{The structure of a $\mathcal{C}_{\ge r}$-saturated graph for $r\ge 6$}\label{strc} \end{figure} \begin{lem}\label{THM: structure} Let $G$ be a $\mathcal{C}_{\ge r}$-saturated graph on $n$ vertices for $n\ge r\ge 6$.
Let $X_1$ be the set of leaves in $G$ and $X_3=\{v\in V(G): d_{G}(v)=3 \mbox{ and } v\in N_{G}(X_1)\}$ and $X_{\ge 4}=\{v\in V(G): d_{G}(v)\ge 4 \mbox{ and } v\in N_{G}(X_1)\}$. Let $X_2'$ be the set of vertices of degree two with at least one neighbor of degree two and $X_2$ be the rest of the vertices of degree two. Let $Y=N_G(X_2'\cup X_2\cup X_3)\setminus X_1$ and $Z$ be the set of remaining vertices in $G$. Then the following hold. (i) $G[X_1]$, $G[X_1, X_2\cup X_2'\cup Y\cup Z]$, $G[X_2\cup X_3]$, $G[X_2\cup X_3, X_2']$, $G[X'_2\cup X_2\cup X_3, X_{\ge 4}\cup Z]$ are all empty graphs; (ii) Both $G[X_2']$ and $G[X_1, X_3\cup X_{\ge 4}]$ are perfect matchings; (iii) For each $uv\in G[X_2']$, there is a $w\in Y$ such that $w\in N_G(u)\cap N_G(v)$; (iv) If $Y, Z\cup X_{\ge 4}\neq\emptyset$ then $E_G[Y, Z\cup X_{\ge 4}]\not=\emptyset$; (v) For each vertex of $X_2\cup X_3$, its two neighbors in $Y$ are adjacent. (vi) Let $Y_1$ be the set of isolated vertices in $G[Y]$ and $Y_2=Y\setminus Y_1$. Let $H=G[Y\cup Z\cup X_{\ge 4}]$. Then $\delta_{H}(Y)\ge 2$ and $\overline{d}_{H}(Y_2)\ge\frac{5}{2}$. The structure of $G$ is shown in Figure~\ref{strc}. \end{lem} \begin{proof} (i). By definition of $X_1$, a component of $G[X_1]$ is either an edge or an isolated vertex. Since $G$ is connected and $n\ge r\ge 6$, $X_1$ must be an independent set of $G$. By definition of $Y$ and $Z$, $G[X_1, Y\cup Z]$ is an empty graph. Clearly, every vertex of $X_1$ is contained in a block isomorphic to $K_2$. If there exists a vertex $v\in N_{G}(X_1)\cap(X_2\cup X_2')$, then $v$ is a cut vertex of $G$ and so $v$ is contained in two adjacent blocks each of which is isomorphic to $K_2$, a contradiction to (b) of Lemma~\ref{LEM:cut}. Therefore, $G[X_1, X_2\cup X_2']$ is empty. By definition, $G[X_2]$ is empty. Suppose there is an edge $uv\in E(G[X_2\cup X_3])$ and $u\in X_3$. Let $u'$ be the leaf adjacent to $u$.
Since $v\in X_2\cup X_3$ and $G[X_1, X_2]$ is empty, $v$ must have a non-leaf neighbor, say $w$. Then $u'w\notin E(G)$. Thus there is a $(u',w)$-path $P$ of length at least $r-1\ge 5$ in $G$. Clearly, $v\notin V(P)$, otherwise $P=wvuu'$ is of length three, a contradiction. So $P-u'u+uvw$ is a cycle of length at least $r$ in $G$, a contradiction to the fact that $G$ is $\mathcal{C}_{\ge r}$-saturated. Therefore, $G[X_2\cup X_3]$ is an empty graph. By a similar argument, there is no edge $uv$ with $u\in X_3$ (or $u\in X_{\ge 4}$) and $v\in X_2'$ (or $v\in X_2'\cup X_2\cup X_3$). That is, $G[X_3, X_2']$ (or $G[X_2'\cup X_2\cup X_3, X_{\ge 4}]$) is empty. Since $E(G[X_2, X_2'])$ is empty by definition, we have that $G[X_2\cup X_3, X_2']$ is an empty graph. By definition, $G[X_2'\cup X_2\cup X_3, Z]$ is empty. So $G[X_2'\cup X_2\cup X_3, X_{\ge 4}\cup Z]$ is empty too. The proof of (i) is complete. By (i), we have $X_3\cup X_{\ge 4}=N_G(X_1)$ and $X_1,X_2,X_2',X_3,X_{\ge 4},Y,Z$ form a partition of $V(G)$. (ii). By the definition and (b) of Lemma~\ref{LEM:cut}, $G[X_1, X_3\cup X_{\ge 4}]$ is a matching. To complete (ii), we prove that $\Delta(G[X_2'])=\delta(G[X_2'])=1$. By definition, $\delta(G[X_2'])\ge 1$. Suppose there exists a vertex $v\in X_2'$ having two neighbors in $X_2'$, say $u_1,u_2$. Then $u_1u_2\notin E(G)$, otherwise $G[\{v,u_1,u_2\}]$ forms a component of $G$, a contradiction to the connectivity of $G$. So $G$ contains a $(u_1, u_2)$-path $P$ of length at least $r-1\ge 5$. Clearly, $v\notin V(P)$, otherwise $P=u_1vu_2$ is of length two, a contradiction. So $P+u_1vu_2$ is a cycle in $G$ of length at least $r+1$, a contradiction. (iii). Let $uv$ be a component in $G[X_2']$ and $u'$ (resp. $v'$) be the second neighbor of $u$ (resp. $v$). Then $u',v'\in Y$. To complete (iii), we show that $u'=v'$. If not, then $u'v\notin E(G)$. Hence $G$ contains a $(u',v)$-path $P$ of length at least $r-1\ge 5$ in $G$.
By a similar argument as in (ii), we have $u\notin V(P)$ and so $P+u'uv$ is a cycle of length at least $r+1$ in $G$, a contradiction. (iv). If not, then $G[Z\cup X_{\ge 4}\cup N_{G}(X_{\ge 4})]$ forms a component of $G$, which is a contradiction to Fact~\ref{FACT: sm}. (v). If not, then there is a $w\in X_2\cup X_3$ with $N_G(w)\cap Y=\{u,v\}$ but $uv\notin E(G)$. So there is a $(u,v)$-path $P$ of length at least $r-1$ in $G$. For the same reason as in (ii), $w\notin V(P)$. Therefore, $P+uwv$ is a cycle of length at least $r+1$ in $G$, a contradiction. (vi). Recall that $H=G[Y\cup Z\cup X_{\ge 4}]$. We first prove $\delta_H(Y)\ge 2$. Suppose there exists a vertex $v\in Y$ with $d_{H}(v)\le 1$. If $d_H(v)=0$, then by (v), $E_G[v, X_2\cup X_3]=\emptyset$. So $N_G(v)\subseteq X_2'$. By (iii), the component containing $v$ is isomorphic to $T(t,0,0,0,0)$ for some $t>0$. By Fact~\ref{FACT: sm}, $G$ is connected, which implies that $G$ is isomorphic to $T(t,0,0,0,0)$. Clearly, $T(t,0,0,0,0)$ is not $\mathcal{C}_{\ge r}$-saturated for $r\ge 6$, a contradiction. Now suppose $d_H(v)=1$ and let $N_H(v)=\{u\}$. By (v), $N_G(v)\cap(X_2\cup X_3)\subseteq N_G(u)\cap(X_2\cup X_3)$. By (iii), $N_G(v)\cap X_2'$ is disjoint from $N_G(u)\cap X_2'$. We first claim that $N_G(v)\cap X_2'=\emptyset$. If not, choose $w\in N_G(v)\cap X_2'$. Then $wu\notin E(G)$ because $u$ and $v$ have no common neighbor in $X_2'$. So there is a $(u,w)$-path of length at least $r-1\ge 5$ in $G$. Since the edge containing $w$ in $G[X_2']$ only connects to $v$ in $G$, any $(u,w)$-path must pass through $v$. But the longest $(u,v)$-path in $G$ has length at most two (equality holds when $N_G(v)\cap(X_2\cup X_3)\not=\emptyset$) and the longest $(v,w)$-path has length two, so the longest $(u,w)$-path has length at most four, a contradiction. By a similar argument, we have $N_G(u)\cap X_2'=\emptyset$.
Therefore, the block $B$ containing $v$ is isomorphic to $H(k, 4, 2)$ centered at $\{u,v\}$, where $k=|N_G(v)\cap(X_2\cup X_3)|+2$. If $|N_G(v)\cap(X_2\cup X_3)|\ge 2$, then adding any edge between two vertices of $X_2\cup X_3$ gives rise to a longest cycle of length at most $5\le r-1$ in $B$, a contradiction to the $\mathcal{C}_{\ge r}$-saturation of $G$. So $|N_G(v)\cap(X_2\cup X_3)|\le 1$ and thus $d_G(v)\le 2$, which is a contradiction to $d_G(v)\ge 3$. Therefore, we have $\delta_H(Y)\ge 2$. Now we show that $\overline{d}_{H}(Y_2)\ge\frac{5}{2}$ using a discharging argument. Recall that every vertex of $Y_2$ has at least one neighbor in $Y_2$. \begin{claim}\label{CLAIM: c1} For any $v\in Y_2$ with $d_{H}(v)=2$, the two neighbors of $v$ are adjacent. \end{claim} If not, denote $N_H(v)=\{v_1, v_2\}$; then there is a $(v_1,v_2)$-path $P$ of length at least $r-1\ge 5$ in $G$. If $v\in V(P)$, then since $G[X'_2\cup X_2\cup X_3, X_{\ge 4}\cup Z]$ is empty by (i), the only vertices used by $P$ are $v_1,v_2,v$ and at most two vertices in $X_2\cup X_3$, i.e., $P$ has length at most $4<r-1$, a contradiction. Hence, $v\notin V(P)$. It follows that $P+v_1vv_2$ is a cycle of length at least $r+1$ in $G$, a contradiction too. \begin{claim}\label{CLAIM: c2} For any pair of vertices $u,v\in Y_2$ with $d_{H}(u)=d_{H}(v)=2$, $uv\notin E(G)$. \end{claim} Otherwise, let $w$ be the other neighbor of $v$ in $H$; then $wu\in E(G)$ by Claim~\ref{CLAIM: c1}. Hence the triangle $T=uvwu$ forms a block of $H$. Let $B$ be the block of $G$ containing $T$. Then $B$ is obtained from $T$ by adding $T$-paths of length exactly two, each of which has ends in $\{u,v,w\}$ and the internal vertex in $X_2\cup X_3$. If $B\cap (X_2\cup X_3)\not=\emptyset$, we claim that $B$ is not $\mathcal{C}_{\ge r}$-saturated, so we have a contradiction to Lemma~\ref{LEM:cut}. In fact, let $P=uxv$ be a $T$-path with $x\in X_2\cup X_3$. Then $wx\notin E(G)$.
But the longest $(w,x)$-path has length at most four by the structure of $B$. So $B$ is not $\mathcal{C}_{\ge r}$-saturated. Now assume $B\cap (X_2\cup X_3)=\emptyset$. That is, $B=T=uvwu$. Since $d_G(u)\ge 3$, by (ii), there must be an edge $u_1u_2\in G[X_2']$ such that the triangle $T'=uu_1u_2u$ forms a block of $G$. Clearly, the longest path connecting any pair of nonadjacent vertices in $V(T)\cup V(T')$ has length at most $4<r-1$, a contradiction. A vertex $v\in Y_2$ with $d_H(v)=k$ (or $d_H(v)\ge k$) is called a {\it $k$-vertex} (or a {\it $k^+$-vertex}). From Claims~\ref{CLAIM: c1} and~\ref{CLAIM: c2}, we have that for each 2-vertex $v\in Y_2$, either $v$ has two adjacent $3^+$-neighbors in $Y_2$ (we call $v$ an inner vertex), or $v$ has two adjacent neighbors such that one is a $3^+$-vertex in $Y_2$ and the other is in $Z\cup X_{\ge 4}$ (we call $v$ a boundary vertex). \begin{claim}\label{CLAIM: c3} Every $3^+$-vertex $v\in Y_2$ has at most $d_H(v)-1$ neighbors of degree two in $Y_2$. \end{claim} Suppose $v\in Y_2$ is a $3^+$-vertex adjacent to $k$ vertices of degree two in $Y_2$. Let $v_1, \ldots, v_k$ be the 2-vertices in $Y_2$ adjacent to $v$ and $u_1,\ldots, u_k$ be their other neighbors so that $u_i$ is adjacent to $v_i$ for $i=1,\ldots, k$. By Claims~\ref{CLAIM: c1} and~\ref{CLAIM: c2}, $u_1, \ldots, u_k\in N_H(v)$ and $\{v_1, \ldots, v_k\}$ is an independent set in $H$. Hence $d_H(v)\ge k+1$, where equality holds only if $u_1=\cdots=u_k$. \begin{claim}\label{CLAIM: c4} No $3$-vertex in $Y_2$ is adjacent to two boundary vertices in $Y_2$. \end{claim} If not, suppose that there is a $3$-vertex $v\in Y_2$ adjacent to two boundary vertices $v_1, v_2\in Y_2$. By Claim~\ref{CLAIM: c1}, $v, v_1, v_2$ have a common neighbor $u\in Z\cup X_{\ge 4}$. Hence $u$ is a cut vertex separating $v, v_1, v_2$ from the other vertices of $Z\cup X_{\ge 4}$ (if $Z\cup X_{\ge 4}\not=\emptyset$). By definition, $v_1, v_2$ are 2-vertices. By Claim~\ref{CLAIM: c2}, $v_1v_2\notin E(G)$.
Hence there is a $(v_1,v_2)$-path $P$ of length at least $r-1$ in $G$. Let $B$ be the block of $G$ containing $\{v, v_1, v_2, u\}$. By (i) and (v), $v\in V(P)$ and the length of $P$ is at most $4<r-1$, a contradiction. For each $v\in Y_2$, define its initial charge as $ch(v)=d_H(v)-\frac 52$. Then \begin{equation*}\label{EQN: e1} \sum_{v\in Y_2}ch(v)=\sum_{v\in Y_2}d_H(v)-\frac 52|Y_2|. \end{equation*} Hence to show $\overline{d}_H(Y_2)\ge \frac 52$, it is sufficient to show $\sum_{v\in Y_2}ch(v)\ge 0$. Now we redistribute the charges according to the following rules. (R1) Every $3^+$-vertex $v\in Y_2$ gives $\frac 14$ to each of its adjacent inner vertices in $Y_2$. (R2) Every $3^+$-vertex $v\in Y_2$ gives $\frac 12$ to each of its adjacent boundary vertices in $Y_2$. We proceed to derive that each vertex $v\in Y_2$ ends up with a nonnegative final charge $ch'(v)$. For a 2-vertex $v\in Y_2$, if $v$ is an inner vertex, then by definition $v$ has two $3^+$-neighbors in $Y_2$. Hence by (R1), $v$ receives at least $2\times \frac 14=\frac 12$ from its $3^+$-neighbors. If $v$ is a boundary vertex, by (R2), $v$ receives at least $\frac 12$ from its $3^+$-neighbor. So the final charge $ch'(v)\ge 2-\frac 52+\frac 12=0$. For a 3-vertex $v\in Y_2$, by Claim~\ref{CLAIM: c4}, if $v$ is adjacent to a boundary vertex then $v$ has no other neighbor of degree two, so $v$ gives $\frac 12$ to its boundary neighbor. If $v$ is not adjacent to a boundary vertex then, by Claim~\ref{CLAIM: c3}, $v$ has at most two neighbors of degree two, so $v$ gives at most $2\times \frac 14=\frac 12$ to its neighbors. Therefore, the final charge $ch'(v)\ge 3-\frac 52-\frac 12=0$. For a $4^+$-vertex $v\in Y_2$, by Claim~\ref{CLAIM: c3}, $v$ has at most $d_H(v)-1$ neighbors of degree two. By (R1) and (R2), $v$ gives at most $\frac 12(d_H(v)-1)$ to its neighbors of degree two. So the final charge $ch'(v)\ge d_H(v)-\frac 52-\frac 12(d_H(v)-1)=\frac 12d_H(v)-2\ge 0$.
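As an illustrative aside (not part of the proof), the case analysis of the final charges is elementary enough to check mechanically with exact rational arithmetic. The helper `final_charge` below is our own bookkeeping device; its assertions mirror the three cases of 2-, 3-, and $4^+$-vertices.

```python
from fractions import Fraction as F

def final_charge(d, received=F(0), given=F(0)):
    """ch'(v) = d_H(v) - 5/2 + (charge received) - (charge given)."""
    return F(d) - F(5, 2) + received - given

# 2-vertex: an inner vertex receives 2 * 1/4, a boundary vertex receives 1/2
assert final_charge(2, received=2 * F(1, 4)) == 0
assert final_charge(2, received=F(1, 2)) == 0

# 3-vertex: gives 1/2 to one boundary neighbor, or at most 2 * 1/4 in total
assert final_charge(3, given=F(1, 2)) == 0
assert final_charge(3, given=2 * F(1, 4)) == 0

# d-vertex, d >= 4: gives at most (d - 1)/2 by Claim 3 and rules (R1), (R2)
for d in range(4, 30):
    assert final_charge(d, given=F(d - 1, 2)) == F(d, 2) - 2 >= 0
```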
Therefore, \begin{equation*}\label{EQN: e2} \sum_{v\in Y_2}ch(v)=\sum_{v\in Y_2}ch'(v)\ge 0. \end{equation*} This completes the proof of (vi). \end{proof} \begin{cor}\label{COR:ineq} Let $G$ be a $\mathcal{C}_{\ge r}$-saturated graph on $n$ vertices for $n\ge r\ge 6$. Let $X_1$, $X_2$, $X_2'$, $X_3$, $X_{\ge4}$, $Y_1$, $Y_2$, $Z$ be defined as in Lemma~\ref{THM: structure} and let $x_1=|X_1|, x_2=|X_2|, x_2'=|X_2'|, x_3=|X_3|, x_4=|X_{\ge 4}|, y=|Y|, z=|Z|$ and $y_1=|Y_1|$. We have (a) $x_1=x_3+x_4$ and $n=x_2+x_2'+2x_3+2x_4+y+z$; (b) $y_1\le \frac{1}{2}x_2'$ and $y\le 2x_2+2x_3+\frac{1}{2}x_2'$; (c) if $x_2+x_3=0$ and $G[Y\cup Z\cup X_{\ge 4}]$ is a complete graph, then $z+x_4+y=r-1$; otherwise, $x_4+x_3+x_2'\le n-r$ and $3x_2+2x_3+z-\frac{1}{2}x_2'\ge 2r-n$. \end{cor} \begin{proof} (a) follows directly from (ii) of Lemma~\ref{THM: structure}. (b) By (v) of Lemma~\ref{THM: structure}, $N_G(X_2\cup X_3)\cap Y\subseteq Y_2$. Hence every vertex of $Y_1$ has no neighbor in $X_2\cup X_3$ and so, by the definition of $Y$, has a neighbor in $X_2'$. By (ii), (iii) of Lemma~\ref{THM: structure} and double counting, $2y_1=2|Y_1|\le |E_G(Y_1, X_2')|\le |X_2'|=x_2'$. Similarly, we have $y-y_1=|Y_2|\le |E_G(X_2\cup X_3, Y_2)|=2|X_2\cup X_3|=2x_2+2x_3$. So $y\le 2x_2+2x_3+\frac 12 x_2'$. (c) If $x_2+x_3=0$ and $G[Y\cup Z\cup X_{\ge 4}]$ is a complete graph, then $G$ is obtained from the complete graph $K_{y+z+x_4}$ by attaching leaves to $X_{\ge 4}$ and $K_3$'s to $Y$. It is easy to check that this graph $G$ is $C_{y+z+x_4+1}$-saturated, which implies $y+z+x_4=r-1$. If $x_2+x_3=0$ but $G[Y\cup Z\cup X_{\ge 4}]$ is not a complete graph, then any pair of nonadjacent vertices in $Y\cup Z\cup X_{\ge 4}$ are connected by a path of length at least $r-1$ in $G$. Obviously, all of the vertices in this path are in $Y\cup Z\cup X_{\ge 4}$, which implies $y+z+x_4\ge r$. Note that $n=z+y+x_2'+2x_4$. So $x_3+x_2'+x_4=x_2'+x_4\le n-r$. Now suppose $x_2+x_3\neq 0$. Denote $H=G[Y\cup Z\cup X_{\ge 4}\cup X_2\cup X_3]$.
Since every vertex in $X_2\cup X_3$ has degree exactly two in $H$, $H$ is not a complete graph if $y+z+x_4+x_2+x_3\ge 4$. If $y+z+x_4+x_2+x_3\le 3$, since each vertex in $X_2\cup X_3$ has two neighbors in $Y$, $y\ge 2$ and thus $y=2$, $x_2+x_3=1$, and $z+x_4=0$. Therefore, $G$ is isomorphic to $T(t,0,0,0,0)$ for some $t\ge 1$ (for $x_2=1$) or is the graph obtained from $T(t,0,0,0,0)$ by attaching one leaf to the vertex in $X_3$ (for $x_3=1$). By Lemma~\ref{LEM: 4str}, $G$ is $\mathcal{C}_{\ge 4}$-saturated but not $\mathcal{C}_{\ge 6}$-saturated, a contradiction to $r\ge 6$. Hence $G[Y\cup Z\cup X_{\ge 4}\cup X_2\cup X_3]$ is not a complete graph. So any pair of nonadjacent vertices is connected by a path $P$ of length at least $r-1$ in $G$. By (i) and (iii) of Lemma~\ref{THM: structure}, $V(P)\subseteq Y\cup Z\cup X_{\ge 4}\cup X_2\cup X_3$. Therefore, $y+z+x_4+x_2+x_3\ge r$. Note that $n=(y+z+x_4+x_2+x_3)+x_4+x_3+x_2'$. So $x_4+x_3+x_2'\le n-r$. By (b), we have $3x_2+2x_3+z-\frac{1}{2}x_2'\ge y+z+x_2-x_2'=n-2(x_4+x_3+x_2')\ge 2r-n$. \end{proof} \begin{cor}\label{COR: (2)} Let $G$ be a $\mathcal{C}_{\ge r}$-saturated graph on $n$ vertices for some $r\ge 6$ and $\frac n2\le r\le n$. Then $e(G)\ge n+\frac{r}{2}$. \end{cor} \begin{proof} Let $X_1, X_2, X_2', X_3, X_{\ge4}, Y_1, Y_2, Z$ be defined as in Lemma~\ref{THM: structure} and let $x_1, x_2, x_2', x_3, x_4, y, z$ and $y_1$ be defined as in Corollary~\ref{COR:ineq}. Denote $H=G[Y\cup Z\cup X_{\ge 4}]$. Then \begin{eqnarray} e(G)&=& e(H)+e_G(X_3\cup X_{\ge 4}, X_1)+e_G(Y, X_2\cup X_3)+e(G[X_2'])+e_G(Y, X_2')\nonumber\\ &=& e(H)+(x_3+x_4)+2(x_2+x_3)+\frac{3}{2}x_2'.\label{EQN: e(G)} \end{eqnarray} If $x_2+x_3=0$ and $G[Y\cup Z\cup X_{\ge 4}]$ is a complete graph, then $y+z+x_4=r-1\ge 5$ by (c) of Corollary~\ref{COR:ineq}.
By Equality~(\ref{EQN: e(G)}), \begin{eqnarray*} e(G) &=& |E(G[Y\cup Z\cup X_{\ge 4}])|+(x_3+x_4)+\frac{3}{2}x_2'\\ &=&\binom{r-1}{2}+x_4+\frac{3}{2}x_2'\\ &=&\binom{r-1}{2}+\frac{1}{2}x_2'+n-(r-1)\\ &= & n+\frac{1}{2}(r^2-5r+4)+\frac{1}{2}x_2'\\ &\ge& n+\frac{r}{2}+\frac{1}{2}x_2'\\ &\ge& n+\frac r2, \end{eqnarray*} where the third equality holds since $n=(z+x_4+y)+x_4+x_2'=r-1+x_4+x_2'$ and the fifth inequality holds since $r\ge 6$. Now suppose $x_2+x_3\neq 0$ or $G[Y\cup Z\cup X_{\ge 4}]$ is not a complete graph. Then $x_4+x_3+x_2'\le n-r$ and $3x_2+2x_3+z-\frac{1}{2}x_2'\ge 2r-n$ by (c) of Corollary~\ref{COR:ineq}. Let $A=y-(2x_2+2x_3+\frac{1}{2}x_2')$, $B=(x_4+x_3+x_2')-(n-r)$ and $C=(2r-n)-(3x_2+2x_3+z-\frac{1}{2}x_2')$. Then $B,C\le 0$. Counting $e_G(Y, X_2'\cup X_2\cup X_3)$, we have $A\le 0$. Thus, since $\frac n2\le r\le n$, we get $$\left(\frac{r}{2n}-\frac{1}{4}\right)A+\left(\frac{r}{n}-\frac{1}{2}\right)B+\left(\frac{1}{2}-\frac{r}{2n}\right)C\le 0\mbox{.}$$ So \begin{equation*} \begin{split} e(G) &\ge e(G)+\left(\frac{r}{2n}-\frac{1}{4}\right)A+\left(\frac{r}{n}-\frac{1}{2}\right)B+\left(\frac{1}{2}-\frac{r}{2n}\right)C\\ &= e(G)+\left(\frac{r}{2n}-\frac{1}{4}\right)\left(z+y+x_2'+x_2+2x_3+2x_4\right)+\frac{x_2'}{8}-\frac{3x_2}{4}-\frac{x_3}{2}-\frac{z}{4}\\ &\ge \frac{5}{4}n+\frac{1}{8}\left(2z+6x_2+x_2'+4x_3\right)+\left(\frac{r}{2n}-\frac{1}{4}\right)n+\frac{1}{8}\left(x_2'-6x_2-4x_3-2z\right)\\ &\ge n+\frac{r}{2}+\frac{1}{4}x_2'\\ &\ge n+\frac r2\mbox{,} \end{split} \end{equation*} where the third inequality holds since $e(G)\ge \frac 54n$ by Theorem~\ref{THM: subdivision}. \end{proof} \section{Proof of Theorem~\ref{THM: upper}} In this section, we construct maximally $\mathcal{C}_{\ge r}$-saturated graphs that achieve the bounds stated in Theorem~\ref{THM: upper}. Our constructions are based on the constructions of the maximally nonhamiltonian graphs with fewest edges given in~\cite{Clark83,Clark92,LJZY97,Stacho98}. 
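Before describing the constructions, we note that the elementary rewriting steps in the complete-graph case of Corollary~\ref{COR: (2)} can be double-checked numerically. The following script is our own sanity check, not part of any proof; it uses $n=(r-1)+x_4+x_2'$ and $e(G)=\binom{r-1}{2}+x_4+\frac{3}{2}x_2'$, with $x_2'$ even since $G[X_2']$ is a perfect matching.

```python
from math import comb

for r in range(6, 40):
    for x4 in range(10):
        for x2p in range(0, 10, 2):  # x2' is even
            n = (r - 1) + x4 + x2p
            e = comb(r - 1, 2) + x4 + 3 * x2p // 2
            # e(G) = C(r-1, 2) + x2'/2 + n - (r - 1)
            assert e == comb(r - 1, 2) + x2p // 2 + n - (r - 1)
            # e(G) = n + (r^2 - 5r + 4)/2 + x2'/2  (doubled to stay integral)
            assert 2 * e == 2 * n + (r * r - 5 * r + 4) + x2p
            # the bound e(G) >= n + r/2, which needs r >= 6
            assert 2 * e >= 2 * n + r
```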
Bollob\'as~\cite{bollobas78} posed the problem of finding $\mbox{sat}(n, C_n)$. Bondy~\cite{bondy72} has shown that $\mbox{sat}(n, C_n)\ge \lceil\frac {3n}2\rceil$ for $n>7$. In~\cite{Clark83,Clark92,LJZY97}, the authors determined that $\mbox{sat}(n, C_n)=\lceil\frac {3n}2\rceil$ by constructing the maximally nonhamiltonian graphs with fewest edges. These constructions came from appropriate modifications of a family of well-known snarks, Isaacs' flower snarks. Let $J_k$ be Isaacs' flower snark on $4k$ vertices with $k=2p+1$ and $p\ge 7$. For a vertex $v\in V(J_k)$, let $J_k(v)$ denote the graph obtained from $J_k$ by expanding $v$ to a triangle, and for an edge $uv\in E(J_k)$, let $J_k(uv)$ denote the graph obtained from $J_k$ by replacing the edge $uv$ by a bowtie (i.e. a $T(2,0,0,0,0)$ in this paper); detailed definitions can be found in~\cite{Stacho98} (Definitions 1, 2 and 3) and the appendix of this paper. The following table lists the optimal $C_n$-saturated graphs for all $n$, where Clark et al.~\cite{Clark83,Clark92} gave the constructions for $n=8p, 8p+2, 8p+4$ and $8p+6$, and the optimality of the other cases was proved by Stacho~\cite{Stacho98}. \begin{center} \begin{tabular}{c|c|c|c}\label{TB: t1} order& construction& order& construction\\ \hline $8p$ & $J_{k-2}(v_2,v_{14})$ & $8p+1$ & $J_{k-2}(v_{14})(v_0v_2)$\\ $8p+2$ & $J_{k-2}(v_2,v_{14},v_{26})$ & $8p+3$ & $J_{k-2}(v_{14},v_{26})(v_0v_2)$\\ $8p+4$ & $J_k$ & $8p+5$ & $J_{k-2}(v_{14},v_{26},v_{38})(v_0v_2)$\\ $8p+6$ & $J_k(v_2)$ & $8p+7$ & $J_k(v_0v_2)$ \end{tabular} \end{center} An {\it almost 3-regular} graph is a graph in which all vertices have degree three except one vertex, say $u_0$, of degree four, with the property that the neighborhood $N_G(u_0)$ induces a perfect matching in $G$, say $\{u_1u_2, v_1v_2\}$, such that $u_1, u_2$ (resp. $v_1, v_2$) have distinct neighbors outside $\{u_0\}\cup N_G(u_0)$. Note that $G[N_G(u_0)\cup\{u_0\}]\cong T(2,0,0,0,0)$ by the definition.
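The definition of an almost 3-regular graph can be tested clause by clause on an adjacency map. The sketch below is our own illustration (the helper `is_almost_3_regular` and the 9-vertex example graph are not taken from the cited papers).

```python
def build(edges):
    """Adjacency map of an undirected simple graph."""
    adj = {}
    for x, y in edges:
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)
    return adj

def is_almost_3_regular(adj):
    """One vertex u0 of degree four, all others of degree three, N(u0)
    inducing a perfect matching {u1u2, v1v2} whose matched pairs have
    distinct neighbors outside {u0} union N(u0)."""
    deg4 = [v for v in adj if len(adj[v]) == 4]
    if len(deg4) != 1 or any(len(adj[v]) != 3 for v in adj if v != deg4[0]):
        return False
    u0 = deg4[0]
    nbrs = adj[u0]
    matching = [(x, y) for x in nbrs for y in nbrs if x < y and y in adj[x]]
    if len(matching) != 2 or {v for e in matching for v in e} != nbrs:
        return False
    inside = nbrs | {u0}
    return all((adj[x] - inside) and (adj[x] - inside) != (adj[y] - inside)
               for x, y in matching)

# A 9-vertex example of our own: u0 joined to the matching edges u1u2 and
# v1v2, each matched vertex attached to an outer 4-cycle a-b-d-c-a.
G9 = build([('u0', 'u1'), ('u0', 'u2'), ('u0', 'v1'), ('u0', 'v2'),
            ('u1', 'u2'), ('v1', 'v2'),
            ('u1', 'a'), ('u2', 'b'), ('v1', 'c'), ('v2', 'd'),
            ('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')])
assert is_almost_3_regular(G9)
```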
A {\em barbell} is a graph obtained from two disjoint triangles by adding a new edge connecting them. For simplicity, we call a 3-regular (or an almost 3-regular) graph containing no barbell as a subgraph a {\it good graph}. By the definitions of $J_k$, $J_k(v)$ and $J_k(uv)$, we can check that all optimal graphs constructed in the above table are good. So we have \begin{lem}\label{LEM: 56} For any $r\ge 56$, there exists a ${C}_{r}$-saturated good graph $G$ on $r$ vertices and $\lceil\frac{3r}{2}\rceil$ edges. \end{lem} We also need the following property of ${C}_{r}$-saturated good graphs on $r$ vertices. \begin{lem}\label{LEM: mnh} Let $G$ be a ${C}_{r}$-saturated good graph on $r\ge 6$ vertices. Then every edge $e\in E(G)$ is contained in a cycle of length $r-1$. \end{lem} \begin{proof} Suppose there exists an edge $e=u_0v_0\in E(G)$ which is not contained in any cycle of length $r-1$. \noindent{\bf Case 1:} $d_G(u_0)=d_G(v_0)=3$. Let $N_G(u_0)=\{v_0,a_1,a_2\}$ and $N_G(v_0)=\{u_0,b_1,b_2\}$. Suppose $a_1a_2, b_1b_2\in E(G)$. If $|\{a_1,a_2\}\cup\{b_1,b_2\}|=2$, then $\{a_1,a_2\}=\{b_1,b_2\}$ and $G[\{v_0,u_0,a_1,a_2\}]$ is isomorphic to $K_4$. Since $G$ is connected and $r\ge 6$, $G$ must be an almost 3-regular graph and the unique 4-vertex is in $\{a_1,a_2\}$. But this is impossible since $G[N_G(a_i)\cup\{a_i\}]\ncong T(2,0,0,0,0)$ for $i=1,2$. If $|\{a_1,a_2\}\cup\{b_1,b_2\}|=3$, without loss of generality, let $a_1=b_1$ and $a_2\neq b_2$; then $a_1$ is a 4-vertex in $G$ and so $G$ must be an almost 3-regular graph. Note that $N_G(a_1)=\{a_2,b_2,v_0,u_0\}$. So $G[N_G(a_1)\cup\{a_1\}]\ncong T(2,0,0,0,0)$, a contradiction. So $|\{a_1,a_2\}\cup\{b_1,b_2\}|=4$. But then $G[\{u_0,a_1,a_2,v_0,b_1,b_2\}]$ contains a barbell, a contradiction. Now suppose one of $a_1a_2,b_1b_2$ is not an edge in $G$. Without loss of generality, assume $a_1a_2\notin E(G)$. Then $G$ contains a Hamiltonian $(a_1, a_2)$-path $P$. So $u_0$ is an internal vertex of $P$.
We claim that $u_0v_0\in E(P)$. If not, then $u_0a_1,u_0a_2\in E(P)$ and so $P=a_1u_0a_2$, a contradiction. Thus $u_0v_0\in E(P)$. Since one of $u_0a_1,u_0a_2$ is contained in $P$, without loss of generality, assume $u_0a_1\in E(P)$. Hence $P-u_0a_1$ is a $(u_0, a_2)$-path on vertex set $V(G)\setminus\{a_1\}$. Since $a_2u_0\notin E(P)$, $P-u_0a_1+u_0a_2$ is a cycle of length $r-1$ containing $u_0v_0$, a contradiction. \noindent{\bf Case 2:} One of $u_0,v_0$ is a 4-vertex. Without loss of generality, assume $d_G(u_0)=4$ and $N_G(u_0)=\{u_1,u_2,v_0,v_1\}$ with $u_1u_2, v_0v_1\in E(G)$. Let $N_{G}(v_0)=\{u_0, v_1, a\}$. By the definition of the almost $3$-regular graph, $av_1\notin E(G)$. Hence $G$ contains a Hamiltonian $(a, v_1)$-path $P$. If $u_0v_0\notin E(P)$, then $P=av_0v_1$ is of length $2$, a contradiction. Thus $u_0v_0\in E(P)$. Since exactly one of $v_0v_1, v_0a$ is also contained in $P$, without loss of generality, assume $v_0v_1\in E(P)$. Then $v_0a\notin E(P)$. Hence $P-v_0v_1+v_0a$ is a cycle on $r-1$ vertices containing $u_0v_0$ in $G$, a contradiction. \end{proof} Let $G$ and $H$ be two distinct graphs and $v\in V(G)$. To {\em attach $H$ to $v$} means to identify a vertex of $H$ with $v$ to obtain a new graph. Let $U, W$ be two disjoint subsets of $V(G)$. We define $L(G; U, W)$ to be the graph obtained from $G$ by attaching a $K_2$ to each vertex of $U$ and attaching a $K_3$ to each vertex of $W$. A vertex is called a {\it support vertex} of $G$ if it is adjacent to a leaf of $G$. For two graphs $G, H$, let $u$ and $v$ be two support vertices of $G$ and $H$, respectively. Define $C(G, H; uv)$ to be the graph obtained from $G$ and $H$ by adding a new edge $uv$ and deleting the leaves adjacent to $u, v$ in $G$ and $H$. For $k\ge 3$ and a sequence of graphs $G_1, G_2, \ldots, G_k$, we recursively define $$C(G_1,...,G_k; u_1v_1,\ldots, u_{k-1}v_{k-1})=C(C(G_1,...,G_{k-1};u_1v_1,\ldots,u_{k-2}v_{k-2}),G_k; u_{k-1}v_{k-1}),$$ where $u_i$ (resp.
$v_i$) is a support vertex of $G_i$ (resp. $G_{i+1}$). Let $M_{r,r}$ be a ${C}_{r}$-saturated good graph. We define $M_{r,n}$ as follows: \begin{itemize} \item If $r\le n\le 2r$, define $M_{r,n}=L(M_{r,r}; U, \emptyset)$, where $U\subset V(M_{r,r})$ and $|U|=n-r$; \item if $2(k-1)r-2(k-2)<n<\frac{4k-3}{2}r$ for some $k\ge 2$, define $$M_{r,n}=C(G_1,...,G_{k-1};u_1v_1,\ldots, u_{k-2}v_{k-2}),$$ where $G_i=L(M^i_{r,r}; U_i, V_i)$ and $M^i_{r,r}$ are pairwise disjoint copies of $M_{r,r}$, $U_i(\not=\emptyset)$ and $V_i$ form a partition of $V(M^i_{r,r})$ with $\sum_{i=1}^{k-1}|V_i|=n-2(k-1)r+2(k-2)$, and $v_{i-1}, u_i\in U_i$ and $v_{i-1}\not=u_{i}$ for $1\le i\le k-1$. \item if $\frac{4k-3}{2}r\le n\le 2kr-2(k-1)$ for some $k\ge 2$, define $$M_{r,n}=C(G_1,...,G_{k}; u_1v_1,\ldots, u_{k-1}v_{k-1}),$$ where $G_i=L(M^i_{r,r}; U_i, \emptyset)$ and $M^i_{r,r}$ are pairwise disjoint copies of $M_{r,r}$, $U_i\not=\emptyset$ are subsets of $V(M^i_{r,r})$ with $\sum_{i=1}^{k}|U_i|=n-kr+2(k-1)$, and $v_{i-1}, u_i\in U_i$ and $v_{i-1}\not=u_i$ for $1\le i\le k$. \end{itemize} \begin{prop}\label{PROP: p1} For $n\ge r\ge 56$, $M_{r,n}$ is a $\mathcal{C}_{\ge r}$-saturated graph. \end{prop} \begin{proof} It is sufficient to show that $C(G_1, \ldots, G_k; u_1v_1, \ldots, u_{k-1}v_{k-1})$ is $\mathcal{C}_{\ge r}$-saturated for $k\ge 1$, where $G_i=L(M^i_{r,r}; U_i, V_i)$ and $M^i_{r,r}$ are pairwise disjoint copies of $M_{r,r}$, $U_i(\not=\emptyset)$ and $V_i$ are disjoint subsets of $V(M^i_{r,r})$, and $v_{i-1}, u_i\in U_i$ with $v_{i-1}\not=u_i$ for $1\le i\le k-1$. Let $H=C(G_1, \ldots, G_k; u_1v_1, \ldots, u_{k-1}v_{k-1})$. By definition, the blocks of $H$ are isomorphic to $M_{r,r}$, $K_3$, or $K_2$. So $H$ is $\mathcal{C}_{\ge r}$-free since $M_{r,r}$ is ${C}_{r}$-saturated. Now we prove that for any $a,b\in V(H)$ with $ab\notin E(H)$, $H$ contains an $(a,b)$-path on at least $r$ vertices. \noindent{\bf Case 1:} $a,b\in V(G_i)$ for some $1\le i\le k$.
Without loss of generality, assume $i=1$. If $a,b\in V(M^1_{r,r})$, we are done since $M_{r,r}$ is ${C}_{r}$-saturated. If $a\in V(M^1_{r,r})$ but $b$ is not, then $b$ has a neighbor, say $b'$, in $V(M^1_{r,r})$. If $ab'\in E(M^1_{r,r})$ then $ab'$ is contained in a cycle $C$ on $r-1$ vertices within $M^1_{r,r}$ by Lemma~\ref{LEM: mnh}. Thus, $C-ab'+bb'$ is an $(a,b)$-path on $r$ vertices in $L(M^1_{r,r}; U_1, V_1)$. If $ab'\notin E(M^1_{r,r})$ then $M^1_{r,r}$ contains an $(a, b')$-path $P$ on $r$ vertices. Thus $P+bb'$ is an $(a,b)$-path on $r+1$ vertices in $L(M^1_{r,r};U_1, V_1)$. If $a,b\notin V(M^1_{r,r})$ then $a$ and $b$ have distinct neighbors $a'$ and $b'$ in $V(M^1_{r,r})$, respectively (this is because $ab\notin E(G_1)$). If $a'b'\in E(M^1_{r,r})$ then $M^1_{r,r}$ contains a cycle $C$ on $r-1$ vertices containing $a'b'$. Thus $C-a'b'+a'a+b'b$ is an $(a,b)$-path on $r+1$ vertices in $L(M^1_{r,r}; U_1, V_1)$, and we are done. If $a'b'\notin E(M^1_{r,r})$ then $M^1_{r,r}$ contains an $(a',b')$-path $P$ on $r$ vertices. So $P+a'a+b'b$ is an $(a,b)$-path on $r+2$ vertices in $L(M^1_{r,r}; U_1, V_1)$. \noindent{\bf Case 2:} $a\in V(G_i)$ and $b\in V(G_{j+1})$ for some $1\le i\le j\le k-1$. Since $H$ is connected and $u_jv_j$ is a cut edge, there exists an $(a, u_j)$-path $P_1$ from $a$ to $u_j$ containing no vertices in $V(M^{j+1}_{r,r})$. If $b\neq v_j$, then $u_jb\notin E(H)\cap E(G_{j+1})$. Hence $G_{j+1}$ contains a $(u_j, b)$-path $P_2$ on at least $r$ vertices by Case 1. Then $P_1+P_2$ is an $(a, b)$-path on at least $r$ vertices in $H$. If $b=v_j$, we may also assume $a=u_{i}$ by symmetry, which implies $i<j$. Let $P_1$ be a $(u_i, v_{j-1})$-path containing no vertices in $V(M^{j}_{r,r})\setminus\{v_{j-1}\}$. Since $v_{j-1}\neq u_j$, $v_{j-1}v_j\notin E(G_{j})$. Again from Case 1, $G_j$ contains a $(v_{j-1}, v_j)$-path $P_2$ on at least $r$ vertices. Hence $P_1+P_2$ is an $(a,b)$-path on at least $r$ vertices in $H$.
\end{proof} \begin{prop}\label{PROP: p2} For $n\ge r\ge 56$, $M_{r,n}$ is $\mathcal{C}_{\ge r}$-saturated with $e(M_{r,n})=g(\frac{r}{n})n+O(\frac{n}{r})$. Furthermore, if $r\le n\le 2r$, we have $\mbox{sat}(n, \mathcal{C}_{\ge r})=n+\lceil\frac{r}{2}\rceil$. \end{prop} \begin{proof} By Lemma~\ref{LEM: 56} and Proposition~\ref{PROP: p1}, we know that $M_{r,n}$ is $\mathcal{C}_{\ge r}$-saturated. In the following, we check the order and the number of edges of $M_{r,n}$. If $r\le n\le 2r$, $|V(M_{r,n})|=r+|U|=n$ and $e(M_{r,n})=\lceil\frac{3r}2\rceil+n-r=n+\lceil\frac{r}2\rceil$. Since $\mbox{sat}(n, \mathcal{C}_{\ge r})\ge n+\frac{r}{2}$ by Corollary~\ref{COR: (2)}, we have $\mbox{sat}(n, \mathcal{C}_{\ge r})=n+\lceil\frac{r}{2}\rceil$. If $2(k-1)r-2(k-2)<n<\frac{4k-3}{2}r$ for some $k\ge 2$, by definition, $$|V(M_{r,n})|=(k-1)r+\sum_{i=1}^{k-1}|U_i|+2\sum_{i=1}^{k-1}|V_i|-2(k-2)=n$$ and \begin{eqnarray*} e(M_{r,n})&=&(k-1)e(M_{r,r})+\sum_{i=1}^{k-1}|U_i|+3\sum_{i=1}^{k-1}|V_i|-(k-2)\\ &=&(k-1)\left\lceil\frac{3}{2}r\right\rceil+(k-1)r+2\left(n-2(k-1)r+2(k-2)\right)-(k-2)\\ &=&2n-(k-1)\left\lfloor\frac{3}{2}r\right\rfloor+3(k-2)\\ &=& g\left(\frac{r}{n}\right)n+O\left(\frac{n}{r}\right)\left(<g\left(\frac{r}{n}\right)n+\frac{2n}r\right). \end{eqnarray*} If $\frac{4k-3}{2}r\le n\le 2kr-2(k-1)$ for some $k\ge 2$, by definition, $$|V(M_{r,n})|=kr+\sum_{i=1}^{k}|U_i|-2(k-1)=n$$ and \begin{eqnarray*} e(M_{r,n})&=&k e(M_{r,r})+\sum_{i=1}^{k}|U_i|-(k-1)\\ &=&k\left\lceil\frac{3}{2}r\right\rceil+n-kr+2(k-1)-(k-1)\\ &=&n+k\left\lceil\frac{r}{2}\right\rceil+(k-1)\\ &=& g\left(\frac{r}{n}\right)n+O\left(\frac{n}{r}\right)\left(<g\left(\frac{r}{n}\right)n+\frac{2n}r\right). \end{eqnarray*} \end{proof} \section{Remarks} It is obvious that the Tur\'an function is monotone, i.e., $\mbox{ex}(n,\mathcal{F}_1)\ge \mbox{ex}(n,\mathcal{F}_2)$ for $\mathcal{F}_1\subseteq\mathcal{F}_2$. But the saturation number does not have this property (as has been observed in~\cite{Subdivision12,FF11,KT86,Pi04}).
In this paper, we determine the exact values of $\mbox{sat}(n,\mathcal{C}_{\ge r})$ for $r=6$ and $\frac n2\le r\le n$. From the graph of $g(x)$, we conjecture that $\mbox{sat}(n,\mathcal{C}_{\ge r})$ is not monotone with respect to $r$ either. It is also an interesting question to determine the exact values of $\mbox{sat}(n,\mathcal{C}_{\ge r})$ for the other values of $r$. It seems that for $r\ge 7$, $\mbox{sat}(n,\mathcal{C}_{\ge r})$ is always close to either $\frac{5n}{4}$ or $\frac{3n}{2}$ when $n$ is large.
\section{Introduction} Given a graph $G$ with $n$ vertices and $m$ edges, does $G$ contain a \emph{triangle} (a cycle with three vertices)? This is one of the most basic algorithmic questions in graph theory, and many other problems reduce to it~\cite{ItaiRo78,WiWi18}. The best known algorithms use fast matrix multiplication and run in either $O(n^{\omega})$ time or in $O\big(m^{2\omega/(\omega+1)}\big)$ time, where $\omega < 2.37287$ is the matrix multiplication exponent~\cite{AlonYuZw97,LeGall14,ItaiRo78}. Despite decades of research, the best available ``combinatorial'' algorithm\footnote{An algorithm is ``combinatorial'' if it does not need algebraic manipulations to achieve its goal.} needs $O\big(n^3\,\text{polyloglog}(n)/\log^{4}n\big)$ time~\cite{Yu15}, only slightly better than checking all vertex triples. This lack of progress can be explained by a connection to Boolean matrix multiplication (BMM): if there is a truly subcubic combinatorial algorithm for finding triangles, there is also a truly subcubic combinatorial algorithm for BMM~\cite{WiWi18}. Itai and Rodeh~\cite{ItaiRo78} reduced computing the \emph{girth} (the length of a shortest cycle) of an unweighted undirected graph to triangle detection. For integer edge weights, Roditty and V.~Williams~\cite{RodittyVa11} gave an equivalence between finding a minimum weight cycle (the weighted girth) and finding a minimum weight triangle. For the special case of \emph{planar} graphs, significantly better algorithms are known. Itai and Rodeh~\cite{ItaiRo78} and, independently, Papadimitriou and Yannakakis~\cite{PapadimitriouYa81} showed that a triangle can be found in $O(n)$ time, if it exists. Chang and Lu~\cite{ChangLu13} presented an $O(n)$ time algorithm for computing the girth. The weighted girth can be found in $O(n \log\log n)$ time both in an undirected and in a directed planar graph~\cite{lacki_min-cuts_2011,MozesNiNuWe18}. 
In computational geometry, there are two noteworthy graph classes that generalize planar graphs: \emph{disk graphs} and \emph{transmission graphs}. We are given a set $S$ of $n$ planar point \emph{sites}. Each $s \in S$ has an \emph{associated radius} $r_s > 0$ and an \emph{associated disk} $D_s$ with center $s$ and radius $r_s$. The \emph{disk graph} $D(S)$ is the undirected graph on $S$ where two sites $s, t \in S$ are adjacent if and only if $D_s$ and $D_t$ intersect, i.e., $|st| \leq r_s + r_t$, where $|\cdot|$ is the Euclidean distance. In a \emph{weighted disk graph}, the edges are weighted according to the Euclidean distance between their endpoints. The \emph{transmission graph} $T(S)$ is the directed graph on $S$ where there is an edge from $s$ to $t$ if and only if $t$ lies in $D_s$, i.e., $|st| \leq r_s$. Again, there is a weighted variant. Both graph classes have received a lot of attention, as they give simple and natural theoretical models for geometric sensor networks (see, e.g.,~\cite{KaplanMuRoSe15,KaplanMuRoSe18}). Motivated by the vastly better algorithms for planar graphs, we investigate triangle detection and girth computation in disk graphs and transmission graphs. We will see that in a disk graph, a triangle can be found in $O(n \log n)$ time, using a simple geometric observation to relate disk graphs and planar graphs. By a reduction from \textsc{$\varepsilon$-closeness}~\cite{Polishchuk17}, this is optimal in the algebraic decision tree model, a contrast to planar graphs, where $O(n)$ time is possible. Our method generalizes to finding a shortest triangle in a weighted disk graph in $O(n \log n)$ expected time. Moreover, we can compute the unweighted and weighted girth in a disk graph in $O(n \log n)$ time, with a deterministic algorithm for the unweighted case and a randomized algorithm for the weighted case. The latter result requires a method to find a shortest cycle that contains a given vertex. 
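The two definitions translate directly into code. The following naive sketch (illustrative only: it inspects all pairs in quadratic time, whereas the algorithms discussed here are far faster) computes the edge sets of $D(S)$ and $T(S)$ from a map of sites to (center, radius) pairs; the example sites are our own.

```python
from itertools import combinations
from math import dist

def disk_graph_edges(sites):
    """Undirected disk graph D(S): s ~ t iff |st| <= r_s + r_t."""
    return {(s, t) for s, t in combinations(sorted(sites), 2)
            if dist(sites[s][0], sites[t][0]) <= sites[s][1] + sites[t][1]}

def transmission_edges(sites):
    """Directed transmission graph T(S): edge s -> t iff |st| <= r_s."""
    return {(s, t) for s in sites for t in sites
            if s != t and dist(sites[s][0], sites[t][0]) <= sites[s][1]}

sites = {'a': ((0.0, 0.0), 2.0), 'b': ((3.0, 0.0), 0.5), 'c': ((5.0, 0.0), 3.5)}
assert disk_graph_edges(sites) == {('a', 'c'), ('b', 'c')}
assert transmission_edges(sites) == {('c', 'b')}
```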
Finally, we provide an algorithm to detect a directed triangle in a transmission graph in $O(n \log n)$ expected time. For this, we study the geometric properties of such triangles in more detail, and we develop several new techniques for batched range searching that might be of independent interest, using linearized quadtrees and three-dimensional polytopes to test for containment in the union of planar disks. As before, this algorithm extends to the weighted version. We will assume \emph{general position}, meaning that all edge lengths (and more generally shortest path distances) are pairwise distinct, that no site lies on a disk boundary, and that all radii are pairwise distinct. \section{Finding a (Shortest) Triangle in a Disk Graph} \label{sec:disk_triangle} We would like to decide if a given disk graph contains a triangle. If so, we would also like to find a triangle of minimum Euclidean perimeter. \subsection{The Unweighted Case} \label{sec:disk_triangle_unweighted} The following property of disk graphs, due to Evans et al.~\cite{EvansGaLoPo16}, is the key to our algorithm. For completeness, we include a proof. \begin{lemma} \label{lem:triangle_planar} Let $D(S)$ be a disk graph that is not plane, i.e., the embedding that represents each edge by a line segment between its endpoints has two segments that cross in their relative interiors. Then, there are three sites whose associated disks intersect in a common point. \end{lemma} \begin{figure} \begin{center} \includegraphics{figs/triangle_planar} \end{center} \caption{If $D(S)$ is not plane, then three disks intersect in a common point. We distinguish two cases, depending on whether $u$ lies in the northwest or in the northeast quadrant.} \label{fig:triangle_planar} \end{figure} \begin{proof} Suppose the segments $st$ and $uv$ intersect in a point $a$. 
The sites $s$, $t$, $u$, and $v$ are pairwise distinct, and without loss of generality, we assume that: (i) $a \in D_s \cap D_u$; (ii) $r_u \leq r_s$; (iii) the point $s$ lies at the origin, the edge $st$ lies on the $x$-axis, with $t$ on the positive $x$-axis; and (iv) the site $u$ lies above the $x$-axis, the site $v$ lies below the $x$-axis; see Figure~\ref{fig:triangle_planar}. If $a \in D_t$, then $D_s \cap D_t \cap D_u \neq \emptyset$, and we are done. Thus, suppose that $a \not\in D_t$, and let $b$ be the first point on $st$ in $D_t$. If $b \in D_u$, then $D_s \cap D_t \cap D_u \neq \emptyset$, and we are done. Thus, suppose that $b \not\in D_u$. If $u$ lies in the northwest quadrant, then $v$ must be in the southeast quadrant. Furthermore, since $r_u \leq r_s$, it follows that in the southeast quadrant, $D_u$ is completely contained in $D_s$, so the first point on the segment $av$ that is in $D_v$ must also be in $D_s$ and $D_u$. Thus, $D_s \cap D_u \cap D_v \neq \emptyset$, and we are done. If $u$ lies in the northeast quadrant, since $r_u \leq r_s$ and since $b \not\in D_u$, it follows that below the $x$-axis, we have $D_u \subseteq D_s$, and the first point on the segment $av$ that is in $D_v$ must also be in $D_s$ and $D_u$, i.e., $D_s \cap D_u \cap D_v \neq \emptyset$. \end{proof} If $D(S)$ is not plane, it contains a triangle by Lemma~\ref{lem:triangle_planar}. If $D(S)$ is plane, we can construct it explicitly and then search for a triangle in $O(n)$ time~\cite{ItaiRo78,PapadimitriouYa81}. To check whether $D(S)$ is plane, we begin an explicit construction of $D(S)$ and abort if we discover too many edges. \begin{theorem} \label{thm:unweightedtriangle} Let $D(S)$ be a disk graph on $n$ sites. We can find a triangle in $D(S)$ in $O(n \log n)$ worst-case time, if it exists.
\end{theorem} \begin{proof} For each \(s\in S\), we split $\partial D_s$ into two $x$-monotone curves, namely the upper and the lower arc from the leftmost point of $D_s$ to the rightmost point. We use the Bentley--Ottmann sweepline algorithm to find the intersections between these boundary arcs. The intersections are reported one by one, and the total time to find the first $m$ intersections is $O(n\log n + m\log n)$~\cite[Theorem~2.4]{dBCvKO}.\footnote{The algorithm is presented for line segments, but it extends easily to continuous $x$-monotone curves.} If the sweepline algorithm reports more than $6n - 12$ intersection points, we can be sure that $D(S)$ is not plane, because an edge of $D(S)$ corresponds to at most two intersections. Then, $D(S)$ contains a triangle by Lemma~\ref{lem:triangle_planar}, and we can find it in $O(n \log n)$ time with another plane sweep that finds a pair of crossing edges among those generated so far. If there are at most $6n - 12$ intersections, we use another plane sweep to find the vertical decomposition of the arrangement of the disks $D_s$, $s \in S$, in $O(n \log n)$ time. We use the vertical decomposition to construct the remaining edges of $D(S)$ that are due to a disk being completely contained in another disk. For this, we walk through the pseudo-trapezoids of the vertical decomposition, keeping track of the disks that contain the current pseudo-trapezoid. When we enter a disk for the first time, we generate edges between this disk and the disks containing the current pseudo-trapezoid. If it turns out that $D(S)$ has more than $3n - 6$ edges, we abort the generation of the edges, since then $D(S)$ is not plane and contains a triangle that can be found in $O(n \log n)$ time with a plane sweep over the edges generated so far. If $D(S)$ has at most $3n - 6$ edges, we obtain an explicit representation of $D(S)$.
We check if this representation is plane in $O(n \log n)$ time with a plane sweep, returning a triangle if this is not the case. Finally, if $D(S)$ is plane, we check for a triangle in $O(n)$ time~\cite{ItaiRo78,PapadimitriouYa81}. \end{proof} \subsection{The Weighted Case} \label{sec:disk_triangle_weighted} Suppose the edges in $D(S)$ are weighted by their Euclidean lengths. We would like to find a triangle of minimum perimeter, i.e., of minimum total edge length. For this, we solve the decision problem: given $W > 0$, does $D(S)$ contain a triangle with perimeter at most $W$? Once a decision algorithm is available, the optimization problem can be solved with Chan's randomized geometric optimization framework~\cite{Chan99}. To decide if $D(S)$ contains a triangle with perimeter at most $W$, we use grids whose cells have diameter $W/3$. We look for triangles whose vertices lie in a single grid cell, using the algorithm from Section~\ref{sec:disk_triangle_unweighted}. If no cell contains such a triangle, then $D(S)$ will be sparse and we will need to check only $O(n)$ further triples. Details follow. Set $\ell = W/(3\sqrt{2})$. Let $G_1$ be the grid whose cells are pairwise disjoint, axis-parallel squares with side length $\ell$, aligned so that the origin $(0,0)$ is a vertex of $G_1$. The cells of $G_1$ have diameter $\sqrt{2} \cdot \ell = W/3$, so any triangle whose vertices lie in a single cell has perimeter at most $W$. We make three additional copies $G_2$, $G_3$, $G_4$ of $G_1$, and we shift them by $\ell/2$ in the $x$-direction, in the $y$-direction, and in both the $x$- and $y$-directions, respectively. In other words, $G_2$ has $(\ell/2, 0)$ as a vertex, $G_3$ has $(0, \ell/2)$ as a vertex, and $G_4$ has $(\ell/2,\ell/2)$ as a vertex, see Figure~\ref{fig:grids}. This ensures that if all edges in a triangle are ``short'', the triangle lies in a single cell of one of the grids.
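To illustrate the four shifted grids (a hypothetical Python sketch using the floor function, not code from the paper), the following locates a single cell of some $G_i$ that contains a given set of nearby points, if one exists:

```python
import math

def cell(p, ell, offset):
    """Index of the cell of the grid with side length ell, shifted by
    offset, that contains the point p."""
    return (math.floor((p[0] - offset[0]) / ell),
            math.floor((p[1] - offset[1]) / ell))

def common_cell(points, ell):
    """Return the offset of the first shifted grid (G_1, ..., G_4) whose
    single cell contains all points, or None if no such grid exists."""
    for offset in [(0.0, 0.0), (ell / 2, 0.0), (0.0, ell / 2), (ell / 2, ell / 2)]:
        if len({cell(p, ell, offset) for p in points}) == 1:
            return offset
    return None
```

For a triangle with all edge lengths at most $\ell/2$, this search never fails, since a square of side length $\ell/2$ enclosing the triangle always fits inside a cell of one of the four grids.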
\begin{figure} \begin{center} \input{figs/grids} \end{center} \caption{The four shifted grids, with a cell from each grid shown in red, orange, green, and blue, respectively. Every square with side length at most $\ell/2$ is wholly contained in a single grid cell.} \label{fig:grids} \end{figure} \begin{lemma}\label{lem:trianglegridcell} Let $\Delta$ be a triangle formed by three vertices $a,b,c \in \mathset{R}^2$ such that each edge of $\Delta$ has length at most $\ell/2$. There is a cell $\sigma \in \bigcup_{i = 1}^4 G_i$ with $a, b, c \in \sigma$. \end{lemma} \begin{proof} We can enclose $\Delta$ with a square of side length $\ell/2$. This square must be completely contained in a cell of one of the four grids, see Figure~\ref{fig:grids}. \end{proof} We go through all nonempty grid cells $\sigma \in \bigcup_{i=1}^4 G_i$, and we search for a triangle in the disk graph $D(S \cap \sigma)$ induced by the sites in $\sigma$, with Theorem~\ref{thm:unweightedtriangle}. Since each site lies in $O(1)$ grid cells, and since we can compute the grid cells for a given site in $O(1)$ time (using the floor function), the total running time is $O(n \log n)$. If a triangle is found, we return YES, since the cells have diameter $W/3$ and thus such a triangle has perimeter at most $W$. If no triangle is found, Lemma~\ref{lem:trianglegridcell} implies that any triangle in $D(S)$ has one side of length more than $\ell/2$ and hence at least one vertex with associated radius at least $\ell/4$. We call a site $s \in S$ \emph{large} if $r_s > \ell/4$. A simple volume argument bounds the number of large sites in a grid cell. \begin{lemma} \label{lem:volumearg} Let $\sigma \in \bigcup_{i = 1}^4 G_i$ be a nonempty grid cell, and suppose that $D(S \cap \sigma)$ does not contain a triangle. Then $\sigma$ contains at most $18$ large sites. \end{lemma} \begin{proof} Suppose $\sigma$ contains at least $19$ large sites. We cover $\sigma$ with $3 \times 3$ congruent squares of side length $\ell/3$. 
Then, at least one square $\tau$ contains at least $\lceil 19/9 \rceil = 3$ large sites. The associated radius of a large site is more than $\ell/4$ and each square has diameter $(\sqrt{2}/3)\ell < \ell/2$, so the large sites in $\tau$ form a triangle in $D(S \cap \sigma)$, a contradiction. \end{proof} Let $\sigma \in G_i$, $i \in \{1,\dots, 4\}$, be a grid cell. The \emph{neighborhood} $N(\sigma)$ of $\sigma$ is the $5 \times 5$ block of cells in $G_i$ centered at $\sigma$. Since the diameter of a grid cell is $W/3$, any two sites $u, v \in S$ that form a triangle of perimeter at most $W$ with a site $s \in S \cap \sigma$ must be in $N(\sigma)$. Let $S_\ell \subseteq S$ denote the large sites. At this stage, we know that any triangle in $D(S)$ has at least one vertex in $S_\ell$. By Lemma~\ref{lem:volumearg}, for any $\sigma \in \bigcup_{i = 1}^4 G_i$, we have $|S_\ell \cap \bigcup_{\tau \in N(\sigma)} \tau| = O(1)$. Thus, to detect a triangle of perimeter at most $W$ with at least two large vertices, we proceed as follows: for each $i \in \{1, \dots, 4\}$ and each non-empty cell $\sigma \in G_i$, iterate over all large sites $s$ in $\sigma$, over all large sites $t$ in $N(\sigma)$, and over all (not necessarily large) sites $u$ in $N(\sigma)$. Check whether $stu$ is a triangle of perimeter at most $W$. If so, return YES. Since the sites in each grid cell are examined $O(1)$ times for $O(1)$ pairs of large sites, the total time is $O(n)$. It remains to detect triangles of perimeter at most $W$ with exactly one large vertex. We iterate over all grid cells $\sigma \in \bigcup_{i=1}^4 G_i$, and we compute $D(S \cap \sigma)$. Since $D(S \cap \sigma)$ contains no triangle, Lemma~\ref{lem:triangle_planar} shows that $D(S \cap \sigma)$ is plane, has $O(|S \cap \sigma|)$ edges and can be constructed in time $O(|S \cap \sigma| \log |S \cap \sigma|)$.
For every edge $st \in D(S \cap \sigma)$ with both endpoints in $S \setminus S_\ell$, we iterate over all large sites $u$ in $N(\sigma)$ and we test whether $stu$ makes a triangle in $D(S)$ with perimeter at most $W$. If so, we return YES. By Lemma~\ref{lem:volumearg}, this takes $O(|S \cap \sigma|)$ time, so the total running time is $O(n \log n)$. If there is a triangle of perimeter at most $W$ with exactly one vertex in $S_\ell$, the edge with both endpoints in $S \setminus S_\ell$ has length at most $\ell/2$ and thus must lie in a single grid cell $\sigma \in \bigcup_{i=1}^4 G_i$. To summarize: \begin{lemma} \label{lem:decision} Let $D(S)$ be a disk graph on $n$ sites, and let $W > 0$. We can decide in $O(n \log n)$ worst-case time whether $D(S)$ contains a triangle of perimeter at most $W$. \end{lemma} We employ the following general lemma due to Chan~\cite{Chan99}. Let $\Pi$ be a \emph{problem space}, and for a problem $P \in \Pi$, let $w(P) \in \mathset{R}$ be its \emph{optimum} and $|P| \in {\mathbb N}$ be its \emph{size}. \begin{lemma}[Lemma~2.1 in~\cite{Chan99}] \label{lem:chan} Let $\alpha < 1$, $\varepsilon > 0$, and $r \in {\mathbb N}$ be constants, and let $\delta(\cdot)$ be a function such that $\delta(n)/n^\varepsilon$ is monotone increasing in $n$. Given any optimization problem $P \in \Pi$ with optimum $w(P)$, suppose that within time $\delta(|P|)$, (i) we can decide whether $w(P) < t$, for any given $t \in \mathset{R}$, and (ii) we can construct $r$ subproblems $P_1, \dots, P_r$, each of size at most $\lceil \alpha|P|\rceil$, so that \[ w(P) = \min\{w(P_1), \dots, w(P_r)\}. \] Then we can compute $w(P)$ in total expected time $O(\delta(|P|))$. \end{lemma} Now the following main theorem of this section is immediate. \begin{theorem} \label{thm:shortesttriangle} Let $D(S)$ be a weighted disk graph on $n$ sites. We can compute a shortest triangle in $D(S)$ in $O(n \log n)$ expected time, if one exists. 
\end{theorem} \begin{proof} We apply Lemma~\ref{lem:chan}. For Condition~(i), we use Lemma~\ref{lem:decision}. For Condition~(ii), we construct four subsets $S_0, \dots, S_3$ of $S$ as follows: enumerate the sites in $S$ as $S = \{s_1, \dots, s_n\}$, and put the site $s_i$ into all sets $S_j$ with $i \not\equiv j \pmod 4$. Then, for any three sites $a,b,c \in S$, there is at least one subset $S_j$ with $a,b,c \in S_j$. Now, Lemma~\ref{lem:chan} with $\alpha = 3/4$, $\varepsilon = 1$, $r = 4$, and $\delta = O(n \log n)$ implies the theorem. \end{proof} \section{Computing the Girth of a Disk Graph} \label{sec:disk_girth} We extend the results from Section~\ref{sec:disk_triangle} to the girth. The unweighted case is easy: if $D(S)$ is not plane, the girth is $3$, by Lemma~\ref{lem:triangle_planar}. If $D(S)$ is plane, we use the algorithm for planar graphs~\cite{ChangLu13}. The weighted case is harder. If $D(S)$ is plane, we use the algorithm for planar graphs~\cite{lacki_min-cuts_2011}. If not, Theorem~\ref{thm:shortesttriangle} gives a shortest triangle $\Delta$ in $D(S)$. However, there could be cycles with at least four edges that are shorter than $\Delta$. To address this, we use $\Delta$ to split $D(S)$ into sparse pieces where a shortest cycle can be found efficiently. \subsection{The Unweighted Case} \label{sec:disk_girth_unweighted} Chang and Lu~\cite[Theorem~1.1]{ChangLu13} showed how to find the girth of an unweighted planar graph with $n$ vertices in $O(n)$ time. Hence, we obtain a simple extension of Theorem~\ref{thm:unweightedtriangle}. \begin{theorem} \label{thm:disk_unweighted_girth} Let $D(S)$ be a disk graph for a set $S$ of $n$ sites. We can compute the unweighted girth of $D(S)$ in $O(n \log n)$ worst-case time. \end{theorem} \begin{proof} We proceed as in Theorem~\ref{thm:unweightedtriangle}. If $D(S)$ is not plane, the girth is $3$. 
If $D(S)$ is plane, we apply the algorithm of Chang and Lu~\cite[Theorem~1.1]{ChangLu13} to an explicit representation of $D(S)$. \end{proof} \subsection{The Weighted Case} We describe how to find the shortest cycle through a given vertex in a weighted graph with certain properties. This is then used to compute the weighted girth of a disk graph. Let $G$ be a graph with nonnegative edge weights so that all shortest paths and cycles in $G$ have pairwise distinct lengths and so that for all edges $uv$, the shortest path from $u$ to $v$ is the edge $uv$. We present a deterministic algorithm that, given $G$ and a vertex $s$, computes the shortest cycle in $G$ containing $s$, if it exists.\footnote{Even though this seems to be a simple fact, we could not locate a previous reference for this.} A simple randomized algorithm can also be found in Yuster~\cite[Section~2]{Yuster11}. The next lemma states a structural property of the shortest cycle through $s$. It resembles Lemma~1 of Roditty and V.~Williams~\cite{RodittyVa11} that deals with an overall shortest cycle in $G$. \begin{lemma} \label{lem:cycle_struct} The shortest cycle in $G$ that contains $s$ consists of two paths in the shortest path tree of $s$, and one additional edge. \end{lemma} \begin{proof} Let $C = v_0, v_1, v_2, \dots, v_{\ell-1}, v_\ell$ be the shortest cycle in $G$ containing $s$, where all vertices $v_i$, $0 \leq i \leq \ell - 1$ are pairwise distinct, $\ell \geq 3$, and $v_0 = v_\ell = s$. For $v_i \in C$, let $d_1(v_i)$ be the length of the path $s, v_1, \dots, v_i$, and let $d_2(v_i)$ be the length of the path $v_i, v_{i+1}, \dots, s$. Let $\pi(v_i)$ denote the shortest path from $s$ to $v_i$, and let $|v_i v_{i+1}|$ be the length of the edge $v_iv_{i+1}$. Suppose that $C$ is not of the desired form. Let $v_kv_{k+1}$ be the edge on $C$ with $d_1(v_k) < |v_kv_{k+1}| + d_2(v_{k+1})$ and $d_2(v_{k+1}) < d_1(v_{k}) + |v_k v_{k+1}|$. 
By our assumptions on $G$, the edge $v_k v_{k+1}$ exists and $k \neq 0, \ell - 1$. We distinguish two cases, illustrated in \cref{fig:dijkstra}. \begin{figure} \center \input{figs/dijkstra} \caption{The two cases for $\pi(v_k)\cap \pi(v_{k+1})$. On the left, the paths are disjoint; on the right, the shortest paths share a prefix.} \label{fig:dijkstra} \end{figure} First, suppose that $\pi(v_k) \cap \pi(v_{k+1}) = \{s\}$. Consider the cycle $C'$ given by $\pi(v_k)$, the edge $v_k v_{k+1}$, and $\pi(v_{k+1})$. Since $s \neq v_k, v_{k+1}$ and since the edge $v_{k}v_{k+1}$ appears neither on $\pi(v_k)$ nor on $\pi(v_{k+1})$, it follows that $C'$ is a proper cycle. Furthermore, by assumption, $C'$ is strictly shorter than $C$, because $\pi(v_k)$ is shorter than $d_1(v_k)$ or $\pi(v_{k+1})$ is shorter than $d_2(v_{k+1})$. This contradicts our assumption on $C$. Second, suppose that $|\pi(v_k) \cap \pi(v_{k+1})| \geq 2$. Since $\pi(v_k)$ and $\pi(v_{k+1})$ are shortest paths, their intersection is a prefix of each path. By the assumption\footnote{Namely, that for all edges $uv$, the shortest path from $u$ to $v$ is the edge $uv$.} on $G$, at least one of $v_1, v_{\ell-1}$ is not in $\pi(v_k) \cup \pi(v_{k+1})$. Without loss of generality, this vertex is $v_1$. Let $j \geq 1$ be the smallest index such that $v_j \in \pi(v_k) \cup \pi(v_{k+1})$. We have $j \in \{2, \dots, k\}$. Consider the cycle $C'$ that starts at $s$, follows $C$ along $v_1, v_2, \dots$ up to $v_j$, and then returns along $\pi(v_k)$ or $\pi(v_{k+1})$ to $s$. By construction, $C'$ is a proper cycle. Furthermore, $C' \neq C$, because even if $j = k$, the path $\pi(v_k)$ cannot contain the part of $C$ from $v_{k+1}$ to $s$, due to the choice of $k$. Finally, $C'$ is strictly shorter than $C$, because the second part of $C'$ from $v_j$ to $s$ follows a shortest path and is thus strictly shorter than $d_2(v_j)$. Again, $C'$ contradicts our choice of $C$.
\end{proof} \begin{theorem} \label{thm:dijkstra_cycle} Let $G = (V, E)$ be a weighted graph with $n$ vertices and $m$ edges that has the properties given at the beginning of this section. Let $s \in V$. We can compute the shortest cycle in $G$ that contains $s$ in $O(n \log n + m)$ time, if it exists. \end{theorem} \begin{proof} We find the shortest path tree $T$ for $s$ in $G$, and we traverse $T$ to find for each vertex $v$ in $T \setminus \{s\}$ the second vertex $b[v]$ on the shortest path from $s$ to $v$ (the vertex following $s$). Then, we iterate over all edges in $E$ that are not in $T$. For each such $e = uv$, we check if $b[u] \neq b[v]$. If so, $e$ closes a cycle in $T$ that contains $s$. We determine the length of this cycle (in $O(1)$ time). We return the shortest cycle found in this way. The correctness follows from Lemma~\ref{lem:cycle_struct}. As for the running time, it takes $O(n \log n + m)$ time to find the shortest path tree for $s$ with Dijkstra's algorithm and Fibonacci heaps~\cite[Chapter~24.3]{CormenLeRiSt09}. After that, it takes $O(n)$ time to compute the nodes $b[v]$, for $v \in T \setminus \{s\}$, and $O(m)$ time to iterate over the edges not in $T$. The length of the cycle associated with such an edge $e$ can be computed in $O(1)$ time, using the shortest path distances in $T$ and the length of $e$. \end{proof} Let $D(S)$ be a weighted disk graph on $n$ sites. A careful combination of the tools developed so far gives an algorithm for the weighted girth of $D(S)$. \begin{theorem} \label{thm:disk_girth_weighted} Given a weighted disk graph $D(S)$ on $n$ sites, we can compute the weighted girth of $D(S)$ in $O(n\log n)$ expected time. \end{theorem} \begin{proof} We use Theorem~\ref{thm:shortesttriangle} to find the shortest triangle in $D(S)$, if it exists, in $O(n \log n)$ expected time. If $D(S)$ has no triangle, it is plane by Lemma~\ref{lem:triangle_planar}. 
As in the proof of Theorem~\ref{thm:unweightedtriangle}, we can then explicitly construct $D(S)$ in $O(n\log n)$ time with a plane sweep. We determine the girth of $D(S)$ using the algorithm of {\L}{\c a}cki\xspace and Sankowski~\cite[Section~5]{lacki_min-cuts_2011}, in additional $O(n \log\log n)$ time, and are done. Now, suppose $D(S)$ contains a triangle, and let $W$ be the length of the shortest triangle in $D(S)$, an upper bound for the girth of $D(S)$. As in Section~\ref{sec:disk_triangle_weighted}, we set $\ell = W/(3\sqrt{2})$, and we let $G$ be the grid of side length $\ell$ that has the origin $(0,0)$ as a vertex. We call a site $s \in S$ \emph{large} if $r_s > \ell/4$, and we let $S_\ell \subseteq S$ be the set of large sites. We need to check whether $D(S)$ contains a cycle with more than three vertices and length less than $W$. If so, we must find the shortest such cycle. First, we consider cycles in the induced subgraph $D(S \setminus S_\ell)$. The graph $D(S \setminus S_\ell)$ has no triangle, as such a triangle would have length less than $3 \cdot \ell/2 < W$. Thus, by Lemma~\ref{lem:triangle_planar}, $D(S \setminus S_\ell)$ is plane. We can directly compute the weighted girth of $D(S \setminus S_\ell)$ in $O(n \log n)$ time with a plane sweep and the algorithm of {\L}{\c a}cki\xspace and Sankowski~\cite[Section~5]{lacki_min-cuts_2011}. Let $\Delta_1$ be the weighted girth of $D(S \setminus S_\ell)$. Next, we consider cycles that have at least one large site. Let $\sigma$ be a cell of $G$. The induced subgraph $D(S \cap \sigma)$ has no triangle, since by the choice of $\ell$, such a triangle would have length less than $W$. Thus, Lemma~\ref{lem:volumearg} shows $|S_\ell \cap \sigma| = O(1)$. By the triangle inequality, the maximum distance between any two sites in a cycle of length less than $W$ is less than $W/2$.
Thus, any such cycle containing a site $s \in S \cap \sigma$ completely lies in the $7 \times 7$ neighborhood $N(\sigma)$ around $\sigma$. Since $N(\sigma)$ has $O(1)$ cells and since each cell contains $O(1)$ large sites, there are $O(1)$ large sites in $S_\sigma = S \cap (\bigcup_{\tau \in N(\sigma)} \tau)$. Now, for each grid cell $\sigma$, we consider all large sites $s \in S_\ell \cap \sigma$. We must find the shortest cycle through $s$ in the subgraph $D(S_\sigma)$ of $D(S)$ in $N(\sigma)$. Let $n_\sigma = |S_\sigma|$. Since the graph induced by $S_\sigma \setminus S_\ell$ is plane and since $|S_\sigma \cap S_\ell| = O(1)$, the graph $D(S_\sigma)$ has $O(n_\sigma)$ edges. Hence, we can construct $D(S_\sigma)$ and apply Theorem~\ref{thm:dijkstra_cycle} to compute the shortest cycle in $D(S_\sigma)$ through $s$ in total time $O(n_\sigma \log n_\sigma)$. Let $\Delta_2$ be the smallest length of such a cycle, over all grid cells $\sigma$ and all large sites $s \in S_\ell \cap \sigma$. Since each small site is involved in $O(1)$ neighborhoods, we get $\sum_{\sigma \in G} n_\sigma = O(n)$, and the overall running time of this step is $O(n\log n)$. Finally, we return $\min \{W, \Delta_1, \Delta_2\}$. If we also want the shortest cycle itself, we simply maintain appropriate pointers in the algorithm. The total expected running time is $O(n \log n)$. \end{proof} \section{Finding a Triangle in a Transmission Graph}\label{sec:trianglesdir} Given a transmission graph $T(S)$ on $n$ sites, we want to decide if $T(S)$ contains a directed triangle. We first describe an inefficient algorithm for this problem, and then we will explain how to implement it in $O(n \log n)$ expected time. The algorithm iterates over each directed edge $e = st$ with $r_t \geq r_s$, and it performs two tests: first, for each directed edge $tu$ with $r_u \geq r_t/2$, it checks if $us$ is an edge in $T(S)$, i.e., if $s \in D_u$. If so, the algorithm reports the triangle $stu$. 
Second, the algorithm tests if there is a site $u$ such that $r_u \in [r_s, r_t/2)$ and such that $us$ is an edge in $T(S)$, i.e., such that $s \in D_u$. If such a $u$ exists, it reports the triangle $stu$. If both tests fail for each edge $e$, the algorithm reports that $T(S)$ contains no triangle. The next lemma shows that the algorithm is correct. \begin{lemma} \label{lem:strategy_correct} A triple $stu$ reported by the algorithm is a triangle in $T(S)$. Furthermore, if $T(S)$ contains a triangle, the algorithm will find one. \end{lemma} \begin{figure} \center \input{figs/requirementcheck} \caption{We do not need to check $u \in D_t$.} \label{fig:requrementecheck} \end{figure} \begin{proof} Let $stu$ be a triple reported by the algorithm. The algorithm explicitly checks that $st$ and $us$ are edges in $T(S)$. It remains to consider $tu$. If $r_u \geq r_t/2$, then $stu$ is reported by the first test, and the algorithm explicitly checks that $tu$ is an edge in $T(S)$. If $r_u < r_t/2$, then $stu$ is reported by the second test. We have $r_s < r_t/2$, since the second test ensures that $r_s \leq r_u$ and $r_u < r_t/2$. Furthermore, $st$ and $us$ are edges of $T(S)$, so $t \in D_s$ and $s \in D_u$. Since the second test ensures that $r_u < r_t/2$, it follows from the triangle inequality that \[ |tu| \leq |ts| + |su| \leq r_s + r_u < r_t/2 + r_t/2 = r_t, \] so $u \in D_t$, and $tu$ is an edge in $T(S)$. Thus, the reported triple $stu$ is a triangle in $T(S)$. Now suppose that $T(S)$ contains a triangle $stu$, labeled such that $r_s \leq \min\{r_t, r_u\}$. If $r_u \geq r_t/2$, then $stu$ is found by the first test for the edge $st$. If $r_u < r_t/2$, we have $s \in D_u$ and $r_u \in [r_s, r_t/2)$. Thus, the second test will be successful for the edge $st$, and the algorithm will report a triple $stu'$, such that $s \in D_{u'}$ and $r_{u'} \in [r_s, r_t/2)$ (the site $u'$ might be different from $u$). The first part of the proof shows that $stu'$ is a triangle in $T(S)$.
\end{proof} There are several challenges for making the algorithm efficient. First of all, there might be many edges $st$ with $r_t \geq r_s$. However, the following lemma shows that if there are $\omega(n)$ such edges, the transmission graph $T(S)$ must contain a triangle. \begin{figure} \center \input{figs/diskgridsubdivision} \caption{Three disks with radius at least $r/4$ in the same grid cell form a clique.} \label{fig:subdiv} \end{figure} \begin{lemma}\label{lem:disks_close_together} There is an absolute constant $\alpha$ so that for any $r > 0$, if there is an $r \times r$ square $\sigma$ that contains more than $\alpha$ sites $s \in S$ with $r_s \geq r/4$, then $T(S)$ has a directed triangle. \end{lemma} \begin{proof} We cover $\sigma$ with a $6 \times 6$ grid of side length $r/6$; see \Cref{fig:subdiv}. There are $36$ grid cells. For every $s \in S \cap \sigma$ with $r_s \geq r/4$, the disk $D_s$ completely covers the grid cell containing $s$. If $\sigma$ contains more than $\alpha = 72$ sites $s$ with $r_s \geq r/4$, then one grid cell contains at least three such sites. These sites form a directed triangle in $T(S)$. \end{proof} Thus, to implement the algorithm, we must solve two range searching problems. \begin{description} \item[(R1)] EITHER determine that for every site $s \in S$, there are at most $\alpha$ outgoing edges $st$ with $r_t \geq r_s/2$ and report all these edges; OR find a square $\sigma$ of side length $r > 0$ that contains more than $\alpha$ sites $s \in S$ with $r_s \geq r/4$. \item[(R2)] Given $O(n)$ query triples $(s, r_1, r_2)$ with $s \in S$ and $0 < r_1 < r_2$, find a site $u \in S$ such that there is a query triple $(s, r_1, r_2)$ with $u \neq s$, $r_u \in [r_1, r_2)$, and $s \in D_u$; or report that no such site exists. \end{description} The query (R1) indeed always has a valid outcome: suppose there is a site $s \in S$ with more than $\alpha$ outgoing edges $st$ with $r_t \geq r_s/2$.
Then, all the endpoints $t$ lie in $D_s$, so the square $\sigma$ centered at $s$ with side length $r = 2r_s$ contains more than $\alpha$ sites with associated radius at least $r/4$. The next theorem shows that we can detect a triangle in $T(S)$ with linear overhead in addition to the time needed for answering (R1) and (R2). \begin{theorem} \label{thm:userange} If (R1) and (R2) can be solved in time $R(n)$ for input size $n$, we can find a directed triangle in a transmission graph $T(S)$ on $n$ sites in time $R(n) + O(n)$, if it exists. \end{theorem} \begin{proof} First, we perform a range query (R1). If it reports a square $\sigma$ of side length $r$ that contains more than $\alpha$ sites $s \in S$ with $r_s \geq r/4$, we scan $S$ to find a set $S'$ of $\alpha + 1$ such sites. By Lemma~\ref{lem:disks_close_together}, $T(S')$ contains a triangle, and we find it in $O(1)$ time by testing all triples in $S'$. Otherwise, (R1) reports the set $E'$ of all edges $st$ in $T(S)$ with $r_t \geq r_s/2$, where $|E'| = O(n)$. We go through all edges $e = st$ in $E'$ with $r_t \geq r_s$, and we check if there is an edge $tu$ in $E'$ such that $us$ is an edge in $T(S)$, i.e., such that $s \in D_u$. If so, we report the triangle $stu$. This takes care of the first test in the algorithm, and we check only $O(n)$ triples, because for each site in $S$, there are at most $\alpha$ outgoing edges in $E'$. If we have not been successful, we again go through all edges $e = st$ in $E'$, and if $r_t > 2r_s$, we create the triple $(s, r_s, r_t/2)$. We perform a range query (R2) on the resulting set of $O(n)$ triples. If (R2) finds a site $u \in S$ such that for a query triple $(s, r_s, r_t/2)$, we have $u \neq s$, $r_u \in [r_s, r_t/2)$, and $s \in D_u$, we report the triangle $stu$. Otherwise, we report that $T(S)$ does not contain a triangle. By Lemma~\ref{lem:strategy_correct}, we correctly report a triangle in $T(S)$, if it exists.
The time for the additional steps is $O(n)$, so the total running time is $R(n) + O(n)$. \end{proof} Using existing methods~\cite{WillardLu85}, it is easy to solve (R1) and (R2) in $O(n \log^2 n)$ time. However, a better solution is possible. In the next section, we will implement (R1) and (R2) in $O(n \log n)$ expected time. \begin{theorem}\label{thm:unweightedtriangledir} \label{thm:transmission_triangle_unweighted} Let $T(S)$ be a transmission graph on $n$ sites. We can find a directed triangle in $T(S)$ in expected time $O(n \log n)$, if it exists. \end{theorem} \section{Batched Range Searching} The range queries must handle subsets of sites whose associated radii lie in certain intervals: a query $s$ in (R1) concerns sites $t \in S$ such that $r_t \geq r_s/2$; and a query $(s, r_1, r_2)$ in (R2) concerns sites $t$ such that $r_t \in [r_1, r_2)$. Using a standard approach~\cite{dBCvKO,WillardLu85}, we subdivide each such \emph{query interval} into $O(\log n)$ pieces from a set of \emph{canonical intervals}. For this, we build a balanced binary tree $B$ whose leaves are the sites of $S$, sorted by increasing associated radius. For each vertex $v \in B$, let the \emph{canonical interval} $\cD_v$ be the sorted list of sites in the subtree rooted at $v$. There are $O(n)$ canonical intervals. \begin{figure} \centering \input{figs/querypath} \caption{Example of a query of type (R1), assuming that \(r_{s_3}< r_{s_7}/2 \leq r_{s_4}\).} \label{fig:querypath} \end{figure} Next, we define \emph{canonical paths} and \emph{canonical nodes}. For a radius $r > 0$, the (proper) \emph{predecessor} of $r$ is the site $s \in S$ with the largest radius $r_s \leq r$ ($r_s < r$). The (proper) \emph{successor} of $r$ is defined analogously. For a query $s$ in (R1), we consider the path $\pi$ in $B$ from the root to the leaf with the proper predecessor $t$ of $r_s/2$. If $t$ does not exist (i.e., if $r_t \geq r_s/2$, for all $t \in S$), we let $\pi$ be the left spine of $B$.
We call $\pi$ the \emph{canonical path} for $s$. The \emph{canonical nodes} for $s$ are the right children of the nodes in $\pi$ that are not in $\pi$ themselves, plus possibly the last node of $\pi$, if $r_t \geq r_s/2$, for all $t \in S$; see Figure~\ref{fig:querypath}. For a query $(s, r_1, r_2)$ in (R2), we consider the path $\pi_1$ in $B$ from the root to the leaf with the proper predecessor $t_1$ of $r_1$ and the path $\pi_2$ in $B$ from the root to the leaf for the successor $t_2$ of $r_2$. Again, if $t_1$ does not exist, we take $\pi_1$ as the left spine of $B$, and if $t_2$ does not exist, we take $\pi_2$ as the right spine of $B$. Then, $\pi_1$ and $\pi_2$ are the \emph{canonical paths} for $(s, r_1, r_2)$. The \emph{canonical nodes} for $(s, r_1, r_2)$ are defined as follows: for each vertex $v$ in $\pi_1 \setminus \pi_2$, we take the right child of $v$ if it is not in $\pi_1$, and for each $v$ in $\pi_2 \setminus \pi_1$, we take the left child of $v$ if it is not in $\pi_2$. Furthermore, we take the last node of $\pi_1$ if $t_1$ does not exist, and the last node of $\pi_2$ if $t_2$ does not exist. A standard argument bounds the number and total size of the canonical intervals. \begin{lemma} \label{lem:canonical} The total size of the canonical intervals is $O(n \log n)$. The tree $B$ and the canonical intervals can be built in $O(n \log n)$ time. For any query $q$ in (R1) or (R2), there are $O(\log n)$ canonical nodes, and they can be found in $O(\log n)$ time. The canonical intervals for the canonical nodes of $q$ constitute a partition of the query interval for $q$. \end{lemma} \begin{proof} Since a site $s \in S$ appears in $O(\log n)$ canonical intervals, the total size of the canonical intervals is $O(n \log n)$. To construct $B$, we sort $S$ according to the radii $r_s$, and we build $B$ on top of the sorted list.
To find the (sorted) canonical intervals, we perform a postorder traversal of $B$, copying and merging the child intervals for each parent node. The bound on the canonical nodes for $q$ follows, since $B$ has height $O(\log n)$. To find them, we trace the canonical paths for $q$ in $B$. The partition property holds by construction. \end{proof} \subsection{Queries of Type (R1)} \label{sec:rone} We build a compressed quadtree on $S$, and we perform the range searches in the compressed quadtree. It is possible to compute a compressed quadtree for each canonical interval without logarithmic overhead. Since Lemma~\ref{lem:disks_close_together} gives us plenty of freedom in choosing the squares for our range queries, we take squares from the grid that underlies the quadtree. This allows us to reduce the range searching problem to predecessor search in a linear list, a task that can be accomplished by one top-down traversal of $B$. Details follow. \subparagraph*{Hierarchical grids, Z-order, compressed quadtrees.} We translate and scale $S$ (and the associated radii), so that $S$ lies in the interior of the unit square $U = [0,1]^2$ and so that all radii are at most $\sqrt{2}$. We define a sequence of hierarchical grids that subdivide $U$. The grid $G_0$ consists of the single cell $U$. The grid $G_i$, $i \geq 1$, consists of the $2^{2i}$ square cells with side length $2^{-i}$ and pairwise disjoint interiors that cover $U$. The hierarchical grids induce an infinite four-regular tree ${\mathcal T}$: the vertices are the cells of $\G = \bigcup_{i = 0}^\infty G_i$. The unit square $U$ is the root, and for $i = 1, \dots$, a cell $\sigma$ in $G_i$ is the child of the cell in $G_{i-1}$ that contains it. We make no explicit distinction between a vertex of ${\mathcal T}$ and its corresponding cell. \begin{figure} \center \input{figs/zorder} \caption{$Z$-Order.
On the very right we have \(\sigma \leq_Z \tau \leq_Z \tilde{\sigma}\).} \label{fig:zorder} \end{figure} The \emph{$Z$-order} $\leq_Z$ is a total order on the cells of $\G$; see~\cite{BuchinMu11} for more details. Let $\sigma, \tau \in \G$. If $\sigma \subseteq \tau$, then $\sigma \leq_Z \tau$; and if $\tau \subseteq \sigma$, then $\tau \leq_Z \sigma$. If $\sigma$ and $\tau$ are unrelated in ${\mathcal T}$, let $\rho$ be the lowest common ancestor of $\sigma$ and $\tau$ in ${\mathcal T}$, and let $\sigma'$ and $\tau'$ be the children of $\rho$ with $\sigma \subseteq \sigma'$ and $\tau \subseteq \tau'$. We set $\sigma \leq_Z \tau$ if $\sigma'$ is before $\tau'$ in the order shown in \cref{fig:zorder}; and $\tau \leq_Z \sigma$, otherwise. The next lemma shows that given $\sigma, \tau \in \G$, we can decide if $\sigma \leq_Z \tau$ in constant time. \begin{lemma}[Chapter~2 in Har-Peled~\cite{har-peled_geometric_2008}] \label{obs:compareconst} Suppose the floor function and the first differing bit in the binary representations of two given real numbers can be computed in $O(1)$ time. Then, we can decide in $O(1)$ time for two given cells $\sigma, \tau \in \G$ whether $\sigma \leq_Z \tau$ or $\tau \leq_Z \sigma$. \end{lemma} For a site $s \in S$, let $\sigma_s$ be the largest cell in $\G$ that contains only $s$. The \emph{quadtree} for $S$ is the smallest connected subtree of ${\mathcal T}$ that contains the root $U$ and all cells $\sigma_s$, for $s \in S$. The \emph{compressed quadtree} $\C$ for $S$ is obtained from the quadtree by contracting any maximal path of vertices with only one child into a single edge. Vertices that were at the top of such a path are now called \emph{compressed} vertices. The compressed quadtree for $S$ has $O(n)$ vertices, and it can be constructed in $O(n\log n)$ time (see, e.g.,~\cite[Appendix~A]{BuchinLoMoMu11} and \cite{har-peled_geometric_2008}).
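The comparison behind Lemma~\ref{obs:compareconst} can be realized with the standard ``most significant differing bit'' trick. The following Python sketch (illustrative, not from the paper) compares two points with non-negative integer coordinates in $Z$-order, which suffices for cells addressed by their grid coordinates; the convention for which coordinate is more significant must be matched to the child order of Figure~\ref{fig:zorder}.

```python
def less_msb(a, b):
    """True if the highest set bit of a is strictly below that of b."""
    return a < b and a < (a ^ b)

def z_less(p, q):
    """Z-order comparison of two points with non-negative integer
    coordinates: compare in the dimension whose coordinates differ
    in the most significant bit."""
    msd, dim = 0, 0
    for d in range(len(p)):
        diff = p[d] ^ q[d]
        if less_msb(msd, diff):
            msd, dim = diff, d
    return p[dim] < q[dim]
```

With floor and first-differing-bit primitives, the same idea extends to cells of different sizes, as in the lemma.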
The \emph{linearized compressed quadtree} $\cL$ for $S$ is the sorted sequence of cells obtained by listing the nodes of $\C$ according to a postorder traversal, where the children of a node $\sigma \in \C$ are visited according to the $Z$-order from Figure~\ref{fig:zorder}. The cells in $\cL$ appear in increasing $Z$-order, and range searching for a given cell $\sigma \in \G$ reduces to a simple predecessor search in $\cL$, as is made explicit in the following lemma. \begin{lemma}\label{lem:quadtreeorder} Let $\sigma$ be a cell of $\G$, and let $\cL$ be the linearized compressed quadtree on $S$. Let $\tau = \max_Z \{\rho \in \cL \mid \rho \leq_Z \sigma\}$ be the \emph{$Z$-predecessor} of $\sigma$ in $\cL$ ($\tau = \emptyset$, if the predecessor does not exist). Then, if $\sigma \cap \tau = \emptyset$, then also $\sigma \cap S = \emptyset$, and if $\sigma \cap \tau \neq \emptyset$, then $\sigma \cap S = \tau \cap S$. \end{lemma} \begin{proof} Let $\C$ be the compressed quadtree on $S$, and let $\C_\sigma = \{ \tau \in \C \mid \tau \subseteq \sigma\}$ be the cells in $\C$ that are contained in $\sigma$. If $\C_\sigma$ is non-empty, then $\C_\sigma$ is a connected subtree of $\C$. Let $\tau$ be the root of this subtree. Then, $\tau = \max_Z \{\rho \in \C_\sigma\}$, and $\tau \leq_Z \sigma$. Furthermore, all other cells in $\C \setminus \C_\sigma$ are either smaller than all cells in $\C_\sigma$ or larger than $\sigma$. Thus, $\tau$ is the $Z$-predecessor of $\sigma$ in $\cL$, and $\sigma \cap S = \tau \cap S \neq \emptyset$. Otherwise, if $\C_\sigma = \emptyset$, the $Z$-predecessor of $\sigma$ in $\cL$ either does not exist or is disjoint from $\sigma$. Thus, in this case, we have $\emptyset = \sigma \cap \tau = \sigma \cap S$. \end{proof} \subparagraph*{The search algorithm.} For a site $s \in S$, we define the \emph{neighborhood} $N(s)$ of $s$ as all cells in $\G$ with side length $2^{\lfloor \log_2{r_s}\rfloor}$ that intersect $D_s$.
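A possible way to enumerate $N(s)$, with grid cells of side length $2^{\lfloor \log_2 r_s\rfloor}$ addressed by the integer coordinates of their lower-left corners (an illustrative sketch under these assumptions, not code from the paper):

```python
from math import floor, log2

def neighborhood(s, r_s):
    """Cells of side 2^floor(log2 r_s) that intersect the disk D_s.
    A cell (i, j) covers [i*side, (i+1)*side) x [j*side, (j+1)*side)."""
    side = 2.0 ** floor(log2(r_s))
    x, y = s
    cells = set()
    for i in range(floor((x - r_s) / side), floor((x + r_s) / side) + 1):
        for j in range(floor((y - r_s) / side), floor((y + r_s) / side) + 1):
            # keep the cell iff its closest point to s is within distance r_s
            cx = min(max(x, i * side), (i + 1) * side)
            cy = min(max(y, j * side), (j + 1) * side)
            if (cx - x) ** 2 + (cy - y) ** 2 <= r_s ** 2:
                cells.add((i, j))
    return cells
```

Since the cell side length exceeds $r_s/2$, the candidate range spans at most five cells per axis, which is the $5 \times 5$ bound of Lemma~\ref{lem:neighbor_size} below.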
The neighborhood will be used to approximate $D_s$ for the range search in the quadtrees. \begin{lemma}\label{lem:neighbor_size} There is a constant $\beta$ such that $|N(s)| \leq \beta$ for all $s \in S$. \end{lemma} \begin{proof} We have $r_s/2 < 2^{\lfloor \log_2{r_s}\rfloor}$, and a $5\times5$ grid with cells of side length $r_s/2$ covers $D_s$, no matter where $s$ lies; see Figure~\ref{fig:constantneighborhood}. Thus, the lemma holds with $\beta = 25$. \end{proof} \begin{figure} \center \input{figs/constantneighborhood} \caption{The neighborhood of a site has constant size}\label{fig:constantneighborhood} \end{figure} We now show that a linearized compressed quadtree for each canonical interval can be found without logarithmic overhead. \begin{lemma} \label{lem:lin_quad} We can compute for each $v \in B$ the linearized quadtree $\cL_v$ for the sites in $\cD_v$ in $O(n \log n)$ time. \end{lemma} \begin{proof} For each $v \in B$, we build the compressed quadtree $\C_v$ for $\cD_v$, as follows: at the root, we compute the compressed quadtree $\C$ for $S$ in $O(n \log n)$ time~\cite{BuchinLoMoMu11,har-peled_geometric_2008}. Then, we traverse $B$. Given the compressed quadtree $\C_v$ for a node $v \in B$, we compute $\C_w$ for a child $w$ of $v$ as follows. We do a postorder traversal of $\C_v$. In each leaf $\nu$ of $\C_v$, we check if the site $s$ in $\nu$ is in $\cD_w$, by testing $r_s$. If not, we remove $\nu$; otherwise, we keep it. In each inner vertex $\nu$ of $\C_v$, we check if $\nu$ has any remaining children. If not, we remove $\nu$. If $\nu$ has exactly one remaining child that is not a compressed vertex, we mark $\nu$ as compressed and continue. If the only remaining child of $\nu$ is compressed, we remove this child, connect $\nu$ to its grandchild, and mark $\nu$ as compressed. This takes $O(|\C_v|)$ time and gives $\C_w$. 
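The trimming step in the proof above can be sketched as follows (illustrative Python with a minimal node type, not the paper's data structure; for simplicity, a unary path is contracted by returning the single surviving child directly rather than marking compressed vertices):

```python
class Node:
    """Minimal quadtree node: leaves store a site, inner nodes children."""
    def __init__(self, cell, children=None, site=None):
        self.cell, self.site = cell, site
        self.children = children if children is not None else []

def restrict(node, keep):
    """Postorder trimming: drop leaves whose site fails `keep`, remove
    empty inner nodes, and contract unary paths (compression). Returns
    the root of the restricted tree, or None if no site survives."""
    if node.site is not None:                      # leaf
        return node if keep(node.site) else None
    kids = [r for r in (restrict(c, keep) for c in node.children) if r]
    if not kids:
        return None
    if len(kids) == 1:                             # contract the unary path
        return kids[0]
    node.children = kids
    return node
```

Applied with `keep` testing membership in $\cD_w$, one traversal produces $\C_w$ from $\C_v$ in time linear in $|\C_v|$, as claimed.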
Once all the compressed quadtrees $\C_v$ are available, we traverse $B$ again to find the linearized compressed quadtrees $\cL_v$ by a traversal of each $\C_v$. The total time to find the $\cL_v$ is $O(n \log n + \sum_{v \in B} |\C_v|) = O(n \log n)$, since $|\C_v| = O(|\cD_v|)$, for all $v \in B$, and $\sum_{v \in B} |\cD_v| = O(n \log n)$ by Lemma~\ref{lem:canonical}. \end{proof} Using the linearized compressed quadtrees, the range searching problem can be solved by a batched predecessor search, using a single traversal of $B$. \begin{lemma} \label{lem:cquadtree_search} The range searching problem (R1) can be solved in $O(n \log n)$ time. \end{lemma} \begin{proof} We apply Lemma~\ref{lem:lin_quad} to find the linearized quadtree for every canonical interval in \(B\). Remember that the queries in (R1) are the complete set \(S\). We split each query $s \in S$ into subqueries, by considering the neighborhood $N(s)$ of $s$. Let ${\mathcal Q}' = \bigcup_{s \in S} \big\{(\sigma, s) \mid \sigma \in N(s)\big\}$ be the set of \emph{split queries}. The purpose of the split queries is to approximate the associated disks for the query sites by cells from the hierarchical grid. By Lemma~\ref{lem:neighbor_size}, $|{\mathcal Q}'| = O(n)$. We now perform range queries for the cells in the split queries. For this, we first sort the elements of ${\mathcal Q}'$ in the $Z$-order of their first components, in $O(n \log n)$ time. Next, we distribute the split queries along their canonical paths in $B$. For each $v \in B$, let ${\mathcal Q}'_v$ be the sorted sublist of queries in ${\mathcal Q}'$ (in the $Z$-order of the first component) that have $v$ on their canonical path. By Lemmas~\ref{lem:canonical} and~\ref{lem:neighbor_size}, we have $\sum_{v \in B} |{\mathcal Q}'_v| = O(n \log n)$. To find the lists ${\mathcal Q}'_v$ for all $v \in B$ in $O(n \log n)$ time, we perform a pre-order traversal of $B$, computing the lists for the children from the lists of the parents.
More precisely, given the sorted list ${\mathcal Q}'_v$ for a node $v \in B$, we can find the sorted list ${\mathcal Q}'_w$ for a child $w$ of $v$ in time $O(|{\mathcal Q}'_v|)$ by scanning ${\mathcal Q}'_v$ from left to right and by copying the elements that also appear in ${\mathcal Q}'_w$. Finally, we distribute the split queries into their canonical nodes. The canonical nodes of a query are children of the nodes on its canonical path. Thus, we can find for each $v \in B$ the sorted list ${\mathcal Q}''_v$ of split queries with $v$ as a canonical node as follows: we iterate over all non-root nodes $v \in B$, and we scan the list ${\mathcal Q}'_w$ of the parent node $w$ of $v$. We copy all queries that have $v$ as a canonical node from ${\mathcal Q}'_w$ into ${\mathcal Q}''_v$. This takes $O(n \log n)$ time. Next, we iterate over all $v \in B$, and we merge the lists ${\mathcal Q}''_v$ with the lists $\cL_v$, in $Z$-order. This takes $O\big(\sum_{v \in B} |\cL_v| + |{\mathcal Q}''_v|\big) = O(n \log n)$ time. By Lemma~\ref{lem:quadtreeorder}, we obtain for each \((\sigma,s) \in {\mathcal Q}''_v\) a cell \(\tau_v^{\sigma,s}\). If \(\sigma \cap \tau_v^{\sigma,s} \neq \emptyset\), we know that \(\sigma \cap \cD_v = \tau_v^{\sigma,s}\cap \cD_v\). Since these sites are all from \(\cD_v\), they all have radius at least \(r_s/2\). We can find all these sites in \(O(k)\) time, where \(k\) is the output size. If $k > \alpha$, we stop and report $\sigma$ as a square with many sites of large radius.\footnote{Note that here the radii inside the cells \(\sigma\) are \(\geq r_s/2\), which might be a stronger requirement than the value \(2^{\lceil \log_2 r_s\rceil}/4\) needed by (R1). Still, if there are more than \(\alpha\) sites in \(\sigma\), we have a triangle in a square. Otherwise, we will later determine that each disk contains few sites of radius at least \(r_s/2\).} Otherwise, we use the sites in $\sigma$ to accumulate the sites for the query disk $D_s$.
We can do this by considering all canonical nodes of \(s\) and, for each cell \(\sigma\), iterating over the sites contained in \(\sigma\). In each such cell there are at most \(\alpha\) sites. For each site \(t \in \sigma\) we can check if \(t\in D_s\). If we find a query disk $D_s$ with more than $\alpha$ sites of large radius, we stop and report its enclosing square with many sites of large radius.\footnote{\(r=2r_s\) is the side length of the enclosing square; the radii are at least \(r/4\) as desired.} Otherwise, for each $s \in S$, we have found the at most $\alpha$ sites of radius at least $r_s/2$ in $D_s$. The whole algorithm takes $O(n \log n)$ time. \end{proof} \subsection{Queries of Type (R2)} \label{sec:RangeR2} We use the tree structure of the canonical intervals (i) to construct quickly the search structures for each canonical interval; and (ii) to solve all queries for a canonical interval in one batch. We exploit the connection between planar disks and three-dimensional polytopes. Let $U = \big\{(x,y,z) \mid x^2 + y^2 = z \big\}$ be the three-dimensional unit paraboloid. For a site $s \in S$, the lifted site $\hat{s}$ is the vertical projection of $s$ onto $U$. Each disk $D_s$ is transformed into an upper halfspace $\widehat{D}_s$, so that the projection of $\widehat{D}_s \cap U$ onto the $xy$-plane is the set $\mathset{R}^2 \setminus D_s$;\footnote{This halfspace is bounded by the plane \(z=2x_sx-x_s^2+2y_sy-y_s^2+r_s^2\), where \(s=(x_s, y_s)\).} see Figure~\ref{fig:lifteddisks}. The complement of the union of a set of disks in $\mathset{R}^2$ corresponds to the intersection of the lifted upper halfspaces in $\mathset{R}^3$. \begin{figure} \centering \scalebox{0.7}{ \input{figs/disklift} } \caption{Lifting disks and points. For $\hat{D}$ only the bounding plane is shown.} \label{fig:lifteddisks} \end{figure} \begin{lemma} \label{lem:complicated} The range searching problem (R2) can be solved in $O(n \log n)$ expected time.
\end{lemma} \begin{proof} For each $v \in B$, we construct a three-dimensional representation of the union of the disks in the canonical interval $\cD_v$. As explained above, this is the intersection $\E_v$ of the lifted three-dimensional halfspaces $\widehat{D}_s$, for $s \in \cD_v$. The intersection of two three-dimensional convex polyhedra with a total of $m$ vertices can be computed in $O(m)$ time~\cite{Chazelle92,chan_simpler_2016}. Therefore, we can construct all the polyhedra $\E_v$, for $v\in B$, in overall $O(n\log n)$ time, by a bottom-up traversal of $B$ (by Lemma~\ref{lem:canonical}, the total number of vertices of these polyhedra is $O(n \log n)$). For the query processing, we compute a polytope $\widehat{Q}_v$ for each $v \in B$. The polytope $\widehat{Q}_v$ is obtained by determining all the points $p$ that appear in a query $(p, r_1, r_2)$ that has $v$ as a canonical node, lifting those points $p$ to their three-dimensional representations $\hat{p}$, and taking the convex hull of the resulting three-dimensional point set. The lifted query points all lie on the unit paraboloid $U$, so every lifted query point appears as a vertex on $\widehat{Q}_v$. To find all polytopes $\widehat{Q}_v$, for $v \in B$, efficiently, we proceed as follows: let $A$ be the three-dimensional point set obtained by taking all points that appear in a query and by lifting them onto the unit paraboloid. We compute the convex hull of $A$ in $O(n \log n)$ time. Then, for each $v \in B$, we find the convex hull of all lifted queries that have $v$ in their canonical path. This can be done in $O(n \log n)$ total expected time by a top-down traversal of $B$. We already have the polytope for the root of $B$. 
To compute the polytope for a child node, given that the polytope for the parent node is available, we use the fact that for any polytope $\E$ in $\mathset{R}^3$ with $m$ vertices, we can compute the convex hull of any subset of the vertices of $\E$ in $O(m)$ expected time~\cite{ChazelleMu11}. Once we have for each $v \in B$ the convex hull of the lifted query points that have $v$ on their canonical \emph{path}, we can compute for each $v \in B$ the polytope $\widehat{Q}_v$ that is the convex hull of the lifted query points that have $v$ as a canonical \emph{node}. For this, we consider the canonical path polytope stored at the parent node of $v$, and we again use the algorithm from~\cite{ChazelleMu11} to extract the convex hull for the lifted query points that have $v$ as a canonical node. Now that the polytopes $\widehat{Q}_v$ and the polyhedra $\E_v$ are available, for all $v \in B$, we can answer the query as follows: for each node $v \in B$, we must check for vertices of $\widehat{Q}_v$ that do not lie inside $\E_v$. These are exactly the vertices of $\widehat{Q}_v$ that are not vertices of $\widehat{Q}_v \cap \E_v$. As mentioned, the intersections $\widehat{Q}_v \cap \E_v$ can be found in linear time for each node $v \in B$, for a total time $O(n \log n)$, and once the intersection is available, we can easily find all vertices $\hat{p}$ of $\widehat{Q}_v$ that are not vertices of \(\widehat{Q}_v\cap \E_v\) (e.g., using radix sort). If for any such intersection $\widehat{Q}_v \cap \E_v$, there is a lifted site $\hat{s} \in \widehat{Q}_v$ that is not a vertex of $\widehat{Q}_v \cap \E_v$, we report $s$ as the result of the range search. Otherwise, we report that the range search is unsuccessful. \end{proof} \section{Finding the Shortest Triangle in a Transmission Graph} \label{sec:finding_the_smallest_triangle} We extend Theorem~\ref{thm:transmission_triangle_unweighted} to find the shortest triangle in $T(S)$.
As in Section~\ref{sec:disk_triangle_weighted}, we solve the decision problem: given $W > 0$, does $T(S)$ have a directed triangle of perimeter at most $W$? We set $\ell = W/\sqrt{27}$, and call a site $s \in S$ \emph{large} if $r_s > \ell$. We let $S_\ell \subseteq S$ be the set of all large sites. \begin{lemma}\label{lem:find_small_triangles} We can find a triangle in $T(S \setminus S_\ell)$ of perimeter at most $W$ in $O(n\log n)$ time, if it exists. \end{lemma} \begin{proof} Any triangle in $T(S \setminus S_\ell)$ has perimeter at most $W$: consider a directed triangle $stu$ in $T(S \setminus S_\ell)$ with $r_s \geq \max\{r_t, r_u\}$. Then we have $t, u \in D_s$, so the triangle $stu$ lies in $D_s$. Elementary calculus shows that a triangle of maximum perimeter in $D_s$ must be equilateral with its vertices on $\partial D_s$, so any triangle contained in $D_s$ has perimeter at most $3\cdot \sqrt{3}\cdot r_s \leq \sqrt{27}\cdot \ell = W$. We can find a triangle in $T(S \setminus S_\ell)$ in $O(n\log n)$ time by Theorem~\ref{thm:unweightedtriangledir}. \end{proof} It remains to check for triangles of perimeter at most $W$ with at least one large vertex. Some such triangles have to be considered individually, while the others can be handled efficiently in batch mode. The following lemma shows that we may assume that there are few edges from $S \setminus S_\ell$ to $S_\ell$. \begin{lemma} \label{lem:small_outdegree} If $T(S)$ does not have a triangle of perimeter at most $W$, every site in $S_\ell$ has at most six incoming edges from $S \setminus S_\ell$. Furthermore, in $O(n \log n)$ time, we can either find a triangle of perimeter at most $W$ in $T(S)$ or determine for each site in $S_\ell$ all incoming edges from $S \setminus S_\ell$. \end{lemma} \begin{proof} Suppose there is a square $\sigma$ in the plane with side length $0 < r < 2\ell$ such that $\sigma$ contains more than $\alpha$ sites $s$ of radius \(r_s\geq r/4\).
Then, Lemma~\ref{lem:disks_close_together} shows that $T(S)$ contains a triangle that lies in $\sigma$. Specifically, since $\sigma$ has side length at most $r/6 < 2\ell/6 = \ell/3$, the definition of $\ell$ implies that the triangle has perimeter at most $W$. If there is no such square, it follows that there is no site $s$ with $r_s \leq \ell$ such that $D_s$ contains more than $\alpha$ sites of radius at least $r_s/2$, as otherwise $s$ could be enclosed by a square of side length $2r_s \leq 2\ell$ that contains many sites of large radius. Thus, every small site $s$ has $O(1)$ outgoing edges to sites with radius at least $r_s/2$. In particular, there are $O(n)$ edges from small sites to large sites. We can use a suitable variant of (R1) so that in $O(n \log n)$ time we can either find a square of side length $0 < r < 2\ell$ that contains more than $\alpha$ sites $s$ of radius $r_s \geq r/4$, or determine that for every site $s$ with $r_s \leq \ell$, there are at most $\alpha$ sites in $D_s$ of radius at least $r_s/2$. Furthermore, in the second case, we explicitly get all sites of radius at least $r_s/2$ in each $D_s$ with $r_s \leq \ell$. Thus, if the second case applies, we obtain all edges from $S \setminus S_\ell$ to $S_\ell$. Suppose there is a large site $s$ of indegree at least $7$. Then, there must be two sites $t,u \in S \setminus S_\ell$ such that the angle between the edges $ts$ and $us$ is less than $\pi/3$. Thus, the distance $|tu|$ is less than the maximum of $|ts|$ and $|us|$, so there is a directed edge with endpoints $t$ and $u$ and the sites $s, t, u$ form a triangle of perimeter at most $3\ell \leq W$. \end{proof} Next, we want to limit the number of relevant edges between large sites. For this, we subdivide the plane with a grid $G$ of side length $\ell/\sqrt{2}$. Then, we have the following: \begin{lemma}\label{lem:constant_large_sites} A triangle contained in a cell $\sigma \in G$ has perimeter at most $W$.
If there is no triangle in $\sigma$, then $\sigma$ contains $O(1)$ large sites. We can check for such triangles in $O(n\log n)$ overall expected time. \end{lemma} \begin{proof} The maximum perimeter of a triangle contained in $\sigma$ is $(1 + \sqrt{2})\ell < W$. Furthermore, if there are at least three large sites in $\sigma$, these large sites form a triangle, since the disk of a large site covers $\sigma$. By applying Theorem~\ref{thm:unweightedtriangledir} to the induced subgraph in each cell of $G$, we can find such a triangle in $O(n\log n)$ total expected time. \end{proof} We define the neighborhood $N(\sigma)$ of a cell $\sigma \in G$ as the $5\times 5$ block of cells centered at $\sigma$. Let $t$ be a site and $\sigma$ the cell containing $t$; then the neighborhood $N(t)$ of $t$ is the set of all sites contained in $N(\sigma)$. Since the side length of a grid cell is $W/(3\sqrt{6})$, each triangle of perimeter at most $W$ is completely contained in the neighborhood of some cell. \begin{lemma}\label{lem:remaining_mixed} We can check the remaining triangles in $O(n)$ overall time. \end{lemma} \begin{proof} Consider a remaining triangle $sut$ with $r_t \geq \max\{r_u, r_s\}$. Then, $t \in S_\ell$, and $s, t, u$ all lie in $N(t)$. By Lemma~\ref{lem:constant_large_sites}, there are $O(1)$ large candidates for $u$ in $N(t)$, and by Lemma~\ref{lem:small_outdegree}, there are $O(1)$ small candidates for $u$. Having fixed $t$ and a possible candidate $u$, we iterate over all $s\in N(t)$ and check if $s$, $u$, and $t$ form a triangle with weight at most $W$. Every site $s$ is contained in $O(1)$ grid neighborhoods, and since there are $O(1)$ candidate pairs in each grid neighborhood, $s$ participates in $O(1)$ explicit checks. The result follows. \end{proof} The following theorem summarizes the considerations in this section. \begin{theorem} It takes $O(n\log n)$ expected time to find the shortest triangle in a transmission graph.
\end{theorem} \begin{proof} We already saw that there is an $O(n\log n)$ time decision algorithm for the problem. As in Theorem~\ref{thm:shortesttriangle}, the result follows from an application of Chan's randomized optimization technique~\cite{Chan99} (restated in Lemma~\ref{lem:chan}). \end{proof} \section{Conclusion} Once again, disk graphs and transmission graphs prove to be a simple yet powerful graph model where difficult algorithmic problems admit faster solutions. It would be interesting to find a \emph{deterministic} $O(n \log n)$ time algorithm for finding a shortest triangle in a disk graph. Currently, we are working on extending our results to the girth problem in transmission graphs; can we find an equally simple and efficient algorithm as for disk graphs?
\section{Introduction} This paper is primarily concerned with the estimation of additive energies of regular measures (in the sense of Ahlfors and David, localized to specific ranges of scales) on Euclidean spaces $\R^d$. We recall the key definitions. If $x \in \R^d$ and $r>0$, we let $B(x,r) = B^d(x,r) \coloneqq \{ y \in \R^d: |x-y| \leq r \}$ denote the closed ball of radius $r$ centered at $x$, where we use $|\cdot|$ for the Euclidean norm on $\R^d$. \begin{definition}[Regular sets]\label{reg-set}\cite{bourgain-dyatlov} Let $0 \leq \delta \leq d$ with the ambient dimension $d$ an integer, let $0 < \alpha_0 < \alpha_1$ be scales, and let $C \geq 1$. A closed non-empty subset $X$ of $\R^d$ is said to be \emph{$\delta$-regular} on scales $[\alpha_0, \alpha_1]$ with constant $C$ if there exists a Radon measure\footnote{All measures in this paper will be unsigned.} $\mu_X$ (which we call a \emph{$\delta$-regular measure} or simply \emph{regular measure} associated to $X$) obeying the following properties: \begin{itemize} \item[(i)] $\mu_X$ is supported on $X$ (thus $\mu_X(\R^d \backslash X) = 0$); \item[(ii)] For every ball $B(x,r)$ of radius $r \in [\alpha_0,\alpha_1]$, we have the upper bound $\mu_X(B(x,r)) \leq C r^\delta$; \item[(iii)] If in addition in (ii) we have $x \in X$, then we have the matching lower bound $\mu_X(B(x,r)) \geq C^{-1} r^\delta$. \end{itemize} \end{definition} Examples of regular sets include Cantor sets, smooth compact submanifolds of $\R^d$, and $r$-neighbourhoods of such sets for $0 < r \leq \alpha_0$; see for instance \cite{dyatlov-survey} for such examples and further discussion. Now we review the notion of additive energy at a given scale, as given for instance in \cite{dyatlov-zahl}. \begin{definition}[Additive energy]\label{energy-def} Let $\mu$ be a Radon measure on $\R^d$, and let $r>0$.
The \emph{additive energy} $\Energy(\mu,r)$ of $\mu$ at scale $r$ is defined to be the quantity \begin{equation}\label{energy-form} \Energy(\mu,r) \coloneqq \mu^4\left( \{ (x_1,x_2,x_3,x_4) \in (\R^d)^4: |x_1+x_2-x_3-x_4| \leq r \}\right) \end{equation} where $\mu^4$ is the product measure on $(\R^d)^4$ formed from four copies of the measure $\mu$ on $\R^d$. \end{definition} By the Fubini--Tonelli theorem one can write $$\Energy(\mu,r) = \int_{\R^d} \int_{\R^d} \int_{\R^d} \mu( B(x_1+x_2-x_3,r))\ d\mu(x_1) d\mu(x_2) d\mu(x_3)$$ so we have the trivial bound \begin{equation}\label{triv-bound} \Energy(\mu,r) \leq \mu(\R^d)^3 \sup_{x \in \R^d} \mu(B(x,r)). \end{equation} In particular, if $\mu = \mu_X$ is a regular measure associated to a $\delta$-regular set on scales $[\alpha,1]$ with constant $C$ that is supported on the unit ball $B(0,1)$, then we have the ``trivial bound'' \begin{equation}\label{triv} \Energy(\mu,r) \leq C^4 r^\delta \end{equation} for any $\alpha \leq r \leq 1$. In the case when $\delta$ is an integer, this trivial bound can be sharp up to multiplicative constants. Indeed, if $\mu = \mu_X$ is $\delta$-dimensional Lebesgue measure on the disk $X \coloneqq B^\delta(0,1) \times \{0\}^{d-\delta}$, then for any $0 < \alpha \leq r \leq 1$, one can verify that $\mu$ is regular on scales $[\alpha,1]$ with some constant\footnote{See Section \ref{notation-sec} for our asymptotic notation conventions.} $C = O_{d,\delta}(1)$, but that $\Energy(\mu,r) \sim_{d,\delta} r^\delta$. However, when the dimension $\delta$ is not an integer one expects an improvement to the trivial bound \eqref{triv} in the asymptotic regime when $r$ goes to zero. In one dimension $d=1$ this was achieved by Dyatlov and Zahl \cite{dyatlov-zahl}: \begin{theorem}[Improved additive energy bound for regular measures in $\R$]\label{dz-main}\cite[Proposition 6.23]{dyatlov-zahl} Let $0 < \delta < 1$, $C>1$ and $0 < \alpha < 1$. 
Let $X \subset [0,1]$ be a $\delta$-regular set on scales $[\alpha,1]$, and let $\mu_X$ be an associated regular measure. Then we have $$ \Energy(\mu, r) \lesssim_{C,\delta} r^{\delta + \beta}$$ for any $\alpha \leq r < 1$, where $\beta > 0$ is of the form \begin{equation}\label{beta} \beta =\delta \exp\left( - K (1-\delta)^{-14} (1 + \log^{14} C) \right) \end{equation} for some absolute constant $K$. \end{theorem} Among other things, this theorem was used to establish the first non-trivial case of the \emph{fractal uncertainty principle}, which we will return to later in this introduction. The proof of Theorem \ref{dz-main} relied on an induction on scale argument and some major inverse theorems in additive combinatorics, such as the Bogolyubov--Ruzsa theorem of Sanders \cite{sanders-br}. Ultimately, the key point is that the $\delta$-regular set $X$ is ``porous'' and cannot exhibit approximate translation invariance along a medium-length arithmetic progression; on the other hand, additive combinatorics and induction on scales can be used to produce such an approximate translation invariance if $\mu$ has exceptionally high additive energy. The bound \eqref{beta} on $\beta$ behaves quasipolynomially in $C$. In \cite[\S 6.8.3]{dyatlov-zahl} the question was raised as to whether the exponent $\beta$ could be improved to be polynomial in $C$. Our first main result answers this question in the affirmative. \begin{theorem}[Further improvement to additive energy bounds for regular measures in $\R$]\label{main} With the hypotheses of Theorem \ref{dz-main}, one can take $\beta$ to be of the form $$ \beta = c \min(\delta,1-\delta) C^{-25}$$ for some absolute constant $c>0$. \end{theorem} The exponent $25$ can be improved here, but we do not attempt to optimize it in this paper.
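To illustrate Definition~\ref{energy-def} concretely, one can run a small Monte Carlo experiment (an illustrative Python sketch, not part of any proof) sampling the natural measure on the middle-thirds Cantor set, which is $\delta$-regular with $\delta = \log 2/\log 3$, and estimating the probability that a random quadruple satisfies $|x_1+x_2-x_3-x_4| \leq r$; up to normalization by the total mass, this probability is $\Energy(\mu,r)$.

```python
import random

def cantor_point(rng, depth=20):
    """Sample from the natural measure on the middle-thirds Cantor set
    (each ternary digit is 0 or 2 with probability 1/2)."""
    return sum(2 * rng.randint(0, 1) / 3 ** k for k in range(1, depth + 1))

def energy_estimate(points, r, trials, rng):
    """Fraction of random quadruples with |x1 + x2 - x3 - x4| <= r,
    a Monte Carlo proxy for the normalized additive energy at scale r."""
    hits = 0
    for _ in range(trials):
        x1, x2, x3, x4 = (rng.choice(points) for _ in range(4))
        if abs(x1 + x2 - x3 - x4) <= r:
            hits += 1
    return hits / trials
```

At accessible sample sizes the estimate decays roughly like $r^{\delta}$ as $r \to 0$; detecting the additional power saving $\beta$ numerically is not realistic, but the experiment does show the energy shrinking with the scale.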
It may be possible to improve the bound even further; the best known counterexample (see \cite[\S 6.8.3]{dyatlov-zahl}) only shows that $\beta$ must decay at least as fast as $\frac{1}{\log C}$ as $C \to \infty$ (holding $\delta$ fixed). Our argument is elementary and relies heavily on the order structure of the real line $\R$, as well as the ability to work with a mesh of dyadic (or more precisely, $K$-adic for a large $K$) intervals. Roughly speaking, the idea is to identify a lot of ``left\footnote{One could also work just as easily with a notion of ``right edge'' if desired by switching all the signs.} edges'' of (a suitable discretization of) the $\delta$-regular set $X$: ($K$-adic) intervals $I = [x,x+K^{-2n})$ which intersect the set $X$, but such that there is a large interval $[x-K^{-2n+1},x)$ immediately to the left of $I$ that is completely disjoint from $X$. Informally, left edges identify locations and scales where the set $X$ visibly fails to behave like an additive subgroup of the real numbers. As $\delta$ is bounded away from $0$ and $1$, we will be able to exhibit many left edges at many scales, to the point that almost all of the elements of $X$ will be contained in at least one left edge (or a slight technical generalization of this concept, which we call a left near-edge). On the other hand, if one considers a sum $x_1+x_2$ arising from a pair $x_1,x_2 \in X$ that are contained in left-edges $[y_1,y_1+K^{-2n})$, $[y_2,y_2+K^{-2n})$ respectively, then there are unusually few other pairs $x_3,x_4 \in X$ with $x_3, x_4$ close (within $K^{-2n+1}/10$, say) to $x_1,x_2$ respectively, such that $x_1+x_2 \approx x_3+x_4$. This is because in the vicinity of the left-edge $[y_1,y_1+K^{-2n})$, most of the candidates for $x_3$ lie far to the right of $x_1$, and similarly most of the candidates of $x_4$ lie far to the right of $x_2$, so it is difficult to keep $x_3+x_4$ close to $x_1+x_2$; see Figure \ref{fig:leftedge}.
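The notion of a left edge is easy to state in discretized form: cover $X$ by cells of width $K^{-2n}$, indexed by integers, and look for occupied cells whose $K$ cells immediately to the left (an interval of length $K^{-2n+1}$) are all empty. An illustrative Python sketch under this discretization (not code from the paper):

```python
def left_edges(occupied_cells, K):
    """Indices i of occupied cells (cell i covers [i*w, (i+1)*w) with
    w = K^{-2n}) such that the K cells immediately to the left, forming
    an interval of length K^{-2n+1}, are all disjoint from X."""
    occupied = set(occupied_cells)
    return {i for i in occupied
            if all(i - j not in occupied for j in range(1, K + 1))}
```

On a Cantor-like pattern of occupied cells, a constant fraction of the occupied cells are left edges, reflecting the porosity used in the argument.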
Because of this, each pair of left edges can be used to create a slight diminution of the additive energy; by combining the effect of all the available pairs of left edges at all scales, we can obtain a preliminary improvement to the trivial bound \eqref{triv} on the additive energy at some fixed small scale $r_0$ (made precise in Proposition \ref{slight-gain-1} below), which can then be iterated by a standard ``induction on scales'' argument to produce Theorem \ref{main}. Our bounds are superior to those in Theorem \ref{dz-main} because they do not rely on the Bogolyubov--Ruzsa theorem of Sanders \cite{sanders-br}, for which the best known bounds are only quasipolynomial in nature. \begin{figure} [t] \centering \includegraphics[width=4in]{./leftedge.png} \caption{The $\delta$-regular set $X$ can be covered by intervals of length $K^{-2n}$ for some given $n$, which are arranged in a fractal-type pattern. Some of these intervals are depicted here, on two different portions of the real line $\R$. The intervals $[y_1, y_1+K^{-2n})$ and $[y_2, y_2+K^{-2n})$ are left edges: they have nearby intervals covering $X$ to the right, but not to the left. Note that if $x_1 \in X \cap [y_1, y_1+K^{-2n})$ and $x_2 \in X \cap [y_2, y_2+K^{-2n})$ then there are relatively few additional pairs $(x_3,x_4)$ of real numbers $x_3,x_4 \in X$ that are somewhat close to $x_1,x_2$ respectively such that $x_1+x_2 \approx x_3 + x_4$. This leads to a small but non-zero diminution of the additive energy at scales between $K^{-2n}$ and $K^{-2n+1}$, which one can then hope to iterate.} \label{fig:leftedge} \end{figure} Our second result addresses the higher dimensional case. Here the order-theoretic arguments do not seem to be effective (except possibly in the very high-dimensional regime $d-1 < \delta < d$), and we revert to using more tools from additive combinatorics, and in particular the Bogolyubov--Ruzsa theorem.
As such, our bounds are of the same general quasipolynomial shape as in Theorem \ref{dz-main}, though it seems likely that further progress on the ``polynomial Freiman--Ruzsa conjecture'' (see e.g., \cite{sanders-bams}) may eventually lead to polynomial bounds along the lines of Theorem \ref{main}. \begin{theorem}[Improved additive energy bound for higher dimensional regular measures]\label{main-second} Let $d \geq 1$ be an integer, let $0 < \delta < d$ be a non-integer, and let $C>1$ and $0 < \alpha < 1$. Let $X \subset B^d(0,1)$ be a $\delta$-regular set on scales $[\alpha,1]$, and let $\mu = \mu_X$ be an associated regular measure. Then we have $$ \Energy(\mu, r) \lesssim_{C,\delta,d} r^{\delta + \beta}$$ for any $\alpha \leq r < 1$, where $\beta = \beta_{d,\delta,C} > 0$ takes the quasipolynomial form $$ \beta = \exp\left( - O_{\delta,d}( 1 + \log^{O_{\delta,d}(1)}(C) ) \right).$$ \end{theorem} We prove this theorem in Section \ref{higher-sec}. As with Theorem \ref{main}, it suffices to gain a small amount over the trivial bound at some fixed scale $r_0$, as one can then use induction on scales to obtain the power saving for arbitrary scales $r$. Our proof shares some features with the one in \cite{dyatlov-zahl}, in that one assumes that the energy is unusually large and uses this to locate a non-trivial arithmetic progression along which $\mu$ obeys an approximate symmetry. If the dimension $\delta$ were less than $1$ then one could use the regularity to obtain a contradiction (basically because a $\delta$-dimensional measure cannot support an entire line segment if $\delta<1$). The main novelty in our argument is the treatment of the higher dimensional case $\delta>1$. Here the strategy is to ``quotient out'' by the arithmetic progression and obtain (at least on some range of scales) what is (morally at least) a $(\delta-1)$-regular measure supported on a $d-1$-dimensional hyperplane of $\R^d$.
This can then be treated by a suitable induction hypothesis (note how the crucial hypothesis that $\delta$ is not an integer is propagated by this process). In order to make the notion of ``quotienting out'' by an arithmetic progression (which is merely an approximate subgroup of $\R^d$, as opposed to a genuine subgroup) rigorous, we will rely heavily on the machinery of additive combinatorics, most notably the theory of the Gowers uniformity norm $U^2$ and of approximate groups, and how one can split up the problem of estimating a global $U^2$ Gowers norm into the ``fine scale'' problem of estimating various local Gowers norms restricted to a ``coset'' $4H+y$ of some approximate group $H$, and the ``coarse scale'' problem of controlling the $U^2$ norm of the output of these local Gowers norms (viewed as a function of $y$); see Lemma \ref{relate}(iii) for a precise statement. \begin{remark} It seems likely that the techniques in this paper can be combined with those in \cite{rossi-shmerkin} to obtain new $L^q$ improving bounds for convolutions of regular measures, but we will not pursue this direction here. \end{remark} One can use these additive energy bounds to obtain expansion estimates for both linear and nonlinear maps. Here is a sample such result. For any measurable $E \subset \R^d$ we let $|E|$ denote its Lebesgue measure. \begin{theorem}[Nonlinear expansion]\label{nonlinear} Let $d \geq 1$ be an integer, let $0 < \delta < d$ be a non-integer, and let $C>1$ and $0 < r < 1$. Let $F: \R^d \times \R^d \to \R^d$ be a $C^2$ (twice continuously differentiable) map such that for any $x,y \in B(0,2)$, the derivative maps $D_x F(x,y), D_y F(x,y): \R^d \to \R^d$ are invertible. Let $X,Y \subset B(0,1)$ be $\delta$-regular sets on scales $[r,1]$, and let $X_r \coloneqq X+B(0,r)$, $Y_r \coloneqq Y + B(0,r)$ be the $r$-neighborhoods of $X,Y$ respectively. 
Then the set $$ F(X_r, Y_r) \coloneqq \{ F(x,y): x \in X_r, y \in Y_r \}$$ has measure $$ | F( X_r, Y_r )| \gtrsim_{C,\delta,d,F} r^{d-\delta-\beta}$$ where $\beta$ takes the form $$ \beta = \exp\left( - O_{\delta,d,F}( 1 + \log^{O_{\delta,d}(1)}(C) ) \right).$$ When $d=1$, we can take $$ \beta = c_F \min(\delta,1-\delta) C^{-25}$$ for some $c_F>0$ depending on $F$. \end{theorem} In particular, we have \begin{equation}\label{x-sum} |X_r + X_r|, |X_r - X_r | \gtrsim_{C,\delta,d} r^{d-\delta-\beta} \end{equation} and in one dimension we similarly have \begin{equation}\label{x-product} |X_r \cdot X_r| \gtrsim_{C,\delta} r^{1-\delta-\beta} \end{equation} if $X$ lies in (say) $[1/2,1]$ (this can be deduced either by taking logarithms, or by applying the above theorem to the multiplication map $x,y \mapsto xy$ after applying some mild changes of variable to avoid the degeneracies at $x,y=0$). Here we use the usual sumset notations \begin{align*} A+B &\coloneqq \{ a+b: a \in A, b \in B \} \\ A-B &\coloneqq \{ a-b: a \in A, b \in B \} \\ A+b &\coloneqq \{ a+b: a \in A \} \\ -A &\coloneqq \{ -a: a \in A \} \\ A\cdot B &\coloneqq \{ ab: a \in A, b \in B \}. \end{align*} For comparison, it is easy to see from Definition \ref{reg-set} that $$ |X_r| \sim_{C,\delta,d} r^{d-\delta}$$ so the bounds in \eqref{x-sum}, \eqref{x-product} represent a power gain over the trivial bound $|A+B| \geq |A|, |B|$ (and $|A \cdot B| \gtrsim |A|, |B|$ when $A,B \subset [1,2]$). We establish Theorem \ref{nonlinear} in Section \ref{nonlinear-sec}. 
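As a toy illustration of the expansion \eqref{x-sum} (our example, not taken from the paper): take $X$ to be the set of numbers in $[0,1]$ whose decimal digits all lie in $\{0,3\}$, which is $\delta$-regular with $\delta = \log 2/\log 10$. Since $3+3 = 6 < 10$, digit sums incur no carries, so $X+X$ is the analogous set with digit alphabet $\{0,3,6\}$, and $|X_r+X_r|$ behaves like $r^{1-\delta-\beta}$ with $\beta = \log(3/2)/\log 10$, visibly beating the trivial size $|X_r| \sim r^{1-\delta}$. The depth and scale parameters below are arbitrary choices for the experiment.

```python
import numpy as np

# Toy illustration (not from the paper): for X = { decimal digits in {0,3} },
# there are no carries in X + X, so the r-neighborhood of X + X occupies
# noticeably more Lebesgue measure than the trivial bound |X_r| ~ r^(1-delta).

def digit_set_points(digits, base, n):
    """Left endpoints of the level-n construction with the given digit set."""
    pts = np.zeros(1)
    for k in range(1, n + 1):
        pts = np.concatenate([pts + d / float(base) ** k for d in digits])
    return np.unique(pts)

def neighborhood_measure(points, r):
    """Lebesgue measure of the union of intervals [p - r, p + r]."""
    pts = np.sort(points)
    # the leftmost interval contributes 2r; each later point adds min(gap, 2r)
    return 2 * r + np.minimum(np.diff(pts), 2 * r).sum()

n, m = 6, 4
delta = np.log(2) / np.log(10)
r = 10.0 ** (-m)
X = digit_set_points([0, 3], 10, n)
S = np.unique((X[:, None] + X[None, :]).ravel())  # the sumset X + X
vol_X = neighborhood_measure(X, r)
vol_sum = neighborhood_measure(S, 2 * r)  # X_r + X_r = (X + X) + B(0, 2r)
print(vol_X, vol_sum)
```

Here `vol_sum` exceeds `vol_X` by an order of magnitude, reflecting the power gain; for the middle-thirds Cantor set itself this experiment would be uninformative, since there $X+X$ fills out an entire interval.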
This result can be compared with the celebrated discretized sum-product theorem of Bourgain \cite{bourgain}, which in our language would give a bound of the form \begin{equation}\label{sum-product} \max( |X_r+X_r|, |X_r \cdot X_r| ) \gtrsim_{C,\delta} r^{1-\delta-\beta} \end{equation} for some $\beta = \beta_\delta > 0$ and $X \subset [1,2]$, but where the lower bound in Definition \ref{reg-set}(iii) is only assumed to hold at scale $r=1$; see the recent preprint \cite{gkz} for an explicit value of $\beta$ as a function of $\delta$. In this more general setup there are standard constructions (in which $X$ resembles either a long arithmetic progression or a long geometric progression at various scales) that show that one can no longer expect the separate bounds \eqref{x-sum}, \eqref{x-product}, and can only hope for \eqref{sum-product}. This is consistent with results such as \eqref{x-sum}, \eqref{x-product} because regular sets cannot support long arithmetic progressions or long geometric progressions. Under the same general hypotheses on $X$ considered in the above-mentioned result \eqref{sum-product} of Bourgain, an expansion bound of the form $$ |F( X_r, X_r )| \gtrsim_{C,\delta,F} r^{1-\delta-\beta}$$ was also recently established in \cite{rz} for polynomial maps $F$ under the (necessary) Elekes--R\'onyai condition that $F(x,y)$ is not of the form $h(a(x)+b(y))$ or $h(a(x)b(y))$ for some polynomials $h,a,b$. \subsection{Application to the fractal uncertainty principle} Let $d \geq 1$ be a dimension, and let $0 < h \leq 1$ be a small parameter. We then define the semiclassical Fourier transform ${\mathcal F}_h \colon L^2(\R^d) \to L^2(\R^d)$ by the formula $$ {\mathcal F}_h f(\xi) \coloneqq (2\pi h)^{-d/2} \int_{\R^d} e^{-i x \cdot \xi/h} f(x)\ dx$$ for Schwartz functions $f$, extended to $L^2(\R^d)$ by continuity in the usual fashion.
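The operator norms considered below can be visualized in a standard discretized toy model (the sketch and parameters here are ours, not from the paper): replace ${\mathcal F}_h$ by the unitary discrete Fourier transform on $N = M^k$ points with $h \sim 1/N$, and replace the neighborhoods $X_h, Y_h$ by a discrete Cantor set of residues whose base-$M$ digits lie in a fixed alphabet. The norm $\|1_{X} {\mathcal F} 1_{Y}\|$ is then the top singular value of a submatrix of the DFT matrix.

```python
import numpy as np

# Discretized sketch (ours): the norm of 1_X F 1_Y, where F is the unitary
# DFT on N = M^k points and X = Y is the discrete Cantor set of residues
# whose base-M digits lie in a fixed alphabet, equals the largest singular
# value of the submatrix F[X, Y].

M, alphabet, k = 3, (0, 2), 4
N = M ** k

def cantor_residues(M, alphabet, k):
    """Residues in {0, ..., M^k - 1} with all base-M digits in `alphabet`."""
    res = [0]
    for _ in range(k):
        res = [M * x + d for x in res for d in alphabet]
    return np.array(sorted(res))

X = Y = cantor_residues(M, alphabet, k)
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
sigma = np.linalg.svd(F[np.ix_(X, Y)], compute_uv=False)[0]

delta = np.log(len(alphabet)) / np.log(M)  # dimension of the Cantor set
hs = np.sqrt(len(X) * len(Y) / N)          # Hilbert--Schmidt (trivial) bound
print(sigma, min(1.0, hs))
```

Since the full DFT is unitary and each submatrix entry has modulus $N^{-1/2}$, one always has $\sigma \leq \min(1, \sqrt{|X||Y|/N})$, which is the discrete analogue of the trivial bound discussed next; fractal uncertainty principles assert a power improvement as $k \to \infty$.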
The \emph{fractal uncertainty principle} concerns operator norm estimates of the form $$ \| 1_{X_h} {\mathcal F}_h 1_{Y_h} \|_{L^2(\R^d) \to L^2(\R^d)} \lesssim h^\sigma$$ for various sets $X_h,Y_h$ (which are permitted to depend on $h$) and various exponents $\sigma$. Here and in the sequel we use $1_E$ to denote the indicator function of a set $E$. A model case is when $X,Y$ are $\delta$-regular subsets of $B(0,1)$ at scales $[h,1]$ with some constant $C$, and $X_h \coloneqq X + B(0,h)$, $Y_h \coloneqq Y + B(0,h)$ are the $h$-neighborhoods of $X,Y$ respectively. A standard application of Plancherel's theorem and the trivial bound $$ \| {\mathcal F}_h f\|_{L^\infty(\R^d)} \lesssim_d h^{-d/2} \|f\|_{L^1(\R^d)}$$ then gives the ``trivial bound'' \begin{equation}\label{trivial-unc} \| 1_{X_h} {\mathcal F}_h 1_{Y_h} \|_{L^2(\R^d) \to L^2(\R^d)} \lesssim_{C,d} h^{\max\left(\frac{d}{2}-\delta,0\right)} \end{equation} (see e.g., \cite{dyatlov-survey} for the argument in the one-dimensional case $d=1$, which extends without difficulty to higher dimensions). In one dimension $d=1$ with $0 < \delta < 1$, the fractal uncertainty principle \cite{dyatlov-zahl}, \cite{dyatlov-jin}, \cite{bourgain-dyatlov} asserts that one can improve this bound to \begin{equation}\label{frac-unc} \| 1_{X_h} {\mathcal F}_h 1_{Y_h} \|_{L^2(\R) \to L^2(\R)} \lesssim_{C,\delta} h^{\max\left(\frac{1}{2}-\delta,0\right) + \beta} \end{equation} for some $\beta>0$ depending only on $C,\delta$. Indeed, the following specific values for $\beta$ are known: \begin{itemize} \item[(i)] \cite[Theorems 4, 6]{dyatlov-zahl} One can take \begin{equation}\label{beta-frac} \beta = \frac{3}{8} \left(\frac{1}{2}-\delta\right) - \max\left(\frac{1}{2}-\delta,0\right) + \frac{1}{16} \delta \exp( - K (1-\delta)^{-14} (1+\log^{14} C) ) \end{equation} for some absolute constant $K>0$ (this only gives a positive value of $\beta$ for $\delta$ sufficiently close to $1/2$).
\item[(ii)] \cite[Theorem 1.2]{jin-zhang} For $\delta \geq 1/2$, one can take \begin{equation}\label{sss} \beta = \exp\left[ -\exp( K( C \delta^{-1} (1-\delta)^{-1})^{K(1-\delta)^{-2}} )\right] \end{equation} for an absolute constant $K>0$. \end{itemize} Fractal uncertainty principles can be applied in quantum chaos to obtain lower bounds on the mass of eigenfunctions and to produce spectral gaps on various negatively curved manifolds; see \cite{dyatlov-survey} for a survey of this connection. Fractal uncertainty principles have also been established for a wider class of sets than regular sets, in particular for \emph{porous sets}, but we will not discuss this generalization further here. We remark that while the exponent \eqref{beta-frac} was obtained via the additive energy method, other fractal uncertainty principle estimates, such as the one giving \eqref{sss}, relied on somewhat different techniques, such as the Beurling--Malliavin multiplier theorem. By combining \cite[Theorem 4]{dyatlov-zahl} (or \cite[Proposition 5.4]{dyatlov-survey}) with Theorem \ref{main}, we can immediately improve \eqref{beta-frac} to $$ \beta = \frac{3}{8} \left(\frac{1}{2}-\delta\right) - \max\left(\frac{1}{2}-\delta,0\right) + c \min(\delta,1-\delta) C^{-25} $$ for an absolute constant $c>0$; this is still only an improvement over the trivial bound for $\delta$ very close to $1/2$, but the dependence on the regularity constant $C$ has improved. In higher dimension $d > 1$, a similar combination of \cite[Theorem 4]{dyatlov-zahl} and Theorem \ref{main-second} gives the estimate $$ \| 1_{X_h} {\mathcal F}_h 1_{Y_h} \|_{L^2(\R^d) \to L^2(\R^d)} \lesssim_{C,d,\delta} h^{\max(\frac{d}{2}-\delta,0) + \beta}$$ with \begin{equation}\label{bad} \beta = \frac{3}{8} \left(\frac{d}{2}-\delta\right) - \max\left(\frac{d}{2}-\delta,0\right) + \exp\left( - O_{\delta,d}( 1 + \log^{O_{\delta,d}(1)}(C) ) \right) \end{equation} when $\delta$ is a non-integer.
In the case of even dimension this does not\footnote{In particular, our results do not directly make progress on \cite[Conjecture 6.2]{dyatlov-survey}, which concerns the two-dimensional situation $d=2$.} give a positive value of $\beta$ for any value of $\delta$, which is to be expected since no improvement to the trivial exponent $\max(\frac{d}{2}-\delta,0)$ is possible at the critical value $\delta=d/2$ in this case (see \cite[Example 6.1]{dyatlov-survey} for a counterexample when $d=2$, and a similar construction also works in other even dimensions). However when $d$ is odd, $d/2$ is a non-integer, and for $\delta$ sufficiently close to $d/2$ the implied constants in \eqref{bad} can then be verified to be uniform in $\delta$ (for $d$ fixed), leading to a fractal uncertainty principle of the form $$ \beta = \frac{3}{8} \left(\frac{d}{2}-\delta\right) - \max\left(\frac{d}{2}-\delta,0\right) + \exp\left( - O_d( 1 + \log^{O_d(1)} C) \right)$$ which is positive for $\delta$ sufficiently close to $d/2$. To the authors' knowledge, this is the first higher dimensional fractal uncertainty principle that holds for \emph{arbitrary} regular sets. In the case when one of the sets $X,Y$ is the Cartesian product of $d$ one-dimensional regular (or porous) sets, a higher dimensional fractal uncertainty principle (with exponents similar in shape to \eqref{sss}) was recently obtained in \cite{han-schlag}. Also, as observed in \cite[\S VI.A]{dyatlov-survey}, a higher dimensional fractal uncertainty principle can sometimes be deduced from iterating the one-dimensional principle if certain projections and fibers of $X,Y$ are assumed to be porous. If one allows the sets $X,Y$ to be regular with different dimensional parameters $\delta, \delta'$ then the additive energy bounds in Theorems \ref{main}, \ref{main-second} can give further fractal uncertainty principles.
Indeed we have the following statement: \begin{theorem}[Consequence of additive energy bounds]\label{add-eng} Let $d \geq 1$ be an integer, let $0 < \delta < d$ be a non-integer, and let $C>1$ and $0 < h < 1$. Let $Y \subset B^d(0,1)$ be a $\delta$-regular set on scales $[h,1]$ with constant $C$. Then \begin{equation}\label{fey} \| {\mathcal F}_h 1_{Y_h} \|_{L^2(\R^d) \to L^8(\R^d)} \lesssim_{C,d,\delta} h^{-3\delta/8+\beta} \end{equation} where $$ \beta = \exp\left( - O_{\delta,d}( 1 + \log^{O_{\delta,d}(1)}(C) ) \right);$$ in the $d=1$ case one can instead take $$ \beta = c \min(\delta,1-\delta) C^{-25}$$ for some absolute constant $c>0$. In particular, by H\"older's inequality, if $X \subset B^d(0,1)$ is $\delta'$-regular on scales $[h,1]$ with constant $C'$ for some $0 \leq \delta' \leq d$ and $C' > 1$, then \begin{equation}\label{fey-2} \| 1_{X_h} {\mathcal F}_h 1_{Y_h} \|_{L^2(\R^d) \to L^2(\R^d)} \lesssim_{C,C',d,\delta,\delta'} h^{\frac{3}{8} (d-\delta-\delta')+\beta}. \end{equation} \end{theorem} We prove this theorem in Section \ref{fract-sec}. One can compare \eqref{fey-2} against the trivial bound, which in this case is \begin{equation}\label{xhyh} \| 1_{X_h} {\mathcal F}_h 1_{Y_h} \|_{L^2(\R^d) \to L^2(\R^d)} \lesssim_{C,C',d,\delta,\delta'} h^{\max\left(\frac{d-\delta-\delta'}{2},0\right)}. \end{equation} Thus one has new fractal uncertainty principles for $\delta'$-regular $X$ and $\delta$-regular $Y$ when $\delta$ is a non-integer and $\delta+\delta'$ is sufficiently close to $d$; by duality one also has similar results when it is $\delta'$ that is assumed to be non-integer in place of $\delta$. As before, a modification of \cite[Example 6.1]{dyatlov-survey} shows that no improvement of the trivial bound \eqref{xhyh} can be obtained when $\delta,\delta'$ are both integers. \subsection{Acknowledgments} The first author is supported by a National Science Foundation Postdoctoral Fellowship, NSF grant 1703715.
The second author is supported by NSF grant DMS-1764034 and by a Simons Investigator Award. The first author would also like to thank Semyon Dyatlov for helpful conversations, and Josh Zahl for a correction. \subsection{Notation}\label{notation-sec} We use the asymptotic notation $X \lesssim Y$, $Y \gtrsim X$, or $X = O(Y)$ to denote the bound $|X| \leq CY$ for an absolute constant $C > 0$, and use $X \sim Y$ synonymously with $X \lesssim Y \lesssim X$. If we need the constant $C$ to depend on additional parameters (e.g., the dimension $d$), we indicate this by subscripts, thus for instance $X \lesssim_d Y$ denotes the bound $|X| \leq C_d Y$ for some constant $C_d>0$ that depends on $d$. If $E$ is a finite set, we use $\# E$ to denote its cardinality. \section{The one-dimensional case: obtaining a slight gain} In this section we begin the proof of Theorem \ref{main}. Namely, we establish the following seemingly weaker variant in which one obtains only a slight improvement over the trivial bound \eqref{triv}, but at a fixed scale $r_0>0$. In the next section we will use a standard induction on scales argument to iterate this slight improvement to a power gain in the scale parameter. \begin{proposition}[Slight gain over the trivial bound in one dimension]\label{slight-gain-1} Let $0 < \delta < 1$, $C>1$ and $0 < \eps \leq 1/2$. Let $r_0>0$ be the quantity \begin{equation}\label{r0-def} r_0 \coloneqq \exp\left( - C_2 \frac{C^{16} \log^2(C/\eps)}{\eps^2 \min(\delta,1-\delta)}\right) \end{equation} for a sufficiently large absolute constant $C_2$. Let $X \subset \R$ be a $\delta$-regular set on scales $[r_0,1]$ with constant $C$, and let $\mu = \mu_X$ be an associated regular measure. Then we have $$ \Energy(\mu|_{[-1,1]}, r_0) \leq \eps r_0^\delta.$$ \end{proposition} We now turn to the proof of this proposition. Let $0 < \delta < 1$, $C>1$, $0 < \eps \leq 1/2$, and define $r_0$ by \eqref{r0-def}.
We let $X \subset \R$ be $\delta$-regular on scales $[r_0,1]$ with constant $C$, with an associated regular measure $\mu = \mu_X$. Our task is to establish the bound $$ \Energy(\mu|_{[-1,1]}, r_0) \leq \eps r_0^\delta.$$ We first estimate the left-hand side by an integral involving the $\mu^2$-measure of various ``strips'' $S_z$: \begin{lemma}\label{mor} One has \begin{equation}\label{emor} \Energy(\mu|_{[-1,1]}, r_0) \lesssim r_0^{-1} \int_{[-3,3]} \mu^2( S_z )^2\ dz \end{equation} where $\mu^2 = \mu \times \mu$ is the product of two copies of the measure $\mu$, and for each $z \in \R$, $S_z$ denotes the strip $$ S_z \coloneqq \{ (x,y) \in [-1,1]^2: |x+y-z| \leq r_0 \}$$ (see Figure \ref{fig:strip}). \end{lemma} \begin{figure} [t] \centering \includegraphics[width=4in]{./strip.png} \caption{A strip $S_z$.} \label{fig:strip} \end{figure} \begin{proof} By the Fubini--Tonelli theorem, the right-hand side is $$ r_0^{-1} \int_{\R^4} \left(\int_{[-3,3]} 1_{|x_1+x_4-z|, |x_2+x_3-z| \leq r_0}\ dz\right)\ d\mu^4(x_1,x_2,x_3,x_4).$$ If $x_1,x_2,x_3,x_4 \in [-1,1]$ are such that $|x_1-x_2-x_3+x_4| \leq r_0$, then direct calculation shows that $$ \int_{[-3,3]} 1_{|x_1+x_4-z|, |x_2+x_3-z| \leq r_0}\ dz \gtrsim r_0$$ and the claim follows. \end{proof} To estimate the right-hand side of \eqref{emor} we shall remove a small exceptional set $E$ from $[-1,1]^2$: \begin{corollary}\label{conc} For any measurable subset $E$ of $[-1,1]^2$, one has $$ \Energy(\mu|_{[-1,1]}, r_0) \lesssim C^3 r_0^\delta \left( \mu^2(E) + \sup_{z \in [-3,3]} \mu(\pi(X^2 \cap S_z \backslash E)) \right)$$ where $\pi \colon \R^2 \to \R$ is the projection to the first coordinate: $\pi(x,y) \coloneqq x$.
\end{corollary} \begin{proof} Since $\mu^2$ is supported on $X^2$, we can split $$ \mu^2( S_z )^2 \leq \mu^2(S_z \cap E) \mu^2(S_z) + \mu^2(X^2 \cap S_z \backslash E) \mu^2(S_z) $$ and thus \begin{align*} \Energy(\mu|_{[-1,1]}, r_0) &\lesssim r_0^{-1} \int_{[-3,3]} \mu^2( S_z \cap E ) \mu^2(S_z)\ dz\\ &\quad + r_0^{-1} \left(\sup_{z \in [-3,3]} \mu^2(X^2 \cap S_z \backslash E)\right) \int_{[-3,3]} \mu^2(S_z)\ dz. \end{align*} By the Fubini--Tonelli theorem one has \begin{align*} \int_{[-3,3]} \mu^2(S_z)\ dz &= \int_{[-1,1]^2} \int_{[-3,3]} 1_{|x+y-z| \leq r_0}\ dz d\mu^2(x,y) \\ &\lesssim r_0 \mu([-1,1])^2 \\ &\lesssim C^2 r_0 \end{align*} and similarly \begin{align*} \int_{-3}^3 \mu^2( S_z \cap E ) \mu^2(S_z)\ dz &= \int_E \int_{[-1,1]^2} \int_{-3}^3 1_{|x_1+x_4-z|, |x_2+x_3-z| \leq r_0}\ dz d\mu^2(x_2,x_3) d\mu^2(x_1,x_4) \\ &\lesssim r_0 \int_E \int_{[-1,1]} \mu(B(x_1-x_2+x_4,2r_0))\ d\mu(x_2) d\mu^2(x_1,x_4) \\ &\lesssim r_0 \int_E \mu([-1,1]) C r_0^\delta\ d\mu^2(x_1,x_4) \\ &\lesssim C^2 r_0^{1+\delta} \mu^2(E). \end{align*} Finally, for any $z$, one has from the Fubini--Tonelli theorem again that \begin{align*} \mu^2(X^2 \cap S_z \backslash E) &\leq \int_{\pi(X^2 \cap S_z \backslash E)} \mu([x-r_0,x+r_0])\ d\mu(x) \\ &\lesssim C r_0^\delta \mu( \pi(X^2 \cap S_z \backslash E) ). \end{align*} Combining all these estimates, we obtain the claim. \end{proof} To construct this exceptional set $E$ we will take advantage of the porous nature of the set $X$ as viewed through a sparse $K$-adic mesh. Let $K \geq 1000$ be a perfect square and a multiple of $100$, to be chosen later, so large that \begin{equation}\label{k1d} K^{\delta/2}, K^{(1-\delta)/2} \geq C_0 C^2 \end{equation} for some large absolute constant $C_0$. Define a \emph{$K$-adic interval} to be a half-open interval of the form $I = [jK^{-n}, (j+1)K^{-n})$ for some integers $j,n$. Such an interval will be said to be \emph{active} if it intersects $X$, and \emph{inactive} otherwise.
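To make these notions concrete, the following quick computation (ours, not part of the argument) runs the definitions on the set appearing in the example later in this section: the set $X \subset [0,1]$ of numbers whose decimal digits all lie in $\{1,2,5,6\}$, taking $K = 10$ purely for illustration and ignoring the requirement that $K$ be a large perfect square. It counts the active $10$-adic intervals at a given depth, which grow like $4^n = (10^n)^{\delta}$ with $\delta = \log 4/\log 10$, and measures the longest run of consecutive active intervals, which stays very short, the porosity phenomenon exploited in Lemma \ref{poros}.

```python
# Quick check (ours, not from the paper) of the active-interval bookkeeping
# for X = { reals in [0,1] with all decimal digits in {1,2,5,6} }, with K = 10.
# A 10-adic interval [a/10^n, (a+1)/10^n) is active iff the n-digit string of
# a (with leading zeros) uses only digits from the alphabet.

ALPHABET = set("1256")

def is_active(a, n):
    """Does [a * 10^-n, (a+1) * 10^-n) intersect X?"""
    return a >= 0 and set(str(a).zfill(n)) <= ALPHABET

n = 3
active = [a for a in range(10 ** n) if is_active(a, n)]

# Regularity through the mesh: 4^n active intervals at depth n.
count = len(active)

# Porosity: runs of consecutive active intervals (such as 11, 12 or 55, 56)
# never have length more than 2 for this X, so the long runs forbidden by
# the porosity lemma never come close to occurring.
longest, run = 1, 1
for prev, cur in zip(active, active[1:]):
    run = run + 1 if cur == prev + 1 else 1
    longest = max(longest, run)
print(count, longest)
```

The alphabet $\{1,2,5,6\}$ consists of two pairs of adjacent digits, which is exactly why the longest active run is $2$: a run of three consecutive intervals would require three consecutive admissible final digits.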
We have the following basic porosity property: \begin{lemma}[Porosity]\label{poros} There does not exist a sequence $I_1,\dots,I_{\sqrt{K}}$ of $\sqrt{K}$ consecutive active $K$-adic intervals of equal length in $[r_0,1]$. \end{lemma} \begin{proof} Suppose for contradiction that there were a sequence $I_1,\dots,I_{\sqrt{K}}$ of consecutive active $K$-adic intervals of some length $K^{-n} \in [r_0,1]$. From the regularity of $\mu$, we have $$ \mu( I_j + [-K^{-n}, K^{-n}]) \gtrsim C^{-1} K^{-n\delta}$$ for all $j=1,\dots,\sqrt{K}$, hence on summing and noting the bounded overlap of the intervals $I_j + [-K^{-n}, K^{-n}]$, $$ \mu\left( \bigcup_{j=1}^{\sqrt{K}} I_j + [-K^{-n}, K^{-n}]\right) \gtrsim C^{-1} \sqrt{K} K^{-n\delta}.$$ On the other hand, from regularity again we have $$ \mu\left( \bigcup_{j=1}^{\sqrt{K}} I_j + [-K^{-n}, K^{-n}]\right) \lesssim C K^{(1/2-n)\delta}.$$ This contradicts \eqref{k1d} if $C_0$ is large enough. \end{proof} Now we make some further definitions concerning $K$-adic intervals (see Figure \ref{fig:family}): \begin{itemize} \item[(i)] If $I = [jK^{-n}, (j+1)K^{-n})$ is a $K$-adic interval, we define the \emph{right shift} of $I$ to be the $K$-adic interval $I^+ \coloneqq [(j+1)K^{-n}, (j+2)K^{-n})$, the \emph{left shift} of $I$ to be the $K$-adic interval $I^- \coloneqq [(j-1)K^{-n}, jK^{-n})$, and the \emph{parent} $I^*$ to be the unique $K$-adic interval of length $K^{1-n}$ that contains $I$. We then call $I$ a \emph{child} of $I^*$, and define a \emph{grandchild} to be a child of a child. \item[(ii)] Given two intervals $I,J$, we say that $I$ \emph{lies to the left} of $J$, or equivalently that $J$ \emph{lies to the right} of $I$, if $x < y$ for all $x \in I$ and $y \in J$. \item[(iii)] A \emph{sibling} of a $K$-adic interval $I$ is another $K$-adic interval $J$ with the same parent as $I$: $I^* = J^*$. A \emph{left sibling} (resp. right sibling) of $I$ is a sibling $J$ that lies to the left (resp. right) of $I$.
\item[(iv)] A \emph{left edge} is an active $K$-adic interval $I$ of length $K^{-2n} \in [r_0,1]$ for some $n \geq 1$, with the property that all left siblings of $I$, as well as the left shift $(I^*)^-$ of the parent $I^*$, are inactive. \item[(v)] A \emph{left near-edge} is a $K$-adic interval $J$, to which there is associated a left edge $I$ of the same length as $J$ and equal to or to the left of $J$, such that all the $K$-adic intervals of the same length as $J$ between $J$ and $I$ are active. (In particular, if $I$ is a left edge, then $I$ and $I^+$ are left near-edges, and from Lemma \ref{poros} each left edge is associated to at most $K/100$ left near-edges, which are all adjacent to each other and of the same length as the left edge, with the rightmost of these left near-edges being inactive.) \end{itemize} \begin{figure} [t] \centering \includegraphics[width=4in]{./family.png} \caption{Some selected $K$-adic intervals (with $K=4$). The interval $J$ here is a grandchild of $I^*$, and its parent $J^*$ is a left sibling of $I$. If the intervals in red are active and the intervals in black are inactive, and $I$ has length $K^{-2n} \in [r_0,1]$ for some $n \geq 1$, then $I$ will be a left edge and $I, I^+, I^{++}$ will be left near-edges, but none of the other intervals of length $K^{-2n}$ depicted here will be left near-edges.} \label{fig:family} \end{figure} One could also define the notion of a right edge and right near-edge, but we will not need to do so here. \begin{example} Let $K=10$, and let $X \subset [0,1]$ be the set of all real numbers whose decimal expansions take values in the set $\{ 1, 2, 5, 6\}$. Then a $10$-adic interval of length $10^{-2n}$ is an interval of the form $[\frac{a}{10^{2n}}, \frac{a+1}{10^{2n}})$, where $a$ is an integer. This interval is active if $a$ is positive, has a decimal expansion of length $2n$, and has all digits in $\{1,2,5,6\}$; a left edge arises if furthermore the final digit is $1$ and the penultimate digit lies in $\{1,5\}$.
Finally, a left near-edge of length $10^{-2n}$ arises if $a$ is positive with a decimal expansion of length $2n$ whose final digit lies in $\{1,2,3\}$, penultimate digit lies in $\{1,5\}$, and all other digits lie in $\{1,2,5,6\}$. Observe that almost all elements of $X$ will lie in at least one left near-edge, in the sense that the set of exceptions has strictly smaller dimension than $X$ itself. This phenomenon of abundance of left near-edges is crucial to our argument. \end{example} We make the following basic observations. Firstly, if $I$ is a left edge, then it is active, and from the regularity property of $\mu$ we have $$ \mu( I^- \cup I \cup I^+ ) \gtrsim C^{-1} |I|^\delta.$$ On the other hand, as $I$ is a left edge, $I^-$ cannot be active (it is either a left sibling of $I$, or lies in $(I^*)^-$) and thus \begin{equation}\label{muii} \mu( I \cup I^+ ) \gtrsim C^{-1} |I|^\delta. \end{equation} Thus at least one of the left near-edges $I,I^+$ associated to $I$ must absorb a relatively large amount of the mass of $\mu$ (compared to the upper bound of $C |I|^\delta$ coming from the regularity hypothesis). Next, we claim that any two left edges $I,J$ of the same length $|I|=|J|$ must be separated from each other by at least $K|I|$. Indeed, we may assume without loss of generality that $J$ lies to the left of $I$; as the left siblings of $I$, as well as the entirety of $(I^*)^-$, are inactive, this forces $J$ to lie to the left of $(I^*)^-$, giving the claim. In particular, we see that the left near-edges associated to $I$ are disjoint from the left near-edges associated to $J$. Now we make the key claim that lets us produce many left edges at many scales. \begin{lemma}[Many left edges]\label{many-left} Let $n \geq 1$ be such that $K^{-2n} \in [K^2 r_0, 1]$. Let $I$ be an active $K$-adic interval of length $K^{-2n}$. Then $I \cup I^-$ contains a left edge $J$ of length $K^{-2} |I| = K^{-2(n+1)}$.
Furthermore, if $J$ is associated to a left near-edge $J'$ that is contained in a larger left near-edge $\tilde J$, then $I$ is also contained in a left near-edge of the same length as $\tilde J$. \end{lemma} \begin{figure} [t] \centering \includegraphics[width=4in]{./manyleft.png} \caption{A typical situation that arises in the proof of Lemma \ref{many-left} (only a portion of the large intervals $\tilde J, \tilde J^+$ are depicted here). Active intervals are displayed in red, inactive intervals in black. In this example $J, J'$ is contained in $I^-$, but it is also possible for one or both of these intervals to lie in $I$ instead. Similarly, it is also possible for $I$ to lie in $\tilde J$ rather than $\tilde J^+$. Note that $J$ is a left edge.} \label{fig:manyleft} \end{figure} \begin{proof} By Lemma \ref{poros}, at least one of the children $I'$ of $I^-$ will be inactive. On the other hand, by the pigeonhole principle at least one of the grandchildren of $I$ is active. In particular we can find an active interval $J$ of length $K^{-2} |I|$ that lies to the right of $I'$, is a grandchild of either $I$ or $I^-$, and is the leftmost interval with these properties; see Figure \ref{fig:manyleft}. By Lemma \ref{poros}, $J$ lies at a distance of at most $\sqrt{K} |J| = K^{-1/2} |I'|$ from $I'$, and thus is a grandchild of either $I$ or $I^-$. By construction, all the intervals of length $K^{-2} |I|$ between $I'$ and $J$ are inactive. As $I'$ is also inactive, this makes $J$ a left edge by definition. Now suppose that there is a left near-edge $J'$ associated to $J$ that is contained in a larger left near-edge $\tilde J$; thus $\tilde J$ is at least as large as $I$. Note that $J'$ is either equal to $J$, or lies to the right of $J$ at a distance of at most $\sqrt{K} |J| = K^{-3/2} |I|$, thanks to Lemma \ref{poros}; in particular, $J'$ also lies in $I^- \cup I$. If $J'$ lies in $I$ then $\tilde J$ will contain $I$ and we are done, so suppose $J'$ lies in $I^-$.
Then $\tilde J$ contains $I^-$, hence contains $J$, hence is active, hence $\tilde J^+$ is also a left near-edge. As $\tilde J$ contains $I^-$, $I$ will be contained in either $\tilde J$ or $\tilde J^+$, and the claim follows. \end{proof} Now we can construct the exceptional set $E \subset [-1,1]^2$. Let $N$ be the largest integer such that $K^{-2N} \geq r_0$. For any $0 \leq n \leq N$, we let $E_n$ be the set of all elements of $[-1,1]^2$ that do not lie in a square $I_1 \times I_2$, where $I_1,I_2$ are two left near-edges of equal length $K^{-2n'}$ for some $1 \leq n' \leq n$. Clearly $E_0 = [-1,1]^2$, hence by regularity $$ \mu^2(E_0) \lesssim C^2.$$ Thanks to Lemma \ref{many-left}, we can now show a geometric decrease in the measures of the $E_n$: \begin{proposition}[Geometric decrease] For every $0 \leq n \leq N-1$, one has $$ \mu^2(E_{n+1}) \leq (1-c C^{-4} K^{-4\delta}) \mu^2(E_{n})$$ for some absolute constant $c>0$. \end{proposition} \begin{proof} It suffices to show that $$ \mu^2(E_n \backslash E_{n+1}) \gtrsim C^{-4} K^{-4\delta} \mu^2(E_n).$$ By construction, $E_n$ is the union of squares $I_1 \times I_2$, where $I_1,I_2$ are $K$-adic intervals of length $K^{-2n}$ such that there is no $1 \leq n' \leq n$ for which $I_1, I_2$ are respectively contained in two left near-edges $\tilde I_1, \tilde I_2$ of length $K^{-2n'}$. If $I_1$ or $I_2$ are inactive then the square $I_1 \times I_2$ has zero $\mu^2$-measure, so we can restrict attention to squares $I_1 \times I_2$ which are \emph{active} in the sense that $I_1$ and $I_2$ are both active.
By regularity, the contribution of each active square can be bounded by $$ \mu^2(I_1 \times I_2) \lesssim (C |I_1|^\delta) (C |I_2|^\delta) = C^2 K^{-4n\delta}.$$ Thus, if there are $M$ active squares, we have $$ \mu^2(E_n) \lesssim C^2 K^{-4n\delta} M$$ and so it will now suffice to establish the bound $$ \mu^2(E_n \backslash E_{n+1}) \gtrsim C^{-2} K^{-4n\delta-4\delta} M.$$ From Lemma \ref{many-left}, for each active square $I_1 \times I_2$ one can find left edges $J_1 \subset I_1^- \cup I_1$, $J_2 \subset I_2^- \cup I_2$ of length $K^{-2n-2}$ obeying the conclusions of the lemma. From \eqref{muii} and the pigeonhole principle we can find for each $i=1,2$ an interval $J'_i$ that is either equal to $J_i$ or its right shift $J_i^+$, such that $$ \mu(J'_i) \gtrsim C^{-1} (K^{-2n-2})^\delta$$ and hence $$ \mu^2(J'_1 \times J'_2) \gtrsim C^{-2} K^{-4n\delta-4\delta}.$$ Note that $J'_1,J'_2$ are left near-edges associated to $J_1,J_2$ respectively. In particular, from Lemma \ref{many-left}, since $I_1,I_2$ fail to be respectively contained in left near-edges of length $K^{-2n'}$ for some $1 \leq n' \leq n$, the same is true for $J'_1, J'_2$. By construction of $E_{n+1}$, this implies that $$ J'_1 \times J'_2 \subset E_n \backslash E_{n+1}.$$ Since $J'_1 \times J'_2$ lies within $O(K^{-2n})$ of $I_1 \times I_2$, we see that each square $J'_1 \times J'_2$ can be generated by at most $O(1)$ of the $M$ active squares $I_1 \times I_2$. Thus we have $$ \mu^2( E_n \backslash E_{n+1} ) \gtrsim C^{-2} K^{-4n\delta-4\delta} M$$ giving the claim. \end{proof} Iterating this proposition, and setting $E \coloneqq E_N$, we see that \begin{align*} \mu^2(E) &\lesssim C^2 (1-c C^{-4} K^{-4\delta})^N \\ &\lesssim C^2 \exp( -c N C^{-4} K^{-4\delta} ) \\ &\lesssim C^2 r_0^{\frac{c'}{C^4 K^{4\delta} \log K}} \end{align*} for some absolute constants $c, c'>0$. Now let $z \in [-3,3]$. In view of Corollary \ref{conc}, we are interested in bounding the expression $\mu(\pi(X^2 \cap S_z \backslash E))$.
We can of course write this as $\mu(Z)$, where $Z$ is the set $$ Z \coloneqq \pi(X^2 \cap S_z \backslash E).$$ By construction of $E$, we can then bound $$ \mu(Z) \leq \sum_{I_1,I_2} \mu\left( \pi\left(X^2 \cap S_z \cap \left(\bigcup_{I'_1} I'_1 \times \bigcup_{I'_2} I'_2\right)\right) \right)$$ where $I_1,I_2$ range over pairs of left-edges of equal length in $[r_0,1]$, and $I'_1,I'_2$ range over the left near-edges associated with $I_1,I_2$ respectively. Now we make a key calculation. \begin{lemma}[Bounding a piece of $\mu(Z)$]\label{muz-piece} To every pair $I_1,I_2$ of left edges of equal length in $[r_0,1]$, one can find a set $Y_{I_1,I_2} \subset [-2,2]$ such that \begin{equation}\label{yip} \mu\left( \pi\left( X^2 \cap S_z \cap \left(\bigcup_{I'_1} I'_1 \times \bigcup_{I'_2} I'_2\right) \right) \right) \lesssim C^2 K^{-\delta/2} \mu(Y_{I_1,I_2}). \end{equation} Furthermore, the sets $Y_{I_1,I_2}$ are disjoint as $(I_1,I_2)$ vary. \end{lemma} The key points here are the gain of $K^{-\delta/2}$ on the right-hand side, and the disjointness of the sets $Y_{I_1,I_2}$. \begin{figure} [t] \centering \includegraphics[width=4in]{./gain.png} \caption{A typical situation that arises in the proof of Lemma \ref{muz-piece}. To reduce clutter we take $I_1=I'_1$ and $I_2=I'_2$. The set $X^2$ (which supports the measure $\mu^2$) is contained in the squares formed by products of active intervals, such as $I_1 \times I_2$ (which is a typical component of the complement of $E$). Because $I_1,I_2$ are left-edges, if $S_z$ intersects $I'_1 \times I'_2 = I_1 \times I_2$, then the set $\pi(X^2 \cap S_z \cap (I'_1 \times I'_2))$, being contained in $I_1 = I'_1$, will be much smaller than the nearby $Y_{I_1,I_2}$ as measured using the regular measure $\mu$; on the other hand, $Y_{I_1,I_2}$ stays at a medium distance from the set $Z = \pi(X^2 \cap S_z \backslash E)$ and from $I_1$, which will ensure that the $Y_{I_1,I_2}$ are disjoint as $I_1,I_2$ vary.
Note how this picture resembles the Cartesian product of the two portions of the real line depicted in Figure \ref{fig:leftedge}.} \label{fig:gain} \end{figure} \begin{proof} Write $I_1 = [x_1, x_1 + K^{-2n})$ and $I_2 = [x_2, x_2 + K^{-2n})$ for some $K^{-2n} \in [r_0,1]$. We can assume that the set $X^2 \cap S_z \cap (\bigcup_{I'_1} I'_1 \times \bigcup_{I'_2} I'_2)$ is non-empty, otherwise we can simply set $Y_{I_1,I_2}$ to be the empty set. From Lemma \ref{poros}, the intervals $I'_i$ lie within $2 \sqrt{K} K^{-2n}$ of $x_i$ for $i=1,2$, hence by definition of $S_z$ we have \begin{equation}\label{zoo} |z - x_1 - x_2| \leq 5 \sqrt{K} K^{-2n}. \end{equation} Also, we see from the definition of $Z$ that $Z$ contains a point within $2 \sqrt{K} K^{-2n}$ of $x_1$, thus \begin{equation}\label{disp} \mathrm{dist}(x_1,Z) \leq 2\sqrt{K} K^{-2n}. \end{equation} We define $Y_{I_1,I_2}$ to be the interval \begin{equation}\label{y-def} Y_{I_1,I_2} \coloneqq \left[x_1 + 100 \sqrt{K} K^{-2n}, x_1 + \frac{1}{100} K^{-2n+1}\right]; \end{equation} see Figure \ref{fig:gain}. Clearly $Y_{I_1,I_2} \subset [-2,2]$. We now verify \eqref{yip}. We can use regularity to bound \begin{align*} \mu\left( \pi\left( S_z \cap \left(\bigcup_{I'_1} I'_1 \times \bigcup_{I'_2} I'_2\right) \right) \right) &\leq \mu\left( \bigcup_{I'_1} I'_1 \right)\\ &\lesssim C (\sqrt{K} K^{-2n})^\delta. \end{align*} Next, as $I_1$ is a left edge, it contains an element $x_*$ of $X$, and $$ \mu([x_1 - K^{-2n+1}, x_1)) = 0$$ and hence $$ \mu(Y_{I_1,I_2}) \geq \mu\left( B(x_*,\frac{1}{200} K^{-2n+1}) \backslash B(x_*,200 \sqrt{K} K^{-2n}) \right).$$ From regularity we have $$\mu\left( B(x_*,\frac{1}{200} K^{-2n+1}) \right) \gtrsim C^{-1} K^{(-2n+1)\delta}$$ and $$ \mu\left( B(x_*,200 \sqrt{K} K^{-2n}) \right) \lesssim C K^{(-2n+1/2)\delta}$$ hence by \eqref{k1d} we have (for $C_0$ large enough) that $$ \mu(Y_{I_1,I_2}) \gtrsim C^{-1} K^{(-2n+1)\delta}$$ and \eqref{yip} follows.
It remains to establish the disjointness of the $Y_{I_1,I_2}$. To do this we study the relative position of $Y_{I_1,I_2}$ and $Z$. From \eqref{disp} one has \begin{equation}\label{y1} \mathrm{dist}(y,Z) \leq \frac{1}{10} K^{-2n+1} \end{equation} for all $y \in Y_{I_1,I_2}$. Now we use the fact that $I_2$ is a left-edge, thus $$ X \cap [x_2 - K^{-2n+1}, x_2) = \emptyset.$$ By definition of $S_z$, this implies that the set $Z \subset \pi( X^2 \cap S_z )$ avoids the interval $(z - x_2+r_0, z-x_2+K^{-2n+1}-r_0)$. Using \eqref{zoo}, we conclude that $Z$ avoids the interval $$ \left[x_1 + 10 \sqrt{K} K^{-2n}, x_1 + \frac{1}{2} K^{-2n+1}\right]$$ (say), and thus we have \begin{equation}\label{y2} \mathrm{dist}(y,Z) \geq 10 K^{-2n+1/2} \end{equation} for all $y \in Y_{I_1,I_2}$. The bounds \eqref{y1}, \eqref{y2} imply that two intervals $Y_{I_1,I_2}, Y_{I'_1,I'_2}$ cannot overlap if the lengths $|I_1|=|I_2|=K^{-2n}$, $|I'_1|=|I'_2|=K^{-2n'}$ are distinct. Thus it remains to show that, when all the lengths $|I_1|=|I_2|=|I'_1|=|I'_2|=K^{-2n}$ are equal, the intervals $Y_{I_1,I_2}, Y_{I'_1,I'_2}$ can only overlap if $(I_1,I_2) = (I'_1,I'_2)$. Recall that if the left-edges $I_1,I'_1$ are distinct, they are separated from each other by at least $K^{-2n+1}$. Comparing this with \eqref{y-def} we see that one can only have $Y_{I_1,I_2}, Y_{I'_1,I'_2}$ intersect if $I_1=I'_1$. But then from \eqref{zoo} we see that left-edges $I_2, I'_2$ are separated by at most $10 \sqrt{K} K^{-2n}$, and hence must also be equal from the separation property of left-edges. Thus we see that the $Y_{I_1,I_2}$ are indeed disjoint as $(I_1,I_2)$ vary.
\end{proof} Summing the above lemma over all pairs $(I_1,I_2)$ we conclude that $$ \mu(Z) \lesssim C^2 K^{-\delta/2} \mu([-2,2]) \lesssim C^3 K^{-\delta/2}$$ and hence by Corollary \ref{conc} we have $$ \Energy(\mu|_{[-1,1]}, r_0) \lesssim C^6 r_0^\delta \left( r_0^{\frac{c'}{C^4 K^{4\delta} \log K}} + K^{-\delta/2} \right).$$ If we now set $$ K \coloneqq (C_1 C^6/\eps)^{\max(\frac{2}{\delta}, \frac{2}{1-\delta})}$$ for a sufficiently large absolute constant $C_1$, then the condition \eqref{k1d} will be satisfied, and $$ \Energy(\mu|_{[-1,1]}, r_0) \leq r_0^\delta \left(\frac{\eps}{2} + O\left( C^6 r_0^{\frac{c'' \eps^2 \min(\delta,1-\delta)}{C^{16} \log(C/\eps)}} \right) \right)$$ for some absolute constant $c''>0$. Thus if we select $$ r_0 \coloneqq \exp\left( - C_2 \frac{C^{16} \log^2(C/\eps)}{\eps^2 \min(\delta,1-\delta)} \right)$$ for a sufficiently large absolute constant $C_2$, we obtain $$ \Energy(\mu|_{[-1,1]}, r_0) \leq \eps r_0^\delta$$ as required. This concludes the proof of Proposition \ref{slight-gain-1}. \section{The one-dimensional case: induction on scales}\label{induct-sec} In this section we show how one can iterate Proposition \ref{slight-gain-1} to obtain Theorem \ref{main}. Let $\delta, C, \alpha, X, \mu_X$ be as in Theorem \ref{main}. Let $\eps>0$ be a small parameter to be chosen later, and let $r_0$ be the quantity in Proposition \ref{slight-gain-1}. It is convenient to adopt the notation of Gowers uniformity norms \cite{gowers}. For $f_1,f_2,f_3,f_4 \in L^{4/3}(\R)$, define the Gowers inner product $$ \langle f_1,f_2,f_3,f_4 \rangle_{U^2(\R)} \coloneqq \int_\R \int_\R \int_\R f_1(x) \overline{f_2}(x+h) \overline{f_3}(x+k) f_4(x+h+k)\ dx dh dk$$ and the Gowers uniformity norm $$ \|f\|_{U^2(\R)} \coloneqq \langle f,f,f,f \rangle_{U^2(\R)}^{1/4}$$ for any $f \in L^{4/3}(\R)$.
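To illustrate the definition with a quick example (not needed in the sequel), take $f = 1_{[0,1]}$, and write $g(h) \coloneqq \int_\R f(x) f(x+h)\ dx = (1-|h|)_+$ for the autocorrelation of $f$. For each fixed $h$, the substitution $u \coloneqq x+k$ factorizes the integral in $x$ and $k$ as $g(h)^2$, so that $$ \langle f,f,f,f \rangle_{U^2(\R)} = \int_\R g(h)^2\ dh = 2 \int_0^1 (1-h)^2\ dh = \frac{2}{3}$$ and hence $\|1_{[0,1]}\|_{U^2(\R)} = (2/3)^{1/4}$.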
One can also write the $U^2$ norm in terms of the Fourier transform as \begin{equation}\label{u24} \|f\|_{U^2(\R)} = \| \hat f \|_{L^4(\R)} \end{equation} so it is clear that the $U^2(\R)$ norm is indeed a norm. We also recall the well-known \emph{Gowers--Cauchy--Schwarz inequality} \begin{equation}\label{gcz-1} |\langle f_1,f_2,f_3,f_4 \rangle_{U^2(\R)}| \leq \|f_1\|_{U^2(\R)} \|f_2\|_{U^2(\R)} \|f_3\|_{U^2(\R)} \|f_4\|_{U^2(\R)}. \end{equation} We can relate the Gowers norms to additive energy as follows: \begin{lemma}[Additive energy and Gowers norms]\label{agn} If $\mu$ is a finite Radon measure on $\R$ and $r>0$, then \begin{equation}\label{emr} \Energy(\mu,r) \sim r^{-3} \| \mu * 1_{[-r,r]} \|_{U^2}^4 \end{equation} (compare with Lemma \ref{mor}). Also, for any $\lambda>0$ one has \begin{equation}\label{lamr} \| \mu * 1_{[-\lambda r,\lambda r]} \|_{U^2} \sim_\lambda \| \mu * 1_{[-r,r]} \|_{U^2}. \end{equation} \end{lemma} \begin{proof} The claim \eqref{lamr} is clear after observing that $1_{[-\lambda r, \lambda r]}$ can be bounded by the sum of $O_\lambda(1)$ translates of $1_{[-r,r]}$ (and vice versa), together with the triangle inequality and translation invariance of Gowers norms. By the Fubini--Tonelli theorem we have $$ \| \mu * 1_{[-r,r]} \|_{U^2}^4 = \int_{\R^4} 1_{[-r,r]} * 1_{[-r,r]} * 1_{[-r,r]} * 1_{[-r,r]}(x_1+x_2-x_3-x_4)\ d\mu(x_1) \dots d\mu(x_4).$$ Since $$ 1_{[-r,r]} * 1_{[-r,r]} * 1_{[-r,r]} * 1_{[-r,r]} \gtrsim r^3 1_{[-r,r]}$$ we obtain the bound $$ \| \mu * 1_{[-r,r]} \|_{U^2}^4 \gtrsim r^3 \Energy(\mu,r).$$ Conversely, since $$ 1_{[-r/4,r/4]} * 1_{[-r/4,r/4]} * 1_{[-r/4,r/4]} * 1_{[-r/4,r/4]} \lesssim r^3 1_{[-r,r]}$$ we have $$ \| \mu * 1_{[-r/4,r/4]} \|_{U^2}^4 \lesssim r^3 \Energy(\mu,r)$$ and the claim \eqref{emr} now follows from \eqref{lamr}.
\end{proof} The key claim is the following inequality relating the energy at different scales: \begin{proposition}[Energy at nearby scales]\label{en-nearby} If $\alpha \leq r \leq r_0$, one has $$ \Energy(\mu,r) \lesssim C^4 \eps r_0^\delta \Energy(\mu,r/r_0).$$ \end{proposition} \begin{proof} By Lemma \ref{agn} one has $$ \Energy(\mu,r) \sim r^{-3} \| \mu * 1_{[-r,r]} \|_{U^2(\R)}^4.$$ Partition $\R$ into a collection ${\mathcal I}$ of half-open intervals $I$ of length $r/r_0$; then we can decompose $$ \mu = \sum_I \mu|_I$$ where $\mu|_I$ denotes the restriction of $\mu$ to $I$, and thus $$ \Energy(\mu,r) \sim \sum_{I_1,I_2,I_3,I_4} r^{-3} \langle \mu|_{I_1} * 1_{[-r,r]}, \dots, \mu|_{I_4} * 1_{[-r,r]} \rangle_{U^2(\R)}$$ where $I_1,\dots,I_4$ are understood to vary over ${\mathcal I}$. The integral vanishes unless $I_1-I_2-I_3+I_4$ intersects $[-4r,4r]$ and all of the $\mu|_{I_i}$ are non-vanishing, which by Definition \ref{reg-set}(iii) implies that $$ \mu(2I_i-I_i) \gtrsim C^{-1} (r/r_0)^\delta,$$ where $2I_i-I_i = I_i + [-r/r_0,r/r_0]$ is the triple of $I_i$. When this occurs, we can use the Gowers--Cauchy--Schwarz inequality to estimate $$ \langle \mu|_{I_1} * 1_{[-r,r]}, \dots, \mu|_{I_4} * 1_{[-r,r]} \rangle_{U^2(\R)} \leq \prod_{i=1}^4 \| \mu|_{I_i} * 1_{[-r,r]}\|_{U^2(\R)}$$ and hence by Lemma \ref{agn} again $$ r^{-3} \langle \mu|_{I_1} * 1_{[-r,r]}, \dots, \mu|_{I_4} * 1_{[-r,r]} \rangle_{U^2(\R)} \lesssim \prod_{i=1}^4 \Energy(\mu|_{I_i}, r)^{1/4}.$$ For each $i$, let $T_i \colon \R \to \R$ be the affine (order-preserving) map that sends $I_i$ to $[-1,1]$. A direct calculation shows that the pushforward measure $(r_0/r)^\delta (T_i)_* \mu$ is $\delta$-regular at scales $[\alpha r_0/r, r_0/r]$, and in particular at scales $[r_0,1]$.
Applying Proposition \ref{slight-gain-1} to this measure and undoing the rescaling, we see after a routine calculation that $$ \Energy(\mu|_{I_i}, r) \lesssim \eps (r/r_0)^{4\delta} r_0^\delta \lesssim C^4 \eps r_0^\delta \mu(2I_i-I_i)^4.$$ Putting all this together, we see that $$ \Energy(\mu,r) \lesssim C^4 \eps r_0^\delta \sum_{I_1,I_2,I_3,I_4: (I_1-I_2-I_3+I_4) \cap [-4r,4r] \neq \emptyset} \prod_{i=1}^4 \mu(2I_i-I_i).$$ By the Fubini--Tonelli theorem, we can write the right-hand side as $$ C^4 \eps r_0^\delta \int_{\R^4} \sum_{I_1,I_2,I_3,I_4: (I_1-I_2-I_3+I_4) \cap [-4r,4r] \neq \emptyset} \prod_{i=1}^4 1_{2I_i-I_i}(x_i)\ d\mu(x_1) \dots d\mu(x_4).$$ The integrand can be computed to equal $O(1)$, and vanishes unless $$ |x_1 - x_2 - x_3 + x_4| \leq 100 r/r_0$$ (say). Thus we conclude that $$ \Energy(\mu,r) \lesssim C^4 \eps r_0^\delta \Energy(\mu,100r/r_0)$$ and the claim now follows by using Lemma \ref{agn} to remove the factor of $100$. \end{proof} If we now set $\eps$ to be a small multiple of $1/C^4$, we can ensure that $$ \Energy(\mu,r) \leq \frac{1}{e} r_0^\delta \Energy(\mu,r/r_0)$$ and hence upon induction (starting with the base case $r_0 \leq r \leq 1$) one has $$ \Energy(\mu,r) \lesssim (r/r_0)^{\delta + \frac{1}{\log(1/r_0)}} \Energy(\mu,1)$$ for all $\alpha \leq r \leq 1$. Substituting the specific value \eqref{r0-def} corresponding to the indicated choice of $\eps$, we obtain the claim. \section{Gowers norms and approximate groups}\label{add-comb} In the previous section we connected the additive energy $\Energy(\mu,r)$ to the Gowers uniformity norms used in additive combinatorics. We now develop the theory of these norms (and the related notion of an \emph{approximate group}) in more detail, as we shall rely heavily on these tools for the higher-dimensional argument.
As we will be working in the continuous setting of Euclidean spaces $\R^d$ rather than in the discrete settings that are more traditional in additive combinatorics, we shall phrase these concepts in the general setting of locally compact abelian groups. In this section we will lay out the basic theory of these concepts; most of the material is standard, except for a key ``scale splitting'' estimate which will underlie various manifestations of an ``induction on scales'' strategy; see Lemma \ref{split} and Lemma \ref{relate}(iii). \begin{definition}[LCA groups]\label{lca-group} An \emph{LCA group} is a locally compact abelian group $V = (V,+)$ equipped with a Haar measure $m_V$, and the Borel sigma algebra. For any $1 \leq p \leq \infty$, we define the usual Banach spaces $L^p(V)$ of $p^{\mathrm{th}}$ power integrable functions $f \colon V \to \C$, quotiented out by almost everywhere equivalence. We let $L^p(V)_+$ denote the subset of $L^p(V)$ consisting of functions that are non-negative (almost everywhere). For any positive measure subset $X$ of $V$, we define $L^p(X)$ and $L^p(X)_+$ similarly. \end{definition} In this paper we will mostly be concerned with the case when $V$ is a Euclidean space $\R^d$ equipped with Lebesgue measure, though we will occasionally also need to work with the lattice $\Z^d$ with counting measure, or hyperplanes $v^\perp \coloneqq \{ x \in \R^d: x \cdot v = 0 \}$ equipped with Lebesgue measure. \begin{definition}[Gowers uniformity norm]\cite[Definition 1.1]{eisner} Let $V$ be an LCA group. 
If $f_1,f_2,f_3,f_4 \in L^{4/3}(V)$, we define the Gowers inner product $$ \langle f_1,f_2,f_3,f_4 \rangle_{U^2(V)} \coloneqq \int_V \int_V \int_V f_1(x) \overline{f_2}(x+h) \overline{f_3}(x+k) f_4(x+h+k)\ dm_V(x) dm_V(h) dm_V(k)$$ (these integrals can be shown to be absolutely convergent by the H\"older and Young inequalities, or by interpolation) and for $f \in L^{4/3}(V)$ we define the uniformity norm $\|f\|_{U^2(V)}$ by the formula $$ \|f\|_{U^2(V)} \coloneqq \langle f,f,f,f \rangle_{U^2(V)}^{1/4}.$$ If $\mu$ is an absolutely continuous Radon measure on $V$ whose Radon-Nikodym derivative $\frac{d\mu}{dm_V}$ is finite, we define (by abuse of notation) $$ \| \mu \|_{U^2(V)} \coloneqq \left\| \frac{d\mu}{dm_V} \right\|_{U^2(V)}.$$ Finally, if $X$ is a positive measure subset of $V$ and $f \in L^{4/3}(X)$, we define $$ \|f \|_{U^2(X)} \coloneqq \|f 1_X \|_{U^2(V)}.$$ \end{definition} Although we define the Gowers norms here for $f \in L^{4/3}(V)$, for our applications it would suffice to restrict attention to the non-negative functions $f \in L^{4/3}(V)_+$.
As noted in \cite{eisner}, the norms $\|\cdot\|_{U^2(V)}$ are indeed norms; in fact one has an explicit representation \begin{equation}\label{u24-v} \|f\|_{U^2(V)} = \|f*f\|_{L^2(V)}^{1/2} = \| \hat f \|_{L^4(\hat V)} \end{equation} in terms of the $L^4$ norm of the Fourier transform $\hat f \colon \hat V \to \C$ on the Pontryagin dual $\hat V$ of $V$ (equipped with the dual Haar measure $m_{\hat V}$), where we use the usual convolution operation $$ f*g(x) \coloneqq \int_V f(x-y) g(y)\ dm_V(y).$$ Similarly one has $$ \langle f_1,f_2,f_3,f_4 \rangle_{U^2(V)} = \int_{\hat V} \hat f_1(\xi) \overline{\hat f_2(\xi)} \overline{\hat f_3(\xi)} \hat f_4(\xi)\ dm_{\hat V}(\xi).$$ From Young's inequality or the Hausdorff-Young inequality one then has the bound \begin{equation}\label{young} \|f\|_{U^2(V)} \leq \|f\|_{L^{4/3}(V)} \end{equation} so in particular by H\"older's inequality \begin{equation}\label{triv-u2} \|f\|_{U^2(V)} \leq \|f\|_{L^1(V)}^{3/4} \|f\|_{L^\infty(V)}^{1/4} \end{equation} (compare with \eqref{triv-bound}). From another application of H\"older (or Cauchy-Schwarz) we conclude the \emph{Gowers--Cauchy--Schwarz inequality} \begin{equation}\label{gcz} |\langle f_1,f_2,f_3,f_4\rangle_{U^2(V)}| \leq \|f_1\|_{U^2(V)} \|f_2\|_{U^2(V)} \|f_3\|_{U^2(V)} \|f_4\|_{U^2(V)}. \end{equation} For functions $f \in L^{4/3}(X)$ supported on a set $X$ of positive finite measure, we also have from Cauchy--Schwarz that $$ \|f\|_{U^2(X)} = \|f*f \|_{L^2(2X)}^{1/2} \geq m_V(2X)^{-1/4} \|f*f \|_{L^1(2X)}^{1/2}$$ and hence \begin{equation}\label{flower} \|f\|_{U^2(X)} \geq m_V(2X)^{-1/4} \| f \|_{L^1(X)}. \end{equation} In view of \eqref{young}, one can think of the $U^2(V)$ norm of a function $f$ as the $L^{4/3}(V)$ norm multiplied by a dimensionless (scale-invariant) quantity that informally measures the amount of ``additive structure'' present in $f$. The Gowers norm is clearly invariant under changes of variable by measure-preserving affine homomorphisms.
In particular it is translation-invariant, hence by Minkowski's inequality one has $$ \|f*g\|_{U^2(V)} \leq \|f\|_{U^2(V)} \|g\|_{L^1(V)}$$ for all $f \in L^{4/3}(V)$ and $g \in L^1(V)$. In a similar spirit, one has \begin{equation}\label{fmuv} \|f*\mu\|_{U^2(V)} \leq \|f\|_{U^2(V)} \end{equation} for any $f \in L^{4/3}(V)$ and any Radon probability measure $\mu$, where the convolution is now defined as $$ f*\mu(x) \coloneqq \int_V f(x-y)\ d\mu(y).$$ As mentioned previously, we will primarily be concerned with the Gowers norm on the non-negative cone $L^{4/3}(V)_+$ of $L^{4/3}(V)$. Clearly the Gowers norm is monotone on this cone in the sense that \begin{equation}\label{monotone} \|f\|_{U^2(V)} \leq \|g\|_{U^2(V)} \end{equation} whenever $f,g \in L^{4/3}(V)_+$ are such that $f \leq g$ pointwise almost everywhere. If one has two functions $f \in L^{4/3}(V), f' \in L^{4/3}(V')$ on two LCA groups $V,V'$, then we can define the tensor product $f \otimes f' \in L^{4/3}(V \times V')$ on the product LCA group $V \times V'$ (equipped with product Haar measure) by the formula $$ f \otimes f'(x,x') \coloneqq f(x) f'(x')$$ for $x \in V$, $x' \in V'$. From the Fubini--Tonelli theorem we have the identity \begin{equation}\label{fub-ton} \| f \otimes f'\|_{U^2(V \times V')} = \|f\|_{U^2(V)} \|f'\|_{U^2(V')}. \end{equation} For functions that are not of tensor product form, we have the following ``splitting inequality'' that serves as a partial substitute for \eqref{fub-ton}: \begin{lemma}[Splitting inequality]\label{split} Let $V,V'$ be LCA groups. If $f \in L^{4/3}(V \times V')$, then \begin{equation}\label{uv} \|f\|_{U^2(V \times V')} \leq \|f_{V} \|_{U^2(V')} \end{equation} where $f_{V} \in L^{4/3}(V')$ is the function $$ f_{V}(v') \coloneqq \| f(\cdot,v') \|_{U^2(V)}$$ where $f(\cdot,v')$ is the function $v \mapsto f(v,v')$ (this is well-defined in $L^{4/3}(V)$ for almost every $v' \in V'$). 
\end{lemma} The right-hand side of \eqref{uv} can be thought of as an iterated norm $\|f\|_{U^2(V'; U^2(V))}$, so \eqref{uv} can be written as $$ \|f\|_{U^2(V \times V')} \leq \|f \|_{U^2(V';U^2(V))}$$ which can be compared with the Fubini--Tonelli identity $$ \|f\|_{L^{4/3}(V \times V')} = \|f \|_{L^{4/3}(V';L^{4/3}(V))}.$$ Thus one can view Lemma \ref{split} as an analogue of the Fubini--Tonelli theorem for the Gowers uniformity norm $U^2$. \begin{proof} The fact that $f_{V} \in L^{4/3}(V')$ follows from \eqref{young} and the Fubini--Tonelli theorem. From another application of Fubini--Tonelli one has \begin{align*} \langle f,f,f,f \rangle_{U^2(V \times V')} &= \int_{V'} \int_{V'} \int_{V'} \langle f(\cdot,x'), f(\cdot,x'+h'), f(\cdot,x'+k'), f(\cdot,x'+h'+k') \rangle_{U^2(V)}\\ &\quad dm_{V'}(x') dm_{V'}(h') dm_{V'}(k') \end{align*} and hence by the Gowers--Cauchy--Schwarz inequality \eqref{gcz} $$ \langle f,f,f,f \rangle_{U^2(V \times V')} \leq \int_{V'} \int_{V'} \int_{V'} f_V(x') f_V(x'+h') f_V(x'+k') f_V(x'+h'+k') dm_{V'}(x') dm_{V'}(h') dm_{V'}(k')$$ or equivalently $$ \|f\|_{U^2(V \times V')}^4 \leq \|f_V \|_{U^2(V')}^4$$ and the claim follows. \end{proof} We will be interested in \emph{inverse theorems} for the trivial inequality \eqref{triv-u2}, that is to say descriptions of those $f$ for which this inequality is close to sharp. We now recall a key definition (see e.g., \cite[Definition 2.25]{tao-vu}): \begin{definition}[Approximate group] Let $V$ be an LCA group and $K \geq 1$. A subset $H$ of $V$ is said to be a \emph{$K$-approximate group} if $H$ contains the origin, is symmetric (thus $-H=H$), and if $H+H$ can be covered by at most $K$ translates of $H$. (In particular, this implies that $m_V(mH) \leq K^{m-1} m_V(H)$ for all natural numbers $m$.) If $H$ has positive finite measure, we define the uniform probability measure $\nu_H$ on $H$ by the formula $$ \nu_H(E) \coloneqq \frac{m_V(E \cap H)}{m_V(H)}$$ for all measurable $E \subset V$.
Similarly, if $H$ has finite cardinality, we define the uniform probability measure $\nu_H$ on $H$ by the formula $$ \nu_H(E) \coloneqq \frac{\#(E \cap H)}{\# H}$$ (note that these two definitions are compatible with each other when $V$ is discrete). \end{definition} \begin{examples} If $r>0$ and $d \geq 1$, then the ball $B(0,r) = B^d(0,r)$ is an $O(1)^d$-approximate group. If $v \in V$ and $M$ is a natural number, then the arithmetic progression $\{mv: m=-M,\dots,M\}$ is an $O(1)$-approximate group. If $H,H' \subset V$ are a $K$-approximate group and $K'$-approximate group respectively, then $H+H'$ is a $KK'$-approximate group. \end{examples} \begin{theorem}[Inverse theorem]\label{inverse} Let $V$ be an LCA group. Let $A,N > 0$ and $0 < \eps \leq 1/2$, and let $f \in L^{4/3}(X)_+$ for some compact subset $X$ of $V$ obeying the bounds \begin{align} \|f\|_{L^\infty(X)} &\leq A \label{f-infty}\\ \|f\|_{L^1(X)} &\leq AN \label{f-1}\\ \|f\|_{U^2(X)} &\geq \eps A N^{3/4}\label{f-u2} \end{align} (so in particular \eqref{triv-u2} is sharp up to a factor of $\eps$). \begin{itemize} \item[(i)] (Truncating the small values of $f$) One has $$ \|f 1_{f \geq \eps^4 A/16} \|_{U^2(X)} \geq \frac{\eps}{2} A N^{3/4}.$$ \item[(ii)] (Approximate symmetry along an approximate subgroup) There exists an $\eps^{-O(1)}$-approximate group $H$ in $10X-10X$ of measure $m_V(H) = \eps^{O(1)} N$ such that \begin{equation}\label{fnuh} \|f * \nu_H\|_{U^2(11X-10X)} = \eps^{O(1)} A N^{3/4}. \end{equation} \end{itemize} \end{theorem} Informally, the conclusion of Theorem \ref{inverse} (when compared against \eqref{f-u2} and \eqref{fmuv}) asserts that $f*\nu_H$ resembles $f$ in some weak statistical sense, so that $f$ is ``approximately symmetric along $H$'', again in a weak statistical sense.
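As a sanity check on the numerology of Theorem \ref{inverse} (this model case is not needed in the sequel), suppose that $V$ contains a compact open subgroup $H$ of positive finite measure, and take $X = H$ and $f = A 1_H$. Then $\|f\|_{L^\infty(X)} = A$ and $\|f\|_{L^1(X)} = A m_V(H)$, while the identity $\int_V 1_H(x) 1_H(x+h)\ dm_V(x) = m_V(H) 1_H(h)$ gives $\|f\|_{U^2(X)} = A m_V(H)^{3/4}$. Thus the hypotheses \eqref{f-infty}, \eqref{f-1}, \eqref{f-u2} hold with $N = m_V(H)$ and $\eps = 1$; and indeed $1_H * \nu_H = 1_H$, so that $f * \nu_H = f$ is exactly symmetric along the ($1$-approximate) group $H$, consistent with \eqref{fnuh}.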
\begin{proof} From \eqref{triv-u2}, \eqref{f-1} one has $$ \|f 1_{f < \eps^4 A/16} \|_{U^2(X)} \leq \|f\|_{L^1(X)}^{3/4} (\eps^4 A/16)^{1/4} \leq \frac{\eps}{2} A N^{3/4}$$ and the claim (i) then follows from \eqref{f-u2} and the triangle inequality for the $U^2$ norm. For (ii) our main tool will be the Balog--Szemer\'edi--Gowers theorem, to conclude that $f$ looks roughly like the function $A 1_{H+y}$ for some translate $H+y$ of an approximate group $H$, which we can then use to establish \eqref{fnuh}. Let $F \coloneqq \{ x \in X: f(x) \geq \eps^4 A/16\}$; then $F \subset X$ and we have the pointwise bound $$ f 1_{f \geq \eps^4 A/16} \leq A 1_F$$ and hence by \eqref{monotone} $$ \|1_F \|_{U^2(X)} \geq \frac{\eps}{2} N^{3/4}.$$ Also, from \eqref{f-1} and Markov's inequality one has $$ m_V(F) \leq \frac{16}{\eps^4} N.$$ From \eqref{triv-u2} one has $\|1_F \|_{U^2(X)} \leq m_V(F)^{3/4}$. Comparing these inequalities we conclude that $$ m_V(F) = \eps^{O(1)} N$$ and $$ \|1_F\|_{U^2(X)}^4 = \eps^{O(1)} N^3.$$ In the language of \cite[Definition 4.1]{tao-product} (adapted to the additive group $V$), the quantity $ \|1_F\|_{U^2(X)}^4$ is the additive energy $\Energy(F,F)$ of $F$. Applying the Balog--Szemer\'edi--Gowers theorem (in the form of \cite[Theorem 5.4]{tao-product}), we can then find an $\eps^{-O(1)}$-approximate group $H$ in $V$ of measure $$ m_V(H) = \eps^{O(1)} m_V(F) = \eps^{O(1)} N$$ such that \begin{equation}\label{fey2} m_V(F \cap (H+y)) = \eps^{O(1)} m_V(F) = \eps^{O(1)} N \end{equation} for some $y \in V$. An inspection of the proof of \cite[Theorem 5.4]{tao-product} also reveals that the approximate group $H$ is constructed to lie in $10F-10F$ (say), and thus also lies in $10X-10X$; in particular, $f * \nu_H$ is supported in $11X-10X$.
From \eqref{monotone}, \eqref{fey2} and the $\eps^{-O(1)}$-approximate group nature of $H$ we then have \begin{align*} \| f * \nu_H \|_{U^2(11X-10X)} &\geq \left\| \frac{\eps^4 A}{16} 1_{F \cap (H+y)} * \nu_H \right\|_{U^2(V)} \\ &= \eps^{O(1)} A \| 1_{F \cap (H+y)} * \nu_H * 1_{F \cap (H+y)} * \nu_H \|_{L^2(V)}^{1/2} \\ &= \eps^{O(1)} A \| 1_{F \cap (H+y)} * \nu_H * 1_{F \cap (H+y)} * \nu_H \|_{L^2(4H+2y)}^{1/2} \\ &\geq \eps^{O(1)} A m_V(4H+2y)^{-1/4} \| 1_{F \cap (H+y)} * \nu_H * 1_{F \cap (H+y)} * \nu_H \|_{L^1(4H+2y)}^{1/2} \\ &\gtrsim \eps^{O(1)} A m_V(H)^{3/4} \\ &= \eps^{O(1)} A N^{3/4} \end{align*} while from \eqref{fmuv}, \eqref{triv-u2} we have $$ \|f*\nu_H\|_{U^2(11X-10X)} \leq \|f\|_{U^2(V)} \leq \|f\|_{L^1(V)}^{3/4} \|f\|_{L^\infty(V)}^{1/4} \leq A N^{3/4}$$ and the claim (ii) follows. \end{proof} Approximate groups $H$ (approximately) contain long arithmetic progressions $\{ mv: m = -M,\dots,M\}$ for many choices of generator $v$. A precise formulation of this result is given by the following lemma (cf. \cite[Corollary 6.11]{dyatlov-zahl}): \begin{lemma}[Approximate groups contain arithmetic progressions] \label{gap} Let $V$ be an LCA group isomorphic to either a lattice $\Z^d$ or a Euclidean space $\R^d$. Let $H$ be a bounded open $K$-approximate group for some $K \geq 1$. Then for any natural number $M$, the set $$ S \coloneqq \{ v \in V: mv \in 8H \ \forall\ m = -M,\dots,M\}$$ has measure $$ m_V(S) \gtrsim \exp\left( - O\left( \log^{O(1)}(2K) \log(2M) \right) \right) m_V(H).$$ \end{lemma} One can also establish this result (with a worse dependence on $M$, and with $8H$ improved to $4H$) using the Sanders--Croot--Sisask lemma \cite{sanders}, \cite{croot-sisask}, \cite{sanders-br}. It is likely that this lemma applies to arbitrary LCA groups, and not just lattices and Euclidean spaces, but these are the only cases we will need here. \begin{proof} First suppose that $V$ is isomorphic to $\Z^d$.
Applying \cite[Theorem 1.1]{sanders-br}, we see that $4H$ contains a generalized arithmetic progression $P$ of dimension $O(\log^{O(1)}(2K))$ and cardinality $\gtrsim \exp( -O(\log^{O(1)}(2K)) ) \# H$. By reducing all the lengths of the progression by a factor of $M$, we see that the set $$ \{ v \in V: mv \in P \ \forall\ m = -M,\dots,M \}$$ has cardinality $$ \gtrsim O(M)^{-O(\log^{O(1)}(2K))} \# P \gtrsim \exp\left( - O\left( \log^{O(1)}(2K) \log(2M) \right) \right) \# H.$$ As this set is contained in $S$, we obtain the claim (with $8H$ replaced by $4H$). For the remaining case we may assume that $V = \R^d$ (equipped with Lebesgue measure). Then for $\eps>0$, $2H \cap \eps \Z^d$ is an $O(K^{O(1)})$-approximate group (this follows for instance from \cite[Exercise 2.4.7]{tao-vu}), and for $\eps$ small enough the cardinality of this set is $\sim \eps^{-d} K^{O(1)} m_V(H)$. Applying the preceding case, we see that $$ \#( S \cap \eps \Z^d ) \gtrsim \exp\left( - O\left( \log^{O(1)}(2K) \log(2M) \right) \right) \eps^{-d} K^{O(1)} m_V(H).$$ Multiplying by $\eps^d$ and sending $\eps \to 0$, we obtain the claim. \end{proof} Given a Radon measure $\mu$ on an LCA group $V$ and a measurable set $H \subset V$, we can define the energy $$ \Energy(\mu,H) \coloneqq \mu^4\left( \{ (x_1,x_2,x_3,x_4) \in V^4: x_1+x_2-x_3-x_4 \in H \}\right).$$ This generalizes the definition of energy in the introduction; indeed, when $V = \R^d$, we have $$ \Energy(\mu,r) = \Energy(\mu,B(0,r)).$$ It is also invariant under affine isomorphisms $T: V \to V'$, in the sense that \begin{equation}\label{affine} \Energy(T_* \mu, T H) = \Energy(\mu,H) \end{equation} for any Radon measure $\mu$ on $V$ and measurable $H \subset V$, where $T_* \mu$ is the pushforward of $\mu$ by $T$.
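For instance (a standard special case, recorded here only for orientation), if $V = \Z$ with counting measure and $\mu$ is counting measure on a finite set $A \subset \Z$, then $\Energy(\mu,\{0\})$ is the number of quadruples $(a_1,a_2,a_3,a_4) \in A^4$ with $a_1 + a_2 = a_3 + a_4$, i.e., the classical additive energy of $A$ from additive combinatorics.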
We now relate these energies to Gowers norms: \begin{lemma}\label{relate} Let $V$ be an LCA group, let $\mu$ be a Radon measure on $V$, let $f \in L^{4/3}(V)_+$, and let $H$ be a $K$-approximate group in $V$ of positive finite measure for some $K \geq 1$. \begin{itemize} \item[(i)] (Relation between energy and Gowers norm) One has $$ \Energy(\mu,4H) = K^{O(1)} m_V(H) \| \mu * \nu_H \|_{U^2(V)}^4$$ and for any integer $m \geq 4$, one has $$ \Energy(\mu,mH) = K^{O(m)} \Energy(\mu,4H).$$ In particular, for any $m \geq 1$ one has $$ \| \mu * \nu_{mH} \|_{U^2(V)} = K^{O(m)} \| \mu * \nu_{H} \|_{U^2(V)}.$$ (Compare with Lemma \ref{agn}.) \item[(ii)] (Shrinking the approximate symmetry group) If $m \geq 1$ is an integer and $P$ is a $K$-approximate group in $mH$ that is either finite or has positive finite measure, one has $$ \| f * \nu_H \|_{U^2(V)} \leq K^{O(m)} \| f * \nu_P \|_{U^2(V)}.$$ \item[(iii)] (Splitting at scale $H$) One has $$ \|f\|_{U^2(V)} \leq m_V(H)^{-3/4} \| f_{4H} \|_{U^2(V)}$$ where $f_{4H}$ is the local Gowers uniformity norm at scale $H$, defined by $$ f_{4H}(y) \coloneqq \| f \|_{U^2(4H+y)}.$$ \end{itemize} \end{lemma} Lemma \ref{relate}(iii) asserts, roughly speaking, that to control the ``global'' $U^2$ norm of a function $f$ on an LCA group $V$, it suffices to first control the ``local'' $U^2$ norm on the ``cosets'' $4H+y$ of the approximate group $H$, and then take a further $U^2$ norm over the translation parameter $y$ (and apply a suitable normalization). This fact can be used to give a slightly different proof of Proposition \ref{en-nearby} (setting $V=\R$ and $H=[-r,r]$), but we will not do so here. We will eventually apply Lemma \ref{relate}(iii) to a somewhat unusual approximate group $H$, namely the sum of a ball and an arithmetic progression.
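To preview the shape of such groups, note that in $V = \R^d$ the sumset $$ H \coloneqq B(0,r) + \{ mv: m = -M,\dots,M \}$$ of a ball and an arithmetic progression (for some $r>0$, $v \in \R^d$, and natural number $M$) is an $O(1)^d$-approximate group, being the sum of an $O(1)^d$-approximate group and an $O(1)$-approximate group by the examples earlier in this section.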
\begin{proof} From the Fubini--Tonelli theorem, we may expand $$ \| \mu * \nu_{H} \|_{U^2}^4 = m_V(H)^{-4} \int_{V^4} 1_H*1_H*1_H*1_H(x_1+x_2-x_3-x_4)\ d\mu(x_1) \dots d\mu(x_4).$$ From the pointwise upper bound $$ 1_H*1_H*1_H*1_H \leq m_V(H)^3 1_{4H}$$ we conclude that $$ \| \mu * \nu_{H} \|_{U^2}^4 \leq m_V(H)^{-1} \Energy(\mu, 4H).$$ Conversely, from the pointwise lower bound \begin{equation}\label{4h} 1_{4H} * 1_{4H} * 1_{4H} * 1_{4H} \geq m_V(H)^3 1_H \end{equation} we see that $$ \| \mu * \nu_{4H} \|_{U^2}^4 \geq m_V(H)^{-1} \Energy(\mu, H).$$ Replacing $H$ by $mH$ for any natural number $m$ we conclude that $$ \| \mu * \nu_{mH} \|_{U^2}^4 \leq m_V(H)^{-1} \Energy(\mu, 4mH)$$ and $$ \Energy(\mu, mH) \leq K^{O(m)} m_V(H) \| \mu * \nu_{4mH} \|_{U^2}^4.$$ Next, we can upper bound $\nu_{4mH}$ by the sum of at most $K^{O(m)}$ translates of $\nu_H$, thus by the triangle inequality $$ \| \mu * \nu_{4mH} \|_{U^2}^4 \leq K^{O(m)} \| \mu * \nu_{H} \|_{U^2}^4.$$ Combining these inequalities we obtain (i). For (ii), we observe the pointwise bound $$ \nu_H \leq \frac{m_V((m+1) H)}{m_V(H)} \nu_{(m+1)H} * \nu_P$$ and thus by \eqref{fmuv} $$ \| f * \nu_H \|_{U^2(V)} \leq \frac{m_V((m+1) H)}{m_V(H)} \| f * \nu_P \|_{U^2(V)}.$$ The claim (ii) now follows from the $K$-approximate group nature of $H$. Now we prove (iii). From \eqref{4h} we have $$ \| 1_{4H} \|_{U^2(V)} \geq m_V(H)^{3/4}$$ and hence by \eqref{fub-ton} we have $$ \| f \otimes 1_{4H} \|_{U^2(V \times V)} \geq m_V(H)^{3/4} \|f\|_{U^2(V)}.$$ Applying the measure-preserving transformation $(x,h) \mapsto (x+h,h)$ on $V \times V$ we conclude that $$ \| F \|_{U^2(V \times V)} \geq m_V(H)^{3/4} \|f\|_{U^2(V)}$$ where $F(x,h) \coloneqq f(x+h) 1_{4H}(h)$. Applying Lemma \ref{split}, we have $$\| F \|_{U^2(V \times V)} \leq \| f_{4H} \|_{U^2(V)},$$ and the claim (iii) follows. \end{proof} \section{The higher dimensional case}\label{higher-sec} We are now ready to establish Theorem \ref{main-second}.
By repeating the arguments in Section \ref{induct-sec} (which extend to higher dimensions without difficulty), it suffices to establish the following proposition. \begin{proposition}[Slight gain over the trivial bound]\label{slight-gain} Let $d \geq 1$ be an integer, let $0 < \delta < d$ be a non-integer, and let $C>1$ and $0 < \eps \leq 1/2$. Let $0 < r_0 < 1$ be sufficiently small depending on $d,\delta,C,\eps$. Let $X \subset \R^d$ be a $\delta$-regular set on scales $[r_0,1]$ with constant $C$, and let $\mu_X$ be an associated regular measure. Then we have $$ \Energy(\mu_X|_{B(0,1)}, r_0) \leq \eps r_0^\delta.$$ In fact one can take $r_0$ to be quasipolynomial in $C/\eps$, in the sense that \begin{equation}\label{r0-size} r_0 = \exp\left( - \exp\left( O_{\delta,d}\left( \log^{O_{\delta,d}}(C/\eps) \right) \right) \right). \end{equation} \end{proposition} To prove this proposition we will use induction on the ambient dimension $d$. To facilitate the induction, it is convenient to relax the hypotheses on $\mu$ somewhat. More precisely, our induction hypothesis will be as follows. \begin{proposition}[Induction hypothesis]\label{induct-hyp} Let $d \geq 1$ be an integer, let $0 < \delta < d$ be a non-integer, and let $C>1$ and $0 < \eps \leq 1/2$. Let $r_0$ be the quantity \eqref{r0-size}. Let $X \subset B(0,1)$ be a compact set with the following property: \begin{itemize} \item[(i)] (Upper $\delta$-regularity) For any $r_0 \leq r_2 \leq r_1 \leq 1$ and any $x \in \R^d$, the set $X \cap B(x,r_1)$ can be covered by at most $C (r_1/r_2)^\delta$ balls of radius $r_2$. \end{itemize} Let $\mu$ be a Radon measure supported on $X$ obeying the upper regularity bound \begin{equation}\label{crd} \mu(B(x,r_1)) \leq C r_1^\delta \end{equation} for all $r_0 \leq r_1 \leq 1$. 
Then $$ \Energy(\mu, r_0) \leq \eps r_0^\delta.$$ \end{proposition} The main advantage of working with Proposition \ref{induct-hyp} instead of with Proposition \ref{slight-gain} is that all the hypotheses on $\mu$ and $X$ are of ``upper bound'' type rather than ``lower bound'' type, so that it becomes easier to remove unwanted portions of $\mu$ or $X$ as needed. A simple covering argument shows that if $X$ is a $\delta$-regular set on scales $[r_0,1]$ then the property (i) of Proposition \ref{induct-hyp} is satisfied with $C$ replaced by $O_{\delta,d,C}(1)$, and the property \eqref{crd} is immediate from the definition of a $\delta$-regular measure. Hence the general dimension case of Proposition \ref{slight-gain} follows from Proposition \ref{induct-hyp}. To prove Proposition \ref{induct-hyp}, we induct on $d$, assuming that the claim has already been proven for dimension $d-1$ (this induction hypothesis is vacuous when $d=1$). Let $0 < \delta < d$ be non-integer, let $C>1$ and $\eps>0$, and let $r_0>0$ be chosen later (eventually it will be of the form \eqref{r0-size}). Let $X, \mu$ obey the hypotheses of Proposition \ref{induct-hyp}. We assume for sake of contradiction that \begin{equation}\label{start} \Energy(\mu, r_0) > \eps r_0^\delta. \end{equation} To abbreviate notation we now allow all implied constants in asymptotic notation to depend on $\delta,d$. From Lemma \ref{relate}(i) we conclude that $$ \| \mu * \nu_{B(0,r_0)} \|_{U^2(B(0,2))} \gtrsim \eps^{O(1)} r_0^{(\delta-d)/4}.$$ Informally, this estimate asserts that $X + B(0,r_0)$ has high additive energy. We now use the inverse theory to obtain some regularity along a non-trivial arithmetic progression $P$. 
From \eqref{crd} we also have the uniform bound $$\| \mu * \nu_{B(0,r_0)} \|_{L^\infty(B(0,2))} \lesssim C r_0^{\delta-d}$$ and from \eqref{crd} (with $r_1=1$) and Young's inequality one has $$\| \mu * \nu_{B(0,r_0)} \|_{L^1(B(0,2))} \lesssim C.$$ We can now apply Theorem \ref{inverse}(ii) to conclude that there exists a $(C/\eps)^{O(1)}$-approximate group $H$ in $B(0,20)$ of measure $|H| = (C/\eps)^{O(1)} r_0^{d-\delta}$ such that \begin{equation}\label{miro} \| \mu * \nu_{B(0,r_0)} * \nu_H \|_{U^2(B(0,20))} = (C/\eps)^{O(1)} r_0^{(\delta-d)/4}. \end{equation} Informally, this estimate asserts that $X + B(0,r_0)$ not only has high additive energy, but also behaves like a union of translates of $B(0,r_0) + H$. Let $M$ be a large integer (depending on $C,d,\delta,\eps$) to be chosen later. By Lemma \ref{gap}, the set $$ S \coloneqq \{ v \in \R^d: mv \in 8H \hbox{ for all } m = -10^3 M,\dots,10^3 M\}$$ has measure $$ |S| \gtrsim M^{-O(\log^{O(1)}(C/\eps))} |H| \sim M^{-O(\log^{O(1)}(C/\eps))} r_0^{d-\delta}.$$ In particular, since $\delta>0$, we have $|S| > |B(0,r_0)|$ if $r_0$ is sufficiently small, and more specifically we can take $r_0$ of the form \begin{equation}\label{r0-M} r_0 = M^{-O(\log^{O(1)}(C/\eps))}. \end{equation} We conclude that there is a vector $v \in \R^d$ with $|v| > r_0$ such that the progression $$ P \coloneqq \{ mv: m = -M,\dots, M \} $$ is such that $10^3 P \subset 8H$. In particular we must have the scale relations $$ r_0 \leq |v| \leq M|v| \leq 1.$$ From \eqref{miro} and Lemma \ref{relate}(ii) we conclude that \begin{equation}\label{muppet} \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(0,20))} \gtrsim (C/\eps)^{O(1)} r_0^{(\delta-d)/4}. \end{equation} Informally, this estimate asserts that $X + B(0,r_0)$ not only has high additive energy, but also behaves like a union of translates of the set $B(0,r_0) + P$, which is an arithmetic progression of small balls. 
Now that we have obtained some regularity along a progression $P$, the next step is to localize at the scale $M|v|$ of the diameter of $P$. For any $y \in \R^d$, the quantity \begin{equation}\label{quant} \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(y,M|v|))} \end{equation} vanishes unless $y$ lies within $O(M|v|)$ of $X$, which constrains $y$ to a set of measure $O( (M|v|)^{d-\delta})$ thanks to the hypothesis (i). From \eqref{triv-u2} we conclude that the $U^2$ norm of the quantity \eqref{quant} (viewed as a function of $y$) is $$ \lesssim (M|v|)^{3(d-\delta)/4} \sup_{y \in \R^d} \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(y,M|v|))}$$ and hence by Lemma \ref{relate}(iii) we have $$ \|\mu * \nu_{B(0,r_0)} * \nu_P\|_{U^2(B(0,20))} \lesssim (C/\eps)^{O(1)} (M|v|)^{-3\delta/4} \sup_{y \in \R^d} \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(y,M|v|))}.$$ Comparing this with \eqref{muppet}, we conclude that there exists $y_0 \in \R^d$ such that $$ \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(y_0,M|v|))} \gtrsim (C/\eps)^{O(1)} (M|v|)^{3\delta/4} r_0^{(\delta-d)/4}.$$ Informally, this estimate asserts that $(X + B(0,r_0)) \cap B(y_0, M|v|)$ not only has high additive energy, but also behaves like a union of translates of the set $B(0,r_0) + P$, which has diameter comparable to that of the ball $B(y_0, M|v|)$. Fix this $y_0$. 
We now apply Lemma \ref{relate}(iii) using the $O(1)$-approximate group $$ H \coloneqq B(0,|v|) + P$$ (which geometrically is approximately a cylinder of dimensions $|v| \times M|v|$, oriented in the direction of $v$) to conclude that $$ \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(y_0,M|v|))} \lesssim (C/\eps)^{O(1)} (M |v|^d)^{-3/4} \|f\|_{U^2(\R^d)}$$ where $f(y)$ is the local Gowers norm $$ f(y) \coloneqq \| \mu * \nu_{B(0,r_0)} * \nu_P \|_{U^2(B(y_0,M|v|) \cap (B(y,4|v|)+4P))}.$$ The function $f$ vanishes unless $y \in B(y_0,10M|v|)$, thus \begin{equation}\label{fu2} \|f\|_{U^2(B(y_0,10M|v|))} \gtrsim (C/\eps)^{O(1)} M^{3(\delta+1)/4} |v|^{3(\delta+d)/4} r_0^{(\delta-d)/4}. \end{equation} Informally, this estimate asserts that the collection of cosets $H+y$ of the ``cylinder'' $H$ in which $(X + B(0,r_0)) \cap B(y_0, M|v|)$ has high additive energy, itself has high additive energy. We now study the quantity $f(y)$ for some $y \in B(y_0,10M|v|)$. We can use monotonicity and the triangle inequality to bound \begin{align*} f(y) &\leq \| \mu|_{B(y,5|v|)+5P} * \nu_{B(0,r_0)} * \nu_P \|_{U^2(\R^d)} \\ &\leq \sum_{z \in y+5P} \| \mu|_{B(z,5|v|)} * \nu_{B(0,r_0)} * \nu_P \|_{U^2(\R^d)}. \end{align*} The summand vanishes unless $z$ lies in $X + B(0,5|v|)$. 
We also have from \eqref{crd} and Young's inequality that $$ \| \mu|_{B(z,5|v|)} * \nu_{B(0,r_0)} * \nu_P \|_{L^1(\R^d)} \leq \mu( B(z,5|v|) ) \lesssim (C/\eps)^{O(1)} |v|^\delta$$ and $$ \| \mu|_{B(z,5|v|)} * \nu_{B(0,r_0)} * \nu_P \|_{L^\infty(\R^d)} \lesssim (C/\eps)^{O(1)} M^{-1} r_0^{\delta-d}$$ and hence by \eqref{triv-u2} we have $$ \| \mu|_{B(z,5|v|)} * \nu_{B(0,r_0)} * \nu_P \|_{U^2(\R^d)} \lesssim (C/\eps)^{O(1)} M^{-1/4} r_0^{(\delta-d)/4} |v|^{3\delta/4}.$$ We thus have $$ f(y) \lesssim (C/\eps)^{O(1)} M^{-1/4} r_0^{(\delta-d)/4} |v|^{3\delta/4} \# ( (y+5P) \cap (X + B(0,5|v|) ) ).$$ If we introduce the function $$ F(y) \coloneqq 1_{B(y_0,10M|v|) \cap (X + B(0, 10|v|))}$$ then a simple volume-packing argument shows that $$ \# ( (y+5P) \cap (X + B(0,5|v|) ) ) \lesssim (C/\eps)^{O(1)} M F * \nu_{B(0,10|v|)+10P}(y) $$ and thus we have the pointwise bound $$ f \lesssim (C/\eps)^{O(1)} M^{3/4} r_0^{(\delta-d)/4} |v|^{3\delta/4} F * \nu_{B(0,10|v|)+10P}.$$ Inserting this into \eqref{fu2}, we conclude that $$ \|F * \nu_{B(0,10|v|)+10P} \|_{U^2(\R^d)} \gtrsim (C/\eps)^{O(1)} M^{3\delta/4} |v|^{3d/4}.$$ Clearly we have $$\|F * \nu_{B(0,10|v|)+10P} \|_{L^\infty(\R^d)} \leq \|F\|_{L^\infty(\R^d)} \leq 1$$ while from the hypothesis (i) the support of $F$ is covered by $O(M^\delta)$ balls of radius $|v|$, and hence \begin{equation}\label{fb} \|F * \nu_{B(0,10|v|)+10P} \|_{L^1(\R^d)} \leq \|F\|_{L^1(\R^d)} \lesssim M^\delta |v|^d. \end{equation} Applying Theorem \ref{inverse}(i), we conclude that \begin{equation}\label{fog} \| (F * \nu_{B(0,10|v|)+10P}) 1_G \|_{U^2(\R^d)} \gtrsim (C/\eps)^{O(1)} M^{3\delta/4} |v|^{3d/4} \end{equation} where $G$ is a set of the form $$ G \coloneqq \{ y: F * \nu_{B(0,10|v|)+10P}(y) \geq (C/\eps)^{-C_0}\}$$ for some sufficiently large $C_0$ depending only on $d,\delta$. Note that $G$ is a compact subset of $B(y_0,100M|v|)$. 
From \eqref{fb}, \eqref{fog} we conclude that \begin{equation}\label{1g} \|1_G \|_{U^2(B(y_0,100M|v|))} \gtrsim (C/\eps)^{O(1)} M^{3\delta/4} |v|^{3d/4}. \end{equation} Informally, this estimate asserts that the collection of cosets $H+y$ of the ``cylinder'' $H$ in which $(X + B(0,r_0)) \cap B(y_0, M|v|)$ has large density, itself has high additive energy. We now split into two cases: the low-dimensional case $\delta < 1$ and the high-dimensional case $\delta > 1$ (recall that $\delta$ is assumed to be non-integer). In the low-dimensional case we observe that for any $y \in \R^d$, the only portion of $X$ that contributes to $F * \nu_{B(0,10|v|)+10P}(y)$ lies in $B(y, 100 M |v|)$, and is thus covered by $O(M^\delta)$ balls of radius $|v|$ thanks to the hypothesis (i). This leads to the pointwise estimate $$ F * \nu_{B(0,10|v|)+10P}(y) \lesssim (C/\eps)^{O(1)} M^{-1} M^\delta;$$ as we are in the low-dimensional case $\delta<1$, taking $M = (C/\eps)^{C_1}$ for a sufficiently large $C_1$ (depending on $d,\delta$) will then imply that the set $G$ is empty, which contradicts \eqref{1g}, with $r_0$ of the required size thanks to \eqref{r0-M}. (Informally, the point is that the set $X$ is too low dimensional to adequately fill out a coset $H+y$.) Now suppose we are in the high-dimensional case $\delta>1$, which of course forces $d \geq 2$. Here we shall ``quotient out'' by $P$ and use the induction hypothesis. Let $v^\perp \coloneqq \{ x \in \R^d: x \cdot v = 0\}$ be the hyperplane in $\R^d$ orthogonal to $v$, and let $\pi \colon \R^d \to v^\perp$ be the orthogonal projection. As $G$ is a compact subset of $B(y_0,100M|v|)$, $\pi(G)$ is a compact subset of $B_{v^\perp}(\pi(y_0), 100M|v|)$, where we use $B_{v^\perp}$ to denote the balls in the hyperplane $v^\perp$. 
After applying a rigid motion to identify $v^\perp$ with $\R^{d-1}$, one can view $G$ as a subset of $\pi(G) \times [-100M|v|, 100M|v|]$, hence by \eqref{monotone}, \eqref{fub-ton}, \eqref{triv-u2} we have $$ \|1_G \|_{U^2(B(y_0,100M|v|))} \lesssim (C/\eps)^{O(1)} (M|v|)^{3/4} \|1_{\pi(G)} \|_{U^2(B_{v^\perp}(\pi(y_0), 100M|v|))}$$ and thus by \eqref{1g} \begin{equation}\label{1-pig} \|1_{\pi(G)} \|_{U^2(B_{v^\perp}(\pi(y_0), 100M|v|))} \gtrsim (C/\eps)^{O(1)} M^{3(\delta-1)/4} |v|^{3(d-1)/4}. \end{equation} Informally, this estimate asserts that the set $G$ (which roughly speaking tracked the translates of $H+y$ in which $(X + B(0,r_0)) \cap B(y_0, M|v|)$ had large density) continues to have large additive energy after applying the orthogonal projection $\pi$. The set $\pi(G)$ also has ``dimension $\delta-1$'' in the scale range $[|v|, M|v|]$ in the following sense: \begin{lemma}[$\pi(G)$ is $(\delta-1)$-dimensional]\label{delta} For any $|v| \leq r_2 \leq r_1 \leq M|v|$ and any $x \in v^\perp$, the set $\pi(G) \cap B_{v^\perp}(x,r_1)$ can be covered by at most $ (C/\eps)^{O(1)} (r_1/r_2)^{\delta-1}$ balls of radius $r_2$ in $v^\perp$. \end{lemma} \begin{proof} Let $\Sigma$ be any $100 r_2$-separated subset of $\pi(G) \cap B_{v^\perp}(x,r_1)$. It will suffice to show that $\# \Sigma \lesssim (C/\eps)^{O(1)} (r_1/r_2)^{\delta-1}$. Each $\sigma \in \Sigma$ can be written as $\pi(y_\sigma)$ for some $y_\sigma \in G$. By construction of $G$ and $F$, we have $$ |(B(y_\sigma,10|v|) + 10P) \cap (X + B(0, 20|v|))| \gtrsim (C/\eps)^{O(1)} M |v|^d$$ for all $\sigma \in \Sigma$. In particular, it requires $\gtrsim (C/\eps)^{O(1)} M|v| / r_2$ balls of radius $r_2$ to cover this set. 
As these sets are at least $10r_2$-separated (say) from each other as $\sigma$ varies, we conclude that the set $$ \bigcup_{\sigma \in \Sigma} (B(y_\sigma,10|v|) + 10P) \cap (X + B(0, 20|v|))$$ requires $\gtrsim (C/\eps)^{O(1)} \frac{M|v|}{r_2} \# \Sigma$ balls of radius $r_2$ to cover. On the other hand, this set is contained in \begin{equation}\label{xbb} (X + B(0,20|v|)) \cap B( y_0, 200M|v|) \cap \pi^{-1}( B_{v^\perp}(x,20r_1)). \end{equation} The set $B( y_0, 200M|v|) \cap \pi^{-1}( B_{v^\perp}(x,20r_1))$ (which is shaped roughly like a $r_1 \times M|v|$ cylinder) can be covered by $O( M|v| / r_1 )$ balls of radius $r_1$. Applying hypothesis (i), we conclude that the set \eqref{xbb} can be covered by $O( (C/\eps)^{O(1)} (M|v|/r_1) (r_1/r_2)^\delta )$ balls of radius $r_2$. Comparing these bounds, we obtain the claim. \end{proof} Let $\mu'$ denote the measure $$ \mu' \coloneqq |v|^{1-d} M^{1-\delta} m_{v^\perp}|_{\pi(G)+ B_{v^\perp}(0,|v|)}$$ where $m_{v^\perp}$ is Lebesgue measure on $v^\perp$. From Lemma \ref{delta} (taking $r_2 = |v|$) we see that $$ \mu'( B_{v^\perp}(y, r) ) \lesssim (C/\eps)^{O(1)} (r / M|v|)^{\delta-1}$$ for all $r \in [|v|, M|v|]$. 
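Covering bounds of the type appearing in Lemma \ref{delta} and in hypothesis (i) are easy to experiment with numerically. As a purely illustrative sketch (the set and scales below are our own choices, not quantities from the argument), the following Python code verifies the covering-number growth $N(3^{-j}) = 2^j = (3^j)^{\delta}$ with $\delta = \log 2/\log 3$ for the level-$k$ points of the middle-thirds Cantor set, using exact integer arithmetic to avoid rounding at grid boundaries.

```python
from itertools import product
from math import log

def cantor_numerators(k):
    """Level-k Cantor points x = n / 3^k, encoded by the integer numerator n."""
    pts = []
    for digits in product((0, 2), repeat=k):
        n = 0
        for d in digits:
            n = 3 * n + d           # base-3 expansion with digits in {0, 2}
        pts.append(n)
    return pts

def covering_number(nums, k, j):
    """Number of grid intervals of length 3^(-j) meeting the point set."""
    return len({n // 3 ** (k - j) for n in nums})

k = 8
pts = cantor_numerators(k)
delta = log(2) / log(3)             # the dimension of the middle-thirds Cantor set
for j in range(1, k + 1):
    # exactly 2^j = (3^j)^delta intervals of length 3^(-j) are occupied
    assert covering_number(pts, k, j) == 2 ** j
```

For an upper $\delta$-regular set as in hypothesis (i), one would instead only check the one-sided bound $N(r_2) \leq C (r_1/r_2)^{\delta} N(r_1)$ across pairs of scales.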
Applying the induction hypothesis Proposition \ref{induct-hyp} (with $d$ replaced by $d-1$, $\delta$ replaced by $\delta-1$, and $r_0$ replaced by $1/M$) and an affine change of variable to rescale $v^\perp$ to $\R^{d-1}$ and $B_{v^\perp}(y_0,100M|v|)$ to $B^{d-1}(0, 1)$, we conclude from \eqref{affine} that $$ \Energy(\mu', |v|) \leq \eps' M^{-(\delta-1)}$$ for any $\eps'>0$, if $M$ is sufficiently large depending on $C,\delta,d,\eps, \eps'$; indeed we can take $M$ to be of the form $$ M = \exp\left( \exp( O( \log^{O(1)}(C/\eps\eps') ) ) \right).$$ Applying Lemma \ref{relate}(i), we conclude that $$ \| \mu' * \nu_{B_{v^\perp}(0, |v|)} \|_{U^2(v^\perp)} \lesssim (C/\eps)^{O(1)} (\eps')^{1/4} |v|^{-(d-1)/4} M^{-(\delta-1)/4}.$$ Using the pointwise bound $$ \mu' * \nu_{B_{v^\perp}(0, |v|)} \gtrsim (C/\eps)^{O(1)} |v|^{1-d} M^{1-\delta} 1_{\pi(G)}$$ we conclude that $$ \| 1_{\pi(G)} \|_{U^2(v^\perp)} \lesssim (C/\eps)^{O(1)} (\eps')^{1/4} |v|^{3(d-1)/4} M^{3(\delta-1)/4}.$$ By taking $\eps'$ to equal $(\eps/C)^{C_2}$ for a sufficiently large $C_2$ depending only on $d,\delta$, we contradict \eqref{1-pig}, giving the claim, with the right size \eqref{r0-size} for $r_0$ thanks to \eqref{r0-M} and the choice of parameters $\eps',M$. \section{Nonlinear expansion}\label{nonlinear-sec} In this section we establish Theorem \ref{nonlinear}. We establish the higher dimensional case $d>1$ here; the one-dimensional case $d=1$ is proven similarly. The strategy is to localize to a small enough scale that the nonlinear function $F$ can be well approximated by a linear one, at which point one can apply the Cauchy--Schwarz inequality to control the relevant expressions by additive energies, so that Theorems \ref{main}, \ref{main-second} may be applied. We allow all constants to depend on $\delta,d,F$. Let $\eps>0$ be a small quantity depending on $\delta,d,F$ to be chosen later; by shrinking $r$ as necessary we may assume that $r$ is sufficiently small depending on $\eps,\delta,d,F$. 
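The Cauchy--Schwarz step mentioned here, which lower bounds an image set by an additive energy, can already be seen for finite sets of integers. The following Python sketch (with arbitrarily chosen sets, for illustration only) checks the bound $|A+B| \geq \|1_A * 1_B\|_{L^1}^2 / \|1_A * 1_B\|_{L^2}^2$ together with the Gowers--Cauchy--Schwarz inequality $\|1_A*1_B\|_{L^2}^4 \leq \|1_A\|_{U^2}^4 \|1_B\|_{U^2}^4$.

```python
from collections import Counter

def energy2(A, B):
    """||1_A * 1_B||_2^2 = #{(a, b, a', b') in A x B x A x B : a + b = a' + b'}."""
    counts = Counter(a + b for a in A for b in B)
    return sum(c * c for c in counts.values())

A = set(range(8))                  # an arithmetic progression: large additive energy
B = {n * n for n in range(8)}      # squares: small additive energy

sumset = {a + b for a in A for b in B}
# Cauchy-Schwarz: |A+B| >= ||1_A*1_B||_1^2 / ||1_A*1_B||_2^2, and ||1_A*1_B||_1 = |A||B|
assert len(sumset) >= (len(A) * len(B)) ** 2 / energy2(A, B)
# Gowers-Cauchy-Schwarz: E(A,B)^2 <= E(A,A) E(B,B)
assert energy2(A, B) ** 2 <= energy2(A, A) * energy2(B, B)
```

Thus a small additive energy (as provided by Theorems \ref{main}, \ref{main-second}) forces a large sumset, which is the mechanism exploited in this section after linearizing $F$.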
Let $y_0$ be a point in $Y$, thus $y_0 \in B(0,2)$. We will only work in the $\eps r^{1/2}$-neighborhood of $y_0$ in $Y_r$. Indeed, it will suffice to establish the bound $$ |F( X_r, Y_r \cap B(y_0, \eps r^{1/2}) )| \gtrsim_{C,\eps} r^{d-\delta-\beta}.$$ By regularity, we can cover $X_r$ by $O_\eps(r^{-\delta/2})$ balls $B(x_i, \eps r^{1/2})$ of radius $\eps r^{1/2}$ with $x_i \in X \cap B(0,2)$. Using Taylor expansion, the $C^2$ nature of $F$ and the non-vanishing of the differential map $D_x F(x,y)$, we see that any two sets $$ F( B(x_i,\eps r^{1/2}), B(y_0, \eps r^{1/2})), F( B(x_j,\eps r^{1/2}), B(y_0, \eps r^{1/2}))$$ with $|x_i-x_j| \leq \eps$ will be disjoint unless $|x_i-x_j| \lesssim \eps r^{1/2}$. From this we conclude that the sets $F( B(x_i,\eps r^{1/2}), B(y_0, \eps r^{1/2}))$ have overlap $O_\eps(1)$, so it will suffice to establish the bound $$ |F( X_r \cap B(x_0, \eps r^{1/2}), Y_r \cap B(y_0, \eps r^{1/2}) )| \gtrsim_{C,\eps} r^{d-\delta/2-\beta}$$ for each $x_0 \in X \cap B(0,2)$. 
By Taylor expansion and the $C^2$ nature of $F$, for $x \in B(x_0,\eps r^{1/2})$ and $y \in B(y_0,\eps r^{1/2})$ one has $$ F(x,y) = F(x_0,y_0) + D_x F(x_0,y_0) (x-x_0) + D_y F(x_0,y_0) (y-y_0) + O(\eps r).$$ From this and the invertibility of $D_x F(x_0,y_0), D_y F(x_0,y_0)$ we see that the set $$ F( X_r \cap B(x_0, \eps r^{1/2}), Y_r \cap B(y_0, \eps r^{1/2}) )$$ contains the set $$ F(x_0,y_0) + D_x F(x_0,y_0) ((X_{r/2} \cap B(x_0, \eps r^{1/2})) - x_0) + D_y F(x_0,y_0) ((Y_{r/2} \cap B(y_0, \eps r^{1/2})) -y_0)$$ so after subtracting a constant and inverting the linear transformation $D_x F(x_0,y_0)$ it will suffice to establish the bound \begin{equation}\label{ab} |A + B| \gtrsim_{C,\eps} r^{d-\delta/2 - \beta} \end{equation} where $$ A \coloneqq X_{r/2} \cap B(x_0, \eps r^{1/2})$$ and $$ B \coloneqq D_x F(x_0,y_0)^{-1} D_y F(x_0,y_0) (Y_{r/2} \cap B(y_0, \eps r^{1/2})).$$ From the regular nature of $X$ one sees from Definition \ref{reg-set} and standard volume-packing calculations that $$ |A| \sim_{C,\eps} r^{d-\delta/2}$$ and a similar argument (using also the invertibility of $D_y F$ and the $C^2$ nature of $F$) gives \begin{equation}\label{b-card} |B| \sim_{C,\eps} r^{d-\delta/2}. 
\end{equation} In particular $$ \| 1_A * 1_B \|_{L^1(\R^d)} \sim_{C,\eps} r^{2(d-\delta/2)}.$$ Since $1_A*1_B$ is supported on $A+B$, to prove \eqref{ab} it thus suffices by Cauchy-Schwarz to show that $$ \| 1_A * 1_B \|_{L^2(\R^d)}^2 \lesssim_{C,\eps} r^{3(d-\delta/2)+\beta}.$$ By the Gowers-Cauchy-Schwarz inequality we have $$\| 1_A * 1_B \|_{L^2(\R^d)}^2 \leq \|1_A \|_{U^2(\R^d)}^2 \|1_B \|_{U^2(\R^d)}^2.$$ From \eqref{young}, \eqref{b-card} we have $$ \|1_B \|_{U^2(\R^d)}^4 \lesssim_{C,\eps} r^{3(d-\delta/2)}$$ so it will suffice to establish the bound $$ \|1_A \|_{U^2(\R^d)}^4 \lesssim_{C,\eps} r^{3(d-\delta/2) + \beta}.$$ From the pointwise estimate $$ 1_A \lesssim 1_{X_r \cap B(x_0,r^{1/2})} * \nu_{B(0,r)}$$ and Lemma \ref{relate}(i) (or the higher-dimensional version of Lemma \ref{agn}) it suffices to show that $$ \Energy(1_{X_r \cap B(x_0,r^{1/2})} dm,r) \lesssim_{C,\eps} r^{4d-3\delta/2 + \beta}$$ (where $dm$ is Lebesgue measure). We rescale this as $$ \Energy(r^{(\delta-d)/2} 1_{\tilde X_{r^{1/2}} \cap B(0,1)} dm,r) \lesssim_{C,\eps} r^{\delta/2 + \beta}$$ where $\tilde X$ is a rescaled version of $X$: $$ \tilde X \coloneqq \left\{ \frac{x-x_0}{r^{1/2}}: x \in X \right\}.$$ From the regularity of $X$ and a routine change of variables we check that $\tilde X$ is $\delta$-regular at scales $[r^{1/2},1]$ (with constant $C^{O(1)}$), and that $r^{(\delta-d)/2} 1_{\tilde X_{r^{1/2}} \cap B(0,1)} dm$ is an associated regular measure (again with constant $C^{O(1)}$). The claim now follows from Theorem \ref{main-second} (adjusting the constants in the definition of $\beta$ appropriately). \begin{remark} The regularity hypotheses on $Y$ can be relaxed substantially; in fact with a little more effort one could replace $Y_r$ here by any subset of $B(0,1)$ of measure $\gtrsim_{C,\delta} r^{d-\delta}$. We leave the details to the interested reader. 
\end{remark} \section{From additive energy to the fractal uncertainty principle}\label{fract-sec} We now prove Theorem \ref{add-eng}. Let the notation and hypotheses be as in that theorem. Clearly we have $$ \| {\mathcal F}_h 1_{Y_h} \|_{L^1(\R^d) \to L^\infty(\R^d)} \lesssim_d h^{-d/2}$$ so by the Riesz--Thorin theorem it suffices to show that $$ \| {\mathcal F}_h 1_{Y_h} \|_{L^\infty(\R^d) \to L^4(\R^d)} \lesssim_d h^{d/2-3\delta/4+2\beta}.$$ Let $f \in L^\infty(\R^d)$ be of norm one. From \eqref{u24-v} and a rescaling we have $$ \| {\mathcal F}_h (f 1_{Y_h}) \|_{L^4(\R^d)} \sim_d h^{d/4} \| f 1_{Y_h} \|_{U^2(\R^d)}.$$ As $Y$ is $\delta$-regular, we have the pointwise bound $$ f 1_{Y_h} \lesssim_d C h^{d-\delta} \mu_Y * \nu_{B(0,2h)}$$ and hence $$\| {\mathcal F}_h (f 1_{Y_h}) \|_{L^4(\R^d)} \lesssim_d C h^{3d/4-\delta} \| \mu_Y * \nu_{B(0,2h)} \|_{U^2(\R^d)}.$$ Applying Lemma \ref{relate}(i) we conclude that $$\| {\mathcal F}_h (f 1_{Y_h}) \|_{L^4(\R^d)} \lesssim_d C h^{d/2-\delta} \Energy( \mu_Y, h )^{1/4}$$ and the claim \eqref{fey} now follows from Theorems \ref{main}, \ref{main-second}, after adjusting $\beta$ as necessary. The claim \eqref{fey-2} then follows from H\"older's inequality after observing from the $\delta'$-regularity of $X$ that $$ |X_h| \lesssim_{C',\delta',d} h^{d-\delta'}.$$
\section{Introduction: Adaptive Behaviour Modeling for Game Theory} Over the last five decades, game theory has become a major aspect of economic sciences modelling and of a great number of domains where strategical aspects have to be involved. Game theory is usually defined as a mathematical tool allowing one to analyse strategical interactions between individuals. \\ Initially founded by mathematical researchers, J. von Neumann, E. Borel or E. Zermelo in the 1920s, game theory increased in importance in the 1940s with a major work by J. von Neumann and O. Morgenstern and then with the works of John Nash in the 1950s \cite{Eb}. John Nash proposed an original equilibrium ruled by an adaptive criterion. In game theory, the Nash equilibrium is a kind of optimal strategy for games involving two or more players, whereby the players reach an outcome to mutual advantage. If there is a set of strategies for a game with the property that no player can benefit by changing his strategy while the other players keep their strategies unchanged, then this set of strategies and the corresponding payoffs constitute a Nash equilibrium. \\ One easily understands that the modeling of a player behaviour needs some adaptive properties. The computable models corresponding to genetic automata are in this respect a good tool for modeling such adaptive strategies.\\ The plan of this paper is the following. In the next section, we present some efficient algebraic structures, the automata with multiplicities, which allow one to implement powerful operators. We present in Section 3 some topological considerations about the definition of distances between automata, which induce a theorem of convergence on the automata behaviours. Genetic operators are proposed for these automata in Section 4. For that purpose, we show that the relevant ``calculus'' is done by matrix representations, unravelling the powerful capabilities of such algebraic structures. 
In Section 5, we focus our attention on the ``iterated prisoner's dilemma'' and we build an original evolutive probabilistic automaton for strategy modeling, showing that genetic automata are well adapted to model adaptive strategies. Section 6 shows how we can use the genetic automata developed previously to represent agents evolving in complex systems description. An agent behaviour semi-distance is then defined and allows us to propose an automatic computation of emergent systems as a kind of self-organization detection. \section{Automata from boolean to multiplicities theory (Automata with scalars)} Automata were initially considered as theoretical tools. They were created in the 1950s following the works of A. Turing, who previously dealt with the definition of an abstract ``machine''. The aim of the Turing machines is to define the boundaries of what a computing machine can do and what it cannot do.\\ The first class of automata, called finite state automata, corresponds to simple kinds of machines \cite{Sc}. They were studied by a great number of researchers as abstract concepts for computation. In this respect, we can recall the works of some linguists, for example N. Chomsky, who defined the study of formal grammars.\\ In many works, finite automata are associated to a recognizing operator which allows one to describe a language \cite{BR,Ei}. In such works, the condition of a transition is simply a symbol taken from an alphabet. From a specific state $S$, the reading of a symbol $a$ allows one to make the transitions which are labeled by $a$ and come from $S$ (in the case of a deterministic automaton -- a DFA -- there is only one such transition; see below). A whole automaton is, in this way, associated to a language, the recognized language, which is a set of words. 
These recognized words are composed of the sequences of letters of the alphabet which allow one to go from a specific state, called the initial state, to another specific state, called the final state.\\ A first classification is based on the geometric aspect: DFA (Deterministic Finite Automata) and NFA (Nondeterministic Finite Automata). \begin{itemize} \item In Deterministic Finite Automata, for each state there is at most one transition for each possible input and only one initial state. \item In Nondeterministic Finite Automata, there can be none, one or more than one transition from a given state for a given possible input. \end{itemize} Besides the classical aspect of automata as machines allowing one to recognize languages, another approach consists in associating to the automata a functional goal. In addition to the accepted letter from an alphabet as the condition of a transition, we add to each transition a piece of information which can be considered as the output data of the transition; the read letter is now called the input data. We define in such a way an {\it automaton with outputs} or {\it weighted automaton}.\\ Such automata with outputs give a new classification of machines. {\it Transducers} are such a kind of machines; they generate outputs based on a given input and/or a state using actions. They are currently used for control applications. {\it Moore machines} are also such machines, where the output depends only on the state, i.e. the automaton uses only entry actions. The advantage of the Moore model is a simplification of the behaviour.\\ Finally, we focus our attention on a special kind of automata with outputs which are efficient in an operational way. These automata with outputs are called {\it automata with multiplicities}. An automaton with multiplicities is based on the fact that the output data of the automaton belong to a specific algebraic structure, a semiring \cite{Go,St}. 
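As a small concrete illustration of the deterministic case (a toy example of our own, not one from the literature cited above), a DFA can be encoded directly by its transition function; a word is recognized when the run started at the initial state ends in a final state.

```python
def make_dfa(transitions, initial, finals):
    """A DFA given by a dict (state, letter) -> state, an initial state, final states."""
    def accepts(word):
        state = initial
        for letter in word:
            state = transitions[(state, letter)]  # exactly one transition per pair
        return state in finals
    return accepts

# Toy language: words over {a, b} containing an even number of a's.
even_as = make_dfa(
    {("even", "a"): "odd", ("even", "b"): "even",
     ("odd", "a"): "even", ("odd", "b"): "odd"},
    initial="even",
    finals={"even"},
)
assert even_as("abba") and even_as("") and not even_as("ab")
```

An NFA would instead track a *set* of current states, since several (or no) transitions may match a given letter.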
In that way, we will be able to build effective operations on such automata, using the power of the algebraic structures of the output data, and we are also able to describe these automata by means of a matrix representation with all the power of the new (i.e. with semirings) linear algebra.\\ \begin{definition} {\bf (Automaton with multiplicities)}\\ An automaton with multiplicities over an alphabet $A$ and a semiring $K$ is the 5-uple $(A,Q,I,T,F)$ where \begin{itemize} \item $Q=\{S_1,S_2\cdots S_n\}$ is the finite set of states; \item $I: Q\mapsto K$ is a function over the set of states, which associates to each initial state a value of $K$, called the entry cost, and to each non-initial state the zero value; \item $F: Q\mapsto K$ is a function over the set of states, which associates to each final state a value of $K$, called the final cost, and to each non-final state the zero value; \item $T$ is the transition function, that is $T: Q\times A\times Q\mapsto K$, which to a state $S_i$, a letter $a$ and a state $S_j$ associates a value $z$ of $K$ (the cost of the transition) if there exists a transition labelled with $a$ from the state $S_i$ to the state $S_j$, and zero otherwise.\\ \end{itemize} \end{definition} \begin{remark} Automata with multiplicities are a generalisation of finite automata. In fact, finite automata can be considered as automata with multiplicities in the semiring $K$, the boolean set $B=\{0,1\}$ (endowed with the logical ``or/and''). To each transition we affect 1 if it exists and 0 otherwise.\\ \end{remark} \begin{remark} We have not yet, on purpose, defined what a semiring is. Roughly, it is the least structure which allows the matrix ``calculus'' with unit (one can think of a ring without the ``minus'' operation). 
The previous automata with multiplicities can be, equivalently, expressed by a matrix representation which is a triplet \begin{itemize} \item $\lambda\in K^{1\times Q}$, a row vector whose coefficients are $\lambda_i=I(S_i)$, \item $\gamma\in K^{Q\times 1}$, a column vector whose coefficients are $\gamma_i=F(S_i)$, \item $\mu: A^*\mapsto K^{Q\times Q}$, a morphism of monoids (indeed $K^{Q\times Q}$ is endowed with the product of matrices) such that the coefficient on the $i$th row and $j$th column of $\mu(a)$ is $T(S_i,a,S_j)$. \end{itemize} \end{remark} \section{Topological considerations} If $K$ is a field, one sees that the space ${\mathcal A}_{(n)}$ of automata of dimension $n$ (with multiplicities in $K$) is a $K$-vector space of dimension $k.n^2+2n$ ($k$ is here the number of letters). So, in case the ground field is the field of real or complex numbers \cite{Bo1}, one can take any vector norm (usually one takes one of the H\"older norms $||(x_i)_{i\in I}||_\alpha := \big(\sum_{i\in I} | x_i |^\alpha\big)^{\frac{1}{\alpha}}$ for $\alpha\geq 1$, but any norm will do) and the distance is derived, in the classical way, by \begin{equation} d({\mathcal A}_1,{\mathcal A}_2)=norm(V({\mathcal A}_1)- V({\mathcal A}_2)) \end{equation} where $V({\mathcal A})$ stands for the vector of all coefficients of ${\mathcal A}=(\lambda,\mu,\gamma)$ arranged in some order; one then has the result of Theorem \ref{th1}. Assuming that $K$ is the field of real or complex numbers, we endow the space of series/behaviours with the topology of pointwise convergence (topology of F. Treves \cite{Tr}). \begin{theorem}\label{th1} Let $({\mathcal A}_n)$ be a sequence of automata with limit ${\mathcal L}$ (${\mathcal L}$ is an automaton), then one has \begin{equation} Behaviour({\mathcal L})=\lim_{n\rightarrow \infty} Behaviour({\mathcal A}_n) \end{equation} where the limit is computed in the topology of Treves. 
\end{theorem} \section{Genetic automata as efficient operators} We define the chromosome of each automaton with multiplicities as the sequence of all the matrices associated to the letters of the (linearly ordered) alphabet. The chromosomes are composed of alleles which are here the lines of the matrices \cite{BFJOP2}.\\ In the following, genetic algorithms are going to generate new automata containing possibly new transitions from the ones included in the initial automata.\\ The genetic algorithm over the population of automata with multiplicities follows a reproduction iteration broken up in three steps \cite{Gol,Mi,Ko}: \begin{itemize} \item {\it Duplication}: each automaton generates a clone of itself; \item {\it Crossing-over}: concerns a couple of automata. Over this couple, we consider a sequence of lines of each matrix, the same for all letters. For each of these matrices, a permutation on the lines of the chosen sequence is made between the analogous matrices of this couple of automata; \item {\it Mutation}: a line of each matrix is randomly chosen and a sequence of new values is given for this line. \end{itemize} Finally, the whole genetic algorithm scheduling for a full process of reproduction over all the population of automata is the evolutionary algorithm: \begin{enumerate} \item For each couple of automata, two children are created by the duplication, crossover and mutation mechanisms; \item The fitness of each automaton is computed; \item In each 4-uple composed of the parents and their children, the less fit automata, in terms of the fitness computed in the previous step, are suppressed. The two surviving automata result from the evolution of the two initial parents. \end{enumerate} \begin{remark} The fitness is not defined at this level of abstract formulation, but it is defined according to the context for which the automaton is a model, as we will do in the next section. 
\end{remark} \section{Applications to competition-cooperation modeling using the prisoner's dilemma} We develop in this section how we can model competition-cooperation processes within the same automata-based representation. The genetic computation allows an automatic transition from competition to cooperation, or from cooperation to competition. The basic problem used for this purpose is the well-known prisoner's dilemma \cite{Ax}. \subsection{From adaptive strategies to probabilistic automata} The prisoner's dilemma is a two-player game where each player has two possible actions: cooperate ($C$) with his adversary or betray him ($\overline{C}$). So, four outputs are possible for the global actions of the two players. A payoff is defined for each of these possible outputs, as described in the following table, where the rows correspond to one player's behaviour and the columns to the other player's.\\ \begin{table}[htp] \begin{center} \begin{tabular}{|l|c|c|} \hline & $C$ & $\overline{C}$ \\ \hline $C$ & (3,3) & (0,5) \\ \hline $\overline{C}$ & (5,0) & (1,1) \\ \hline \end{tabular} \caption{Prisoner's dilemma payoff} \label{prisonerDilemmaPayoff} \end{center} \end{table} In the iterated version of the prisoner's dilemma, successive steps can be defined. Each player does not know the action of his adversary during the current step, but he knows it for the preceding step. So, different strategies can be defined for a player's behaviour, the goal of each being to obtain the maximal payoff for himself.\\ In Figures \ref{titfortat} and \ref{vindictive}, we describe two strategies with transducers. Each transition is labeled by an input, corresponding to the player's perception, which is the previous adversary action, and by an output, corresponding to the present player action. The only initial state is the state 1, recognizable by the incoming arrow labeled only by the output.
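As a minimal illustration (hypothetical code, not from the paper), the payoffs of Table \ref{prisonerDilemmaPayoff} for an iterated game can be computed as follows, writing \texttt{D} for the betrayal action $\overline{C}$:

```python
# Payoff table of the prisoner's dilemma (row player, column player):
# C = cooperate, D = betray.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_round(action1, action2):
    """Return the (player 1, player 2) payoffs for one round."""
    return PAYOFF[(action1, action2)]

def iterated_payoff(history1, history2):
    """Total payoffs over an iterated game, given both action sequences."""
    totals = [0, 0]
    for a1, a2 in zip(history1, history2):
        p1, p2 = play_round(a1, a2)
        totals[0] += p1
        totals[1] += p2
    return tuple(totals)
```

Over the sequences \texttt{"CCD"} against \texttt{"CDC"}, for instance, both players accumulate a total payoff of $3+0+5=8$.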
The final states are the states 1 and 2, recognizable by the double circles.\\ In the strategy of Figure \ref{titfortat}, the player systematically adopts the same behaviour as its adversary at the previous step. In the strategy of Figure \ref{vindictive}, the player chooses definitively to betray as soon as his adversary does so. The previous automata represent static strategies, and so they are not well adapted to the modeling of evolutive strategies. For this purpose, we propose a model based on the probabilistic automaton described by Figure \ref{probaDilemma} \cite{BFJOP1}.\\ \begin{figure} [htp] \begin{center} \includegraphics[scale=0.7]{titfortat.eps} \caption{Tit-for-tat strategy automaton} \label{titfortat} \end{center} \end{figure} \begin{figure} [htp] \begin{center} \includegraphics[scale=0.7]{rancunier.eps} \caption{Vindictive strategy automaton} \label{vindictive} \end{center} \end{figure} \begin{figure} [htp] \begin{center} \includegraphics[scale=0.7]{proba.eps} \caption{Probabilistic multi-strategies two-state automaton} \label{probaDilemma} \end{center} \end{figure} This automaton represents all the two-state strategies for cooperative and competitive behaviour of one agent against another in the prisoner's dilemma.\\ The transitions are labeled in output by the probabilities $p_i$ of their realization. The first state is the state reached after a cooperation action and the second state is reached after a betrayal. \\ For this automaton, the associated matrix representation, as described previously, is: \begin{eqnarray} I &=& \pmatrix{p_1 & 1-p_1}; \\ F &=& \pmatrix{p_6\cr 1-p_6};\\ T(C) &=& \pmatrix{p_2 & 1-p_2\cr p_3 & 1- p_3};\\ T(\overline{C}) &=& \pmatrix{p_4 & 1-p_4\cr p_5 & 1- p_5} \end{eqnarray} \subsection{From probabilistic automata to genetic automata} With the matrix representation of the automata, we can compute genetic automata as described in the previous sections.
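For concreteness, the weight that the matrix representation above assigns to a perception sequence $a_1\cdots a_k$ is $I\,T(a_1)\cdots T(a_k)\,F$. A minimal sketch, using illustrative values of the $p_i$ that are not taken from the paper, and writing \texttt{N} for the betrayal letter $\overline{C}$:

```python
def mat_mul(a, b):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Illustrative probabilities p1..p6 (not taken from the paper).
p1, p2, p3, p4, p5, p6 = 0.5, 0.8, 0.3, 0.4, 0.1, 0.9

I = [[p1, 1 - p1]]                       # initial (row) vector
F = [[p6], [1 - p6]]                     # final (column) vector
T = {                                    # one matrix per perception letter
    "C": [[p2, 1 - p2], [p3, 1 - p3]],
    "N": [[p4, 1 - p4], [p5, 1 - p5]],   # "N" stands for betrayal
}

def weight(word):
    """Weight I * T(a1) * ... * T(ak) * F of a perception sequence."""
    m = I
    for letter in word:
        m = mat_mul(m, T[letter])
    return mat_mul(m, F)[0][0]
```

Since each row of the $T$ matrices sums to $1$, the weight of a sequence is the probability of reaching a final state from an initial one under that sequence of perceptions.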
Here the chromosomes are the sequences of all the matrices associated with the letters. We have to define the fitness in the context of the use of these automata. The fitness here is the value of the payoff. \subsection{General Genetic Algorithm Process for Genetic Automata} A population of automata is initially generated. These automata play against a predefined strategy, named $S_0$.\\ Each automaton makes a set of plays. At each play, we run the probabilistic automaton, which gives one of the two outputs: ($C$) or ($\overline{C}$). With this output and $S_0$'s output, we compute the payoff of the automaton, according to the payoff table.\\ At the end of the set of plays, the automaton's payoff is the sum of the payoffs of all plays. This sum is the fitness of the automaton. At this point, each automaton has its own fitness and so the selection process can select the best automata. At the end of this selection process, we obtain a new generation of automata.\\ This new generation of automata is the basis of a new computation of the three genetic operators.\\ This process makes the player's behaviour, which is modeled by the probabilistic multi-strategies two-state automaton, evolve from cooperation to competition or from competition to cooperation. The evolution of the strategy is the expression of an adaptive computation. This leads us to use this formalism to implement some self-organisation processes which occur in complex systems. \section{Extension to Emergent Systems Modeling} In this section, we study how evolutive automata-based modeling can be used to compute automatic emergent systems. The emergent systems have to be understood in the meaning of the complex system paradigm that we recall in the next section. We have previously defined a way to compute the distance between automata, and we use these principles to define a distance between agent behaviours that are modeled with automata.
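To make the genetic operators concrete, here is a minimal sketch of crossing-over and mutation acting on a chromosome, i.e. on the list of letter matrices, with matrices represented as lists of rows (hypothetical code, not from the paper):

```python
import random

def crossover(chrom_a, chrom_b, rows):
    """Exchange the chosen rows between the analogue matrices of two
    chromosomes (one matrix per letter), as in the crossing-over step."""
    child_a = [[row[:] for row in m] for m in chrom_a]
    child_b = [[row[:] for row in m] for m in chrom_b]
    for ma, mb in zip(child_a, child_b):
        for r in rows:
            ma[r], mb[r] = mb[r], ma[r]
    return child_a, child_b

def mutate(chrom, rng=random):
    """Replace one randomly chosen row of each matrix by fresh random
    values, renormalized so the row still sums to 1 (probabilistic case)."""
    child = []
    for m in chrom:
        m = [row[:] for row in m]
        r = rng.randrange(len(m))
        row = [rng.random() for _ in m[r]]
        total = sum(row)
        m[r] = [x / total for x in row]
        child.append(m)
    return child
```

Duplication is simply the deep copy performed at the start of both operators; the parents are left untouched, and the children are then ranked by fitness as in the selection step.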
Finally, we define a specific fitness that allows us to use genetic algorithms as a kind of reinforcement method which leads to emergent system computation \cite{Ho}. \subsection{Complex System Description Using Automata-Based Agent Model} \begin{figure*} [ht] \begin{center} \includegraphics[scale=1.0]{sys2beh.eps} \caption{Multi-scale complex system description: from global to individual models} \label{sys2beh} \end{center} \end{figure*} According to General System Theory \cite{gst, Mo}, a complex system is composed of entities in mutual interaction, interacting with the outside environment. A system has some characteristic properties which confer its structural aspects, as schematically described in part (a) of Figure \ref{sys2beh}: \begin{itemize} \item The set of elements or entities are in interactive dependence. The alteration of a single entity or interaction reverberates on the whole system. \item A global organization emerges from the interacting constitutive elements. This organization can be identified and carries its own autonomous behavior, while being in relation with and dependent on its environment. The emergent organization possesses new properties that its own constitutive entities do not have. ``The whole is more than the sum of its parts''. \item The global organization retro-acts on its constitutive components. ``The whole is less than the sum of its parts'', after E. Morin.\\ \end{itemize} The network of interacting entities, as described in part (b) of Figure \ref{sys2beh}, leads each entity to perceive information or actions from other entities or from the whole system, and to act itself.\\ A well-adapted modeling consists of using an agent-based representation, where each entity, called an agent, perceives and acts on an environment following an autonomous behaviour, as described in part (c) of Figure \ref{sys2beh}.\\ To compute a simulation composed of such entities, we need to describe the behaviour of each agent.
This behaviour can be schematically described using internal states and transition processes between these states, as described in part (d) of Figure \ref{sys2beh}.\\ There are several definitions of ``agents'' or ``intelligent agents'' according to the specificities of their behaviour~\cite{Fe, We}. Their autonomy means that the agents try to satisfy a goal and execute actions, optimizing a satisfaction function to reach it.\\ For agents with a high level of autonomy, specific actions are realized even when no perceptions are detected from the environment. To represent this deliberation process, different formalisms can be used, and a behaviour decomposed into internal states is an effective approach. Finally, when many agents operate, the social aspects must also be taken into account. These aspects are expressed as communications through the agent organisation, with message-passing processes. Sending a message is an agent action and receiving a message is an agent perception. The previous description, based on the couple perception/action, is well adapted to this. \subsection{Agent Behavior Semi-Distance} We describe in this section the bases of the genetic algorithm used on the probabilistic automata, allowing us to manage emergent self-organizations in the multi-agent simulation.\\ For each agent, we define an evaluation function $e$ of its own behaviour, returning the matrix $M$ of values such that $M_{i,j}$ is the output series over all possible successive perceptions when starting from the initial state $i$ and ending at the final state $j$, without cycle.
It will clearly be $0$ if either $i$ is not an initial state or $j$ is not a final one, and the matrix $M$ is indeed a matrix of evaluations \cite{BR} of subseries of \begin{equation} M^*:=\big(\sum_{a\in A} \mu(a)a\big)^* \end{equation} Notice that the coefficients of this matrix, as defined, are computed whatever the value of the perception in the alphabet $A$ on each transition of the successful path\footnote{A {\it successful path} is a path from an initial state to a final state.}. That means that the contribution of an agent's behaviour to the formation of a collective organization is only based, here, on the probabilities of reaching a final state from an initial one. This allows us to preserve individual characteristics in each agent's behaviour, even if the agent belongs to an organization.\\ Let $x$ and $y$ be two agents, and $e(x)$ and $e(y)$ their respective evaluations as described above. We define a semi-distance (or pseudometric, see \cite{Bo1} ch.~IX) between the two agents $x$ and $y$ as $d(x,y):=||e(x)-e(y)||$, a matrix norm of the difference of their evaluations. Let ${\cal{V}}_x$ be a neighbourhood of the agent $x$, relative to a specific criterion, for example a spatial distance or a linkage network. We define the fitness $f(x)$ of the agent $x$ as~: $$ f(x) = \left\lbrace \begin{array}{ll} \frac{ {\displaystyle card({\cal{V}}_x) } } { {\displaystyle \sum\limits_{y_i \in {\cal{V}}_{x}} d(x, y_i)^2} } \ \ \ \ &\mbox{if } \sum\limits_{y_i \in {\cal{V}}_{x}} d(x, y_i)^2 \neq 0 \\ \infty &\mbox{otherwise} \end{array} \right. $$ \subsection{Evolutive Automata for Automatic Emergence of Self-Organized Agent-Based Systems} In the previous computation, we defined a semi-distance between two agents. This semi-distance is computed using the matrix representation of the automaton with multiplicities associated to the agent behaviour. It is based on the computation of successful paths, which requires initial and final states to be defined on the behaviour automata.
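The semi-distance and the fitness above translate directly into code. A minimal sketch, where the evaluation $e(x)$ of an agent is abstracted as a matrix given as a list of rows, and the matrix norm is taken to be the Frobenius norm (all names hypothetical):

```python
import math

def semi_distance(e_x, e_y):
    """d(x, y) = ||e(x) - e(y)||, here the Frobenius norm of the
    difference of the two evaluation matrices."""
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(e_x, e_y)
                         for a, b in zip(ra, rb)))

def fitness(e_x, neighbour_evals):
    """f(x) = card(V_x) / sum_i d(x, y_i)^2, or infinity when the
    sum vanishes (the agent coincides with all its neighbours)."""
    total = sum(semi_distance(e_x, e_y) ** 2 for e_y in neighbour_evals)
    if total == 0:
        return float("inf")
    return len(neighbour_evals) / total
```

An agent whose evaluation is close to those of its neighbourhood thus gets a high fitness, which is exactly the reinforcement used by the selection process below.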
For specific purposes, we can choose to define the initial and final states in some specific way. This means that we try to compute some specific action sequences, characterized by going from some specific states (defined here as initial ones) to some specific states (defined here as final ones).\\ Based on this specific purpose, which leads to defining some initial and final states, we compute a behaviour semi-distance and then the fitness function defined previously. This fitness function is an indicator which returns a high value when the evaluated agent is near, in the sense of the behaviour semi-distance defined previously, to all the other agents belonging to a predefined neighbourhood.\\ The genetic algorithm makes an agent population evolve through a selective process. During the computation, the genetic algorithm makes the population evolve towards a newer one with agents more and more adapted to the fitness. The new population will contain agents with better fitness, so the agents of a population will become nearer to each other in order to improve their fitness. In that way, the genetic algorithm reinforces the creation of a system which aggregates agents with similar behaviours, in the specific sense of the definition of the initial and final states defined on the automata.\\ The genetic algorithm proposed here can be considered as a modeling of the feed-back of emergent systems, which leads to gathering agents of similar behaviour; but these formations are dynamical, and we cannot predict what the set of these aggregations will be, since it depends on the reactions of the agents during the simulation.
Moreover, the genetic process has the effect of generating a feed-back of the emergent systems on their own constitutive elements, in the sense that the fitness improvement leads to bringing closer the agents which are gathered inside the emergent aggregations.\\ For specific problem solving, we can consider that the previous fitness function can be composed with another specific one which is able to measure the capability of the agent to solve a problem. This composition of fitness functions leads to creating emergent systems only for those of interest, that is, these systems are able to develop only if the aggregated agents are able to satisfy some problem-solving evaluation. \section{Conclusion} The aim of this study was to develop a powerful algebraic structure to represent behaviours concerning cooperation-competition processes, on which we can add genetic operators. We have explained how we can use these structures for modeling the adaptive behaviours needed in game theory. Beyond this application, we have described how we can use such adaptive computations to automatically detect emergent systems inside interacting networks of entities represented by agents in a simulation.
\section{Introduction} ``Natural systems are undeniably subject to random fluctuations, arising from either environmental variability or thermal effects''\cite{sagu2007}. In the past several decades, how noise affects the collective behavior of self-organized systems, which are shaped by the interplay of deterministic laws and randomness, has attracted wide interest from various fields such as catalysis, cosmology, biology, reactive mixing, colloidal chemistry, geophysics, electronic engineering, statistical physics, economics and finance; see the review articles \cite{sagu2007,shin2001,Raser2005,Tsimring2014,Vicsek2012,Black1986}. The collective motion of groups of animals is a common and spectacular scene of nature. For example, schools of fish, flocks of birds or groups of ants sometimes move in a highly orderly fashion. To quantitatively describe such phenomena, a well-known self-propelled particle (SPP) model was proposed by Vicsek \emph{et al.}\cite{Vicsek1995}. This model contains only one basic rule: at each time step, each agent is driven by a constant absolute velocity, and its heading is updated by the average value of the headings of the agents in its neighborhood of radius $r$, with some random perturbation added. Using simulations, Vicsek \emph{et al.} mainly investigated the relation between the order and the noise and density, and conjectured that this model exhibits a second-order phase transition from disordered to ordered motion with respect to the noise and density under periodic boundary conditions \cite{Vicsek1995}. The Vicsek model has attracted much interest from various fields including biology, physics, control theory and mathematics, where a fundamental reason is that this model captures some common features of a number of actual systems.
For example, the phase transition of this model bears similarity to the phase transition of ferromagnetism \cite{Vicsek1995,Toner2005} and to the phenomenon of superconductivity \cite{Alicea2005,Dmitriev2012}; variations of this model can be used to study the collective motion of a wide range of biological systems such as colonies of cells, flocks of birds and swarms of locusts \cite{Buhl2006,Baskaran2009,Bialek2012,Belmonte2008}, and are also related to some engineering applications such as distributed computation and formation control of multi-agent systems \cite{Li2010,Olshevsky2011,sakvin}. Another important reason is that the Vicsek model has become an entry point for the theoretical research on complex systems, because it reveals that a simple local interaction rule can result in rich global behaviors. For the mathematical analysis of the Vicsek model, current research needs to change its basic rule. A well-known modification was made by Jadbabaie \emph{et al.}, who omitted the noise and linearized the heading updating equation \cite{Jad1}. Such a modification is followed by \cite{sakvin,Ren,tang2007,Chen2014}. In our previous work \cite{Chen2014} we introduced percolation theory to investigate such a system with large population, and quantitatively described its smallest possible interaction radius (or population density under a scaling) for consensus. This result can give an insight into the phase transition of the Vicsek model from disordered to ordered motion with respect to the population density in a short time, provided the noise is not large: when the population size is large, according to the law of large numbers, the short-time effects of the noise are small and can be omitted. However, when the time grows large, the cumulative effects of even small noise cannot be omitted and may change the system's global behavior. Until now, how noise affects the collective behavior of the Vicsek model over a long time has been unknown.
Another change is to assume that each agent can communicate with all the others at any time \cite{smale2007,Cucker2008,Carrillo2010,Shen2007,Park2010,Bernoff2013}. There also exist some related works where robust consensus is investigated by assuming the interaction between agents does not depend on the agents' states \cite{Wang2009,Shi2013,Tian2009,Munz2011,Khoo2009,CY2011}. The studies of the Vicsek model have been reviewed in \cite{Vicsek2012,Yates2010,Hu2013}. However, to the best of our knowledge, no mathematical analysis of Vicsek-type models keeps all three features of the original Vicsek model: self-driving, local interaction and randomness. Physicists mainly use hydrodynamics to investigate the Vicsek model. This method assumes the population size is infinite and approximates the Vicsek model by some partial differential equations or stochastic partial differential equations; see the review article \cite{Toner2005}. However, this approximation inevitably changes some of the nature of the system and can only capture part of the properties of the original system. As a recent paper puts it \cite{Solon2015}, though the Vicsek model has been studied for twenty years and a large body of literature already exists, we still lack a global understanding of such systems. This paper tries to give a global analysis of the original Vicsek model, and also of some inhomogeneous SPP systems. The main contributions of this paper can be listed as follows: Firstly, we propose a general method to decouple the self-organized systems formulated by deterministic laws and randomness, which widely exist in nature, engineering, society and economy \cite{sagu2007,shin2001,Raser2005,Tsimring2014,Vicsek2012,Black1986}. This method transfers the analysis of such systems to the design of control algorithms, even though these models do not contain any control input.
Using our method, we rigorously analyze the original Vicsek model for the first time, and make some extensions to inhomogeneous SPP systems, including leader-follower models. Also, we give some clear answers to the problems of robust consensus and connectivity, which are of interest in the field of multi-agent systems. Besides the analysis of the final states, our method can also be used to predict possible configurations during the evolution of complex systems. For example, we show that the SPP systems can spontaneously generate the phenomena of turn, vortex, bifurcation and merger of flocks. In particular, our method can help us to predict events which happen with small probability in a finite time and are hard to observe through simulations, which has possible applications in complex engineering systems, such as the analysis of collision possibility and the design of collision avoidance in formation control. Moreover, our results have significance in physics and biology. We show that the SPP systems will switch infinitely many times between ordered and disordered states for any noise intensity and population density, which indicates that small noise may break the order of the systems, and thus provides a mathematical proof of the viewpoint that randomness can result in nonequilibrium systems exhibiting anomalously large fluctuations \cite{Tsimring2014,Keizer1987}. Also, this result implies that the phase transition of the SPP systems should take a new form differing from the traditional senses \cite{Vicsek1995,Czirak1999}. Furthermore, to some degree our results can give an explanation to the switches of the group's moving direction and the large fluctuations of the order parameter in the locust experiments for low and middle densities \cite{Buhl2006}, and predict that these phenomena will still exist for high density when the time step grows large enough.
The rest of the paper is organized as follows: In Section \ref{problem}, we introduce our model and give some definitions. Section \ref{transfer} provides the key method for the analysis of our models. The main results under open and periodic boundary conditions are presented in Sections \ref{mrs_1} and \ref{mrs_2} respectively. In Section \ref{mrs_3} we give a theorem under an assumption. Section \ref{sim} provides some simulations, and Section \ref{conclude} concludes this paper with future works. \section{Models and Definitions}\label{problem} \subsection{The Original Vicsek Model} The original Vicsek model consists of $n$ autonomous agents moving in the plane with the same speed $v$ ($v>0$), where each agent $i$ has two state variables: $X_i(t)=(x_{i1}(t),x_{i2}(t))\in\mathbb{R}^2$ and $\theta_i(t)\in [-\pi,\pi)$, denoting its position and heading at time $t$ respectively. Then agent $i$'s velocity at time $t$ is $v(\cos\theta_i(t),\sin\theta_i(t))$. Each agent's heading is updated according to a local rule based on the average direction of its neighbors, where two agents are called neighbors if and only if the distance between them is not larger than a pre-defined radius $r$ ($r>0$). Let $$\mathcal{N}_i(t):=\left\{j:\|X_i(t)-X_j(t)\|_2\leq r \right\}$$ denote the neighbor set of agent $i$ at time $t$, where $\|\cdot\|_2$ is the Euclidean norm.
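The neighbor set $\mathcal{N}_i(t)$ translates directly into code; a minimal sketch (hypothetical, not from the paper):

```python
import math

def neighbours(positions, i, r):
    """Indices j with ||X_i - X_j||_2 <= r (agent i is its own neighbour)."""
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if math.hypot(xi - xj, yi - yj) <= r]
```

Note that each agent always belongs to its own neighbor set, since the distance to itself is $0$.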
Following \cite{Vicsek1995}, the dynamics of the original Vicsek model are formulated by \begin{eqnarray}\label{m1_00} \begin{aligned} &\theta_{i}(t+1)={\rm{atan2}} \left(\sum_{j\in \mathcal{N}_{i}(t)} \sin\theta_{j}(t),\sum_{j\in\mathcal{N}_{i}(t)}\cos \theta_{j}(t)\right)+\zeta_i(t), \end{aligned} \end{eqnarray} and \begin{eqnarray}\label{m1} \begin{aligned} X_i(t+1)&=X_i(t)+V_i(t+1)\\ &=X_i(t)+v(\cos\theta_i(t+1),\sin\theta_i(t+1)) \end{aligned} \end{eqnarray} for all $i\in[1,n]$ and $t\geq 0$, where ${\rm{atan2}}$ is the two-argument arctangent function\footnote{Literature \cite{Vicsek1995} uses the $\arctan$ function here, but this is not correct because the quadrant information is lost.}, and $\{\zeta_i(t)\}$ is a random noise sequence independently and uniformly distributed in a fixed interval whose midpoint is $0$. The system (\ref{m1_00})-(\ref{m1}) is called the \emph{original Vicsek model}. Let $X(t)=(X_1(t),X_2(t),\ldots,X_n(t))$ and $\theta(t)=(\theta_1(t),\theta_2(t),\ldots,\theta_n(t))$. The original Vicsek model is very hard to analyze mathematically. An important step forward in analyzing this model was made by Jadbabaie \emph{et al.} in \cite{Jad1}, who omitted the noise term and linearized the heading updating rule (\ref{m1_00}) as follows: \begin{eqnarray*}\label{Jad_mod} \theta_i(t+1)= \frac{1}{|\mathcal{N}_{i}(t)|}\sum_{j\in\mathcal{N}_{i}(t)} \theta_j(t). \end{eqnarray*} \subsection{Our Inhomogeneous SPP systems} To be more practical, this paper makes some extensions to the original Vicsek model.
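For concreteness, one step of the original dynamics (\ref{m1_00})-(\ref{m1}) can be sketched in code as follows (a hypothetical implementation; the list \texttt{noise} plays the role of $\zeta_i(t)$, and the wrapping of headings to $[-\pi,\pi)$ is omitted):

```python
import math

def vicsek_step(positions, headings, v, r, noise):
    """One update of the original Vicsek model: each heading becomes the
    atan2 of summed sines and cosines over the neighbourhood plus noise,
    and each position advances by v along the new heading."""
    n = len(positions)
    new_headings = []
    for i in range(n):
        xi, yi = positions[i]
        s = c = 0.0
        for j in range(n):
            xj, yj = positions[j]
            if math.hypot(xi - xj, yi - yj) <= r:
                s += math.sin(headings[j])
                c += math.cos(headings[j])
        new_headings.append(math.atan2(s, c) + noise[i])
    new_positions = [(x + v * math.cos(th), y + v * math.sin(th))
                     for (x, y), th in zip(positions, new_headings)]
    return new_positions, new_headings
```

With zero noise, two aligned neighbouring agents simply keep their common heading and translate by $v$ per step.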
Firstly, we assume each agent $i$ can have a different interaction radius $r_i>0$, and the interaction weight between two agents $i$ and $j$ is a non-negative function $f_{ij}(t)$ satisfying:\\ (i) $f_{ii}(t)> 0$ for all $i,t$, which means each agent has a certain inertia; \\ (ii) $f_{ij}(t)=0$ when $\|X_i(t)-X_j(t)\|_2>r_i$ for all $i,j,t$, which indicates that an agent cannot receive information directly from the ones outside its interaction radius.\\ Secondly, we consider more general noises. Let $\xi_i(t)$ denote the new noise. Let $\Omega^t=\Omega_n^t\subseteq \mathds{R}^{n\times(t+1)}$ be the sample space of $(\xi_i(t'))_{1\leq i \leq n, 0\leq t' \leq t}$, and $\mathcal{F}^t=\mathcal{F}_n^t$ be its Borel $\sigma$-algebra. Additionally, we define $\Omega^{-1}$ to be the empty set. Let $P=P_n$ be the probability measure on $\mathcal{F}^{\infty}$ for $(\xi_i(t'))_{1\leq i \leq n, t' \geq 0}$, so the probability space is written as $( \Omega^{\infty},\mathcal{F}^{\infty},P)$. Throughout this paper we assume there exists a constant $\eta=\eta_n>0$ such that for all initial positions $X(0)$, all headings $\theta(0)$ and all $t\geq 0$, the joint probability density of $(\xi_1(t),\ldots,\xi_n(t))$ in the region $[-\eta,\eta]^n$ has a uniform lower bound $\underline{\rho}=\underline{\rho}(\eta,n)>0$ under all previous samples, i.e., for any real numbers $a_i, b_i$ with $-\eta\leq a_i < b_i \leq \eta$, $1\leq i \leq n$, \begin{eqnarray}\label{noise_cond_2} \begin{aligned} &P\left(\bigcap_{i=1}^n \left\{ \xi_i(t)\in [a_i,b_i]\right\}| \forall w_{t-1} \in \Omega^{t-1} \right)\geq \underline{\rho} \prod_{i=1}^n (b_i-a_i),~~~~\forall t\geq 0. \end{aligned} \end{eqnarray} We note that besides the independent and uniform noise of the original Vicsek model, the new noise satisfying (\ref{noise_cond_2}) also includes any non-degenerate zero-mean Gaussian white noise, as well as some other bounded or unbounded noises.
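For instance, the independent noise of the original Vicsek model, uniformly distributed on an interval $[-\eta_0,\eta_0]$ with $\eta_0\geq\eta$, satisfies (\ref{noise_cond_2}) with $\underline{\rho}=(2\eta_0)^{-n}$: by independence of the $\xi_i(t)$ from each other and from the past,
\begin{eqnarray*}
P\Big(\bigcap_{i=1}^n \left\{ \xi_i(t)\in [a_i,b_i]\right\}\Big)
=\prod_{i=1}^n \frac{b_i-a_i}{2\eta_0}
=(2\eta_0)^{-n}\prod_{i=1}^n (b_i-a_i).
\end{eqnarray*}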
With these two extensions, the updating equation of the headings of the original Vicsek model is changed to \begin{eqnarray}\label{m1_new} &&\theta_{i}(t+1)={\rm{atan2}}\left(\sum_{j=1}^n f_{ij}(t) \sin\theta_{j}(t),\sum_{j=1}^n f_{ij}(t)\cos \theta_{j}(t)\right)+\xi_i(t). \end{eqnarray} To simplify the exposition, we call the system evolved by (\ref{m1}) and (\ref{m1_new}) \emph{system I}. For better analysis we also consider the system whose heading is updated by \begin{eqnarray}\label{model1} \theta_i(t+1)= \frac{1}{\sum_{j=1}^n f_{ij}(t)}\sum_{j=1}^n f_{ij}(t)\theta_j(t)+\xi_i(t). \end{eqnarray} For all $i\in[1,n]$ and $t\geq 0$, we restrict the value of the heading $\theta_i(t)$ to the interval $[-\pi,\pi)$ modulo $2\pi$ whenever it is outside this interval. Similarly, we call the system evolved by (\ref{m1}) and (\ref{model1}) \emph{system II}. Differing from the existing modifications made for mathematical analysis, these systems keep all the features of self-driving, local interaction and randomness of the original Vicsek model. Also, simulations show that these systems exhibit similar properties to the original Vicsek model; see Section \ref{sim}. It is worth noting that systems I and II can capture the case of leader-follower relationships within a flock. For example, if agent $i$ is a follower of agent $j$, we can set $f_{ij}(t)$ to a large value and $f_{ik}(t)=0$ for $k\neq i,j$. The leader-follower relationship has been observed in actual experiments \cite{Nagy2010}. \subsection{Order, Robust Consensus and Connectivity}\label{Subsection_def2} This paper first investigates how the noise affects the order. Following \cite{Vicsek1995}, we define the order parameter $$\varphi(t):=\frac{1}{n}\big\|\sum_{i=1}^n\left(\cos\theta_i(t),\sin\theta_i(t)\right)\big\|_2$$ for all $t\geq 0$.
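The order parameter just defined can be computed directly from the headings; a minimal sketch (hypothetical code, not from the paper):

```python
import math

def order_parameter(headings):
    """phi(t) = (1/n) * || sum_i (cos theta_i, sin theta_i) ||_2."""
    c = sum(math.cos(th) for th in headings)
    s = sum(math.sin(th) for th in headings)
    return math.hypot(c, s) / len(headings)
```

Fully aligned headings give $\varphi=1$, while two opposite headings cancel and give $\varphi=0$.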
A value of $\varphi(t)$ close to its extreme value $1$ indicates that all the agents move in almost the same direction, whereas a value close to $0$ indicates an absence of any collective alignment. Naturally, we say systems I and II are \emph{ordered} at time $t$ when $\varphi(t)$ is close to $1$, and \emph{disordered} when $\varphi(t)$ is close to $0$. We also give an intuitive definition concerning the order: \begin{de} For any heading vector $\theta=(\theta_1,\theta_2,\ldots,\theta_n)\in [-\pi,\pi)^n$, define the length of the shortest interval which can cover it as \begin{eqnarray*} d_{\theta}:=\inf\left\{l\in[0,2\pi): \mbox{there exists a constant } c\in[-\pi,\pi)\right.\\ \left.\mbox{ such that } \theta_i \in[c,c+l] \mbox{ for all }1\leq i\leq n \right\}, \end{eqnarray*} where $[c,c+l]:=[c,\pi)\cup [-\pi,c+l-2\pi]$ in the case $c+l\geq\pi$. \end{de} This quantity can also be understood as the \emph{maximum heading difference} in the flock. Obviously, $d_{\theta}$ is close to $0$ when all the agents move in almost the same direction. Robust consensus has attracted much attention in the research on multi-agent systems \cite{Wang2009,Shi2013,Tian2009,Munz2011,Khoo2009,CY2011}. In \cite{Wang2009}, Wang and Liu provided a definition of robust consensus for systems whose topologies are not coupled with their states. We adapt this definition to our model as follows: \begin{de}\label{def_robust_consensus} The system I (or II) achieves robust consensus if there exists a function $g(\cdot)$ satisfying $\lim_{x\rightarrow 0^+}g(x)=0$, such that for any $\eta>0$ and $\omega\in \Omega^{\infty}$, $$\limsup_{t\rightarrow\infty} d_{\theta(t)} \leq g(\eta).$$ \end{de} This paper will study whether robust consensus can be reached. The connectivity of the underlying graphs is a key issue for consensus of multi-agent systems.
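The covering length $d_\theta$ defined above can be computed from the sorted headings, as the circle minus the largest angular gap; a minimal sketch (hypothetical code, not from the paper):

```python
import math

def shortest_cover(headings):
    """d_theta: length of the shortest arc of the circle covering all
    the headings, obtained from the largest gap between sorted angles."""
    ths = sorted(th % (2 * math.pi) for th in headings)
    gaps = [b - a for a, b in zip(ths, ths[1:])]
    gaps.append(ths[0] + 2 * math.pi - ths[-1])  # wrap-around gap
    return 2 * math.pi - max(gaps)
```

For example, two headings separated by $\pi/2$ give $d_\theta=\pi/2$, and a single heading gives $d_\theta=0$.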
For system I (or II), let $G(t)=G(\mathcal{X},\mathcal{E}(t))$ denote its underlying graph at time $t$, where the vertex set $\mathcal{X}$ consists of the $n$ agents, and the edge set is $\mathcal{E}(t)=\{(j,i):\|X_i(t)-X_j(t)\|_2\leq r_i\}.$ Note that $G(t)$ is a directed graph in our inhomogeneous systems. A directed graph is said to be \emph{strongly connected} if there exists at least one path in each direction between each pair of vertices of the graph. Let $\widetilde{G}(t)$ denote the graph obtained by replacing all directed edges of $G(t)$ with undirected edges. Obviously, the graph $\widetilde{G}(t)$ is undirected. An undirected graph is said to be \emph{connected} if there exists at least one path between any two of its vertices. If $\widetilde{G}(t)$ is connected, then $G(t)$ is said to be \emph{weakly connected}. If a directed graph is strongly connected, it is of course also weakly connected. Given two graphs $G(\mathcal{X},\mathcal{E}_1)$ and $G(\mathcal{X},\mathcal{E}_2)$, define $G(\mathcal{X},\mathcal{E}_1)\cup G(\mathcal{X},\mathcal{E}_2):=G(\mathcal{X},\mathcal{E}_1\cup \mathcal{E}_2)$. Following \cite{Shi2013}, we give the definition of uniformly joint weak connectivity as follows: \begin{de} The graph sequence $\{G(t)\}_{t=0}^{\infty}$ is said to be uniformly jointly weakly connected if there exists an integer $T>0$ such that $\cup_{k=t}^{t+T}\widetilde{G}(k)$ is connected for any $t>0$. \end{de} The assumption of uniformly joint connectivity is widely used as a necessary condition for consensus in multi-agent systems \cite{Jad1,sakvin,Ren,Wang2009,Shi2013,Tian2009,Munz2011,CY2011,Huang2012}. For systems whose topologies are coupled with their states, whether this assumption can be satisfied remains a quite interesting problem. This paper will show that with probability $1$ the underlying graphs of our inhomogeneous SPP systems are not uniformly jointly weakly connected, and of course they are not uniformly jointly connected in the homogeneous case.
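Checking whether a union $\cup_{k=t}^{t+T}\widetilde{G}(k)$ is connected reduces to a standard graph search on the undirected union; a minimal sketch, with directed edges written $(j,i)$ as in $\mathcal{E}(t)$ (hypothetical code, not from the paper):

```python
from collections import deque

def weakly_connected(n, edge_sets):
    """True if the undirected version of the union of the given directed
    edge sets over the n vertices 0..n-1 is connected (BFS from vertex 0)."""
    adj = {v: set() for v in range(n)}
    for edges in edge_sets:
        for j, i in edges:          # directed edge (j, i), made undirected
            adj[i].add(j)
            adj[j].add(i)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == n
```

Uniform joint weak connectivity then corresponds to this test succeeding for every window of $T+1$ consecutive edge sets.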
\subsection{Turn, Vortex, Bifurcation and Merger} The turn, bifurcation and merger of flocks are very common phenomena in nature. These phenomena have been studied with the well-known Boid model using simulations\cite{Reynold1987}. Saber provided a specific flocking algorithm which could produce bifurcation and merger behavior by adding a global leader and some obstacles\cite{Saber2006}. We will show that the SPP models can produce these phenomena spontaneously. The phenomena of turn, bifurcation and merger of flocks are hard to define precisely. We give descriptive definitions as follows:\\ \emph{Turn} and \emph{vortex}: All agents of a flock gradually change their headings from one angle to another in a finite time, where the difference of the two angles is larger than a certain value (for example, $\pi/2$), and during this time all the agents remain almost synchronized, i.e., the headings of all the agents are almost the same at each time. A turn whose angle change exceeds $2\pi$ is called a \emph{vortex}.\\ \emph{Bifurcation}: A group of agents with almost the same headings separates into two groups with different directions, while within each group the agents remain almost synchronized.\\ \emph{Merger}: Two groups of agents with different directions merge into one group with almost the same direction. \section{Transformation to Robust Cooperative Control}\label{transfer} To analyze systems I and II, we first construct two robust control systems, which transform the analysis into the design of control algorithms. For $i=1,\ldots,n$ and $t\geq 0$, let $\delta_i(t)\in(0,\eta)$ be an arbitrarily given real number, let $u_i(t)\in [-\eta+\delta_i(t),\eta-\delta_i(t)]$ denote a bounded control input, and let $b_i(t)\in [-\delta_i(t),\delta_i(t)]$ denote the parameter uncertainty.
For system I we construct the following control system \begin{eqnarray}\label{model2_new} \left\{\begin{array}{ll} \theta_i(t+1)={\rm{atan2}}(\sum_{j=1}^n f_{ij}(t) \sin\theta_{j}(t),\sum_{j=1}^n f_{ij}(t)\cos \theta_{j}(t))+u_i(t)+b_i(t), \\ X_{i}(t+1)=X_{i}(t)+v(\cos\theta_{i}(t+1),\sin\theta_i(t+1)), \end{array}\right. \end{eqnarray} and for system II we construct \begin{eqnarray}\label{model2} \left\{\begin{array}{ll} \theta_i(t+1)=\frac{1}{\sum_{j=1}^n f_{ij}(t)}\sum_{j=1}^n f_{ij}(t)\theta_j(t)+u_i(t)+b_i(t), \\ X_{i}(t+1)=X_{i}(t)+v(\cos\theta_{i}(t+1),\sin\theta_i(t+1)). \end{array}\right. \end{eqnarray} Let $ S^*:= \mathds{R}^{2n}\times [-\pi,\pi)^n$ (or $[0,L)^{2n}\times [-\pi,\pi)^n$ for the periodic boundary case defined in Section \ref{mrs_2}) be the state space of $(X(t), \theta(t))$ for all $t\geq 0$. Given $ S_1\subseteq S^*$, we say $ S_1$ \emph{is reached at time $t$} if $(X(t),\theta(t))\in S_1$, and \emph{is reached in the time interval $[t_1,t_2]$} if there exists $t'\in [t_1,t_2]$ such that $ S_1$ is reached at time $t'$. \begin{de}\label{def_reach} Let $ S_1, S_2\subseteq S^*$ be two state sets. Under protocol (\ref{model2_new}) (or (\ref{model2})), $S_1$ is said to be finite-time robustly reachable from $ S_2$ if: for any $(\theta(0),X(0))\in S_2$, $S_1$ is reached at time $0$, or there exist constants $T>0$ and $\varepsilon\in(0,\eta)$ such that we can find $\delta_i(t)\in[\varepsilon,\eta)$ and $u_i(t)\in [-\eta+\delta_i(t),\eta-\delta_i(t)]$, $1\leq i\leq n$, $0\leq t<T$, which guarantee that $ S_1$ is reached in the time interval $[1,T]$ for arbitrary $b_i(t)\in [-\delta_i(t),\delta_i(t)]$, $1\leq i\leq n$, $0\leq t<T$. \end{de} \begin{rem}\label{rem_def_reach} Under normal circumstances, we choose $\delta_i(t)$ to be a constant $\varepsilon>0$ for all $1\leq i\leq n$ and $0\leq t<T$ to guarantee that the system (\ref{model2_new}) (or (\ref{model2})) robustly reaches a designated state set in the time interval $[1,T]$.
Additionally, we usually set $\varepsilon$ sufficiently small so that the uncertainty term $b_i(t)$ does not affect the system's macroscopic states, such as the ordered or disordered states, in finite time. \end{rem} The following lemma establishes a connection between system I and protocol (\ref{model2_new}), and between system II and protocol (\ref{model2}). \begin{lem}\label{robust} Let $ S_1,\ldots, S_k\subseteq S^*$, $k\geq 1$, be state sets and assume they are finite-time robustly reachable from $ S^*$ under protocol (\ref{model2_new}) (or (\ref{model2})). Suppose the initial positions $X(0)$ and headings $\theta(0)$ are arbitrarily given. Then for system I (or II):\\ (i)~ With probability $1$ there exists an infinite sequence $t_1<t_2<\ldots$ such that $ S_j$ is reached at time $t_{lk+j}$ for all $j=1,\ldots,k$ and $l\geq 0$.\\ (ii)~ There exist constants $T>0$ and $c\in (0,1)$ such that $$P\left(\tau_i-\tau_{i-1}>t\right)\leq c^{\lfloor t/T\rfloor}, \forall i,t\geq 1,$$ where $\tau_0=0$ and $\tau_i:=\min\{t: \mbox{ there exist }\tau_{i-1}<t_1'<t_2'<\cdots<t_k'=t \mbox{ such that for all }j\in[1,k], S_j \mbox{ is reached at time }t_j'\}$ for $i\geq 1$. \end{lem} \begin{proof} (i) Throughout this proof the initial state is arbitrarily given. We recall that $\Omega^t\subseteq \mathds{R}^{n\times(t+1)}$ is the sample space of $(\xi_i(t'))_{1\leq i \leq n, 0\leq t' \leq t}$. Under system I (or II), the values of $X(t)$ and $\theta(t)$ are determined by the sample $w_{t-1}\in\Omega^{t-1}$, so for any $t\geq 1$ and $j\in[1,k]$ we can set $$\Omega_j^{t-1}:=\left\{w_{t-1}\in\Omega^{t-1}: (X(t),\theta(t))(w_{t-1})\in S_j\right\}$$ to be the subset of $\Omega^{t-1}$ such that $S_j$ is reached at time $t$. Thus, \begin{eqnarray}\label{rob_0} \begin{aligned} P\left(\left\{\mbox{$ S_j$ is reached at time $t$}\right\}\,\big|\, w_{t-1}'\right)=1 \quad \mbox{for all } w_{t-1}'\in \Omega_j^{t-1}.
\end{aligned} \end{eqnarray} Also, by our assumption $ S_j$ is finite-time robustly reachable under protocol (\ref{model2_new}) (or (\ref{model2})), so by Definition \ref{def_reach} there exist constants $T_j\geq 2$ and $\varepsilon_j\in(0,\eta)$ such that for any $t\geq 0$ and $(X(t),\theta(t))\notin S_j$, we can find parameters $\delta_i(t')\in [\varepsilon_j,\eta)$ and control inputs $u_i(t')\in [-\eta+\delta_i(t'),\eta-\delta_i(t')]$, $1\leq i\leq n$, $t\leq t'\leq t+T_j-2$, with which the set $ S_j$ is reached in the time interval $[t+1,t+T_j-1]$ for any uncertainties $b_i(t')\in [-\delta_i(t'),\delta_i(t')]$, $1\leq i\leq n, t\leq t'\leq t+T_j-2$. Applying this to system I (or II) indicates that for any $w_{t-1}^*\in (\Omega_j^{t-1})^c$, \begin{eqnarray}\label{rob_1} \begin{aligned} &P\left(\left\{\mbox{$ S_j$ is reached in $[t+1, t+T_j-1]$}\right\}| w_{t-1}^*\right)\\ &\geq P\left(\bigcap_{t\leq t'\leq t+T_j-2}\bigcap_{1\leq i\leq n}\left\{\xi_i(t')\in [u_i(t')-\delta_i(t'),u_i(t')+\delta_i(t')] \right\}|w_{t-1}^*\right). \end{aligned} \end{eqnarray} Here $(\Omega_j^{t-1})^c$ denotes the complement of $\Omega_j^{t-1}$. Define $$F_t:=\bigcap_{1\leq i\leq n}\left\{\xi_i(t)\in [u_i(t)-\delta_i(t),u_i(t)+\delta_i(t)] \right\}.$$ By Bayes' theorem we get \begin{eqnarray}\label{rob_1_2} \begin{aligned} &\mbox{the right side of (\ref{rob_1})}\\ &=P\left( F_t|w_{t-1}^*\right)\prod_{t'=t+1}^{t+T_j-2} P\left(F_{t'}|\bigcap_{t\leq l<t'}F_{l},w_{t-1}^*\right)\\ &\geq \prod_{t'=t}^{t+T_j-2}\left[\underline{\rho} \prod_{i=1}^n (2\delta_i(t'))\right]\geq \underline{\rho}^{T_j-1} \left(2 \varepsilon_j \right)^{n (T_j-1)}, \end{aligned} \end{eqnarray} where the first inequality uses (\ref{noise_cond_2}) and the fact that $-\eta\leq u_i(t')-\delta_i(t')<u_i(t')+\delta_i(t')\leq \eta$ for $1\leq i\leq n$ and $t\leq t'\leq t+T_j-2$.
Define the event $$E_{j,t}:=\left\{\mbox{$ S_j$ is reached in $[t, t+T_j-1]$}\right\}.$$ Combining (\ref{rob_0}), (\ref{rob_1}) and (\ref{rob_1_2}) yields \begin{eqnarray}\label{rob_1_3} \begin{aligned} &P\left(E_{j,t} | w_{t-1}\right)\geq \underline{\rho}^{T_j-1} \left(2 \varepsilon_j \right)^{n (T_j-1)} \quad \mbox{for all } w_{t-1}\in \Omega^{t-1}. \end{aligned} \end{eqnarray} Using Bayes' theorem and (\ref{rob_1_3}) we have for any $w_{t-1}\in \Omega^{t-1}$, \begin{eqnarray}\label{rob_2} \begin{aligned} &P\left(\bigcap_{j=1}^k E_{j,t+\sum_{l=1}^{j-1} T_l} | w_{t-1}\right)\\ &=P\left( E_{1,t}|w_{t-1}\right)\prod_{j=2}^{k} P\left(E_{j,t+\sum_{l=1}^{j-1} T_l}\Big|\bigcap_{l=1}^{j-1} E_{l,t+\sum_{p=1}^{l-1} T_p},w_{t-1}\right)\\ &\geq \prod_{j=1}^{k} \left[\underline{\rho}^{T_j-1} \left(2 \varepsilon_j \right)^{n (T_j-1)}\right]=:c. \end{aligned} \end{eqnarray} Set $E_t:=\bigcap_{j=1}^k E_{j,t+\sum_{l=1}^{j-1} T_l}$ and $T:=T_1+T_2+\ldots+T_k$. For any integer $M>0$, using Bayes' theorem again and (\ref{rob_2}) we have \begin{eqnarray}\label{rob_3} &&P\Big(\bigcap_{m=M}^{\infty}E_{mT}^c\Big)\nonumber\\ &&=P\left(E_{MT}^c\right)\prod_{m=M+1}^{\infty}P\Big(E_{mT}^c\big| \bigcap_{M\leq m'<m}E_{m'T}^c\Big)\nonumber\\ &&=\left[1-P\left(E_{MT}\right)\right]\prod_{m=M+1}^{\infty}\Big[1-P\Big(E_{mT}\big| \bigcap_{M\leq m'<m}E_{m'T}^c\Big)\Big]\nonumber\\ &&\leq \prod_{m=M}^{\infty}\left(1-c\right)=0, \end{eqnarray} which indicates that with probability $1$ there exists an infinite sequence $m_1<m_2<\ldots$ such that $E_{m_l T}$ occurs for all $l\geq 1$. Here $E_{mT}^c$ denotes the complement of $E_{mT}$. By the definition of $E_t$, for each $l\geq 0$ we can find a time sequence $t_{lk+j}\in [m_l T+\sum_{p=1}^{j-1} T_p, m_l T+\sum_{p=1}^{j} T_p-1]$, $1\leq j\leq k$, such that $S_j$ is reached at time $t_{lk+j}$.
(ii) For any $M\geq 0$ and $i>0$, the event $\tau_i-\tau_{i-1}>MT$ implies that $E_t$ does not happen for any $t\in[\tau_{i-1}+1,\tau_{i-1}+1+(M-1)T]$. By the law of total probability and (\ref{rob_3}) we have \begin{eqnarray*}\label{rob_4} \begin{aligned} P\left\{\tau_i-\tau_{i-1}>MT \right\}&\leq P\left(\bigcap_{m=0}^{M-1}E_{\tau_{i-1}+1+mT}^c\right)\\ &=\sum_{t=0}^{\infty}P(\tau_{i-1}=t) P\left(\bigcap_{m=0}^{M-1}E_{t+1+mT}^c\right) \\ &\leq \left(1-c\right)^M\sum_{t=0}^{\infty}P(\tau_{i-1}=t)=\left(1-c\right)^M, \end{aligned} \end{eqnarray*} so \begin{eqnarray*}\label{rob_5} \begin{aligned} P\left(\tau_i-\tau_{i-1}>t \right)&\leq P\left(\tau_i-\tau_{i-1}>\lfloor \frac{t}{T} \rfloor T \right)\leq \left(1-c\right)^{\lfloor t/T\rfloor}. \end{aligned} \end{eqnarray*} \end{proof} In particular, for the case $k=1$, Lemma \ref{robust} yields the following corollary: \begin{cor}\label{robust2} Let $ S \subseteq S^*$ be a state set and assume it is finite-time robustly reachable from $S^c$ under protocol (\ref{model2_new}) (or (\ref{model2})). Suppose the initial positions $X(0)$ and headings $\theta(0)$ are arbitrarily given. Then for system I (or II):\\ (i)~With probability $1$, $ S$ will be reached infinitely many times.\\ (ii)~There exist constants $T>0$ and $c\in (0,1)$ such that $$P\left(\tau_i-\tau_{i-1}>t \right)\leq c^{\lfloor t/T\rfloor}, \forall i,t\geq 1,$$ where $\tau_0:=0$ and $\tau_i:=\min\{t>\tau_{i-1}: S \mbox{ is reached at time }t \}$ for $i\geq 1$. \end{cor} \begin{proof} Because $S$ is finite-time robustly reachable from $S^c$, it is of course also finite-time robustly reachable from $S^*$. The result then follows directly from Lemma \ref{robust}. \end{proof} \begin{rem} Using Lemma \ref{robust} and Corollary \ref{robust2}, we can transform the analysis of systems I and II into the design of robust control algorithms for protocols (\ref{model2_new}) and (\ref{model2}).
In fact, from Remark \ref{rem_def_reach} we can choose suitable parameters such that the uncertainty term $b_i(t)$ does not affect the system's macroscopic states in finite time, so the analysis of systems I and II can be transformed into the design of controls for the protocols (\ref{model2_new}) and (\ref{model2}). \end{rem} \begin{rem} The methods in Lemma \ref{robust} and Corollary \ref{robust2} do not depend on the detailed expressions of the systems. In fact, for any system $x(t+1)=f(x(t),\xi(t))\in \mathds{R}^n$ with noise $\xi(t)\in\mathds{R}^m$, our methods can be used to simplify the analysis, especially the prediction of the possible configurations during its evolution and of its final states. \end{rem} \section{Analysis under Open Boundary Conditions}\label{mrs_1} This section gives some results under open boundary conditions for the agents' positions, which means that all the agents can move in $\mathbb{R}^2$ without boundary limitations. Throughout this section we assume that \emph{the population size $n\geq 2$, the parameters $\eta>0$, $\underline{\rho}>0$, $v>0$, $r_i\geq 0$, $1\leq i\leq n$, and the initial positions $X(0)\in\mathbb{R}^{2n}$ and headings $\theta(0)\in [-\pi,\pi)^n$ are arbitrarily given.} To simplify the exposition we will not repeat these settings in the statements of our results in this section. We first introduce some definitions. For any $t\geq 0$ and $1\leq i\leq n$, set \begin{eqnarray*} \widetilde{\theta}_i(t)=\left\{\begin{array}{ll} {\rm{atan2}}(\sum_{j=1}^n f_{ij}(t)\sin\theta_{j}(t),\sum_{j=1}^n f_{ij}(t)\cos\theta_{j}(t))~~\rm{for~system~I~and~protocol~(\ref{model2_new})},\\ \frac{1}{\sum_{j=1}^n f_{ij}(t)}\sum_{j=1}^n f_{ij}(t)\theta_j(t) ~~\rm{for~system~II~and~protocol~(\ref{model2})}. \end{array}\right. \end{eqnarray*} Let $X=(X_1,\ldots,X_n)\in\mathbb{R}^{2n}$ and $\theta=(\theta_1,\ldots,\theta_n)\in [-\pi,\pi)^n$.
For any $\alpha>0$, define \begin{eqnarray*} && S_{\alpha}^1:=\big\{(X,\theta)\in S^*:\max_{1\leq i\leq n}|\theta_i|\leq \frac{\alpha}{2}\big\}. \end{eqnarray*} We see that $S_{\alpha}^1$ is a set of ordered states when $\alpha$ is small. The following Lemmas \ref{lem1} and \ref{lem1_2} describe a transition to the ordered state for protocols (\ref{model2}) and (\ref{model2_new}), respectively. \begin{lem}\label{lem1} For any $\alpha>0$, $S_{\alpha}^1$ is finite-time robustly reachable from $ S^*$ under protocol (\ref{model2}). \end{lem} \begin{proof} Without loss of generality we assume $\alpha\in(0,\eta]$. The main idea of this proof is as follows: for each agent $i$, if its neighbors' average heading $\widetilde{\theta}_i(t)$ is larger than an upper bound, we set $u_i(t)$ to be a negative input; if $\widetilde{\theta}_i(t)$ is less than a lower bound, we set $u_i(t)$ to be a positive input; otherwise we select a control input such that $\theta_i(t+1)$ will be in the interval $[-\alpha/2,\alpha/2]$. With this idea, for $t\geq 0$ and $1\leq i \leq n$ we choose \begin{eqnarray}\label{lem1_1} &&\left(\delta_i(t),u_i(t)\right)=\left\{\begin{array}{ll} (\eta/4,-3\eta/4)~~~~~\mbox{if}~\widetilde{\theta}_i(t)> \eta-\alpha/2,\\ (\alpha/2,-\widetilde{\theta}_i(t))~~~~\mbox{if}~\widetilde{\theta}_i(t)\in [\alpha/2-\eta,\eta-\alpha/2],\\ (\eta/4,3\eta/4)~~~~~~~~\mbox{if}~\widetilde{\theta}_i(t)< \alpha/2-\eta. \end{array}\right. \end{eqnarray} Then it can be verified that \begin{eqnarray}\label{lem1_2} u_i(t)\in [-\eta+\delta_i(t),\eta-\delta_i(t)], ~~\forall 1\leq i\leq n, t\geq 0, \end{eqnarray} which means our choice of $(u_i(t),\delta_i(t))$ meets the requirements of Definition \ref{def_reach}. Define $$\theta_{\max}(t):=\max_{1\leq i \leq n}\theta_i(t)~~~~\mbox{and}~~~~\theta_{\min}(t):=\min_{1\leq i \leq n}\theta_i(t).$$ If $\theta_{\max}(t)>\alpha/2+\eta/2$ we can get \begin{eqnarray}\label{lem1_3} \theta_{\max}(t+1)\leq \theta_{\max}(t)-\frac{\eta}{2}.
\end{eqnarray} Indeed, if there exists $i\in [1,n]$ such that \begin{eqnarray}\label{lem1_4} \theta_i(t+1)>\theta_{\max}(t)-\frac{\eta}{2}>\frac{\alpha}{2}, \end{eqnarray} then by $\theta_i(t+1)=\widetilde{\theta}_i(t)+u_i(t)+b_i(t)$ and (\ref{lem1_1}) we have $\widetilde{\theta}_i(t)>\eta-\alpha/2$ and $u_i(t)+b_i(t)\in [-\eta,-\eta/2]$. But at the same time, by the definition of $\widetilde{\theta}_i(t)$ we have $\widetilde{\theta}_i(t)\leq \theta_{\max}(t)$, so $$\theta_i(t+1)\leq\theta_{\max}(t)+u_i(t)+b_i(t)\leq \theta_{\max}(t)-\frac{\eta}{2},$$ which contradicts the first inequality of (\ref{lem1_4}). Similarly to (\ref{lem1_3}), we can get that if $\theta_{\min}(t)<-\alpha/2-\eta/2$ then \begin{eqnarray}\label{lem1_5} \theta_{\min}(t+1)\geq \theta_{\min}(t)+\frac{\eta}{2}. \end{eqnarray} Combining this with (\ref{lem1_3}), we have that if $\max_{1\leq i \leq n}|\theta_i(t)|>\alpha/2+\eta/2$ then \begin{eqnarray}\label{lem1_6} \max_{1\leq i \leq n}|\theta_i(t+1)|\leq \max_{1\leq i \leq n}|\theta_i(t)|-\frac{\eta}{2}. \end{eqnarray} Also, if $\max_{1\leq i \leq n}|\theta_i(t)|\leq \alpha/2+\eta/2$, by (\ref{lem1_1}) \begin{eqnarray}\label{lem1_7} \max_{1\leq i \leq n}|\theta_i(t+1)|\leq \alpha/2. \end{eqnarray} Let $t_1:=\lceil \frac{2\pi-\alpha}{\eta}\rceil$. By (\ref{lem1_6}), (\ref{lem1_7}) and the fact that $\max_{1\leq i \leq n}|\theta_i(t)|\leq \pi$, we can get \begin{eqnarray}\label{lem1_8} \max_{1\leq i \leq n}|\theta_i(t_1)|\leq \alpha/2. \end{eqnarray} Combining (\ref{lem1_8}), (\ref{lem1_1}) and (\ref{lem1_2}), we have that $ S_{\alpha}^1$ is robustly reached at time $t_1$ from any initial state under protocol (\ref{model2}). \end{proof} \begin{lem}\label{lem1_2} Suppose there exists an $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ satisfying (\ref{noise_cond_2}). Then for any $\alpha>0$, $S_{\alpha}^1$ is finite-time robustly reachable from $ S^*$ under protocol (\ref{model2_new}).
\end{lem} \begin{proof} Compared with the proof of Lemma \ref{lem1}, the main difference here is that we first control the maximum headings' difference to be less than $\pi$. In this proof, the statement that an angle $a\in[b,c]$ means that $a \bmod 2\pi$ belongs to the set of elements of $[b,c]$ modulo $2\pi$. Consider $\widetilde{\theta}_i(0)$, $1\leq i \leq n$; the length of the shortest interval which can cover them is at most $2\pi(1-\frac{1}{n})$. Let $\theta^*$ be the midpoint of this interval; then $\widetilde{\theta}_i(0)$, $1\leq i \leq n$, all lie in $[\theta^*-\pi(1-\frac{1}{n}),\theta^*+\pi(1-\frac{1}{n})]$. Set $\varepsilon_1:=\min\{\frac{1}{3}(\eta-\frac{\pi}{2}+\frac{\pi}{n}),\frac{\pi}{8} \}$. For $1\leq i \leq n$, we choose $\delta_i(0)=\varepsilon_1$ and \begin{eqnarray*} &&u_i(0)=\left\{\begin{array}{ll} -2\varepsilon_1-\frac{n-2}{2(n-1)}(\widetilde{\theta}_i(0)-\theta^*)~~\mbox{if}~\widetilde{\theta}_i(0)-\theta^*\in [0,\pi(1-\frac{1}{n})],\nonumber\\ 2\varepsilon_1-\frac{n-2}{2(n-1)}(\widetilde{\theta}_i(0)-\theta^*)~~\mbox{if}~\widetilde{\theta}_i(0)-\theta^*\in [-\pi(1-\frac{1}{n}),0). \end{array}\right. \end{eqnarray*} From this we can compute, for $1\leq i \leq n$, \begin{eqnarray*} \begin{aligned} u_i(0)&\geq -2\varepsilon_1-\frac{n-2}{2(n-1)}\pi(1-\frac{1}{n})\\ &=-2\varepsilon_1-\frac{\pi}{2}+\frac{\pi}{n}\geq -\eta+\varepsilon_1, \end{aligned} \end{eqnarray*} and similarly $u_i(0) \leq \eta-\varepsilon_1$, which indicates that the condition $u_i(0)\in [-\eta+\delta_i(0),\eta-\delta_i(0)]$ in Definition \ref{def_reach} is satisfied.
Also, we can get \begin{eqnarray*} \begin{aligned} &\widetilde{\theta}_i(0)+u_i(0)-\theta^*\\ &\in \left[\min\{-2\varepsilon_1, 2\varepsilon_1-\frac{\pi}{2}\},\max\{2\varepsilon_1,-2\varepsilon_1+\frac{\pi}{2}\}\right]\\ &=\left[2\varepsilon_1-\frac{\pi}{2},-2\varepsilon_1+\frac{\pi}{2}\right] \end{aligned} \end{eqnarray*} and so with (\ref{model2_new}) \begin{eqnarray*} \begin{aligned} \theta_i(1)-\theta^*\in [\varepsilon_1-\frac{\pi}{2},-\varepsilon_1+\frac{\pi}{2}],~~~~\forall 1\leq i\leq n, \end{aligned} \end{eqnarray*} which indicates that \begin{eqnarray*} \widetilde{\theta}_i(1)-\theta^*\in [\varepsilon_1-\frac{\pi}{2},-\varepsilon_1+\frac{\pi}{2}],~~~~\forall 1\leq i\leq n. \end{eqnarray*} Next we control all the headings of the agents into a neighborhood of $\theta^*$. Let $\varepsilon_2:=\min\{\frac{\pi}{8},\frac{\eta}{4},\frac{\alpha}{2}\}$ and set $t_1:=\lceil \frac{\frac{\pi}{2}-\eta+\varepsilon_2}{\eta-2\varepsilon_2}\rceil+2$. For $1\leq t<t_1$ and $1\leq i \leq n$ we choose $\delta_i(t)=\varepsilon_2$ and \begin{eqnarray} u_i(t)=\left\{\begin{array}{ll} -\eta+\varepsilon_2~~~\mbox{if}~\widetilde{\theta}_i(t)-\theta^*\in (\eta-\varepsilon_2,\frac{\pi}{2}),\nonumber\\ \theta^*-\widetilde{\theta}_i(t)~\mbox{if}~\widetilde{\theta}_i(t)-\theta^*\in [\varepsilon_2-\eta,\eta-\varepsilon_2],\\ \eta-\varepsilon_2~~~~~~\mbox{if}~\widetilde{\theta}_i(t)-\theta^*\in (\frac{-\pi}{2},\varepsilon_2-\eta). \end{array}\right. \end{eqnarray} By almost the same process as in (\ref{lem1_2})--(\ref{lem1_8}) we have \begin{eqnarray*} \theta_i(t_1)-\theta^*\in [-\varepsilon_2,\varepsilon_2],~~~~\forall 1\leq i\leq n. \end{eqnarray*} Finally we control all the headings of the agents into a neighborhood of $0$. Without loss of generality we assume $\theta^*\in [-\pi,0]$.
For $t\geq t_1$ and $1\leq i \leq n$ we choose $\delta_i(t)=\varepsilon_2$ and \begin{eqnarray} u_i(t)=\left\{\begin{array}{ll} \eta-\varepsilon_2~~~~~\mbox{if}~\widetilde{\theta}_i(t)\in [-\pi-\varepsilon_2,\varepsilon_2-\eta),\nonumber\\ -\widetilde{\theta}_i(t)~~~~\mbox{otherwise}, \end{array}\right. \end{eqnarray} and we can get that $\theta_i(t_2)\in [-\varepsilon_2,\varepsilon_2]$, $1\leq i \leq n$, with $t_2:=\lceil\frac{\pi}{\eta-2\varepsilon_2}\rceil+t_1$. \end{proof} \begin{rem} The condition $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ can be satisfied for any non-degenerate zero-mean Gaussian white noise. In fact, we conjecture that Lemma \ref{lem1_2} should also hold for any $\eta>0$; however, a rigorous proof appears difficult. Liu and Guo\cite{liu2009b} considered the related problem for the original Vicsek model without noise, but they needed an additional assumption on the initial headings. \end{rem} The following lemma describes a connection between the order parameter and the maximum headings' difference. \begin{lem}\label{lem2} For any $\varepsilon\in(0,1)$ and $\theta(t)\in[-\pi,\pi)^n$, if $d_{\theta(t)}\leq \arccos\left((1-\varepsilon)^2\right)$ then the order function $\varphi(t)\geq 1-\varepsilon$. \end{lem} \begin{proof} Since the difference of any two headings is at most $d_{\theta(t)}\leq \arccos\left((1-\varepsilon)^2\right)\leq \frac{\pi}{2}$, we have $\cos\left[\theta_i(t)-\theta_j(t)\right]\geq (1-\varepsilon)^2$ for all $i,j$. By the definition of $\varphi(t)$ we have \begin{eqnarray*} \begin{aligned} \varphi(t)&=\frac{1}{n}\big\|\left(\sum_{i=1}^n\cos\theta_i(t),\sum_{i=1}^n \sin\theta_i(t) \right)\big\|_2\\ &=\frac{1}{n} \sqrt{\sum_{i,j}\cos\left[\theta_i(t)-\theta_j(t) \right] }\\ &\geq \sqrt{\cos \left( \arccos\left((1-\varepsilon)^2\right) \right)}= 1-\varepsilon. \end{aligned} \end{eqnarray*} \end{proof} For any $\varepsilon>0$, define \begin{eqnarray*} S_{\varepsilon}^2:=\big\{(X,\theta)\in S^*: \frac{1}{n}\big\|\sum_{i=1}^n(\cos\theta_i,\sin\theta_i)\big\|_2\leq \varepsilon\big\}. \end{eqnarray*} Then $S_{\varepsilon}^2$ is a set of disordered states provided $\varepsilon$ is close to $0$. The following lemma describes a transition from ordered states to disordered states.
\begin{lem}\label{lem3} For any $\varepsilon>0$, $ S_{\varepsilon}^2$ is finite-time robustly reachable from $ S_{\eta}^1$ under both protocols (\ref{model2_new}) and (\ref{model2}). \end{lem} The outline of the proof of Lemma \ref{lem3} is as follows: we first divide the agents into different sets, then control the headings of agents in different sets to have a certain difference, which breaks all communication between the different sets after a finite time. After that, we control the headings in each set toward a designed angle such that the order parameter of the system becomes very small. The detailed proof is given in the Appendix. The following theorem says that the order parameter switches infinitely many times between very large and very small values. Note that a large order parameter indicates ordered states and a small order parameter indicates disordered states. Also, we recall that \emph{all the results use the assumptions provided in the first paragraph of this section}. \begin{thm}\label{result_1} Let $\varepsilon\in(0,1)$ be arbitrarily given. Then for system II (or system I with $\eta>\frac{\pi}{2}-\frac{\pi}{n}$), with probability $1$ there exists an infinite time sequence $t_1<t_2<\cdots$ such that \begin{eqnarray*} \varphi(t_i)\left\{\begin{array}{ll} \geq 1-\varepsilon~~~~~\mbox{if $i$ is odd},\\ \leq \varepsilon~~~~~~~~~~~~\mbox{if $i$ is even}. \end{array}\right. \end{eqnarray*} Moreover, let $\tau_0=0$ and let $\tau_i$ denote the stopping time \begin{eqnarray*} \tau_i=\left\{\begin{array}{ll} \min\{t>\tau_{i-1}:\varphi(t)\geq 1-\varepsilon\}~~~~~\mbox{if $i$ is odd},\\ \min\{t>\tau_{i-1}:\varphi(t)\leq \varepsilon\}~~~~~~~~~~~~\mbox{if $i$ is even}, \end{array}\right.
\end{eqnarray*} for $i\geq 1$; then for all $k\geq 0$ and $t\geq 0$, \begin{eqnarray}\label{theo1_0} \begin{aligned} P\left(\tau_{2k+2}-\tau_{2k}> t \right) \leq (1-c)^{\lfloor t/T\rfloor}, \end{aligned} \end{eqnarray} where $c\in (0,1)$ and $T>0$ are constants depending only on $n, r_{\max},\eta, v$ and $\underline{\rho}$. \end{thm} \begin{proof} First, by Lemmas \ref{lem1} (or \ref{lem1_2}) and \ref{lem3} we can get that $ S_{\varepsilon}^2$ is finite-time robustly reachable from any initial state. Also, define \begin{eqnarray*} \overline{ S}_{\varepsilon}:=\big\{(X,\theta)\in S^*: \frac{1}{n}\big\|\sum_{i=1}^n(\cos\theta_i,\sin\theta_i)\big\|_2\geq 1-\varepsilon\big\}. \end{eqnarray*} By Lemmas \ref{lem1} (or \ref{lem1_2}) and \ref{lem2}, $\overline{ S}_{\varepsilon}$ is also finite-time robustly reachable from any initial state. Using Lemma \ref{robust}, the results are obtained by taking $ S_1=\overline{ S}_{\varepsilon}$ and $ S_2= S_{\varepsilon}^2$. \end{proof} We remark that if the noises are non-degenerate zero-mean Gaussian white noises, we can find $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ satisfying (\ref{noise_cond_2}). This condition can be relaxed under an additional assumption; see the following Theorem \ref{relt_assmp}. Also, without this condition we can still show that the disordered states are reached. We first introduce a definition and a lemma as follows: similarly to $S_{\alpha}^1$ we set $$ S_{\alpha}^3:=\left\{(X,\theta)\in S^*: d_{\theta}<\alpha\right\}$$ for any $\alpha>0$. Compared with $S_{\alpha}^1$, the difference is that this set need not be centered at zero. \begin{lem}\label{lem3_new} $( S_{\pi}^3)^c$ is finite-time robustly reachable from $ S_{\pi}^3$ under both protocols (\ref{model2_new}) and (\ref{model2}). \end{lem} The proof of this lemma is given in the Appendix.
The following theorem says that for any initial states and system parameters, the disordered states are reached infinitely many times: \begin{thm}\label{result_org_1} For system I (or II), with probability $1$ there exists an infinite time sequence $t_1<t_2<\cdots$ such that $d_{\theta(t_i)}\geq\pi$ for all $i\geq 1$; moreover, let $\tau_0=0$ and let $\tau_{i+1}$ denote the stopping time \begin{eqnarray*} \tau_{i+1}:=\min\{t>\tau_{i}:d_{\theta(t)}\geq \pi\}; \end{eqnarray*} then for all $i\geq 1$ and $t\geq 0$, \begin{eqnarray}\label{theo_org_0} \begin{aligned} P\left(\tau_{i}-\tau_{i-1}> t \right)\leq (1-c)^{\lfloor t/T\rfloor}, \end{aligned} \end{eqnarray} where $c\in (0,1)$ and $T>0$ are constants depending only on $n, r_{\max},\eta, v$ and $\underline{\rho}$. \end{thm} \begin{proof} Immediate from Corollary \ref{robust2} and Lemma \ref{lem3_new}. \end{proof} The applications and significance of Theorems \ref{result_1} and \ref{result_org_1} are provided in Section \ref{mrs_2} together with the corresponding results under periodic boundary conditions. As mentioned in Subsection \ref{Subsection_def2}, robust consensus has attracted interest in many papers\cite{Wang2009,Shi2013,Tian2009,Munz2011,Khoo2009,CY2011}. We also give a result on robust consensus: \begin{cor}\label{cor_1} Robust consensus cannot be achieved for either system I or system II. \end{cor} \begin{proof} Immediate from Definition \ref{def_robust_consensus} and Theorem \ref{result_org_1}. \end{proof} In \cite{Jad1} Jadbabaie \emph{et al.} analyzed system II without noise, and mentioned that to understand the effects of additive noise, one should focus on how noise affects the connectivity of the associated neighbor graphs. Later, Tahbaz-Salehi and Jadbabaie investigated the original Vicsek model without noise and claimed that the neighbor graphs are jointly connected over infinitely many time intervals for almost all initial states under periodic boundary conditions\cite{Jad2007}.
The following theorem answers how noise affects connectivity under open boundary conditions: \begin{thm}\label{result_2} For system II (or system I with $\eta>\frac{\pi}{2}-\frac{\pi}{n}$), $\{G(t)\}_{t=0}^{\infty}$ is not uniformly jointly weakly connected with probability $1$. \end{thm} The proof of this theorem, which uses the idea appearing in the proof of Lemma \ref{lem3}, is given in the Appendix. The colorful collective motion of groups of animals attracts much attention from various fields. A basic question is what laws lie behind collective motion, and whether these laws share some common ground. We give two theorems concerning turn, vortex, bifurcation and merger: \begin{thm}\label{turn_1} For system II (or system I with $\eta>\frac{\pi}{2}-\frac{\pi}{n}$), the events of turn, bifurcation and merger will happen infinitely many times with probability $1$. \end{thm} \begin{thm}\label{vortex_1} For system I with $\eta>\frac{\pi}{2}-\frac{\pi}{n}$, with probability $1$ there exist vortices whose duration can be arbitrarily long. \end{thm} The proofs of Theorems \ref{turn_1} and \ref{vortex_1} are given in the Appendix. Our methods have possible applications in some engineering systems. For example, \cite{Yin2011,Wang2013} investigated consensus algorithms for a platoon model; however, a crash analysis is still lacking. Using Lemma \ref{robust}, to analyze whether the vehicles will crash in the original system, we can try to design a cooperative control for its corresponding new system such that the crash states are reached. The design of the cooperative control may use the ideas of the proofs of Theorems \ref{turn_1} and \ref{vortex_1}. Based on this we may also investigate how to design collision-avoidance algorithms for consensus of platoon models.
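Since a turn is defined only descriptively, detecting one in simulation data requires fixing thresholds. The sketch below (the names \texttt{detect\_turn} and \texttt{mean\_heading} are our own, and the synchronization tolerance \texttt{sync\_tol} and the angle threshold $\pi/2$ are hypothetical choices) flags a window in which the group stays almost synchronized while its mean heading drifts by more than the threshold.

```python
import math

def mean_heading(theta):
    """Direction of the summed unit-velocity vectors, computed via atan2."""
    return math.atan2(sum(math.sin(a) for a in theta),
                      sum(math.cos(a) for a in theta))

def angle_diff(a, b):
    """Signed difference a - b wrapped into (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def detect_turn(history, sync_tol=0.2, min_angle=math.pi / 2):
    """Scan a list of heading vectors for a window where the group stays
    almost synchronized (spread <= sync_tol) while its mean heading
    accumulates a change larger than min_angle."""
    spread = [max(abs(angle_diff(a, mean_heading(th))) for a in th)
              for th in history]
    total = 0.0
    for t in range(1, len(history)):
        if spread[t] > sync_tol or spread[t - 1] > sync_tol:
            total = 0.0  # synchronization lost: restart the accumulated angle
            continue
        total += angle_diff(mean_heading(history[t]),
                            mean_heading(history[t - 1]))
        if abs(total) > min_angle:
            return True
    return False
```

A synchronized pair of agents slowly rotating from $0$ past $\pi/2$ is flagged, while a group holding a fixed heading is not.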
\section{Results under Periodic Boundary Conditions}\label{mrs_2} The system studied by Vicsek \emph{et al.} in \cite{Vicsek1995} assumes all the agents move in the square $[0,L)^2$ with periodic boundary conditions, which means that if an agent hits the boundary of the square, it enters the square from the opposite boundary with the same velocity and heading. Mathematically, these conditions have two implications: (i) For all $i\in [1,n]$ and $t\geq 1$ we restrict $x_{i1}(t)$ and $x_{i2}(t)$ to the interval $[0,L)$ by taking them modulo $L$ whenever they leave this interval; (ii) For all $i,j\in [1,n]$ and $t\geq 0$, \begin{eqnarray*} \begin{aligned} \|X_i(t)-X_j(t)\|_2^2&=\min\{|x_{i1}(t)-x_{j1}(t)|, |x_{i1}(t)-x_{j1}(t)\pm L|\}^2\\ &~~+\min\{|x_{i2}(t)-x_{j2}(t)|, |x_{i2}(t)-x_{j2}(t)\pm L|\}^2. \end{aligned} \end{eqnarray*} Similarly to Section \ref{mrs_1}, throughout this section we assume that \emph{the population size $n\geq 2$, the parameters $\eta>0$, $v>0$, $r_i\geq 0$, $1\leq i\leq n$, and the initial positions $X(0)\in [0,L)^{2n}$ and headings $\theta(0)\in [-\pi,\pi)^n$ are arbitrarily given.} Additionally, we also assume that \emph{all the agents move in $[0,L)^2$ with periodic boundary conditions.} To simplify the exposition we will not repeat these settings for our results in this section. With the same proofs, Lemmas \ref{lem1} and \ref{lem1_2} still hold under periodic boundary conditions. Define $$r_{\max}:=\max_{1\leq i\leq n}r_i.$$ For Lemmas \ref{lem3} and \ref{lem3_new}, the corresponding versions under periodic boundary conditions are as follows: \begin{lem}\label{lem4} Let $\varepsilon>0$ be arbitrarily given.
For both protocols (\ref{model2_new}) and (\ref{model2}), if \begin{eqnarray}\label{theo2_00} L>\left\{\begin{array}{ll} 2r_{\max}+2v\sum_{k=0}^{\lfloor \frac{\pi}{2\eta}-\frac{1}{2}\rfloor}\sin(\frac{\eta}{2}+k\eta)\\ ~~\mbox{if $n$ is even or $\varepsilon> \frac{1}{n}$},\\ 3r_{\max}+2v\sum_{k=0}^{\lfloor \frac{\pi}{2\eta}+\frac{1}{\eta}\arcsin\frac{1}{n-1}-\frac{1}{2}\rfloor}\sin(\frac{\eta}{2}+k\eta)\\ ~~\mbox{otherwise},\\ \end{array}\right. \end{eqnarray} then $S_{\varepsilon}^2$ is finite-time robustly reachable from $ S_{\eta}^1$. \end{lem} \begin{lem}\label{lem4_new} For both protocols (\ref{model2_new}) and (\ref{model2}), if \begin{eqnarray}\label{theo2_org_00} L>2r_{\max}+2v\sum_{k=0}^{\lfloor \frac{\pi}{2\eta}-\frac{1}{2}\rfloor}\sin(\frac{\eta}{2}+k\eta), \end{eqnarray} then $( S_{\pi}^3)^c$ is finite-time robustly reachable from $ S_{\pi}^3$. \end{lem} The proofs of Lemmas \ref{lem4} and \ref{lem4_new} are given in the Appendix. Recall that \emph{all the results of this section use the assumptions given in its second paragraph}. Similarly to Theorems \ref{result_1} and \ref{result_org_1}, we give the following Theorems \ref{result_3} and \ref{result_org_3}: \begin{thm}\label{result_3} If (\ref{theo2_00}) holds, then all the results of Theorem \ref{result_1} still hold with $c$ and $T$ additionally depending on $L$. \end{thm} \begin{proof} Following the proof of Theorem \ref{result_1} but using Lemma \ref{lem4} instead of Lemma \ref{lem3} yields the result. \end{proof} \begin{thm}\label{result_org_3} If (\ref{theo2_org_00}) holds, then all the results of Theorem \ref{result_org_1} still hold with $c$ and $T$ additionally depending on $L$. \end{thm} \begin{proof} Immediate from Corollary \ref{robust2} and Lemma \ref{lem4_new}.
\end{proof} In the traditional picture, the order parameter of SPP systems undergoes a phase transition with respect to the noise and the population density \cite{Vicsek1995,Czirak1999}; this picture rests on the assumption that the system stays ordered after a certain time, provided the noise is small and the population density is high. However, Theorems \ref{result_org_1} and \ref{result_org_3} hold for any $\eta>0$ (provided (\ref{theo2_org_00}) holds under periodic boundary conditions) and any $n\geq 2$, so for any noise intensity and population density the order of the SPP system will be broken as time grows large. Additionally, by Theorems \ref{result_1} and \ref{result_3} and the following Theorem \ref{relt_assmp}, the SPP systems switch between ordered and disordered states infinitely many times for any noise intensity and population density. Thus, our results indicate that the order parameter does not exhibit the simple phase transition described in \cite{Vicsek1995,Czirak1999}. Combined with our previous work \cite{Chen2014}, this suggests that the time interval between ordered and disordered states may exhibit a phase transition with respect to the noise and the population density. Our results also provide a mathematical proof of the phenomenon that randomness can make nonequilibrium systems exhibit anomalously giant fluctuations, which has been observed in many actual systems such as glassy systems, granular packings, and active colloids \cite{Tsimring2014,Keizer1987}. The following theorem, analogous to Theorem \ref{result_2}, shows how noise affects connectivity. \begin{thm}\label{result_4} For protocol II (or system I with $\eta>\frac{\pi}{2}-\frac{\pi}{n}$), if $L>2r_{\max}$ then $\{G(t)\}_{t=0}^{\infty}$ is not uniformly jointly weakly connected with probability $1$. \end{thm} The proof of this theorem is given in the Appendix.
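The torus metric defined at the start of this section is easy to get wrong in simulation code because of the wrap-around. A minimal Python sketch (the function name \texttt{periodic\_distance} is ours, not from the paper):

```python
import math

def periodic_distance(Xi, Xj, L):
    """Torus distance on [0, L)^2, as defined in this section: each coordinate
    difference is min(|a-b|, |a-b +/- L|), i.e. the shorter arc around the torus."""
    d2 = 0.0
    for a, b in zip(Xi, Xj):
        d = abs(a - b)
        d = min(d, L - d)  # wrap-around: the shorter of the two arcs
        d2 += d * d
    return math.sqrt(d2)

# On a 5 x 5 torus, points near opposite edges are actually close:
print(periodic_distance((0.1, 0.0), (4.9, 0.0), 5.0))  # 0.2, not 4.8
```

In particular, two agents sitting near opposite edges of the square can still be within each other's interaction radius, which is what makes the reachability arguments of this section differ from the open-boundary case.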
Remark that applying Theorem \ref{result_4} to the homogeneous case shows that $\{G(t)\}_{t=0}^{\infty}$ is not uniformly jointly connected with probability $1$. Corollary \ref{cor_1} says that robust consensus cannot be reached under open boundary conditions. For periodic boundaries, however, whether robust consensus can be reached remains unresolved. In \cite{Wang2009} it was shown that, for systems whose topologies are not coupled with their states, uniform joint connectivity of the associated undirected graphs is a necessary and sufficient condition for robust consensus; however, this result cannot be adapted to our models. Finally, we give the counterparts of Theorems \ref{turn_1} and \ref{vortex_1} under periodic boundary conditions. \begin{thm}\label{turn_2} For system II (or system I with $\eta>\frac{\pi}{2}-\frac{\pi}{n}$), with probability $1$ the event of turn will happen infinitely many times for any $L>0$. Additionally, if (\ref{theo2_org_00}) is satisfied, the events of bifurcation and merge will also happen infinitely many times with probability $1$. \end{thm} \begin{thm}\label{vortex_2} If $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ and $L>0$, with probability $1$ system I will produce vortices whose duration can be arbitrarily long. \end{thm} The proofs of Theorems \ref{turn_2} and \ref{vortex_2} are given in the Appendix. Buhl \emph{et al.} \cite{Buhl2006} used the one-dimensional version of the Vicsek model to investigate the collective behavior of locusts. Their simulations showed that the system exhibited large fluctuations of the order parameter and repeated changes of the group's moving direction when the density of individuals was low or moderate, but became highly ordered after a short time when the density was high. They also observed a similarity between the simulations and actual experiments.
Because the homogeneous versions of our models have rules and features similar to the model in \cite{Buhl2006}, Theorems \ref{result_3}, \ref{turn_2}, \ref{vortex_2} and \ref{relt_assmp} can, to some degree, explain the repeated switches of the group's moving direction and the large fluctuations of the order parameter at low and moderate densities in \cite{Buhl2006}, and predict that these behaviors persist at high density when the time horizon grows large enough. \section{Results under An Assumption}\label{mrs_3} The original Vicsek model can naturally evolve from disordered to ordered states. This has been verified by simulations \cite{Vicsek1995}; however, we can only prove it for the case $\eta>\frac{\pi}{2}-\frac{\pi}{n}$, although it should hold for any $\eta>0$. If this bottleneck can be broken, the results on system I in Theorems \ref{result_1}-\ref{vortex_1} and \ref{result_3}-\ref{vortex_2} still hold with the condition $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ relaxed to $\eta>0$. This fact is formulated as the following theorem: \begin{thm}\label{relt_assmp} Assume Lemma \ref{lem1_2} still holds if the condition of $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ is relaxed to $\eta>0$ under both open and periodic boundary conditions. Then the results of system I in Theorems \ref{result_1}, \ref{result_2}, \ref{turn_1}, \ref{vortex_1}, \ref{result_3}, \ref{result_4}, \ref{turn_2} and \ref{vortex_2} also hold when $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ is replaced by $\eta>0$. \end{thm} The proof of this theorem is given in the Appendix. \section{Simulations}\label{sim} To better illustrate our results, this section provides simulations for both systems I and II under periodic boundary conditions; for each system we consider both the homogeneous and inhomogeneous cases.
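In the homogeneous case, system I is the original Vicsek model. As a reference point for the simulations below, here is a minimal Python sketch of one step of that classic update under periodic boundary conditions; the function and variable names are ours, and the noise array \texttt{xi} is assumed to be drawn externally (e.g., uniformly from an interval determined by the noise intensity):

```python
import math

def vicsek_step(X, theta, xi, v, r, L):
    """One step of the classic (homogeneous) Vicsek update on the torus [0, L)^2:
    each agent adopts the average heading of its neighbours within radius r,
    perturbed by noise xi[i], then moves a distance v. Sketch only; names ours."""
    n = len(theta)

    def dist2(a, b):
        s = 0.0
        for u, w in zip(a, b):
            d = abs(u - w)
            d = min(d, L - d)          # wrap-around distance per coordinate
            s += d * d
        return s

    new_theta = []
    for i in range(n):
        # average neighbour headings as unit vectors (avoids the 2*pi wrap problem)
        s = c = 0.0
        for j in range(n):
            if dist2(X[i], X[j]) <= r * r:   # neighbourhood includes i itself
                s += math.sin(theta[j])
                c += math.cos(theta[j])
        new_theta.append(math.atan2(s, c) + xi[i])
    new_X = [((x + v * math.cos(th)) % L, (y + v * math.sin(th)) % L)
             for (x, y), th in zip(X, new_theta)]
    return new_X, new_theta
```

The `% L` at the end implements condition (i) of the periodic boundary, while `dist2` implements the torus metric (ii).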
In our simulations we assume that the noise $\xi_i(t), 1\leq i\leq n, t\geq 0$ of systems I and II is independently and uniformly distributed in $[-0.6,0.6]$, and we choose the speed $v=0.01$ and the side length of the square $L=5$. The interaction radius $r_i$, $1 \leq i\leq n$, equals $1$ in the homogeneous case, and is selected uniformly and independently at random from $[0,2]$ in the inhomogeneous case. For $1\leq i,j\leq n$ and $t\geq 0$ we set the interaction weights \begin{eqnarray*} f_{ij}(t)=\left\{ \begin{array}{ll} 1 & \mbox{if $\|X_i(t)-X_j(t)\|_2\leq r_i$},\\ 0 & \mbox{otherwise}. \end{array} \right. \end{eqnarray*} The initial headings and positions are selected uniformly and independently at random from $[-\pi,\pi)$ and $[0,L)^2$, respectively. We first simulate the homogeneous system II with $n=10,25,40$, representing low, middle and high density, respectively. The maximum time step is set to $10^6$. The order parameter $\varphi(t)$ is shown in Figure \ref{Fig1}. \begin{figure*}[htbp] \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{FigL.eps} \caption{The order parameter $\varphi(t)$ of the homogeneous system II}\label{Fig1} \end{minipage} \hfill \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{FigN.eps} \caption{The order parameter $\varphi(t)$ of the homogeneous system I (original Vicsek model)}\label{Fig2} \end{minipage} \end{figure*} This simulation shows that, from low to high density, the system exhibits an ordered state at some moments and a disordered state at others as time grows large. This observation is fully consistent with our theoretical results.
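The order parameter $\varphi(t)$ plotted in these figures is, in the Vicsek-model literature, the normalized average velocity of the group; assuming that standard convention, it can be computed as:

```python
import cmath

def order_parameter(headings):
    """Normalized average velocity (1/n)*|sum_j exp(i*theta_j)|: equals 1 for
    fully aligned headings and is near 0 for disordered ones. We assume
    phi(t) follows this standard Vicsek-model convention."""
    n = len(headings)
    return abs(sum(cmath.exp(1j * th) for th in headings)) / n

print(order_parameter([0.3, 0.3, 0.3]))                              # ~1.0 (ordered)
print(order_parameter([0.0, 2 * cmath.pi / 3, -2 * cmath.pi / 3]))   # ~0.0 (disordered)
```

Summing unit vectors rather than raw angles is what makes the measure insensitive to the $2\pi$ wrap of headings.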
\begin{figure*}[htbp] \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{FigLin.eps} \caption{The order parameter $\varphi(t)$ of the inhomogeneous system II}\label{Fig3} \end{minipage} \hfill \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=3in]{FigNin.eps} \caption{The order parameter $\varphi(t)$ of the inhomogeneous system I}\label{Fig4} \end{minipage} \end{figure*} Figure \ref{Fig2} gives the simulation results for the homogeneous system I (the original Vicsek model) with the same configuration as Figure \ref{Fig1}. In these simulations the original Vicsek model exhibits behavior of the order parameter similar to that in Figure \ref{Fig1}, which suggests that the condition $\eta>\frac{\pi}{2}-\frac{\pi}{n}$ in our theoretical results for this model can be relaxed. Figures \ref{Fig3} and \ref{Fig4} use the same configurations as Figures \ref{Fig1} and \ref{Fig2}, except that the interaction radius is selected uniformly at random from $[0,2]$. Comparing Figure \ref{Fig1} with Figure \ref{Fig3}, and Figure \ref{Fig2} with Figure \ref{Fig4}, we observe that homogeneity benefits the order of the system more than inhomogeneity does. \section{Conclusion and Future Works}\label{conclude} Self-organized systems governed by deterministic laws together with randomness are widespread in natural, engineering, social and economic systems. However, analyzing how the local rules of these systems lead to their global behavior is a hard problem common to many fields. This paper proposes a general method for this problem which transforms it into the design of cooperative control algorithms. Using our method we reveal how noise affects the order and connectivity of inhomogeneous SPP systems, and show that these systems can spontaneously produce the phenomena of turn, vortex, and the bifurcation and merging of flocks. An interesting problem for SPP systems is how to minimize the effect of the noise and keep the system ordered.
A possible approach is to adopt the idea of distributed stochastic approximation: each agent applies a decreasing gain function to the neighbors' information in order to attenuate measurement or communication noise \cite{Li2010,Huang2012,Yin2011,Cheng2014}. On the other hand, as noted in much of the literature, the Vicsek model is very basic and probably not truly descriptive of actual biological clusters. In the future we will try to use our methods to analyze more realistic systems. Of course, designing control algorithms for complex real systems remains a challenge; another direction for future work is to develop a corresponding theory for the design of such algorithms.
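The decreasing-gain idea can be sketched as a scalar consensus iteration. Everything in the following sketch is an illustrative assumption rather than the cited algorithms: the gain $a(t)=1/(t+1)$, the Gaussian noise model, and the all-to-all averaging are our choices.

```python
import random

def noisy_consensus(theta, steps=2000, sigma=0.1, seed=0):
    """Scalar consensus with decreasing gain a(t) = 1/(t+1): each agent moves
    toward a noisy measurement of the group average. Because the gain shrinks,
    the injected noise is averaged out instead of accumulating, so the states
    cluster tightly despite persistent noise. Illustrative sketch only."""
    rng = random.Random(seed)
    theta = list(theta)
    n = len(theta)
    for t in range(steps):
        a = 1.0 / (t + 1)
        avg = sum(theta) / n
        # each agent observes the average corrupted by its own noise
        theta = [th + a * ((avg + rng.gauss(0, sigma)) - th) for th in theta]
    return theta
```

With a constant gain the residual disagreement would stay on the order of the noise; the decreasing gain drives it toward zero, which is the mechanism behind the cited stochastic-approximation consensus results.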
\section{Introduction} \label{section:introduction} \textsc{Scrabble} has been played for a long time in various settings, e.g., as a friendly game among friends or household members, in competitive matches, and as a language-learning tool. Using game refinement theory \cite{paper:mathematical_theory_of_game_refinement}, we previously found that \textsc{Scrabble} favors the fun-game aspect over the educational aspect \cite{paper:gamification_and_scrabble}. This paper is an attempt to enhance the learning aspect of \textsc{Scrabble}. Emotional excitement or mental engagement in games is the subject of game refinement theory. Early work in this direction was carried out by Iida et al. \cite{paper:application_of_game_refinement_to_mah_jong}, who constructed a logistic model based on game outcome uncertainty to measure the attractiveness and sophistication of games, known as game refinement theory. Although many efforts have been devoted to the study of scoring sports and board games, \textsc{Scrabble} also has an educational aspect, which requires an extra dimension of analysis. The structure of the paper is as follows. Section~\ref{section:scrabble} presents the basic rules of \textsc{Scrabble}. Section~\ref{section:game_refinement_measure} and Section~\ref{section:education_dimension} examine \textsc{Scrabble} along two distinct dimensions of measurement, from the perspectives of entertainment and education, respectively. Section~\ref{section:assessment_and_discussion} presents the assessment using the swing model and discusses the results of the analysis, and concluding remarks are given in Section~\ref{section:conclusion}. \section{Scrabble} \label{section:scrabble} \textsc{Scrabble} is a word anagram game in which 2 to 4 players competitively score points by placing tiles, each bearing a single letter, onto a 15x15 board. The standard board is shown in Figure~\ref{table:scrabble_board}.
The tiles must form words that are accepted by the dictionary, in either the vertical or horizontal direction in a crossword style. \begin{figure}[ht] \centering \begin{tabular}{|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|p{0.4cm}|} \hline $3W$ & & &$2L$ & & & &$3W$ & & & &$2L$ & & &$3W$ \\ \hline &$2W$ & & & &$3L$ & & & &$3L$ & & & &$2W$ & \\ \hline & &$2W$ & & & &$2L$ & &$2L$ & & & &$2W$ & & \\ \hline & & &$2W$ & & & &$2L$ & & & &$2W$ & & & \\ \hline & & & &$2W$ & & & & & &$2W$ & & & & \\ \hline &$3L$ & & & &$3L$ & & & &$3L$ & & & &$3L$ & \\ \hline & &$2L$ & & & &$2L$ & &$2L$ & & & &$2L$ & & \\ \hline $3W$ & & &$2L$ & & & &$2W$ & & & &$2L$ & & &$3W$ \\ \hline & &$2L$ & & & &$2L$ & &$2L$ & & & &$2L$ & & \\ \hline &$3L$ & & & &$3L$ & & & &$3L$ & & & &$3L$ & \\ \hline & & & &$2W$ & & & & & &$2W$ & & & & \\ \hline & & &$2W$ & & & &$2L$ & & & &$2W$ & & & \\ \hline & &$2W$ & & & &$2L$ & &$2L$ & & & &$2W$ & & \\ \hline &$2W$ & & & &$3L$ & & & &$3L$ & & & &$2W$ & \\ \hline $3W$ & & &$2L$ & & & &$3W$ & & & &$2L$ & & &$3W$ \\ \hline \end{tabular} \caption{Standard \textsc{Scrabble} board} \label{table:scrabble_board} \end{figure} There are 2 general sets of acceptable words, named OCTWL and SOWPODS. These 2 sets were developed specially for \textsc{Scrabble} so that there are only words of 2-15 characters. OCTWL is generally used in the USA, Canada, and Thailand while other countries are using SOWPODS. There are differences in the number of words, as shown in Table~\ref{table:scrabble_acceptable_words_distribution}. 
\begin{table}[ht] \caption{Acceptable words distribution in \textsc{Scrabble}} \label{table:scrabble_acceptable_words_distribution} \centering \begin{tabular}{p{2.2cm} p{4.0cm} l} \Xhline{4\arrayrulewidth} Set &OCTWL &SOWPODS \\ \hline Usage &USA, Canada, Thailand &Others \\ Total Words &187632 &267751 \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{table} Table~\ref{table:scrabble_population} shows the population distribution of players from \textsc{cross-tables} \cite{url:cross_tables}, the unofficial online \textsc{Scrabble} resource. Clearly, there is a large gap between native speakers and non-native speakers. We therefore hypothesize that the current setting of \textsc{Scrabble} is more attractive to players with sufficient English knowledge than to most language learners. \begin{table}[ht] \caption{Population distribution of \textsc{Scrabble} players in \textsc{cross-tables}} \label{table:scrabble_population} \centering \begin{tabular}{p{1.8cm} p{2.6cm} r r} \Xhline{4\arrayrulewidth} Country &Official language &Players &\%~~ \\ \hline Barbados &English, Bajan &2~~ &0.149 \\ Canada &English, French &293~~ &21.8331 \\ Israel &Hebrew, Arabic &1~~ &0.0745 \\ Thailand &Thai &3~~ &0.2235 \\ USA &English &1041~~ &77.5708 \\ Unknown &Unknown &2~~ &0.149 \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{table} \section{Game Refinement Measure} \label{section:game_refinement_measure} This section gives a short description of game refinement theory. A general model of game refinement was proposed based on the concepts of game progress and game information progress \cite{paper:mathematical_theory_of_game_refinement}; it bridges the gap between board games and sports games. \subsection{Game Progress Model} `Game progress' is twofold: one aspect is game speed or scoring rate, while the other is game information progress, which focuses on the game outcome. Game information progress expresses the degree of certainty of the game's result over time or over steps.
Having full information of the game progress, i.e. after its conclusion, game progress $x(t)$ will be given as a linear function of time $t$ with $0 \leq t \leq t_k$ and $0 \leq x(t) \leq x(t_k)$, as shown in Eq.~(\ref{equation:game_refinement_history_1}). \begin{equation} \label{equation:game_refinement_history_1} x(t) = \frac{x(t_k)}{t_k} ~ t \end{equation} However, the game information progress given by Eq.~(\ref{equation:game_refinement_history_1}) is unknown during the in-game period. The presence of uncertainty during the game, often until its final moments, reasonably renders game progress exponential. Hence, a realistic model of game information progress is given by Eq.~(\ref{equation:game_refinement_history_2}). \begin{equation} \label{equation:game_refinement_history_2} x(t) = x(t_k) \left(\frac{t}{t_k}\right)^n \end{equation} Here $n$ stands for a constant parameter determined by the perspective of the observer of the game under consideration. The acceleration of the game information progress is obtained by differentiating Eq.~(\ref{equation:game_refinement_history_2}) twice. Evaluating it at $t = t_k$, we have Eq.~(\ref{equation:game_refinement_history_3}). \begin{equation} \label{equation:game_refinement_history_3} x''(t_k) = \frac{x(t_k)}{(t_k)^n} \, n(n-1) \, t_k^{\,n-2} = \frac{x(t_k)}{(t_k)^2} ~ n(n-1) \end{equation} It is assumed in the current model that game information progress in any type of game is encoded and transported in our brains. We do not yet know the physics of information in the brain, but it is likely that the acceleration of information progress is subject to the forces and laws of physics. Therefore, we expect that the larger the value $\frac{x(t_k)}{(t_k)^2}$, the more exciting the game becomes, due in part to the uncertainty of the game outcome. Thus, we use its square root, $\frac{\sqrt{x(t_k)}}{t_k}$, as a game refinement measure for the game under consideration.
We call it the $GR$ value for short, and write $x(t_k)$ and $t_k$ as $G$ and $T$ respectively, as shown in Eq.~(\ref{equation:game_refinement_history_4}). \begin{equation} \label{equation:game_refinement_history_4} GR = \frac{\sqrt{G}}{T} \end{equation} In previous works, the game progress model has been applied to various sports games \cite{paper:game_refinement_theory_score_limit_games} to verify its effectiveness. The appropriate zone of the game refinement measure ranges from 0.07 to 0.08. The game progress model has also been extended to other domains such as multiplayer card games \cite{paper:game_refinement_theory_and_multiplayer_game_uno} and video games \cite{paper:quantifying_engagement_of_various_games}. Table~\ref{table:game_refinement_comparison_game_progress_model} shows game refinement values for some games. \begin{table}[ht] \caption{Comparison of game refinement values for some games} \label{table:game_refinement_comparison_game_progress_model} \centering \begin{tabular}{l c c c} \Xhline{4\arrayrulewidth} &Successful shoot (G) &Attempt (T) & GR \\ \hline Soccer &2.64 &22 &0.073 \\ Basketball &36.38 &82.01 &0.073 \\ UNO &0.976 &12.684 &0.078 \\ Badminton &46.336 &79.344 &0.086 \\ Table Tennis &54.863 &96.465 &0.077 \\ DotA &68.6 &106.2 &0.078 \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{table} \subsection{Swing Model} In scoring board games like \textsc{Scrabble}, a swing, i.e., a transition of advantage during the game, is treated as a successful shoot, and the game length as an attempt. Let $S$ and $N$ be the average number of swings and the average game length, respectively. Then the refinement measure in the swing model is given by Eq.~(\ref{equation:game_refinement_swing}).
\begin{equation} GR = \frac{\sqrt{S}}{N} \label{equation:game_refinement_swing} \end{equation} \section{Another Measure from Educational Perspective} \label{section:education_dimension} This section gives a new measure from the educational perspective, obtained by focusing on the balance between complexity and learning efficiency. \subsection{Complexity} The measure of search-space complexity, or simply complexity \cite{Allis1994}, indicates the total number of possible games, expressed on the natural logarithm scale. Let $B$ and $D$ be the average branching factor and the average game length, respectively. The complexity is obtained by Eq.~(\ref{equation:complexity}). \begin{equation} C = D\ln{B} \label{equation:complexity} \end{equation} The complexity measure expresses the complexity of the game from the viewpoint of the players. A player able to handle problems of higher complexity can think wider and deeper, and thus find better solutions and better understand the nature of the game. The complexity of some existing games from the viewpoint of experts is shown in Table~\ref{table:complexity_comparison}. In the history of game artificial intelligence, the chess computer Deep Blue beat the world champion Garry Kasparov in May 1997 \cite{paper:deep_blue_artificial_intelligence}, and the Go program AlphaGo beat the world champion Ke Jie in May 2017 \cite{paper:mastering_the_game_of_go}. The difficulty of these artificial intelligence developments clearly shows that the complexity of Go is much greater than that of chess; similarly, the complexity of chess is much greater than that of Tic-tac-toe.
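Both measures are simple to evaluate. The following Python sketch (function names are ours) reproduces some of the published values:

```python
import math

def game_refinement(G, T):
    """GR = sqrt(G) / T for the game progress model (G successful shoots,
    T attempts); the swing model sqrt(S) / N has the same form."""
    return math.sqrt(G) / T

def complexity(B, D):
    """Search-space complexity C = D * ln(B) from the average branching
    factor B and the average game length D."""
    return D * math.log(B)

print(round(complexity(35, 80), 3))    # chess: 284.428
print(round(complexity(250, 208), 3))  # Go: 1148.464
print(game_refinement(2.64, 22))       # soccer: ~0.074 (table reports 0.073)
```

Note that $C$ is the natural logarithm of $B^D$, the approximate game-tree size, which is why the chess and Go values differ by a factor of about four rather than by hundreds of orders of magnitude.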
\begin{table}[ht] \caption{Comparison of complexity for some board games} \label{table:complexity_comparison} \centering \begin{tabular}{p{2cm} c c c} \Xhline{4\arrayrulewidth} & Branching factor (B) & Game length (D) & Complexity (C) \\ \hline Tic-tac-toe & $\leq$9 & $\leq$9 & $\leq$19.775 \\ Chess & 35 & 80 & 284.428 \\ Go & 250 & 208 & 1148.464 \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{table} \subsection{Learning Coefficient} From experiments performed with different dictionary sizes and different player models, we found that the complexity measure has a linear relation with the AI knowledge base, which enables calculating the slope for each dictionary size. Let $d$ and $t$ be the custom dictionary size (as a fraction) and the total number of words in the full dictionary, respectively. The total number of words in the custom dictionary, $d'$, is obtained by Eq.~(\ref{equation:complexity_slope_proof_1}). \begin{equation} \label{equation:complexity_slope_proof_1} d' = td \end{equation} Let $p$ and $x$ be the current player knowledge base and the number of newly learned words, respectively. The new player knowledge base $p'$ is obtained by Eq.~(\ref{equation:complexity_slope_proof_2}). \begin{equation} \label{equation:complexity_slope_proof_2} p' = p + \frac{x}{d'} = p + \frac{x}{td} \end{equation} By definition, the slope $m$ is the change in complexity in proportion to the change in the player knowledge base, as shown by Eq.~(\ref{equation:complexity_slope_proof_3}). \begin{equation} \label{equation:complexity_slope_proof_3} m = \frac{\Delta c}{\Delta p} = \frac{\Delta c}{p' - p} = \frac{\Delta c}{p + \frac{x}{td} - p} = \frac{\Delta c}{\frac{x}{td}} \end{equation} Thus the complexity increment $\Delta c$ is given by Eq.~(\ref{equation:complexity_slope_proof_4}). \begin{equation} \label{equation:complexity_slope_proof_4} \Delta c = \frac{mx}{td} \end{equation} The number of newly learned words $x$ and the total number of words $t$ are constant.
To maximize the benefit, we need to maximize the increment of $\Delta c$ since it represents the improvement of a learner with the same amount of newly learned words. In short, we need to maximize $\frac{m}{d}$, which we define in this study as learning coefficient. \section{Assessment and Discussion} \label{section:assessment_and_discussion} Experiments are conducted by simulating the \textsc{Scrabble} matches between AI with the various knowledge base, using various dictionary sizes. \subsection{Possible Enhancement with focus on Complexity} The experimental results are analyzed using game refinement measure, as shown in Fig.~\ref{figure:15x15_game_refinement}. \begin{figure}[h!] \begin{tikzpicture} \begin{axis}[ width=12cm, height=8cm, xlabel={AI knowledge base}, ylabel={Game refinement value}, xmin=0.1, xmax=1, ymin=0.07, ymax=0.14, xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}, ytick={0, 0.07, 0.08}, legend pos=north east, ymajorgrids=true, grid style=dashed, cycle list name=color ] \addplot coordinates { (0.1, 0.2204) (0.2, 0.1262) (0.3, 0.1115) (0.4, 0.0894) (0.5, 0.0886) (0.6, 0.0831) (0.7, 0.0887) (0.8, 0.0766) (0.9, 0.0795) (1, 0.0731) }; \addlegendentry{d = 0.1} \addplot coordinates { (0.1, 0.1405) (0.2, 0.0996) (0.3, 0.0804) (0.4, 0.0771) (0.5, 0.0761) (0.6, 0.0723) (0.7, 0.0738) (0.8, 0.0788) (0.9, 0.0793) (1, 0.0834) }; \addlegendentry{d = 0.2} \addplot coordinates { (0.1, 0.118) (0.2, 0.0789) (0.3, 0.0747) (0.4, 0.0706) (0.5, 0.0816) (0.6, 0.0733) (0.7, 0.0817) (0.8, 0.0814) (0.9, 0.0819) (1, 0.0813) }; \addlegendentry{d = 0.3} \addplot coordinates { (0.1, 0.1076) (0.2, 0.074) (0.3, 0.0777) (0.4, 0.0798) (0.5, 0.0755) (0.6, 0.078) (0.7, 0.0799) (0.8, 0.0757) (0.9, 0.0828) (1, 0.0894) }; \addlegendentry{d = 0.4} \addplot coordinates { (0.1, 0.0842) (0.2, 0.0738) (0.3, 0.0754) (0.4, 0.0787) (0.5, 0.0763) (0.6, 0.0805) (0.7, 0.0759) (0.8, 0.0837) (0.9, 0.0832) (1, 0.0837) }; \addlegendentry{d = 0.5} \addplot coordinates { (0.1, 0.0886) (0.2, 
0.0801) (0.3, 0.0739) (0.4, 0.08) (0.5, 0.0816) (0.6, 0.077) (0.7, 0.0798) (0.8, 0.0864) (0.9, 0.0817) (1, 0.0765) }; \addlegendentry{d = 0.6} \addplot coordinates { (0.1, 0.0808) (0.2, 0.0755) (0.3, 0.0701) (0.4, 0.0756) (0.5, 0.0833) (0.6, 0.0833) (0.7, 0.082) (0.8, 0.0791) (0.9, 0.0848) (1, 0.0881) }; \addlegendentry{d = 0.7} \addplot coordinates { (0.1, 0.0772) (0.2, 0.0812) (0.3, 0.0755) (0.4, 0.0824) (0.5, 0.0786) (0.6, 0.0837) (0.7, 0.0832) (0.8, 0.0892) (0.9, 0.0887) (1, 0.0903) }; \addlegendentry{d = 0.8} \addplot coordinates { (0.1, 0.0763) (0.2, 0.0726) (0.3, 0.0807) (0.4, 0.0764) (0.5, 0.0903) (0.6, 0.0802) (0.7, 0.0841) (0.8, 0.0911) (0.9, 0.0872) (1, 0.0907) }; \addlegendentry{d = 0.9} \addplot coordinates { (0.1, 0.0751) (0.2, 0.077) (0.3, 0.0796) (0.4, 0.0847) (0.5, 0.0785) (0.6, 0.0905) (0.7, 0.0927) (0.8, 0.0875) (0.9, 0.0856) (1, 0.0926) }; \addlegendentry{d = 1} \addplot[ color=black, opacity=0.25, name path=A ] plot coordinates { (0, 0.07)(1, 0.07) }; \addlegendentry{Appropriate zone} \addplot[ color=black, opacity=0.25, name path=B ] plot coordinates { (0,0.08)(1,0.08) }; \addplot[ color=black, fill opacity=0.25 ] fill between[of=A and B]; \end{axis} \end{tikzpicture} \caption{Measures of game refinement and AI knowledge base: standard \textsc{Scrabble} matches between AIs with different knowledge base on various dictionary size} \label{figure:15x15_game_refinement} \end{figure} Game refinement measure of the original \textsc{Scrabble} is slightly higher than appropriate zone. This reveals that the original setting of \textsc{Scrabble} yields excess branching factors. One possible enhancement is to reduce the board size, and it is found that 13x13 board size gives the best setting. The complete board is shown in Fig.~\ref{table:scrabble_board_13x13}. This results in 24.89\% smaller compared to the standard \textsc{Scrabble}, thus can significantly reduce the branching factor. 
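The quoted size reduction is simply the ratio of cell counts between the two boards; a quick check:

```python
cells_standard = 15 * 15    # 225 cells on the standard board
cells_variant = 13 * 13     # 169 cells on the 13x13 variant
reduction = 1 - cells_variant / cells_standard
print(f"{reduction:.2%}")   # 24.89%
```

Fewer cells mean fewer legal placements per turn, which is the mechanism by which the smaller board reduces the branching factor.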
The results, shown in Fig.~\ref{figure:13x13_game_refinement}, are much closer to the appropriate zone. \begin{figure}[h!] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $2W$ & & & &$3L$ & & & &$3L$ & & & &$2W$\\ \hline &$2W$ & & & &$2L$ & &$2L$ & & & &$2W$ &\\ \hline & &$2W$ & & & &$2L$ & & & &$2W$ & &\\ \hline & & &$2W$ & & & & & &$2W$ & & &\\ \hline $3L$ & & & &$3L$ & & & &$3L$ & & & &$3L$\\ \hline &$2L$ & & & &$2L$ & &$2L$ & & & &$2L$ &\\ \hline & &$2L$ & & & &$2W$ & & & &$2L$ & &\\ \hline &$2L$ & & & &$2L$ & &$2L$ & & & &$2L$ &\\ \hline $3L$ & & & &$3L$ & & & &$3L$ & & & &$3L$\\ \hline & & &$2W$ & & & & & &$2W$ & & &\\ \hline & &$2W$ & & & &$2L$ & & & &$2W$ & &\\ \hline &$2W$ & & & &$2L$ & &$2L$ & & & &$2W$ &\\ \hline $2W$ & & & &$3L$ & & & &$3L$ & & & &$2W$\\ \hline \end{tabular} \caption{13x13 variant of \textsc{Scrabble} board} \label{table:scrabble_board_13x13} \end{figure} \begin{figure}[h!] \begin{tikzpicture} \begin{axis}[ width=12cm, height=8cm, xlabel={AI knowledge base}, ylabel={Game refinement value}, xmin=0.1, xmax=1, ymin=0.07, ymax=0.14, xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}, ytick={0, 0.07, 0.08}, legend pos=north east, ymajorgrids=true, grid style=dashed, cycle list name=color ] \addplot coordinates { (0.1, 0.2352) (0.2, 0.1641) (0.3, 0.1185) (0.4, 0.1255) (0.5, 0.0997) (0.6, 0.0864) (0.7, 0.0971) (0.8, 0.0918) (0.9, 0.0978) (1, 0.0933) }; \addlegendentry{d = 0.1} \addplot coordinates { (0.1, 0.1371) (0.2, 0.1029) (0.3, 0.0898) (0.4, 0.0897) (0.5, 0.0784) (0.6, 0.0782) (0.7, 0.0804) (0.8, 0.0792) (0.9, 0.082) (1, 0.0818) }; \addlegendentry{d = 0.2} \addplot coordinates { (0.1, 0.1309) (0.2, 0.0989) (0.3, 0.0883) (0.4, 0.0789) (0.5, 0.0767) (0.6, 0.0845) (0.7, 0.0789) (0.8, 0.0821) (0.9, 0.0784) (1, 0.0768) }; \addlegendentry{d = 0.3} \addplot coordinates { (0.1, 0.0955) (0.2, 0.0855) (0.3, 0.0768) (0.4, 0.0679) (0.5, 0.0762) (0.6, 0.0816) (0.7, 0.0756) (0.8, 0.0789) (0.9, 0.0815) (1, 0.0738) };
\addlegendentry{d = 0.4} \addplot coordinates { (0.1, 0.0965) (0.2, 0.0796) (0.3, 0.0803) (0.4, 0.0738) (0.5, 0.0755) (0.6, 0.0743) (0.7, 0.0728) (0.8, 0.0698) (0.9, 0.0767) (1, 0.0814) }; \addlegendentry{d = 0.5} \addplot coordinates { (0.1, 0.1025) (0.2, 0.0786) (0.3, 0.0781) (0.4, 0.0751) (0.5, 0.0783) (0.6, 0.0789) (0.7, 0.0738) (0.8, 0.0767) (0.9, 0.0782) (1, 0.0725) }; \addlegendentry{d = 0.6} \addplot coordinates { (0.1, 0.0857) (0.2, 0.0707) (0.3, 0.0779) (0.4, 0.0739) (0.5, 0.0779) (0.6, 0.0717) (0.7, 0.079) (0.8, 0.0819) (0.9, 0.0747) (1, 0.0736) }; \addlegendentry{d = 0.7} \addplot coordinates { (0.1, 0.0824) (0.2, 0.0783) (0.3, 0.0705) (0.4, 0.0792) (0.5, 0.0792) (0.6, 0.0735) (0.7, 0.0802) (0.8, 0.077) (0.9, 0.0734) (1, 0.0778) }; \addlegendentry{d = 0.8} \addplot coordinates { (0.1, 0.087) (0.2, 0.072) (0.3, 0.0767) (0.4, 0.075) (0.5, 0.0747) (0.6, 0.0777) (0.7, 0.0764) (0.8, 0.0819) (0.9, 0.078) (1, 0.0809) }; \addlegendentry{d = 0.9} \addplot coordinates { (0.1, 0.0771) (0.2, 0.0761) (0.3, 0.0787) (0.4, 0.073) (0.5, 0.0723) (0.6, 0.0788) (0.7, 0.0832) (0.8, 0.0791) (0.9, 0.0795) (1, 0.0808) }; \addlegendentry{d = 1} \addplot[ color=black, opacity=0.25, name path=A ] plot coordinates { (0.1, 0.07) (1, 0.07) }; \addlegendentry{Appropriate zone} \addplot[ color=black, opacity=0.25, name path=B ] plot coordinates { (0.1, 0.08) (1, 0.08) }; \addplot[ color=black, fill opacity=0.25 ] fill between[of=A and B]; \end{axis} \end{tikzpicture} \caption{Measures of game refinement and AI knowledge base: 13x13 variation of \textsc{Scrabble} matches between AIs with different knowledge base on various dictionary size} \label{figure:13x13_game_refinement} \end{figure} \subsection{Tendency of Game Refinement Measure Changes} The interesting part is that \textsc{Scrabble} with various size of a dictionary has different game refinement tendency, as shown in Table~\ref{table:15x15_game_refinement_tendency}. 
Note that `Dec' and `Inc' stand for decreasing and increasing, respectively. \begin{table}[ht] \caption{Tendency of game refinement measure changes for different size of dictionary} \label{table:15x15_game_refinement_tendency} \centering \begin{tabular}{p{2.7cm} l} \Xhline{4\arrayrulewidth} Dictionary size &GR tendency \\ \hline 0.01 - 0.1 & Dec \\ 0.2 - 0.6 & Dec then Inc \\ 0.7 - 0.9 & Inc \\ 1 & Inc then Dec \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{table} According to the earlier study \cite{paper:mathematical_theory_of_game_refinement}, the game refinement measure reflects the balance between chance and skill in the game under consideration: a higher value means that chance is a stronger factor than skill. The tendency of the game refinement measure indicates the user experience of the game. If it is increasing, chance has more effect in a match-up between expert players and less between novice players. Novices enjoy such a game as a skill-based game, but once they become experts, individual skill alone is no longer enough to beat the opponent, since other factors, e.g., chance, teamwork and imperfect information, affect the game; this offers a fun-game experience. Conversely, a decreasing tendency indicates a competitive-game experience. In the decreasing-then-increasing case, both experiences are combined in different phases: at the beginning players feel the competitive-game experience, which is later followed by the fun-game experience. The results using the complexity measure are shown in Fig.~\ref{figure:15x15_complexity}.
\begin{figure}[htb] \begin{tikzpicture} \begin{axis}[ width=12cm, height=8cm, xlabel={AI knowledge base}, ylabel={Complexity}, xmin=0.1, xmax=1, ymin=0, ymax=250, xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}, ytick={0, 50, 100, 150, 200, 250}, legend pos=north east, ymajorgrids=true, grid style=dashed, cycle list name=color ] \addplot coordinates { (0.1, 6.4198) (0.2, 20.2083) (0.3, 35.2646) (0.4, 53.4054) (0.5, 62.0025) (0.6, 67.7768) (0.7, 78.855) (0.8, 89.8998) (0.9, 98.0648) (1, 103.2699) }; \addlegendentry{d = 0.1} \addplot coordinates { (0.1, 20.0262) (0.2, 40.1534) (0.3, 73.8202) (0.4, 95.6239) (0.5, 102.2435) (0.6, 116.9369) (0.7, 129.7059) (0.8, 141.53) (0.9, 152.9471) (1, 153.59) }; \addlegendentry{d = 0.2} \addplot coordinates { (0.1, 27.4848) (0.2, 74.6213) (0.3, 97.4117) (0.4, 115.1401) (0.5, 130.1514) (0.6, 151.604) (0.7, 155.0226) (0.8, 170.4017) (0.9, 170.1304) (1, 176.6164) }; \addlegendentry{d = 0.3} \addplot coordinates { (0.1, 38.3081) (0.2, 85.8965) (0.3, 113.1143) (0.4, 132.7561) (0.5, 150.2232) (0.6, 163.785) (0.7, 171.668) (0.8, 173.2142) (0.9, 177.9248) (1, 187.4923) }; \addlegendentry{d = 0.4} \addplot coordinates { (0.1, 58.7438) (0.2, 101.2365) (0.3, 136.4542) (0.4, 147.209) (0.5, 161.3325) (0.6, 177.0388) (0.7, 180.3928) (0.8, 187.1519) (0.9, 194.0312) (1, 198.1639) }; \addlegendentry{d = 0.5} \addplot coordinates { (0.1, 66.5162) (0.2, 116.3982) (0.3, 140.903) (0.4, 156.3169) (0.5, 168.3566) (0.6, 182.9952) (0.7, 185.9932) (0.8, 191.4725) (0.9, 198.5544) (1, 198.6712) }; \addlegendentry{d = 0.6} \addplot coordinates { (0.1, 74.0912) (0.2, 129.3638) (0.3, 152.2268) (0.4, 164.5299) (0.5, 175.9804) (0.6, 186.2004) (0.7, 190.9443) (0.8, 197.3241) (0.9, 200.2789) (1, 197.414) }; \addlegendentry{d = 0.7} \addplot coordinates { (0.1, 84.7892) (0.2, 141.1889) (0.3, 161.1653) (0.4, 175.0712) (0.5, 189.0449) (0.6, 191.7569) (0.7, 197.9346) (0.8, 201.1047) (0.9, 202.8148) (1, 204.7567) }; \addlegendentry{d = 0.8} \addplot coordinates 
{ (0.1, 88.7559) (0.2, 147.3221) (0.3, 168.2361) (0.4, 184.7118) (0.5, 188.9076) (0.6, 194.681) (0.7, 198.8788) (0.8, 202.4208) (0.9, 202.0299) (1, 210.9456) }; \addlegendentry{d = 0.9} \addplot coordinates { (0.1, 100.927) (0.2, 153.2947) (0.3, 172.6909) (0.4, 188.158) (0.5, 193.2091) (0.6, 196.8328) (0.7, 203.9766) (0.8, 211.658) (0.9, 212.1199) (1, 208.8595) }; \addlegendentry{d = 1} \end{axis} \end{tikzpicture} \caption{Complexity and AI knowledge base: standard \textsc{Scrabble} matches between AIs with different knowledge bases on various dictionary sizes} \label{figure:15x15_complexity} \end{figure} \subsection{Learning Coefficient and Summary} We compared the learning coefficient for every dictionary size, as shown in Fig.~\ref{figure:15x15_complexity_slope}. \begin{figure}[h!] \begin{tikzpicture} \begin{axis}[ width=12cm, height=8cm, xlabel={Dictionary size}, ylabel={Complexity slope per dictionary size}, xmin=0, xmax=1, ymin=50, ymax=1350, xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}, ytick={200, 400, 600, 800, 1000, 1200}, legend pos=north east, ymajorgrids=true, grid style=dashed, cycle list name=color ] \addplot coordinates { (0.01, 854.98) (0.02, 803.97) (0.03, 997.4433) (0.04, 1333.335) (0.05, 1231.206) (0.06, 1280.6317) (0.07, 930.6857) (0.08, 1122.5488) (0.09, 1051.3033) (0.1, 1073.907) (0.2, 741.5515) (0.3, 508.4443) (0.4, 366.311) (0.5, 275.526) (0.6, 214.2428) (0.7, 166.3429) (0.8, 135.0066) (0.9, 114.6033) (1, 98.7323) }; \end{axis} \end{tikzpicture} \caption{Relation between dictionary size and corresponding learning coefficient} \label{figure:15x15_complexity_slope} \end{figure} The learning coefficient (complexity slope per dictionary size) peaks at a dictionary size of 4\%. However, the corresponding game refinement values lie far above the appropriate zone. A good balance between entertainment and education would be around 10\% to 20\% of dictionary size. 
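The learning coefficient can be recovered from the plotted complexity data. Below is a minimal sketch, under our assumption (the function names are our own) that it is the ordinary least-squares slope of complexity against AI knowledge base, divided by the dictionary size; for $d=1$ this reproduces the reported value $98.7323$:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def learning_coefficient(knowledge, complexity, dict_size):
    """Assumed definition: complexity slope normalized by dictionary size."""
    return ols_slope(knowledge, complexity) / dict_size

# Complexity values for dictionary size d = 1, taken from the
# complexity figure above.
kb = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
cx = [100.927, 153.2947, 172.6909, 188.158, 193.2091,
      196.8328, 203.9766, 211.658, 212.1199, 208.8595]
print(learning_coefficient(kb, cx, dict_size=1.0))  # ~98.73
```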
We show, in Table~\ref{table:conclusion}, the summary of our results in this study. \begin{table}[h!] \caption{Summary of analyzing \textsc{Scrabble} using game refinement measure and learning coefficient} \label{table:conclusion} \centering \begin{tabular}{l c c c l r} \Xhline{4\arrayrulewidth} Variation & Board size & d & GR & GR tendency & Learning coefficient \\ \hline Standard & 15x15 & 1 & 0.0751 -- 0.0926 &Inc then Dec &98.7323 \\ Entertainment & 13x13 & 1 & 0.0771 -- 0.0808 &Dec then Inc &140.9987 \\ Education & 15x15 & 0.04 & 0.0951 -- 0.3843 &Dec &1333.335 \\ Balance & 15x15 & 0.1 & 0.0731 -- 0.2204 &Dec &1073.907 \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{table} \section{Concluding Remarks} \label{section:conclusion} According to this study, \textsc{Scrabble} players feel the fun-game experience (i.e., entertainment) more than the educational experience. We proposed three directions of possible improvement: entertainment enhancement, education enhancement, and a good balance between the two. \textsc{Scrabble} yields excessive branching factors, so its game refinement values are higher than the appropriate zone. Entertainment enhancement can be achieved by reducing the standard board size (15x15) to 13x13. This improves the game refinement values significantly, in particular allowing native speakers to enjoy a competitive environment. On the other hand, we can also enhance \textsc{Scrabble} along the educational dimension. For this purpose we proposed two new models focusing on complexity and the learning coefficient. Educational enhancement can be achieved by maximizing the learning coefficient, while a good balance between entertainment and education can be found by trading off the two. \subsubsection*{Acknowledgments.} This research is funded by a grant from the Japan Society for the Promotion of Science, in the framework of the Grant-in-Aid for Challenging Exploratory Research.
\section{Introduction and main results} \def\theequation{1.\arabic{equation}}\makeatother \setcounter{equation}{0} This paper is the second part of a work devoted to the study of the following nonlinear elliptic partial differential equation with zero Dirichlet boundary condition \begin{equation}\label{problem} \begin{aligned} -\Delta& u=K(x)u^{q}+\mu u \quad \mbox{in}\,\,\Omega,\\{}& u>0 \quad \mbox{in} \,\,\Omega,\quad u=0 \quad \mbox{on} \,\,\partial\Omega, \end{aligned} \end{equation}where $\Omega\subset \mathbb{R}^n,\,n\geq 5,$ is a bounded domain with a smooth boundary $\partial\Omega,$ $K(x)$ is a $C^2$-function in $\bar{\Omega},$ $q+1=\frac{2n}{n-2}$ is the critical exponent for the embedding of $H_0^1\bigl(\Omega\bigr)$ into $L^{q+1}\bigl(\Omega\bigr)$ and $0< \mu< \mu_1(\Omega),$ where $\mu_1(\Omega)$ denotes the first eigenvalue of $(-\Delta)$ in $H_0^1\bigl(\Omega\bigr).$ \\ In \cite[Theorem 1.1]{Bo1}, we were interested in the existence of at least one solution to \eqref{problem}. That result was centered on a condition of Lions type; namely, we proved the following theorem. Denote $K_\infty:=\sup_{\bar{\Omega}}K(x)$ and let $S:=\mathrm{inf}\{\|u\|^2,\,\,\, u\in H_0^1\bigl(\Omega\bigr)\,\,\mathrm{and\,\,}\|u\|_{q+1}=1\}$ be the best Sobolev constant, where $J(u):=\int_\Omega K(x)|u(x)|^{q+1}\,\mathrm{d}x$ and $\|u\|_p^p:=\int_\Omega |u(x)|^p\,\mathrm{d}x$ for any $p> 1.$ \begin{theorem}\label{Lions}$\mathrm{\bigl([4]\bigr)}$ \begin{equation}\label{zakaria} \quad \end{equation}\end{theorem} When $K(x)\equiv 1,$ we recover the Brezis--Nirenberg existence result \cite[Lemma 1.2]{BrNir}. 
In order to establish the condition \eqref{zakaria}, Brezis and Nirenberg \cite[Lemma 1.1]{BrNir} follow an original idea due to Aubin \cite{A}: By considering the following test function \begin{equation}\label{aubin}u_{\lambda,y_0}(x)=\varphi(x)\cdot c_n^{\frac{n-2}{4}} \bigl(\frac{\lambda}{1+\lambda^2|x-y_0|^2}\bigr)^\frac{n-2}{2}=:\varphi(x)\cdot \delta_{y_0,\lambda}(x),\end{equation}where $c_n:=n^2-2n,\,y_0\in \Omega,\,\lambda> 0,\,\delta_{y_0,\lambda}$ are the positive solutions in $ \mathbb{R}^n,$ concentrated at $y_0,$ of $-\Delta u = u^{\frac{n+2}{n-2}}$ and $\varphi$ is a cut-off function, they proved that the condition \eqref{zakaria} is satisfied for any $\mu> 0.$\\ When $K(x)\not\equiv 1,$ the situation becomes extremely different: Indeed the behavior of $K(x)$ plays a crucial role in establishing existence results; see, e.g., \cite{Bo} for the case $\mu=0$ and $K(x)$ is positive everywhere. But for $\mu=0$ and, of course, $K_\infty>0,$ the following Pohozaev identity \cite{P} \begin{equation}\label{Pohozaev}\frac{1}{2}\int_{\partial \Omega}|\frac{\partial u}{\partial\nu}(x)|^2\langle x,\, \nu(x)\rangle\,\mathrm{d}x =\frac{n-2}{2n}\int_{\Omega}\bigl\langle x,\, \nabla K(x)\bigr\rangle u^{\frac{2n}{n-2}}(x)\,\mathrm{d}x+\mu\int_{\Omega}u^2(x)\,\mathrm{d}x\end{equation} asserts that the problem \eqref{problem} has no solution provided that $\Omega$ is star-shaped with respect to the origin $o$ of $\mathbb{R}^n$ and $\langle x,\, \nabla K(x)\rangle\leq 0$ in $\Omega.$ Here $\nu(x)$ denotes the outward normal vector at $x$ to $\partial \Omega$ and $u$ is supposed to be a solution of \eqref{problem}. (This identity \eqref{Pohozaev} is obtained by multiplying the equation given in \eqref{problem} on the one hand by $u$ and on the other hand by $\sum_{i=1}^nx_i(\partial u/\partial x_i$), and using an integration by parts and the fact that on $\partial \Omega$ we have $\nabla u=(\partial u/\partial \nu)\nu$). 
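For the reader's convenience, the nonexistence deduction from \eqref{Pohozaev} can be sketched as follows (a standard argument, stated here under the hypotheses above):

```latex
% If \Omega is star-shaped about the origin and \mu = 0, then
% \langle x,\nu(x)\rangle \ge 0 on \partial\Omega while
% \langle x,\nabla K(x)\rangle \le 0 in \Omega, so \eqref{Pohozaev} forces
\[
0 \;\le\; \frac{1}{2}\int_{\partial \Omega}
   \Bigl|\frac{\partial u}{\partial\nu}(x)\Bigr|^{2}
   \langle x,\,\nu(x)\rangle\,\mathrm{d}x
   \;=\;\frac{n-2}{2n}\int_{\Omega}
   \bigl\langle x,\,\nabla K(x)\bigr\rangle\,
   u^{\frac{2n}{n-2}}(x)\,\mathrm{d}x \;\le\; 0.
\]
% Hence \partial u/\partial\nu vanishes on the part of \partial\Omega where
% \langle x,\nu\rangle > 0, contradicting Hopf's boundary point lemma,
% which gives \partial u/\partial\nu < 0 for the positive solution u.
```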
In fact, in this case the condition \eqref{zakaria} is not satisfied. In view of this nonexistence result, one naturally asks: What concrete condition can we impose on $\mu$ and on an absolute maximum $y_0$ of $K(x)$ in $\bar{\Omega}$ so that \eqref{zakaria} becomes satisfied? In the case $\mu> 0,$ Lions \cite[Remark 4.7]{Li} considered the test function \eqref{aubin} and showed that the condition \eqref{zakaria} is satisfied provided that \begin{eqnarray*} K_\infty=K(y_0)> 0\quad \mathrm{with}\quad y_0 \in \Omega,\,\,\,\quad\qquad\\ -\frac{(n-2)^2\bar{c}_2\Delta K(y_0)}{2nK(y_0)}< \mu \bar{c}_3,\quad\qquad\qquad \end{eqnarray*} where $\bar{c}_2$ and $\bar{c}_3$ are two positive constants depending only on $n;$ see Proposition \ref{proposition} below.\\ In order to extend the validity of the condition \eqref{zakaria} to a larger class of functions $K(x)$ when $\mu$ is fixed, a choice of test function that takes the geometry of $\Omega$ into account becomes useful. 
To this end, let $P$ be the projection from $H^1(\Omega)$ onto $H_0^1(\Omega)$; that is, $u= Pf$ is the unique solution of $\Delta u = \Delta f \,\,\mbox{in}\,\,\Omega,\,\,\, u=0\,\, \mbox{on}\,\,\partial\Omega.$ Denote by $H$ the regular part of the Green's function of $\bigl(-\Delta\bigr)$ on $\Omega.$ By using the test function $P\delta_{y_0,\lambda},$ we are able to prove the following proposition: \begin{proposition}\label{proposition}{\it\,\,Let $n\geq 5.$ Let $K(x)\in C^2(\bar{\Omega})$ satisfy $K_\infty=K(y_0)> 0$ with $y_0\in \Omega$ and let $\mu> 0.$ Then the condition \eqref{zakaria} holds true provided that one of the following two conditions is satisfied:\\ ${\bf i)}$ $\quad-\frac{(n-2)^2\bar{c}_2\Delta K(y_0)}{2nK(y_0)}< \mu \bar{c}_3,$\\\\ ${\bf ii)}$ $\quad-\frac{(n-2)^2\bar{c}_2\Delta K(y_0)}{2nK(y_0)}= \mu \bar{c}_3$ and \begin{equation*}\label{not satisfied} \begin{aligned}&\liminf_{\lambda \rightarrow +\infty}\,\lambda^{n-2} \biggl[-\int_{B_0}\bigl(\frac{K(x)}{K(y_0)}-1-\frac{\Delta K(y_0)}{2nK(y_0)}|x|^2\bigr)\delta_{y_0, \lambda}^{\frac{2n}{n-2}}\mathrm{d}x+S_n\sum_{k=2}^{[\frac{n-2}{2}]}a_{n,k}\frac{\mu^k}{\lambda^{2k}}\biggr]\\&< \frac{n\bar{c}_4}{(n-2)},\end{aligned}\end{equation*}where $d_0:=\mathrm{dist}(y_0,\partial \Omega),$ $B_0$ is the ball of center $y_0$ and radius $d_0,$ $S_n:=\int_{\mathbb{R}^n}\bigl(1+|x|^2\bigr)^{-n}\mathrm{d}x ,\,\,\bar{c}_2=\int_{\mathbb{R}^n} |x|^2/(1+|x|^2)^n \mathrm{d}x,\,\,\bar{c}_3=\int_{\R^n}1/(1+|x|^2)^{n-2}\,\mathrm{d}x,$ $a_{n,k}$'s are the constants defined by the following Taylor expansion $$\bigl(1-\frac{\bar{c}_3}{c_nS_n}t\bigr)^{\frac{n}{n-2}} =1-\frac{n\bar{c}_3}{(n-2)c_nS_n}t+\sum_{k=2}^{[\frac{n-2}{2}]}a_{n,k}t^k+o\bigl(t^{\frac{n-2}{2}}\bigr)\quad \mathrm{as}\quad t\rightarrow 0,$$\begin{equation*}\label{specified} \begin{aligned}\bar{c}_4:=&-H(y_0, y_0)\int_{\mathbb{R}^n} \frac{\mathrm{d}x}{(1+|x|^2)^{\frac{n+2}{2}}}+\mu 
c_n^{-1}\biggl[2\int_{\Omega}H(y_0,\,x)\frac{\mathrm{d}x}{|x-y_0|^{n-2}}\\&- \int_{\Omega}H^2(y_0,\,x)\mathrm{d}x+\int_{\R^n\setminus \Omega}\frac{\mathrm{d}x}{|x-y_0|^{2n-4}}\biggr].\end{aligned}\end{equation*} } \end{proposition} {\bf Remark 1.1}\,\,\,\,If we use the test function $u_{\lambda,y_0}$ instead of $P\delta_{y_0,\lambda},$ then the corresponding constant $\bar{c}_4$ cannot be specified. This is due to the fact that $u_{\lambda,y_0}$ does not take the boundary of $\Omega$ into account.\\\\ {\bf Example 1.1}\,\,\,\,Let $\mu> 0$ be fixed. To verify whether a function $K(x)$ satisfies the condition ${\bf (ii)},$ we need to know its Taylor expansion, near $y_0,$ of order greater than $2.$ For example, let us take the case where $\Omega=B$ is the unit ball of $\mathbb{R}^5$ and $y_0$ is the origin of $\mathbb{R}^5.$ Assume that $K(x)=f(|x|)$ is a radial, radially non-increasing function with$$f(t)=f(0)+at^2+bt^{3}+o(t^3)\quad \mathrm{as}\,\,t\rightarrow 0.$$Then the condition ${\bf (ii)}$ is satisfied provided that$$-9\bar{c}_2a= \mu \bar{c}_3f(0)\quad \mathrm{and}\quad -3b\int_{\mathbb{R}^5} |x|^3/(1+|x|^2)^5 \mathrm{d}x< 5\bar{c}_4f(0).$$\\ In the second part of this work, we analyze the optimality of the condition ${\bf (i)}$ for some class of functions when $\Omega$ is a ball, $n$ is odd with $n=5$ or $n> 19$ and $K(x)$ is close to a constant, radial and radially non-increasing. 
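The first equality in Example 1.1 can be checked directly: since $K(x)=f(|x|)$ with $f(t)=f(0)+at^2+bt^3+o(t^3),$ we have $\Delta K(y_0)=2na,$ so for $n=5$ the equality in condition ${\bf (ii)}$ reads

```latex
-\frac{(n-2)^{2}\,\bar{c}_2\,\Delta K(y_0)}{2nK(y_0)}
   = -\frac{9\,\bar{c}_2\,(10\,a)}{10\,f(0)}
   = -\frac{9\,\bar{c}_2\,a}{f(0)} = \mu\,\bar{c}_3
 \quad\Longleftrightarrow\quad
 -9\,\bar{c}_2\,a = \mu\,\bar{c}_3\,f(0),
```

which is exactly the first condition stated above.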
To this end, let us state the following assumptions:\\ $\mathbf{(K_1)}$ $\Omega=B(y_0,\,\gamma)$ is the ball of center $y_0$ and radius $\gamma$ in $\mathbb{R}^n.$\\ $\mathbf{(K_\eta)}$ $K(x)=K(y_0)+\eta f_1(|x-y_0|)$ is a non-negative $C^2$-function in $\bar{\Omega},$ where $\eta,\,K(y_0)> 0$ are fixed constants and $f_1$ is a non-increasing function on $[0,\,\gamma]$ independent of $\eta.$\\\\ In this case, we will refer to the problem \eqref{problem} as $\mathbf{(BN)}_\eta.$\\ $\mathbf{(K_3)}$ $\limsup\limits_{t\rightarrow 0}\frac{f_1'(t)-f_1''(0)t}{t^{n-3}}< +\infty.$\\\\ Our optimal result is the following: \begin{theorem}\label{nonexistence}{\it\,\,Let $n$ be an odd integer with $n=5$ or $n> 19$ and let $0< \mu< \mu_1(\Omega).$ Assume that $\Omega$ and $K(x)$ satisfy the assumptions $\mathbf{(K_1)},\,\mathbf{(K_\eta)}$ and $\mathbf{(K_3)}.$ Then there exists a constant $\bar{\eta}$ depending on $n,\,f_1(t)$ and $K(y_0)$ such that if $0< \eta\leq \bar{\eta},$ then the problem $\mathbf{(BN)}_\eta$ admits a solution if and only if\begin{equation}\label{lions}-\frac{(n-2)^2\bar{c}_2\Delta K(y_0)}{2nK(y_0)}< \mu \bar{c}_3.\end{equation}}\end{theorem} The sufficiency is obtained by combining Theorem \ref{Lions} and Proposition \ref{proposition}. For the necessity of the condition \eqref{lions}, we argue by contradiction: the key point is to establish an adequate Pohozaev-type identity for the supposed solution of \eqref{problem}; this identity is a natural extension of that given in the proof of \cite[Lemma 1.4]{BrNir}. To conclude, we need to exhibit a constant $\bar{\eta}$ depending only on $n,\,f_1(t)$ and $K(y_0)$ such that if $\eta\leq \bar{\eta}$ and $\bigl[-(n-2)^2\bar{c}_2\Delta K(y_0)\bigr]/2nK(y_0)\geq \mu \bar{c}_3,$ then this identity becomes impossible. 
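Both constants $\bar c_2$ and $\bar c_3$ reduce, in polar coordinates, to Euler Beta functions, which yields the explicit ratio $\bar c_2/\bar c_3=n(n-4)/\bigl(4(n-1)(n-2)\bigr)$ used in the proof below. The following snippet (ours, not part of the paper) checks this numerically:

```python
from math import gamma

def beta(p, q):
    # Euler Beta function B(p, q) = Gamma(p) Gamma(q) / Gamma(p + q)
    return gamma(p) * gamma(q) / gamma(p + q)

def c2_over_c3(n):
    # In polar coordinates, with the sphere area |S^{n-1}| cancelling:
    #   c2 = |S^{n-1}|/2 * B((n+2)/2, (n-2)/2)  (integrand |x|^2/(1+|x|^2)^n)
    #   c3 = |S^{n-1}|/2 * B(n/2, (n-4)/2)      (integrand 1/(1+|x|^2)^{n-2})
    return beta((n + 2) / 2, (n - 2) / 2) / beta(n / 2, (n - 4) / 2)

for n in (5, 21, 23):
    closed_form = n * (n - 4) / (4 * (n - 1) * (n - 2))
    assert abs(c2_over_c3(n) - closed_form) < 1e-12
```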
\section{Proof of the results}\label{sec2} \def\theequation{2.\arabic{equation}}\makeatother \setcounter{equation}{0} {\it Proof of Proposition \ref{proposition}.} Let $y_0\in \Omega$ be such that $K_\infty=K(y_0)> 0.$ Denote, for a fixed $\lambda> 0$ large enough,\begin{equation}\label{Bahri}A_{y_0,\mu}(\lambda):=\frac{\int_{\Omega}|\nabla P\delta_{y_0, \lambda}|^2- \mu \int_{\Omega} (P\delta_{y_0, \lambda})^2} {\Bigl( \int_{\Omega}K(P\delta_{y_0, \lambda})^{\frac{2n}{n-2}} \Bigr)^{\frac{n-2}{n}}}. \end{equation}In order to get the claim of Proposition \ref{proposition}, it is sufficient to prove that, for $\lambda$ large enough, \begin{equation}\label{Bahri1}\bigl[A_{y_0,\mu}(\lambda)\bigr]^{\frac{n}{n-2}}< \frac{1}{K(y_0)}S^{\frac{n}{n-2}} \end{equation}provided that one of the conditions ${\bf (i)}$ and ${\bf (ii)}$ is satisfied. To this end, we need estimates of the following three quantities:$$\int_{\Omega} (P\delta_{y_0, \lambda})^2,\quad \int_{\Omega}K(P\delta_{y_0, \lambda})^{\frac{2n}{n-2}} \quad \mathrm{and}\quad \int_{\Omega}|\nabla P\delta_{y_0, \lambda}|^2.$$ The last two quantities were estimated in \cite[(2.67), (5.31) and Estimate F8]{B}, and we have\begin{eqnarray} \int_{\Omega}K(P\delta_{y_0, \lambda})^{\frac{2n}{n-2}}=K(y_0)c_n^{\frac{n}{2}}\Bigl[S_n +\frac{\bar{c}_2}{2n}\frac{\Delta K(y_0)}{K(y_0)\lambda^2}- \frac{2n\bar{c}_1}{n-2}\frac{H(y_0, y_0)}{\lambda^{n-2}}\nonumber\\ \qquad\quad\qquad\qquad+\int_{B_0}\bigl(\frac{K(x)}{K(y_0)}-1-\frac{\Delta K(y_0)}{2nK(y_0)}|x|^2\bigr)\delta_{y_0, \lambda}^{\frac{2n}{n-2}}\mathrm{d}x\Bigr]\label{Bahri2}\\+o(\frac{1}{\lambda^{n-2}})+ O\Bigl(\frac{\log(\lambda d_0)}{(\lambda d_0)^n}\Bigr),\qquad\quad\quad\qquad\qquad\nonumber\\ \int_{\Omega}|\nabla P\delta_{y_0, \lambda}|^2=c_n^{\frac{n}{2}}\bigl[S_n - \bar{c}_1\frac{H(y_0, y_0)}{\lambda^{n-2}}\bigr] + O\Bigl(\frac{\log(\lambda d_0)}{(\lambda d_0)^n}\Bigr),\quad\qquad\,\,\,\label{Bahri2222} \end{eqnarray} where 
$d_0:=\mathrm{dist}(y_0,\,\partial \Omega),\,B_0$ is the ball of center $y_0$ and radius $d_0,$ $\bar{c}_1=\int_{\mathbb{R}^n} \frac{dx}{(1+|x|^2)^{\frac{n+2}{2}}}$ and $\bar{c}_2=\int_{\mathbb{R}^n} \frac{|x|^2}{(1+|x|^2)^n} dx.$ Then we are left with the first quantity:\begin{equation}\label{Bahri3}\int_\Omega(\mathrm{P}\delta_{y_0,\lambda})^2(x)\,\mathrm{d}x= \int_\Omega\delta^2_{y_0,\lambda}\,\mathrm{d}x+\int_\Omega\theta^2_{y_0,\lambda}(x)\,\mathrm{d}x-2\int_\Omega\delta_{y_0,\lambda}\theta_{y_0,\lambda}(x)\,\mathrm{d}x,\end{equation} where $\,\theta_{y_0,\lambda}:= \delta_{y_0, \lambda} - P\delta_{y_0, \lambda}.$ First we recall that from \cite[(5.25)]{B} we have the following estimate \begin{equation*}\label{Bahri4}\theta_{y_0,\lambda} (x)= \frac{c_n^{\frac{n-2}{4}}}{\lambda^{\frac{n-2}{2}}}H(y_0, x) + \frac{1}{\lambda^{\frac{n+2}{2}}d_0^n}\cdot O(1),\quad\forall\,\,x\in \Omega,\end{equation*}where $|O(1)|$ is a quantity upper-bounded by a positive constant $M$ independent of $x\in \Omega.$ This, together with Lebesgue's dominated convergence theorem, implies that \begin{eqnarray} \int_\Omega\delta_{y_0,\lambda}\theta_{y_0,\lambda}(x)\,\mathrm{d}x= \frac{c_n^{\frac{n-2}{2}}}{\lambda^{n-2}}\int_{\Omega}H(y_0,\,x)\frac{\lambda^{n-2}}{(1+\lambda^2|x-y_0|^2)^{\frac{{n-2}}{2}}}\,\mathrm{d}x+o( \frac{1}{\lambda^{n-2}})\nonumber\\=\frac{c_n^{\frac{n-2}{2}}}{\lambda^{n-2}}\int_{\Omega}H(y_0,\,x)\frac{1}{|x-y_0|^{n-2}}\,\mathrm{d}x+o( \frac{1}{\lambda^{n-2}}),\,\,\,\quad\qquad\label{Bahri5}\\ \int_\Omega\theta^2_{y_0,\lambda}\mathrm{d}x=\frac{c_n^{\frac{n-2}{2}}}{\lambda^{n-2}}\int_{\Omega}H^2(y_0,\,x)\mathrm{d}x+o( \frac{1}{\lambda^{n-2}}),\quad\qquad\qquad\label{Bahri6} \end{eqnarray}On the other hand, by using, again, Lebesgue's dominated convergence theorem we obtain\begin{equation}\label{regu}\begin{aligned}\int_\Omega\delta^2_{y_0,\lambda}\,\mathrm{d}x &=c_n^{\frac{n-2}{2}}\biggl(\frac{\bar{c}_3}{\lambda^2}- \int_{\R^n\setminus 
\Omega}\frac{\lambda^{n-2}}{(1+\lambda^2|x-y_0|^2)^{n-2}}\,\mathrm{d}x\biggr)\\& =c_n^{\frac{n-2}{2}}\biggl(\frac{\bar{c}_3}{\lambda^2}-\frac{\bar{\bar{c}}_5}{\lambda^{n-2}}+o( \frac{1}{\lambda^{n-2}})\biggr),\end{aligned}\end{equation} where $\bar{c}_3=\int_{\R^n}\frac{1}{(1+|x|^2)^{n-2}}\,\mathrm{d}x$ and $\bar{\bar{c}}_5=\int_{\R^n\setminus \Omega}\frac{1}{|x-y_0|^{2n-4}}\,\mathrm{d}x.$ Combining \eqref{Bahri3}--\eqref{regu} we obtain \begin{eqnarray} \int_\Omega(\mathrm{P}\delta_{y_0,\lambda})^2(x)\,\mathrm{d}x= \frac{c_n^{\frac{n-2}{2}}}{\lambda^{n-2}}\biggl[-2\int_{\Omega}H(y_0,\,x)\frac{1}{|x-y_0|^{n-2}}\,\mathrm{d}x+ \int_{\Omega}H^2(y_0,\,x)\mathrm{d}x\biggr]\nonumber\\+ c_n^{\frac{n-2}{2}}\biggl(\frac{\bar{c}_3}{\lambda^2}-\frac{\bar{\bar{c}}_5}{\lambda^{n-2}}\biggr)+o( \frac{1}{\lambda^{n-2}})\quad \qquad\qquad\qquad\quad\quad\nonumber\\ =:c_n^{\frac{n-2}{2}}\biggl(\frac{\bar{c}_3}{\lambda^2}-\frac{\bar{c}_6}{\lambda^{n-2}}\biggr)+o( \frac{1}{\lambda^{n-2}}),\quad \qquad\qquad\qquad\quad\qquad\quad\label{Bahri7} \end{eqnarray}where $\bar{c}_6:=\int_{\Omega}\bigl[2H(y_0,\,x)/|x-y_0|^{n-2}- H^2(y_0,\,x)\bigr]\mathrm{d}x+\bar{\bar{c}}_5.$ Combining \eqref{Bahri7} and \eqref{Bahri2222} we get, for $\lambda$ large enough, \begin{equation}\label{Bahri8}\begin{aligned} &\biggl[\int_{\Omega}|\nabla P\delta_{y_0, \lambda}|^2-\mu \int_\Omega(\mathrm{P}\delta_{y_0,\lambda})^2(x)\,\mathrm{d}x\biggr]^{\frac{n}{n-2}}\\&=(S_nc_n^{\frac{n}{2}})^{\frac{n}{n-2}} \biggl[1-\frac{\mu \bar{c}_3}{c_nS_n\lambda^2}-\frac{\bar{c}_0}{\lambda^{n-2}}+o( \frac{1}{\lambda^{n-2}})\biggr]^{\frac{n}{n-2}}\\&=(S_nc_n^{\frac{n}{2}})^{\frac{n}{n-2}}\biggl[1-\frac{n\mu \bar{c}_3}{(n-2)c_nS_n\lambda^2}-\frac{n\bar{c}_0}{(n-2)\lambda^{n-2}}+\sum_{k=2}^{[\frac{n-2}{2}]}a_{n,k}\frac{\mu^k}{\lambda^{2k}}\biggr]+o( \frac{1}{\lambda^{n-2}}), \end{aligned}\end{equation} where $a_{n,k}$'s are fixed constants defined by the following Taylor 
expansion$$\bigl(1-\frac{\bar{c}_3}{c_nS_n}t\bigr)^{\frac{n}{n-2}} =1-\frac{n\bar{c}_3}{(n-2)c_nS_n}t+\sum_{k=2}^{[\frac{n-2}{2}]}a_{n,k}t^k+o\bigl(t^{\frac{n-2}{2}}\bigr)\quad \mathrm{as}\quad t\rightarrow 0$$($[(n-2)/2]$ denotes the integer part of $(n-2)/2$ and the sum $\sum_{k=2}^{[(n-2)/2]}$ is omitted when $n=5$) and $\bar{c}_0:=\bigl(\bar{c}_1H(y_0, y_0)+\mu c_n^{-1}\bar{c}_6\bigr)/S_n.$ Equations \eqref{Bahri}, \eqref{Bahri2} and \eqref{Bahri8} imply that, for $\lambda$ large enough, \begin{equation}\label{Bahri9}\begin{aligned} &\bigl[A_{y_0,\mu}(\lambda)\bigr]^{\frac{n}{n-2}}\\&=\frac{S_n^{\frac{2}{n-2}}c_n^{\frac{n}{n-2}}}{K(y_0)}\biggl[1-\bigl(\frac{\bar{c}_2}{2n}\frac{\Delta K(y_0)}{K(y_0)}+\frac{\mu \bar{c}_3}{(n-2)^2}\bigr)\frac{1}{S_n\lambda^2}-\frac{n\bar{c}_4}{(n-2)S_n\lambda^{n-2}}+o( \frac{1}{\lambda^{n-2}})\\&\quad\quad\qquad\quad\quad-\frac{1}{S_n}\int_{B_0}\bigl(\frac{K(x)}{K(y_0)}-1-\frac{\Delta K(y_0)}{2nK(y_0)}|x|^2\bigr)\delta_{y_0, \lambda}^{\frac{2n}{n-2}}\mathrm{d}x+\sum_{k=2}^{[\frac{n-2}{2}]}a_{n,k}\frac{\mu^k}{\lambda^{2k}}\biggr], \end{aligned}\end{equation} where $\bar{c}_4:=-\bar{c}_1H(y_0, y_0)+\mu c_n^{-1}\bar{c}_6.$ On the other hand, observe that, since $K(x)\in C^2(\bar{\Omega}),$ \begin{equation}\label{Bahri999}\int_{B_0}\bigl(\frac{K(x)}{K(y_0)}-1-\frac{\Delta K(y_0)}{2nK(y_0)}|x|^2\bigr)\delta_{y_0, \lambda}^{\frac{2n}{n-2}}\mathrm{d}x=o(\frac{1}{\lambda^2}).\end{equation}Observe also that \begin{equation}\label{Bahri000}S=c_nS_n^{\frac{2}{n}}.\end{equation}Thus under the condition ${\bf (i)}$, the claim \eqref{Bahri1} follows by combining \eqref{Bahri9}--\eqref{Bahri000} and taking $\lambda$ large enough. If the condition ${\bf (ii)}$ is satisfied instead of ${\bf (i)},$ then \eqref{Bahri1} follows by taking $\lambda$ large enough in the right-hand side of \eqref{Bahri9} and using \eqref{Bahri000}. 
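Since the expansion above is nothing but the generalized binomial series $(1-s)^{\alpha}=\sum_{k\ge 0}\binom{\alpha}{k}(-s)^{k}$ with $\alpha=\frac{n}{n-2}$ and $s=\frac{\bar c_3}{c_nS_n}t,$ the constants $a_{n,k}$ admit the closed form

```latex
a_{n,k} \;=\; (-1)^{k}\binom{\frac{n}{n-2}}{k}
   \Bigl(\frac{\bar{c}_3}{c_nS_n}\Bigr)^{k},
 \qquad 2\le k\le\Bigl[\tfrac{n-2}{2}\Bigr],
```

consistent with the first-order coefficient $-\frac{n\bar c_3}{(n-2)c_nS_n}$ displayed above (the case $k=1$).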
This finishes the proof of Proposition \ref{proposition}.\\ {\it Proof of Theorem \ref{nonexistence}.} {\it\, Sufficiency of the condition \eqref{lions}:} From $\mathbf{(K_1)}$ and $\mathbf{(K_\eta)}$ we get $K_\infty=K(y_0)> 0$ with $y_0\in \Omega.$ This, together with the condition \eqref{lions} and the result of Proposition \ref{proposition}, implies that \eqref{zakaria} is satisfied. Thus a solution to problem \eqref{problem} is obtained by applying Theorem \ref{Lions}.\\ {\it Necessity of the condition \eqref{lions}:}\,\,Arguing by contradiction, assume that the problem \eqref{problem} has a solution $u$ under the condition\begin{equation}\label{condition}-\frac{(n-2)^2\bar{c}_2\Delta K(y_0)}{2nK(y_0)}\geq \mu \bar{c}_3.\end{equation}In particular, we have $\Delta K(y_0)\neq 0.$ Up to a translation and a dilation, we can suppose that $$\Omega=B\quad \mathrm{is\, the\, unit\, ball\, of}\,\, \mathbb{R}^n.$$ Now, by a result of Gidas--Ni--Nirenberg \cite[Theorem 1$'$]{GNNir}, $\mathbf{(K_1)}$ and $\mathbf{(K_\eta)}$ imply that $u$ is necessarily spherically symmetric. We write $u(x)=:u(t)$ and $K(x)=:f(t),$ where $t=|x|\in [0,\,1].$ Thus $u$ satisfies the following ordinary differential equation \begin{eqnarray} &\qquad\qquad-u''-\frac{n-1}{t}u'=f(t)u^{\frac{n+2}{n-2}}+\mu u\qquad \mathrm{on\,\,}(0,\,1), \label{zak1}\\& u'(0)=u(1)=0. \nonumber \end{eqnarray}(Note that $u\in C^2(\bar{\Omega})$.) 
Let $\psi$ be a smooth function on $[0,\,1]$ such that $\psi(0)=0.$ Multiplying the equation \eqref{zak1} by $t^{n-1}\psi u'$ and $\Bigl(t^{n-1}\psi'(t)-(n-1)t^{n-2}\psi(t)\Bigr)u$ and integrating by parts several times, we obtain \begin{eqnarray} -\frac{1}{2}|u'(1)|^2\psi(1)+\frac{1}{2}\int_0^1|u'(t)|^2\Bigl(t^{n-1}\psi'(t)-(n-1)t^{n-2}\psi(t)\Bigr)\mathrm{d}t\nonumber\\ =-\bar{c}_n\int_0^1u^{\frac{2n}{n-2}}\biggl[f(t)\Bigl(t\psi'(t)+(n-1)\psi(t)\Bigr)+f'(t)t\psi(t)\biggr]t^{n-2}\mathrm{d}t \label{zak3}\\-\frac{\mu}{2} \int_0^1u^{2}\Bigl(t^{n-1}\psi'(t)+(n-1)t^{n-2}\psi(t)\Bigr)\mathrm{d}t ,\qquad\qquad\qquad\,\,\,\,\nonumber\\ \int_0^1\biggl[f(t)u^{\frac{2n}{n-2}}\Bigl(t\psi'(t)-(n-1)\psi(t)\Bigr)+\mu u^{2}\Bigl(t\psi'(t)-(n-1)\psi(t)\Bigr)\biggr]t^{n-2}\mathrm{d}t\nonumber\\ =-\frac{1}{2}\int_0^1u^{2}\biggl[t^3\psi^{(3)}(t)+(n-1)(n-3)\Bigl(\psi(t)-t\psi'(t)\Bigr)\biggr]t^{n-4}\mathrm{d}t\label{zak4}\\ +\int_0^1|u'(t)|^2\Bigl(t\psi'(t)-(n-1)\psi(t)\Bigr)t^{n-2}\mathrm{d}t,\qquad\qquad\qquad\quad\,\,\,\,\nonumber \end{eqnarray}respectively, where $\bar{c}_n:=\frac{n-2}{2n}.$ Combining \eqref{zak3} and \eqref{zak4} we get \begin{align}\label{zak5}\begin{split} &-\frac{1}{2}|u'(1)|^2\psi(1)+\int_0^1u^{2}\biggl[\mu\psi'(t)+\frac{1}{4}\psi^{(3)}(t) +\frac{1}{4}\frac{(n-1)(n-3)}{t^3}\bigl(\psi(t)-t\psi'(t)\bigr)\biggr]t^{n-1}\mathrm{d}t\\&= \int_0^1u^{\frac{2n}{n-2}}\biggl[-\bar{c}_ntf'(t)\psi(t)+\frac{(n-1)}{n}f(t)\Bigl(\psi(t)-t\psi'(t)\Bigr)\biggr]t^{n-2}\mathrm{d}t. \end{split}\end{align}(Note that the Pohozaev identity \eqref{Pohozaev} corresponds to the case where $\psi(t)=t$). In order to get the desired contradiction, we need to choose a suitable function $\psi$ as a solution of the following ordinary differential equation\begin{equation}\label{zak6} \mu\psi'+\frac{1}{4}\psi^{(3)}+\frac{1}{4}\frac{(n-1)(n-3)}{t^3}\bigl(\psi-t\psi'\bigr)=0,\quad\forall\,\,t\in (0,\,1]. 
\end{equation} A straightforward computation shows that the equation \eqref{zak6} has two solutions defined on $[0,\,1]$ by a series $\psi_1(t)=\sum_{p=0}^{+\infty}a_{2p+1}t^{2p+1}$ and $\psi_2(t)=\sum_{p=0}^{+\infty}a_{2p}t^{2p},$ where\begin{equation}\label{zak7}a_{2p+1} =-\frac{2(2p-1)\mu}{p\bigl[(2p+1)(2p-1)-(n-1)(n-3)\bigr]}a_{2p-1},\quad\forall\,\,p\geq 1,\end{equation} \begin{equation}\label{zak8} \left\{ \begin{array}{ll} a_{2p}=0, & \qquad\forall\,\,0\leq p< \frac{n-1}{2}; \\ a_{2p}=-\frac{8(p-1)\mu}{(2p-1)\bigl[4p(p-1)-(n-1)(n-3)\bigr]}a_{2p-2}, &\qquad\forall\,\,p\geq \frac{n+1}{2}. \end{array} \right.\end{equation} Let $a_1> 0$ and $a_{n-1}< 0$ be fixed. Note that $\psi_1$ and $\psi_2$ are smooth on $[0,\,1].$ On the other hand, we claim that, for $\mu$ small enough, we have \begin{equation}\label{positivity} \psi_1(t)> 0\quad\mathrm{ and }\quad \psi_2(t)< 0,\quad \forall\,\,t\in (0,\,1].\end{equation} Indeed, it is sufficient to remark that $\psi_1$ and $\psi_2$ satisfy the hypotheses of the alternating series theorem for $\mu$ small enough. Thus there exists a constant $\mu(n)>0$ depending only on $n,$ such that the claim \eqref{positivity} is valid for every $\mu\leq \mu(n).$ Denoting $\bar{\eta}_3:=-2n\bar{c}_3K(y_0)\mu(n)/(n-2)^2\bar{c}_2\Delta K(y_0).$ \eqref{positivity} enables us to choose $a_1$ and $a_{n-1}$ such that \begin{equation}\label{zak10}\bar{\psi}(t):=\psi_1(t)+\psi_2(t)\geq 0,\quad \forall \,\,t\in [0,\,1],\,\,\forall\,\,\mu\leq \mu(n). 
\end{equation} Regarding the identities \eqref{zak5} and \eqref{zak10} and in order to get the desired contradiction, it is sufficient to investigate a constant $\bar{\eta}> 0$ such that if $\eta \leq \bar{\eta},$ then, for any $t\in (0,\,1],$ \begin{equation}\label{zak15}-\frac{n-2}{2n}t\eta f_1'(t)\bar{\psi}(t)+\frac{n-1}{n} \bigl(f(0)+\eta f_1(t)\bigr)\Bigl(\bar{\psi}(t)-t\bar{\psi}'(t)\Bigr)> 0.\end{equation} Let $0< \delta\leq 1$ be a fixed constant and $\delta\leq t\leq 1.$ Combining \eqref{condition}, \eqref{zak7} and \eqref{zak8} and using the fact that $\mu\leq \mu(n)$ we obtain \begin{equation}\label{zak19}\begin{split}&\quad-\frac{n-2}{2n}\eta t f_1'(t)\bar{\psi}(t)+\frac{n-1}{n} f(t)\Bigl(\bar{\psi}(t)-t\bar{\psi}'(t)\Bigr)\\&= \frac{n-1}{n}a_1 f(t)\Bigl(\frac{\eta}{f(0)}O_n(1)- \frac{(n-2)a_{n-1}}{a_1}t^{n-4}\bigl(1+\frac{\eta}{f(0)} O_n(1)\bigr)\Bigr)t^3\\&\quad-\frac{n-2}{2n}t\eta f_1'(t)\bar{\psi}(t), \end{split}\end{equation}where $|O_n(1)|$ is upper-bounded by a fixed constant $M$ depending only on $n.$ Let $\bar{\eta}_2> 0$ be a constant such that, for any $0< \eta\leq \bar{\eta}_2,$ \eqref{zak10} is satisfied and \begin{equation}\label{zak20}-\frac{\eta}{f(0)}\bigl|O_n(1)\bigr|- \frac{(n-2)a_{n-1}}{a_1}\delta^{n-4}\bigl(1-\frac{\eta}{f(0)} |O_n(1)|\bigr)> 0. \end{equation}Combining \eqref{zak10}, \eqref{zak19}, $\mathbf{(K_\eta)},$ and \eqref{zak20} we obtain \eqref{zak15} for any $\delta\leq t\leq 1$ and any $0< \eta\leq \mathrm{min}(\bar{\eta}_2,\,\bar{\eta}_3).$ Observe that if we let $\delta$ tend to $0,$ then to regain \eqref{zak20} for $\eta\leq \bar{\eta}_2,$ $\bar{\eta}_2$ must go to $0:$ this fact leads to the loss of \eqref{zak15}. 
Thus we have to fix the constant $\delta$ and we need another argument for the case $0< t\leq \delta.$ To this end, we will take care of the local information about the function $f_1(t)$ near its critical point $0.$ First, let us observe that the condition $\mathbf{(K_3)}$ implies the existence of two constants $\delta,\,M_0> 0$ such that, for any $0<t\leq \delta,$\begin{equation}\label{M0}0\leq f_1'(t)-tf_1''(0)\leq M_0t^{n-3}\quad \mathrm{or}\quad f_1'(t)-tf_1''(0)\leq 0. \end{equation}In particular, we deduce from \eqref{zak10} and \eqref{M0} that, for any $0<t\leq \delta,$ \begin{equation}\label{M00}-\bigl(f_1'(t)-tf_1''(0)\bigr)\bar{\psi}(t)\geq-M_0t^{n-3}\bar{\psi}(t)\geq -M_0t^{n-2}(a_1-a_{n-1})|O_n(1)|, \end{equation}where $|O_n(1)|$ is upper-bounded by a constant $M_{n}$ depending only on $n.$ Now, by combining \eqref{condition}, \eqref{zak7}, \eqref{zak8} and \eqref{zak10} and using the fact that $\mu\leq \mu(n)$ we obtain\begin{equation}\label{zak16}\begin{split}&-\frac{n-2}{2n}\eta tf_1'(t)\bar{\psi}(t)+\frac{n-1}{n} \bigl(f(0)+\eta f_1(t)\bigr)\Bigl(\bar{\psi}(t)-t\bar{\psi}'(t)\Bigr)\\&=\Bigl(-\frac{n-2}{2n}\eta f_1''(0)a_1-2f(0)\frac{(n-1)}{n}a_3\Bigr)t^3-\frac{n-2}{2n}\eta t\bigl(f_1'(t)-tf_1''(0)\bigr)\bar{\psi}(t)\\&+\Bigl[-\bigl(\frac{n-2}{2n}+\frac{n-1}{n}\bigr)\eta f_1''(0) a_3-4f(0)\frac{n-1}{n}a_5+\frac{\eta^2\mu}{f(0)}a_1 O_{n,f_1}(1)\Bigr]t^5\\&\,\,+f(0)\Bigl(-\frac{(n-2)(n-1)}{n} a_{n-1}+\frac{\eta}{f(0)} O_{n,f_1}(1)(-a_{n-1}+a_1)\Bigr)t^{n-1}, \end{split}\end{equation} where $|O_{n,f_1}(1)|$ is upper-bounded by a fixed constant $M_{n,f_1}$ depending only on $n$ and the function $f_1(x).$ Finally, by using \eqref{zak7}, \eqref{condition} and the fact that $n\neq 7\text{--}19$ and that $\bar{c}_2/\bar{c}_3=n(n-4)/4(n-1)(n-2)$ we get \begin{equation}\label{zak17} -\frac{n-2}{2n}\eta f_1''(0) a_1-2f(0)\frac{n-1}{n}a_3\geq 0, \end{equation} \begin{equation}\label{zak18}\frac{1}{\eta\mu a_1}\Bigl(-\bigl(\frac{n-2}{2n}+\frac{n-1}{n}\bigr)\eta 
f_1''(0) a_3-4f(0)\frac{n-1}{n}a_5\Bigr)\geq M> 0, \end{equation}where $M$ is a constant depending only on $n.$ Combining \eqref{M00}--\eqref{zak18} and taking $\bar{\eta}_1> 0$ small enough such that, for any $0< \eta\leq \bar{\eta}_1,$\begin{eqnarray} -\frac{(n-2)(n-1)}{n} a_{n-1}-\frac{\eta}{f(0)} (-a_{n-1}+a_1)\bigl(|O_{n,f_1}(1)|+\frac{n-2}{2n}M_0|O_n(1)|\bigr)> 0,\nonumber\\ M-\frac{\eta}{f(0)}|O_{n,f_1}(1)|> 0,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\,\,\,\,\nonumber \end{eqnarray}we get \eqref{zak15} for any $0< t\leq \delta$ and any $0< \eta\leq \mathrm{min}(\bar{\eta}_1,\,\bar{\eta}_3).$ The proof of Theorem \ref{nonexistence} follows by choosing $\bar{\eta}=\mathrm{min}(\bar{\eta}_1,\,\bar{\eta}_2,\,\bar{\eta}_3).$\\\\ {\bf Acknowledgement.} The author is greatly indebted to Professor A. Bahri, Professor H. Brezis, Professor P. L. Lions, and Professor L. Nirenberg for their works that gave him the inspiration to prepare this work.
\section{Introduction} The COVID-19 pandemic has fundamentally changed society's perspective~\cite{hughes2010origin} on common spaces (e.g., airports, schools, commercial establishments). The ``new normal'' has heralded a new age of technological innovations, from contact tracing and telemedicine to newer screening methods~\cite{barnes2020challenges}. Although temperature screening has been widely implemented~\cite{bwire2020coronavirus,aw2020non} as a way to safely reopen, multi-vital screening provides greater insight into the health of an individual. Vital signs, including temperature, blood oxygen levels, and heart rate, are the easiest and most critical data points gathered from an individual to assess their general health~\cite{chen2020cardiovascular}. In emergency settings, patients have to be prioritized and guided to the correct place of treatment (``triage'') largely based on their vital signs~\cite{barfod2012abnormal, christ2016emergency, australasian2000guidelines,mchugh2012more}. Vitals screening methods are applied to high-throughput areas (e.g., airports, businesses, warehouses, factories), especially in the case of a global pandemic~\cite{fda2020enforcement}. One of the vitals that is important during screening is heart rate. Elevated heart rate (tachycardia) is a telltale sign associated with fever in the case of viral respiratory infections~\cite{karjalainen1986fever}. Another helpful vital is blood oxygen level (SpO2). Low blood oxygen levels were seen before the onset of a fever in many COVID-19 patients~\cite{chandra2020silent} and can indicate underlying signs of other viral diseases~\cite{chughtai2017presence}. Body temperature, heart rate, and blood oxygen levels together provide further insight into the health of an individual. Human body temperature is well established as one of the key vital signs~\cite{geneva2019normal}. The accepted mean value for normal human body temperature measured orally is 37°C (98.6°F).
However, newer research indicates that the average may actually be closer to 36.6°C (97.9°F)~\cite{protsiv2020decreasing}. Each individual has their own normal body temperature, which varies slightly from this value. Human body temperature constantly adapts to environmental conditions. A body temperature of 38°C (100.4°F) or more is considered a fever~\cite{BT}. The most recent viral epidemics have had fever as the most common symptom (e.g., Ebola, SARS, H1N1~\cite{ghassemi2018best}, and COVID-19~\cite{grant2020prevalence}). Fever detection is therefore one of the key components in screening, and all current screening techniques rely on remote body temperature measurements. Two problems must be acknowledged when measuring body temperature: i) normal body temperature variation; ii) infrared thermal imaging limitations. Body temperature can fluctuate based on the region selected for measurement~\cite{lenhardt2006estimation,lahiri2012medical}. Furthermore, research has shown that body temperature is a nonlinear function of several variables, such as age, state of health, gender, environmental temperature, and time of the diurnal cycle, among many others~\cite{geneva2019normal}. On average, healthy elderly people have a lower body temperature compared to younger adults~\cite{geneva2019normal}. Body temperature also adapts over the day (e.g., it rises in the afternoon and falls at night). Despite these minor variations, elevated body temperature is still a universally accepted indicator of fever. Remote body temperature screening is a fast, non-invasive alternative to conventional clinical thermometers for monitoring body temperature~\cite{nguyen2010comparison, lahiri2012medical}. Average external body temperature (peripheral skin temperature) is 2-4°C (3.6-7.2°F) less than the core temperature~\cite{lenhardt2006estimation}.
Therefore, mean body temperature must be calculated from external (or skin) temperature using an estimation algorithm. Infrared radiation emitted by a surface depends on environmental conditions such as moisture, airflow, and surrounding temperature~\cite{IPVM,chen2020investigation}. Other factors that impact temperature sensing are ambient temperature drift and aging of the sensor. An individual's thermal state also affects the radiated heat (e.g., running, coming from cold environments, etc.). Further, the distance and angle of the thermal camera relative to the subject play a critical role in the sensor's fidelity. Blackbody devices (temperature references) are known to solve the issues related to ambient temperature and sensor aging, improving the accuracy of the sensor. However, they are often forgone due to the cost or complexity of deployment~\cite{FLIR}. Remote body temperature sensing is an ideal alternative to clinical thermometers, which are sometimes cumbersome and often require an attendant~\cite{nguyen2010comparison, FDA6}. Beyond temperature sensing, blood oxygen levels are used to infer any impairment in lung function~\cite{tremper1989pulse}. Blood oxygen level is usually measured with a pulse oximeter (finger clip). A resting oxygen saturation level between 95\% and 100\% is regarded as normal for a healthy person at sea level, and below 95\% is considered abnormal~\cite{Hypoxemia,PulseOximetry, torp2020pulse,hafen2018oxygen}. Low blood oxygen can serve as an indicator of many different viral pneumonias~\cite{chandra2020silent}. The recent global pandemic (i.e., COVID-19) has demonstrated that many people can have dangerously low oxygen levels without showing any other symptoms (``silent hypoxemia'')~\cite{teo2020early}. The detection of low oxygen levels in asymptomatic individuals can facilitate early diagnosis of an underlying illness. Blood oxygen measurement thus serves as a key component of vital screening.
Heart rate is measured through pulse oximeters, in addition to blood oxygen levels. The American Heart Association defines the normal sinus heart rate as between 60 and 100 bpm at rest~\cite{avram2019real,d2015crosstalk,karjalainen1986fever} (it is important to note that athletes often have heart rates below 60 bpm at rest). Tachycardia is observed in cases of anemia, caffeine intake, and exercise~\cite{Tachycardia}. Tachycardia is seen concomitantly with fever due to an increase in the basal metabolic rate and cardiac output. In one study, when the temperature rose by 1°C (1.8°F) due to fever, the heart rate increased on average by 8.5 beats per minute~\cite{karjalainen1986fever}. Thus, tachycardia, when seen along with fever, can point to possible infection. Pulse oximetry technology involves shining light at specific wavelengths through tissue (most commonly the fingernail bed) and using a detector to determine the amount of light that passes through~\cite{demeulenaere2007pulse, castaneda2018review}. There are several inherent limitations to pulse oximetry. A common example of an interfering factor is a poor signal due to certain nail polish and artificial fingernails~\cite{luks2020pulse}. Poor peripheral perfusion because of cold, hypotension, or Raynaud's disease is the principal cause of failure to obtain a satisfactory signal, mainly because of an inadequate pulse wave~\cite{torp2020pulse,demeulenaere2007pulse}. Motion artifacts can interfere with signal detection because of an unstable waveform. Improperly seated sensors, shivering, seizures, or tremors can cause movement, leading to inaccurate readings. Pulse oximetry, despite its limitations, is universally recognized as an essential vital measurement tool. The need to take preventative measures to prepare for inevitable future outbreaks is apparent. We present a solution using existing sensors: Vital Screening Techniques (VIST).
VIST involves scanning an individual's multiple vitals within seconds using robust sensors. It provides the added layer of safety needed to move past the COVID-19 pandemic. \section{Vitals Screening Techniques} VIST encompasses any device that measures more than one independent vital (e.g., body temperature, heart rate, and blood oxygen level) of an individual. VIST devices usually have built-in sensors in the form of thermal cameras and pulse oximeters. Most of the products display the readings on a user-friendly interface. A live video feed usually allows the user to adjust their positioning. VIST allows for the rapid mass screening of individuals to ensure a safe environment. For external temperature sensing, thermal cameras are used to obtain targets' radiated temperature. The thermal camera should ideally be set up in a room-temperature environment~\cite{FDA3,FDA4, FDA5}. Measurements should be made only at a fixed distance, with the subject directly facing the camera. The incorporation of a blackbody device allows for higher accuracy in the narrow range of normal human body temperature. The subject should acclimatize to room temperature for a few minutes prior to measurement. Per the FDA~\cite{FDA6}, it is recommended to measure one person at a time. In pulse oximetry, blood oxygen levels and heart rate are measured through the process of photoplethysmography (PPG). PPG is a non-invasive technology that uses a light source and a photodetector at the surface of the skin (fingertip) to measure the volumetric variations of blood circulation with a resulting waveform. Pulse oximeters can incorporate either the transmissive or reflective mode. In the transmissive mode, the light sources and the photodiode are opposite to each other, with the measurement site between them~\cite{lee2016reflectance}. In the reflective mode, the light sources and photodiode are on the same side, and light is reflected to the photodiode across the measurement site.
The transmissive mode is not only the most commonly used method, but also the only clinically approved one, because of its high accuracy and stability. The clip style of the pulse oximeter probe eliminates some of the errors due to finger movement. The only drawback of the clip style is that it is difficult to clean and cannot incorporate UV-C disinfection. A pulse oximeter used in mass screening is a high-touch surface with the potential for disease transmission. Ultraviolet (UV-C) sterilization is one method that may be incorporated to address this concern. UV-C's effectiveness against different strains of viruses has long been established~\cite{buonanno2020far}. Studies show that UV-C light at 267 and 279 nm was very effective at inactivating the coronavirus~\cite{gerchman2020uv}. Recent studies have shown that the chance of transmission of SARS-CoV-2 through inanimate surfaces is less frequent than previously recognized~\cite{mondelli2020low}. Improper exposure to UV-C radiation poses risks to human skin and eyes. UV-C can be offered as an optional feature to alleviate concerns amongst the general public about touching this high-contact surface. \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{UI2.png} \centering \caption{\textbf{Sample interface from SymptomSense}} \label{fig:testbed-dack} \end{figure} \section{Evaluation} In this section, we evaluate Soter Technologies' sensors that are available for licensing, as well as fully integrated into products of various form factors (e.g., kiosk, handheld, gateway, etc.). \iffalse \begin{figure*} \includegraphics[width=1\textwidth, height = 15cm]{SS Kiosk vs other Kiosks - Sheet1.pdf} \vspace{-60pt} \caption{\textbf{In-depth comparison between existing products in the market ~\cite{TAVIS, OLEA,Ksubaka,Rapidscreen}}} \label{fig:comparative} \end{figure*} \fi \subsection{Sensor Fidelity} We compare Soter's sensors against other sensors on the market.
The thermal sensor is compared against the Braun thermometer~\cite{Braun} and Famidoc~\cite{Famidoc}. Braun has an FDA 510(k) Premarket Notification, and Famidoc passed the EU accuracy standards for infrared thermometers. To the best of our knowledge, there is no existing remote thermal scanner that is FDA approved. The experiments were done over 20 different individuals with ages ranging from 23 to 65 and various skin pigments. For each test, 10 samples were collected, leading to a total of 250 scans per sensor. \textbf{Temperature Sensing.} Soter's thermal camera warms up within 2 minutes after start-up. With an incorporated blackbody, ambient temperature does not affect readings. There is no drift or aging of the sensor due to the built-in blackbody technology. Multi-point temperature readings are made from different regions of the face. A unique weighted estimation algorithm is used to adjust skin temperature to body temperature. Only one exposed region is necessary to obtain a reading, allowing for accurate temperature readings from individuals with head coverings, masks, etc. We find that Famidoc has ±0.09°C (0.17°F) precision and Braun has ±0.12°C (0.21°F) precision. On average, Famidoc and Braun are about 0.40°C (0.72°F) apart from each other's measurements. We find that Braun cannot measure people with lower-than-average body temperature (e.g., body temperature of 96°F or less). For lower-body-temperature individuals, we find that Famidoc performs better than Braun, though with degraded accuracy and precision. Soter's temperature sensor retains a precision of ±0.28°C (0.51°F) and an accuracy of ±0.15°C (0.27°F) compared to Braun. Compared to Famidoc, Soter's thermal sensor has an accuracy of ±0.18°C (0.33°F). Further, Soter's sensor is able to robustly detect individuals with generally lower-than-average body temperature with high precision and accuracy relative to Braun and Famidoc. The average scan time of Soter's temperature sensor is 4.76 seconds.
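The multi-point, weighted skin-to-core estimation step mentioned above can be illustrated with a minimal sketch. Everything here is a hedged assumption: the region names, weights, and fixed offset are invented for illustration (the literature cited earlier only indicates that peripheral skin temperature runs roughly 2-4°C below core temperature), and this is not Soter's proprietary algorithm.

```python
# Illustrative sketch of a weighted skin-to-core temperature estimator.
# The weights, region names, and offset are assumptions, not Soter's
# proprietary algorithm.

def estimate_core_temp(region_temps, weights=None, offset=3.0):
    """Estimate core body temperature (deg C) from per-region skin readings.

    region_temps: dict mapping facial region -> measured skin temp (deg C).
    weights: relative reliability of each region (defaults to uniform).
    offset: assumed mean skin-to-core difference (literature cites 2-4 deg C).
    """
    if weights is None:
        weights = {region: 1.0 for region in region_temps}
    total_weight = sum(weights[r] for r in region_temps)
    weighted_skin = sum(region_temps[r] * weights[r]
                        for r in region_temps) / total_weight
    return weighted_skin + offset

# Example: the inner canthus is weighted more heavily than the cheek,
# reflecting the idea that some facial regions track core temperature better.
temps = {"inner_canthus": 34.6, "forehead": 34.1, "cheek": 33.2}
w = {"inner_canthus": 3.0, "forehead": 2.0, "cheek": 1.0}
print(round(estimate_core_temp(temps, w), 2))
```

A production algorithm would also fold in ambient temperature, distance, and blackbody calibration; this sketch shows only the weighted-average core of the idea.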
Further, we find that Soter's sensor yielded 100\% completed scans, compared to 97.5\% for Braun (Famidoc also had 100\% completed scans). This shows that Soter's sensor can detect a larger population than the Braun sensor. \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{Accuracycompared.png} \centering \caption{\textbf{Accuracy of Temperature Sensors}} \label{fig:accuracy} \end{figure} \begin{table*}[] \resizebox{1.05\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Available} & \multirow{2}{*}{\checkmark} & \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}SymptomSense\\ Kiosk\end{tabular}}} & \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}SymptomSense \\ Gateway\end{tabular}}} & \multirow{3}{*}{\textbf{TAVIS}} & \multirow{3}{*}{\textbf{Olea Irvine}} & \multirow{3}{*}{\textbf{Olea Austin +}} & \multirow{3}{*}{\textbf{Ksubaka}} & \multirow{3}{*}{\textbf{Rapidscreen}} \\ & & & & & & & & \\ \cline{1-2} Unavailable & X & & & & & & & \\ \hline \multirow{4}{*}{\textbf{Temperature}} & Thermal Sensor & \checkmark &\checkmark& \checkmark& \checkmark& \checkmark& \checkmark& \checkmark\\ \cline{2-9} & Blackbody & \checkmark& \checkmark& X & X & X & X & X \\ \cline{2-9} & Live Video & \checkmark& X & \checkmark& \checkmark& \checkmark& \checkmark& \checkmark\\ \cline{2-9} & Facial Detection & \checkmark& \checkmark& X & \checkmark& \checkmark& \checkmark& \checkmark\\ \hline \multirow{2}{*}{\textbf{Pulse Oximeter}} & Blood Oxygen & \checkmark& \checkmark& \checkmark& X & X & X & X \\ \cline{2-9} & Heart Rate & \checkmark& \checkmark& \checkmark & X & X & X & X \\ \hline \multicolumn{1}{|l|}{\multirow{2}{*}{}} & Configurations & \checkmark& \checkmark &\checkmark & X & \checkmark & X & \checkmark \\ \cline{2-9} \multicolumn{1}{|l|}{} & Integration Potential &\checkmark&\checkmark&\checkmark & X & \checkmark & X & X \\ \hline \end{tabular}% } \caption{{\textbf{In-depth comparison between existing products in the market
~\cite{Symptomsense,TAVIS, OLEA,Ksubaka,Rapidscreen}}. The table shows existing sensors that provide reliable temperature sensing by combining known techniques, thus enabling the highest sensing accuracy (e.g., facial detection, blackbody usage, etc.). We also show existing products in the market that support a pulse oximetry sensor to provide oxygen level and heart rate readings.}} \label{tab:my-table} \end{table*} \textbf{Heart Rate and Blood Oxygen.} Soter's product line offers various fully integrated pulse oximeters available in different form factors (open design, clip, etc.). All of Soter's pulse oximeters use transmissive PPG technology. Here, we evaluate the off-the-shelf, medical-grade, FDA-approved pulse oximeter (Nonin 3231)\footnote{Test subjects included those that traditionally may have difficulty obtaining pulse oximetry readings, including those with powder-coated nails, poor perfusion, as well as cold hands.}. In our testing, we find the average pulse oximetry scan time is 7.7 seconds. The Nonin 3231 yielded 100\% successfully completed scans. This pulse oximeter is FDA approved and has undergone a clinical study with a stated accuracy of ±1.31\% between 70-100\% blood oxygen~\cite{Nonin3231}. Heart rate accuracy is ±3 digits between 20-250 bpm. The Nonin 3231 clip pulse oximeter measures a wide range of heart rates: 18-321 beats per minute. It can measure blood oxygen levels between 0-100\%. The ratio between the amplitude of the red light at 660 nm and the infrared light at 910 nm wavelength is used to determine oxygen saturation. The pulse oximeter requires only 2 beats and uses an averaging algorithm to obtain an accurate heart rate. Longer scan times (with more readings taken) will result in more accurate pulse oximetry values.
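The red/infrared amplitude ratio described above is commonly reduced to a ``ratio of ratios'' $R$, for which a rough empirical linear calibration of the form $\mathrm{SpO_2} \approx 110 - 25R$ is often quoted. The sketch below is illustrative only: real oximeters use device-specific calibration curves, and the function name and sample values here are assumptions.

```python
# Illustrative "ratio of ratios" SpO2 estimate from PPG amplitudes.
# The linear calibration SpO2 ~ 110 - 25*R is a commonly cited rough
# approximation; actual devices use per-device calibration curves.

def spo2_from_ppg(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate blood oxygen saturation (%) from red/IR PPG components."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # ratio of ratios
    spo2 = 110.0 - 25.0 * r                  # empirical linear calibration
    return max(0.0, min(100.0, spo2))        # clamp to the physical range

# Example: R around 0.5 corresponds to roughly 97-98% saturation.
print(round(spo2_from_ppg(0.02, 1.0, 0.04, 1.0), 1))
```

The AC components are the pulsatile parts of the red and infrared waveforms and the DC components are their baselines, which is why the quantity is a ratio of ratios rather than a simple red-to-infrared amplitude ratio.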
\subsection{User Experience} As an example of existing products' user interfaces, SymptomSense provides a user-friendly interface (Figure 1) where users can read their results (either vital values or pass/fail) right after screening. Figure 1 shows a completed scan with temperature, heart rate, and blood oxygen readings (on the left of the interface) and a completed pulse oximetry scan (on the right). Once readings are obtained, visual indicators show whether the individual has passed the screening, per the acceptable ranges defined by the system operator. \subsection{Existing Products} Table~\ref{tab:my-table} shows an in-depth comparison between existing products that leverage both single- and multi-vital sensing. Configurations include the ability to set vital ranges and to display pass/fail versus numeric readings. Integration features include an optional battery, printer, barcode system, etc. \section{Discussion} \textbf{UV-C and standard medical pulse oximeter.} There is a shift toward incorporating UV-C into pulse oximeters. Soter's proprietary pulse oximeter with optional UV-C is currently in the final phase of development. This pulse oximeter utilizes a double fail-safe to automatically turn off the UV-C light when a finger is detected, ensuring limited exposure to harmful UV-C rays. UV-C light at 275 nm (limited to the contact area) disinfects the surface for a 30-second duration after every use. To our knowledge, there is currently no FDA-approved medical finger pulse oximeter with UV-C disinfection capability. \textbf{Remote respiratory rate is in development.} There is no established technology that can remotely measure the respiratory rate with high confidence. Nonetheless, there is recent work with various technologies to estimate and measure the respiratory rate (thermal sensing~\cite{chauvin2014contact}, mmWave~\cite{alizadeh2019remote}).
Given the current issues remote respiratory rate technology is facing (e.g., random individual motion and the need for better sensors), it is not yet mature for the consumer market. Moreover, the individual has to be scanned for a long duration (e.g., $>$10 seconds) to obtain a valid reading with high confidence, since the respiratory rate (5-20 breaths per minute) is much lower than the heart rate. \textbf{Screening best practices.} The FDA~\cite{FDA1,FDA2, FDA3, FDA4, FDA5, FDA6} has provided proper guidelines on screening best practices (including thermal camera environment and calibration). Soter's sensors make the process of following the FDA procedure easier, as they have a self-calibrated, blackbody-based thermal system that shows the final output to the user. Soter does not claim to do multi-person detection, which has been shown to be ineffective. The Nonin 3231 pulse oximeter is already classified as a medical device and is FDA approved. \textbf{Further screening.} The entity using our kiosks may choose to do further screening/testing before rejecting an individual that our system flags~\cite{FDA4}. We leave the guidelines for handling the user to the entity, as they may vary given the location's conditions. We recommend following CDC guidelines. \textbf{HIPAA compliance.} Most screening systems do not store or retain any information about the user and their vitals; they only show the vital results for a brief moment. Facial recognition technology is currently not implemented in the SymptomSense product line; thus, there is no need for HIPAA clearance, enabling anonymized vital screening~\cite{gerke2020regulatory}. \section{Acknowledgement} We thank Asheik Hussain and Cary Chu for their invaluable feedback and continuous assistance with this work. \section{Conclusion} In this paper, we introduce VIST, vitals screening techniques. We cover state-of-the-art vitals scanners on the market, with an in-depth comparison between them.
We further cover details about how the underlying sensing technologies work and their drawbacks. { \bibliographystyle{unsrt}
\section{Introduction} In the past few decades, large-scale surveys have played a huge role in advancing our understanding of the universe, and these surveys have produced enormous reservoirs of data that astronomers regularly access. However, tools for accessing these reservoirs are heterogeneous and often only available via graphical user interfaces (GUIs) or websites. One of the cornerstones of research is reproducibility. To be able to reproduce research, the data need to be available to everyone. Many scientific journals encourage or demand that the underlying data accompany the article or be uploaded to a hosting service. Data sharing is not only important for new results, but also for providing the ability to test and verify published results. While many different efforts to promote data sharing have made the practice more common, it is difficult to keep track of how and where to retrieve a given data set. A common scripted interface to tie all these services together is a good way to make all the different data more accessible, and it provides authors with the ability to make the full analysis process they used -- from data download to publication -- repeatable. A centrally maintained library also safeguards against inevitable `link rot' on data archives, moving some of the responsibility for maintaining long-term reproducibility from each individual researcher to the broader community. Data sharing has taken on a variety of forms. The most prominent are the major observatory archives: MAST, NOAO, ESO, ESA, IPAC, CDS, NRAO, CXC, HEASARC, and CADC are the main organizations hosting raw and processed data from ground- and space-based telescopes. These data archives also serve as the primary means for serving data to users when the data are taken in queue mode, i.e., when the data are taken while the observer is not on-site.
\begin{deluxetable*}{lp{8.5cm}ll} \tablecaption{List of all Services \& Surveys \package{astroquery} modules support.} \label{tab:surveys} \tablehead{Module name & Service or Organization & URL} \startdata \package{alfalfa} & ALFALFA data repository & \url{http://arecibo.tc.cornell.edu/hiarchive/alfalfa} \\ \package{alma} & Atacama Large Millimeter/submillimeter Array Archive & \url{http://almascience.org} \\ \package{atomic} & Atomic Line List & \url{http://www.pa.uky.edu/~peter/atomic} \\ \package{besancon} & Besancon model of the Galaxy& \url{http://model.obs-besancon.fr} \\ \package{cds} & Centre de Données astronomiques de Strasbourg & \url{http://cds.u-strasbg.fr} \\ \package{cosmosim} & CosmoSim database & \url{https://www.cosmosim.org/uws/query} \\ \package{esasky} & ESASky of the European Space Agency & \url{http://sky.esa.int} \\ \package{eso} & European Southern Observatory Science Archive & \url{http://archive.eso.org/cms.html} \\ \package{exoplanet\_orbit\_database} & Exoplanet Orbit Database& \url{http://exoplanets.org}\\ \package{fermi} & Fermi Gamma-ray Space Telescope Data & \url{https://fermi.gsfc.nasa.gov/ssc/data} \\ \package{gaia} & Gaia Archive of the European Space Agency & \url{https://gea.esac.esa.int/archive} \\ \package{gama} & Galaxy and Mass Assembly Survey & \url{http://www.gama-survey.org/dr2/query} \\ \package{heasarc} & High Energy Astrophysics Science Archive Research Center & \url{https://heasarc.gsfc.nasa.gov} \\ \package{hitran} & HIgh-resolution TRANsmission molecular absorption database & \url{http://hitran.org/hapi} \\ \package{ibe} & IRSA Image Server & \url{http://irsa.ipac.caltech.edu/ibe} \\ \package{irsa} & IRSA Catalog Query Service & \url{https://irsa.ipac.caltech.edu} \\ \package{irsa\_dust} & IRSA Galactic Dust Reddening and Extinction Query & \url{https://irsa.ipac.caltech.edu/applications/DUST} \\ \package{jplhorizons} & JPL's HORIZONS system & \url{https://ssd.jpl.nasa.gov/horizons_batch.cgi} \\ \package{jplsbdb} 
&JPL's Small-Body DataBase & \url{https://ssd-api.jpl.nasa.gov/doc/sbdb.html} \\ \package{jplspec} &JPL's Spectral Catalog & \url{https://spec.jpl.nasa.gov/cgi-bin/catform} \\ \package{lamda} & Leiden Atomic and Molecular Database & \url{http://home.strw.leidenuniv.nl/~moldata} \\ \package{magpis} & The Multi-Array Galactic Plane Imaging Survey & \url{https://third.ucllnl.org/gps} \\ \package{mast} & Barbara A. Mikulski Archive for Space Telescopes & \url{https://mast.stsci.edu} \\ \package{mpc} & Minor Planet Center Ephemeris Service & \url{https://minorplanetcenter.net} \\ \package{nasa\_ads} & SAO/NASA Astrophysics Data System & \url{https://api.adsabs.harvard.edu} \\ \package{nasa\_exoplanet\_archive} &NASA Exoplanet Archive & \url{https://exoplanetarchive.ipac.caltech.edu} \\ \package{ned} & NASA Extragalactic Database & \url{https://ned.ipac.caltech.edu} \\ \package{nist} &NIST Atomic Spectra Database & \url{https://physics.nist.gov/PhysRefData/ASD} \\ \package{nrao} & National Radio Astronomy Observatory Data Archive & \url{https://archive.nrao.edu/archive}\\ \package{nvas} & NRAO VLA Archive Survey Images Page & \url{https://archive.nrao.edu/nvas} \\ \package{oac} & Open Astronomy Catalog & \url{https://astrocats.space} \\ \package{ogle} & Interstellar Extinction toward the Galactic Bulge from OGLE-III data & \url{http://ogle.astrouw.edu.pl/cgi-ogle/getext.py} \\ \package{open\_exoplanet\_catalogue} & Open Exoplanet Catalogue & \url {http://openexoplanetcatalogue.com} \\ \package{sdss} & Sloan Digital Sky Survey & \url{http://skyserver.sdss.org}\\ \package{sha} & Spitzer Heritage Archive & \url{http://sha.ipac.caltech.edu/applications/Spitzer/SHA } \\ \package{simbad} & CDS SIMBAD Astronomical Database & \url{http://simbad.u-strasbg.fr}\\ \package{skyview} & NASA's SkyView Query & \url{ http://skyview.gsfc.nasa.gov} \\ \package{splatalogue} & Splatalogue Database for astronomical spectroscopy query & \url{https://www.cv.nrao.edu/php/splat} \\ 
\package{ukidss} & UKIRT Infrared Deep Sky Survey & \url{http://wsa.roe.ac.uk}\\ \package{vamdc} & VAMDC molecular line database & \url{https://vamdclib.readthedocs.io/} \\ \package{vizier} & CDS VizieR Astronomical Catalogues & \url{http://vizier.u-strasbg.fr} \\ \package{vo\_conesearch} &Simple Cone Search Databases & \url{https://astropy.stsci.edu/aux/vo_databases} \\ \package{vsa} & Vista Science Archive & \url{http://vsa.roe.ac.uk}\\ \package{xmatch} & CDS X-Match Service & \url{http://cdsxmatch.u-strasbg.fr} \\ \enddata \end{deluxetable*} In addition to observatories and telescopes, individual surveys often share their full data sets. In some cases, these data sets are shared via the observatory that acquired them, for example, the all-sky data acquired with Planck, WMAP, and COBE\@. Other surveys, particularly ground-based surveys, serve their own data. Examples include SDSS, 2MASS, UKIDSS, and VSA. Individual teams and small groups often share their data via their own custom websites. These services do not follow any particular standard and can be widely varied in the type and amount of data shared. Sometimes these data are shared via the archive systems (e.g., IRSA at IPAC hosts many individual survey data sets), while others use their own web hosting systems (e.g., MAGPIS). Finally, there are other data types relevant to astronomy that are not served by the typical astronomical databases. Examples include databases of molecular and atomic properties, such as those provided by Splatalogue and the NIST Atomic Spectra Database, bibliographic databases such as the NASA Astrophysics Data System (ADS), or services that are computationally intensive or require regular updates, like Solar System ephemerides provided by services like JPL HORIZONS, or the Minor Planet Center. \package{astroquery} arose from a desire to access these databases from the Python command line in a scriptable fashion. 
Script-based data access provides astronomers with the ability to make reproducible analysis scripts and pipelines in which the data are retrieved and processed into scientifically relevant results with minimal user interaction. In this paper, we provide an overview of the \package{astroquery} package. Section \ref{sec:software} describes the basic layout of the software and the shared API concept underlying all modules. Section \ref{sec:development} describes the development model. Finally, Section \ref{sec:documentation} describes how \package{astroquery} is documented. \section{The Software} \label{sec:software} \package{astroquery} consists of a collection of modules that mostly share a similar interface, but are meant to be used independently. They are primarily based on a common framework that uses the Python \package{requests}\footnote{\url{http://docs.python-requests.org/}} package to perform HTTP requests to communicate with web services. For new module development, there is a \texttt{template\_module} consisting of a folder with several individual Python code files that lays out the basic framework of any new module. All modules have a single core \texttt{class} that has some number of \texttt{query\_*} methods. The most common query method is \texttt{query\_region}, which usually provides a ``cone search'' functionality, i.e., it searches for data within a circular region.
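A minimal sketch of what a new module following the \texttt{template\_module} pattern might look like is shown below. The class name, endpoint URL, and helper methods are hypothetical, and a plain list of dicts stands in for the \texttt{astropy} \texttt{Table} and HTTP machinery a real module would use.

```python
# Hypothetical sketch of an astroquery-style module following the
# template_module pattern: one core class exposing query_* methods.
# "FakeArchiveClass" and its URL are illustrative, not a real service.

class FakeArchiveClass:
    """Core class for a hypothetical archive module."""

    URL = "https://archive.example.org/search"  # assumed endpoint

    def query_region(self, coordinates, radius="5 arcmin"):
        """Cone search: return rows within `radius` of `coordinates`.

        A real module would issue an HTTP request via `requests` and
        parse the response into an astropy Table; here we return a
        plain list of dicts to keep the sketch self-contained.
        """
        params = {"coord": coordinates, "radius": radius}
        return self._parse_response(self._fake_request(params))

    def _fake_request(self, params):
        # Stand-in for the shared HTTP-request helper a real module uses.
        return {"rows": [{"name": "obj1", "coord": params["coord"]}]}

    def _parse_response(self, response):
        return response["rows"]

# Modules conventionally expose a ready-made instance alongside the class:
FakeArchive = FakeArchiveClass()
rows = FakeArchive.query_region("05h35m17s -05d23m28s")
print(rows[0]["name"])
```

Exposing both the class and a pre-built instance mirrors the \texttt{Simbad}/\texttt{SimbadClass} convention used in the example below, so users can call \texttt{query\_region} without instantiating anything themselves.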
The results of the queries are then returned in an \package{astropy} \citep{Astropy-Collaboration2018, Astropy-Collaboration2013} \texttt{Table}.\footnote{\url{http://docs.astropy.org/en/stable/table/}} An example using the SIMBAD interface is shown below:\footnote{\url{http://astroquery.readthedocs.io/en/latest/simbad/simbad.html}} \begin{lstlisting}[caption=Query SIMBAD for a region around M81] from astroquery.simbad import Simbad result_table = Simbad.query_region("m81") \end{lstlisting} In this example, \texttt{Simbad} is an instance of \\ \texttt{astroquery.simbad.SimbadClass}, and \texttt{result\_table} is an \texttt{astropy.table.Table} containing the objects near M81. This common interface allows users to use different services and process the resulting data in the same manner despite the differences in the underlying methods and services (e.g., \texttt{SDSS.query\_region()}, \texttt{Simbad.query\_region()}, \texttt{NED.query\_region()}, etc.). While there is a common suggested API described in the \texttt{template\_module}, individual packages are not \emph{required} to support this API because, for some, it is not possible. For example, the atomic and molecular databases refer to physical data that is not related to positions on the sky and therefore their \package{astroquery} modules cannot include \texttt{query\_region} methods. The same applies to Solar System object ephemerides queries. Differences in the API are discussed in the \package{astroquery} documentation (see Section \ref{sec:documentation}). \subsection{Version Numbers} \label{sec:versionnumbers} \package{astroquery} uses the same format as traditional semantic versioning, with versions indicated in the format \texttt{MAJOR.MINOR.PATCH.devCOMMIT\_ID} (for example, \texttt{0.3.9.dev4581}). \package{astroquery} patches are frequently made to accommodate upstream changes, i.e., changes made to the remote service, and as such are not guaranteed to be backward-compatible.
Thus, starting in mid-2018, \package{astroquery} switched from a manual release model to a continuous deployment model. Prior to this change, the \texttt{MAJOR.MINOR.PATCH} versions were each created manually by one of the maintainers, then pushed to package release services. After this change, each accepted pull request automatically triggered a new release via the Python Package Index.\footnote{\url{https://pypi.org/}} We created a new manual release, v0.3.9, to accompany the publication of this paper. \subsection{HTTP User-Agent} \label{sec:useragent} \package{astroquery} identifies itself to host services using the \texttt{HTTP User-Agent} header data, which is automatically produced and sent to the archives with every request. Users do not need to be aware of this metadata being sent with their queries, but the information can be used by data hosting services to determine how many users are accessing their service via \package{astroquery} and to assist in debugging if improper queries are being submitted. The format of the User-Agent string is: \[\texttt{astroquery/\{version\} \{requests\_version\}}\] where \texttt{\{version\}} is a version number of the form described in \S \ref{sec:versionnumbers} and \texttt{\{requests\_version\}} is the corresponding version of the Python \package{requests} package. For example: \[\texttt{astroquery/0.3.9.dev4863 python-requests/2.14.2}\] \subsection{The API} The common API has a few features defined in the template module. Each service is expected to provide the following interfaces, assuming they are applicable: \begin{itemize} \item \texttt{query\_region} - A method that accepts an Astropy\xspace \texttt{SkyCoord} object representing a point on the sky plus a specification of the radius around which to search. The returned object is an Astropy\xspace table. \item \texttt{query\_object} - A method that accepts the name of an object.
This method relies on the service to resolve the object name, i.e., it does not use a name resolver like \texttt{SESAME}.\footnote{\url{http://cds.u-strasbg.fr/cgi-bin/Sesame}} The returned object is an Astropy\xspace table. \item \texttt{get\_images} - For services that provide image data, this method accepts an Astropy\xspace \texttt{SkyCoord} object and a radius to search for data that cover the specified target. The returned object is a list of \texttt{astropy.io.fits.HDUList} objects. \end{itemize} We also require a low-level interface to the services so that queries with very large results can be handled by other methods (e.g., data streaming) if needed. The low-level interface consists of a series of methods with the same names, but with the additional suffix \texttt{\_async} (e.g., \texttt{query\_async}). The \texttt{query*\_async} methods return a \texttt{requests.Response} object from the accessed website, providing developers with the ability to access the data in a stream or access only the response metadata (i.e., the \texttt{async} methods do not download the corresponding data, so they may be useful for collecting metadata for very large files). The \texttt{get\_images\_async} method returns \texttt{FileContainer} objects that similarly provide `lazy' access to the data, but specifically for FITS files. Contributors need only implement these \texttt{\_async} methods because there is a wrapper tool that converts \texttt{\_async} methods into their corresponding non-asynchronous versions. Deviations from this standard API are documented in the \package{astroquery} documentation (see Section \ref{sec:documentation}). Most deviations are for services for which \texttt{query\_region} methods are not defined, such as atomic and molecular line databases. \subsection{Caching and login functionality} Astroquery provides tools to handle multiple aspects of querying that are common to all modules. 
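The \texttt{\_async}-to-synchronous wrapping described above can be illustrated with a minimal sketch. The helper and class names below are hypothetical, and response parsing is reduced to splitting text lines; the real wrapper in \package{astroquery} builds \package{astropy} tables from full HTTP responses.

\begin{lstlisting}[caption={Sketch of wrapping an \_async method (names hypothetical)}]
import functools

class FakeResponse:
    """Stand-in for requests.Response, for illustration only."""
    def __init__(self, text):
        self.text = text

def parse_response(response):
    # A real module would parse the payload into an astropy Table
    return response.text.splitlines()

def sync_from_async(async_method):
    """Build a synchronous query method from its *_async counterpart."""
    @functools.wraps(async_method)
    def query(*args, **kwargs):
        response = async_method(*args, **kwargs)
        return parse_response(response)
    return query

class DemoClass:
    def query_object_async(self, name):
        # A real module would perform the HTTP request here
        return FakeResponse("id,ra,dec\n{0},148.9,69.1".format(name))
    query_object = sync_from_async(query_object_async)
\end{lstlisting}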
The \texttt{BaseQuery} metaclass provides tools for caching requests and downloaded data, reducing the duration and the network load for repeated queries. Cached data are stored in the user's \texttt{\textasciitilde/.astropy/cache/astroquery} directory. The \texttt{BaseQuery} metaclass is also responsible for setting the User-Agent (\S \ref{sec:useragent}). The \texttt{QueryWithLogin} metaclass provides a framework for logging in securely to services that require user authentication, including a credential storage mechanism. \subsection{Error handling} Some queries will inevitably fail. Failures can take different forms. For common and expected modes, such as searching for an object or location on the sky and getting no results, the result is clearly communicated as a simple null result or empty table. For unpredictable and unexpected errors, such as server failures, timeouts, and other related communication issues, the errors are handled by the \texttt{requests} module, and normal HTTP responses are returned (e.g., HTTP 200 means the request was successful, while 503 indicates the service was temporarily unavailable; a complete list can be found at \url{https://en.wikipedia.org/wiki/List_of_HTTP_status_codes}). In some cases, when we know a particular failure mode is likely (because the developers have encountered it at least once), we catch and raise a specific \texttt{Exception} or \texttt{Warning}. The full list of these is in the \texttt{exceptions.py} file. Developers can use these custom exceptions to build additional robustness into data pipelines that use astroquery, either by implementing workarounds to known issues or by correctly informing users of the problem. \subsection{Testing} Astroquery testing is somewhat different from most other packages in the scientific Python ecosystem.
While the tests are based on the Astropy\xspace testing infrastructure and use pytest to run and check the outputs, the astroquery tests are split into \emph{remote} and \emph{local}. The remote tests exactly replicate what a user would enter at the command line, but they are dependent on the stability of the remote services. In our experience it is quite rare for all of the astroquery-supported services to be accessible simultaneously.\footnote{While this issue affects testing, it rarely affects users, since simply retrying a query is often enough to fix user issues. When the servers are simply down or broken, astroquery is affected, and the resulting errors are sometimes unpredictable; users are encouraged to report such failures as github issues (\url{https://github.com/astropy/astroquery/issues}) so that better error messages can be provided.} We therefore require that each module provide some tests that do not rely on having an internet connection. These tests rely on \emph{monkeypatching}\footnote{Monkeypatching is the dynamic replacement of attributes at runtime, i.e., changing what functions do after they are imported.} to replace the remote requests. Instead of downloading data, the test suite uses locally available files to test the query mechanisms and the data parsers. Monkeypatching in the context of pytest results in code that is generally more difficult to understand than typical Python code, but a set of tests independent of the remote services is necessary. The local tests are run as part of the continuous integration for the project with each commit. The remote tests are run for merges and as part of a regularly-scheduled cron job. Running the remote tests less frequently helps reduce the burden on the remote services. 
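The monkeypatching idea behind the local tests can be sketched as follows. The module and function names are hypothetical, and the replacement simply returns a canned string, whereas the real test suite uses pytest's \texttt{monkeypatch} fixture to substitute locally stored response files.

\begin{lstlisting}[caption={Sketch of a monkeypatched local test (names hypothetical)}]
class DemoModule:
    @staticmethod
    def http_get(url):
        # The real implementation would perform a network request
        raise RuntimeError("network access not allowed in local tests")

def query_region(region):
    return DemoModule.http_get("http://service.example/q=" + region)

def fake_http_get(url):
    # The real test suite would load a previously captured response
    # from a data file shipped alongside the tests
    return "canned response for " + url

# The monkeypatch: dynamically replace the attribute at runtime
DemoModule.http_get = staticmethod(fake_http_get)
result = query_region("M81")
\end{lstlisting}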
\subsection{Other utilities} There are several general-use utilities implemented as part of astroquery, such as a bulk FITS file downloader and renamer and a download progressbar (these tools complement similar features in Astropy\xspace). There is also a schema system implemented to allow user-side parameter validation. The schema systems are basic syntax-checking tools that verify that the parameters the user has input are of the right type and format for the target service; for those services without schemas, the user can hypothetically send queries that the service will be unable to handle. The schema tool is only implemented in the ESO and Vizier modules, but it could be expanded to other modules to reduce the number of doomed-to-fail queries sent through astroquery. \section{Development history and status} \label{sec:development} Anyone can contribute to astroquery. The maintainers are committed to helping developers make new modules that meet the requirements of astroquery. This section describes how astroquery has been developed, but we welcome all sorts of new contributions, including new modules, upgrades to existing modules, and minor corrections to existing tools from both individuals and institutions. Astroquery is an Astropy\xspace coordinated package \citep{APE15} and is a critical component of the Astropy\xspace Project ecosystem \citep{Astropy-Collaboration2018}. It is a standalone project and will remain independent of the \package{astropy} core package,\footnote{Many Astropy\xspace affiliated packages are developed with the intent of eventually including them in the core of \package{astropy}. In contrast, astroquery intends to remain a separate package indefinitely largely because of its need to rapidly adapt to changes in the remote services; \package{astropy} cannot make such rapid changes because users rely on its stability.} but is coordinated by the Astropy\xspace Project to ensure sustainability and maintenance. 
Astroquery has received contributions from 77 people as of August 2018. While the primary maintenance burden is shouldered by two people at any given time (the first two authors), most individual modules have been implemented independently by interested contributors. Some contributions have come with direct institutional support. The ESA Gaia and ESASky modules were provided and supported by developers working for ESA\@. The ADS module is maintained by developers working at ADS. The MAST and VO Cone Search query tools were added by developers at STScI, with the latter moved over from \texttt{astropy.vo} (see Section \ref{sec:vo}). Astroquery also receives contributions from other funded programs. For instance, the JPLHorizons module has been implemented as part of the \texttt{sbpy} project\footnote{\url{http://sbpy.org}} with support from NASA. Further Solar System-related services are planned to be added to astroquery through this support. Astroquery has also received support from the Google Summer of Code program, which supported two students (co-authors Madhura Parikh and Simon Liedtke) in 2013 and 2014. Due to its nature as an openly developed package, new directions in astroquery are primarily driven by contributors and data providers adding or updating modules to reflect new or changed data sources. The underlying software architecture has been demonstrably sufficient to meet the needs of the current generation of data sources, as demonstrated by astroquery's user base. While this policy may change in the future, the user-focused nature of astroquery means that making such architecture changes is unnecessary until there are specific data sources or use cases to drive them. \subsection{Relation to the VO} \label{sec:vo} The Virtual Observatory (VO) has some goals similar to astroquery, though its approach and philosophy are different.
Where VO services provide a single point of access for all VO-compatible services, astroquery provides a collection of access points that do not require a specific API from the hosting service. The general philosophy in astroquery is to replicate the web page interface provided by a given service as closely as possible. While this approach makes some versions of cross-archive searches more difficult, it keeps the barrier to entry for new users fairly low and limits the maintenance burden for upstream developers. However, there are developments in progress to allow more VO-like queries within astroquery, such as searching for databases by keywords. As more services implement VO-based access, some query modules may adopt VO as a backend, but these changes should be transparent to users (i.e., the astroquery interfaces will remain unchanged). The documentation may guide users on how to use the more sophisticated VO tools that underlie these interfaces. Some general VO tools are available in astroquery. The \texttt{vo\_conesearch} package, which originally resided in Astropy\xspace, is now part of astroquery. VO Cone Search has a \texttt{query\_region} interface like the other astroquery services in addition to the existing interfaces ported over from Astropy\xspace. As of \package{astropy} 3.0, \texttt{astropy.vo} no longer exists; therefore, astroquery is now the primary provider of this VO Cone Search service. From a typical user's standpoint, switching over from \texttt{astropy.vo} should result in no difference except for updating their Python \texttt{import} statements (e.g., \texttt{from astroquery.vo\_conesearch import conesearch} instead of \texttt{from astropy.vo.client import conesearch}). \section{Documentation and References} \label{sec:documentation} \subsection{Online documentation} The astroquery modules are documented online and can be accessed at \url{https://astroquery.readthedocs.io/}.
We include one detailed example of how to use astroquery in Appendix \ref{sec:example}, but interested users will find many more on the documentation page and in the example gallery.\footnote{\url{https://astroquery.readthedocs.io/en/latest/gallery.html}} \subsection{Other Documents} Several authors have independently described how to use various astroquery modules, which is a helpful practice we encourage. \begin{itemize} \item Cosmosim:\footnote{\url{https://www.cosmosim.org/cms/news/cosmosim-package-for-astroquery/}} a worked example of downloading data from the cosmosim database, including logging in. \item \citet{Paletou2014a}: a worked example of querying Vizier and SIMBAD to make a surface gravity - effective temperature plot for a star survey. \item \citet{Guillochon2018a}: the definition of the Open Astronomy Catalog API and a description of the astroquery module built to use it. \item MAST:\footnote{\url{https://github.com/spacetelescope/MAST-API-Notebooks/blob/master/AstroqueryIntro/AstroqueryFunctionalityDemo.ipynb}} A tutorial on the MAST astroquery interface. \item GAIA:\footnote{\url{https://gea.esac.esa.int/archive-help/tutorials/python_cluster/index.html}} A tutorial on the GAIA astroquery interface. \end{itemize} \section{Summary} Astroquery is a toolkit for accessing remotely hosted astronomical data through Python. It is part of the \package{astropy} affiliated package system. We have described its general layout, its development model, and its role in developing reproducible workflows. Astroquery is developed for and by our community: we welcome any new contributions, and such contributions will continue to define the future directions of the package. \acknowledgements We would like to thank the members of the community that have contributed to \package{astroquery}, that have opened issues and provided feedback, and have supported the project in a number of different ways. 
We are grateful for the infrastructural support the Astropy\xspace community provides. \package{astroquery} is supported by and makes use of a number of organizations and services outside the traditional academic community: GitHub, Travis CI, Appveyor, and Read the Docs. Our package relies heavily on the following Python dependencies, and we are grateful to their maintainers and contributors: \package{requests}, \package{beautifulsoup}, and \package{keyring}. We thank Google for financing and organizing the Google Summer of Code program, which funded two students (SL and MP) to work on \package{astroquery} in 2013 and 2014. The following individuals would like to recognize support for their personal contributions. BMS is supported by the NSF grant AST-1715122 and acknowledges support from the DIRAC Institute in the Department of Astronomy at the University of Washington. The DIRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences, and the Washington Research Foundation. The contributions of MM, MVB, and GG are supported by the NASA PDART grant 80NSSC18K0987. \software{Astropy\xspace \citep{Astropy-Collaboration2018}, \package{numpy} \citep{numpy}, \package{requests}, \package{keyring}, \package{beautifulsoup4}, \package{html5lib}, \package{matplotlib} \citep{matplotlib}, \package{APLpy} \citep{aplpy}, \package{pyregions} \citep{pyregions}, \package{regions} \citep{regions} } \bibliographystyle{aasjournal}
\section{Introduction} The current distribution of asteroids results from many events in the past: radial transport triggered by the migration of the gas giants, catastrophic disruptions among asteroids, mixing of asteroids, solar-radiation-induced radial scattering, and gravitational perturbations through planetary resonances. It is currently widely accepted that asteroids were formed in places different from their current locations and were subsequently displaced by the migration of the gas and ice giants \citep[e.g.,][]{Morbidelli2005,Walsh2011}. This hypothesis of drastic repositioning of asteroids is also supported by recent laboratory analyses of meteorites: the bimodal distribution of isotopic ratios between non-carbonaceous and carbonaceous chondrites \citep[e.g.,][]{Warren2011, Kruijer2020, Bermingham2020}. The current distribution of asteroids can provide important constraints on models that depict the evolution of the Solar System from its beginning to its current state. Hydrated minerals on minor bodies are especially important when discussing the origin of volatiles on Earth \citep{Owen&Bar-Nun1995}. Primitive dark minor bodies, which inherit many hydrated minerals from the primordial Solar System, are the most plausible sources of the water later accreted by the Earth \citep{Morbidelli2000}. Moreover, if the water on the Earth was delivered from outside, carbonaceous chondritic asteroids are more plausible sources than comets \citep{Altwegg2015, Marty2017}. This motivated the recent sample return missions, Hayabusa2 by the Japanese Aerospace Exploration Agency (JAXA) \citep{Watanabe2019} and OSIRIS-REx by the National Aeronautics and Space Administration (NASA) \citep{Lauretta2019}, to the primitive asteroids (162173) Ryugu and (101955) Bennu, respectively. Hydrated asteroids are mainly composed of phyllosilicates, which are a product of aqueous alteration of anhydrous silicate rocks in hydrothermal systems.
The presence of phyllosilicates constrains the environmental parameters of the early Solar System, such as the water-rock ratio and the accretion timing of the planetesimals. Asteroids are usually classified by spectral shapes and albedos \citep[e.g.,][]{Tholen1984, BB2002b, DeMeo2009, Mahlke2022}. The first well-established taxonomy was defined using principal component analysis in the near-ultraviolet (NUV) to visible (VIS) range with albedo by \citet{Tholen1984}. He found three major clusters in the principal component space: S, C-X, and D. Those clusters are well defined, and their members might have formed in different regions of the early Solar System. The term primitive asteroids often refers to the C-X and D classes. However, the origins of the C-X and D classes might be different. D-class asteroids are especially dominant among the Jupiter Trojans \citep{DeMeo&Carry2014}. They are hypothesized to be planetesimals formed in trans-Neptunian orbits and captured into co-orbital motion with Jupiter during the epoch when the migrating giant planets removed neighboring objects \citep{Morbidelli2005, Levison2009}. Moreover, D-class asteroids are known to exhibit very red spectra in the visible wavelength region, compatible with those of trans-Neptunian objects and cometary nuclei \citep{Jewitt&Luu1990, Luu1994, Emery&Brown2003, Fornasier2004, Campins2006, Licandro2008, Licandro2018, DeMeo&Binzel2008}. More recently, the silicate spectral features found in Spitzer mid-infrared spectra of comet nuclei were matched to those of D-class Trojans \citep{Kelley2017}. Thus, the formation regions and materials of the C-X and D classes could be fundamentally different. To date, however, observations of, and especially samples from, the D class are not sufficient to establish a fundamental difference between the C-X and D classes. We leave this question to future sample-return and/or in-situ remote-sensing spacecraft missions such as the Martian Moons eXplorer \citep{Campagnola2018}.
On the other hand, the differences between the C and X complexes are not obvious in the principal component space defined by the NUV to VIS asteroid spectra, and could be continuous. X-class asteroids can be categorized into P, M, and E by albedo value. P-class asteroids have low albedos similar to those of C-class asteroids. This suggests a possible continuous compositional variation between the C and P classes. Within the C complex, \citet{Tholen1984} defined the F, B, and G sub-classes. The F class is the blue-to-flat visible spectral end member of the C complex, defined by its distance from the central cluster of the C complex in the principal component analysis space. In particular, members of the F class show less drop-off toward near-ultraviolet wavelengths. The B class shows a blue visible spectral slope and a drop-off into the infrared. It is also known that the albedo of B-class asteroids, such as (2) Pallas, is marginally higher than that of typical C-class asteroids. The G class is assigned to objects with exceptionally deep ultraviolet absorption. These sub-classes are not well separated from the C complex, and should be considered as end members of the C complex. The NUV absorption is observed in spectra of both hydrous meteorites and primitive asteroids. The steep absorption shortward of 0.4 $\mu$m is attributable to an Fe$^{2+}$-Fe$^{3+}$ charge transfer band \citep{Gaffey&McCord1979}. Hydrated layer-lattice silicate or clay mineral grains comprise the bulk of CI-CM assemblages. Hydrated mineral grains contain both Fe$^{2+}$ and Fe$^{3+}$, giving rise to an intense charge transfer absorption in the blue, which is evident even in very low albedo asteroids. Other iron-rich silicates, such as the olivine and pyroxene in CV and CO meteorites, also produce NUV absorption.
However, these minerals have lower optical densities than hydrated silicates, which means their UV absorption is more effectively suppressed by the presence of opaque minerals, and high optical density hydrated silicates can dominate the NUV spectral reflectance \citep{Feierberg1981}. The strong correlation between the NUV and 3-$\mu$m absorptions was first suggested by \citet{Feierberg1985}, who measured the reflectance at 2.92 $\mu$m instead of the OH-band itself. Most hydrous meteorites, CI, CM, and CR, exhibit a strong absorption in the NUV \citep{Cloutis2011a, Cloutis2011b, Cloutis2012, Hiroi2021}. \citet{Hiroi1996a, Hiroi1996b} showed the correlation between the NUV and OH-band absorptions for hydrous meteorites, including the heated Murchison (CM2) and Ivuna (CI1) meteorites. More specifically, once meteorites are heated and dehydrated, both the NUV and OH-band absorptions are weakened. They also pointed out that some naturally thermally metamorphosed CI/CM meteorites (ATCC) exhibit much weaker NUV absorptions. Even though these laboratory measurements suggest the possibility of detecting hydrated silicates through the NUV absorption, the quantitative distribution of NUV absorption among asteroids has not been discussed. This might be because of the difficulty of NUV reflectance observations. One reason is the low NUV sensitivity of CCDs and the rapid decrease of the solar photon flux at shorter wavelengths. Moreover, because atmospheric Rayleigh scattering is stronger at shorter wavelengths, the observable photon flux is much smaller and the signal-to-noise ratio (SNR) is lower in the NUV region from the ground. Another reason is the scarcity of well-characterized solar analogs. Spectroscopic measurements of asteroid reflectance usually divide the asteroid flux by the flux of a solar analog observed under similar sky conditions. However, there are very few well-characterized solar analogs in the NUV \citep{Hardorp1978, Tedesco1982, Tatsumi2022}.
Thus, for quantitative spectroscopic measurements in the NUV, the solar analogs need to be investigated first. There have been a few attempts to do this with ground-based spectroscopic studies in the NUV \citep{Tatsumi2022}; however, the possible uncertainties and errors are not yet adequately understood. On the other hand, photometric studies are more reliable in the NUV, because the photometric filters are well studied and characterized by numerous standard stars, and the broad bands yield a high SNR. Thus, the well-defined asteroid reflectance colors from the NUV photometric surveys are suitable datasets for our investigation. In this study, we investigate the NUV absorption distribution among asteroids from the main asteroid belt to the Cybele and Hilda regions, and discuss the distribution of hydrated asteroids. Finally, we discuss the formation of primitive asteroids/planetesimals. Our study can serve as a milestone for future NUV investigations of asteroid surveys such as the Gaia DR3 spectroscopic data \citep{Galluccio2022,Tanga2022} and the J-PLUS data \citep{Morate2021}. \section{Diagnostics of hydrated minerals in other wavelength regions} Hydrated minerals on asteroids have been investigated by reflectance spectroscopy. A direct indication of hydrated minerals can be found around the 3-$\mu$m wavelength range. \citet{Lebofsky1978} first detected the 3-$\mu$m absorption on (1) Ceres by ground-based observations, which was later confirmed by the in-situ observations of the Dawn spacecraft \citep{DeSanctis2015, Ammannito2016}. Later, more primitive asteroids were found to have the 3-$\mu$m absorption (in fact, the large majority of the C and P classes observed in this region have it). There is variation in the depth, center, and shape of this band \citep{Lebofsky1980, Lebofsky1990, Feierberg1985, Jones1990, Rivkin1995, Rivkin2002, Rivkin2015, Rivkin2019, Rivkin2022, Takir&Emery2012, Takir2015}.
The 3-$\mu$m band is a broad and complex absorption of metal-OH, interlayer water, water ice, NH$_3$-bearing phases, carbonates, and organics. Metal-bearing phyllosilicates, such as serpentine and saponite, exhibit a sharp absorption centered at 2.7 -- 2.8 $\mu$m, hereafter the OH-band for convenience. The peak wavelength can evolve from the weakly altered CMs (maximum depth at $\sim$2.8 $\mu$m) to the extensively altered CMs/CIs (maximum depth at $\sim$2.7 $\mu$m), attributed to variations in the chemistry of the phyllosilicate phases from Fe-rich to Mg-rich \citep{Beck2010}. The OH-band overlaps with the atmospheric water band and cannot be directly observed from ground-based telescopes. The AKARI space infrared telescope directly investigated this region for 66 asteroids \citep{Usui2019} and found that 17 out of 22 (77\%) C-complex asteroids and 5 out of 8 (63\%) low-albedo ($p_V<0.11$) X-complex asteroids have significant OH-band absorption \citep{Usui2019}. Only two out of 17 S-complex asteroids show a possible OH-band absorption. One T-class asteroid, (308) Polyxo, shows the OH-band absorption, while none of the three D-class asteroids do (although only one T-class asteroid was observed by AKARI). It should be noted that, so far, a possible OH-band was detected on only one D-type asteroid, (773) Irmintraud, by ground-based observations \citep{Kanno2003}. However, (773) Irmintraud was later observed by AKARI and the OH-band was not confirmed \citep{Usui2019}. The shallow absorption feature around 0.7 $\mu$m is also known to be associated with Fe-bearing phyllosilicates and is caused by Fe$^{2+}$-Fe$^{3+}$ intervalence charge transfer. Given the abundance of Fe-bearing phyllosilicates in CMs, many CMs exhibit this feature. The 0.7-$\mu$m band has been observed on asteroids as well \citep{Vilas&Gaffey1989, Vilas1993, Vilas1994, Barucci1998}.
The strong correlation between the 0.7-$\mu$m band and the OH-band was pointed out by various studies \citep{Vilas1994, Howell2011, Rivkin2015}. AKARI confirmed that the 0.7-$\mu$m feature is always associated with the OH-band \citep{Usui2019}. However, the absence of the 0.7-$\mu$m feature does not necessarily indicate an anhydrous object \citep{Rivkin2015, Usui2019}. Thus, the presence of the 0.7-$\mu$m band on asteroids is more likely a proxy for CM-like mineral composition rather than for phyllosilicates in general. The 0.7-$\mu$m band central wavelengths of asteroids are shorter than those of CM meteorites. \citet{Fornasier2014} suggested that CM2 meteorites could be only a subset of those asteroids with the 0.7-$\mu$m band, based on the comparison of central wavelengths between CM2 meteorites and primitive asteroids. Alternatively, based on laboratory reflectance spectra, \citet{Vilas1994} proposed that the difference in the central wavelength of the 0.7-$\mu$m absorption feature observed between CM2 meteorites and primitive asteroids is a function of the temperature on the asteroid surfaces. It is also possible that the 0.7-$\mu$m band is shallower than the OH-band, meaning that there could be an observational bias due to the low signal-to-noise ratio for the 0.7-$\mu$m band. The 0.7-$\mu$m band, however, is modeled to be detectable in a visible spectrum having a signal-to-noise ratio of 10 \citep{Vilas1997}. \section{Asteroid spectrophotometry dataset} The number of asteroid observational datasets extending down to the NUV wavelength range is quite limited so far. Three photometric surveys, the Sloan Digital Sky Survey (SDSS), the Eight Color Asteroid Survey (ECAS), and J-PLUS, have been conducted down to wavelengths $<0.4$ $\mu$m. We did not include the J-PLUS data in our analysis because most of its objects are included in the SDSS and ECAS.
ECAS obtained photometric information using eight filters: \texttt{s} (0.337 $\mu$m), \texttt{u} (0.359 $\mu$m), \texttt{b} (0.437 $\mu$m), \texttt{v} (0.550 $\mu$m), \texttt{w} (0.701 $\mu$m), \texttt{x} (0.853 $\mu$m), \texttt{p} (0.948 $\mu$m), and \texttt{z} (1.041 $\mu$m) \citep{Zellner1985}, and SDSS used five filters: $u$ (0.3557 $\mu$m), $g$ (0.4825 $\mu$m), $r$ (0.6261 $\mu$m), $i$ (0.7672 $\mu$m), and $z$ (0.9097 $\mu$m) \citep{Fukugita1996}. The SDSS and ECAS surveys are complementary in terms of the absolute magnitude of the objects: SDSS mostly observed objects with $8<\mathcal{H}<20$, while the objects in ECAS mostly have $\mathcal{H}<10$ (Fig. \ref{fig:sample}). This is because SDSS is not able to measure the largest, brightest asteroids. Thus, we used these two datasets to cover a wide size range of asteroids. In addition, we also used the NIR spectra obtained by the AKARI space telescope, and the NIR spectrophotometry obtained by MOVIS. \begin{figure}[ht] \centering \includegraphics[width=\hsize]{fig/H_samples_v20221212.pdf} \caption{Absolute magnitude distribution of asteroids. The distributions of primitive asteroids (PAs) from the SDSS and ECAS datasets used in this study are indicated by orange and blue hatches. The integrated PA dataset of SDSS and ECAS is shown by the green line. The black dashed line is the distribution of all asteroids in the ASTORB database (accessed in Dec. 2022). The x-axis on the top corresponds to the approximate diameter assuming an albedo value $p_V=0.06$, typical of C types, using Eq. \ref{eq5}.} \label{fig:sample} \end{figure} \subsection{ECAS} The Eight Color Asteroid Survey (ECAS)\footnote{\url{https://sbn.psi.edu/pds/resource/ecas.html}} is a photometric survey of 589 asteroids, among which 405 asteroids were chosen as high-quality data. The authors derived color indices of asteroids such that the mean color indices are zero for four well-characterized solar analogs \citep{Tedesco1982}.
Thus, the spectral reflectance $R_\lambda$ can be obtained by $\log(R_\lambda)=\pm 0.4 c_\lambda$, where $c_\lambda$ is the tabulated color index and the negative sign is chosen for wavelengths shorter than the \texttt{v} band. \citet{Tholen1984} developed the taxonomic classification based on cluster analysis in the principal component space applied to the ECAS dataset with known geometric albedos, which were given mainly by the Tucson Revised Index of Asteroid Data \citep[TRIAD;][]{Morrison&Zellner1979}. We used the albedo $p_V$, absolute magnitude $\mathcal{H}$, and diameter $d$ values adopted from the AcuA dataset by AKARI \citep{Usui2011} and the NEOWISE dataset \citep{MainzerPDS}, which are newer datasets and cover a larger number of asteroids, while the original Tholen taxonomy used the albedos from TRIAD \citep{Bender1978}. The comparison between the AKARI, IRAS, and NEOWISE surveys suggests that NEOWISE might overestimate the albedo for large asteroids, possibly owing to detector saturation \citep{Usui2014}. AKARI completed the albedo survey for asteroids with $\mathcal{H}<9$, while WISE measured many small asteroids, whose distribution peaks at $\mathcal{H}\sim 15$. By updating the albedos using these two catalogs, we could exploit the albedos of 536 out of the 589 asteroids in ECAS. Furthermore, we found that there is a significant discrepancy in the albedo values between AKARI+NEOWISE and TRIAD. Figure \ref{fig:TRIAD} shows the albedos of the same asteroids in AKARI+NEOWISE and TRIAD, suggesting that the TRIAD dataset may underestimate albedo. A linear fit to this dataset indicates that the AKARI+NEOWISE values are 1.4 times the TRIAD values. This affects the classification of the E (high albedo), M (medium albedo), and P (low albedo) classes within the X complex using albedo. Although originally the threshold of the P class was $p_V<0.08$, using the AKARI+NEOWISE dataset the threshold is better set at $p_V<0.11$.
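A linear fit of the kind described above can be sketched as a least-squares fit through the origin; the exact fitting procedure used for Fig. \ref{fig:TRIAD} is not specified here, and the albedo values below are invented for illustration.

\begin{lstlisting}[caption={Illustrative through-origin linear fit (invented data)}]
def fit_through_origin(x, y):
    """Slope a minimizing sum((y - a*x)^2): a = sum(x*y) / sum(x*x)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Hypothetical albedo pairs (TRIAD, AKARI+NEOWISE), chosen to follow y = 1.4x
triad = [0.03, 0.05, 0.07, 0.10]
akari_neowise = [0.042, 0.070, 0.098, 0.140]
slope = fit_through_origin(triad, akari_neowise)   # ~1.4
\end{lstlisting}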
Moreover, the bimodal albedo histogram of the X class exhibits a minimum around $p_V\sim0.11$ \citep{Usui2013}. Thus, we use $p_V<0.11$ for classifying the P class in this study. \begin{figure} \centering \includegraphics[width=\hsize]{fig/Albedo_comparison_TRIAD.add.pdf} \caption{Albedo values in TRIAD (data from \citealt{Morrison&ZellnerPDS}) compared with the AKARI+NEOWISE dataset. The solid line indicates a linear fit $y=1.4x$ and the dashed line indicates $y=x$.} \label{fig:TRIAD} \end{figure} We used 212 high-quality asteroids classified into the C complex, P (low-albedo X), and D classes based on Tholen's taxonomy, with albedo values adopted from AKARI+NEOWISE. We calculated the reflectance at the SDSS wavelengths by interpolating the two reflectance values at the nearest ECAS wavelengths, and the spectral slopes were then calculated with the following equations: \begin{align} S_{\rm NUV}&=\frac{(R_g/R_u)-1}{\lambda_{g}-\lambda_{u}}, \label{eq1}\\ S_{\rm VIS}&=\frac{(R_i/R_g)-1}{\lambda_{i}-\lambda_{g}}, \end{align} where $R$ is the reflectance at the SDSS filter interpolated from the two nearest filters of the ECAS filter system, and $\lambda$ is the effective filter wavelength of the SDSS filter. In addition, we defined the NUV absorption strength as $S_{\rm NUV}-S_{\rm VIS}$. We also measured the indication of a 0.7-$\mu$m band absorption in the following two ways: \begin{align} {\rm HYD_{\rm ECAS}}&=1-\frac{2R_{\tt w}}{R_{\tt v}+R_{\tt x}}, \label{eqhyd}\\ {\rm HYD_{\rm SDSS}}&=1-\frac{2R_i}{R_r+R_z}. \end{align} Positive values of these parameters may indicate a stronger absorption at 0.7 $\mu$m. The albedo, spectral slope, NUV absorption, and HYD for each taxonomic class are summarized in Table \ref{table:tax}. Some asteroids overlap in multiple classes, and thus the total number of samples in all taxonomic classes exceeds 212.
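For illustration, the slope and hydration indices defined above can be computed with a few lines of Python. This is a minimal sketch, not the pipeline used for this work: the input reflectances are assumed to be normalized and already interpolated to the SDSS effective wavelengths quoted above, and the example values are hypothetical.

```python
# Sketch of the slope and 0.7-um hydration indices defined above.
# Wavelengths are in microns; the dictionary holds the SDSS effective
# wavelengths quoted in the text.

LAM = {"u": 0.3557, "g": 0.4825, "r": 0.6261, "i": 0.7672, "z": 0.9097}

def slope(r_long, r_short, lam_long, lam_short):
    """Spectral slope (um^-1) between two reflectance points."""
    return (r_long / r_short - 1.0) / (lam_long - lam_short)

def hyd(r_center, r_left, r_right):
    """0.7-um band indicator: positive for a concave (absorbing) shape."""
    return 1.0 - 2.0 * r_center / (r_left + r_right)

# A flat (featureless) spectrum gives zero for every index.
s_nuv = slope(1.0, 1.0, LAM["g"], LAM["u"])
s_vis = slope(1.0, 1.0, LAM["i"], LAM["g"])
nuv_absorption = s_nuv - s_vis
print(s_nuv, s_vis, nuv_absorption, hyd(1.0, 1.0, 1.0))
```

A reflectance dip at the central wavelength makes `hyd` positive, matching the sign convention of the text.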
This result suggests that the VIS spectral slope $S_{\rm VIS}$ and the NUV absorption strength can classify these primitive asteroid classes. The G class exhibits the deepest NUV absorption by definition. The C and B classes have similar NUV absorption strengths, but they differ in VIS spectral slope. There are also large differences in albedo: the G and B classes are especially bright among primitive asteroids. Moreover, although the wavelengths of the SDSS filter system are not optimized for measuring the 0.7-$\mu$m band depth, a good correlation between HYD$_{\rm ECAS}$ and HYD$_{\rm SDSS}$ is seen in Table \ref{table:tax}. This agrees with the previous work by \citet{Rivkin2012}, who showed that Ch and Cgh asteroids (in the SMASS taxonomy), which exhibit the 0.7-$\mu$m band absorption, have positive values of HYD. This suggests that HYD$_{\rm SDSS}$ could be a good indicator of the 0.7-$\mu$m band. Nevertheless, the high HYD values for D types might be due to their concave visible reflectance spectral shapes. \begin{table*} \caption{Spectral slopes and albedo for each of Tholen's primitive taxonomy classes. Note that some asteroids were classified into multiple classes in \citet{Tholen1984}, and thus the total number (244) of samples exceeds 212.} \label{table:tax} \centering \begin{tabular}{c c c cc c cc} \hline\hline Class & Sample No.
& Albedo & $S_{\rm VIS}$& $S_{\rm NUV}$ & $S_{\rm NUV}- S_{\rm VIS}$ & HYD$_{\rm ECAS}$ & HYD$_{\rm SDSS}$\\ & & (\%) & ($\mu$m$^{-1}$) & ($\mu$m$^{-1}$) & ($\mu$m$^{-1}$) & (\%) & (\%)\\ \hline C & 127& $5.9\pm 2.3$ & $0.09\pm0.13$ & $0.90\pm 0.32$ & $0.81\pm0.31$ & $0.3\pm2.4$ & $-0.1\pm1.2$\\ F & 25 & $5.9 \pm 2.2$ & $-0.04\pm 0.14$ & $0.28 \pm 0.25$ & $0.33\pm0.26$ & $-1.1\pm2.7$ & $-0.7\pm1.1$\\ G & 8 & $8.0\pm 0.9$ & $0.08 \pm 0.07$ & $1.58\pm 0.14$ & $1.58\pm0.14$ & $1.7\pm2.1$ & $0.7\pm0.8$\\ B & 11 & $9.9\pm3.7$ & $-0.07\pm0.11$ & $0.74\pm0.27$ & $0.81\pm0.33$ &$-1.1\pm1.7$ &$-0.03\pm1.0$\\ P & 52 & $4.9\pm1.4$ & $0.29\pm0.15$ & $0.56\pm0.29$ & $0.31\pm0.31$ &$-1.2\pm1.7$ &$-0.9\pm1.2$\\ D & 21 & $5.1\pm1.5$& $0.91\pm0.11$& $0.49\pm0.27$ & $-0.42\pm0.27$ &$-0.8\pm1.7$ &$-0.4\pm1.1$\\ \hline \end{tabular} \end{table*} \subsection{SDSS} SDSS is an untargeted broadband photometric survey. Several catalogs of Solar System objects have been extracted from SDSS so far. The SDSS Moving Object Catalog (MOC) is a series of photometric catalogs of moving targets, mainly asteroids \citep{Ivezic2002}. The fourth release of the SDSS MOC was published in 2008 \citep{Ivezic2010}. It contains more than 100,000 known Solar System objects at that time, although it covers only a fraction of the entire SDSS dataset. More recently, \citet{Sergeyev&Carry2021} conducted an exhaustive search for moving objects in SDSS images covering the entire observational period. This catalog includes $\sim 380,000$ known Solar System objects, and its completeness is estimated to be about 95\%. We limited the semi-major axis to 2--5.2 au to cover asteroids from the main belt to the Cybele, Hilda, and Jupiter Trojan zones. In the same way as for the ECAS dataset, the albedo values were adopted from the AKARI and NEOWISE data. We did not include asteroids without albedo values in our analysis. First, good-quality data were selected based on the photometry flag and the observational errors.
\citet{Sergeyev&Carry2021} defined a photometry flag to discriminate bad data. Moreover, since the $u$-band observations tend to have larger errors, we used only objects with $u$-band errors smaller than 0.1 mag. Second, to separate primitive asteroids from the whole dataset, we applied the following thresholds based on the color boundaries of the B, C, X, and D complexes calculated by \citet{Sergeyev&Carry2021}: the bluer group with $(g-r) < 0.55$ and $(i-z) > -0.15$ for the C complex, and the redder group with $(g-r) \geq 0.55$ and $p_V < 0.11$ for the P and D classes. This is because it is known that the B class sometimes contains high-albedo members \citep[e.g.,][]{Tholen1984, Usui2013, Ali-Lagoa2013}. The primitive asteroids selected from the whole dataset are shown in Fig. \ref{fig:sdss_color}. Also, we did not use faint samples with absolute magnitude $\mathcal{H} > 17.5$ for main belt asteroids, to avoid the observational bias whereby small, dim objects at large distances are less observable. These selections gave us the final photometric spectra of 8,956 objects. As for the ECAS dataset, the VIS and NUV spectral slopes were calculated using the following equations: \begin{align} S_{\rm NUV}&=\frac{10^{-0.4(g-u-(g-u)_\odot)}-1}{\lambda_{g}-\lambda_{u}},\\ S_{\rm VIS}&=\frac{10^{-0.4(i-g-(i-g)_\odot)}-1}{\lambda_{i}-\lambda_{g}}, \end{align} where $\lambda$ is the effective wavelength of the filter and the solar colors are $(i-g)_\odot=-0.57$ and $(g-u)_\odot=-1.40$ from \citet{Holmberg2006}. \begin{figure} \centering \includegraphics[width=0.9\hsize]{fig/sdss_HQ_color.pdf} \caption{Primitive asteroids (blue) separated from other classes (orange) based on our criteria in the color plots: (a) ($g-r$) vs. ($i-z$); (b) ($g-r$) vs. ($g-i$); and (c) ($u-g$) vs. ($g-i$).} \label{fig:sdss_color} \end{figure} \subsection{Comparison between SDSS and ECAS} Using the SDSS and ECAS datasets, we obtained the spectrophotometry of 9,168 objects in total.
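The magnitude-to-slope conversion applied to the SDSS sample can be sketched as follows. This is only an illustrative sketch, not the pipeline used for this work; the solar colors and effective wavelengths are the values quoted above, and the example magnitudes are hypothetical.

```python
# Sketch of the SDSS magnitude-to-slope conversion defined above.

LAM_U, LAM_G, LAM_I = 0.3557, 0.4825, 0.7672   # effective wavelengths (um)
GU_SUN, IG_SUN = -1.40, -0.57                  # (g-u) and (i-g) solar colors

def relative_reflectance(color, solar_color):
    """Reflectance ratio of two bands from a color index (mag)."""
    return 10.0 ** (-0.4 * (color - solar_color))

def s_nuv(g, u):
    """NUV spectral slope (um^-1) from u and g magnitudes."""
    return (relative_reflectance(g - u, GU_SUN) - 1.0) / (LAM_G - LAM_U)

def s_vis(i, g):
    """VIS spectral slope (um^-1) from g and i magnitudes."""
    return (relative_reflectance(i - g, IG_SUN) - 1.0) / (LAM_I - LAM_G)

# A hypothetical solar-colored object has (near-)zero slopes by construction.
print(s_nuv(g=15.0, u=16.4), s_vis(i=14.43, g=15.0))
```

An object redder than the Sun in $(g-i)$ yields a positive VIS slope, consistent with the sign convention of the equations above.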
We estimated the diameter from the absolute magnitude. The transformation between the asteroid absolute magnitude $\mathcal{H}$ and its effective diameter $d$ [km] requires knowledge of the albedo $p_V$ \citep{Harris1997}, \begin{equation} \mathcal{H}=18.1-2.5\log \left(\frac{p_V}{0.1}\right)-5\log(d).\label{eq5} \end{equation} Among the asteroids that have been observed multiple times with SDSS, there are 132 objects observed in common by SDSS and ECAS (including all taxonomic classes). We therefore compared the VIS and NUV spectral slopes in both catalogs and found slight offsets between the values from the two catalogs (Fig. \ref{fig:caldiff}). This might be due to uncertainty in the solar colors in the broad-band filters and to the interpolation of the ECAS spectra to match the SDSS wavelengths. Thus, we corrected this offset by subtracting the median differences in spectral slopes between SDSS and ECAS, $-0.27$ $\mu$m$^{-1}$ and $-0.08$ $\mu$m$^{-1}$, from the NUV and VIS slope values calculated from SDSS, respectively. \begin{figure}[ht] \centering \includegraphics[width=0.9\hsize]{fig/CAL_DIFF_CS2021.pdf} \caption{Difference of spectral slope values of the common objects between ECAS and SDSS. We measured 132 common asteroids between the two catalogs and compared the values. We see a shift of $-0.08$ [$\mu$m$^{-1}$] for the VIS slope and $-0.27$ [$\mu$m$^{-1}$] for the NUV slope in the SDSS data.} \label{fig:caldiff} \end{figure} \section{Relation between the OH-band and the NUV absorption} In this section, we test the relationship between the OH-band and the NUV absorption, which was raised by \citet{Feierberg1985, Hiroi1993, Hiroi1996a, Hiroi1996b}. We used recent datasets of meteorites and asteroids to check whether the NUV absorption can be a proxy for hydrated minerals among the primitive asteroids. \subsection{Meteorites} Primitive asteroids are linked to carbonaceous chondrites.
In particular, the hydration of meteorites and asteroids is thought to occur through aqueous alteration of precursor anhydrous minerals. CM, CI, and CR meteorites are reported to have phyllosilicate abundances of 70 -- 90 vol\%, 81 -- 84 vol\%, and 1 -- 70 vol\%, respectively \citep{Bland2004, Howard2011, Howard2015, King2015}. Some hydrous carbonaceous chondrites experienced subsequent thermal metamorphism \citep[ATCCs;][]{Ikeda1992, Nakamura2005, Tonui2014}. The phyllosilicate abundance of ATCCs varies from 0 to >80 vol\% depending on the degree of heating \citep{King2021}. ATCCs in heating stage IV (heating temperatures of >750 $^\circ$C) do not contain phyllosilicates owing to decomposition, while those in heating stages I and II (300 -- 500 $^\circ$C) show phyllosilicate abundances similar to unheated CM/CIs \citep{King2021}. In this study we focus on the hydrous carbonaceous chondrites and ATCCs to illustrate the spectral characteristics related to hydration and dehydration states. \citet{Hiroi2021} recently measured the reflectance spectra of 148 carbonaceous chondrites selected from the Antarctic meteorite collections of the National Institute of Polar Research in Japan, NASA Johnson Space Center, and the Smithsonian Institution National Museum of Natural History. Together with the previous data from the RELAB database (see Table 3 in \citealt{Hiroi2021}), 78 spot spectra of 77 hydrous meteorites (CI, CM, CR, Tagish Lake, and ATCCs) were obtained, covering the wavelength range of 0.3 -- 4 $\mu$m under ambient air. They conducted sixth-order Gaussian fitting to evaluate the atmospheric water contamination. Water absorption is a broad, round-shaped absorption typically centered around 3.1 $\mu$m. Here we use the fitted Gaussian bands having center wavelengths shorter than 2.8 $\mu$m so as to evaluate the real OH-band absorption. \citet{Hiroi1993} showed that naturally-heated CI/CM meteorites (ATCCs) and experimentally-heated Murchison samples have similar UV to NIR spectral shapes.
Their experimental and natural CI/CM spectra show a strong correlation between the NUV and OH-band absorption strengths. However, at that time, adsorbed water was not taken into account in the measurement of the OH-band, and indeed a broad absorption around 3 $\mu$m can be observed in their spectra. Thus, using the sixth-order Gaussian fitting given by \citet{Hiroi2021}, we removed the contamination of terrestrial adsorbed water by taking into account only the Gaussian components that have center wavelengths shorter than 2.8 $\mu$m. Figure \ref{fig:met} shows the NUV and OH-band absorption strengths of hydrous meteorites. We find a correlation similar to that in \citet{Hiroi1993}: the more hydrated meteorites show deeper NUV absorption. The ATCC samples (CM(D)/CM(U)/CI(D)) follow the heating-experiment trends well. This is because the phyllosilicates are decomposed by heating. Another important point here is that powder samples (open symbols) tend to show deeper absorptions than chip samples. When compared with other hydrated carbonaceous chondrites, Tagish Lake exhibits a lower NUV absorption, possibly because the absorption features are masked by its high carbon content, $\sim5.4$ wt\% \citep{Brown2000}. Tagish Lake has an extremely low reflectance and does not have any feature in the wavelength range $<2.5$ $\mu$m. Tagish Lake is also known to have a very red visible reflectance spectrum \citep{Hiroi2001}. Tagish Lake's dark-red spectrum without the strong NUV absorption suggests a similarity to the P and D classes. Some P-class asteroids exhibit the OH-band absorption, suggesting that Tagish Lake could be their meteorite analog. Although a D-class asteroid with a clear OH-band absorption has not been found, the sample number is quite small, and the analog meteorites of the P and D classes need further investigation in the OH-band region. Our result shows a strong correlation between the NUV and OH-band absorption strengths in hydrous meteorite spectra.
In particular, CMs and CIs exhibit OH-band depths of up to 50\% and NUV absorption strengths of 3 -- 4 $\mu$m$^{-1}$. CMs without the 0.7-$\mu$m band absorption (blue triangles) exhibit relatively shallow absorptions in both the OH-band and the NUV. \begin{figure}[ht] \centering \includegraphics[width=0.9\hsize]{fig/Meteorite_OH_NUV-VIS.rev.pdf} \caption{OH-band depth and NUV absorption of hydrated carbonaceous chondritic meteorites. The meteorite sample spectra are from \citet{Hiroi2021} (available at the RELAB database). Open symbols indicate powder samples and filled symbols indicate chip samples. CMs are classified into those with (circles) and without (triangles) the 0.7-$\mu$m band absorption. Squares with lines show the heating experiments of Ivuna (CI1) and Murchison (CM2) conducted in \citet{Hiroi1996a,Hiroi1996b}. Unheated Ivuna and Murchison are shown by squares with circles inside. CM(D), CM(U), and CI(D) are ATCCs.} \label{fig:met} \end{figure} \subsection{Asteroids} AKARI is an infrared space telescope developed by JAXA \citep{Murakami2007}. Using the Infrared Camera onboard AKARI, infrared reflectance spectra from 2.5 to 5 $\mu$m were obtained for 66 asteroids, including 34 successfully observed primitive asteroids (C-complex, P, and D classes) \citep{Usui2019}, making this the first large survey directly observing the 3-$\mu$m region. All the primitive asteroids observed by AKARI are larger than 80 km in diameter. Although the correlation between the OH-band and NUV absorptions was suggested by \citet{Feierberg1985} and \citet{Hiroi1996a, Hiroi1996b}, at that time the OH-band (2.7-$\mu$m band) of asteroids was not directly observed, and the 2.9 -- 3.0 $\mu$m depth was measured instead. Thus, here we compare the 2.7-$\mu$m band depths from the AKARI data with the corresponding NUV absorptions from the ECAS data. Thirty-two primitive asteroids were observed in common by AKARI and ECAS. The 0.7-$\mu$m absorption was measured using three bands of the ECAS data, \texttt{v}, \texttt{w}, and \texttt{x} (Eq. \ref{eqhyd}).
Thus, a positive value of the 0.7-$\mu$m absorption means that the spectral shape is concave and shows an absorption, while a negative value means that the spectral shape is convex and has no clear absorption. Figure \ref{fig:asteroid} shows the positive correlation between the OH-band depth and the NUV absorption. Asteroids with NUV absorption $> 2$ $\mu$m$^{-1}$ show $>30$\% absorption in the OH-band and $>1$\% absorption in the 0.7-$\mu$m band. Moreover, most asteroids with a 0.7-$\mu$m band depth $> 1$\% have NUV absorption $> 1$ $\mu$m$^{-1}$. It should be noted that even asteroids without the 0.7-$\mu$m band but with depths $>-1$\% (pink and purple circles) tend to show an offset toward deeper NUV absorption compared with those with depths $<-1$\% (yellow circles). This might be because their higher content of Fe-rich phyllosilicates changes the 0.7-$\mu$m region from convex to flatter. This is quite reasonable because both the 0.7-$\mu$m and NUV absorptions are possibly caused by iron in phyllosilicates. The asteroids shown as pink and purple circles may contain more Fe-rich phyllosilicates than those shown as yellow circles. Based on the correlation of the OH-band and NUV absorptions observed in both asteroids and meteorites, we can estimate the degree of hydration from the NUV absorption. \begin{figure}[ht] \centering \includegraphics[width=\hsize]{fig/AKARI-ECAS.rev.pdf} \caption{OH-band depth and NUV absorption of primitive asteroids. Symbol color indicates the 0.7-$\mu$m band depth (HYD$_{\rm ECAS}$). Asteroids with deep 0.7-$\mu$m band absorptions show relatively deep OH-band and NUV absorptions.
Even asteroids with 0.7-$\mu$m band depths of $-1$\% to $1$\% exhibit slightly deeper NUV absorptions compared with those with smaller 0.7-$\mu$m band depths.} \label{fig:asteroid} \end{figure} \section{Distribution of NUV absorption in the asteroid main belt}\label{sec:newtax} The asteroid main belt has been divided into three zones: inner (IMB, 2.06 -- 2.50 au), middle (MMB, 2.50 -- 2.82 au), and outer (OMB, 2.82 -- 3.28 au). The Cybele and Hilda zones are located beyond the asteroid main belt at 3.3 -- 3.5 au (between the 2:1 and 5:3 mean-motion resonances, or MMRs, with Jupiter) and 4 au (the 3:2 MMR with Jupiter), respectively. Jupiter Trojans are trapped in Jupiter's L4 and L5 Lagrangian regions at 5 au and are considered to be dynamically stable. We also divided the asteroids into different size ranges by diameter $d$: very large ($d>100$ km), large ($50<d<100$ km), medium ($10 < d < 50$ km), small ($5<d<10$ km), and very small ($d < 5$ km). The NUV absorption strength distributions for each region and size range are shown in Fig. \ref{fig:uvdist}. The median value and interquartile range for each group are shown in Table \ref{table:uvdist}. To evaluate the differences in the distributions among the IMB, MMB, and OMB asteroids, we calculated the $p$-values for each combination. First, we applied the Shapiro-Wilk test to test the normality of each group. If the $p$-value for a group is less than 0.05, indicating a non-normal distribution, we used the Mann-Whitney U test to calculate the probability that the two populations are drawn from the same distribution. If the $p$-values of the Shapiro-Wilk test for both groups are larger than 0.05, consistent with normal distributions, we used Student's t-test. For the large-diameter populations ($d > 50$ km), the NUV absorption strengths are not significantly different ($p>0.05$ for all combinations IMB-MMB, IMB-OMB, and MMB-OMB).
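The test-selection procedure described above can be sketched with SciPy. This is a sketch under the assumption that Student's t-test is applied only when both groups pass the normality test; the sample values below are synthetic, not the measured NUV absorption strengths.

```python
# Sketch of the two-sample comparison procedure described above:
# Shapiro-Wilk normality test on each group, then Student's t-test if
# both groups look normal, otherwise the Mann-Whitney U test.
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Return (test_name, p_value) for the comparison of two samples."""
    normal_a = stats.shapiro(a).pvalue > alpha
    normal_b = stats.shapiro(b).pvalue > alpha
    if normal_a and normal_b:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

# Synthetic NUV-absorption samples standing in for two main-belt zones.
rng = np.random.default_rng(0)
imb = rng.normal(0.3, 1.1, 200)
mmb = rng.normal(0.6, 1.3, 200)
name, p = compare_groups(imb, mmb)
print(name, p)
```

A small resulting $p$-value would indicate that the two zones are unlikely to share the same underlying distribution.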
This suggests that the NUV distributions of the large-diameter populations are similar throughout the main asteroid belt. On the other hand, the distributions of the small population ($d<10$ km) are different for IMB-MMB and MMB-OMB with $p \ll 0.01$, while the difference between the IMB and OMB is not significant, $p>0.05$. Based on the statistical tests, the IMB and OMB populations with $d<10$ km show weaker NUV absorption strengths (0.1 -- 0.4 $\mu$m$^{-1}$) compared with the MMB (0.4 -- 0.7 $\mu$m$^{-1}$). The distributions of the medium and small populations in the IMB and OMB are slightly different, $p<0.05$. The MMB distribution is similar between the medium and small populations. Overall, the MMB shows the deepest NUV absorption for all size groups except for the very large ($d>100$ km) group, suggesting that the MMB is the most hydrated zone of the main asteroid belt, and the IMB is the least hydrated for $d < 10$ km. \begin{figure*}[ht] \centering \includegraphics[width=0.8\hsize]{fig/UVdist_ECAS+SDSS_rev1.pdf} \caption{Distribution of spectral slopes across the main asteroid belt. Left column: NUV slope; middle column: VIS slope; right column: NUV absorption strength ($S_{\rm NUV}-S_{\rm VIS}$). The y-axis indicates the number of asteroids in a bin. The blue line is the inner main belt, the yellow line is the middle main belt, and the red line is the outer main belt. The asteroids were divided by diameter; first row: $d > 100$ km, second row: $50 < d < 100$ km, third row: $10 < d < 50$ km, fourth row: $5 < d < 10$ km. The vertical lines are the median values for each group.} \label{fig:uvdist} \end{figure*} \begin{table*}[ht] \caption{NUV absorption strength for each size range and region in the main asteroid belt. IQR is the interquartile range of samples.
We did not calculate the IQR for $d>100$ km in the IMB owing to the small number (5) of samples.} \label{table:uvdist} \centering \begin{tabular}{c|cc|cc|cc|cc|cc} \hline\hline & \multicolumn{10}{c}{$S_{\rm NUV}-S_{\rm VIS}$ ($\mu$m$^{-1}$)}\\ Region & \multicolumn{2}{c|}{$d>100$ km} & \multicolumn{2}{c|}{$50<d<100$ km} & \multicolumn{2}{c|}{$10<d<50$ km} & \multicolumn{2}{c}{$5<d<10$ km} & \multicolumn{2}{c}{$d<5$ km}\\ & Med. & IQR & Med. & IQR & Med. & IQR & Med. & IQR & Med. & IQR\\ \hline IMB & 1.35 & -- & 0.57 & 1.10 & 0.56 & 1.18 & 0.33 & 1.08 & 0.11 & 1.17\\ MMB & 0.80 & 0.45 & 0.72 & 0.80 & 0.68 & 1.07 & 0.66 & 1.34 & 0.38 & 1.34\\ OMB & 0.80 & 0.57 & 0.61 & 0.81 & 0.54 & 1.12 & 0.38 & 1.31 & 0.17 & 1.39\\ All in MB & 0.80 & 0.54 & 0.63 & 0.93 & 0.57 & 1.12 & 0.44 & 1.32 & 0.21 & 1.28\\ \hline \end{tabular} \end{table*} To depict the taxonomic distribution, we classified the primitive asteroids into the F, C, B, G, P, and D classes using the values in Table \ref{table:threshold}. The thresholds for the classification were defined based on the ECAS taxonomy. Figure \ref{fig:nuv-vis_tax} shows the distribution of our samples in the NUV absorption vs. VIS slope space. We depict the ECAS samples as colored circles in order to demonstrate the correspondence between our classification and the ECAS taxonomy. The line (L1) dividing the bluer F and B classes from the redder C, G, and P classes is defined by the direction of the largest dispersion of the dataset. The average spectrophotometry in the SDSS filters for each class defined in Table \ref{table:threshold} is shown in Fig. \ref{fig:spectra_tax}. Using this classification, we measured the number fraction of each taxonomic class as a function of size and region (Fig. \ref{fig:pi}). \citet{DeMeo&Carry2014} investigated a similar distribution using only the VIS wavelengths and thus could not distinguish among C, F, and G, whereas our dataset, which includes the NUV region, can distinguish them and thus provides more compositional information (see Sect. \ref{sec:discussion}).
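The decision rules of Table \ref{table:threshold} can be sketched as a short Python function. This is a minimal sketch, not the classification code used for this work: the interpretation of L1 as a threshold on $S_{\rm VIS}$, $S_{\rm VIS} = -0.15\,(S_{\rm NUV}-S_{\rm VIS}) + 0.05$, is an assumption based on the dashed line quoted in the figure caption, and objects falling outside every tabulated range are returned as unclassified.

```python
# Sketch of the taxonomic decision rules of the threshold table.
# ASSUMPTION: the L1 dividing line is taken as
# S_VIS = -0.15 * (S_NUV - S_VIS) + 0.05.

def l1(nuv_abs):
    """Dividing line between the bluer (F, B) and redder classes."""
    return -0.15 * nuv_abs + 0.05

def classify(s_vis, nuv_abs):
    """Return a Tholen-like class from the VIS slope and NUV absorption
    strength (both in um^-1), or None if no rule applies."""
    if s_vis >= 0.6:
        return "D"
    if s_vis <= l1(nuv_abs):          # bluer side of L1
        return "F" if nuv_abs < 0.5 else "B"
    if nuv_abs < 0.5:                  # redder side, weak NUV absorption
        return "P"
    if s_vis <= 0.25:                  # redder side, moderate VIS slope
        return "C" if nuv_abs <= 1.1 else "G"
    return None

print(classify(0.9, 0.0))   # steep VIS slope -> "D"
print(classify(0.1, 0.8))   # moderate slope, moderate NUV absorption -> "C"
```

Boundary cases (e.g., an object exactly on L1 or at $S_{\rm VIS}=0.6$) are resolved here by arbitrary tie-breaking choices.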
The C-type asteroids are dominant among asteroids with $d> 50$ km and are quite minor at smaller sizes. The F types are common throughout the main asteroid belt, and even out to the Cybele region at small sizes. The composition of the main belt is completely different between $d>50$ km and $d<10$ km, and F-type asteroids are the major component at $d<10$ km in the IMB. All the classes are almost evenly distributed in the MMB and OMB for small asteroids ($d<10$ km), although an enhancement of the F class is found for the asteroids with $d<10$ km. We also note that P-class asteroids make up $> 20$\% through most of the main asteroid belt regions and size ranges, which is comparable to the other types, suggesting that P types are not a minor component of the main asteroid belt. P types are dominant in the Cybele and Hilda regions; if P types originated in the outer Solar System, they have significantly contaminated the main asteroid belt. Alternatively, the abundance of P types throughout the main asteroid belt could suggest that P types formed in the same reservoir as the C-complex asteroids. In contrast, in most main asteroid belt zones and size ranges, the D-type asteroids are minor ($\leq 10$\%), while they are a main component of the large-size Jupiter Trojans and the middle-size Hilda and Cybele zones. The Cybele and Hilda zones show completely different taxonomic populations from size to size, which agrees with the result from \citet{DeMeo&Carry2014}. In the Cybele zone, P and D types dominate the population with $d > 10$ km, whereas F types are the largest population at $d < 10$ km. Our distribution can be compared with \citet{Tholen1984}. As most asteroids in \citet{Tholen1984} are larger than 50 km, we simply compare the distributions for $d>50$ km. \citet{Tholen1984} found that the C types are dominant among primitive asteroids in the main belt: $\sim 70$\% for the IMB, $\sim 80$\% for the MMB, and $\sim 70$\% for the OMB.
Our result also suggests that C types are dominant among large asteroids, but not to the extent found by \citet{Tholen1984}. Instead, we found more P types distributed through the main belt, $\sim 20$\%, which is consistent with \citet{DeMeo&Carry2014}. This may be because P types have relatively low albedos and could have been missed owing to discovery bias. Similarly, we also found several D types in the main belt. The IMB distributions differ considerably between our result and \citet{Tholen1984}, possibly because the number of IMB samples in \citet{Tholen1984} is less than 10. Beyond the main belt, we find the rapid decrease of C types suggested by \citet{Tholen1984}. In the Cybele zone, \citet{Tholen1984} found $\sim 35$\% for both the C and P types and $\sim 20$\% for the D type, while our result suggests a higher abundance of P types among large asteroids and a D-type fraction of only 10\%. Our result is more consistent with \citet{DeMeo&Carry2014}, and again this could be due to sample bias. In the Hilda and Jupiter Trojan zones, our result is quite consistent with \citet{Tholen1984}, because these populations are dominated by P and D types, which both have comparably low albedos. \begin{table} \caption{Thresholds for the taxonomic classification used in this study to classify asteroids similarly to Tholen's taxonomy. L1 is the line dividing bluer (F and B) and redder objects in VIS.
L1 is defined as the line along the largest dispersion in the dataset.} \label{table:threshold} \centering \begin{tabular}{c c c cc} \hline\hline Class & \multicolumn{2}{c}{$S_{\rm VIS}$ ($\mu$m$^{-1}$)} & \multicolumn{2}{c}{$S_{\rm NUV}-S_{\rm VIS}$ ($\mu$m$^{-1}$)}\\ & min & max & min & max \\ \hline C & L1 & 0.25 & 0.5 & 1.1 \\ F & -- & L1 & -- & 0.5 \\ G & L1 & 0.25 & 1.1 & -- \\ B & -- & L1 & 0.5 & 1.1 \\ P & L1 & 0.6 & -- & 0.5 \\ D & 0.6 & -- & -- & --\\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centering \includegraphics[width=\hsize]{fig/ALL_PA_tax.rev.add.pdf} \caption{Spectral characteristics of all primitive asteroids using the ECAS and SDSS datasets (black circles). The colored circles indicate the ECAS dataset with the taxonomy defined by \citet{Tholen1984}. The dashed line shows the direction of the largest dispersion of the black circles ($y=-0.15x+0.05$). The taxonomic classifications (Table \ref{table:threshold}) are also visualized.} \label{fig:nuv-vis_tax} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\hsize]{fig/SDSS_taxonomy.rev.pdf} \caption{Average spectrophotometry of the taxonomic classes through the SDSS filter set, classified by Table \ref{table:threshold}. The error bars show the interquartile ranges in the taxonomic classes. (Left) The spectra over the whole wavelength range. (Right) The same spectra zoomed in on the NUV.} \label{fig:spectra_tax} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=\hsize]{fig/FractionFCBGPD.rev.comp.pdf} \caption{Taxonomic distribution of primitive asteroids as a function of size and dynamical population.
The total number in each bin is shown in the center of each pie chart.} \label{fig:pi} \end{figure*} \begin{table*}[ht] \caption{Primitive asteroid distribution over taxonomic classes and dynamical populations for different asteroid size bins.} \label{table:matome} \centering \begin{tabular}{crrrrrrrr} \multicolumn{2}{l}{All} \\ \hline\hline Class & Samples & IMB & MMB & OMB & Cybele & Hilda & Trojan & Others\\ \hline F & 2,154 & 652 & 610 & 866 & 15 & 2 & 0 & 0\\ B & 1,089 & 117 & 308 & 661 & 3 & 0 & 0 & 0\\ G & 1,252 & 178 & 528 & 537 & 9 & 0 & 0 & 0\\ C & 1,687 & 260 & 579 & 827 & 18 & 1 & 2 & 0\\ P & 2,197 & 339 & 598 & 1,187 & 55 & 11 & 7 & 0\\ D & 798 & 75 & 170 & 453 & 11 & 25 & 29 & 4\\ \hline Total & 9,168 & 1,621 & 2,793 & 4,531 & 142 & 39 & 38 & 4\\ \hline \end{tabular} \vspace{3mm} \begin{tabular}{crrrrrrrr} \multicolumn{2}{l}{$d> 50$ km} \\ \hline\hline Class & Samples & IMB & MMB & OMB & Cybele & Hilda & Trojan & Others\\ \hline F & 20 & 3 & 8 & 8 & 0 & 1 & 0 & 0\\ B & 28 & 4 & 9 & 15 & 0 & 0 & 0 & 0\\ G & 23 & 5 & 5 & 11 & 2 & 0 & 0 & 0\\ C & 133 & 8 & 52 & 63 & 7 & 1 & 2 & 0\\ P & 88 & 6 & 17 & 33 & 18 & 8 & 6 & 0\\ D & 50 & 2 & 5 & 8 & 3 & 5 & 24 & 3\\ \hline Total & 342 & 28 & 96 & 138 & 30 & 15 & 32 & 3\\ \hline \end{tabular} \vspace{3mm} \begin{tabular}{crrrrrrrr} \multicolumn{2}{l}{$10 <d< 50$ km} \\ \hline \hline Class & Samples & IMB & MMB & OMB & Cybele & Hilda & Trojan & Others\\ \hline F & 198 & 15 & 48 & 131 & 3 & 1 & 0 & 0\\ B & 236 & 2 & 45 & 187 & 2 & 0 & 0 & 0\\ G & 270 & 14 & 69 & 184 & 3 & 0 & 0 & 0\\ C & 420 & 15 & 85 & 315 & 5 & 0 & 0 & 0\\ P & 511 & 14 & 72 & 395 & 26 & 3 & 1 & 0\\ D & 243 & 7 & 27 & 153 & 30 & 20 & 5 & 1\\ \hline Total & 1,878 & 67 & 346 & 1,365 & 69 & 24 & 6 & 1\\ \hline \end{tabular} \vspace{3mm} \begin{tabular}{crrrrrr} \multicolumn{2}{l}{$5 <d< 10$ km} \\ \hline\hline Class & Samples & IMB & MMB & OMB & Cybele\\ \hline F & 1,064 & 164 & 278 & 611 & 11 \\ B & 612 & 42 & 158 & 411 & 1 \\ G & 619 & 42 & 274 & 299 &
4 \\ C & 743 & 60 & 281 & 397 & 5 \\ P & 941 & 49 & 248 & 633 & 11 \\ D & 352 & 31 & 69 & 244 & 8 \\ \hline Total & 4,331 & 388 & 1,308 & 2,595 & 40 \\ \hline \end{tabular} \hspace{5mm} \begin{tabular}{crrrrrr} \multicolumn{2}{l}{$d< 5$ km} \\ \hline\hline Class & Samples & IMB & MMB & OMB & Cybele\\ \hline F & 863 & 470 & 276 & 116 & 1 \\ B & 213 & 69 & 96 & 48 & 0 \\ G & 340 & 117 & 180 & 43 & 0 \\ C & 391 & 117 & 161 & 52 & 0 \\ P & 657 & 270 & 261 & 126 & 0 \\ D & 153 & 35 & 69 & 48 & 1 \\ \hline Total & 2,617 & 1,138 & 1,043 & 433 & 3 \\ \hline \end{tabular} \end{table*} \section{Near-infrared characteristics of the primitive asteroids}\label{sec:movis} The MOVIS catalogs \citep{Popescu2016, Popescu2018} include surveys of Solar System objects observed in a serendipitous manner by the VISTA-VHS (Visible and Infrared Survey Telescope for Astronomy - VISTA Hemisphere Survey) program \citep{McMahon2013, Sutherland2015}. This survey covered the entire southern sky using the near-infrared broad-band filters $Y$ (centered at 1.020 $\mu$m), $J$ (1.252 $\mu$m), $H$ (1.645 $\mu$m), and $Ks$ (2.147 $\mu$m). The data retrieved for Solar System objects include the detections catalog (MOVIS-D), the magnitudes catalog (MOVIS-M), and the colors catalog (MOVIS-C). The number of measured colors for each Solar System object varies according to the observing strategy and to the limiting magnitude of each filter. The average time interval between the measurements (which affects the color uncertainties) made with the $Y$ and $J$ filters is $8.47~\pm~5.99$ min, while that for the $J$ and $Ks$ filters is $7.40~\pm~6.56$ min. These time intervals are negligible compared with the typical rotation periods of main belt asteroids, which are on the order of several hours \citep{Warner2009}; thus, the errors introduced by lightcurve variations can be ignored. \begin{table}[ht] \caption{The $(Y-J)$ and $(J-Ks)$ colors of each taxonomic class discussed in this work.
The average values, standard deviations ($\sigma$), and median values are shown.} \label{table:nircolors} \vspace{3mm} \begin{tabular}{c| c c c| c c c} \hline \hline Class & & $(Y-J)$ & & & $(J-Ks)$ & \\ \hline & avg. & $\sigma$ & med. & avg. & $\sigma$ & med.\\ \hline D & 0.360 & 0.135 & 0.346 & 0.538 & 0.102 & 0.563\\ P & 0.278 & 0.061 & 0.279 & 0.485 & 0.120 & 0.490\\ C & 0.258 & 0.040 & 0.252 & 0.421 & 0.076 & 0.420\\ B & 0.238 & 0.025 & 0.242 & 0.467 & 0.075 & 0.478\\ F & 0.244 & 0.025 & 0.241 & 0.420 & 0.070 & 0.427\\ G & 0.261 & 0.064 & 0.258 & 0.410 & 0.070 & 0.427\\ \hline \end{tabular} \end{table} We used the latest version of the MOVIS-C catalog \citep{Popescu2018} to compare with the results we obtained from the SDSS and ECAS datasets. We found 242 objects in common with accurate $(Y-J)$ and $(J-Ks)$ colors, i.e., with errors satisfying $(Y-J)_{\rm err} \leq 0.05$ and $(J-Ks)_{\rm err} \leq 0.05$. This sample includes 39 objects classified as D types, 79 P types, 40 C types, 21 B types, 31 F types, and 32 G types based on our classification in Sect. \ref{sec:newtax}. Table \ref{table:nircolors} shows statistical parameters of the $(Y-J)$ and $(J-Ks)$ colors for each of these classes. The differences between the C, P, and D types are outlined in both the $(Y-J)$ and $(J-Ks)$ colors and follow the reddening trend from C $\rightarrow$ P $\rightarrow$ D types. The average $(J-Ks)$ color for P types is $(J-Ks)_{\rm P}=0.485\pm0.120$ mag, which is almost 1$\sigma$ larger than the average of the C types, $(J-Ks)_{\rm C}=0.421\pm0.076$ mag. In this wavelength region (1.25 -- 2.2 $\mu$m), P types have a broad spectral behavior (outlined by the dispersion of their colors). The $(Y-J)$ color distributions of the C and P types are slightly different, although the average values are comparable: $(Y-J)_{\rm C}=0.258\pm0.040$ and $(Y-J)_{\rm P}=0.278\pm0.061$.
The marginally red spectrophotometry (over 1.020 -- 1.252 $\mu$m) of these two types can be inferred by comparing these average values with the median for the solar analogs, $(Y-J)_{\rm G2V}=0.219$ \citep{Popescu2018}. D types have a well separated distribution in both the $(Y-J)$ and $(J-Ks)$ colors (Fig. \ref{fig:MOVIS}). The average values are very red: $(Y-J)_{\rm D}=0.360\pm0.135$ mag (a value about 2$\sigma$ larger compared to the rest of the primitive classes), and $(J-Ks)_{\rm D}=0.538\pm0.102$ mag. The wide spread of $(J-Ks)$ may indicate compositional variations inside this group. The $(Y-J)$ color distributions of the B, C, F, and G classes are similar (Table \ref{table:nircolors}). The color values are smaller than those of the P and D classes. We found that B types exhibit a red spectrophotometric slope, indicated by their $(J-Ks)$ color (Table \ref{table:nircolors}). This points to potential differences in the NIR. This spectral upturn at longer wavelengths (1.2 -- 2.1 $\mu$m) might be similar to the trend in the Themis group classified by \citet{Clark2010} and the G1 and G2 groups in \citet{deLeon2012}. They found a good match between the B-type asteroids with this NIR upturn and CM, CI, and thermally metamorphosed CI/CM chondrites. We note that the B types defined by \citet{DeMeo2009} should have negative NIR slopes, which means that the $(Y-J)$ and $(J-Ks)$ colors should have lower values than those of the solar analogs, which are $(Y-J)_\odot = 0.219$ mag and $(J-Ks)_\odot = 0.336$ mag \citep{Popescu2018}. However, our classification was made using only the NUV-VIS colors provided by the SDSS filters, applying Tholen's taxonomy. Figure \ref{fig:MOVIS} shows that less than $\approx10\%$ of our C, B, F, and G types are below this threshold, i.e., DeMeo B types. Thus, the majority of the B, C, F, and G types found in this sample have positive spectral slopes in the 1.02 -- 2.2 $\mu$m spectral interval.
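The $\approx10\%$ estimate above is a simple counting exercise against the solar-analog color; a minimal sketch (hypothetical sample values; the solar value $(J-Ks)_\odot=0.336$ mag is from \citet{Popescu2018}):

```python
import numpy as np

def blue_fraction(jks_colors, jks_sun=0.336):
    """Fraction of objects whose (J-Ks) color falls below the
    solar-analog value, i.e. candidates for negative NIR slopes
    (DeMeo-like B types). Illustrative helper, not the paper's code."""
    c = np.asarray(jks_colors, dtype=float)
    return np.count_nonzero(c < jks_sun) / c.size

# Hypothetical (J-Ks) sample: one object in ten is bluer than the Sun
sample = [0.42, 0.45, 0.40, 0.47, 0.30, 0.44, 0.41, 0.46, 0.43, 0.48]
frac = blue_fraction(sample)
```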
This can be explained by considering the spectral variety presented by \citet{Clark2010} and \citet{deLeon2012}. They found that asteroids classified as B types according to their visible spectra show a broad variation of NIR spectral slopes, ranging from negative, blue slopes to positive, moderately red slopes. A strong correlation between the NUV-VIS and NIR wavelengths is not found for the C-complex classes, i.e., the B, C, F, and G classes, although B types may exhibit a potential difference in the NIR. \begin{figure*}[ht] \centering \includegraphics[width=6cm,height=4.5cm]{fig/YmJ_DPCPC.pdf} \includegraphics[width=6cm,height=4.5cm]{fig/YmJ_CBFG.pdf}\\ \includegraphics[width=6cm,height=4.5cm]{fig/JmK_DPCPC.pdf} \includegraphics[width=6cm,height=4.47cm]{fig/JmK_CBFG.pdf} \caption{Normalized cumulative distribution of $(Y-J)$ (top panels), and of $(J-Ks)$ (bottom panels) for the sample of 242 objects taxonomically classified by this work using the ECAS and SDSS dataset.} \label{fig:MOVIS} \end{figure*} \section{Discussion}\label{sec:discussion} \subsection{Aqueous alteration through the asteroid belt} It is important to reveal the distribution of the aqueous alteration state through the current asteroid belt in order to constrain the boundary conditions of Solar System evolution models such as the Grand-Tack model \citep[e.g.,][]{Walsh2011} and the Nice model \citep[e.g.,][]{Morbidelli2005}. Here we discuss the aqueous alteration state as a function of the semi-major axis based on the NUV, 0.7-$\mu$m band, and 3-$\mu$m features. Based on the OH-band depth estimated by indirect ground-based measurements of the reflectance drop from 2.5 to 2.9 $\mu$m, \citet{Jones1990} found that hydrated silicates slightly decline in abundance with distance from the Sun from 2.5 to 3.5 au because of the P/D population in the outer main belt.
However, more recent work by \citet{Rivkin2015} argues that, when focusing on the Ch asteroids, the OH-band depth is poorly correlated with the semi-major axis, suggesting the population is well mixed in the asteroid belt. Direct measurements of the OH-band depth by the AKARI space telescope did not find a correlation between OH-band depth and the current semi-major axis of asteroids, either \citep{Usui2019}. Note that the asteroids observed in this region are quite large in size, i.e., more than several tens of km. Thus, judging from the large members, the hydration state has no clear correlation with semi-major axis through the main belt. This is consistent with the similar NUV absorption strength through the main asteroid belt (Fig. \ref{fig:uvdist}). Nevertheless, based on the band shape, the broad and rounded 3.1-$\mu$m band, possibly ice frost, was found at semi-major axes beyond 3.1 au, coupled with the P and D types, while the sharp 3.1-$\mu$m band was found over a very wide range of 2.5 -- 4 au \citep{Takir&Emery2012}. This may indicate that the outer main belt preserves the composition of anhydrous silicates, water ice, and possibly complex organic materials originating from the outer Solar System. The 0.7-$\mu$m band and the OH-band features are highly correlated, suggesting that the presence of the 0.7-$\mu$m band indicates phyllosilicates resulting from the aqueous alteration process \citep{Vilas1994, Howell2011, Fornasier2014, Usui2019}. Using the ECAS dataset, the distribution of the 0.7-$\mu$m absorption was investigated by \citet{Vilas1994}, who found that the percentage of objects showing the 0.7-$\mu$m absorption decreases in the order G > C > B > F > P > D. \citet{Fornasier2014} came to the same conclusion using spectroscopic observations. Asteroids with the 0.7-$\mu$m band were proposed to dominate a zone from 2.6 to 3.5 au by \citet{Vilas1994}, later updated to a zone from 2.3 to 3.1 au by \citet{Fornasier2014}.
Furthermore, \citet{Fornasier2014} found that more than half of the objects in the IMB show the 0.7-$\mu$m band, although the MMB hosts the largest number of hydrated objects in the main asteroid belt. \citet{Rivkin2012} measured the percentage of objects with the 0.7-$\mu$m band using the SDSS photometry data. They found a percentage of 30\% to 40\% through the main asteroid belt, with slight differences: the OMB is the lowest and the MMB the highest. To summarize the observations of the 0.7-$\mu$m band, the MMB has the highest percentage of objects with the 0.7-$\mu$m band absorption. The NUV absorption strength decreases in the order G > C/B > F > P > D, which is in good agreement with the percentage of objects showing the 0.7-$\mu$m band found by \citet{Vilas1994} and \citet{Fornasier2014}. The G-type asteroids, the best analogs of the CM chondrites, characterized by the NUV and 0.7-$\mu$m band absorptions, are distributed in the MMB with the highest ratio. The B types are possibly analogs of CI or CM chondrites, considering their upturn in the NIR. Most B types are distributed in the MMB and OMB. The C types show moderate NUV absorption and almost flat reflectance spectra at VIS-NIR wavelengths. The C types are prominent among the large members of the main belt, suggesting that they may be primordial bodies that survived from the beginning. Thus, the C-type spectra might reflect the composition of the outermost surface, such as the NH$_4$-bearing phyllosilicates observed on the surface of (1) Ceres \citep{Ammannito2016}. The P/D types are also abundant in the MMB, OMB, and beyond, although their domination in the Cybele, Hilda, and Jupiter Trojan zones is remarkable. However, P types are more abundant in the main belt region than D types. P types might be an end-member of the C-complex asteroids, which agrees with previous studies \citep{Vernazza2017, Vernazza2021, Mahlke2022}. D types are hypothesized to have been implanted in the Cybele and Hilda regions later than the formation of the main belt \citep{Levison2009}.
In contrast, the IMB, especially among its small members, is dominated by F types, which show the weakest NUV absorption among the C complex and could be composed of Fe-poor phyllosilicates like CI chondrites or of thermally metamorphosed carbonaceous chondritic materials. Previously, the F types were hypothesized to be thermally metamorphosed carbonaceous chondrites because of their weak NUV absorption \citep{Hiroi1993}. If this is the case, the asteroids in the IMB might have experienced high temperatures when formed, e.g., through rapid formation and/or large planetesimal sizes \citep[e.g.,][]{Grimm&Mcsween1993, Neumann2020}. It should also be noted that small asteroids could be fragments of catastrophic disruptions with rubble-pile structure, and thus may expose internal compositions. This raises the possibility that asteroids currently in the IMB had a relatively less heated crust and a highly heated interior. Alternatively, they could be CI chondritic material (see also Sect. \ref{sec:ryugu}). This CI chondritic material could have originated from fragments of the cores of large ($d>100$ km) IDP-like bodies, as hypothesized by \citet{Vernazza2017}. In that case, they experienced little heating and might have formed in the outer Solar System, e.g., through slow formation and/or a high water-rock ratio \citep[e.g.,][]{Nakamura2022}. These two hypotheses for the F-type composition imply totally different histories of Solar System formation. However, we do not yet have clear evidence to decide between them; thus, further sample-return missions will be important. If the second case is plausible, the unique distribution of F types suggests that they were implanted in their current positions by mechanisms different from those of the other primitive asteroids, or that they formed in a distinctive place in the early Solar System.
Summarizing, the characteristics of aqueous alteration through the main belt to the Cybele and Hilda zones can be depicted as: 1) Fe-poor phyllosilicates or thermally metamorphosed carbonaceous chondrites in asteroids with $d<10$ km in the IMB, 2) Fe-rich phyllosilicates in asteroids with $d>10$ km in the MMB and OMB, 3) unaltered or weakly aqueously altered comet-like materials with water frost in the Cybele, Hilda, and Jupiter Trojan zones, and 4) partially aqueously altered materials in all regions. \begin{figure}[ht] \centering \includegraphics[width=\hsize]{fig/SemimajorAxis_vs_Tax.rev.pdf} \caption{Taxonomy distribution of primitive asteroids through the main belt to the Cybele and Hilda zone; the distributions of C, F, G, and B types (top) and those of C, P, and D types (bottom).} \label{fig:semimajor_tax} \end{figure} \subsection{Grain size and surface physical condition} It is known that the physical surface conditions affect the spectral slopes. Fine-grained samples show redder spectral slopes over VIS to NIR wavelengths \citep[e.g.,][]{Cloutis2018}. The grain size of asteroid surfaces can be estimated through thermal inertia derived from thermal infrared observations. Thermal inertia is indicative of surface roughness at scales larger than the typical diurnal heat propagation distance. Thermal inertia correlates with surface grain size: high thermal inertia can be interpreted as indicating a coarse or rigid rock surface, and low thermal inertia as indicating fine-grained regolith \citep{Gundlach&Blum2013}. Thermal infrared observations have shown that thermal inertia increases with decreasing asteroid diameter \citep{Delbo2007, Hanus2018}. Specifically, many of the asteroids with $d > 10$ km show low thermal inertia of $< 100$ Jm$^{-2}$s$^{-1/2}$K$^{-1}$ \citep{Hanus2018}. If grain size played a significant role on asteroid surfaces, we would find variations in spectral slope among different size groups.
However, we did not find a significant difference in visible spectral slope between the $d < 10$ km group and the $d>10$ km group (Fig. \ref{fig:uvdist}). This might suggest that grain size is not the primary driver of spectral-slope changes on asteroid surfaces. Polarimetry is also a powerful tool for estimating an asteroid’s surface condition. The linear polarization degree can be measured as a function of phase angle \citep[see][as a review]{Belskaya2015}. The linear polarization degree of asteroids reaches its minimum value ($P_{\rm min}$) at a certain phase angle ($\alpha_{\rm min}$). The primitive bodies, such as D-class asteroids and the dark side of Iapetus, exhibit the smallest $|P_{\rm min}|$ \citep{Hasegawa2021}, while the asteroids with the 0.7-$\mu$m band (Ch/G) have the largest $\alpha_{\rm min}$ and $|P_{\rm min}|$ \citep{Belskaya2017}. F types have values in between these two end members, characterized by relatively small $\alpha_{\rm min}$ and $|P_{\rm min}|$ compared with other C-complex asteroids \citep{Belskaya2005, Belskaya2017, Gil-Hutton&Canada-Assandri2012, Hasegawa2021}. Asteroids classified as C and P types have similar inversion angles (the angle at which the linear polarization changes its sign) but different depths of negative polarization \citep{Belskaya2017}. It should be noted that the asteroid taxonomic classes defined by the reflectance in NUV-VIS space are well separated in polarization space \citep{Belskaya2017}. Although F and B types sometimes have similar negative visible slopes, they are well separated in polarization space. If grain size played a dominant role in determining the polarization features, the taxonomic classes should align in the same order as their visible slope values. However, this is not the case. This suggests that the polarization features are related to differences in composition rather than to particle sizes or physical conditions.
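To make the polarimetric quantities above concrete, the sketch below evaluates a trigonometric phase-polarization curve of the Lumme--Muinonen form, $P(\alpha)=b\sin^{c_1}\!\alpha\,\cos^{c_2}(\alpha/2)\sin(\alpha-\alpha_0)$, and locates the negative-branch minimum numerically; the coefficients are purely illustrative placeholders, not fits to real asteroid data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lumme_muinonen(alpha_deg, b=0.2, c1=1.0, c2=1.0, a0=20.0):
    """Trigonometric phase-polarization curve; a0 (deg) is the
    inversion angle, where the polarization changes sign.
    All coefficients here are illustrative, not fitted values."""
    a = np.radians(alpha_deg)
    return b * np.sin(a)**c1 * np.cos(a / 2.0)**c2 * np.sin(a - np.radians(a0))

# Locate the minimum of the negative branch: P_min at alpha_min
res = minimize_scalar(lumme_muinonen, bounds=(0.1, 19.9), method='bounded')
alpha_min, p_min = res.x, res.fun
```

With these placeholder coefficients, $\alpha_{\rm min}$ falls roughly midway to the inversion angle and $P_{\rm min}<0$, mirroring the qualitative behavior of the negative branch described above.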
The interpretation of $P_{\rm min}$ and of the depth of the negative branch is not yet fully understood. The coherent back-scatter enhancement has been considered by several authors as the most plausible cause of negative polarization \citep{Shkuratov1985, Shkuratov1991, Shkuratov1994, Muinonen1990, Mishchenko1993}. More recent laboratory and theoretical studies showed that single-particle scattering also plays a considerable role in the formation of negative polarization \citep{Ovcharenko2001, Shkuratov2004}. A few experimental studies suggested that adding small amounts of a bright material to a dark material might enhance the negative polarization branch, when the bright particles have more prominent negative polarization than the dark absorbing particles \citep{Shkuratov2004, Belskaya2005}. The relatively small values of the parameters $\alpha_{\rm min}$ and $|P_{\rm min}|$ for D/F-type asteroids can be treated as a diagnostic of optical homogeneity and darkness of the regolith micro-structure on scales of the order of the wavelength \citep{Belskaya2005}, while the large values of $\alpha_{\rm min}$ and $|P_{\rm min}|$ for Ch/G-type asteroids can be explained by inhomogeneity caused by bright inclusions such as Calcium-Aluminum-rich Inclusions and chondrules \citep{Devogele2017}. The peculiar polarimetric characteristics of the F types could also be related to their NUV reflectance behavior. Thus, it will be important to classify asteroids taking the polarimetric behavior into account in the future. \subsection{F-type asteroids: Relation with (162173) Ryugu and (101955) Bennu}\label{sec:ryugu} F types can only be classified using the NUV wavelengths; otherwise they are easily confused with B/C types in the visible wavelength range. The F-type asteroids have a peculiar distribution compared with the other types of primitive asteroids (Fig. \ref{fig:semimajor_tax}): they concentrate in the IMB.
Moreover, the Hayabusa2 spacecraft recently visited asteroid (162173) Ryugu, which is an F-type asteroid \citep{Tatsumi2022}. Ryugu was considered to originate from the IMB \citep{Campins2010}, and more specifically from the Polana-Eulalia family \citep{Campins2013, Sugita2019, Tatsumi2021}. Thus, the Ryugu samples could be mineralogically representative of the F-type asteroids. From the analysis of the returned samples, Ryugu, which shows a strong and sharp OH absorption at 2.72 $\mu$m, has a composition similar to that of CI chondrites \citep{Yada2022, Pilorget2022, Yokoyama2022,Nakamura2022}. Based on the remote-sensing spectra, Ryugu was presumed to be a dehydrated carbonaceous chondrite because of its very flat and shallow absorption features in the VIS-NIR \citep{Sugita2019, Kitazato2019, Tatsumi2021}. The case of Ryugu and the CI chondrites shows how terrestrial contamination of CI chondrites and space weathering of the Ryugu surface led us to misinterpret the remote-sensing spectra. This is a good lesson and suggests that most of our carbonaceous chondrite meteorites might be contaminated and may not show exactly the right spectra for comparison with asteroid surfaces. Nevertheless, CI chondrites are composed mainly of Mg-rich phyllosilicates and lack chondrules and CAIs \citep{Cloutis2011a}. A lower abundance of Fe-rich phyllosilicates or the presence of magnetite might result in the flat NUV reflectance in this case. Alternatively, space weathering after the catastrophic disruption of the parent body might have caused the shallower phyllosilicate absorption features \citep{Noguchi2022}. These geochemical characteristics are consistent with formation at a large distance from the Sun. Moreover, the measurement of Cr isotopic heterogeneities suggests that the CI chondrite Orgueil formed farther from the Sun than Murchison (CM), Allende (CV), and Tagish Lake \citep{Fukai&Arakawa2021}.
If the F types formed far from the Sun, they must have been implanted relatively recently into their current positions, mainly in the IMB. Another noticeable feature of the F types is that they formed with relatively small sizes. This could be why they were easily moved by disturbances from the giant planets. The displacement mechanism needs to be investigated further by dynamical calculations. Furthermore, the asteroid (101955) Bennu, visited and sampled by the OSIRIS-REx spacecraft, is also classified as an F type \citep{Hergenrother2013}. Bennu could also originate from the Polana-Eulalia family in the IMB \citep{Campins2010, Bottke2015, Tatsumi&Popescu2021}. The returned samples from Bennu and Ryugu will add more geochemical information to our understanding of the F-type asteroids. \section{Conclusions} Phyllosilicates on primitive asteroids resulted from the aqueous alteration process in the early history of the Solar System. They are a key constraint on the formation conditions of the primitive asteroids/planetesimals. The NUV absorption has been proposed as a good proxy for phyllosilicates but, to date, had not been investigated well. For this reason, we explored the NUV region of dark primitive asteroids using two spectrophotometric surveys, SDSS and ECAS. Photometric surveys are more robust than ground-based spectroscopic observations for investigating the sensitive NUV region, as the latter can be greatly affected by atmospheric conditions and by the choice of solar analogs. Using the two surveys, we cover 9,168 asteroids with $\mathcal{H}<17.5$. First, we investigated the correlation between the NUV absorption and the OH-band (2.7 $\mu$m) absorption for asteroids and meteorites. We found a good correlation between the NUV and OH-band absorptions, which was originally discussed by \citet{Feierberg1985} and \citet{Hiroi1993}. We found that grain size may contribute to deeper band absorptions.
Also, Fe-bearing phyllosilicates, which show the 0.7-$\mu$m band absorption, may contribute to a deeper NUV absorption. Based on these correlations, we confirmed that the NUV absorption is a good proxy for the phyllosilicate abundance on asteroids. The taxonomic classification of \citet{Tholen1984} takes the NUV absorption into account. Following the Tholen taxonomy, we classified the asteroids and mapped their distribution from the main belt to the Cybele, Hilda, and Jupiter Trojan zones. We found that large asteroids with $d > 50$ km and small asteroids with $d< 10$ km have completely different taxonomic distributions. Large asteroids show quite similar distributions through the main belt, while small asteroids exhibit different distributions in the inner main belt vs. the middle and outer main belt. The inner main belt shows significantly weaker NUV absorption and is dominated by F types. In contrast, the major constituents of the middle and outer main belt are G types, which have the strongest NUV absorption. The Cybele, Hilda, and Jupiter Trojan zones show distinctive distributions, being dominated by the red members, the P and D types. F types are also found to be abundant among the small members of the Cybele zone. We found that the distribution of F types is unique: they concentrate among the small members of the inner main belt. There are still few constraints on the composition of F types. The recent sample-return missions, Hayabusa2 to (162173) Ryugu and OSIRIS-REx to (101955) Bennu, will also add fundamental chemical information on F types. So far, the Ryugu samples have been discussed as compositionally similar to CI chondrites. Thus, the main constituent of F types may be CI-like Mg-rich phyllosilicates. Furthermore, the Gaia space telescope has spectroscopically observed numerous asteroids in this missing NUV wavelength range. Gaia will open a new horizon on the compositional distribution using the NUV wavelength range.
\begin{acknowledgements} The authors thank the anonymous reviewer for the constructive and elaborated comments and suggestions. ET, FTR, JdL, and JL acknowledge support from the Agencia Estatal de Investigaci\'{o}n del Ministerio de Ciencia e Innovaci\'{o}n (AEI-MCINN) under grant "Hydrated minerals and organic compounds in primitive asteroids" (PID2020-120464GB-I00/doi:\url{10.13039/501100011033}). JdL also acknowledges financial support from the Spanish Ministry of Science and Innovation (MICINN) through the Spanish State Research Agency, under Severo Ochoa Program 2020-2023 (CEX2019-000920-S). MP was supported by the grant of the Romanian National Authority for Scientific Research - UEFISCDI, project No. PN-III-P1-1.1-TE-2019-1504. SH was supported by the Hypervelocity Impact Facility (former name: The Space Plasma Laboratory), ISAS, JAXA. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} A plasma is a collection of fast-moving charged particles and is one of the four fundamental states of matter. Plasmas are the most common phase of ordinary matter in the universe, both by mass and by volume. Essentially all of the visible light from space comes from stars, which are plasmas with a temperature such that they radiate strongly at visible wavelengths. Most of the ordinary (or baryonic) matter in the universe, however, is found in the intergalactic medium, which is also a plasma, but much hotter, so that it radiates primarily as X-rays. We refer to \cite{Bit,DelBer} for physics references in book form. One of the basic models for describing plasma dynamics is the Euler--Maxwell \textquotedblleft two-fluid\textquotedblright\ model, in which two compressible ion and electron fluids interact with their own self-consistent electromagnetic field. In this paper we consider a slightly simplified version, the so-called one-fluid Euler--Maxwell system (EM) for electrons, which accounts for the interaction of electrons and the electromagnetic field, but neglects the dynamics of the ion fluid. The model describes the dynamical evolution of the functions $n_e:\mathbb{R}^3\to\mathbb{R}$ (the density of the fluid), $v_e:\mathbb{R}^3\to\mathbb{R}^3$ (the velocity field of the fluid), and $E',B':\mathbb{R}^3\to\mathbb{R}^3$ (the electric and magnetic fields), which evolve according to the coupled nonlinear system \begin{equation}\label{initsyst} \begin{cases} &\partial_tn_e+\hbox{div}(n_ev_e)=0,\\ &m_e(\partial_tv_e+v_e\cdot\nabla v_e)=-P_e\nabla n_e-e\left[E'+(v_e/c)\times B'\right],\\ &\partial_tE'-c\nabla\times B'=4\pi en_ev_e,\\ &\partial_tB'+c\nabla\times E'=0,\\ \end{cases} \end{equation} together with the constraints \begin{equation}\label{constr} \hbox{div}(B')=0,\quad \hbox{div}(E')=-4\pi e(n_e-n^0). \end{equation} The constraints \eqref{constr} are propagated by the flow if they are satisfied at the initial time.
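This propagation is worth sketching explicitly (a short computation using only \eqref{initsyst} and the identity $\hbox{div}(\nabla\times F)=0$): taking the divergence of the two Maxwell equations in \eqref{initsyst} and using the continuity equation gives
\begin{equation*}
\partial_t\,\hbox{div}(B')=-c\,\hbox{div}(\nabla\times E')=0,\qquad \partial_t\,\hbox{div}(E')=4\pi e\,\hbox{div}(n_ev_e)=-4\pi e\,\partial_tn_e,
\end{equation*}
so that $\hbox{div}(B')$ and $\hbox{div}(E')+4\pi e(n_e-n^0)$ are constant in time; if they vanish at $t=0$, they vanish for all times.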
There are several physical constants in the above system: $-e<0$ is the electron's charge, $m_e$ is the electron's mass, $c$ denotes the speed of light, and $P_e$ is related to the effective electron temperature (that is, $k_BT_e=n^0P_e$, where $k_B$ is the Boltzmann constant). In the system above we have chosen, for simplicity, the quadratic adiabatic pressure law $p_e=P_en_e^2/2$. The system has a family of equilibrium solutions $(n_e,v_e,E',B')=(n^0,0,0,0)$, where $n^0>0$ is a constant. Our goal here is to investigate the long-term stability properties of these solutions. \subsection{The main theorem} The system \eqref{initsyst}--\eqref{constr} is a complicated coupled nonlinear system of ten scalar evolution equations and two constraints. To simplify it, we first make linear changes of variables to normalize the constants. More precisely, let \begin{equation*} \lambda:=\frac{1}{c}\sqrt{\frac{4\pi e^2n^0}{m_e}},\qquad\beta:=\sqrt{\frac{4\pi e^2n^0}{m_e}},\qquad \alpha:=\frac{\lambda m_e c^2}{e}=\frac{4\pi en^0}{\lambda},\qquad d:=\frac{P_en^0}{m_ec^2}>0, \end{equation*} and define the functions $n,v,E,B$ by \begin{equation*} \begin{split} &n_e(x,t)=n^0[1+n(\lambda x,\beta t)],\qquad v_e(x,t)=c\cdot v(\lambda x,\beta t),\\ &E'(x,t)=\alpha E(\lambda x,\beta t),\qquad\,\qquad\, B'(x,t)=\alpha B(\lambda x,\beta t). \end{split} \end{equation*} The system \eqref{initsyst}--\eqref{constr} becomes \beq \label{systI} \begin{cases} \partial_tn+\hbox{div}((1+n)v)&=0,\\ \partial_tv+v\cdot\nabla v+d\nabla n+E+v\times B&=0,\\ \partial_tE-\nabla\times B-(1+n)v&=0,\\ \partial_tB+\nabla\times E&=0, \end{cases} \eeq and \beq \label{constr1} \hbox{div}(B)=0,\qquad \hbox{div}(E)+n=0. \eeq The system depends only on the parameter $d$ in the second equation. In the physically relevant case we have $d\in(0,1)$, which we assume from now on. We now define the \textit{vorticity} of our system (allowed to be nontrivial) as \begin{equation}\label{vort} Y:=B-\nabla\times v.
\eeq We note that the system \eqref{systI} admits a conserved energy, defined by \begin{equation}\label{EnCons} \mathcal{E}_{conserved}:=\int_{\mathbb{R}^3}\big\{d|n|^2+(1+n)|v|^2+|E|^2+|B|^2\big\}\,dx. \end{equation} To state our main theorem we need to introduce some notation. \begin{definition}\label{OneDef} We define the rotational vector-fields, \beq\label{difop} \O_1:=x_2\p_3-x_3\p_2,\qquad\O_2:=x_3\p_1-x_1\p_3,\qquad\O_3:=x_1\p_2-x_2\p_1. \eeq For $m\geq 0$ let $\mathcal{V}_m$ denote the set of differential operators of the form \beq\label{coordrotm} \mathcal{V}_m:=\{\partial_1^{\alpha_1}\partial_2^{\alpha_2}\partial_3^{\alpha_3}\O_1^{\beta_1}\O_2^{\beta_2}\O_3^{\beta_3}:\alpha_1+\alpha_2+\alpha_3+\beta_1+\beta_2+\beta_3\leq m\}. \eeq For $N\geq 1$ and $p\in[1,\infty]$ we define the spaces $\H^{N}(\mathbb{R}^3)$ and $\mathcal{W}^{N,p}(\mathbb{R}^3)$ by the norms \begin{equation}\label{alx1} \|u\|_{\H^{N}(\mathbb{R}^3)}:=\sum_{\mathcal{L}\in \mathcal{V}_N}\|\L u\|_{L^2(\mathbb{R}^3)},\qquad \|u\|_{\mathcal{W}^{N,p}(\mathbb{R}^3)}:=\sum_{\mathcal{L}\in \mathcal{V}_N}\|\L u\|_{L^p(\mathbb{R}^3)}. \end{equation} For $N\geq 1$ as above, we let $\widetilde{\H}^N$ be the normed space \begin{equation}\label{alx2} \begin{split} \widetilde{\H}^N:=\{&(n,v,E,B):\mathbb{R}^3\to\mathbb{R}\times\mathbb{R}^3\times\mathbb{R}^3\times\mathbb{R}^3:\\ &\|(n,v,E,B)\|_{\widetilde{\H}^N}:=\|n\|_{\H^N}+\|v\|_{\H^N}+\|E\|_{\H^N}+\|B\|_{\H^N}<\infty\}. \end{split} \end{equation} \end{definition} The following theorem is the main result of this paper: \begin{theorem}\label{MainThm} Assume $d\in(0,1)$, and let $N_0:=100$, $N_1:=N_0/2+2$, and $\beta:=10^{-6}$. Then there is a constant $\bar{\ep}=\bar{\ep}(d)>0$ with the following property: assume that $(n_0,v_0,E_0,B_0):\mathbb{R}^3\to\mathbb{R}\times\mathbb{R}^3\times\mathbb{R}^3\times\mathbb{R}^3$ are small, smooth, and localized initial data, i.e.
\begin{equation}\label{smallin} \|(n_0,v_0,E_0,B_0)\|_{\widetilde{\H}^{N_0}}+\|(1+|x|^2)^{(1+\beta)/2}(1-\Delta)^3(n_0,v_0,E_0,B_0)\|_{\H^{N_1}}\leq \bar{\ep}, \end{equation} satisfying the compatibility conditions \beq \label{compatin} \hbox{div}(B_0)=0,\qquad \hbox{div}(E_0)+n_0=0. \eeq Assume that the initial vorticity $Y_{0}=B_0-\nabla\times v_{0}$ satisfies the additional smallness condition \begin{equation}\label{smallvort} \|(1+|x|^2)^{1/4}Y_0\|_{\mathcal{H}^{N_1}}\leq \d_{0}\leq\bar{\ep}. \end{equation} Then there exists a unique solution $(n,v,E,B)\in C([0,T_{\delta_0}]: \widetilde{\H}^{N_0})$ of the system \eqref{systI}--\eqref{constr1} having the initial data $(n_0,v_0,E_0,B_0)$, where \begin{equation}\label{Alx4} T_{\delta_0}=\bar{\ep}/\d_0. \end{equation} \end{theorem} \begin{remark}\label{Alx0} (i) The main conclusion of the theorem is that the solutions extend and stay smooth at least up to time $T_{\delta_0}\gtrsim 1/\delta_0$, which depends only on the size $\delta_0$ of the vorticity of the initial data. Notice that this implies global regularity in the irrotational case $\delta_0=0$, thus providing a quantitative version of the earlier theorems of \cite{GeMa} and \cite{IoPa2}. (ii) One can derive more information about the solution $(n,v,E,B)$ of the system. For example, the solution satisfies the uniform bounds, for all $t\in[0,T_{\delta_0}]$, \begin{equation*} \|(n(t),v(t),E(t),B(t))\|_{\widetilde{\H}^{N_0}}\lesssim\overline{\ep},\qquad \|(1+|x|^2)^{1/4}Y(t)\|_{\mathcal{H}^{N_1}}\lesssim \d_{0}, \end{equation*} where $Y(t)=B(t)-\nabla\times v(t)$. Moreover, the solution decouples into a superposition of two dispersive components $U_e$ and $U_b$ which propagate with different group velocities and decay, and a vorticity component $Y$, which is essentially transported by the flow. The two dispersive components can be studied precisely using the $Z$-norm, see Definition \ref{MainZDef}. 
\end{remark} \subsection{Previous work on long-term regularity} The local regularity theory of the Euler--Maxwell system follows easily by energy estimates. The question of long-term regularity is much more interesting and has been studied in several recent papers. The dynamics of the full Euler--Maxwell system is extremely complex, due to a large number of coupled interactions and many types of resonances. Even at the linear level, there are ion-acoustic waves, Langmuir waves, light waves, etc. At the nonlinear level, the Euler--Maxwell system is the origin of many well-known dispersive PDEs which can be derived via scaling and asymptotic expansions. See also the introduction of \cite{GuIoPa} for a longer discussion of the Euler--Maxwell system in 3D, and its connections to many other models in mathematical physics, such as the Euler--Poisson model, the Zakharov system, the KdV, the KP, and the NLS. Because of this complexity it is natural to study first simplified models, such as the one-fluid Euler--Poisson model (first studied by Guo \cite{Guo}) and the one-fluid Euler--Maxwell system (which is the system \eqref{initsyst}). In particular, the one-fluid Euler--Maxwell system shares many of the features and the conceptual difficulties of the full system, but is simpler at the analytical level. Under suitable irrotationality assumptions, this system can be reduced to a coupled system of two Klein--Gordon equations with different speeds and no null structure. While global results are classical in the case of scalar wave and Klein--Gordon equations, see for example \cite{Jo,JoKl, Kl2, KlVf, Kl, Kl4, Ch, Sh, Si, DeFa, DeFaXu, Alin2, Alin3}, it was pointed out by Germain \cite{Ge} that there are key new difficulties in the case of a coupled system of Klein--Gordon equations with different speeds. In this case, the classical vector-field method does not seem to work well, and there are large sets of resonances that contribute to the analysis.
Global regularity for small irrotational solutions of this model was proved by Germain--Masmoudi \cite{GeMa} and Ionescu--Pausader \cite{IoPa2}, using more subtle arguments based on Fourier analysis. In 3 dimensions, nontrivial global solutions of the full two-fluid system were constructed for the first time by Guo--Ionescu--Pausader \cite{GuIoPa} (small irrotational perturbations of constant solutions), following the earlier partial results in simplified models in \cite{Guo,GuPa,GeMa,IoPa2}. The one-fluid Euler--Poisson system and the one-fluid Euler--Maxwell system have also been studied in 2 dimensions, where the global results are harder due to less dispersion and slower decay. See \cite{IoPa1}, \cite{LiWu}, and \cite{DeIoPa}. \subsubsection{Nontrivial vorticity} We remark that all the global regularity results described above are restricted to the case of solutions with trivial vorticity. This is also the case with the global regularity results in many other quasilinear fluid models, such as water waves, see the introduction of \cite{DeIoPaPu} for a longer discussion. In fact, all proofs of global existence in quasilinear evolutions depend in a crucial way on establishing quantitative decay of solutions over time. On the other hand, one usually expects that vorticity is transported by the flow and does not decay. This simple fact causes a serious obstruction to proving global existence for solutions with dynamically nontrivial vorticity. In this paper we would like to initiate the study of long-term regularity of solutions with nontrivial vorticity. However, we are not able to establish the global existence of such solutions for any of the Euler-Maxwell or Euler--Poisson systems. Instead we prove that sufficiently small solutions extend smoothly on a time of existence that depends only on the size of the vorticity. 
Such a theorem can be interpreted as a quantitative version of the global regularity theorems for small solutions with trivial vorticity described earlier. In fact, our Theorem \ref{MainThm} immediately implies the global regularity theorems of \cite{GeMa} and \cite{IoPa2}, simply by letting $\delta_0\to 0$. An important consideration to keep in mind is the length of the time of existence of solutions. In our case we show that this time of existence is at least $c/\delta_0$, where $\delta_0$ is the size of the vorticity component of the initial data, and $c$ is a small constant. This is consistent with the time of existence of the simple equation \begin{equation}\label{MoVo} \partial_{t}Y=Y^2. \end{equation} One can think of this equation as a model, in dimension 3, for the vorticity equation, which ignores all the other interactions and the precise structure of its nonlinearity. The $c/\delta_0$ time of existence appears to be quite robust, and one can hope to prove a theorem like Theorem \ref{MainThm} in other models in which global regularity for solutions with trivial vorticity is known. One might also hope that more involved analysis would allow one to extend solutions beyond the $c/\delta_0$ time of existence, particularly in certain models in dimension 2 where the vorticity equation is known to behave better than the simple equation \eqref{MoVo}. We hope to return to such issues in the future. \subsection{Main ideas of the proof}\label{MainIdea} The classical mechanism to establish long-term regularity for quasilinear equations has two main components: \setlength{\leftmargini}{1.8em} \begin{itemize} \item[(1)] Control of high frequencies (high order Sobolev norms); \smallskip \item[(2)] Dispersion/decay of the solution over time. \end{itemize} The interplay of these two aspects has been present since the seminal work of Klainerman \cite{Kl2}--\cite{Kl4}, Christodoulou \cite{Ch}, and Shatah \cite{Sh}.
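The absence of the dispersive mechanism (2) is what limits the lifespan in the model equation \eqref{MoVo}: its solution with initial data $Y(0)=\delta_0>0$ is
\begin{equation*}
Y(t)=\frac{\delta_0}{1-\delta_0 t},
\end{equation*}
which remains of size $O(\delta_0)$ for $t\leq c/\delta_0$ and blows up at time $t=1/\delta_0$. In particular, a time of existence of order $1/\delta_0$ is optimal at the level of this model.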
In the last few years new methods have emerged in the study of global solutions of quasilinear evolutions, inspired by the advances in semilinear theory. The basic idea is to combine the classical energy and vector-fields methods with refined analysis of the Duhamel formula, using the Fourier transform. This is the essence of the ``method of space-time resonances'' of Germain--Masmoudi--Shatah \cite{GeMaSh,GeMaSh2}, see also Gustafson--Nakanishi--Tsai \cite{GuNaTs}, and of the refinements in \cite{IoPa1,IoPa2,GuIoPa,GuIoPa2,DeIoPa,De,DeIoPaPu}, using atomic decompositions and sophisticated norms. This general framework needs to be adapted to our case, where we have non-decaying components and we are aiming for a lifespan that depends only on the size of these components. To illustrate the main ideas, consider the following schematic system \begin{equation}\label{scoo} \begin{split} (\partial_t+i\Lambda)U&=O(U^2)+O(UY)+O(Y^2),\\ \partial_t Y&=O(UY)+O(Y^2). \end{split} \end{equation} Here one should think of $U$ as representing generic dispersive variables (take for instance the Klein--Gordon case $\Lambda=\sqrt{1-\Delta}$) and of $Y$ as representing generic non-dispersive vorticity-type components. The nonlinearities $O(U^2), O(UY), O(Y^2)$ are to be thought of as generic quadratic nonlinearities that may lose derivatives. See \eqref{KG} for the precise system in our case, keeping in mind that there are two types of dispersive variables corresponding to two different speeds of propagation. Our analysis of solutions of such a system contains three main ingredients: \begin{itemize} \item Energy estimates for the full system. These estimates allow us to control high Sobolev norms and weighted norms (corresponding to the rotation vector-field) of the solution. They are not hard in our case, since we are able to prove independently $L^1_t$ pointwise control of the solution. \item Vorticity energy estimates. This is a new ingredient in our problem.
We need to show that the vorticity stays small, that is $\lesssim\delta_0$, on the entire time of existence. These estimates depend again on the $L^1_t$ pointwise control of the solution and on the structure of the nonlinearity of the vorticity equation (without an $O(U^2)$ term). \item Dispersive analysis. The dispersive estimates, which lead to decay, rely on a bootstrap argument in a suitable $Z$ norm. The norm we use here is similar to the $Z$ norm introduced in the 2D problem in \cite{DeIoPa} and accounts for the rotation invariance of the system. We analyze carefully the Duhamel formula for the first equation in \eqref{scoo}, in particular the quadratic interactions related to the set of resonances. The analysis of the terms $O(Y^2)$ and $O(YU)$, which contain the $\mathrm{transport}\times\mathrm{transport}\to\mathrm{dispersive}$ and the $\mathrm{transport}\times\mathrm{dispersive}\to\mathrm{dispersive}$ interactions, is new when compared to the irrotational global results described earlier, such as \cite{IoPa2}. On the other hand, the analysis of the term $O(U^2)$, which involves a large set of space-time resonances, due to the two different speeds of propagation, has similarities with the analysis in \cite{IoPa1,IoPa2,GuIoPa,GuIoPa2}. \end{itemize} At the implementation level, we remark that we are able to completely decouple the decay parameter $\beta$, which can be taken very small, see Definition \ref{MainZDef}, from the smoothness parameters $N_0$ and $N_1$. These parameters were related to each other in earlier work, such as \cite{IoPa1,IoPa2, GuIoPa,GuIoPa2}.
As a result, we are able to reduce substantially the total number of derivatives $N_0$ and $N_1$ in the main theorem.\footnote{These smoothness parameters can be further reduced by longer and more careful analysis, but our goal here is just to demonstrate that these parameters can be decoupled from the decay parameters in the $Z$ norm.} \subsection{Organization} The rest of the paper is organized as follows: in section \ref{prelims} we introduce most of the key definitions, such as the $Z$ norm, rewrite our main system as a dispersive system for the quasilinear variables (diagonalized at the linear level), and state the main bootstrap proposition. In section \ref{lemmas} we summarize some lemmas that are being used in the rest of the paper, mostly concerning linear analysis and the resonant structure of the oscillatory phases. In section \ref{EneEst} we prove our main energy estimates, both for the full energy of the system and for the vorticity energy. Finally, in sections \ref{ParT}--\ref{DispInter} we prove our main dispersive estimates for the decaying components of the solution. \section{Preliminaries}\label{prelims} In this section we rewrite our main system as a quasilinear dispersive system (diagonalized at the linear level), summarize the main definitions, and state the main bootstrap proposition. \subsection{Diagonalization} We assume that $(n,v,E,B)$ satisfy the system of equations \eqref{systI}--\eqref{constr1} and use the Hodge decomposition. Let \begin{equation}\label{Alx11} \begin{split} &F:=|\nabla|^{-1}\hbox{div}(v),\qquad \,G:=|\nabla|^{-1}\nabla\times v,\\ &Z:=|\nabla|^{-1}\hbox{div}(E),\qquad W:=|\nabla|^{-1}\nabla\times E,\qquad Y=B-\nabla\times v. \end{split} \end{equation} Let $R_j:=|\nabla|^{-1}\partial_j$ denote the Euclidean Riesz transforms. 
Then we can express the variables $n,v,E,B$ elliptically, in terms of $F,G,Z,W,Y$, according to the formulas \begin{equation}\label{Alx12} v_k=-R_kF+\epsilon_{jlk}R_jG_l,\quad E_k=-R_kZ+\epsilon_{jlk}R_jW_l,\quad n=-|\nabla|Z,\quad B=Y+|\nabla|G. \end{equation} Recall also that \begin{equation*} \hbox{div}(Y)=0,\qquad \hbox{div}(G)=0,\qquad \hbox{div}(W)=0. \end{equation*} By taking divergences and curls, the system \eqref{systI} gives the evolution equations \begin{equation}\label{Alx13} \begin{cases} \partial_t F+(1+d|\nabla|^2)Z&=-R\cdot(v\cdot \nabla v)-R\cdot (v\times B),\\ \partial_t G+W&=-R\times(v\cdot \nabla v)-R\times (v\times B),\\ \partial_t Z-F&=R\cdot (nv),\\ \partial_t W-(1+|\nabla|^2)G-|\nabla|Y&=R\times (nv),\\ \partial_t Y&=|\nabla|\big[R\times(v\cdot \nabla v)+R\times (v\times B)\big]. \end{cases} \end{equation} Since $B=Y+\nabla\times v$ and $v\times(\nabla\times v)=\nabla(|v|^2/2)-v\cdot\nabla v$ we have \begin{equation}\label{Alx14} \begin{split} R\cdot (v\times B)&=R\cdot (v\times Y)-|\nabla|(|v|^2)/2-R\cdot(v\cdot \nabla v),\\ R\times(v\times B)&=R\times(v\times Y)-R\times(v\cdot \nabla v). \end{split} \end{equation} Let \begin{equation}\label{Alx15} \begin{split} &U_e:=\Lambda_eZ+iF,\qquad\qquad\qquad\qquad\Lambda_e:=\sqrt{1+d|\nabla|^2},\\ &U_b:=W+i\Lambda_bG+i\Lambda_b^{-1}|\nabla|Y,\qquad\, \Lambda_b:=\sqrt{1+|\nabla|^2}. \end{split} \end{equation} The formulas above show that \beq \label{KG} \begin{cases} (\partial_t+i\Lambda_e) U_e&= \Lambda_e (R\cdot [n v])+i |\nabla| (|v|^2)/2-i R\cdot (v\times Y),\\ (\partial_t+i\Lambda_b) U_b&= R\times [nv]- i \Lambda_b^{-1} R\times (v\times Y),\\ \partial_t Y&=\nabla\times (v\times Y)\,.
\end{cases} \eeq Conversely, the physical variables $n,v,E,B$ can be recovered from the dispersive variables $U_e,U_b,Y$ by the formulas, see \eqref{Alx12}, \begin{equation}\label{Alx17} \begin{split} &n=-|\nabla|Z,\qquad v=-RF+R\times G,\qquad E=-RZ+R\times W,\qquad B=Y+|\nabla|G,\\ &F=\Im(U_e),\qquad G=\Lambda_b^{-1}\Im(U_b)-\Lambda_b^{-2}|\nabla|Y,\qquad Z=\Lambda_e^{-1}\Re(U_e),\qquad W=\Re(U_b). \end{split} \end{equation} The formulas show that the sets of variables $(n,v,E,B,Y)$ and $(U_e,U_b,Y)$ are elliptically equivalent, for example, for any $m\geq 1$ \begin{equation}\label{Alx18} \|n\|_{\mathcal{H}^m}+\|v\|_{\mathcal{H}^m}+\|E\|_{\mathcal{H}^m}+\|B\|_{\mathcal{H}^m}+\|Y\|_{\mathcal{H}^m}\approx \|U_e\|_{\mathcal{H}^m}+\|U_b\|_{\mathcal{H}^m}+\|Y\|_{\mathcal{H}^m}. \end{equation} \subsection{Main notations and definitions}\label{NotDef} \subsubsection{Littlewood--Paley projections} We fix $\varphi:\mathbb{R}\to[0,1]$ an even smooth function supported in $[-8/5,8/5]$ and equal to $1$ in $[-5/4,5/4]$. Let \begin{equation*} \begin{split} &\varphi_k(x):=\varphi(|x|/2^k)-\varphi(|x|/2^{k-1})\qquad\text{ for any }k\in\mathbb{Z},\,x\in\mathbb{R}^3,\qquad\\ &\varphi_I:=\sum_{m\in I\cap\mathbb{Z}}\varphi_m\text{ for any }I\subseteq\mathbb{R}. \end{split} \end{equation*} For any $B\in\mathbb{R}$ let \begin{equation*} \varphi_{\leq B}:=\varphi_{(-\infty,B]},\quad\varphi_{\geq B}:=\varphi_{[B,\infty)},\quad\varphi_{<B}:=\varphi_{(-\infty,B)},\quad \varphi_{>B}:=\varphi_{(B,\infty)}. \end{equation*} For any $a<b\in\mathbb{Z}$ and $j\in[a,b]\cap\mathbb{Z}$ let \begin{equation}\label{Alx80} \varphi^{[a,b]}_j:= \begin{cases} \varphi_j\qquad&\text{ if }a<j<b,\\ \varphi_{\leq a}\qquad&\text{ if }j=a,\\ \varphi_{\geq b}\qquad&\text{ if }j=b. \end{cases} \end{equation} For any $x\in\mathbb{R}$ let $x^+:=\max(x,0)$, $x^-:=\min(x,0)$. Let \begin{equation*} \mathcal{J}:=\{(k,j)\in\mathbb{Z}\times\mathbb{Z}:\,j\geq \max(-k,0)\}. 
\end{equation*} For any $(k,j)\in\mathcal{J}$ let \begin{equation*} \phii^{(k)}_j(x):= \begin{cases} \varphi_{(-\infty,\max(-k,0)]}(x)\quad&\text{ if }\,\,j=\max(-k,0),\\ \varphi_j(x)\quad&\text{ if }\,\,j\geq 1+\max(-k,0). \end{cases} \end{equation*} and notice that, for any $k\in\mathbb{Z}$ fixed, \begin{equation*} \sum_{j\geq \max(-k,0)}\phii^{(k)}_j=1. \end{equation*} For any interval $I\subseteq\mathbb{R}$ let \begin{equation*} \phii^{(k)}_I(x):=\sum_{j\in I,\,(k,j)\in\mathcal{J}}\phii^{(k)}_j(x). \end{equation*} Let $P_k$, $k\in\mathbb{Z}$, denote the operator on $\mathbb{R}^3$ defined by the Fourier multiplier $\xi\to \varphi_k(\xi)$. Similarly, for any $I\subseteq \mathbb{R}$ let $P_I$ denote the operator on $\mathbb{R}^3$ defined by the Fourier multiplier $\xi\to \varphi_I(\xi)$. For any $(k,j)\in\mathcal{J}$ let $Q_{jk}$ denote the operator \begin{equation}\label{qjk} (Q_{jk} f)(x):=\phii^{(k)}_j(x)\cdot P_kf(x). \end{equation} \subsubsection{Phases, linear profiles, and the $Z$-norm} An important role will be played by the profiles $V_e,V_b$ defined by \begin{equation}\label{variables4} V_e(t):=e^{it\Lambda_e}U_e(t),\qquad V_b(t):=e^{it\Lambda_b}U_b(t), \end{equation} where $U_e$ and $U_b$ are the dispersive variables defined in \eqref{Alx15}, and $\Lambda_e=\sqrt{1-d\Delta}$ and $\Lambda_b=\sqrt{1-\Delta}$ as before. We define \begin{equation} \begin{split} &U_{-e}:=\overline{U_e},\qquad U_{-b}:=\overline{U_b};\qquad V_{-e}:=\overline{V_e},\qquad V_{-b}:=\overline{V_b};\\ &\Lambda_{-e}:=-\Lambda_{e},\qquad\Lambda_{-b}:=-\Lambda_b. 
\end{split} \label{notation}\end{equation} Let \begin{equation}\label{symbol0}\mathcal{P}:=\{e,b,-e,-b\}.\end{equation} For $\sigma,\mu,\nu\in \mathcal{P}$, we define the associated phase function \begin{equation}\label{phasedef} \Phi_{\sigma\mu\nu}(\xi,\eta):=\Lambda_{\sigma}(\xi)-\Lambda_{\mu}(\xi-\eta)-\Lambda_{\nu}(\eta), \end{equation} and the corresponding function \begin{equation} \label{deflambd} \begin{split} &\Phi^{+}_{\sigma\mu\nu}(\alpha,\beta):=\Phi_{\sigma\mu\nu}(\alpha e,\beta e)=\lambda_{\sigma}(\alpha)-\lambda_{\mu}(\alpha-\beta)-\lambda_{\nu}(\beta),\\ &\lambda_e(r)=-\lambda_{-e}(r):=\sqrt{1+dr^2},\qquad\lambda_b(r)=-\lambda_{-b}(r):=\sqrt{1+r^2}, \end{split} \end{equation} where $e\in\mathbb{S}^{2}$ and $\alpha,\beta\in\mathbb{R}$. If $(\mu,\nu)\in\mathcal{P}\times\mathcal{P}\setminus\{(e,-e),(-e,e),(b,-b),(-b,b)\}$, by Proposition \ref{spaceres} for any $\xi\in\mathbb{R}^3$ there exists a unique $\eta=p(\xi)\in\mathbb{R}^3$ so that $(\nabla_{\eta}\Phi_{\sigma\mu\nu})(\xi,\eta)=0$ (a space resonance point). We define, for a sufficiently large constant $\D_0$ that depends only on the parameter $d\in(0,1)$, \begin{equation}\label{psidag} \Psi_{\sigma\mu\nu}(\xi):=\Phi_{\sigma\mu\nu}(\xi,p(\xi)),\qquad \Psi_{\sigma}^{\dagger}(\xi):=2^{\mathcal{D}_0}(1+|\xi|)\inf_{\mu,\nu\in\mathcal{P};\nu+\mu\neq 0}|\Psi_{\sigma\mu\nu}(\xi)|, \end{equation} and notice that these functions are radial. The functions $\Psi_e^{\dagger}$ and $\Psi_b^{\dagger}$ are described in Remark \ref{largeres}; in particular, $\Psi_e^{\dagger}\geq 10$ while $\Psi_b^{\dagger}$ vanishes on two spheres $|\xi|=\gamma_{1,2}=\gamma_{1,2}(d)\in(0,\infty)$. These spheres correspond to space-time resonances. For $n\in\mathbb{Z}$ we define the operators $A_{n}^{\sigma}$ by \begin{equation}\label{aop} \widehat{A_{n}^{\sigma}f}(\xi):=\varphi_{-n}(\Psi_{\sigma}^{\dagger}(\xi))\cdot\widehat{f}(\xi), \end{equation} for $\sigma\in\{e,b\}$.
Given an integer $j\geq 0$ we define the operators $A^\sigma_{n,(j)}$, $n\in\{0,\ldots,j+1\}$, by \begin{equation*} A_{0,(j)}^\sigma:=\sum _{n'\leq 0}A_{n'}^\sigma,\qquad A_{j+1,(j)}^\sigma:=\sum _{n'\geq j+1}A_{n'}^\sigma,\qquad A_{n,(j)}^\sigma:=A_{n}^\sigma\,\,\text{ if }\,\,0<n<j+1. \end{equation*} We are now ready to define the main $Z$-norm. \begin{definition}\label{MainZDef} For $\sigma\in\{e,b\}$ we define \begin{equation}\label{sec5} Z_1^\sigma:=\{f\in L^2(\mathbb{R}^3):\,\|f\|_{Z_1^\sigma}:=\sup_{(k,j)\in\mathcal{J}}\|Q_{jk}f\|_{B^\sigma_{j}}<\infty\}, \end{equation} where, with $\beta:=10^{-6}$, \begin{equation}\label{znorm2} \|g\|_{B_j^{\sigma}}:=\sup_{0\leq n\leq j+1}2^{(1+\beta)j-4\beta n}\|A_{n,(j)}^{\sigma}g\|_{L^{2}}. \end{equation} Finally, with $N_1=N_0/2+2$ as before, $\mathcal{V}_{N_1}$ as in \eqref{coordrotm}, and $D^\alpha=\partial_1^{\alpha_1}\partial_2^{\alpha_2}\partial_3^{\alpha_3}$, we define \begin{equation}\label{znorm} Z:=\big\{(f_e,f_b)\in L^2\times L^2:\,\|(f_e,f_b)\|_{Z}:=\sup_{\mathcal{L}\in \mathcal{V}_{N_1},\,|\alpha|\leq 4}\big[\|D^\alpha\mathcal{L}f_e\|_{Z_1^e}+\|D^\alpha\mathcal{L}f_b\|_{Z_1^b}\big]<\infty\big\}. \end{equation} \end{definition} Notice that, when $\sigma=e$ we have the simpler formula, \begin{equation*} \|g\|_{B_j^{e}}\approx 2^{(1+\beta)j}\|g\|_{L^{2}}. \end{equation*} Similarly if $j\lesssim 1$ then $\|g\|_{B_j^{b}}\approx\|g\|_{L^{2}}$. The operators $A_{n,(j)}^{\sigma}$ are relevant only when $\sigma=b$ and $j\gg 1$, to localize to thin neighborhoods of the space-time resonant sets. The small factors $2^{-4\beta n}$ in \eqref{znorm2}, which are connected to the operators $A_{n,(j)}^{b}$, are important only in the space-time resonant analysis, in the proof of the bound \eqref{top1} in Lemma \ref{Reso01}. 
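The simpler formula in the case $\sigma=e$ can be verified directly: by \eqref{cas2} we have $\Psi_{e}^{\dagger}(\xi)\geq 10$ for all $\xi$, so the multipliers $\varphi_{-n}(\Psi_{e}^{\dagger}(\xi))$ vanish identically for $n\geq 1$, while $\sum_{n'\leq 0}\varphi_{-n'}(\Psi_{e}^{\dagger}(\xi))=1$. Therefore $A_{n,(j)}^{e}g=0$ for $1\leq n\leq j+1$ and $A_{0,(j)}^{e}g=g$, so the supremum in \eqref{znorm2} is attained at $n=0$,
\begin{equation*}
\|g\|_{B_j^{e}}=2^{(1+\beta)j}\|g\|_{L^{2}}.
\end{equation*}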
\subsection{The main bootstrap proposition}\label{bootstrap0} Our main result is the following proposition: \begin{proposition}\label{bootstrap} Suppose $(n,v,E,B)$ is a solution to \eqref{systI}--\eqref{constr1} on some time interval $[0,T]$, $T\in[1,\bar{\eps}/\delta_0]$, with initial data $(n_0,v_0,E_0,B_0)$, and define $(V_e,V_b)$ as in \eqref{variables4} and $Y=B-\nabla\times v$. Assume that \begin{equation}\label{bootstrap1} \|(n_0,v_0,E_0,B_0)\|_{\widetilde{\H}^{N_0}}+\|(V_e(0),V_b(0))\|_{Z}\lesssim \bar{\eps} \end{equation} and \begin{equation}\label{bootstrapV1} \|(1+|x|^2)^{1/4}Y_0\|_{\mathcal{H}^{N_1}}\leq \d_{0}\leq\bar{\eps}. \end{equation} In addition, assume that for any $t\in[0,T]$, \begin{equation}\label{bootstrap2} \|(n(t),v(t),E(t),B(t))\|_{\widetilde{\H}^{N_0}}+\|(V_e(t),V_b(t))\|_{Z}\leq \overline{C}\bar{\eps} \end{equation} and \begin{equation}\label{bootstrapV2} \|(1+|x|^2)^{1/4}Y(t)\|_{\mathcal{H}^{N_1}}\leq \overline{C}\d_{0}, \end{equation} for some sufficiently large constant $\overline{C}$. Then, for any $t\in[0,T]$, \begin{equation}\label{bootstrap3} \|(n(t),v(t),E(t),B(t))\|_{\widetilde{\H}^{N_0}}+\|(V_e(t),V_b(t))\|_{Z}\leq \overline{C}\bar{\eps}/2 \end{equation} and \begin{equation}\label{bootstrapV3} \|(1+|x|^2)^{1/4}Y(t)\|_{\mathcal{H}^{N_1}}\leq \overline{C}\d_{0}/2. \end{equation} \end{proposition} The constant $\overline{C}$ can be fixed sufficiently large, depending only on $d$, and the constant $\overline{\eps}$ is small relative to $1/\overline{C}$. Given Proposition \ref{bootstrap}, Theorem \ref{MainThm} follows using a local existence result and a continuity argument. See \cite[Sections 2 and 3]{IoPa2} (in particular Proposition 2.2 and Proposition 2.4) for similar arguments. The rest of this paper is concerned with the proof of Proposition \ref{bootstrap}. This proposition follows from Proposition \ref{BootstrapEE1}, Proposition \ref{BootstrapEE2}, and Proposition \ref{BootstrapZNorm}. 
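Schematically (the rigorous implementation is as in \cite[Sections 2 and 3]{IoPa2}), the continuity argument runs as follows: the set
\begin{equation*}
\mathcal{T}:=\big\{t'\in[0,T]:\,\text{the bounds \eqref{bootstrap2} and \eqref{bootstrapV2} hold for all }t\in[0,t']\big\}
\end{equation*}
is nonempty (by \eqref{bootstrap1} and \eqref{bootstrapV1}, since $\overline{C}$ is large) and closed. Since the conclusions \eqref{bootstrap3}--\eqref{bootstrapV3} improve the constants $\overline{C}\bar{\eps}$ and $\overline{C}\d_{0}$ to $\overline{C}\bar{\eps}/2$ and $\overline{C}\d_{0}/2$, the set $\mathcal{T}$ is also open in $[0,T]$, so $\mathcal{T}=[0,T]$.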
\section{Some lemmas}\label{lemmas} In this section we collect several lemmas that are used in the rest of the paper. We fix a sufficiently large constant $\mathcal{D}\geq 10\mathcal{D}_0$. \subsubsection{Integration by parts}\label{Ipa} We start with two lemmas that are often used in integration by parts arguments. See \cite[Lemma 5.4]{IoPa2} and \cite{DeIoPa} for the proofs. \begin{lemma}\label{tech5} Assume that $0<\eps\leq 1/\eps\leq K$, $N\geq 1$ is an integer, and $f,g\in C^{N+1}(\mathbb{R}^3)$. Then \begin{equation}\label{ln1} \Big|\int_{\mathbb{R}^3}e^{iKf}g\,dx\Big|\lesssim_N (K\eps)^{-N}\big[\sum_{|\alpha|\leq N}\eps^{|\alpha|}\|D^\alpha_xg\|_{L^1}\big], \end{equation} provided that $f$ is real-valued, \begin{equation}\label{ln2} |\nabla_x f|\geq \mathbf{1}_{{\mathrm{supp}}\,g},\quad\text{ and }\quad\|D_x^\alpha f \cdot\mathbf{1}_{{\mathrm{supp}}\,g}\|_{L^\infty}\lesssim_N\eps^{1-|\alpha|},\,2\leq |\alpha|\leq N+1. \end{equation} \end{lemma} We will need another result about integration by parts using the rotation vector-fields $\Omega_j$. The lemma below (which is used only in the proof of the more technical Lemma \ref{Reso01}) follows from Lemma 3.8 in \cite{DeIoPa}. \begin{lemma}\label{RotIBP} Assume that $t\in[2^{m}-1,2^{m+1}]$, $m\geq 0$, $1\leq A\lesssim 2^m$, and \begin{equation}\label{hypo} \begin{split} \Vert f\Vert_{\H^{20}}+\Vert g\Vert_{\H^{20}}+\sup_{0\leq\vert\alpha\vert\le N}A^{-\vert\alpha\vert}\Vert D^\alpha\widehat{f}\,\Vert_{L^2}&\le 1,\\ \sup_{\xi,\eta}\sup_{\vert\alpha\vert\le N}2^{-|\alpha|m/2}\vert D^\alpha_\eta n(\xi,\eta)\vert&\le 1. \end{split} \end{equation} Assume that $\Phi=\Phi_{\sigma\mu\nu}$ for some $\sigma,\mu,\nu\in\{e,b,-e,-b\}$.
For $\xi\in\mathbb{R}^3$ and $p\in[-m/2,0]$ let \begin{equation*} I^1_p(\xi):=\int_{\mathbb{R}^3}e^{it\Phi(\xi,\eta)}n(\xi,\eta)\varphi_p((\Omega_1)_\eta\Phi(\xi,\eta))\psi_1(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)d\eta, \end{equation*} where $(\Omega_1)_\eta=\eta_2\partial_{\eta_3}-\eta_3\partial_{\eta_2}$ is the rotation vector-field defined in \eqref{difop}, \begin{equation}\label{hypo5} \psi_1(\xi,\eta):=\varphi_{\geq -\D}(\mathrm{Pr}_1(\xi))\varphi_{\geq -\D}(\mathrm{Pr}_1(\eta))\varphi_{\geq -\D}(\mathrm{Pr}_1(\xi-\eta))\cdot \varphi_{\leq \D}(\xi)\varphi_{\leq \D}(\eta)\varphi_{\leq\D}(\xi-\eta), \end{equation} and $\mathrm{Pr}_1:\mathbb{R}^3\to\mathbb{R}^2$, $\mathrm{Pr}_1(v_1,v_2,v_3):=(v_2,v_3)$. Then \begin{equation}\label{OmIBP} \vert I^1_p(\xi)\vert\lesssim_N (2^{p}2^{m/2})^{-N}+(A2^{-m})^N+2^{-4m}. \end{equation} A similar bound holds for the integrals $I_p^2$ and $I_p^3$ obtained by replacing the vector-field $\Omega_1$ with the vector-fields $\Omega_2$ and $\Omega_3$ respectively, and replacing the cutoff function $\psi_1$ with cutoff functions $\psi_2$ and $\psi_3$ respectively (defined as in \eqref{hypo5}, but with the projection $\mathrm{Pr}_1$ replaced by the projections $\mathrm{Pr}_2(v_1,v_2,v_3):=(v_1,v_3)$ and $\mathrm{Pr}_3(v_1,v_2,v_3):=(v_1,v_2)$ respectively). In addition, if $(1+\beta/20)\nu\geq -m$, then the same bounds hold when $I_p^j$, $j\in\{1,2,3\}$, are replaced by the integrals (notice the additional localization in modulation factor $\varphi_\nu(\Phi(\xi,\eta))$) \begin{equation*} \widetilde{I^j_p}(\xi):=\int_{\mathbb{R}^3}e^{it\Phi(\xi,\eta)}\varphi_\nu(\Phi(\xi,\eta))n(\xi,\eta)\varphi_p((\Omega_j)_\eta\Phi(\xi,\eta))\psi_j(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)d\eta. \end{equation*} \end{lemma} \subsubsection{Linear and bilinear operators}\label{Lbl} To bound bilinear operators, we often use the following simple lemma. 
\begin{lemma}\label{L1easy} Assume $f_1,f_2,f_3\in L^2(\mathbb{R}^3)$, and $M:(\mathbb{R}^3)^2\to\mathbb{C}$ is a continuous compactly supported function. Then \begin{equation}\label{ener62} \Big|\int_{(\mathbb{R}^3)^2}M(\xi_1,\xi_2)\cdot\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\widehat{f_3}(-\xi_1-\xi_2)\,d\xi_1d\xi_2\Big|\lesssim \big\|\mathcal{F}^{-1}M\big\|_{L^1}\|f_1\|_{L^{p_1}}\|f_2\|_{L^{p_2}}\|f_3\|_{L^{p_3}}, \end{equation} for any exponents $p_1,p_2,p_3\in[1,\infty]$ satisfying $1/p_1+1/p_2+1/p_3=1$. As a consequence \begin{equation}\label{ener62.1} \Big\|\mathcal{F}_{\xi}^{-1}\Big\{\int_{\mathbb{R}^3}M(\xi,\eta)\widehat{f_2}(\eta)\widehat{f_3}(-\xi-\eta)\,d\eta\Big\}\Big\|_{L^{q}}\lesssim \big\|\mathcal{F}^{-1}M\big\|_{L^1}\|f_2\|_{L^{p_2}}\|f_{3}\|_{L^{p_3}}, \end{equation} if $q,p_2,p_3\in[1,\infty]$ satisfy $1/p_2+1/p_3=1/q$. \end{lemma} Our next lemma, which is also used to bound bilinear operators, shows that localization with respect to the phase is often a bounded operation. See \cite[Lemma 3.10]{DeIoPa} for the proof. \begin{lemma}\label{PhiLocLem} Let $s\in[2^{m}-1,2^m]$, $m\geq 0$, and $(1+\beta/20)p\geq -m$. With $\Lambda_0=0$ let\footnote{Notice that this is a slightly larger class of phases than those defined in section \ref{prelims}, i.e. it includes the contributions of the vorticity variables (corresponding to $\mu=0$ or $\nu=0$).} \begin{equation}\label{zax1} \Phi(\xi,\eta)=\Phi_{\sigma\mu\nu}(\xi,\eta)=\Lambda_\sigma(\xi)-\Lambda_\mu(\xi-\eta)-\Lambda_\nu(\eta),\qquad \sigma\in\mathcal{P},\,\mu,\nu\in\mathcal{P}\cup\{0\}. \end{equation} Assume that $1/2=1/q+1/r$, $\chi$ is a Schwartz function, and $\|\mathcal{F}^{-1}(n)\|_{L^1(\mathbb{R}^3\times\mathbb{R}^3)}\leq 1$. 
Then \begin{equation*} \begin{split} \Big\Vert \varphi_{\leq 10m}(\xi)\int_{\mathbb{R}^3}e^{is\Phi(\xi,\eta)}&\chi(2^{-p}\Phi(\xi,\eta))n(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)d\eta\Big\Vert_{L^2_\xi}\\ &\lesssim\sup_{t\in[s/10,10s]}\Vert e^{-it\Lambda_\mu}f\Vert_{L^q}\Vert e^{-it\Lambda_\nu}g\Vert_{L^r}+2^{-10m}\Vert f\Vert_{L^2}\Vert g\Vert_{L^2}, \end{split} \end{equation*} where the constant in the inequality only depends on the function $\chi$. \end{lemma} The nonlinearities in the dispersive system \eqref{KG} and the elliptic changes of variables \eqref{Alx11} and \eqref{Alx17} involve the Riesz transform. It is useful to note that our main spaces are stable with respect to the action of singular integrals. More precisely, for integers $n\geq 1$ let \begin{equation}\label{symb} \mathcal{S}^{n}:=\{q:\mathbb{R}^3\to\mathbb{C}:\|q\|_{\mathcal{S}^{n}}:=\sup_{\xi\in\mathbb{R}^3\setminus\{0\}}\sup_{|\rho|\leq n}|\xi|^{|\rho|}|D^\rho_\xi q(\xi)|<\infty\}, \end{equation} denote the classes of symbols satisfying differential inequalities of the H\"{o}rmander--Mikhlin type. \begin{lemma}\label{tech3} Assume that $\widehat{Q f}(\xi)=q(\xi)\cdot \widehat{f}(\xi)$ for some $q\in\mathcal{S}^{10}$. Then \begin{equation}\label{compat} \begin{split} \|Qf\|_{Z_1^\sigma}&\lesssim \|f\|_{Z_1^\sigma},\qquad\text{ for any }\sigma\in\{e,b\} \text{ and }f\in Z_1^\sigma,\\ \|(1+|x|^2)^{1/4}Qf\|_{L^2}&\lesssim \|(1+|x|^2)^{1/4}f\|_{L^2}. \end{split} \end{equation} \end{lemma} See \cite[Lemma 5.1]{IoPa2} for a similar proof. \subsubsection{The phase functions} We now collect several properties of the phase functions $\Phi=\Phi_{\sigma\mu\nu}$. In this subsection we assume that $\sigma,\mu,\nu\in\{e,b,-e,-b\}$ (so $\mu\neq 0$, $\nu\neq 0$). We start with a suitable description of the geometry of resonant sets.
See \cite[Proposition 8.2 and Remark 8.4]{DeIoPa} for proofs; the arguments provided in \cite{DeIoPa} are in two dimensions, but they extend with no difficulty to three dimensions. \begin{proposition}(Structure of resonance sets)\label{spaceres} The following claims hold: (i) If either $\nu+\mu=0$ or $\max(|\xi|,|\eta|,|\xi-\eta|)\geq 2^{\D_0}$ or $\min(|\xi|,|\eta|,|\xi-\eta|)\leq 2^{-\D_0}$ then \begin{equation}\label{res00}|\Phi(\xi,\eta)|\gtrsim (1+|\xi|+|\eta|)^{-1}\quad\mathrm{or}\quad|\nabla_{\eta}\Phi(\xi,\eta)|\gtrsim (1+|\xi|+|\eta|)^{-3}. \end{equation} (ii) If $\nu+\mu\neq 0$, then there exists a function $p=p_{\mu\nu}:\mathbb{R}^{3}\to\mathbb{R}^{3}$ such that $|p(\xi)|\lesssim|\xi|$ and $|p(\xi)|\approx |\xi|$ for small $\xi$, and \[\nabla_{\eta}\Phi(\xi,\eta)=0\quad\Leftrightarrow\quad \eta=p(\xi).\] There is an odd smooth function $p_{+}:\mathbb{R}\to\mathbb{R}$, such that $p(\xi)=p_{+}(|\xi|)\xi/|\xi|$. Moreover \begin{equation}\label{cas10} \text{ if }\quad|\eta|+|\xi-\eta|\leq U\in[1,\infty)\quad\text{ and }\quad|\nabla_{\eta}\Phi(\xi,\eta)|\leq\varep\quad\text{ then }\quad|\eta-p(\xi)|\lesssim\varep U^4, \end{equation} and, for any $s\in\mathbb{R}$, \begin{equation}\label{cas10.1} |D^\alpha p_+(s)|\lesssim_{\alpha} 1,\qquad |p'_+(s)|\gtrsim (1+|s|)^{-3},\qquad |1-p'_+(s)|\gtrsim (1+|s|)^{-3}. \end{equation} (iii) If $\nu+\mu\neq 0$, we define $p$ as above and $\Psi(\xi):=\Phi(\xi,p(\xi))$.
Then $\Psi$ is a radial function, and there exist two positive constants $\gamma_{1}<\gamma_{2}$, such that $\Psi(\xi)=0$ if and only if either\[\pm(\sigma,\mu,\nu)=(b,e,e)\quad\mathrm{and}\quad |\xi|=\gamma_{1},\] or \[\pm(\sigma,\mu,\nu)\in\{(b,e,b),(b,b,e)\}\quad\mathrm{and}\quad |\xi|=\gamma_{2}.\] \end{proposition} \begin{remark}\label{largeres} For $\D_0$ sufficiently large we define the function \begin{equation}\label{reccl} \Psi_{\sigma}^{\dagger}(\xi)=2^{\D_0}(1+|\xi|)\inf_{\mu,\nu\in\mathcal{P};\,\nu+\mu\neq 0}\left|\Psi_{\sigma\mu\nu}(\xi)\right| \end{equation} as in \eqref{psidag}. We have \begin{equation}\label{cas2} \Psi_{\pm b}^{\dagger}(\xi)\approx_d 2^{\D_0}\frac{\min\big(\big||\xi|-\gamma_{1}\big|,\big||\xi|-\gamma_{2}\big|\big)}{1+|\xi|}\qquad\text{ and }\qquad10\leq \Psi_{\pm e}^{\dagger}(\xi)\lesssim 1. \end{equation} \end{remark} Our last lemmas are connected to the application of Schur's test. See \cite[Lemma 8.7 and Proposition 8.8]{DeIoPa} for the proofs. We start with a general upper bound on the size of sublevel sets of functions. \begin{lemma}\label{lemma00} Suppose $L,R,M\in\mathbb{R}$, $M\geq \max(1,L,L/R)$, and $Y:B_R:=\{x\in\mathbb{R}^n:|x|<R\}\to\mathbb{R}$ is a function satisfying $\|\nabla Y\|_{C^{l}(B_R)}\leq M$, for some $l\geq 1$. Then, for any $\epsilon>0$, \begin{equation}\label{scale1} \big|\big\{x\in B_R:|Y(x)|\leq\epsilon\text{ and }\sum_{|\alpha|\leq l}|\partial_{x}^{\alpha}Y(x)|\geq L\big\}\big|\lesssim R^{n}ML^{-1-1/l}\epsilon^{1/l}. \end{equation} Moreover, if $n=l=1$, $K$ is a union of at most $A$ intervals, and $|Y'(x)|\geq L$ on $K$, then \begin{equation}\label{scale2}\left|\{x\in K:|Y(x)|\leq\epsilon\}\right|\lesssim AL^{-1}\epsilon.\end{equation} \end{lemma} As a consequence, we have precise bounds on the sublevel sets of our phase functions: \begin{lemma}\label{Shur2Lem} Assume that $R\geq 1$, $k\geq 0$, and $\epsilon\leq 1/2$.
Let \begin{equation*}E=\{(\xi,\eta):\max(|\xi|,|\eta|)\leq 2^{k},\,|\xi-\eta|\leq R,|\Phi(\xi,\eta)|\leq 2^{-k}\epsilon\}. \end{equation*} Then \begin{equation}\label{cas4} \sup_{\xi}\int_{\mathbb{R}^{3}}\mathbf{1}_{E}(\xi,\eta)\,d\eta+\sup_{\eta}\int_{\mathbb{R}^{3}}\mathbf{1}_{E}(\xi,\eta)\,d\xi\lesssim 2^{5k}R^3\epsilon\log(1/\epsilon). \end{equation} \end{lemma} \subsubsection{Linear Estimates} We prove now several linear estimates. Given a function $f$, $(k,j)\in\mathcal{J}$, and $n\in\{0,\ldots,j+1\}$ (recall the notation in subsection \ref{NotDef}) we define \begin{equation}\label{Alx100} f_{j,k}:=P_{[k-2,k+2]}Q_{jk}f,\qquad \widehat{f_{j,k,n}}(\xi):=\varphi_{-n}^{[-j-1,0]}(\Psi^\dagger_\sigma(\xi))\widehat{f_{j,k}}(\xi). \end{equation} Notice that $f_{j,k,n}$ is nontrivial only if $n=0$ or ($n\geq 1$, $\sigma=b$, and $2^k\approx 1$). Moreover, \begin{equation}\label{Alx100.5} f_{j,k}=\sum_{n\in[0,j+1]}f_{j,k,n},\qquad P_kf=\sum_{j\geq \max(-k,0)}f_{j,k},\qquad f=\sum_{k\in\mathbb{Z}}P_kf. \end{equation} \begin{lemma}\label{LinEstLem} (i) Assume $\sigma\in\{e,b\}$ and \begin{equation}\label{Zs} \Vert f\Vert_{Z_1^\sigma}\leq 1. \end{equation} If $m\geq 0$ and $|t|\in[2^{m}-1,2^{m+1}]$ then \begin{equation}\label{LinftyBd} \| e^{-it\Lambda_\sigma}f_{j,k,n}\|_{L^\infty}\lesssim\min\big(2^{3k/2}2^{-(1+\beta)j}2^{-n/2+4\beta n},2^{5k^+/2}2^{-3m/2}2^{(1/2-\beta)j}2^{4\beta n}\big). \end{equation} As a consequence, for any $k\in\Z$ one has \begin{equation}\label{LinftyBd2} \Vert e^{-it\Lambda_\sigma}P_kf\Vert_{L^\infty}\lesssim 2^{-(1+\beta)m}2^{(1/2-\b)\,k} 2^{2k^{+}}. \end{equation} (ii) Assume $\sigma\in\{e,b\}$, $N\geq 10$, and \begin{equation}\label{Zs2} \Vert f\Vert_{Z_1^\sigma}+\Vert f\Vert_{\H^{N}}\leq 1. 
\end{equation} Then, for any $(k,j)\in\mathcal{J}$ and $n\in\{0,\ldots,j+1\}$, \begin{equation}\label{RadL2} \big\Vert \sup_{\theta\in\mathbb{S}^2}|\widehat{f_{j,k,n}}(r\theta)|\,\big\Vert_{L^2(r^2dr)}+\big\Vert \sup_{\theta\in\mathbb{S}^2}|f_{j,k,n}(r\theta)|\,\big\Vert_{L^2(r^2dr)}\lesssim 2^{-(1-2/N)((1+\b)j-4\beta n)}\,. \end{equation} Also, we have \begin{equation}\label{FLinftybd} \|\widehat{f_{j,k,n}}\|_{L^\infty}\lesssim 2^{j/2-k}2^{-(1-2/N)((1+\b)j-4\beta n)}, \end{equation} \begin{equation}\label{FLinftybdDER} \|D^{\alpha}\widehat{f_{j,k,n}}\|_{L^\infty}\lesssim_{|\alpha|} 2^{|\alpha|j}2^{j/2-k}2^{-(1-2/N)((1+\b)j-4\beta n)}. \end{equation} (iii) For any $f\in\H^2$ we have \begin{equation}\label{Zs3} \|f_{j,k}\|_{L^\infty}\lesssim 2^{k/2-j}\|f\|_{\H^2}. \end{equation} \end{lemma} \begin{proof} (i) The hypothesis gives \begin{equation}\label{Alx101} \Vert f_{j,k,n}\Vert_{L^2}\lesssim 2^{-(1+\b)j+4\beta n}. \end{equation} Using the definition, \begin{equation*} \Vert e^{-it\Lambda_\sigma}f_{j,k,n}\Vert_{L^\infty}\lesssim \Vert \widehat{f_{j,k,n}}\Vert_{L^1}\lesssim 2^{3k/2}2^{-(1+\beta)j}2^{-n/2+4\beta n}. \end{equation*} On the other hand, if $m\geq 10$ then the usual dispersion estimate gives \begin{equation*} \Vert e^{-it\Lambda_\sigma}f_{j,k,n}\Vert_{L^\infty}\lesssim 2^{5k^+/2}2^{-3m/2}\Vert f_{j,k,n}\Vert_{L^1}\lesssim 2^{5k^+/2}2^{-3m/2}2^{(1/2-\beta)j}2^{4\beta n}. \end{equation*} The bound \eqref{LinftyBd} follows. The bound \eqref{LinftyBd2} follows also, by summation over $j$ and $n$. (ii) The hypothesis \eqref{Zs2} shows that $\Vert f_{j,k,n}\Vert_{H^N_{\Omega}}\lesssim 1$, where \begin{equation*} \|g\|_{H^m_{\Omega}}:=\sum_{\beta_1+\beta_2+\beta_3\leq m}\|\Omega_1^{\beta_1}\Omega_2^{\beta_2}\Omega_3^{\beta_3}g\|_{L^2}. 
\end{equation*} The first inequality in \eqref{RadL2} follows from the interpolation inequality \begin{equation*} \|f\|_{H^p_\Omega}\lesssim \Vert f\Vert_{H^N_{\Omega}}^{p/N}\,\|f\|_{2}^{1-p/N},\qquad p\in[0,N]\cap\mathbb{Z}, \end{equation*} and the Sobolev embedding (along the spheres $\mathbb{S}^2$) \begin{equation}\label{Zs4} \begin{split} \big\Vert \sup_{\theta\in\mathbb{S}^2}|\widehat{f_{j,k,n}}(r\theta)|\,\big\Vert_{L^2(r^2dr)} &\lesssim \sum_{m_1+m_2+m_3\leq 2}\Vert \Omega_1^{m_1}\Omega_2^{m_2}\Omega_3^{m_3}\widehat{f_{j,k,n}}\Vert_{L^2}\lesssim \Vert \widehat{f_{j,k,n}}\Vert_{H^2_\Omega}. \end{split} \end{equation} The second inequality follows similarly. To prove \eqref{FLinftybd}, for fixed $\theta\in\mathbb{S}^2$ we estimate \begin{equation*} \|\widehat{f_{j,k,n}}(r\theta)\|_{L^\infty_r}\lesssim 2^{j/2}\|\widehat{f_{j,k,n}}(r\theta)\|_{L^2_r}+2^{-j/2}\|(\partial_r\widehat{f_{j,k,n}})(r\theta)\|_{L^2_r}\lesssim 2^{j/2}2^{-k}\|\widehat{f_{j,k,n}}(r\theta)\|_{L^2(r^2dr)}, \end{equation*} using the localization of the function $Q_{j,k}f$ in the physical space. The desired bounds \eqref{FLinftybd} follow from \eqref{RadL2}. The bounds in \eqref{FLinftybdDER} follow as well, once we notice that derivatives in $\xi$ correspond to multiplication by factors of $2^j$, due to the space localization. (iii) We may assume $\|f\|_{\H^2}=1$. Using Sobolev embedding on the spheres, as in \eqref{Zs4}, \begin{equation*} \big\Vert \sup_{\theta\in\mathbb{S}^2}|Q_{j,k}f(r\theta)|\,\big\Vert_{L^2(r^2dr)} \lesssim 1. \end{equation*} The desired estimate follows in the same way as the bound \eqref{FLinftybd}. \end{proof} \section{Energy estimates}\label{EneEst} In this section we prove our main energy estimates. In the rest of the paper we often use the standard Einstein convention that repeated indices are summed.
We work in the physical space and divide the proofs into two parts: a high order estimate for the full system (the $\widetilde{\H}^{N_0}$ norm in \eqref{bootstrap3}), and a weighted estimate only for the vorticity components (the estimate \eqref{bootstrapV3}). \subsection{The total energy of the system}\label{TotalEnergy} In this subsection we prove the following: \begin{proposition}\label{BootstrapEE1} With the hypothesis in Proposition \ref{bootstrap}, we have, for any $t\in[0,T]$, \begin{equation}\label{bootstrapimp3} \|(n(t),v(t),E(t),B(t))\|_{\widetilde{\H}^{N_0}}\leq \overline{C}\bar{\eps}/2. \end{equation} \end{proposition} \begin{proof} Recall the real-valued variables $F,G,Z,W$ defined in \eqref{Alx11}, \begin{equation}\label{Alx11.1} F=|\nabla|^{-1}\hbox{div}(v),\qquad G=|\nabla|^{-1}\nabla\times v,\qquad Z=|\nabla|^{-1}\hbox{div}(E),\qquad W=|\nabla|^{-1}\nabla\times E, \end{equation} and the system \eqref{Alx13} (written now in terms of the variables $F,G,Z,W,B$),\footnote{It is important to write the system in terms of these variables, not the more physical variables $n,v,E,B$, in order to be able to prove energy estimates that include the rotation vector-fields.} \begin{equation}\label{Alx13.5} \begin{cases} \partial_t F+(1+d|\nabla|^2)Z&=-R\cdot(v\cdot \nabla v)-R\cdot (v\times B),\\ \partial_t G+W&=-R\times(v\cdot \nabla v)-R\times (v\times B),\\ \partial_t Z-F&=R\cdot (nv),\\ \partial_t W-G-|\nabla|B&=R\times (nv),\\ \partial_t B+|\nabla|W&=0. \end{cases} \end{equation} Recall that $\hbox{div}(B)=0$ and $n=-|\nabla|Z$. {\bf{Step 1.}} For $m\in[0,N_0]\cap\mathbb{Z}$ we define the energy functionals $\mathcal{E}_{m}:[0,T]\to\mathbb{R}$, \begin{equation}\label{entot1} \begin{split} \mathcal{E}_{m}(t):=\sum_{\mathcal{L}\in\mathcal{V}_{m}}\int_{\mathbb{R}^3}\big\{d|\L n(t)|^2&+(1+n(t))[|\L F(t)|^2+|\L G(t)|^2]\\ &+|\L Z(t)|^2+|\L W(t)|^2+|\L B(t)|^2\big\}\,dx. 
\end{split} \end{equation} Notice that the case $m=0$ is similar (but not identical, because of the different cubic correction) to the conserved physical energy in \eqref{EnCons}. Notice also that, for any $t\in[0,T]$, \begin{equation*} \mathcal{E}_{N_0}(t)\approx \|(n,F,G,Z,W,B)(t)\|_{\H^{N_0}}^2\approx \|(n,v,E,B)(t)\|_{\widetilde{\H}^{N_0}}^2. \end{equation*} In particular, there is a constant $C_1\geq 1$ such that, for any $t\in[0,T]$, \begin{equation}\label{Alx13.6} C_1^{-1}\mathcal{E}_{N_0}(t)\leq \|(n,v,E,B)(t)\|_{\widetilde{\H}^{N_0}}^2\leq C_1\mathcal{E}_{N_0}(t). \end{equation} We would like to estimate now the energy increment. For $\mathcal{L}\in\mathcal{V}_{N_0}$ let $\mathcal{E}_{\mathcal{L}}$ denote the term in \eqref{entot1} corresponding to the differential operator $\mathcal{L}$. We calculate, using \eqref{Alx13.5}, \begin{equation*} \begin{split} \frac{d}{dt}\mathcal{E}_{\mathcal{L}}&=\int_{\mathbb{R}^3}\big\{2d\mathcal{L}n\cdot \mathcal{L}[-|\nabla|F-\nabla\cdot(nv)]-[|\nabla|F+\nabla\cdot(nv)]\cdot [|\L F|^2+|\L G|^2]\\ &+2(1+n)\L F\cdot \mathcal{L} [-(1+d|\nabla|^2)Z+\mathcal{N}_F]+2(1+n)\L G\cdot\mathcal{L}[-W+\mathcal{N}_G]\\ &+2\L Z\cdot\mathcal{L}[F+R\cdot(nv)]+2\L W\cdot \mathcal{L}[G+|\nabla|B+R\times(nv)]-2\L B\cdot\L|\nabla|W\big\}\,dx, \end{split} \end{equation*} where $\mathcal{N}_F$ and $\mathcal{N}_G$ denote the nonlinearities corresponding to the equations for $F$ and $G$ in \eqref{Alx13.5}. Since $\mathcal{L}$ and $|\nabla|$ commute, all the quadratic terms in the expression above cancel, so \begin{equation}\label{Alx13.8} \begin{split} \partial_t\mathcal{E}_{\mathcal{L}}&=\int_{\mathbb{R}^3}\big\{-2d\mathcal{L}n\cdot \mathcal{L}(\nabla\cdot(nv))-[|\nabla|F+\nabla\cdot(nv)]\cdot [|\L F|^2+|\L G|^2]\\ &+2(1+n)\L F\cdot \mathcal{L} \mathcal{N}_F-2n\L F\cdot \mathcal{L} (1+d|\nabla|^2)Z+2(1+n)\L G\cdot\mathcal{L}\mathcal{N}_G-2n\L G\cdot\mathcal{L}W\\ &+2\L Z\cdot\mathcal{L}(R\cdot(nv))+2\L W\cdot \mathcal{L}(R\times(nv))\big\}\,dx. 
\end{split} \end{equation} {\bf{Step 2.}} We would like to show that, for any $t\in[0,T]$, \begin{equation}\label{Alx13.7} |\partial_t\mathcal{E}_{\mathcal{L}}(t)|\lesssim \|(n,v,E,B)(t)\|_{\widetilde{\H}^{N_0}}^2\|(n,v,E,B)(t)\|_{\mathcal{W}^{N_0/2,\infty}}. \end{equation} All the terms in \eqref{Alx13.8} are at least cubic, but we also need to avoid potential loss of derivatives. Let $\mathcal{A}_2(t):=\|(n,v,E,B)(t)\|_{\widetilde{\H}^{N_0}}$ and $\mathcal{A}_\infty(t):=\|(n,v,E,B)(t)\|_{\mathcal{W}^{N_0/2,\infty}}$. Notice that \begin{equation*} \mathcal{A}_\infty(t)\lesssim \mathcal{A}_2(t)\lesssim \bar{\eps}\qquad\text{ for any }t\in[0,T]. \end{equation*} Some of the terms in \eqref{Alx13.8} can be estimated easily, using the definitions \eqref{Alx11.1}, i.e. \begin{equation*} \begin{split} \Big|\int_{\mathbb{R}^3}&[|\nabla|F+\nabla\cdot(nv)]\cdot [|\L F|^2+|\L G|^2]\,dx\Big|+\Big|\int_{\mathbb{R}^3}n\L F\cdot \mathcal{L} Z\,dx\Big|+\Big|\int_{\mathbb{R}^3}n\L G\cdot\mathcal{L}W\,dx\Big|\\ &+\Big|\int_{\mathbb{R}^3}\L Z\cdot\mathcal{L}(R\cdot(nv))\,dx\Big|+\Big|\int_{\mathbb{R}^3}\L W\cdot \mathcal{L}(R\times(nv))\,dx\Big|\lesssim \mathcal{A}_2^2\mathcal{A}_\infty, \end{split} \end{equation*} since these terms do not lose derivatives. For the remaining terms, we extract first the components that could lose derivatives. Clearly \begin{equation*} \begin{split} \big\|\mathcal{L}(\nabla\cdot(nv))-[n\L\partial_j v_j+v_j\partial_j\L n]\big\|_{L^2}&\lesssim \mathcal{A}_2\mathcal{A}_\infty,\\ \big\|\mathcal{L}\mathcal{N}_F+R_j(v_k\partial_k\L v_j)\big\|_{L^2}&\lesssim \mathcal{A}_2\mathcal{A}_\infty,\\ \big\|(\mathcal{L}\mathcal{N}_G)_j+\in_{jab}R_a(v_k\partial_k\L v_b)\big\|_{L^2}&\lesssim \mathcal{A}_2\mathcal{A}_\infty. 
\end{split} \end{equation*} Using the general bound \begin{equation}\label{Alx13.11} \|R_j(f\cdot |\nabla|g)-f\cdot R_j|\nabla|g\|_{L^2}\lesssim \|g\|_{L^2}\big(\sum_{k\in\mathbb{Z}}2^k\|P_kf\|_{L^\infty}\big), \end{equation} we can further replace $R_j(v_k\partial_k\L v_j)$ by $v_k\cdot \partial_k\L R_jv_j$ and $\in_{jab}R_a(v_k\partial_k\L v_b)$ by $v_k\cdot\in_{jab}\partial_k\L R_av_b$ at the expense of acceptable errors. For \eqref{Alx13.7} it remains to prove that \begin{equation}\label{Alx13.9} |\mathcal{E}''_{\mathcal{L}}(t)|\lesssim \mathcal{A}_2(t)^2\mathcal{A}_\infty(t), \end{equation} where \begin{equation*} \begin{split} \mathcal{E}''_{\mathcal{L}}=\int_{\mathbb{R}^3}\big\{&-2d\mathcal{L}n\cdot [n\L\partial_j v_j+v_j\partial_j\L n]-2(1+n)\L F\cdot v_k\cdot \partial_k\L R_jv_j\\ &-2dn\L F\cdot \mathcal{L} |\nabla|^2Z-2(1+n)\L G_j\cdot v_k\cdot\in_{jab}\partial_k\L R_av_b\big\}\,dx. \end{split} \end{equation*} Since $R_jv_j=F$ and $\in_{jab}R_av_b=G_j$ we have \begin{equation*} \Big|\int_{\mathbb{R}^3}(1+n)\L F\cdot v_k\cdot \partial_k\L R_jv_j\,dx\Big|+\Big|\int_{\mathbb{R}^3}(1+n)\L G_j\cdot v_k\cdot\in_{jab}\partial_k\L R_av_b\,dx\Big|\lesssim \mathcal{A}_2^2\mathcal{A}_\infty. \end{equation*} We also have, using integration by parts \begin{equation*} \Big|\int_{\mathbb{R}^3}-2d\mathcal{L}n\cdot v_j\partial_j\L n\,dx\Big|\lesssim \mathcal{A}_2^2\mathcal{A}_\infty. \end{equation*} Combining the remaining terms in $\mathcal{E}''_{\mathcal{L}}$ and recalling that $n=-|\nabla|Z$ and $\partial_jv_j=|\nabla|F$, it remains to show that \begin{equation}\label{Alx13.10} \begin{split} \Big|\int_{\mathbb{R}^3}\big\{ -n\mathcal{L}n\cdot \L |\nabla|F+n\L F\cdot \mathcal{L} |\nabla|n\big\}\,dx\Big|\lesssim \mathcal{A}_2^2\mathcal{A}_\infty. \end{split} \end{equation} This follows using again the bound \eqref{Alx13.11} and the identity $-|\nabla|=R_j\partial_j$. The desired bound \eqref{Alx13.7} follows. 
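We briefly indicate why the commutator bound \eqref{Alx13.11} holds (a sketch only; the complete argument uses a standard paraproduct decomposition). Recalling that $R_j=|\nabla|^{-1}\partial_j$, the operator in the left-hand side of \eqref{Alx13.11} is bilinear with symbol \begin{equation*} \mathfrak{c}_j(\xi,\eta)=i\Big(\frac{\xi_j}{|\xi|}-\frac{\eta_j}{|\eta|}\Big)|\eta|,\qquad \Big|\frac{\xi}{|\xi|}-\frac{\eta}{|\eta|}\Big|\lesssim\frac{|\xi-\eta|}{\max(|\xi|,|\eta|)}, \end{equation*} so that $|\mathfrak{c}_j(\xi,\eta)|\lesssim |\xi-\eta|$: the commutator gains one full derivative on the factor $f$. Decomposing $f=\sum_{k\in\mathbb{Z}}P_kf$ and estimating each dyadic piece in $L^\infty\times L^2$ then produces the right-hand side of \eqref{Alx13.11}.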
{\bf{Step 3.}} Given \eqref{Alx13.6}, we estimate first \begin{equation*} \begin{split} \|(n,v,E,B)(t)\|_{\widetilde{\H}^{N_0}}^2&\leq C_1\mathcal{E}_{N_0}(0)+C_1\int_{0}^t|(\partial_s\mathcal{E}_{N_0})(s)|\,ds\\ &\leq C_1^2\|(n,v,E,B)(0)\|_{\widetilde{\H}^{N_0}}^2+C_1\int_{0}^t|(\partial_s\mathcal{E}_{N_0})(s)|\,ds. \end{split} \end{equation*} Since $\|(n,v,E,B)(0)\|_{\widetilde{\H}^{N_0}}^2\lesssim\bar{\eps}^2$ (see \eqref{bootstrap1}), using also \eqref{Alx13.7}, for \eqref{bootstrapimp3} it suffices to show that \begin{equation}\label{Alx14.1} \int_0^T\|(n,v,E,B)(t)\|_{\mathcal{W}^{N_0/2,\infty}}\,dt\lesssim \overline{\eps}. \end{equation} Using \eqref{Alx17} we have \begin{equation*} \|(n,v,E,B)(t)\|_{\mathcal{W}^{N_0/2,\infty}}\lesssim \sum_{k\in\mathbb{Z},\,\mathcal{L}\in\mathcal{V}_{N_0/2}}\big\{\|P_k\mathcal{L}U_e(t)\|_{L^\infty}+\|P_k\mathcal{L}U_b(t)\|_{L^\infty}+\|P_k\mathcal{L}Y(t)\|_{L^\infty}\big\}. \end{equation*} Recall that $U_e(t)=e^{-it\Lambda_e}V_e(t)$, $U_b(t)=e^{-it\Lambda_b}V_b(t)$, and $\|(V_e(t),V_b(t))\|_{Z}\lesssim\overline{\eps}$, see \eqref{bootstrap2}. The $L^\infty$ estimates \eqref{LinftyBd2} show that, for any $t\in[0,T]$, \begin{equation*} \sum_{k\in\mathbb{Z},\,\mathcal{L}\in\mathcal{V}_{N_0/2}}\big\{\|P_k\mathcal{L}U_e(t)\|_{L^\infty}+\|P_k\mathcal{L}U_b(t)\|_{L^\infty}\big\}\lesssim \bar{\eps}(1+t)^{-1-\beta}. \end{equation*} Moreover, recalling the bootstrap assumption \eqref{bootstrapV2}, for any $t\in[0,T]$, \begin{equation*} \sum_{k\in\mathbb{Z},\,\mathcal{L}\in\mathcal{V}_{N_0/2}}\|P_k\mathcal{L}Y(t)\|_{L^\infty}\lesssim \delta_0. \end{equation*} The desired inequality \eqref{Alx14.1} follows since $T\leq\overline{\eps}/\delta_0$, which completes the proof. 
\end{proof} \subsection{Control of the vorticity energy}\label{vortEn} In this subsection we prove the following: \begin{proposition}\label{BootstrapEE2} With the hypothesis in Proposition \ref{bootstrap}, we have, for any $t\in[0,T]$, \begin{equation}\label{bootstrapimp3.5} \|(1+|x|^2)^{1/4}Y(t)\|_{\H^{N_1}}\leq \overline{C}\delta_0/2. \end{equation} \end{proposition} \begin{proof} We define vorticity energy functionals \begin{equation}\label{env1} \mathcal{E}^Y_{N_1}(t):=\sum_{\mathcal{L}\in\mathcal{V}_{N_1}}\mathcal{E}^Y_{\L}(t),\qquad \mathcal{E}^Y_{\L}(t):=\int_{\mathbb{R}^3} (1+|x|^2)^{1/2}|\L Y(x,t)|^2\,dx. \end{equation} Notice that there is a constant $C_2\geq 1$ such that, for any $t\in[0,T]$, \begin{equation}\label{Alx14.2} C_2^{-1}\mathcal{E}^Y_{N_1}(t)\leq\|(1+|x|^2)^{1/4}Y(t)\|^2_{\mathcal{H}^{N_1}}\leq C_2\mathcal{E}^Y_{N_1}(t). \end{equation} To prove the proposition we need to estimate the increment of the vorticity energy. More precisely, we would like to show that \begin{equation}\label{Alx14.3} \big|\partial_t\mathcal{E}^Y_{\L}(t)\big|\lesssim \delta_0^3+\overline{\eps}(1+t)^{-1-\beta}\delta_0^2. \end{equation} Indeed, assuming this, we could estimate, for any $t\in[0,T]$, \begin{equation*} \begin{split} \|(1+|x|^2)^{1/4}Y(t)\|^2_{\H^{N_1}}&\leq C_2\mathcal{E}^Y_{N_1}(0)+C_2\int_0^T\big|\partial_t\mathcal{E}^Y_{N_1}(t)\big|\,dt\\ &\leq C_2^2\|(1+|x|^2)^{1/4}Y(0)\|^2_{\H^{N_1}}+C'\int_0^T(\delta_0^3+\overline{\eps}(1+t)^{-1-\beta}\delta_0^2)\,dt\\ &\leq C_2^2\delta_0^2+C''\overline{\eps}\delta_0^2, \end{split} \end{equation*} where we have used the assumptions \eqref{bootstrapV1} and $T\leq\overline{\eps}/\delta_0$. The desired conclusion \eqref{bootstrapimp3.5} follows, provided that $C_2\ll \overline{C}\ll\overline{\eps}^{\,-1/10}$. To prove \eqref{Alx14.3}, using the last equation in \eqref{KG} we calculate \begin{equation*} \partial_t\mathcal{E}^Y_{\L}=\int_{\mathbb{R}^3} 2(1+|x|^2)^{1/2}\L Y\cdot\L[\nabla\times(v\times Y)] \,dx. 
\end{equation*} Since $\mathrm{div}(Y)=0$ we calculate \begin{equation*} [\nabla\times(v\times Y)]_j=Y_l\partial_lv_j-Y_j\partial_lv_l-v_l\partial_lY_j. \end{equation*} Recall also that $v=-R\Im(U_e)+R\times\Lambda_b^{-1}\Im(U_b)-R\times\Lambda_b^{-2}|\nabla|Y$, see \eqref{Alx17}. Therefore, after integration by parts to remove the potential derivative loss coming from the term $v_l\partial_lY_j$, we see that $|\partial_t\mathcal{E}^Y_{\L}|$ is bounded by a sum of integrals of the form \begin{equation}\label{Alx14.4} C\int_{\mathbb{R}^3}(1+|x|^2)^{1/2}|\L Y|\cdot |Q_1\mathcal{L}_1^a Y|\cdot\big[|Q_2\mathcal{L}_2^b Y|+|\Lambda_2Q_2\mathcal{L}_2^b U_\sigma|\big]\,dx, \end{equation} where $a+b\leq N_1$, $\mathcal{L}_1^a\in\mathcal{V}_a$, $\mathcal{L}_2^b\in\mathcal{V}_b$, $Q_1,Q_2$ are operators defined by $\mathcal{S}^{10}$ symbols as in Lemma \ref{tech3}, and $\sigma\in\{e,b\}$. In view of \eqref{compat}, and using the bound \begin{equation*} \big\|(1+|x|^2)^{1/4}\L'Y(t)\big\|_{L^2}\lesssim \delta_0 \end{equation*} for any $t\in[0,T]$ and $\L'\in\mathcal{V}_{N_1}$ (see \eqref{bootstrapV2} and \eqref{Alx14.2}), the integral in \eqref{Alx14.4} is dominated by \begin{equation*} C\delta_0^3+C\delta_0^2\|\Lambda_2Q_2\mathcal{L}_2^b U_\sigma\|_{L^\infty}. \end{equation*} The desired bound \eqref{Alx14.3} follows once we notice that, using \eqref{LinftyBd2}, \begin{equation*} \begin{split} \|\Lambda_2Q_2\mathcal{L}_2^b U_\sigma(t)\|_{L^\infty}&\lesssim \sum_{k\in\mathbb{Z}}2^{k^+}\|P_ke^{-it\Lambda_\sigma}\mathcal{L}_2^b V_\sigma(t)\|_{L^\infty}\\ &\lesssim \sum_{k\in\mathbb{Z}}2^{k^+}(1+t)^{-1-\beta}2^{2k^+}2^{(1/2-\beta)k}\|P_k\mathcal{L}_2^b V_\sigma(t)\|_{Z_1^\sigma}\\ &\lesssim (1+t)^{-1-\beta}\sup_{|\alpha|\leq 4}\|D^\alpha\mathcal{L}_2^b V_\sigma(t)\|_{Z_1^\sigma}. \end{split} \end{equation*} This is bounded by $C\overline{\eps}(1+t)^{-1-\beta}$, in view of the bootstrap assumption \eqref{bootstrap2}.
The desired conclusion \eqref{Alx14.3} follows, which completes the proof of the proposition. \end{proof} \section{Improved control of the $Z$-norm, I: setup and preliminary estimates}\label{ParT} In the next three sections we prove the following bootstrap estimate for the $Z$-norm. \begin{proposition}\label{BootstrapZNorm} With the hypothesis in Proposition \ref{bootstrap}, we have, for any $t\in[0,T]$, \begin{equation}\label{bootstrapimp3.7} \|(V_e(t),V_b(t))\|_{Z}\leq \overline{C}\bar{\eps}/2. \end{equation} \end{proposition} \subsection{The Duhamel formula} The functions $U_e$, $U_b$, $Y$ satisfy the equations (see \eqref{KG}) \begin{equation}\label{za1} \begin{split} (\partial_t+i\Lambda_e) U_e&= \Lambda_e (R\cdot [n v])+i |\nabla| (|v|^2)/2-i R\cdot (v\times Y),\\ (\partial_t+i\Lambda_b) U_b&= R\times [nv]- i \Lambda_b^{-1} R\times (v\times Y),\\ \partial_tY&=\nabla\times(v\times Y). \end{split} \end{equation} We define $V_{\sigma}(t)=e^{it\Lambda_{\sigma}}U_{\sigma}(t)$, $\sigma\in\{e,b\}$, as before. Also, for simplicity of notation, let \begin{equation}\label{za2} U_0:=Y,\qquad V_0:=Y,\qquad \Lambda_0:=0. \end{equation} Since \begin{equation}\label{za2.5} n=-|\nabla|\Lambda_e^{-1}\Re(U_e),\qquad v=-R\Im(U_e)+R\times\Lambda_b^{-1}\Im(U_b)-R\times\Lambda_b^{-2}|\nabla|Y, \end{equation} see \eqref{Alx17}, our system \eqref{za1} can be written in the form \begin{equation}\label{system8} (\partial_{t}+i\Lambda_{\sigma})U_{\sigma}=\sum_{\mu,\nu\in\mathcal{P}'}\mathcal{N}_{\sigma\mu\nu}(U_{\mu},U_{\nu}) \end{equation} for $\sigma\in\{e,b,0\}$. Here $\mathcal{P}':=\{e,b,-e,-b,0\}$ and the nonlinearities are defined by \begin{equation}\label{system9} \left(\mathcal{F}\mathcal{N}_{\sigma\mu\nu}(f,g)\right)(\xi)=\int_{\mathbb{R}^{3}}\mathfrak{m}_{\sigma\mu\nu}(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta \end{equation} for suitable multipliers $\mathfrak{m}_{\sigma\mu\nu}$ which are sums of functions of the form $m(\xi)m'(\xi-\eta)m''(\eta)$.
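To illustrate this product structure, consider for instance the quadratic term $\Lambda_e(R\cdot[nv])$ in the first equation of \eqref{za1}. Substituting $n=-|\nabla|\Lambda_e^{-1}\Re(U_e)$ and the component $-R_j\Im(U_e)$ of $v_j$ from \eqref{za2.5}, and decomposing $\Re(U_e)$ and $\Im(U_e)$ in terms of $U_{\pm e}$ (with $U_{-e}:=\overline{U_e}$), one obtains, schematically and up to constants and signs, multipliers of the form \begin{equation*} \mathfrak{m}(\xi,\eta)=\Lambda_e(\xi)\frac{\xi_j}{|\xi|}\cdot\frac{|\xi-\eta|}{\Lambda_e(\xi-\eta)}\cdot\frac{\eta_j}{|\eta|}, \end{equation*} which, after summation over $j$, are indeed sums of products $m(\xi)m'(\xi-\eta)m''(\eta)$. The other quadratic terms are similar.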
In terms of the functions $V_\sigma$, the Duhamel formula is, in the Fourier space, \begin{equation}\label{duhamelDER} (\partial_s\widehat{V_{\sigma}})(\xi,s)=\sum_{\mu,\nu\in\mathcal{P}'}\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\mathfrak{m}_{\sigma\mu\nu}(\xi,\eta)\widehat{V_{\mu}}(\xi-\eta,s)\widehat{V_{\nu}}(\eta,s)\,d\eta, \end{equation} where \begin{equation*} \Phi_{\sigma\mu\nu}(\xi,\eta)=\Lambda_\sigma(\xi)-\Lambda_{\mu}(\xi-\eta)-\Lambda_\nu(\eta),\qquad\mu,\nu\in\mathcal{P}'=\{e,b,-e,-b,0\}. \end{equation*} In integral form this gives, for $\sigma\in\{e,b\}$ and $t\in[0,T]$, \begin{equation}\label{duhamel}\widehat{V_{\sigma}}(\xi,t)=\widehat{V_{\sigma}}(\xi,0)+\sum_{\mu,\nu\in\mathcal{P}'}\int_{0}^{t}\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\mathfrak{m}_{\sigma\mu\nu}(\xi,\eta)\widehat{V_{\mu}}(\xi-\eta,s)\widehat{V_{\nu}}(\eta,s)\,d\eta ds. \end{equation} A rotation vector-field $\Omega\in\{\Omega_1,\Omega_2,\Omega_3\}$ acts on the Duhamel formula according to \begin{equation*} \begin{split} \Omega_\xi&(\partial_s\widehat{V_{\sigma}})(\xi,s)=\sum_{\mu,\nu\in\mathcal{P}'}\int_{\mathbb{R}^3}(\Omega_\xi+\Omega_\eta)\big[e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\mathfrak{m}_{\sigma\mu\nu}(\xi,\eta)\widehat{V_{\mu}}(\xi-\eta,s)\widehat{V_{\nu}}(\eta,s)\big]\,d\eta\\ &=\sum_{\mu,\nu\in\mathcal{P}'}\sum_{a_1+a_2+a_3=1}\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}(\Omega_\xi+\Omega_\eta)^{a_1}\mathfrak{m}_{\sigma\mu\nu}(\xi,\eta)(\Omega^{a_2}\widehat{V_{\mu}})(\xi-\eta,s)(\Omega^{a_3}\widehat{V_{\nu}})(\eta,s)\,d\eta. \end{split} \end{equation*} We iterate this formula. 
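The first equality above relies on the rotation invariance of the phases: since the functions $\Lambda_\sigma$ are radial, for any $\Omega\in\{\Omega_1,\Omega_2,\Omega_3\}$ we have \begin{equation*} (\Omega_\xi+\Omega_\eta)\big[\Lambda_\mu(\xi-\eta)\big]=(\Omega\Lambda_\mu)(\xi-\eta)=0,\qquad \Omega_\xi\big[\Lambda_\sigma(\xi)\big]=\Omega_\eta\big[\Lambda_\nu(\eta)\big]=0, \end{equation*} hence $(\Omega_\xi+\Omega_\eta)e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}=0$. Moreover, since the vector-fields $\Omega_a$ are divergence-free, $\int_{\mathbb{R}^3}\Omega_\eta G(\eta)\,d\eta=0$ for suitable functions $G$, which is why $\Omega_\xi$ can be replaced by $\Omega_\xi+\Omega_\eta$ inside the integral and no vector-field falls on the exponential.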
It follows that for any $\L\in\mathcal{V}_{N_1}$ and $\alpha$ we have \begin{equation}\label{DuhamelDER2} \begin{split} \partial_s\widehat{f^{\alpha,\L}_{\sigma}}(\xi,s)=\sum_{\mu,\nu\in\mathcal{P}'}\sum_{|\alpha_1|+|\alpha_2|=|\alpha|}\sum_{(\L_1,\L_2,\L_3)\in X_{\L}}\int_{\mathbb{R}^3}&e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\mathfrak{m}_{\sigma\mu\nu}^{\L_3}(\xi,\eta)\\ &\times\widehat{f^{\alpha_1,\L_1}_{\mu}}(\xi-\eta,s)\widehat{f^{\alpha_2,\L_2}_{\nu}}(\eta,s)\,d\eta, \end{split} \end{equation} where here we set \begin{equation*} X_{\L}:=\{(\L_1,\L_2,\L_3)\in \mathcal{V}_{N_1}\,|\,|\L_1|+ |\L_2|+|\L_3|\leq |\L|\,\}\,, \end{equation*} with $|\L|$ designating the order of the differential operator $\L$, and \begin{equation}\label{za3} f^{\beta,\L}_\theta:=D^\beta\L V_\theta,\qquad \theta\in\mathcal{P}',\,|\beta|\leq 4,\,\L\in\mathcal{V}_{N_1}. \end{equation} In integral form this becomes \begin{equation}\label{duhamel2} \begin{split} \widehat{f^{\alpha,\L}_{\sigma}}(\xi,t)=\widehat{f^{\alpha,\L}_{\sigma}}(\xi,0)+\sum_{\mu,\nu\in\mathcal{P}'}&\sum_{|\alpha_1|+|\alpha_2|=|\alpha|}\sum_{(\L_1,\L_2,\L_3)\in X_{\L}}\int_0^t\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\\ &\times\mathfrak{m}_{\sigma\mu\nu}^{\L_3}(\xi,\eta)\widehat{f^{\alpha_1,\L_1}_{\mu}}(\xi-\eta,s)\widehat{f^{\alpha_2,\L_2}_{\nu}}(\eta,s)\,d\eta. \end{split} \end{equation} We summarize below some of the properties of the functions $f^{\beta,\L}_\theta$ and $\partial_tf^{\beta,\L}_\theta$: \begin{proposition}\label{sDeriv} (i) The multipliers $\mathfrak{m}_{\sigma\mu\nu}^{\L}$, $\L\in\V_{N_1}$, are sums of functions of the form \begin{equation}\label{za6} (1+|\xi|^2)^{1/2}q(\xi)q'(\xi-\eta)q''(\eta),\qquad \|q\|_{\mathcal{S}^n}+\|q'\|_{\mathcal{S}^n}+\|q''\|_{\mathcal{S}^n}\lesssim_n 1, \end{equation} for any $n\geq 1$, see \eqref{symb} for the definition of the symbol spaces $\mathcal{S}^{n}$. (ii) Assume that $|\alpha|\leq 4$ and $\L\in\V_{N_1}$. 
Then, with the notation in \eqref{za3}, \begin{equation}\label{za4} \|f_\mu^{\alpha,\L}(t)\|_{\mathcal{H}^{N_0-1-|\L|-|\alpha|}}+\|f_0^{\alpha,\L}(t)\|_{\mathcal{H}^{N_0-1-|\L|-|\alpha|}}+\sup_{\L'\in\V_{N_1-|\L|},\,|\beta|\leq 4-|\alpha|}\|D^\beta\L'f_\mu^{\alpha,\L}(t)\|_{Z_1^\sigma}\lesssim\bar{\eps}, \end{equation} for any $t\in[0,T]$ and $\mu\in\{e,b\}$. Moreover, letting $\langle t\rangle:=(1+t)$, \begin{equation}\label{za5} \|(1+|x|^2)^{1/4}\cdot P_{\leq k}f_0^{\alpha,\L}(t)\|_{\mathcal{H}^{N_1-|\L|}}\lesssim \delta_02^{|\alpha|k}\lesssim \bar{\eps}\langle t\rangle^{-1}2^{|\alpha|k},\qquad k\in\mathbb{Z}_+. \end{equation} (iii) For $k\in\mathbb{Z}$, $\sigma\in\{e,b,0\}$, $\L\in\mathcal{V}_{N_1}$, $|\alpha|\leq 4$, and $t\in[0,T]$ we have \beq\label{sdL2cont} \|P_k(\partial_t f^{\alpha,\L}_\sigma)(t)\|_{L^2}\lesssim \bar{\eps}\min\big\{2^{3k/2},\,2^{-k^+(N_0-2-|\L|-|\alpha|)}\langle t\rangle^{-1},\,\,2^{-k^+(N_1-2-|\L|-|\alpha|)}\langle t\rangle^{-3/2}\big\}. \eeq Moreover \beq\label{sdL2cont2} \|P_k(\partial_t f^{\alpha,\L}_0)(t)\|_{L^2}\lesssim \bar{\eps}2^{-k^+(N_1-2-|\L|-|\alpha|)}\langle t\rangle^{-2}. \eeq \end{proposition} \begin{proof} The bounds on the multipliers $\mathfrak{m}_{\sigma\mu\nu}^{\L}$ follow from the explicit formulas for the nonlinearities in \eqref{za1} and the identities \eqref{za2.5}. The bounds \eqref{za4} follow from the bootstrap assumption \eqref{bootstrap2}, while the bounds \eqref{za5} follow from the bootstrap assumption \eqref{bootstrapV2}. For part (iii) we use the formula \eqref{DuhamelDER2}. We define the operator $I_{\sigma\mu\nu}=I_{\sigma\mu\nu}^\L$ by \begin{equation}\label{za9} \mathcal{F}\big\{I_{\sigma\mu\nu}[f,g]\big\}(\xi):=\int_{\mathbb{R}^3}e^{it\Phi_{\sigma\mu\nu}(\xi,\eta)}\mathfrak{m}_{\sigma\mu\nu}^{\L}(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta. \end{equation} We assume that $t\in[0,T]$ is fixed and sometimes drop it from the notation. 
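We record first an elementary bound that is used repeatedly below: by Cauchy--Schwarz in $\eta$ and the pointwise bound $|\mathfrak{m}_{\sigma\mu\nu}^{\L}(\xi,\eta)|\lesssim 1+|\xi|$ (a consequence of \eqref{za6}), for frequency-localized inputs we have \begin{equation*} \big\|\mathcal{F}\big\{I_{\sigma\mu\nu}[P_{k_1}f,P_{k_2}g]\big\}\big\|_{L^\infty}\lesssim 2^{\max(k_1,k_2,0)}\|P_{k_1}f\|_{L^2}\|P_{k_2}g\|_{L^2}. \end{equation*}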
For $k\in\mathbb{Z}$ let \begin{equation}\label{za10} \mathcal{X}_k:=\{(k_1,k_2)\in\mathbb{Z}^2:\,|\max(k_1,k_2)-k|\leq 6\,\text{ or }(\,\max(k_1,k_2)\geq k+7\,\text{ and }\,|k_1-k_2|\leq 6)\}. \end{equation} For simplicity of notation let $f_\mu:=f^{\alpha_1,\L_1}_{\mu}$, $f_\nu:=f^{\alpha_2,\L_2}_{\nu}$, $|\alpha_1|+|\alpha_2|\leq|\alpha|$, $|\L_1|+|\L_2|\leq|\L|$. We estimate first \begin{equation*} \|P_kI_{\sigma\mu\nu}[f_\mu,f_\nu]\|_{L^2}\lesssim 2^{3k/2}\|\mathcal{F}\{I_{\sigma\mu\nu}[f_\mu,f_\nu]\}\|_{L^\infty}\lesssim 2^{3k/2}\|f_\mu\|_{\H^1}\|f_\nu\|_{\H^1}\lesssim \bar{\eps}2^{3k/2} \end{equation*} for $k\leq 0$, using \eqref{za4} at the last step. This gives the first estimate in \eqref{sdL2cont}. For the second estimate, we write first, using Lemma \ref{L1easy} and \eqref{za6}, \begin{equation*} \|P_kI_{\sigma\mu\nu}[f_\mu,f_\nu]\|_{L^2}\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,k_1\leq k_2}\|P_{k_1}e^{-it\Lambda_\mu}f_{\mu}\|_{L^\infty}\|P_{k_2}f_{\nu}\|_{L^2}. \end{equation*} Using \eqref{za4} we estimate $\|P_{k_2}f_{\nu}\|_{L^2}\lesssim \bar{\eps}2^{-k_2^+(N_0-1-|\L_2|-|\alpha_2|)}$. Using \eqref{za5} and \eqref{LinftyBd2} we estimate \begin{equation}\label{za11} \begin{split} \|P_{k_1}e^{-it\Lambda_\mu}f_{\mu}\|_{L^\infty}\lesssim \bar{\eps}\langle t\rangle^{-1-\beta}2^{k_1/4}2^{3k_1^+}\cdot 2^{-k_1^+(N_1+4-|\L_1|-|\alpha_1|)},\qquad&\text{ if }\mu\in\{e,b,-e,-b\},\\ \|P_{k_1}e^{-it\Lambda_\mu}f_{\mu}\|_{L^\infty}\lesssim \bar{\eps}\langle t\rangle^{-1}2^{-k_1^+(N_1-|\L_1|-|\alpha_1|)}2^{3k_1/2},\qquad&\text{ if }\mu=0, \end{split} \end{equation} where in the second estimate we used the fact that $\delta_0\lesssim\bar{\eps}(1+t)^{-1}$. 
Therefore, since $|\L_1|+|\L_2|\leq |\L|$ and $|\alpha_1|+|\alpha_2|\leq|\alpha|$ (the worst case is $|\L_1|=0, |\L_2|=|\L|, |\alpha_1|=0, |\alpha_2|=|\alpha|$), \begin{equation*} \begin{split} \|P_kI_{\sigma\mu\nu}[f_\mu,f_\nu]\|_{L^2}&\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,k_1\leq k_2}\langle t\rangle^{-1}2^{k_1/4}2^{-2k_1^+}\cdot \bar{\eps}2^{-k_2^+(N_0-1-|\L|-|\alpha|)}\\ &\lesssim \bar{\eps}\langle t\rangle^{-1}2^{-k^+(N_0-2-|\L|-|\alpha|)}, \end{split} \end{equation*} which gives the second bound in \eqref{sdL2cont}. To prove the last estimate we may assume that $\langle t\rangle\geq 2^{20k^+}$. If $\mu=\nu=0$ then \begin{equation*} \begin{split} \|P_k&I_{\sigma\mu\nu}[f_\mu,f_\nu]\|_{L^2}\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,k_1\leq k_2}\|P_{k_1}f_{\mu}\|_{L^\infty}\|P_{k_2}f_{\nu}\|_{L^2}\\ &\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,k_1\leq k_2}\bar{\eps}\langle t\rangle^{-1}2^{-k_1^+(N_1-|\L_1|-|\alpha_1|)}2^{3k_1/2}\cdot \bar{\eps}\langle t\rangle^{-1}2^{-k_2^+(N_1-|\L_2|-|\alpha_2|)}\\ &\lesssim \bar{\eps}\langle t\rangle^{-2}2^{-k^+(N_1-2-|\L|-|\alpha|)}, \end{split} \end{equation*} using \eqref{za11} and \eqref{za5}. 
Similarly, if $\mu\neq 0$ and $\nu=0$ then \begin{equation*} \|P_kI_{\sigma\mu\nu}[f_\mu,f_\nu]\|_{L^2}\lesssim I+II \end{equation*} where \begin{equation*} \begin{split} I&:=2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,2^{k_2}\geq \min(\langle t\rangle^{-4},2^{k_1})}\|P_{k_1}e^{-it\Lambda_\mu}f_{\mu}\|_{L^\infty}\|P_{k_2}f_{\nu}\|_{L^2}\\ &\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,2^{k_2}\geq \min(\langle t\rangle^{-4},2^{k_1})}\bar{\eps}\langle t\rangle^{-1-\beta}2^{k_1/4}2^{-k_1^+(N_1+1-|\L_1|-|\alpha_1|)}\cdot \bar{\eps}\langle t\rangle^{-1}2^{-k_2^+(N_1-|\L_2|-|\alpha_2|)}\\ &\lesssim \bar{\eps}\langle t\rangle^{-2}2^{-k^+(N_1-2-|\L|-|\alpha|)} \end{split} \end{equation*} and \begin{equation*} \begin{split} II&:=2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,2^{k_2}\leq \min(\langle t\rangle^{-4},2^{k_1})}\|P_{k_1}f_{\mu}\|_{L^2}\|P_{k_2}f_{\nu}\|_{L^\infty}\\ &\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,2^{k_2}\leq \min(\langle t\rangle^{-4},2^{k_1})}\bar{\eps}2^{-k_1^+(N_0-N_1-5)}\cdot \bar{\eps} \langle t\rangle^{-1}2^{3k_2/2}\\ &\lesssim \bar{\eps}\langle t\rangle^{-2}2^{-k^+(N_1-2-|\L|-|\alpha|)}. \end{split} \end{equation*} These three estimates suffice to prove the desired bound in \eqref{sdL2cont} (since $2^{k^+}\leq \langle t\rangle^{1/20}$), and also the bound \eqref{sdL2cont2} (since either $\mu=0$ or $\nu=0$ when $\sigma=0$, see the last equation in \eqref{za1}). Finally, assume that $\mu\neq 0$ and $\nu\neq 0$. We decompose \begin{equation}\label{za15} \begin{split} &f_\mu=\sum_{(k_1,j_1)\in\mathcal{J}}f^\mu_{j_1,k_1}=\sum_{(k_1,j_1)\in\mathcal{J}}P_{[k_1-2,k_1+2]}Q_{j_1k_1}f_\mu,\\ &f_\nu=\sum_{(k_2,j_2)\in\mathcal{J}}f^\nu_{j_2,k_2}=\sum_{(k_2,j_2)\in\mathcal{J}}P_{[k_2-2,k_2+2]}Q_{j_2k_2}f_\nu. 
\end{split} \end{equation} We estimate, using \eqref{LinftyBd} and \eqref{za4}, \begin{equation*} \begin{split} \|P_kI_{\sigma\mu\nu}[f_\mu,f_\nu]\|_{L^2}&\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,j_1\leq j_2}\|e^{-it\Lambda_\mu}f_{j_1,k_1}^\mu\|_{L^\infty}\|f_{j_2,k_2}^\nu\|_{L^2}\\ &\lesssim 2^{k^+}\sum_{(k_1,k_2)\in\mathcal{X}_k,\,j_1\leq j_2}\bar{\eps}2^{5k_1^+/2}\langle t\rangle^{-3/2}2^{(1/2+3\beta)j_1}2^{-k_1^+(N_1+4-|\L_1|-|\alpha_1|)}\\ &\qquad\qquad\qquad\qquad\quad\times\bar{\eps}2^{-j_2(1-3\beta)}2^{-k_2^+(N_1+4-|\L_2|-|\alpha_2|)}\\ &\lesssim \bar{\eps}\langle t\rangle^{-3/2}2^{4k^+}, \end{split} \end{equation*} using also that in the sum $k_1\geq -j_1\geq-j_2$ and $k_2\geq -j_2$. This finishes the proof of \eqref{sdL2cont}. \end{proof} \subsection{The main reduction} We return now to the proof of Proposition \ref{BootstrapZNorm}. We have \begin{equation*} \|(V_e(t),V_b(t))\|_Z\lesssim \sup_{\L\in\mathcal{V}_{N_1},\,|\alpha|\leq 4}[\|f^{\alpha,\L}_e\|_{Z^1_e}+\|f^{\alpha,\L}_b\|_{Z^1_b}], \end{equation*} in view of Definition \ref{MainZDef}. We use the integral formula \eqref{duhamel2} and decompose the time integral into dyadic pieces. More precisely, given $t\in[0,T]$, we fix a suitable decomposition of the function $\mathbf{1}_{[0,t]}$, i.e. we fix functions $q_0,\ldots,q_{L+1}:\mathbb{R}\to[0,1]$, $|L-\log_2(2+t)|\leq 2$, with the properties \begin{equation}\label{nh2} \begin{split} &\mathrm{supp}\,q_0\subseteq [0,2], \quad \mathrm{supp}\,q_{L+1}\subseteq [t-2,t],\quad\mathrm{supp}\,q_m\subseteq [2^{m-1},2^{m+1}]\text{ for } m\in\{1,\ldots,L\},\\ &\sum_{m=0}^{L+1}q_m(s)=\mathbf{1}_{[0,t]}(s),\qquad q_m\in C^1(\mathbb{R})\text{ and }\int_0^t|q'_m(s)|\,ds\lesssim 1\text{ for }m\in \{1,\ldots,L\}. \end{split} \end{equation} Let $I_m$ denote the support of $q_m$. 
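Such a decomposition is easy to construct. For instance (one possible choice, assuming $t\geq 8$ and adjusting $L$ harmlessly so that $2^L\geq t-1$), fix $\chi\in C^\infty(\mathbb{R})$ with $\chi=1$ on $(-\infty,t-2]$, $\chi=0$ on $[t-1,\infty)$, and $|\chi'|\lesssim 1$, and set \begin{equation*} q_0:=\mathbf{1}_{[0,t]}\chi\varphi_{\leq 0},\qquad q_m:=\mathbf{1}_{[0,\infty)}\chi\varphi_m\,\text{ for }m\in\{1,\ldots,L\},\qquad q_{L+1}:=\mathbf{1}_{[0,t]}(1-\chi). \end{equation*} Then $\sum_{m=0}^{L+1}q_m=\mathbf{1}_{[0,t]}$, the support conditions in \eqref{nh2} hold, and \begin{equation*} \int_0^t|q'_m(s)|\,ds\lesssim \int_{\mathbb{R}}|\varphi'_m(s)|\,ds+\int_{t-2}^{t-1}|\chi'(s)|\,ds\lesssim 1\qquad\text{ for }m\in\{1,\ldots,L\}. \end{equation*}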
For $m\in[0,L+1]$, $\sigma\in\{e,b\}$, $\mu,\nu\in\mathcal{P}'$, $\L\in\mathcal{V}_{N_1}$, we define the bilinear operators $T_m^{\sigma\mu\nu}$ by \begin{equation}\label{za16} \mathcal{F}\{T_m^{\sigma\mu\nu}[f,g]\}(\xi):=\int_0^tq_m(s)\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\mathfrak{m}_{\sigma\mu\nu}^{\L}(\xi,\eta)\widehat{f}(\xi-\eta,s)\widehat{g}(\eta,s)\,d\eta. \end{equation} For Proposition \ref{BootstrapZNorm} it suffices to prove the following: \begin{proposition}\label{BootstrapZNorm2} With the hypothesis in Proposition \ref{bootstrap} and the notation above, we have \begin{equation}\label{za17} \sum_{k_1,k_2\in\mathbb{Z}}\big\|Q_{jk}T_m^{\sigma\mu\nu}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-om}, \end{equation} for any fixed $t\in[0,T]$, $m\in[0,L+1]$, $(k,j)\in\mathcal{J}$, $\sigma\in\{e,b\}$, $\mu,\nu\in\mathcal{P}'$, $f_\mu=f^{\alpha_1,\L_1}_{\mu}$, $f_\nu=f^{\alpha_2,\L_2}_{\nu}$, $|\L_1|+|\L_2|\leq N_1$, $|\alpha_1|+|\alpha_2|\leq 4$. Here $o:=10^{-8}$ is a small constant. \end{proposition} We prove this proposition in the next two sections. We remove first the contribution of very low and very high input frequencies. Then we consider the interactions containing one of the vorticity variables, in which either $\mu=0$ or $\nu=0$ (by symmetry we may assume that $\nu=0$). Finally, in section \ref{DispInter} we consider the purely dispersive interactions, i.e. $\mu,\nu\in\{e,b,-e,-b\}$. We will often need to localize the phase, in order to be able to integrate by parts in time. 
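The localization relies on the standard heuristic: when $|\Phi_{\sigma\mu\nu}(\xi,\eta)|\approx 2^l$ on the support of the integrand, one can write \begin{equation*} e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}=\frac{1}{i\Phi_{\sigma\mu\nu}(\xi,\eta)}\,\partial_s\big(e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\big), \end{equation*} so integration by parts in $s$ gains a factor of $2^{-l}$, at the cost of letting $\partial_s$ fall either on the cutoffs $q_m$ or on the profiles, whose time derivatives are controlled by \eqref{sdL2cont}--\eqref{sdL2cont2}.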
For this we define the operators $I_{l,s}^{\sigma\mu\nu}$, $I_{\leq l,s}^{\sigma\mu\nu}$, and $\widetilde{I}_{l,s}^{\sigma\mu\nu}$, $l\in\mathbb{Z}$, by \begin{equation}\label{vco6} \begin{split} \mathcal{F}\big\{I_{l,s}^{\sigma\mu\nu}[f,g]\big\}(\xi)&:=\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\varphi_l(\Phi_{\sigma\mu\nu}(\xi,\eta))\mathfrak{m}_{\sigma\mu\nu}^{\L}(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta,\\ \mathcal{F}\big\{I_{\leq l,s}^{\sigma\mu\nu}[f,g]\big\}(\xi)&:=\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\varphi_{\leq l}(\Phi_{\sigma\mu\nu}(\xi,\eta))\mathfrak{m}_{\sigma\mu\nu}^{\L}(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta,\\ \mathcal{F}\big\{\widetilde{I}_{l,s}^{\sigma\mu\nu}[f,g]\big\}(\xi)&:=\int_{\mathbb{R}^3}e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\widetilde{\varphi}_l(\Phi_{\sigma\mu\nu}(\xi,\eta))\mathfrak{m}_{\sigma\mu\nu}^{\L}(\xi,\eta)\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta, \end{split} \end{equation} where $\widetilde{\varphi}_l(x):= (2^l/x)\varphi_l(x)$. Then we define the operators $T_{m,l}^{\sigma\mu\nu}$, $T_{m,\leq l}^{\sigma\mu\nu}$, $l\in\mathbb{Z}$, by \begin{equation}\label{vco6.1} T_{m,l}^{\sigma\mu\nu}[f,g]:=\int_0^tq_m(s)I_{l,s}^{\sigma\mu\nu}[f(s),g(s)]\,ds,\quad T_{m,\leq l}^{\sigma\mu\nu}[f,g]:=\int_0^tq_m(s)I_{\leq l,s}^{\sigma\mu\nu}[f(s),g(s)]\,ds, \end{equation} compare with \eqref{za16}. We record the integration by parts identity \begin{equation}\label{vco6.2} \begin{split} T_{m,l}^{\sigma\mu\nu}&[f,g]=i2^{-l}\int_{0}^tq'_m(s)\widetilde{I}_{l,s}^{\sigma\mu\nu}[f(s),g(s)]\,ds\\ &+i2^{-l}\int_{0}^tq_m(s)\widetilde{I}_{l,s}^{\sigma\mu\nu}[(\partial_sf)(s),g(s)]\,ds+i2^{-l}\int_{0}^tq_m(s)\widetilde{I}_{l,s}^{\sigma\mu\nu}[f(s),(\partial_sg)(s)]\,ds. \end{split} \end{equation} \section{Improved control of the $Z$-norm, II: vorticity interactions}\label{Sec:Z1Norm} We start with a lemma that applies for all $\mu,\nu\in\mathcal{P}'$. 
\begin{lemma}\label{Vo1} (Very large or very small input frequencies) We have \begin{equation}\label{vco1} \sum_{\max(k_1,k_2)\geq j/41+\beta m-\D}\big\|Q_{jk}T_m^{\sigma\mu\nu}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^22^{-om}, \end{equation} and \begin{equation}\label{vco2} \sum_{\min(k_1,k_2)\leq -(2/3)(m+j)(1+\beta)}\big\|Q_{jk}T_m^{\sigma\mu\nu}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-om}. \end{equation} \end{lemma} \begin{proof} We estimate, using Definition \ref{MainZDef}, Lemma \ref{L1easy}, \eqref{za4}, and \eqref{za11}, \begin{equation*} \begin{split} \big\|Q_{jk}T_m^{\sigma\mu\nu}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}&\lesssim 2^{k^+}2^{(1+\beta)j}2^m\sup_{s\in I_m}\|e^{-is\Lambda_\mu}P_{k_1}f_\mu(s)\|_{L^\infty}\|P_{k_2}f_\nu(s)\|_{L^2}\\ &\lesssim 2^{k^+}\bar{\eps}\,^22^{(1+\beta)j}2^{\min(k_1,0)/4}2^{-(N_0-N_1-5)k_2^+}, \end{split} \end{equation*} if $k_1\leq k_2$. The bound \eqref{vco1} follows by summation over $(k_1,k_2)\in\mathcal{X}_k$ with $k_2\geq k_1$, $k_2\geq j/41+\beta m-\D$. For the second bound we estimate \begin{equation*} \begin{split} \big\|Q_{jk}T_m^{\sigma\mu\nu}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}&\lesssim 2^{k^+}2^{(1+\beta)j}2^m\sup_{s\in I_m}2^{3k_1/2}\|P_{k_1}f_\mu(s)\|_{L^2}\|P_{k_2}f_\nu(s)\|_{L^2}\\ &\lesssim 2^{k^+}\bar{\eps}\,^22^{(1+\beta)j}2^m2^{3k_1/2}2^{-4k_2^+}, \end{split} \end{equation*} if $k_1\leq k_2$. The bound \eqref{vco2} follows. \end{proof} In the rest of the section we prove Proposition \ref{BootstrapZNorm2} when $\nu=0$. For simplicity of notation, in the rest of this section we drop the superscripts $\sigma\mu\nu$, and write simply $T_m$ instead of $T_m^{\sigma\mu\nu}$, $\widetilde{I}_{l,s}$ instead of $\widetilde{I}_{l,s}^{\sigma\mu\nu}$, etc. We divide the proof into several lemmas, depending on the relative sizes of the main variables. 
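Before turning to these lemmas, we sketch, for the reader's convenience, the elementary computation behind the identity \eqref{vco6.2}; we work modulo the boundary terms at $s=0$ and $s=t$ (which are absent when $q_m$ vanishes at the endpoints of the time interval). Since $\partial_s\big(e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\big)=i\Phi_{\sigma\mu\nu}(\xi,\eta)e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}$, on the support of $\varphi_l(\Phi_{\sigma\mu\nu})$ we can write
\begin{equation*}
e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\varphi_l(\Phi_{\sigma\mu\nu}(\xi,\eta))=-i2^{-l}\,\widetilde{\varphi}_l(\Phi_{\sigma\mu\nu}(\xi,\eta))\,\partial_s\big(e^{is\Phi_{\sigma\mu\nu}(\xi,\eta)}\big),
\end{equation*}
by the definition $\widetilde{\varphi}_l(x)=(2^l/x)\varphi_l(x)$. Substituting this into \eqref{vco6.1} and integrating by parts in $s$, the derivative $\partial_s$ falls either on the cutoff $q_m$ or on one of the profiles $\widehat{f}(\xi-\eta,s)$, $\widehat{g}(\eta,s)$, which produces exactly the three terms on the right-hand side of \eqref{vco6.2}, each carrying the gain $2^{-l}$ (useful when the modulation $2^l$ is not too small). 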
In view of Lemma \ref{Vo1}, we need to consider only $\approx (j+m)^2$ pairs $(k_1,k_2)$; thus it suffices to prove that \begin{equation}\label{vco3} \big\|Q_{jk}T_m[P_{k_1}f_\mu,P_{k_2}f_0]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-2om-2oj}, \end{equation} where the pair $(k_1,k_2)$ is fixed and satisfies \begin{equation}\label{vco4} k_1,k_2\in[-(2/3)(m+j)(1+\beta),j/41+\beta m-\D]. \end{equation} \begin{lemma}\label{Vo2} (Approximate finite speed of propagation) The bound \eqref{vco3} holds provided that \begin{equation*} j\geq \max(-k,m)+\D. \end{equation*} \end{lemma} \begin{proof} We define $f^\mu_{j_1,k_1}$ and $f^0_{j_2,k_2}$ as in \eqref{Alx100}. Integration by parts in $\xi$ together with the change of variables $\eta\to\xi-\eta$ show that the contribution is negligible unless $\min(j_1,j_2)\geq 99 j/100$. On the other hand, for any $j_1,j_2$, we can estimate \begin{equation*} \big\|Q_{jk}T_m[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim 2^{j(1+\beta)}2^m\sup_{s\in I_m}2^{k^+}\|e^{-is\Lambda_\mu}f_{j_1,k_1}^\mu(s)\|_{L^{\infty}}\|f_{j_2,k_2}^0(s)\|_{L^{2}}, \end{equation*} using Lemma \ref{L1easy}. Then we estimate $\|f_{j_2,k_2}^0(s)\|_{L^{2}}\lesssim \bar{\eps}2^{-m}2^{-j_2/2}2^{-k_2^+(N_1-|\L_2|-|\alpha_2|)}$ (using \eqref{za5}), and $\|e^{-is\Lambda_\mu}f_{j_1,k_1}^\mu(s)\|_{L^{\infty}}\lesssim \bar{\eps}2^{-j_1}2^{-k_1^+(N_1-|\L_1|-|\alpha_1|)}$ (using \eqref{LinftyBd} if $\mu\neq 0$ and \eqref{Zs3} if $\mu=0$). Therefore \begin{equation*} \big\|Q_{jk}T_m[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^22^{j(1+\beta)}2^{6\max(k_1^+,k_2^+)}2^{-(3/2)\min(j_1,j_2)}. \end{equation*} The desired conclusion \eqref{vco3} follows by summing over pairs $(j_1,j_2)$ with $\min(j_1,j_2)\geq 99 j/100$, and recalling that $\max(k_1^+,k_2^+)\leq j/30$, see \eqref{vco4}. \end{proof} \begin{lemma}\label{Vo3} The bound \eqref{vco3} holds provided that \begin{equation*} j\leq \max(-k,m)+\D\qquad\text{ and }\qquad\mu=0. 
\end{equation*} \end{lemma} \begin{proof} In this case $|\Phi_{\sigma\mu\nu}(\xi,\eta)|=|\Lambda_\sigma(\xi)|\approx 2^{k^+}$ in the support of the integral, so we can integrate by parts in time. Using \eqref{vco6.2}, it suffices to prove that \begin{equation}\label{vco10} \begin{split} 2^{-k^+}2^{j(1+\beta)}\big[\|P_k&\widetilde{I}_{l,s}[P_{k_1}f_0(s),P_{k_2}f_0(s)]\|_{L^2}+2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}(\partial_sf_0)(s),P_{k_2}f_0(s)]\|_{L^2}\\ &+2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}f_0(s),P_{k_2}(\partial_sf_0)(s)]\|_{L^2}\big]\lesssim\bar{\eps}\,^2 2^{-2om-2oj}, \end{split} \end{equation} for any $s\in I_m$ and $l\in\mathbb{Z}$ with $|l-k^+|\lesssim 1$. Using \eqref{za5} and the last bound in \eqref{sdL2cont}, we have \begin{equation*} \|P_k\widetilde{I}_{l,s}[P_{k_1}f_0,P_{k_2}f_0]\|_{L^2}\lesssim 2^{3k/2}\|P_{k_1}f_0\|_{L^2}\|P_{k_2}f_0\|_{L^2}\lesssim 2^{3k/2}\bar{\eps}\,^22^{-2m}2^{4\max(k_1^+,k_2^+)}, \end{equation*} \begin{equation*} \|P_k\widetilde{I}_{l,s}[P_{k_1}(\partial_sf_0),P_{k_2}f_0]\|_{L^2}\lesssim 2^{3k/2}\|P_{k_1}(\partial_sf_0)\|_{L^2}\|P_{k_2}f_0\|_{L^2}\lesssim 2^{3k/2}\bar{\eps}\,^22^{-5m/2}2^{6k_1^+}2^{6k_2^+}, \end{equation*} and similarly \begin{equation*} \|P_k\widetilde{I}_{l,s}[P_{k_1}f_0,P_{k_2}(\partial_sf_0)]\|_{L^2}\lesssim 2^{3k/2}\bar{\eps}\,^22^{-5m/2}2^{6k_1^+}2^{6k_2^+}. \end{equation*} Therefore, the left-hand side of \eqref{vco10} is bounded by \begin{equation*} C2^{-k^+}(2^{m}+2^{-k})^{1+\beta}2^{3k/2}\bar{\eps}\,^22^{-3m/2}2^{6k_1^+}2^{6k_2^+}\lesssim \begin{cases} \bar{\eps}\,^22^{\beta m-m/2}2^{13\max(k_1^+,k_2^+)}&\text{ if }m\geq -k,\\ \bar{\eps}\,^22^{k/2-\beta k}2^{13\max(k_1^+,k_2^+)}&\text{ if }m\leq -k. \end{cases} \end{equation*} The desired conclusion \eqref{vco10} follows since $2^{\max(k_1^+,k_2^+)}\lesssim (2^m+2^{-k})^{1/30}2^{\beta m}$, see \eqref{vco4}. 
\end{proof} \begin{lemma}\label{Vo4} The bound \eqref{vco3} holds provided that \begin{equation*} j\leq -k+2\D\qquad\text{ and }\qquad\mu\in\{e,b,-e,-b\}. \end{equation*} \end{lemma} \begin{proof} Clearly $k\leq 2\D$. We estimate first, using \eqref{za4}--\eqref{za5}, \begin{equation*} \begin{split} \|Q_{jk}T_m[P_{k_1}f_\mu,P_{k_2}f_0]\big\|_{B_j^\sigma}&\lesssim 2^{(1+\beta)(-k)}2^m 2^{3k/2}\sup_{s\in I_m}\|P_{k_1}f_\mu(s)\|_{L^2}\|P_{k_2}f_0(s)\|_{L^2}\\ &\lesssim 2^{(1/2-\beta)k}\bar{\eps}\,^22^{-5k_1^+}2^{4k_2^+}. \end{split} \end{equation*} This suffices to prove \eqref{vco3} unless \begin{equation*} m\geq -100k\qquad\text{ and }\qquad m\geq 100\max(k_1^+,k_2^+). \end{equation*} On the other hand, if both these inequalities hold then we estimate the $L^\infty$ norm of the dispersive term using \eqref{LinftyBd2}, \begin{equation*} \begin{split} \|Q_{jk}T_m[P_{k_1}f_\mu,P_{k_2}f_0]\big\|_{B_j^\sigma}&\lesssim 2^{(1+\beta)(-k)}2^m \sup_{s\in I_m}\|e^{-is\Lambda_\mu}P_{k_1}f_\mu(s)\|_{L^\infty}\|P_{k_2}f_0(s)\|_{L^2}\\ &\lesssim 2^{(1+\beta)(-k)}\bar{\eps}\,^22^{-(1+\beta)m}2^{7\max(k_1^+,k_2^+)}, \end{split} \end{equation*} which suffices to complete the proof of the lemma. \end{proof} \begin{lemma}\label{Vo5} The bound \eqref{vco3} holds provided that \begin{equation*} -k+2\D\leq j\leq m+\D\qquad\text{ and }\qquad\mu\in\{e,b,-e,-b\}. \end{equation*} \end{lemma} \begin{proof} Let $\overline{k}:=\max(k_1^+,k_2^+)$ and define $f^\mu_{j_1,k_1}$ and $f^0_{j_2,k_2}$ as in \eqref{Alx100}. We consider three cases: {\bf{Case 1.}} Assume that \begin{equation}\label{vco20} |k_1^+-k_2^+|\leq\D. 
\end{equation} Then we estimate, using \eqref{za4}--\eqref{za5} and the last inequality in \eqref{LinftyBd}, \begin{equation}\label{vco20.5} \begin{split} \big\|Q_{jk}&T_m[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim 2^{j(1+\beta)}2^m\sup_{s\in I_m}2^{k^+}\|e^{-is\Lambda_\mu}f_{j_1,k_1}^\mu(s)\|_{L^{\infty}}\|f_{j_2,k_2}^0(s)\|_{L^{2}}\\ &\lesssim 2^{m(1+\beta)}2^m 2^{k^+}\cdot\bar{\eps}\min(2^{-3m/2}2^{(1/2+3\beta)j_1},2^{-(1+\beta)j_1})\bar{\eps}2^{-m}2^{-j_2/2}2^{-4\overline{k}}\\ &\lesssim \bar{\eps}^22^{m/500}2^{-|m-j_1|/2}2^{-j_2/2}2^{-3\overline{k}}. \end{split} \end{equation} The desired conclusion follows for the sum over the pairs $(j_1,j_2)$ with either $|j_1-m|\geq m/100$ or $j_2\geq m/100$. It remains to consider the pairs $(j_1,j_2)$ with \begin{equation}\label{vco21} |j_1-m|\leq m/100\qquad\text{ and }\qquad|j_2|\leq m/100. \end{equation} For such pairs we need additional localization in modulation. Recall the notation in \eqref{vco6}--\eqref{vco6.1}. With $l_0:=-m/10$ we estimate \begin{equation*} \begin{split} \big\|Q_{jk}&T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^m\sup_{s\in I_m}\big\|P_kI_{\leq l_0,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^0(s)]\big\|_{L^2}\\ &\lesssim 2^{m(1+\beta)}2^m 2^{k^+}\sup_{s\in I_m}\Big\|\int_{\mathbb{R}^3}\varphi_{\leq l_0}(\Phi_{\sigma\mu\nu}(\xi,\xi-\eta))\varphi_k(\xi)\,|\widehat{f_{j_1,k_1}^\mu}(\eta,s)|\,|\widehat{f_{j_2,k_2}^0}(\xi-\eta,s)|\,d\eta\Big\|_{L^2_\xi}. \end{split} \end{equation*} We estimate the $L^2$ norm in the expression above using Schur's test. Moreover \begin{equation}\label{lin} \|\widehat{f_{j_2,k_2}^0}(s)\|_{L^\infty}\lesssim \|f_{j_2,k_2}^0(s)\|_{L^1}\lesssim 2^{3j_2/2}\|f_{j_2,k_2}^0(s)\|_{L^2}\lesssim 2^{j_2}2^{-\overline{k}(N_1-|\L_2|-|\alpha_2|)}\bar{\eps}2^{-m}, \end{equation} using \eqref{za5} for the last estimate. 
Applying now Lemma \ref{Shur2Lem} and \eqref{lin} we get \begin{equation*} \begin{split} \sup_{\xi\in\mathbb{R}^3}\int_{\mathbb{R}^3}\varphi_{\leq l_0}(\Phi_{\sigma\mu\nu}(\xi,\xi-\eta))\varphi_{[k-4,k+4]}(\xi)\varphi_{[k_1-4,k_1+4]}(\eta)|\widehat{f_{j_2,k_2}^0}(\xi-\eta,s)|\,d\eta\\ +\sup_{\eta\in\mathbb{R}^3}\int_{\mathbb{R}^3}\varphi_{\leq l_0}(\Phi_{\sigma\mu\nu}(\xi,\xi-\eta))\varphi_{[k-4,k+4]}(\xi)\varphi_{[k_1-4,k_1+4]}(\eta)|\widehat{f_{j_2,k_2}^0}(\xi-\eta,s)|\,d\xi\\ \lesssim 2^{j_2}2^{-\overline{k}(N_1-|\L_2|-|\alpha_2|-9)}2^{l_0}\bar{\eps}2^{-m}(1+m). \end{split} \end{equation*} Using Definition \ref{MainZDef} and \eqref{za5}, \begin{equation*} \|\widehat{f_{j_1,k_1}^\mu}(s)\|_{L^2}\lesssim 2^{-j_1(1-3\beta)}\|f_\mu(s)\|_{Z_1^\mu}\lesssim \bar{\eps}2^{-j_1+3\beta j_1}2^{-\overline{k}(N_1-|\L_1|-|\alpha_1|)}. \end{equation*} Therefore, by Schur's lemma and recalling that $l_0=-m/10$, \begin{equation}\label{vc022} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{m(1+2\beta)}2^{-j_1+3\beta j_1}2^{j_2}2^{l_0}2^{-2\overline{k}}. \end{equation} Notice that this suffices to control the contribution of the pairs $(j_1,j_2)$ as in \eqref{vco21}. It remains to control the contribution of the larger modulations $l\geq l_0+1$. For this we integrate by parts in time, as in Lemma \ref{Vo3}. 
Using \eqref{vco6.2} we bound \begin{equation*} \begin{split} \big\|Q_{jk}T_{m,l}&[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-l}\sup_{s\in I_m}\big\{\|P_k\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^0(s)]\|_{L^2}\\ &+2^m\|P_k\widetilde{I}_{l,s}[(\partial_sf_{j_1,k_1}^\mu)(s),f_{j_2,k_2}^0(s)]\|_{L^2}+2^m\|P_k\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),(\partial_sf_{j_2,k_2}^0)(s)]\|_{L^2}\big\}\\ &\lesssim 2^{m(1+\beta)}2^{-l}2^{3k^+}\sup_{s\in I_m}\big\{\|f_{j_1,k_1}^\mu(s)\|_{L^2}\|f_{j_2,k_2}^0(s)\|_{L^2}\\ &+2^m\|(\partial_sf_{j_1,k_1}^\mu)(s)\|_{L^2}\|f_{j_2,k_2}^0(s)\|_{L^2}+2^m\|f_{j_1,k_1}^\mu(s)\|_{L^2}\|(\partial_sf_{j_2,k_2}^0)(s)\|_{L^2}\big\}. \end{split} \end{equation*} Using now \eqref{za4}--\eqref{sdL2cont} we can estimate \begin{equation*} \begin{split} \|f_{j_1,k_1}^\mu(s)\|_{L^2}\|f_{j_2,k_2}^0(s)\|_{L^2}+2^m\|f_{j_1,k_1}^\mu(s)\|_{L^2}\|(\partial_sf_{j_2,k_2}^0)(s)\|_{L^2}&\lesssim \bar{\eps}^22^{-j_1(1-3\beta)}2^{-m/4}2^{-4\overline{k}},\\ 2^m\|(\partial_sf_{j_1,k_1}^\mu)(s)\|_{L^2}\|f_{j_2,k_2}^0(s)\|_{L^2}&\lesssim \bar{\eps}^22^{-5m/4}2^{-4\overline{k}}. \end{split} \end{equation*} Therefore, for $j_1\geq m-m/100$ as in \eqref{vco21}, \begin{equation}\label{vco30} \big\|Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-l}\cdot \bar{\eps}^22^{-5m/4}2^{m/50}. \end{equation} The desired bound \eqref{vco3} follows by combining \eqref{vco20.5}, \eqref{vc022}, and \eqref{vco30}. {\bf{Case 2.}} Assume now that \begin{equation}\label{vco40} k_2^+\geq k_1^++\D. \end{equation} In this case $k_2\geq \D$, $|k-k_2|\leq 4$, and $|\Phi_{\sigma\mu\nu}(\xi,\eta)|=|\Lambda_\sigma(\xi)-\Lambda_\mu(\xi-\eta)|\approx 2^k$ in the support in the integral. We are therefore in the case when the modulation is large, so we can integrate by parts in time. 
As before, using \eqref{vco6.2} we bound, for $|l-k|\leq \D$ \begin{equation*} \begin{split} \big\|Q_{jk}T_{m,l}&[P_{k_1}f_\mu,P_{k_2}f_0]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-k}\sup_{s\in I_m}\big\{\|P_k\widetilde{I}_{l,s}[P_{k_1}f_\mu(s),P_{k_2}f_0(s)]\|_{L^2}\\ &+2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}(\partial_sf_\mu)(s),P_{k_2}f_0(s)]\|_{L^2}+2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}f_\mu(s),P_{k_2}(\partial_sf_0)(s)]\|_{L^2}\big\}. \end{split} \end{equation*} Using \eqref{za4}--\eqref{sdL2cont} and \eqref{LinftyBd2}, we estimate \begin{equation*} \|P_k\widetilde{I}_{l,s}[P_{k_1}f_\mu(s),P_{k_2}f_0(s)]\|_{L^2}\lesssim 2^{k^+}\|e^{-is\Lambda_\mu}P_{k_1}f_\mu(s)\|_{L^\infty}\|P_{k_2}f_0(s)\|_{L^2}\lesssim \bar{\eps}^22^{-(2+\beta)m}2^{6k_2}, \end{equation*} and similarly \begin{equation*} \begin{split} &2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}(\partial_sf_\mu)(s),P_{k_2}f_0(s)]\|_{L^2}\lesssim \bar{\eps}^22^{-(4/3+\beta)m}2^{6k_2},\\ &2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}f_\mu(s),P_{k_2}(\partial_sf_0)(s)]\|_{L^2}\lesssim \bar{\eps}^22^{-(4/3+\beta)m}2^{6k_2}. \end{split} \end{equation*} The desired conclusion follows in this case once we recall that $k_2\leq m/20$, see \eqref{vco4}. {\bf{Case 3.}} Finally, assume that \begin{equation}\label{vco50} k_1^+\geq k_2^++\D. \end{equation} In this case $k_1\geq \D$, $|k-k_1|\leq 4$. We use the same argument as in {\bf{Case 1}}. As in the proof of \eqref{vco20.5}, and using also that $n=0$ in this case, \begin{equation}\label{vco51} \big\|Q_{jk}T_m[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{-(1/2-\beta)|m-j_1|}2^{-j_2/2}2^{4k}. \end{equation} This suffices to control the contribution of the pairs $(j_1,j_2)$ with $(1-\beta)|m-j_1|+j_2\geq 8k+\beta m$. On the other hand, if \begin{equation}\label{vco52} (1-\beta)|m-j_1|+j_2\leq 8k+\beta m, \end{equation} then we decompose dyadically in modulation. 
The contribution of low modulations $|\Phi_{\sigma\mu\nu}|\leq 2^{l_0}$ can be estimated using Schur's lemma. As in the proof of \eqref{vc022}, we can estimate \begin{equation}\label{vco53} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{(1+\beta)m}2^{-(1+\beta)j_1}2^{j_2}2^{l_0}. \end{equation} Notice that this suffices to control the contribution of the pairs $(j_1,j_2)$ as in \eqref{vco52} if \begin{equation}\label{vco54} -l_0:=(1+\beta)|m-j_1|+j_2+\beta m. \end{equation} On the other hand, for $l\geq l_0$ we integrate by parts in time and estimate, as in \eqref{vco30}, \begin{equation*} \begin{split} \big\|Q_{jk}T_{m,l}&[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-l}\sup_{s\in I_m}\big\{\|P_k\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^0(s)]\|_{L^2}\\ &+2^m\|P_k\widetilde{I}_{l,s}[(\partial_sf_{j_1,k_1}^\mu)(s),f_{j_2,k_2}^0(s)]\|_{L^2}+2^m\|P_k\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),(\partial_sf_{j_2,k_2}^0)(s)]\|_{L^2}\big\}\\ &\lesssim 2^{m(1+\beta)}2^{-l}\big\{\bar{\eps}^22^{4k}2^{-m(2+\beta)}+\bar{\eps}^22^{-15k}2^{-(1+6\beta)m}+\bar{\eps}^22^{4k}2^{-m(2+\beta)}\big\}, \end{split} \end{equation*} where in the last line we used Lemma \ref{PhiLocLem}, the bounds \eqref{sdL2cont2} and \eqref{LinftyBd2}, and the bound \begin{equation*} \|\partial_sf_{j_1,k_1}^\mu(s)\|_{L^2}\lesssim \bar{\eps}2^{-k_1^+(N_0-3-|\L_1|-|\alpha_1|)}2^{-m(1+6\beta)}, \end{equation*} which is obtained by interpolation from the last two bounds in \eqref{sdL2cont}. Therefore \begin{equation}\label{vco60} \sum_{-l\leq -l_0}\big\|Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^0]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{-\beta m}+\bar{\eps}^22^{14k}2^{-m+6\beta m}, \end{equation} recalling that $-l_0\leq 9k+3\beta m$, see \eqref{vco52} and \eqref{vco54}. The desired conclusion follows from \eqref{vco51}, \eqref{vco53}, and \eqref{vco60}. This completes the proof of the lemma. 
\end{proof} \section{Improved control of the $Z$-norm, III: dispersive interactions}\label{DispInter} In this section we prove Proposition \ref{BootstrapZNorm2} when $\mu,\nu\in\{e,b,-e,-b\}$. In view of Lemma \ref{Vo1} it suffices to prove that \begin{equation}\label{gmo1} \big\|Q_{jk}T_m^{\sigma\mu\nu}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-2om-2oj}, \end{equation} where the pair $(k_1,k_2)$ is fixed and satisfies \begin{equation}\label{gmo2} k_1,k_2\in[-(2/3)(m+j)(1+\beta),j/41+\beta m-\D]. \end{equation} The proof we present here is similar to the proof in \cite[Sections 6,7]{DeIoPa}. It is simpler, however, because we work here in 3 dimensions, as opposed to 2 dimensions, and this leads to more favorable dispersion and decay properties of the solutions. For the sake of completeness we provide all the details in the rest of this section. As in the previous section, we drop the superscripts $\sigma\mu\nu$ and consider several cases. In many estimates below we use the basic bounds on the functions $f_\mu=f_\mu^{\al_1,\L_1}$ and $f_\nu=f_\nu^{\al_2,\L_2}$ \begin{equation}\label{gmo2.1} \sup_{|\beta|\leq N_1+4-|\L|-|\alpha|}\|D^{\beta}f_\gamma^{\alpha,\L}(t)\|_{Z_1^\gamma}+\|f_\gamma^{\alpha,\L}(t)\|_{\mathcal{H}^{N_0-1-|\L|-|\alpha|}}\lesssim\bar{\eps}, \end{equation} and, for any $k\in\mathbb{Z}$, \beq\label{gmo2.2} \|P_k(\partial_t f^{\alpha,\L}_\gamma)(t)\|_{L^2}\lesssim \bar{\eps}\min\big\{2^{3k/2},\,2^{-k^+(N_0-2-|\L|-|\alpha|)}\langle t\rangle^{-1},\,\,2^{-k^+(N_1-2-|\L|-|\alpha|)}\langle t\rangle^{-3/2}\big\}, \eeq see Proposition \ref{sDeriv}, where $(\gamma,\L,\alpha)\in\{(\mu,\L_1,\alpha_1),(\nu,\L_2,\alpha_2)\}$ and $\langle t\rangle=1+t$. Recall also that $|\L_1|+|\L_2|\leq N_1$ and $|\alpha_1|+|\alpha_2|\leq 4$. We will often use the integration by parts formula \eqref{vco6.2}. We divide the proof into several lemmas, depending on the relative size of the main parameters. 
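Before doing so, we record how the last two bounds in \eqref{gmo2.2} are typically combined (a simple observation, stated here for the reader's convenience). Since $\min(A,B)\leq A^\theta B^{1-\theta}$ for all $\theta\in[0,1]$, for $t\approx 2^m$ we obtain from \eqref{gmo2.2}
\begin{equation*}
\|P_k(\partial_t f^{\alpha,\L}_\gamma)(t)\|_{L^2}\lesssim \bar{\eps}\,2^{-k^+[\theta(N_0-2-|\L|-|\alpha|)+(1-\theta)(N_1-2-|\L|-|\alpha|)]}\,2^{-m(3-\theta)/2}.
\end{equation*}
Choosing $\theta$ suitably (for example $\theta=1-100\beta$) trades a small loss in the time decay, $2^{-m(1+(1-\theta)/2)}$ instead of $2^{-3m/2}$, for a gain in the number of derivatives controlled; interpolated bounds of this type are used several times below. 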
As before, we start with the simpler cases and gradually reduce to the main resonant cases in Proposition \ref{mo10}. \begin{lemma}\label{mo2} (Approximate finite speed of propagation) The bound \eqref{gmo1} holds provided that \eqref{gmo2} holds and, in addition, \begin{equation*} j\geq \max(-k,m)+\D. \end{equation*} \end{lemma} \begin{proof} We define $f^\mu_{j_1,k_1}$ and $f^\nu_{j_2,k_2}$ as in \eqref{Alx100}. As in the proof of Lemma \ref{Vo2}, integration by parts in $\xi$ together with the change of variables $\eta\to\xi-\eta$ show that the contribution is negligible unless $\min(j_1,j_2)\geq j(1-\beta/10)$. Without loss of generality we may assume that $k_1\leq k_2$. For any $j_1,j_2$, we can estimate \begin{equation}\label{gmo5} \big\|Q_{jk}T_m[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{j(1+\beta)}2^m 2^{3k_2^+}2^{-4k_1^+}2^{-(1+\beta)j_1}2^{-(1+\beta)j_2}. \end{equation} Indeed, this follows by an $L^2\times L^\infty$ estimate, using \eqref{gmo2.1}, the first bound in \eqref{LinftyBd}, and Definition \ref{MainZDef} (we decompose in $n$ and place the function with the larger $n$ in $L^\infty$ in order to gain the favorable factor $2^{-n/2+4\beta n}$ in \eqref{LinftyBd}). The desired conclusion follows unless \begin{equation}\label{gmo6} k_2^+\geq k_1^++\D\qquad\text{ and }\qquad j_1,j_2\in[j(1-\beta/10),4m/3]. \end{equation} Assume now that \eqref{gmo6} holds. In particular, $k_2\geq\D$ and $|k-k_2|\leq 4$. We further decompose our operator in modulation. 
As in Lemma \ref{Vo5}, with $l_0:=-14k-20\beta m$ we estimate \begin{equation*} \begin{split} \big\|Q_{jk}&T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{j(1+\beta)}2^m\sup_{s\in I_m}\big\|P_kI_{\leq l_0,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^\nu(s)]\big\|_{L^2}\\ &\lesssim 2^{j(1+\beta)}2^m 2^{k^+}\sup_{s\in I_m}\Big\|\int_{\mathbb{R}^3}\varphi_{\leq l_0}(\Phi_{\sigma\mu\nu}(\xi,\eta))\varphi_k(\xi)\,|\widehat{f_{j_1,k_1}^\mu}(\xi-\eta,s)|\,|\widehat{f_{j_2,k_2}^\nu}(\eta,s)|\,d\eta\Big\|_{L^2_\xi}. \end{split} \end{equation*} We estimate the $L^2$ norm in the expression above using Schur's test. Using Lemma \ref{Shur2Lem}, it follows that \begin{equation}\label{gmo7} \begin{split} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}&\lesssim 2^{j(1+\beta)}2^m 2^{k}\sup_{s\in I_m}(2^{10k}2^{l_0+\beta m})^{1/2}\|\widehat{f_{j_1,k_1}^\mu}(s)\|_{L^2}\|\widehat{f_{j_2,k_2}^\nu}(s)\|_{L^2}\\ &\lesssim \bar{\eps}^22^{j(1+\beta)}2^m\cdot 2^{-8\beta m}2^{-j_1(1-3\beta)}2^{-j_2(1+\beta)}. \end{split} \end{equation} On the other hand, for $l\geq l_0+1$ we integrate by parts in time. Using \eqref{vco6.2} we bound \begin{equation*} \begin{split} \big\|&Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-l}\sup_{s\in I_m}\big\{\|P_k\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^\nu(s)]\|_{L^2}\\ &+2^m\|P_k\widetilde{I}_{l,s}[(\partial_sf_{j_1,k_1}^\mu)(s),f_{j_2,k_2}^\nu(s)]\|_{L^2}+2^m\|P_k\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),(\partial_sf_{j_2,k_2}^\nu)(s)]\|_{L^2}\big\}\\ &\lesssim \bar{\eps}^22^{m(1+\beta)}2^{-l}2^{k}\big\{2^{-j_1(1-3\beta)}2^{-j_2(1+\beta)}+2^{-m/2}2^{-j_2(1+\beta)}+2^{-j_1(1-3\beta)}2^{-20k_2}2^{-50\beta m}\big\}, \end{split} \end{equation*} where in the last term we estimated $\|\partial_sf_{j_2,k_2}^\nu(s)\|_{L^2}\lesssim \bar{\eps}2^{-m-50\beta m}2^{-30k_2^+}$ (interpolation between the last two bounds in \eqref{gmo2.2}). 
Therefore, for $j_1,j_2$ as in \eqref{gmo6} and $l_0=-14k-20\beta m$, \begin{equation}\label{gmo8} \sum_{l\geq l_0}\big\|Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{16k}2^{-m/2+30\beta m}+ \bar{\eps}^22^{-\beta m}. \end{equation} The desired conclusion follows from \eqref{gmo7} and \eqref{gmo8}. \end{proof} \begin{lemma}\label{mo3} The bound \eqref{gmo1} holds provided that \eqref{gmo2} holds and, in addition, \begin{equation*} j\leq -k+2\D. \end{equation*} \end{lemma} \begin{proof} Clearly $k\leq 2\D$, thus $|k_1^+-k_2^+|\leq 3\D$. We define $f^\mu_{j_1,k_1}$ and $f^\nu_{j_2,k_2}$ as before and estimate \begin{equation*} \big\|Q_{jk}T_m[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{j(1+\beta)}2^m \cdot 2^{-3m/2}2^{-\max(j_1,j_2)(1/2-10\beta)}. \end{equation*} Indeed, this follows by estimating the term with the smaller $j$ in $L^\infty$ and using the last bound in \eqref{LinftyBd}, and the term with the larger $j$ in $L^2$ and using the Definition \ref{MainZDef}. The desired conclusion follows unless \begin{equation}\label{gmo9} [m+\max(j_1,j_2)](1/2-20\beta)+3\D\leq j\leq -k+2\D. \end{equation} Assume now that \eqref{gmo9} holds. In particular $k\leq -\D$. We consider first the high modulations, $l\geq l_0+1$, where $l_0:=-2k_1^+-\D$. Using \eqref{vco6.2} and \eqref{gmo2.1}--\eqref{gmo2.2} we estimate \begin{equation*} \begin{split} \big\|&Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{j(1+\beta)}2^{-l}\sup_{s\in I_m}\big\{2^{3k/2}\|f_{j_1,k_1}^\mu(s)\|_{L^2}\|f_{j_2,k_2}^\nu(s)\|_{L^2}\\ &+2^m2^{3k/2}\|\partial_sf_{j_1,k_1}^\mu(s)\|_{L^2}\|f_{j_2,k_2}^\nu(s)\|_{L^2}+2^m2^{3k/2}\|f_{j_1,k_1}^\mu(s)\|_{L^2}\|\partial_sf_{j_2,k_2}^\nu(s)\|_{L^2}\big\}\\ &\lesssim \bar{\eps}^22^{j(1+\beta)}2^{-l}2^{3k/2}2^{-4k_1^+}. 
\end{split} \end{equation*} We deduce that \begin{equation}\label{gmo10} \sum_{l\geq-2k_1^+-\D+1}\big\|Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{j(1+\beta)}2^{3k/2}, \end{equation} and since $2^{3k/2}2^{j(1+\beta)}\lesssim 2^{k(1/2-\beta)}\lesssim 2^{-m/6-\beta j}$, this takes care of the large modulation case. To estimate the contribution of small modulations we use first Proposition \ref{spaceres} (i). In particular we examine the integral defining $\mathcal{F}\{P_kT_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\}$ and notice that this integral is nontrivial only when $|\eta|+|\xi-\eta|\leq 2^{\D/2}$. Thus $k_1,k_2\in[-\D,\D]$ and, more importantly, $|\nabla_\eta\Phi_{\sigma\mu\nu}(\xi,\eta)|\gtrsim 1$ in the support of the integral. Therefore, using integration by parts in $\eta$ (with Lemma \ref{tech5}), \begin{equation*} \|\mathcal{F}P_kT_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\|_{L^\infty}\lesssim \bar{\eps}^22^{-2m}\qquad\text{ if }\qquad \max(j_1,j_2)\leq m-\beta m. \end{equation*} On the other hand, if $\max(j_1,j_2)\geq m-\beta m$ then we can estimate directly \begin{equation*} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{j(1+\beta)}2^m 2^{3k/2}\|f_{j_1,k_1}^\mu\|_{L^2}\|f_{j_2,k_2}^\nu\|_{L^2}\lesssim \bar{\eps}^22^{j(1+\beta)}2^{3k/2}2^{10\beta m}. \end{equation*} Therefore, assuming \eqref{gmo9}, \begin{equation}\label{gmo11} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^22^{k/4}. \end{equation} The desired bound when \eqref{gmo9} is satisfied follows from \eqref{gmo10} and \eqref{gmo11}. \end{proof} We can now estimate the contribution of large modulations. \begin{lemma}\label{mo6} Assume that \eqref{gmo2} holds and, in addition, \begin{equation*} -k+2\D\leq j\leq m+\D. 
\end{equation*} Then \begin{equation}\label{gmo11.6} \sum_{l\geq -\D-10\max(k_1^+,k_2^+)-200\beta m}\big\|Q_{jk}T_{m,l}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-2om-2oj}. \end{equation} \end{lemma} \begin{proof} Using \eqref{vco6.2}, Lemma \ref{PhiLocLem}, \eqref{LinftyBd}, and \eqref{gmo2.1}--\eqref{gmo2.2} we estimate \begin{equation*} \begin{split} \big\|&Q_{jk}T_{m,l}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-l}\sup_{s\in I_m}\big\{\|P_k\widetilde{I}_{l,s}[P_{k_1}f_\mu(s),P_{k_2}f_\nu(s)]\|_{L^2}\\ &+2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}(\partial_sf_\mu)(s),P_{k_2}f_\nu(s)]\|_{L^2}+2^m\|P_k\widetilde{I}_{l,s}[P_{k_1}f_\mu(s),P_{k_2}(\partial_sf_\nu)(s)]\|_{L^2}\big\}\\ &\lesssim \bar{\eps}^22^{-l}2^{8\max(k_1^+,k_2^+)}2^{-m/2+\beta m}. \end{split} \end{equation*} This gives \eqref{gmo11.6}, since $\max(k_1,k_2)\leq m/41+\beta m$. \end{proof} \begin{lemma}\label{mo7} Let $\overline{k}:=\max(k_1^+,k_2^+)$. Assume that \eqref{gmo2} holds and, in addition, \begin{equation*} -k+2\D\leq j\leq m+\D. \end{equation*} Then \begin{equation}\label{gmo20} \big\|Q_{jk}T_{m,\leq -\D-10\overline{k}-200\beta m}[P_{k_1}f_\mu,P_{k_2}f_\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-4om} \end{equation} provided that \begin{equation}\label{gmo21} \mu=-\nu\qquad\text{ or }\qquad\min(k,k_1,k_2)\leq -\D/2\qquad\text{ or }\qquad\max(k,k_1,k_2)\geq \D/2. \end{equation} \end{lemma} \begin{proof} Using Proposition \ref{spaceres} (i) it follows that $|\nabla_\eta \Phi_{\sigma\mu\nu}(\xi,\eta)|\gtrsim 2^{-3\overline{k}}$ in the support of the integral defining $\mathcal{F}\{P_kT_{m,\leq -\D-10\overline{k}-200\beta m}[P_{k_1}f_\mu,P_{k_2}f_\nu]\}$. We define $f^\mu_{j_1,k_1}$ and $f^\nu_{j_2,k_2}$ as before and notice that the contribution of the components for which $\max(j_1,j_2)\leq m-\beta m-3\overline{k}$ is negligible, using integration by parts in $\eta$ (with Lemma \ref{tech5}). 
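The numerology behind this last reduction can be seen as follows (this is only a heuristic; the rigorous statement is Lemma \ref{tech5}). Since $|\nabla_\eta\Phi_{\sigma\mu\nu}(\xi,\eta)|\gtrsim 2^{-3\overline{k}}$ on the support of the integral and $s\approx 2^m$, each integration by parts in $\eta$ gains a factor of roughly
\begin{equation*}
\frac{2^{\max(j_1,j_2)}}{2^m\,|\nabla_\eta\Phi_{\sigma\mu\nu}(\xi,\eta)|}\lesssim 2^{\max(j_1,j_2)-m+3\overline{k}}\lesssim 2^{-\beta m},
\end{equation*}
using that an $\eta$-derivative falling on $\widehat{f^\mu_{j_1,k_1}}$ or $\widehat{f^\nu_{j_2,k_2}}$ costs at most $2^{\max(j_1,j_2)}$, while derivatives falling on the symbol or on the cutoffs cost only $O(2^{C\overline{k}})$. Iterating this a large number of times shows that the corresponding contribution is $O(2^{-Cm})$ for any fixed constant $C$, i.e. negligible. 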
We consider two cases: {\bf{Case 1.}} Assume first that \begin{equation}\label{gmo40} |k_1^+-k_2^+|\leq\D,\qquad\max(j_1,j_2)\geq m-\beta m-3\overline{k}. \end{equation} In this case we do not lose derivatives. Assuming, without loss of generality, that $j_1\leq j_2$ we estimate first \begin{equation}\label{gmo41} \begin{split} \big\|Q_{jk}&T_{m,\leq -\D-10\overline{k}-50\beta m}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\\ &\lesssim 2^{j(1+\beta)}2^m 2^{\overline{k}}\big\{\sup_{s\in I_m,\,|t-s|\leq 2^{m/2}}\|e^{-it\Lambda_\mu}f_{j_1,k_1}^\mu(s)\|_{L^\infty}\|f_{j_2,k_2}^\nu(s)\|_{L^2}+2^{-8m}\big\}\\ &\lesssim 2^{m(1+\beta)}2^m\cdot 2^{-4\overline{k}}2^{-3m/2}2^{(1/2+3\beta)j_1}2^{-(1-3\beta)j_2}+2^{-4m}, \end{split} \end{equation} where we used Lemma \ref{PhiLocLem} and the second estimate in \eqref{LinftyBd}. This suffices to bound the contribution of the components with $j_1\leq m-20\beta m$ and $j_2\geq m-\beta m-3\overline{k}$. On the other hand, if $j_1\geq m-20\beta m$ then, using Schur's test and Lemma \ref{Shur2Lem}, \begin{equation*} \begin{split} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}&\lesssim 2^{m(1+\beta)}2^m\cdot \sup_{s\in I_m}2^{\overline{k}}(2^{l_0+\beta m}2^{10\overline{k}})^{1/2}\|\widehat{f_{j_1,k_1}^\mu}(s)\|_{L^2}\|\widehat{f_{j_2,k_2}^\nu}(s)\|_{L^2}\\ &\lesssim 2^{-\beta m-\beta j_2}, \end{split} \end{equation*} provided that $l_0=-\D-10\overline{k}-200\beta m$. The desired bound \eqref{gmo20} follows using also \eqref{gmo11.6}. {\bf{Case 2.}} Assume now that \begin{equation}\label{gmo50} |k_1^+-k_2^+|\geq\D\,,\qquad\max(j_1,j_2)\geq m-\beta m-3\overline{k}. \end{equation} We may assume that $k_2^+-k_1^+\geq\D$ and, in particular $k_2\geq \D$, $|k-k_2|\leq 4$. In this case we examine the phase $\Phi_{\sigma\mu\nu}(\xi,\eta)=\Lambda_\sigma(\xi)-\Lambda_\mu(\xi-\eta)-\Lambda_\nu(\eta)$. 
Notice that \begin{equation*} \sqrt{1+a^2}+\sqrt{1+b^2}-\sqrt{1+(a+b)^2}\geq\sqrt{1+a^2}-a\geq (1+a)^{-1}/2 \end{equation*} for any $a\leq b\in[0,\infty)$. Recalling that $\Lambda_e=\sqrt{1+d|\nabla|^2}$, $\Lambda_b=\sqrt{1+|\nabla|^2}$, $d\in(0,1)$, it is easy to see that the operator is nontrivial only when \begin{equation}\label{gmo51} \nu=\sigma=b,\qquad \mu=\pm e,\qquad \Phi_{\sigma\mu\nu}(\xi,\eta)=\Lambda_b(\xi)\pm\Lambda_e(\xi-\eta)-\Lambda_b(\eta). \end{equation} In particular, $|\nabla_\eta\Phi_{\sigma\mu\nu}(\xi,\eta)|\gtrsim 1$ in the support of the integral defining our operator. Therefore, using integration by parts in $\eta$ (Lemma \ref{tech5}), the contribution is negligible unless $\max(j_1,j_2)\geq m-\beta m$. The same $L^2\times L^\infty$ estimate as in \eqref{gmo41}, using the $L^2$ norm on the term with the higher $j$ and the $L^\infty$ norm on the term with the lower $j$, gives the desired bound unless \begin{equation}\label{gmo51.5} j_1\in [m-20\beta m-8k_1^+,2m]\qquad \text{ and }\qquad j_2\in [m-20\beta m-8k_2^+,2m]. \end{equation} It remains to prove that, for $j_1$ and $j_2$ as in \eqref{gmo51.5}, \begin{equation}\label{gmo51.7} \big\|Q_{jk}T_{m,\leq -\D-10\overline{k}-200\beta m}[f^\mu_{j_1,k_1},f^\nu_{j_2,k_2}]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-5om}. \end{equation} Since $|\nabla_\eta\Phi_{\sigma\mu\nu}(\xi,\eta)|\gtrsim 1$ we also have stronger bounds on sublevel sets (compare with \eqref{cas4}). 
More precisely, combining (the proofs of) Lemma \ref{lemma00} and Lemma \ref{Shur2Lem}, we have that for any $\eps>0$ \begin{equation}\label{gmo52} \sup_{\xi\in\mathbb{R}^3}\int_{\mathbb{R}^3}\mathbf{1}_{E_\eps}(\xi,\eta)\,d\eta+\sup_{\eta\in\mathbb{R}^3}\int_{\mathbb{R}^3}\mathbf{1}_{E_\eps}(\xi,\eta)\,d\xi\lesssim \eps 2^{3k_1^+}, \end{equation} where, with $k\geq k_1^++\D-10$ and $\Phi_{\sigma\mu\nu}(\xi,\eta)=\Lambda_b(\xi)\pm\Lambda_e(\xi-\eta)-\Lambda_b(\eta)$ as before, \begin{equation}\label{gmo53} E_\eps:=\{(\xi,\eta)\in\mathbb{R}^3\times\mathbb{R}^3:\,|\xi|,|\eta|\in[2^{k-8},2^{k+8}],\,|\xi-\eta|\leq 2^{k_1+8},\,|\Phi_{\sigma\mu\nu}(\xi,\eta)|\leq\eps\}. \end{equation} Therefore, with $l_0=-\D-10\overline{k}-200\beta m$, we can slightly improve the Schur's lemma argument: \begin{equation*} \begin{split} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}&\lesssim 2^{m(1+\beta)}2^m\cdot \sup_{s\in I_m}2^{\overline{k}}(2^{l_0}2^{3k_1^+})^{1/2}\|\widehat{f_{j_1,k_1}^\mu}(s)\|_{L^2}\|\widehat{f_{j_2,k_2}^\nu}(s)\|_{L^2}\\ &\lesssim 2^{-\beta m-\beta j_2}, \end{split} \end{equation*} The desired bound \eqref{gmo20} follows in this case as well. \end{proof} \subsection{Space-time resonant interactions} In view of Lemmas \ref{mo2}, \ref{mo3}, \ref{mo6}, and \ref{mo7}, to complete the proof of \eqref{gmo1} it remains to prove the following proposition: \begin{proposition}\label{mo10} For $\sigma\in\{e,b\}$ and $\mu,\nu\in\{e,b,-e,-b\}$, $\mu\neq-\nu$, we have \begin{equation}\label{nj1} \big\|Q_{jk}T_{m,\leq -\D}^{\sigma\mu\nu}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim \bar{\eps}^2 2^{-5om}, \end{equation} provided that \begin{equation}\label{nj2} k,k_1,k_2\in[-\D/2,\D/2],\qquad \max(j_1,j_2)\leq 2m,\qquad \text{ and }\qquad 3\D/2\leq j\leq m+\D. 
\end{equation} As before, we assume that $t\in[0,T]$ is fixed, $m\in[0,L+1]$, $(k,j),(k_1,j_1),(k_2,j_2)\in\mathcal{J}$, $f_\mu=f^{\alpha_1,\L_1}_{\mu}$, $f_\nu=f^{\alpha_2,\L_2}_{\nu}$, $|\L_1|+|\L_2|\leq N_1$, $|\alpha_1|+|\alpha_2|\leq 4$, and \begin{equation*} f_{j_1,k_1}^\mu=P_{[k_1-2,k_1+2]}Q_{j_1k_1}f_\mu,\qquad f_{j_2,k_2}^\nu=P_{[k_2-2,k_2+2]}Q_{j_2k_2}f_\nu. \end{equation*} \end{proposition} The proof of this proposition contains the analysis of space-time resonances. It is more delicate than before, in the sense that we need to use the restriction operators $A_n^\sigma$ and the precise definition of the spaces $B_j^\sigma$. We show first that we can restrict further the range of pairs $(j_1,j_2)$. \begin{lemma}\label{Reso0} With the hypothesis in Proposition \ref{mo10}, the bound \eqref{nj1} follows if \begin{equation}\label{top0} 2\max(j_1,j_2)\geq (1+20\beta)[m+\min(j_1,j_2)]\qquad\text{ or }\qquad \max(j_1,j_2)\geq 14m/15. \end{equation} \end{lemma} \begin{proof} Assume that $j_1\leq j_2$ and $2 j_2\geq (1+20\beta)(m+j_1)$. Then we estimate, as in \eqref{gmo41}, \begin{equation*} \big\|Q_{jk}T_{m,\leq -\D}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^m\cdot \bar{\eps}^22^{-3m/2}2^{(1/2+3\beta)j_1}2^{-j_2(1-3\beta)}\lesssim \bar{\eps}^22^{-\beta m/2}, \end{equation*} as desired. On the other hand, if \begin{equation*} j_2\geq 14m/15\qquad\text{ and }\qquad 2 j_2\leq (1+20\beta)(m+j_1) \end{equation*} then we can decompose dyadically in modulation. With $l_0:=-3m/7$ we estimate, using Schur's test as in Lemma \ref{mo7}, \begin{equation*} \big\|Q_{jk}T_{m,\leq l_0}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^m\cdot \bar{\eps}^2(2^{l_0}2^{\beta m})^{1/2}2^{-j_1+3\beta j_1}2^{-j_2+3\beta j_2}\lesssim \bar{\eps}^22^{-\beta m}. 
\end{equation*} Finally, for $l\geq l_0+1$ we estimate, as in Lemma \ref{mo6}, \begin{equation*} \big\|Q_{jk}T_{m,l}[f_{j_1,k_1}^\mu,f_{j_2,k_2}^\nu]\big\|_{B_j^\sigma}\lesssim 2^{m(1+\beta)}2^{-l}\cdot \bar{\eps}^22^{-3m/2}. \end{equation*} The desired conclusion follows. \end{proof} \begin{lemma}\label{Reso01} With the hypothesis in Proposition \ref{mo10}, the bound \eqref{nj1} follows if \begin{equation}\label{top01} 2\max(j_1,j_2)\leq (1+20\beta)[m+\min(j_1,j_2)]\qquad\text{ and }\qquad \max(j_1,j_2)\leq 14m/15. \end{equation} \end{lemma} \begin{proof} This lemma contains the main resonant cases. We decompose dyadically in modulation and integrate by parts, using the formula \eqref{vco6.2}. It remains to prove that for any $l\in [-m+\beta m/10,-\D+4]$ and $s\in I_m$ fixed we have \begin{equation}\label{top1} 2^{-l}\|I_{\leq l,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^\nu(s)]\big\|_{B_j^\sigma}+2^{-l}\|\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),f_{j_2,k_2}^\nu(s)]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-\beta m/5}, \end{equation} and \begin{equation}\label{top2} 2^{-l}2^m\|\widetilde{I}_{l,s}[(\partial_sf_{j_1,k_1}^\mu)(s),f_{j_2,k_2}^\nu(s)]\big\|_{B_j^\sigma}+2^{-l}2^m\|\widetilde{I}_{l,s}[f_{j_1,k_1}^\mu(s),(\partial_sf_{j_2,k_2}^\nu)(s)]\big\|_{B_j^\sigma}\lesssim \bar{\eps}\,^2 2^{-\beta m/5}. \end{equation} {\bf{Proof of \eqref{top1}.}} We notice that \eqref{top1} is an instantaneous estimate, in the sense that the time evolution plays no role. Hence, it suffices to show the following: let $\chi\in C^\infty(\mathbb{R})$ be supported in $[-1,1]$ and assume that $j, l, s, m$ satisfy \begin{equation}\label{Ass3} -m+\beta m/10\le l\le -\D+4,\qquad 2^{m-4}\le s\le 2^{m+4},\qquad j\leq m+\D. 
\end{equation} Define the bilinear operator $I$ by \begin{equation}\label{gra0} \widehat{I[f,g]}(\xi):=\int_{\mathbb{R}^3}e^{is\Phi(\xi,\eta)}\chi_l(\Phi(\xi,\eta))\widehat{f}(\xi-\eta)\widehat{g}(\eta)d\eta,\qquad \chi_l(x)=\chi(2^{-l}x), \end{equation} where $\Phi=\Phi_{\sigma\mu\nu}$. Assume that $f,g$ satisfy \begin{equation}\label{gra1} \|f\|_{\H^{N_0-N_1-5}\cap Z_1^\mu}+\|g\|_{\H^{N_0-N_1-5}\cap Z_1^\nu}\leq 1, \end{equation} and define $f_{j_1,k_1}:=P_{[k_1-2,k_1+2]}Q_{j_1k_1}f$, $g_{j_2,k_2}:=P_{[k_2-2,k_2+2]}Q_{j_2k_2}g$. Then \begin{equation}\label{gra2} 2^{-l}\|Q_{jk}I[f_{j_1,k_1},g_{j_2,k_2}]\|_{B_j^\sigma}\lesssim 2^{-\beta m/5}, \end{equation} provided that $k,k_1,k_2\in[-\D/2,\D/2]$ and $j_1,j_2$ satisfy \eqref{top01}. In proving \eqref{gra2}, without loss of generality we may assume that $j_1\leq j_2\leq 14m/15$. With $I:=I[f_{j_1,k_1},g_{j_2,k_2}]$, recalling \eqref{psidag} and \eqref{cas2}, we will show that \begin{equation}\label{Alx81} 2^{-l}\sup_{|\xi|\in[2^{-3\D/4},2^{3\D/4}]} \vert (1+2^{m}\Psi^\dagger_\sigma(\xi))^{1/2+10\beta}\widehat{I}(\xi)\vert\lesssim 2^{2\beta m-m/2}. \end{equation} Notice that this is stronger than the bound \eqref{gra2}. Indeed, if $\sigma=b$ then for $j$ fixed we estimate \begin{equation*} \begin{split} \sup_{0\leq n\leq j+1}&2^{(1+\beta)j}2^{-4\beta n}\big\|A_{n,(j)}^\sigma Q_{jk}I\big\|_{L^2}\\ &\lesssim \sup_{0\leq n\leq j+1}2^{(1+\beta)j}2^{-4\beta n}\big\|\varphi_{-n}^{[-j-1,0]}(\Psi^\dagger_\sigma(\xi))\varphi_{k}(\xi)\widehat{I}(\xi)\big\|_{L^2_\xi}\\ &\lesssim \sum_{n\geq 0}2^{(1+\beta)j}2^{-n/2-4\beta \min(n,j)}\big\|\varphi_{-n}^{(-\infty,0]}(\Psi^\dagger_b(\xi))\varphi_{k}(\xi)\widehat{I}(\xi)\big\|_{L^\infty_\xi}, \end{split} \end{equation*} and notice that \eqref{gra2} would follow from \eqref{Alx81}. The proof is similar (in fact simpler) if $\sigma=e$. To prove \eqref{Alx81} assume that $m\geq \D^2$ and $\xi\in\mathbb{R}^3$ is fixed with $|\xi|\in[2^{-3\D/4},2^{3\D/4}]$. 
Let \begin{equation*} \Xi(\xi,\eta):=(\nabla_\eta\Phi)(\xi,\eta). \end{equation*} We remove first the nonresonant contribution. With $\kappa_r:=2^{\beta m/40}\big(2^{-m/2}+2^{j_2-m}\big)$ we define \begin{equation}\label{grn1} \mathcal{NR}(\xi):=\int_{\mathbb{R}^3}e^{is\Phi(\xi,\eta)}\chi_{l}(\Phi(\xi,\eta))(1-\varphi(\kappa_r^{-1}\Xi(\xi,\eta)))\widehat{f_{j_1,k_1}}(\xi-\eta)\widehat{g_{j_2,k_2}}(\eta)d\eta. \end{equation} With $\psi_1:=\varphi_{\leq m-\beta m/20}$ and $\psi_2:=1-\varphi_{\leq m-\beta m/20}$, we further decompose \begin{equation*} \begin{split} &\mathcal{NR}(\xi)=\mathcal{NR}_1(\xi)+\mathcal{NR}_2(\xi),\\ &\mathcal{NR}_i(\xi):=C2^{l}\int_{\mathbb{R}}\int_{\mathbb{R}^2}e^{i(s+\lambda)\Phi(\xi,\eta)}\widehat{\chi}(2^l\lambda)\psi_i(\lambda)(1-\varphi(\kappa_r^{-1}\Xi(\xi,\eta)))\widehat{f_{j_1,k_1}}(\xi-\eta)\widehat{g_{j_2,k_2}}(\eta)\,d\eta d\lambda. \end{split} \end{equation*} Since $\widehat{\chi}$ is rapidly decreasing we have $\|\varphi_k\cdot\mathcal{NR}_2\|_{L^\infty}\lesssim 2^{-4m}$, which gives an acceptable contribution. On the other hand, in the support of the integral defining $\mathcal{NR}_1$, we have that $\vert s+\lambda\vert\approx 2^m$ and integration by parts in $\eta$ (using Lemma \ref{tech5}) gives $\|\varphi_k\cdot\mathcal{NR}_1\|_{L^\infty}\lesssim 2^{-4m}$. Therefore the contribution of $\mathcal{NR}$ can be estimated as claimed in \eqref{Alx81}. In view of Proposition \ref{spaceres} (ii), (iii), $\widehat{I}-\mathcal{NR}$ is nontrivial only if we have a space-time resonance. In particular, we may assume that \begin{equation}\label{Alx74.6} (\sigma,\mu,\nu)\in\{(b,e,e),(b,e,b),(b,b,e)\},\qquad \min\big(\big||\xi|-\gamma_1\big|,\big||\xi|-\gamma_2\big|\big)\leq 2^{-\D/2}. \end{equation} We may also assume that $|\xi_3|\geq 2^{-\D/2}$ (the proof is similar if $|\xi_1|\geq 2^{-\D/2}$ or if $|\xi_2|\geq 2^{-\D/2}$). By rotation, using the vector-fields $\Omega_1$ and $\Omega_2$ we may assume that $\xi=(0,0,\xi_3)$. 
We would like to use Lemma \ref{RotIBP}. Recalling now the definition of $\lambda_\mu$ in \eqref{deflambd}, we let \begin{equation}\label{nba1} \begin{split} &\Phi^1(\xi,\eta):=(\Omega_1)_\eta\Phi(\xi,\eta)=\frac{\lambda'_\mu(|\xi-\eta|)}{|\xi-\eta|}(\eta_2\xi_3-\eta_3\xi_2),\\ &\Phi^2(\xi,\eta):=(\Omega_2)_\eta\Phi(\xi,\eta)=\frac{\lambda'_\mu(|\xi-\eta|)}{|\xi-\eta|}(\eta_3\xi_1-\eta_1\xi_3),\\ &\Phi^3(\xi,\eta):=(\Omega_3)_\eta\Phi(\xi,\eta)=\frac{\lambda'_\mu(|\xi-\eta|)}{|\xi-\eta|}(\eta_1\xi_2-\eta_2\xi_1). \end{split} \end{equation} Let $\kappa_\theta:=2^{\beta m/40}2^{-m/2}$ and define \begin{equation*} \begin{split} \mathcal{R}_{\perp}(\xi):=\int_{\mathbb{R}^3}&e^{is\Phi(\xi,\eta)}\chi_{l}(\Phi(\xi,\eta))\varphi(\kappa_r^{-1}\Xi(\xi,\eta))\\ &\big[1-\varphi(\kappa_\theta^{-1}\Phi^1(\xi,\eta))\varphi(\kappa_\theta^{-1}\Phi^2(\xi,\eta))\big]\widehat{f_{j_1,k_1}}(\xi-\eta)\widehat{g_{j_2,k_2}}(\eta)d\eta. \end{split} \end{equation*} We apply Lemma \ref{RotIBP} twice, after decomposing \begin{equation*} 1-\varphi(\kappa_\theta^{-1}\Phi^1(\xi,\eta))\varphi(\kappa_\theta^{-1}\Phi^2(\xi,\eta))=[1-\varphi(\kappa_\theta^{-1}\Phi^2(\xi,\eta))]+\varphi(\kappa_\theta^{-1}\Phi^2(\xi,\eta))[1-\varphi(\kappa_\theta^{-1}\Phi^1(\xi,\eta))]. \end{equation*} Notice that the factors $\psi_1(\xi,\eta)$, $\psi_2(\xi,\eta)$ are already accounted for by the factor $\varphi(\kappa_r^{-1}\Xi(\xi,\eta))$ and the assumptions $|\xi_3|\geq 2^{-\D/2}$ and $m\geq \D^2$. It follows that $|\mathcal{R}_{\perp}(\xi)|\lesssim 2^{-4m}$. It remains to bound the resonant component \begin{equation}\label{nba2} \begin{split} \mathcal{R}_{||}(\xi):=J_{||}[f_{j_1,k_1},&g_{j_2,k_2}](\xi):=\int_{\mathbb{R}^3}e^{is\Phi(\xi,\eta)}\chi_{l}(\Phi(\xi,\eta))\varphi(\kappa_r^{-1}\Xi(\xi,\eta))\\ &\varphi(\kappa_\theta^{-1}\Phi^1(\xi,\eta))\varphi(\kappa_\theta^{-1}\Phi^2(\xi,\eta))\widehat{f_{j_1,k_1}}(\xi-\eta)\widehat{g_{j_2,k_2}}(\eta)d\eta. 
\end{split} \end{equation} More precisely, for \eqref{Alx81} it remains to prove that if $\xi=(0,0,\xi_3)$, $\xi_3\in[2^{-\D/2},2^{\D/2}]$, then \begin{equation}\label{nba4} \vert (1+2^{m}\Psi^\dagger_b(\xi))\mathcal{R}_{||}(\xi)\vert\lesssim 2^{2\beta m-m/2}2^l. \end{equation} We examine now the integral in \eqref{nba2}. In view of Proposition \ref{spaceres} (ii), this integral is nontrivial only if \begin{equation}\label{EstimPsi} \vert \Psi_b(\xi)\vert =\vert \Phi(\xi,p(\xi))\vert\lesssim\vert \Phi(\xi,\eta)\vert+\vert \Phi(\xi,\eta)-\Phi(\xi, p(\xi))\vert\lesssim 2^l+\kappa_r^2. \end{equation} Using \eqref{nba1} and Proposition \ref{spaceres} (ii), for $\xi=(0,0,\xi_3)$ fixed, $\eta$ is supported in the rectangle \begin{equation}\label{nba5} \mathcal{Q}_{\xi}:=\{\eta=(\eta_1,\eta_2,\eta_3):\,|\eta_1|+|\eta_2|\leq 2^{4\D}\kappa_\theta,\,|\eta_3-p_+(\xi_3)|\leq 2^{4\D}\kappa_r\}. \end{equation} Recall from Lemma \ref{LinEstLem} (ii) and \eqref{gra1} that \begin{equation}\label{nba6} \begin{split} 2^{j_1/2-j_1/20}\Vert \widehat{f_{j_1,k_1}}\Vert_{L^\infty}+2^{j_1-j_1/20}\Vert\sup_{\theta\in\mathbb{S}^2} |\widehat{f_{j_1,k_1}}(r\theta)|\Vert_{L^2(r^2dr)}&\lesssim 1,\\ 2^{j_2/2-j_2/20}\Vert \widehat{g_{j_2,k_2}}\Vert_{L^\infty}+2^{j_2-j_2/20}\Vert\sup_{\theta\in\mathbb{S}^2} |\widehat{g_{j_2,k_2}}(r\theta)|\Vert_{L^2(r^2dr)}&\lesssim 1. \end{split} \end{equation} Using only the $L^\infty$ bounds in \eqref{nba6} and ignoring the cutoff function $\chi_{l}(\Phi(\xi,\eta))$ in \eqref{nba2}, we estimate first \begin{equation*} |\mathcal{R}_{||}(\xi)|\lesssim \kappa_r\kappa_\theta^22^{-9j_2/20}2^{-9j_1/20}\lesssim 2^{\beta m/10}2^{-9j_2/20}2^{-9j_1/20}2^{-m}(2^{-m/2}+2^{j_2-m}). \end{equation*} Since $\vert \Psi_b(\xi)\vert \lesssim 2^l+\kappa_r^2$ (see \eqref{EstimPsi}), the desired bound \eqref{nba4} follows easily if $j_2\leq m/2$. 
On the other hand, if $j_2\geq m/2$ then the left-hand side of \eqref{nba4} is dominated by \begin{equation*} C2^m(2^l+\kappa_r^2)\cdot 2^{\beta m/10}2^{-9j_2/20}2^{-9j_1/20}2^{-m}2^{j_2-m}\lesssim (2^l+\kappa_r^2)2^{\beta m/2}2^{-m}2^{11j_2/20-9j_1/20}. \end{equation*} In view of the assumption \eqref{top01}, $11j_2/20-9j_1/20\leq 3m/10-10\beta m$. The desired bound \eqref{nba4} follows if $\kappa_r^2\leq 2^l2^{m/5}$. Finally, assume that $\kappa_r^2\ge 2^l2^{m/5}$ (in particular $j_2\geq 11m/20$). In this case the restriction $|\Phi(\xi,\eta)|\lesssim 2^l$ is stronger and we have to use it. We decompose, with $p_-:=\lfloor\log_2(2^{l/2}\kappa_r^{-1})+\D\rfloor$, \begin{equation*} \mathcal{R}_{||}(\xi)=\sum_{p\in[p_-,0]}\mathcal{R}^p_{||}(\xi), \end{equation*} where \begin{equation}\label{nba7.5} \begin{split} \mathcal{R}_{||}^p(\xi):=J^p_{||}[f_{j_1,k_1},&g_{j_2,k_2}](\xi):=\int_{\mathbb{R}^3}e^{is\Phi(\xi,\eta)}\chi_{l}(\Phi(\xi,\eta))\varphi_p^{[p_-,1]}(\kappa_r^{-1}\Xi(\xi,\eta))\\ &\varphi(\kappa_\theta^{-1}\Phi^1(\xi,\eta))\varphi(\kappa_\theta^{-1}\Phi^2(\xi,\eta))\widehat{f_{j_1,k_1}}(\xi-\eta)\widehat{g_{j_2,k_2}}(\eta)d\eta. \end{split} \end{equation} Notice that if $\mathcal{R}_{||}^p(\xi)\neq 0$ then $\vert\Psi_b(\xi)\vert\lesssim 2^{2p}\kappa_r^2$ (this is stronger than \eqref{EstimPsi}). The term $\mathcal{R}_{||}^{p_-}(\xi)$ can be bounded as before. On the other hand, for $p\geq p_-+1$ we would like to get a more precise description of the support of integration in $\eta$ (better than the one in \eqref{nba5}). For this we write \begin{equation}\label{nba8} \Phi(\xi,\eta)=\sqrt{1+|\xi|^2}-\sqrt{1+d_\mu|\xi-\eta|^2}-\sqrt{1+d_\nu|\eta|^2}, \end{equation} where $d_e=d\in(0,1)$ and $d_b=1$. Since $\xi=(0,0,\xi_3)$, $\xi_3\in[2^{-\D/2},2^{\D/2}]$, and $|\eta_1|+|\eta_2|\leq 2^{4\D}\kappa_\theta$, the condition $|\Xi(\xi,\eta)|\in[2^{p-2}\kappa_r,2^{p+2}\kappa_r]$ implies that $|\partial_{\eta_3} \Phi(\xi,\eta)|\approx 2^p\kappa_r$. 
In particular, using Proposition \ref{spaceres} (ii), the $\eta$ support of integration is included in the set \begin{equation*} \{\eta=(\eta_1,\eta_2,\eta_3):\,|\eta_1|+|\eta_2|\leq 2^{4\D}\kappa_\theta,\,|\eta_3-p_+(\xi_3)|\approx 2^p\kappa_r,\,|\Phi(\xi,\eta)|\leq 2^l\}. \end{equation*} Based on Lemma \ref{lemma00}, this set is essentially contained in a union of two $(\kappa_\theta)^2\times 2^l2^{-p}\kappa_r^{-1}$ tubes. Using \eqref{nba6} and estimating $\Vert \widehat{f_{j_1,k_1}}\Vert_{L^\infty}\lesssim 2^{-9j_1/20}\lesssim 2^{40\beta m}2^{9(m-2j_2)/20}$, see \eqref{top01}, we have \begin{equation*} \begin{split} |\mathcal{R}^p_{||}(\xi)|&\lesssim (\kappa_\theta)^2\times (2^l2^{-p}\kappa_r^{-1})^{1/2}\Vert \widehat{g_{j_2,k_2}}\Vert_{L^\infty_\theta L^2(rdr)}\Vert \widehat{f_{j_1,k_1}}\Vert_{L^\infty}\\ &\lesssim (\kappa_\theta)^2\times (2^l2^{-p}\kappa_r^{-1})^{1/2}2^{40\beta m}2^{9m/20}2^{-9j_2/5}. \end{split} \end{equation*} Therefore, since $|\Psi(\xi)|\lesssim 2^{2p}\kappa_r^2$ in the support of $\mathcal{R}^p_{||}$, \begin{equation*} \begin{split} \vert (1+2^{m}\Psi_b(\xi))\mathcal{R}^p_{||}(\xi)\vert&\lesssim 2^{m+2p}\kappa_r^2\cdot 2^{-m+42\beta m}(2^l2^{-p}\kappa_r^{-1})^{1/2}2^{9m/20}2^{-9j_2/5}\\ &\lesssim 2^{3p/2}2^{l/2}2^{-m}2^{-j_2/5}. \end{split} \end{equation*} This suffices to prove \eqref{nba4} since $2^p\leq 1$, $2^{-l/2}\leq 2^{m/2}$, and $2^{-j_2/5}\leq 2^{-m/10}$. This completes the proof of the main bound \eqref{top1}. {\bf{Proof of \eqref{top2}.}} As in \eqref{gra2}, it suffices to prove that \begin{equation}\label{grd2} 2^{-l}\|Q_{jk}I[F_{j_1,k_1},g_{j_2,k_2}]\|_{B_j^\sigma}+2^{-l}\|Q_{jk}I[f_{j_1,k_1},G_{j_2,k_2}]\|_{B_j^\sigma}\lesssim 2^{-\beta m/5}, \end{equation} where $I$ is defined as in \eqref{gra0}, $F=\bar{\eps}^{-1}2^m\partial_s f_\mu$, and $G:=\bar{\eps}^{-1}2^m\partial_s g_\nu$. 
The functions $f,g,F,G$ satisfy the bounds \begin{equation}\label{grd3} \begin{split} &\|f\|_{\H^{N_0-N_1-5}\cap Z_1^\mu}+\|g\|_{\H^{N_0-N_1-5}\cap Z_1^\nu}\leq 1,\\ &\|F\|_{\H^{N_0-N_1-6}}+2^{m/2}\|F\|_{L^2}+\|G\|_{\H^{N_0-N_1-6}}+2^{m/2}\|G\|_{L^2}\leq 1, \end{split} \end{equation} compare with the bounds in Proposition \ref{sDeriv} (iii). As before, we may assume that $k_1,k_2\in[-\D/2,\D/2]$, and that the parameters $j, l, s, m, j_1,j_2$ satisfy the bounds \eqref{Ass3} and \eqref{top01}. As before, for \eqref{grd2} it suffices to prove the stronger pointwise bound \begin{equation*} \begin{split} 2^{-l}&\sup_{|\xi|\in[2^{-3\D/4},2^{3\D/4}]}\big|(1+2^m\Psi_{\sigma}^\dagger(\xi))^{1/2+10\beta}\mathcal{F}\{I[F_{j_1,k_1},g_{j_2,k_2}]\}\big|\\ &+2^{-l}\sup_{|\xi|\in[2^{-3\D/4},2^{3\D/4}]}\big|(1+2^m\Psi_{\sigma}^\dagger(\xi))^{1/2+10\beta}\mathcal{F}\{I[f_{j_1,k_1},G_{j_2,k_2}]\}\big|\lesssim 2^{-m/2}. \end{split} \end{equation*} In proving this we may assume $j_1\leq j_2$, $m\geq \D^2$, and first remove the negligible nonresonant interactions (defined as in \eqref{grn1}). Then we may assume that $\sigma=b$, $\xi=(0,0,\xi_3)$, with $\xi_3\in [2^{-\D/2},2^{\D/2}]$, and remove the negligible non-parallel interactions. After these reductions, with $J_{||}$ defined as in \eqref{nba2}, it remains to prove that \begin{equation}\label{grd4} \begin{split} \big|(1+2^m\Psi_{b}^\dagger(\xi))^{1/2+10\beta}&J_{||}[F_{j_1,k_1},g_{j_2,k_2}](\xi)\big|\\ &+\big|(1+2^m\Psi_{b}^\dagger(\xi))^{1/2+10\beta}J_{||}[f_{j_1,k_1},G_{j_2,k_2}](\xi)\big|\lesssim 2^l2^{-m/2}. \end{split} \end{equation} The functions $f_{j_1,k_1}$ and $g_{j_2,k_2}$ satisfy the bounds \eqref{nba6}. Moreover, \begin{equation}\label{grd5} \Vert \sup_{\theta\in\mathbb{S}^2}|\widehat{F_{j_1,k_1}}(r\theta)|\Vert_{L^2(r^2dr)}+\Vert \sup_{\theta\in\mathbb{S}^2}| \widehat{G_{j_2,k_2}}(r\theta)|\Vert_{L^2(r^2dr)}\lesssim 2^{-m/2+m/40}. 
\end{equation} This follows from \eqref{grd3}, using the same interpolation argument as in the proof of \eqref{RadL2}. We ignore first the cutoff function $\chi_l(\Phi(\xi,\eta))$ and notice that the variable $\eta$ is included in the set $\mathcal{Q}_\xi$ defined in \eqref{nba5}. Using \eqref{grd5} and the $L^\infty$ bounds in \eqref{nba6} we estimate first \begin{equation}\label{grd7} \begin{split} |J_{||}[F_{j_1,k_1},g_{j_2,k_2}](\xi)|&+|J_{||}[f_{j_1,k_1},G_{j_2,k_2}](\xi)|\lesssim \kappa_\theta^2\kappa_r^{1/2} 2^{-j_1/2+j_1/20}2^{-m/2+m/40}\\ &\lesssim 2^{-3m/2+m/39}2^{-9j_1/20}(2^{-m/4}+2^{(j_2-m)/2}). \end{split} \end{equation} Since $\kappa_r=2^{\beta m/40}(2^{-m/2}+2^{j_2-m})$ and $\vert \Psi_b(\xi)\vert \lesssim 2^l+\kappa_r^2$ (see \eqref{EstimPsi}), the desired bound \eqref{grd4} follows easily from \eqref{grd7} if $j_2\leq m/2$. On the other hand, if $j_2\geq m/2$ then $2^{-9j_1/20}\lesssim 2^{40\beta m}2^{9(m-2j_2)/20}$, and the bound \eqref{grd7} gives \begin{equation}\label{grd8} |J_{||}[F_{j_1,k_1},g_{j_2,k_2}](\xi)|+|J_{||}[f_{j_1,k_1},G_{j_2,k_2}](\xi)|\lesssim 2^{-3m/2}2^{-2j_2/5}. \end{equation} The desired bound \eqref{grd4} follows if $\kappa_r^2\leq 2^l2^{2j_2/5}$. On the other hand, if $\kappa_r^2\geq 2^l2^{2j_2/5}$ (in particular this implies $j_2\geq 11m/20$) then we have to use the stronger restriction $|\Phi(\xi,\eta)|\lesssim 2^l$. For $p\in[p_-,0]$, $p_-:=\lfloor\log_2(2^{l/2}\kappa_r^{-1})+\D\rfloor$, we define the operators $J^p_{||}$ as in \eqref{nba7.5}. Notice that the contribution of $J^{p_-}_{||}$ can be estimated easily using the fact that $\vert \Psi_b(\xi)\vert \lesssim 2^l$ in the support of $J^{p_-}_{||}$. 
Moreover, as proved earlier, for $p\geq p_-+1$ the $\eta$ support of integration in the definition of $J^{p}_{||}$ is included in the set \begin{equation*} \{\eta=(\eta_1,\eta_2,\eta_3):\,|\eta_1|+|\eta_2|\leq 2^{4\D}\kappa_\theta,\,|\eta_3-p_+(\xi_3)|\approx 2^p\kappa_r,\,|\Phi(\xi,\eta)|\leq 2^l\}, \end{equation*} which is essentially contained in a union of two $(\kappa_\theta)^2\times 2^l2^{-p}\kappa_r^{-1}$ tubes (based again on Lemma \ref{lemma00}). Using \eqref{grd5} and the $L^\infty$ bounds in \eqref{nba6} we estimate \begin{equation*} |J_{||}^p[F_{j_1,k_1},g_{j_2,k_2}](\xi)|+|J_{||}^p[f_{j_1,k_1},G_{j_2,k_2}](\xi)|\lesssim \kappa_\theta^2(2^l2^{-p}\kappa_r^{-1})^{1/2} 2^{-9j_1/20}2^{-m/2+m/40}. \end{equation*} Since $2^{-9j_1/20}\lesssim 2^{40\beta m}2^{9(m-2j_2)/20}$ and $\vert \Psi_b(\xi)\vert \lesssim 2^{2p}\kappa_r^2$, it follows that \begin{equation*} \begin{split} \big|(1+2^m\Psi_{b}^\dagger(\xi))^{1/2+10\beta}&J^p_{||}[F_{j_1,k_1},g_{j_2,k_2}](\xi)\big|+\big|(1+2^m\Psi_{b}^\dagger(\xi))^{1/2+10\beta}J^p_{||}[f_{j_1,k_1},G_{j_2,k_2}](\xi)\big|\\ &\lesssim (2^{m+2p}\kappa_r^2)^{1/2+10\beta}\cdot 2^{-m}(2^l2^{-p}\kappa_r^{-1})^{1/2} 2^{9(m-2j_2)/20}2^{-m/2+m/38}\\ &\lesssim 2^{p/2}2^{l/2}2^{-2j_2/5}2^{-m}. \end{split} \end{equation*} The desired conclusion \eqref{grd4} follows, which completes the proof of the lemma. \end{proof}
\section{\label{sec:introduction}INTRODUCTION} The interaction of ultrashort (fs-ps) intense laser pulses with solids is relevant to a wide area of research ranging from high-harmonic generation \cite{Shambhu,Vampa,You,Ndabashimiye,Morimoto} to material machining \cite{Zhigilei,Ilday,Audouard,Betz,Schmidt,Tiinnermann,Mourou,Wang,Gamaly}. The process of ultrafast laser micromachining, which can suppress the heat-affected zone, starts from the energy transfer from the laser to the material by electron excitation, followed by that from the hot electrons to the lattice. As a result, the material undergoes a phase and/or structural transition \cite{Medvedev}, leaving a change of the optical constants or a defect behind \cite{Mazur}, which eventually leads to ablation, drilling, or structuring \cite{Mirza,Silaeva,Rethfeld,Rethfeld2,Rudenko,Chimier,Lorazo,Thorstensen,Kondo,Upadhyay,Ivanov,Ivanov2,Itina2,Garrison,Sakabe,Ishino}. The comprehensive modeling of laser material machining is highly complex, multi-scale in both time and space, multi-phase (solid, fluid, plasma, cluster, etc.), and possibly accompanied by chemical reactions. Plasma or continuum models \cite{Zhigilei,Audouard,Matzen,Lehman,Wu,Itina}, for example, have been employed to describe and simulate such processes, advancing understanding. However, they have difficulties in examining the initial transient dynamics before local thermodynamic equilibrium is reached. It has now become possible to describe the attosecond-femtosecond electron dynamics under intense laser fields with the time-dependent density-functional theory (TDDFT) \cite{salmon,Yabana,Magyar,Otobe} or time-dependent density-matrix methods \cite{DM,Hirori,Sanari}. TDDFT is an {\it ab initio} method that offers a good compromise between accuracy and computational feasibility. 
Its computational cost is, however, still very high, especially if one wants to perform long-timescale simulations coupled with molecular dynamics and electromagnetic-field analysis. In TDDFT, each electron orbital satisfies the time-dependent Kohn-Sham (TDKS) equation [see Eq.~(\ref{TDKS}) below]. The leading order of a semiclassical $\hbar$ expansion of the TDKS equation reduces to the Vlasov equation, which describes the temporal evolution of the electron distribution function in phase space. Thus, Vlasov-based approaches are expected to be a cost-effective alternative to TDDFT, in particular for metals. Such approaches have previously been applied to ionization and explosion dynamics of molecules \cite{Ishikawa,c60} and metal clusters \cite{Giglio,Fennel,Kohn,Plagne,Domps,Domps2}. In these studies, the Vlasov equation is numerically solved with so-called pseudo-particle methods, which represent the electron cloud as an assembly of classical test particles whose motion is governed by Newton’s equations of motion. There are several reports of application to Na clusters, in good agreement with TDDFT results \cite{Plagne,Domps,Domps2,Calvayrac,Feret}. In this paper, we extend the pseudo-particle method based on the Vlasov equation to the description of electron dynamics in extended systems under intense laser fields. The effective potential acting on the electrons contains not only the ionic potential, interelectronic Hartree potential, and interaction with the laser but also the exchange-correlation potential within the local-density approximation (LDA), and incorporates the periodic boundary condition. We apply the present method to bulk aluminum. The calculated optical conductivity, refractive index, extinction coefficient, and reflectivity as well as energy absorption are in excellent agreement with TDDFT calculations and experimental references. The present paper is organized as follows. Section \ref{sec:methods} describes our simulation methods. 
We review the Vlasov equation and describe our numerical implementations with the periodic boundary condition. In Sec.~\ref{sec:results} we describe numerical application to bulk aluminum and compare the results with TDDFT and measurement values. The conclusions are given in Sec.~\ref{sec:conclusions}. \section{\label{sec:methods}METHODS} \subsection{\label{subsec:Vlasov equation}Vlasov equation} Among the methods for treating quantum many-body dynamics, TDDFT provides a feasible computational framework for treating electronic systems' optical response or charged particles' collision phenomena \cite{NaAr}. The time propagation of an $N_e$-electron system comes down to solving a set of equations for the Kohn-Sham orbitals $\{\phi_i(\mathbf{r},t)\}$ that evolve in a self-consistent mean field \cite{TDDFT}, \begin{equation} \mathrm{i}\hbar \frac{\partial}{\partial t}\phi_i(\mathbf{r},t) = h_{\mathrm{KS}}[n_{\mathrm{e}}(\mathbf{r},t)]\phi_i(\mathbf{r},t), \label{TDKS} \end{equation} where, \begin{equation} h_{\mathrm{KS}}[n_{\mathrm{e}}(\mathbf{r},t)] = -\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{eff}}[n_{\mathrm{e}}(\mathbf{r},t)], \end{equation} denotes the Kohn-Sham Hamiltonian, $m$ the electron mass, $V_{\rm eff}$ the effective potential (see below), and the time-dependent electron density $n_e({\bf r},t)$ is defined as, \begin{equation} n_{\mathrm{e}}(\mathbf{r},t) = \sum_{i=1}^{N_e} |\phi_i(\mathbf{r},t)|^2. \end{equation} Analogously, the density operator $\hat{\rho}(t)$ is defined as, \begin{equation} \mel{\mathbf{r}}{\hat{\rho}(t)}{\mathbf{r}'} = \sum_{i=1}^{N_e} \phi_i^*(\mathbf{r},t)\phi_i(\mathbf{r}',t), \end{equation} whose evolution is governed by the von Neumann equation (vNE), \begin{equation} \frac{\partial}{\partial t}\hat{\rho}(t) = -\frac{\mathrm{i}}{\hbar}\left[ \hat{h}_{\mathrm{KS}}(t), \hat{\rho}(t) \right]. 
\label{TDVN} \end{equation} Performing the Wigner transformation \cite{Wigner} and taking the limit $\hbar \to 0$, the density operator $\hat{\rho}(t)$ is mapped onto a real function $f(\mathbf{r},\mathrm{\mathbf{p}},t)$, which obeys the Vlasov equation, \begin{align} \frac{\partial}{\partial t}&f(\mathbf{r},\mathrm{\mathbf{p}},t)\notag\\ &= -\frac{\mathrm{\mathbf{p}}}{m}\cdot \nabla_{\mathbf{r}}f(\mathbf{r},\mathrm{\mathbf{p}},t)+\nabla_{\mathbf{r}}V_{\mathrm{eff}}[n_{\mathrm{e}}(\mathbf{r},t)]\cdot \nabla_{\mathrm{\mathbf{p}}}f(\mathbf{r},\mathrm{\mathbf{p}},t), \label{Vlasov} \end{align} which is a classical alternative to the vNE, Eq.~(\ref{TDVN}), where $\mathrm{\mathbf{p}}$ is the electron canonical momentum. Here, $f(\mathbf{r},\mathbf{p},t)$ is interpreted as the electron distribution in phase space. The effective potential $V_{\mathrm{eff}}$ is a functional of the electron density distribution $n_e({\bf r},t)$ and decomposed into, \begin{equation} V_{\mathrm{eff}}[n_{\mathrm{e}}(\mathbf{r},t)] = V_{\mathrm{Coulomb}}[n_{\mathrm{e}}(\mathbf{r},t)] + V_{\mathrm{xc}}[n_{\mathrm{e}}(\mathbf{r},t)] + V_{\mathrm{ext}}(\mathbf{r},t), \label{Veff} \end{equation} with the exchange-correlation potential $V_{xc}$, external field potential $V_{\rm ext}$, and, \begin{equation} V_{\mathrm{Coulomb}}[n_{\mathrm{e}}(\mathbf{r},t)] = \sum_{i}V_{\mathrm{ps}}(\mathbf{r}-\mathbf{r}_i) + V_{\mathrm{\mathrm{H}}}[n_{\mathrm{e}}(\mathbf{r},t)], \end{equation} where $i$, $V_{\mathrm{ps}}$ and $V_{\mathrm{H}}$ denote the label of ions and the spherically symmetric ionic pseudopotential and the electron-electron Hartree potential, respectively. Several previous works for Na clusters have used their original pseudo potentials \cite{Fennel,Giglio}, adjusted so that the simulation results reproduce the static and dynamical properties of the system. 
In this work, instead, we employ the modified Heine-Abarenkov type local pseudo potential for $V_{\mathrm{ps}}$, \begin{equation} V_{\mathrm{ps}}(r) = -\frac{z}{R}e\left\{ \frac{1}{r} \left[ 1-(1+\beta r)\mathrm{e}^{-\alpha r} - A\mathrm{e}^{-r} \right] \right\} (r=|\mathbf{r}|), \end{equation} where $z$ is the number of valence electrons, and $A, R, \alpha$, and $\beta$ are material-dependent parameters determined by the {\it ab initio} density-functional formalism in Ref.~\cite{Vps}, thus independently of the Vlasov simulations. Their values for the bulk aluminum crystal are $A=3.574 \ \mathrm{a.u.}, \alpha=3.635 \ \mathrm{a.u.}, \beta=0.8343 \ \mathrm{a.u.}, R=0.334 \ \mathrm{a.u.}, z=3$. $V_{\mathrm{H}}$ is evaluated by solving the Poisson equation, \begin{equation} \label{eq:Poisson} \Delta V_{\mathrm{H}} [n_e(\mathbf{r},t)] = -4\pi e n_e(\mathbf{r},t). \end{equation} Here, let us introduce a real-space simulation box $\Omega$, on which the periodic boundary condition is imposed, and translation vectors $\mathbf{G}$. $\Omega$ is defined as, \begin{equation} \Omega = \left\{ \mathbf{r} = \sum_{j=x,y,z} a_j\mathbf{e}_j \Bigg{|} \ 0\le a_j<1 \right\}, \end{equation} where $\{\mathbf{e}_j\}$ are the lattice vectors along the $j$-axis ($j=x, y, z$), whose lengths are denoted by $L_j = |\mathbf{e}_j|$. Integrals with respect to $\mathbf{r}$ are taken over $\Omega$ in what follows. The translation vectors are given by, \begin{equation} \mathbf{G} = \sum_{j=x,y,z} M_j\mathbf{e}_j \ (M_j = 0, \pm1, \pm2, \cdots). \end{equation} Taking the periodic boundary condition into account, the Coulomb terms $V_{\mathrm{ps}}$ and $V_{\mathrm{\mathrm{H}}}$ are represented as a Fourier series expansion. 
The pseudo potential term is rewritten as, \begin{gather} \sum_{i}^{\infty}V_{\mathrm{ps}}(\mathbf{r}-\mathbf{r}_i)=\sum_{\mathbf{G},i=1}^{N_{\mathrm{ion}}}V_{\mathrm{ps}}(\mathbf{r}-\mathbf{r}_i-\mathbf{G}), \end{gather} where $N_{\mathrm{ion}}$ denotes the number of ions in $\Omega$ and, \begin{gather} \sum_{\mathbf{G}}V_{\mathrm{ps}}(\mathbf{r}-\mathbf{r}_i-\mathbf{G})\qquad \qquad \qquad \qquad \qquad \notag \\ \qquad = \mathcal{F}^{-1} \left[ \sum_{i} \mathrm{e}^{-\mathrm{i}\mathbf{Q}\cdot\mathbf{r}_i}\left\{ V_{\mathrm{ps}}(Q) +\frac{4\pi}{Q}z \right\} \right], \label{Vps} \end{gather} with $\mathbf{Q}$ being the coordinates in the Fourier domain ($Q=|\mathbf{Q}|$), $\mathcal{F}[\cdot]$ the Fourier series expansion within $\Omega$, and, \begin{gather} V_{\mathrm{ps}}(Q) = 4\pi z e R^2 \left[ -\frac{1}{\left( QR \right)^2} + \frac{1}{\left( QR \right)^2 + \alpha^2} \right. \notag\\ \qquad \left. + \frac{2\alpha \beta}{\left\{ \left( QR \right)^2 + \alpha^2 \right\}^2} + \frac{2A}{\left\{ \left( QR \right)^2 + 1 \right\}^2} \right]. \end{gather} One obtains the solution of the Poisson equation (Eq.~(\ref{eq:Poisson})) for the electron density $n_{\mathrm{e}}$ given within $\Omega$ as, \begin{equation} V_{\mathrm{\mathrm{H}}}[n_{\mathrm{e}}(\mathbf{r},t)] = \mathcal{F}^{-1} \left[ \mathcal{F} \left[ n_{\mathrm{e}}(\mathbf{r},t) \right] \frac{4\pi e}{Q^2} \right]. \end{equation} For the exchange-correlation potential $V_{\mathrm{xc}}$, we employ the LDA by Perdew and Zunger \cite{PZ}. The laser-electron interaction is described in the length gauge, \begin{equation} V_{\mathrm{ext}}(\mathbf{r},t) = -e\mathbf{E}(t)\cdot \mathbf{r}, \end{equation} within the dipole approximation, where ${\bf E}$ denotes the laser electric field vector. 
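The Fourier-space inversion used above for $V_{\mathrm{H}}$ is straightforward to prototype. The following minimal Python sketch is our illustration only (the function name and conventions are assumptions, not the authors' code); it solves $\Delta V_{\mathrm{H}}=-4\pi e\,n_e$ on a periodic grid with FFTs, dropping the $\mathbf{Q}=0$ mode, which corresponds to the usual uniform neutralizing background of a charge-neutral cell:

```python
import numpy as np

def hartree_potential(n_e, lengths, e=1.0):
    """Solve  Laplacian(V_H) = -4*pi*e*n_e  on a periodic box via FFT.

    n_e     : 3D array of the electron density sampled on the grid of Omega
    lengths : edge lengths (L_x, L_y, L_z) of the simulation box
    The Q = 0 (average-density) mode is dropped, i.e. a uniform
    neutralizing background is assumed.
    """
    # angular wavenumbers along each axis, matching the grid spacing L/n
    ks = [2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          for n, L in zip(n_e.shape, lengths)]
    KX, KY, KZ = np.meshgrid(*ks, indexing="ij")
    Q2 = KX**2 + KY**2 + KZ**2
    n_hat = np.fft.fftn(n_e)
    V_hat = np.zeros_like(n_hat)
    nonzero = Q2 > 0.0
    # division by Q^2 implements F^{-1}[ F[n_e] * 4*pi*e / Q^2 ]
    V_hat[nonzero] = 4.0 * np.pi * e * n_hat[nonzero] / Q2[nonzero]
    return np.fft.ifftn(V_hat).real
```

For a single plane-wave density $n_e=\cos(qx)$ with $q=2\pi/L_x$ the routine reproduces the analytic solution $(4\pi e/q^2)\cos(qx)$ to machine precision, which is a convenient correctness check.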
In the length gauge, ${\bf p}$ becomes the kinetic momentum, and thus the electronic current density $\mathbf{J}(t)$ averaged over $\Omega$ is given by, \begin{equation} \mathbf{J}(t) = \frac{1}{|\Omega|}\iint_{\Omega} \left(-e \frac{\mathbf{p}}{m}\right) f(\mathbf{r},\mathbf{p},t) \dd \mathbf{r} \dd \mathbf{p}. \label{eq:current density} \end{equation} \subsection{\label{subsec:numerical implementations} Numerical implementations} \subsubsection{\label{subsubsec:pseudo-particle method} Pseudo-particle method} The direct propagation of the distribution function would require the treatment of a six-dimensional time-dependent function on grids \cite{gridVlasov}. To avoid such a massive computation, one introduces the pseudo-particle method \cite{Bertsch,Giglio,Fennel,Kohn}, where the distribution function $f(\mathbf{r},\mathbf{p},t)$ is expressed by a set of pseudo-particles with mass $m$ as, \begin{equation} f(\mathbf{r},\mathbf{p},t) = \frac{1}{N_s}\sum_{i=1}^{N_{\mathrm{pp}}}g_r \left( \mathbf{r}-\mathbf{r}_i(t) \right) g_p \left( \mathbf{p}-\mathbf{p}_i(t) \right). \label{ftp} \end{equation} Here $\mathbf{r}_i, \mathbf{p}_i$ are the position and canonical momentum of each pseudo particle labeled by $i$. The total number of pseudo particles $N_{pp}$ is given by $N_{pp}=N_sN_e$, where $N_{s}$ and $N_e$ are the number of pseudo particles per electron and the total number of the electrons contained in $\Omega$, respectively. The statistical error is reduced by increasing $N_s$, which is set to 10000 in this study. $g_r(\mathbf{r})$ and $g_p(\mathbf{p})$ denote smoothing kernel functions for the position and momentum, respectively, of Gaussian forms, \begin{align} g_r (\mathbf{r}) &= \sum_{\{\mathbf{G}\}} \frac{1}{\pi^{3/2}d_r^3}\exp(-|\mathbf{r} + \mathbf{G}|^2/d_r^2), \label{eq:smoothing-kernel-r}\\ g_p (\mathbf{p}) &= \frac{1}{\pi^{3/2}d_p^3}\exp(-|\mathbf{p}|^2/d_p^2), \end{align} where $d_r$ and $d_p$ are smoothing widths. 
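The kernel representation of Eq.~(\ref{ftp}) translates directly into a deposition rule for grid quantities: the smoothed density at a grid point is the sum of the position kernels of all pseudo-particles. The Python sketch below is our illustration only (the function name and the minimum-image shortcut, which replaces the explicit sum over neighboring cells and is accurate when $d_r$ is much smaller than the box, are our assumptions):

```python
import numpy as np

def density_on_grid(positions, box, n_grid, d_r, n_s=1):
    """Deposit pseudo-particles onto the regular grid of Omega.

    positions : (N_pp, 3) array of pseudo-particle positions
    box       : edge lengths of the periodic cell
    n_grid    : number of grid intervals per axis
    d_r       : smoothing width of the Gaussian kernel g_r
    """
    box = np.asarray(box, dtype=float)
    axes = [np.arange(n) * L / n for n, L in zip(n_grid, box)]
    X = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (Nx,Ny,Nz,3)
    n_e = np.zeros(tuple(n_grid))
    norm = 1.0 / (np.pi ** 1.5 * d_r ** 3)   # normalization of g_r
    for r_i in positions:
        d = X - r_i
        d -= np.rint(d / box) * box          # minimum-image displacement
        n_e += norm * np.exp(-np.sum(d * d, axis=-1) / d_r ** 2)
    return n_e / n_s
```

With the paper's values $d_r=0.575$ a.u. and $\Delta x=0.5$ a.u., the grid sum of the deposited density times the cell volume recovers the particle number to high accuracy, since the sampling step is comparable to the kernel width.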
Only the nearest-neighbor cells are included in the summation over $\mathbf{G}$ in Eq.~\eqref{eq:smoothing-kernel-r}. The kernel functions are normalized as, \begin{align} \int_{\Omega} g_r (\mathbf{r}) \dd \mathbf{r} &= 1, \\ \int g_p (\mathbf{p}) \dd \mathbf{p} &= 1, \end{align} so that, \begin{gather} \iint_{\Omega} f({\bf r},{\bf p},t)\, \dd \mathbf{r} \dd \mathbf{p} = N_e. \end{gather} The scattering cross-section between the electrons and the effective potential is adjusted through $d_r$; the smaller $d_r$, the larger the cross-section. Here we use $d_r = 0.575 \ \mathrm{a.u.}$ so that the time-dependent energy absorption reproduces the TDLDA results. In the present collisionless case, $d_p$ is not used explicitly. The field quantities such as $V_{\mathrm{eff}}$ and $n_e$ are evaluated on three-dimensional grids discretized into $N_j$ ($j=x,y,z$) intervals on the $j$ axis with a spatial step $\Delta j = L_j/N_j$. In our calculation we set $\Delta x = \Delta y = \Delta z = 0.5 \ \mathrm{a.u.}$ Here, $d_r/\Delta x\simeq 1.15$ is a good parameterization leading to stable simulations \cite{Fennel}. It should be noted that $d_r$ is the only adjustable parameter in our formalism. The electron density on a grid point $\mathbf{r}$ is calculated as, \begin{equation} n_e(\mathbf{r},t) = \int \dd \mathbf{p} f(\mathbf{r},\mathbf{p},t) = \frac{1}{N_s}\sum_{i=1}^{N_{\mathrm{pp}}}g_r \left( \mathbf{r}_i(t)-\mathbf{r} \right). \label{n_e} \end{equation} The current density $\mathbf{J}(t)$ [Eq.~\eqref{eq:current density}] is evaluated as, \begin{equation} \mathbf{J}(t) = -\frac{1}{|\Omega|} \frac{e}{N_s}\sum_{i=1}^{N_{\mathrm{pp}}} \frac{\mathbf{p}_i(t)}{m}. \end{equation} The Hamiltonian in pseudo-particle representation is written as, \begin{equation} H_{\mathrm{pp}} = \frac{1}{N_s}\sum_{i=1}^{N_{\mathrm{pp}}} \left[ \frac{\mathbf{p}^2_i(t)}{2m} + \int_{\Omega} V_{\mathrm{eff}}(\mathbf{r},t)g_r \left( \mathbf{r}_i-\mathbf{r} \right) \dd \mathbf{r} \right].
\label{ppH} \end{equation} The motion of each pseudo particle is governed by the Newton equations under the effective potential $V_{\mathrm{eff}}$ with the periodic boundary condition as, \begin{equation} \dot{\mathbf{r}}_i=\frac{\mathbf{p}_i}{m}, \ \dot{\mathbf{p}}_i = -\int_{\Omega} V_{\mathrm{eff}}(\mathbf{r})\nabla_{\mathbf{r}_i} g_r(\mathbf{r}_i-\mathbf{r})\mathrm{d}\mathbf{r}. \label{Newton} \end{equation} As long as the pseudo-particle canonical variables $\mathbf{r}_i, \mathbf{p}_i$ obey the Newton equations (Eq.~(\ref{Newton})), the one-body distribution (Eq.~(\ref{ftp})) satisfies the Vlasov equation (Eq.~(\ref{Vlasov})). The force term is given by the gradient of the $N_{\mathrm{pp}}$-body Hamiltonian $H_{\mathrm{pp}}$ and is numerically integrated as, \begin{align} \int V_{\mathrm{eff}}(\mathbf{r})&\nabla_{\mathbf{r}_i} g_r(\mathbf{r}_i-\mathbf{r})\mathrm{d}\mathbf{r} \notag\\ &\simeq \sum_{\mathbf{r}\in \Omega} V_{\mathrm{eff}}(\mathbf{r})\nabla_{\mathbf{r}_i} g_r(\mathbf{r}_i-\mathbf{r})\Delta x\Delta y\Delta z, \end{align} using the analytical form of $\nabla_{\mathbf{r}_i} g_r(\mathbf{r}_i-\mathbf{r})$, \begin{align} \nabla_{\mathbf{r}_i} &g_r (\mathbf{r}_i-\mathbf{r}) \notag \\ &= \sum_{\{\mathbf{G}\}} \frac{-2(\mathbf{r}_i - \mathbf{r} + \mathbf{G})}{\pi^{3/2}d_r^5}\exp(-|\mathbf{r}_i - \mathbf{r} + \mathbf{G}|^2/d_r^2). \end{align} The integration of Eq.~(\ref{Newton}) is performed by the Verlet method \cite{verlet} with time step $\Delta t = 0.02 \ \mathrm{a.u.}$ Particles exiting $\Omega$ re-enter it from the opposite side. \subsubsection{\label{subsubsec:ground state}Ground state} The initial state is the stationary solution of the Vlasov equation described by the Thomas-Fermi model.
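Returning briefly to the propagation step described above: the Verlet integration of the Newton equations with periodic wrap-around can be sketched as follows. This is a toy illustration only; the force derives from a simple assumed model potential $V(x)=V_0\cos(2\pi x/L)$ rather than the self-consistent $V_{\mathrm{eff}}$ gradient, and a velocity-Verlet formulation is used:

```python
import numpy as np

L, m, dt = 8.0, 1.0, 0.02              # box, mass, time step (a.u.)
V0 = 0.3                               # model potential amplitude (assumed)

def force(x):
    return V0 * (2.0*np.pi/L) * np.sin(2.0*np.pi*x/L)   # F = -dV/dx

def energy(x, p):
    return p**2/(2.0*m) + V0*np.cos(2.0*np.pi*x/L)

x, p = 1.3, 0.4                        # initial phase-space point
E0 = energy(x, p)
for _ in range(5000):                  # 100 a.u. of propagation
    p += 0.5*dt*force(x)               # half kick
    x = (x + dt*p/m) % L               # drift + periodic re-entry from the other side
    p += 0.5*dt*force(x)               # half kick
drift = abs(energy(x, p) - E0)
```

The symplectic integrator keeps the energy drift bounded over many periods, which is why the Verlet family is the standard choice for such particle propagation.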
The total energy functional, \begin{gather} E_{\mathrm{all}}[n_{\mathrm{e}}(\mathbf{r})] = \int_{\Omega} \Bigl[ \frac{3}{10}\frac{\hbar^2(3\pi^2)^{\frac{2}{3}}}{m}n_{\mathrm{e}}^{\frac{5}{3}}(\mathbf{r}) + \frac{1}{2}V_{\mathrm{H}}(\mathbf{r})n_{\mathrm{e}}(\mathbf{r}) \notag \\ \quad + \sum_{\mathbf{G},i=1}^{N_{\mathrm{ion}}}V_{\mathrm{ps}}(\mathbf{r}-\mathbf{r}_i-\mathbf{G})n_{\mathrm{e}}(\mathbf{r}) + E_{\mathrm{xc}}[n_{\mathrm{e}}(\mathbf{r})] \Bigr] \mathrm{d} \mathbf{r}, \end{gather} is variationally minimized with respect to $n_{\mathrm{e}}(\mathbf{r})$ under the constraint that the box $\Omega$ contains $N_e$ electrons. This leads to the following coupled equations, \begin{equation} \frac{\hbar^2}{2m}\left[ 3\pi^2n_e(\mathbf{r}) \right]^{2/3} + V_{\mathrm{eff}}(\mathbf{r}) = \mu, \label{tf} \end{equation} \begin{equation} V_{\mathrm{eff}}(\mathbf{r}) = V_{\mathrm{Coulomb}}[n_{\mathrm{e}}(\mathbf{r})] + V_{\mathrm{xc}}[n_{\mathrm{e}}(\mathbf{r})], \label{tf_veff} \end{equation} where $\mu$ denotes the chemical potential, playing the role of a Lagrange multiplier. These equations are to be solved for $n_e({\bf r})$ self-consistently. The algorithm adopted to solve the coupled equations, Eqs.~(\ref{tf}) and (\ref{tf_veff}), is shown in Fig.~\ref{GSscheme}.
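One inner step of this procedure, inverting the Thomas-Fermi relation for $n_e$ at fixed $V_{\mathrm{eff}}$ while adjusting $\mu$ by bisection so that $\Omega$ holds $N_e$ electrons, can be sketched as follows. This is illustrative only: $V_{\mathrm{eff}}$ is a frozen model potential here, whereas the actual loop recomputes it self-consistently from $n_e$; atomic units with $\hbar=m=1$ are assumed:

```python
import numpy as np

N, L, N_e = 16, 8.0, 12                 # grid, box length (a.u.), target electron number
dx = L / N
x = np.arange(N) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
# frozen model effective potential (assumption, for illustration only)
V_eff = -0.5*(np.cos(2*np.pi*X/L) + np.cos(2*np.pi*Y/L) + np.cos(2*np.pi*Z/L))

def density(mu):
    # Thomas-Fermi inversion: (1/2)(3 pi^2 n)^(2/3) + V_eff = mu
    arg = np.maximum(mu - V_eff, 0.0)   # classically forbidden regions give n = 0
    return (2.0*arg)**1.5 / (3.0*np.pi**2)

def total(mu):
    return density(mu).sum() * dx**3    # electron count in the box

lo, hi = V_eff.min(), V_eff.max() + 10.0   # bracket: total(lo)=0, total(hi) >> N_e
for _ in range(60):                        # bisection for the chemical potential
    mu = 0.5*(lo + hi)
    lo, hi = (mu, hi) if total(mu) < N_e else (lo, mu)
n_e = density(mu)
```

Since the particle number is monotonically increasing in $\mu$, the bisection is guaranteed to converge, which is presumably why it is the method of choice in the flowchart.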
\begin{figure}[tb] \begin{algorithmic}[1] \Procedure{Ground state preparation}{} \State{(--Initialization--)} \State{initial guess of $\mu$ and $n_e^{\mathrm{in}}$} \State{ } \State{(--Self-consistent determination of $\mu$ and $n_e$--)} \While{$\Delta n> \epsilon \, (\epsilon=10^{-7})$} \State{set pseudo-particle position \{$\mathbf{r}_i$\}} \State{$\{\mathbf{r}_i\}\mapsto n_{\mathrm{ps}}(\mathbf{r})$ using Eq.~(\ref{n_e})} \State{$n_{\mathrm{ps}} \mapsto V_{\mathrm{eff}}[n_{\mathrm{ps}}](\mathbf{r})$ using Eq.~(\ref{tf_veff})} \State{$V_{\mathrm{eff}}[n_{\mathrm{ps}}](\mathbf{r}) \mapsto n_e$} \State{find appropriate $\mu$} \State{$\Delta n = \int_{\Omega} \dd \mathbf{r}|n_e^{\mathrm{in}}(\mathbf{r})-n_e(\mathbf{r})|$} \State{$n_e^{\mathrm{in}}=n_e$} \EndWhile \State{ } \State{(--Set Pseudo-particle Momenta--)} \For{$i=1,N_{\mathrm{pp}}$} \While{$p>p_f$} \State{random number $p_x$ ($0 \le |p_x| \le p_f$)} \State{random number $p_y$ ($0 \le |p_y| \le p_f$)} \State{random number $p_z$ ($0 \le |p_z| \le p_f$)} \State{$p=\sqrt{p_x^2+p_y^2+p_z^2}$} \EndWhile \State{$\mathbf{p}_i=(p_x,p_y,p_z)$} \EndFor \EndProcedure \end{algorithmic} \caption[GS algorithm]{Algorithm for the ground state preparation} \label{GSscheme} \end{figure} First, the chemical potential $\mu$ and the electron density $n_e^{\mathrm{in}}$ in real space are guessed so that the total number of electrons within $\Omega$ is $N_e$ (line 3). Then, one distributes pseudo particles according to the guessed $n_e^{\mathrm{in}}$ using random numbers (line 7, also see below). The electron density $n_{\mathrm{ps}}$ realized by the pseudo-particle distribution is calculated through Eq.~(\ref{n_e}) (line 8). The effective potential $V_{\mathrm{eff}}({\bf r})$ is obtained by substituting $n_{\mathrm{ps}}$ into the right-hand side of Eq.~(\ref{tf_veff}) (line 9).
Then, we update the electron density $n_e({\bf r})$ by substituting the thus-obtained $V_{\mathrm{eff}}({\bf r})$ into Eq.~(\ref{tf}) and solving it for $n_e$ (line 10), while simultaneously updating $\mu$ by a bisection method to satisfy the condition that the total number of electrons is $N_e$ (line 11). The updated $n_e$ is used as $n_e^{\mathrm{in}}$ in the next iteration of the loop (line 13). One repeats the above operations until convergence, $\int_{\Omega} \dd \mathbf{r}|n_e^{\mathrm{in}}-n_e|<\epsilon$, where we set $\epsilon=10^{-7}$ here for crystalline Al (line 12). After convergence, one distributes the momenta of the pseudo particles uniformly within the local Fermi radius $p_f$ by acceptance-rejection sampling of uniform pseudo-random numbers (lines 17-25). The algorithm to distribute the pseudo particles (line 7 in Fig.~\ref{GSscheme}) is shown in Fig.~\ref{dist}. \begin{figure}[tb] \begin{algorithmic}[1] \Procedure{how to set pseudo particle position}{} \State{(--\# of Pseudo Particles around $\mathbf{r}_s$--)} \State{a sub-grid point $\mathbf{r}_s=(x_s,y_s,z_s)$} \State{calculate $n_e^{\mathrm{inp}}(\mathbf{r}_s)$ by interpolation} \State{the number of pseudo particles $n_e^{\mathrm{inp}} \mapsto N_{\mathrm{pp}}^{\mathrm{local}}$} \State{ } \State{(--Distribute Pseudo Particles around $\mathbf{r}_s$--)} \For{$l=1,N_{\mathrm{pp}}^{\mathrm{local}}$} \State{random number $R_x$ ($-0.5\le R_x \le0.5$)} \State{random number $R_y$ ($-0.5\le R_y \le0.5$)} \State{random number $R_z$ ($-0.5\le R_z \le0.5$)} \State{$x_l=x_s+R_xL_x/N_{\mathrm{inp}}^x$} \State{$y_l=y_s+R_yL_y/N_{\mathrm{inp}}^y$} \State{$z_l=z_s+R_zL_z/N_{\mathrm{inp}}^z$} \State{$l$-th pseudo particle position is} \State{$\mathbf{r}_l=(x_l,y_l,z_l)$} \EndFor \EndProcedure \end{algorithmic} \caption[distribution algorithm]{Algorithm for pseudo particle distribution} \label{dist} \end{figure} We introduce sub-grid points (line 3) by dividing each voxel of the computational grid into
$N_{\mathrm{inp}}^j$ regions along the $j$-axis ($j=x,y,z$). The electron density $n_e^{\mathrm{inp}}$ on each sub-grid point is evaluated by trilinear interpolation from those of the surrounding eight computational grid points, from which one calculates the number of pseudo particles $N_{\mathrm{pp}}^{\mathrm{local}}$ around the sub-grid point (lines 4-5). Then, the $N_{\mathrm{pp}}^{\mathrm{local}}$ pseudo particles are uniformly distributed around the sub-grid point using random numbers (lines 8-17). \subsection{\label{subsec:linear response evaluation} Linear response} We evaluate the linear optical response via the impulse response by performing dynamical simulations with the initial pseudo-particle momenta $\mathbf{p}_i$ shifted from the ground-state values $\mathbf{p}_i^{\mathrm{GS}}$ by a small amount $\Delta \mathbf{p}$, \begin{equation} \mathbf{p}_i = \mathbf{p}_i^{\mathrm{GS}} + \Delta \mathbf{p}, \end{equation} where $\Delta \mathbf{p} = (0, 0, 0.1 \ \mathrm{a.u.})$ in this study. This is equivalent to the application of an impulse electric field, \begin{equation} \mathbf{E}(t) = -\frac{1}{e}\Delta \mathbf{p}\,\delta(t), \end{equation} where $\delta(t)$ is the delta function. Noting that this field has a constant power spectrum across all frequencies, one can readily obtain the optical conductivity as, \begin{equation} \sigma_{mn}(\omega) = -\frac{e\hat{J}_m(\omega)}{\Delta p_n}\quad (m,n = x,y,z), \end{equation} where $\Delta p_n$ denotes the $n$ component of the momentum shift and $\hat{J}_m(\omega)$ the temporal Fourier transform of the $m$ component of the current density. The fast Fourier transform algorithm \cite{FFT} is used for the evaluation of $\hat{J}_m(\omega)$.
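The impulse-response route to $\sigma(\omega)$ can be checked on a closed-form example. Below, a Drude-like exponentially decaying current stands in for the simulated $J(t)$ (an assumption; the damping $\gamma$ is introduced only so the toy integral converges, whereas the present simulation is collisionless), and the transform must reproduce the analytic Drude conductivity. Atomic units with $e=m=1$:

```python
import numpy as np

n_el, gamma, dp = 0.0268, 0.1, 0.1       # electron density, damping, kick (assumed)
t = np.linspace(0.0, 400.0, 40001)
dt = t[1] - t[0]
J = -(n_el * dp) * np.exp(-gamma * t)    # J(t) = -(e n/m) Delta_p e^{-gamma t}

def sigma_num(w):
    f = J * np.exp(1j * w * t)           # integrand of the temporal Fourier transform
    J_hat = dt * (f.sum() - 0.5*(f[0] + f[-1]))   # trapezoidal quadrature
    return -J_hat / dp                   # sigma(omega) = -e Jhat(omega) / Delta_p

w = 0.5
sig_ana = n_el / (gamma - 1j*w)          # analytic Drude conductivity n e^2/m /(gamma - i w)
err = abs(sigma_num(w) - sig_ana) / abs(sig_ana)
```

The numerical transform matches the analytic result, illustrating why the flat spectrum of the kick delivers the full $\sigma(\omega)$ from a single run.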
Assuming isotropic media, the dielectric function $\varepsilon_{mm}(\omega)$, the complex refractive index $n(\omega)$, and the reflectivity $R(\omega)$ are given by, \begin{align} \varepsilon_{mm}(\omega) &= 1+4\pi\mathrm{i}\frac{\sigma_{mm}(\omega)}{\omega},\\ n(\omega) &= \sqrt{\varepsilon_{mm}(\omega)},\\ R(\omega) &= \left| \frac{\sqrt{\varepsilon_{mm}(\omega)}-1}{\sqrt{\varepsilon_{mm}(\omega)}+1} \right|^2, \end{align} respectively; in particular, $\varepsilon_{xx}(\omega)=\varepsilon_{yy}(\omega)=\varepsilon_{zz}(\omega)$. \section{\label{sec:results} RESULTS} In this section, we compare the results of the Vlasov-LDA simulations for extended systems described in the previous section with the experimental values and the TDDFT results obtained by the open source code SALMON \cite{salmon,salmon2,salmon3}. We take aluminum as the target material. For Vlasov-LDA, the simulation parameters are $N_s=10000$, $N_e=12$, and a time step $\Delta t=0.025 \ \mathrm{a.u.}$ For TDDFT, we employ a norm-conserving pseudopotential \cite{FHI} and the LDA functional \cite{PZ}, with $48^3$ k-points, $14^3$ real-space grid points, and a time step $\Delta t=0.15 \ \mathrm{a.u.}$ We assume an external electric field linearly polarized along the $\Gamma$-$X$ direction with the following temporal profile: \begin{equation} E(t)=E_0\sin\left[\omega\left(t-\frac{T}{2}\right)\right]\sin^2\left(\frac{t}{T}\pi\right) \ (0\le t\le T), \end{equation} where $E_0$ denotes the field amplitude, $\hbar \omega$ the photon energy, and $T$ the (foot-to-foot) full pulse duration. The corresponding full width at half maximum of the laser intensity profile is about $0.36T$.
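The mapping from $\sigma(\omega)$ to $\varepsilon$, $n$, and $R$ defined above can be sketched as follows. A collisionless Drude conductivity is assumed as the input (the simulation supplies $\sigma$ numerically); $n_{\mathrm{el}}=0.0268\ \mathrm{a.u.}^{-3}$ is roughly the Al conduction-electron density, for which $\hbar\omega_p\approx 15.8$ eV:

```python
import numpy as np

n_el = 0.0268                            # conduction-electron density (a.u.)
omega_p = np.sqrt(4.0*np.pi*n_el)        # plasma frequency (a.u.)

def optics(w):
    sigma = 1j * n_el / w                # collisionless Drude: sigma = i n e^2/(m w)
    eps = 1.0 + 4.0*np.pi*1j*sigma/w     # eps = 1 + 4 pi i sigma / omega
    n_c = np.sqrt(eps + 0j)              # complex refractive index n + ik
    R = abs((n_c - 1.0)/(n_c + 1.0))**2  # normal-incidence reflectivity
    return eps, n_c, R

eps_lo, _, R_lo = optics(0.5*omega_p)    # below the plasma frequency: eps = -3, R = 1
eps_hi, _, R_hi = optics(2.0*omega_p)    # above it: eps = 0.75, weak reflection
```

Below $\omega_p$ the dielectric function is negative, the refractive index purely imaginary, and the metal perfectly reflecting; above $\omega_p$ it becomes nearly transparent, which is the behavior the figures in the next subsection probe.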
\subsection{\label{subsec:linear response} Linear response} \begin{figure}[tb] \centering \includegraphics[keepaspectratio,width=\hsize]{new_sigma.eps} \includegraphics[keepaspectratio,width=\hsize]{new_nk.eps} \includegraphics[keepaspectratio,width=\hsize]{new_R.eps} \caption[optical conductivity]{(a) Optical conductivity $\sigma(\omega)$, (b) refractive index $n$ and extinction coefficient $k$, and (c) reflectivity $R(\omega)$, calculated with the Vlasov-LDA method and TDDFT, together with the experimental values of Ref.~\cite{experiment}. The experimental values are plotted at 3.1 eV (400 nm), 6.2 eV (200 nm), and 15.5 eV (80 nm).} \label{LR} \end{figure} Let us first discuss the complex optical conductivity, refractive index, extinction coefficient, and reflectivity as a function of photon energy. Despite the simplicity of the Vlasov-LDA approach, its results agree excellently with the TDDFT results and experimental values (Fig.~\ref{LR}), especially above 2 eV photon energy. The peak and dip around $1.5 \ \mathrm{eV}$ in the TDDFT results are due to interband absorption, which is not reproduced by the present Vlasov approach, since the latter takes only the single free-electron dispersion into account. Focusing on the reflectivity behavior around the plasma frequency, one finds some differences between the two approaches. This difference can be attributed to contributions from the above-mentioned interband resonance and the non-unity effective mass in TDDFT. We have confirmed this through the decomposition of the response obtained by TDDFT into Drude and Lorentz model components. With the resonance energy set to 1.85 eV, corresponding to the strongest oscillator around 1.5 eV \cite{DL2}, the estimated effective mass $m_{\mathrm{eff}}$ is $1.09m$ and the damping constant is $0.51 \ \mathrm{eV}^{-1}$, consistent with the values reported previously ($1.16m$ \cite{EffMass} and $0.80 \ \mathrm{eV}^{-1}$ \cite{DL2}, respectively).
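The Drude--Lorentz decomposition used above can be sketched as follows. All parameter values here are illustrative assumptions (in eV), not the fitted ones; the point is that the loss function $-\mathrm{Im}(1/\varepsilon)$ of the combined model peaks near the zero of $\mathrm{Re}\,\varepsilon$, i.e., the screened plasma frequency, rather than at the interband resonance:

```python
import numpy as np

w = np.linspace(0.1, 25.0, 5000)         # photon-energy grid (eV)
wp, gam = 15.8, 0.6                      # Drude plasma frequency and damping (assumed)
w0, S, Gam = 1.5, 0.3, 0.4               # Lorentz resonance, strength, width (assumed)

# Drude term plus a single Lorentz oscillator
eps = (1.0 - wp**2/(w**2 + 1j*gam*w)
       + S*w0**2/(w0**2 - w**2 - 1j*Gam*w))

loss = -np.imag(1.0/eps)                 # energy-loss function
w_peak = w[np.argmax(loss)]              # location of the plasmon peak
```

Near the Lorentz resonance $|\varepsilon|$ is large and the loss function is small, whereas near $\mathrm{Re}\,\varepsilon=0$ it is strongly peaked; this is the mechanism behind the plasmon peaks compared in the next paragraph.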
The loss functions, $-\mathrm{Im}\,\varepsilon^{-1}(\omega)$, are shown in Fig.~\ref{Loss}. The TDDFT result is excellently reproduced by the combined Drude and Lorentz contributions with $m_{\mathrm{eff}}=1.09m$. Although Vlasov-LDA overestimates the plasma frequency compared to TDDFT, it agrees with the Drude model with $m_{\mathrm{eff}}=m$. \begin{figure}[tb] \centering \includegraphics[keepaspectratio,width=\hsize]{newloss.eps} \caption[loss function]{Loss functions by TDDFT, Vlasov-LDA, the combined Drude and Lorentz models ($m_{\mathrm{eff}}=1.09m$), and the Drude model ($m_{\mathrm{eff}}=m$). The peak of the loss function gives the plasma frequency.} \label{Loss} \end{figure} \subsection{\label{subsec:energy absorption} Energy absorption} Let us next investigate the energy absorption from the laser pulse. We evaluate the energy absorption by the electrons as their energy increment due to the pulse irradiation. The energy is calculated as $\Delta E = H_{\mathrm{pp}}(t=\infty)-H_{\mathrm{pp}}(t=0)$ in the Vlasov-LDA simulation and as $\expval{h_{\mathrm{KS}}(t=\infty)}-\expval{h_{\mathrm{KS}}(t=0)}$ in the TDDFT simulation. We show the fluence dependence for a fixed intensity ($10^{12}\,{\rm W/cm}^2$) at 80 nm wavelength in Fig.~\ref{energy-duration} and that for a fixed pulse width of 3.8 fs at 200 and 400 nm wavelengths in Fig.~\ref{energy-intensity}. \begin{figure}[tb] \centering \includegraphics[keepaspectratio,width=\hsize]{80nm-f.eps} \caption[Pulse duration dependence of electron energy absorption]{Calculated absorbed energy vs. pulse fluence or pulse width for a fixed intensity (1 ${\rm TW/cm}^2$). Pink circles: Vlasov-LDA, black squares: TDDFT, solid line: linear dependence passing through the square (TDDFT) for the $3.8 \ \mathrm{fs}$ pulse.
} \label{energy-duration} \end{figure} \begin{figure}[tb] \centering \includegraphics[keepaspectratio,width=\hsize]{newrabi.eps} \caption[Temporal evolution of electron energy absorption]{Solid lines: temporal evolution of the absorbed energy for three different pulse widths, 75, 187, and 375 fs. Dashed lines: maximum values for each pulse width. } \label{rabi} \end{figure} \begin{figure}[tb] \centering \includegraphics[keepaspectratio,width=\hsize]{200nm-f.eps} \includegraphics[keepaspectratio,width=\hsize]{400nm-f.eps} \caption[Laser intensity dependence of electron energy absorption]{Absorbed energy vs. pulse fluence or peak intensity for a fixed pulse width of 3.8 fs for the case of (a) 200 nm and (b) 400 nm wavelength. Pink circles: Vlasov-LDA, black squares: TDDFT, solid line: linear dependence passing through the square (TDDFT) for $10^{11} \ \mathrm{W/cm^2}$ intensity.} \label{energy-intensity} \end{figure} We see in Fig.~\ref{energy-duration} that both the Vlasov-LDA and TDDFT results are linear in fluence and agree well with each other in the lower fluence region ($\lesssim 50 \ \mathrm{mJ/cm^2}$). On the other hand, the Vlasov-LDA does not reproduce the TDDFT results at higher fluence, where the latter deviate from the linear behavior and even decrease with increasing fluence. This difference is due to Rabi-like oscillation \cite{SA}, as confirmed in Fig.~\ref{rabi}, which shows the temporal evolution of the absorbed energy for several pulse widths. The maximum electron energy gain during the pulse, indicated by the horizontal dashed lines, does not depend much on the pulse width, suggesting Rabi-like coherent oscillation. Thus, there is an optimum pulse width for a fixed intensity in terms of energy absorption. Figure \ref{energy-intensity} indicates that the energy absorption calculated by the Vlasov-LDA approach exhibits a linear dependence on fluence or pulse intensity over the whole range.
We can see a nonlinear behavior, on the other hand, in the TDDFT results in the higher fluence range ($\gtrsim 40 \ \mathrm{mJ/cm^2}$ for 200 nm and $\gtrsim 2 \ \mathrm{mJ/cm^2}$ for 400 nm). This can be interpreted as saturable absorption, which is widely observed in various materials \cite{SA2}. We could not obtain the Vlasov-LDA results for the low fluence region ($\lesssim 2 \ \mathrm{mJ/cm^2}$) because of statistical error. This could, in principle, be improved by increasing the total number of pseudo particles. Nevertheless, the Vlasov-LDA results, if extrapolated to the low fluence region, appear to agree well with the TDDFT results. \begin{figure}[tb] \centering \includegraphics[keepaspectratio,width=\hsize]{new_current.eps} \includegraphics[keepaspectratio,width=\hsize]{new_energy.eps} \caption[time-dependent quantities]{(a) Time-dependent current density $J(t)$ and (b) electron energy absorption $\Delta E(t)$. Pink dashed lines: Vlasov-LDA, black solid lines: TDDFT.} \label{td} \end{figure} Figure \ref{td} shows the temporal evolution of the current density and absorbed energy for $10^{12}\,{\rm W/cm}^2$ peak intensity, $200 \ \mathrm{nm}$ wavelength, and 3.8 fs pulse width. Again, overall, the Vlasov results excellently reproduce the TDDFT results. In Fig.~\ref{td}(b), although energy fluctuation due to the pseudo-particle statistical error is seen at $<2\,{\rm fs}$, it becomes negligible at the end of the pulse. Our code is partially parallelized using OpenMP and MPI. One of the most time-consuming parts is the Fourier transformation, which is computed by a naive approach. Nevertheless, the computational time of the present Vlasov-LDA code is typically only 1/20 of that of TDDFT using the SALMON code. With further sophistication and parallelization, the efficiency of the Vlasov-LDA method will be improved, which will be advantageous for applications such as parameter optimization in laser material processing.
\section{\label{sec:conclusions} Conclusions} We have extended the Vlasov-LDA semiclassical approach, implemented with the pseudo-particle method, to periodic systems in order to compute the electron dynamics in solids, especially in metals, under ultrashort intense laser pulses. The Vlasov equation can be regarded as the leading order of a semiclassical $\hbar$ expansion of the time-dependent Kohn-Sham equations. The electronic distribution function is expressed by pseudo-particles, incorporating the periodic boundary condition. They play the role of Lagrangian markers embedded randomly in the electron gas. The initial distribution is calculated from the Thomas-Fermi model. We have applied this approach to crystalline aluminum. Although the method has only one adjustable parameter, $d_r$, the calculated optical conductivity, refractive index, extinction coefficient, and reflectivity, as well as the energy absorption, are overall in excellent agreement with the TDDFT and experimental results over a wide range of photon energy and fluence, demonstrating the capability of the present approach to accurately describe the dynamics of metallic conduction-band electrons. On the other hand, the Vlasov results deviate from the TDDFT ones around 1.5 eV photon energy, where interband transitions are involved, and in the high-fluence region, where a Rabi-like oscillation takes place. The next step will be to incorporate electron-electron collisions, the description of which is limited in TDDFT. Vlasov-LDA is expected to provide valuable insights into complex laser-material processing if we further couple it with molecular dynamics, electromagnetic field analysis, and other continuum models. \section*{\label{sec:acknowledgements} ACKNOWLEDGEMENTS} We wish to express our gratitude to Kazuhiro Yabana for valuable discussions. This research was supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0118067246.
This research was also partially supported by JSPS KAKENHI Grant Number 20H05670, JST COI Grant Number JPMJCE1313, and JST CREST Grant Number JPMJCR16N5. M.T. gratefully acknowledges support from the Graduate School of Engineering, The University of Tokyo, Graduate Student Special Incentives Program. M.T. also gratefully acknowledges support through the crowdfunding platform \textit{academist} by Misako Abe, Daigo Oue, Miho Otsuka, Yusaku Karibe, Ayano Sakai, Yushi Sakai, Shunsuke A. Sato, Ryosuke Shibato, Hitomi Suto, Tomoharu Sunouchi, Hideo Takahashi, and Yusuke Tokunaga. The numerical calculations were partially performed on the supercomputers Oakbridge-CX, sekirei, and ohtaka (the University of Tokyo) and SGI ICE X at the Japan Atomic Energy Agency (JAEA). This research was partially supported by the Initiative on Promotion of Supercomputing for Young or Women Researchers, Information Technology Center, The University of Tokyo.
\section{Experiments} The polycrystalline samples were synthesized by a two-step solid state reaction method. First, the starting materials, bismuth powder (purity 99.5$\%$, Alfa Aesar) and sulfur powder (purity 99.99$\%$, Alfa Aesar), were mixed in a ratio of 2:3, ground, and pressed into a pellet. The pellet was then sealed in an evacuated quartz tube and annealed at $500\,^{\circ}$C for 10 hours. The resultant pellet was smashed and ground together with Bi$_2$O$_3$ powder (purity 99.5$\%$, Alfa Aesar) and sulfur powder in the stoichiometry of the formula Bi$_4$O$_4$S$_3$. Again, it was pressed into a pellet, sealed in an evacuated quartz tube, and sintered at $510\,^{\circ}$C for 10 hours, and then cooled down slowly to room temperature. The second step was repeated to achieve good homogeneity. The resultant sample is black and very hard. We cut the sample and obtained a specimen with a rectangular shape for the resistive measurements. The resistivity was measured with a Quantum Design instrument PPMS-16T. The temperature stabilization was better than 0.1$\%$ and the resolution of the voltmeter was better than 10$\;$nV. The magnetization was detected by a Quantum Design instrument SQUID-VSM with a resolution of about $5 \times 10^{-8}\;$emu. The sample was shaped as a bar with a typical size of $2\,\mathrm{mm}\times2\,\mathrm{mm}\times0.5\,\mathrm{mm}$ for the STS measurements. The sample is very hard, which allowed us to polish its surface and obtain a shiny, mirror-like finish. The top surface was polished by sandpapers with different grit sizes (smallest of ISO P10000). The tunneling spectra were measured with an ultra-high-vacuum, low-temperature and high-magnetic-field scanning probe microscope USM-1300 (Unisoku Co., Ltd.). In the STS measurements, Pt/Ir tips were used. The set points of the bias voltage and tunneling current were 40$\;$mV and 100$\;$pA, respectively, to fix the tip height in topographic mode.
For the STS measurements, the differential conductance was then recorded while the bias voltage was swept, with the tip held at a fixed vertical distance with the $z$-piezo feedback off. No atomically resolved topography was obtained since the sample is polycrystalline. The roughness of the surface used for the STS measurements is about 2$\;$nm, while on flat surfaces of individual grains it is locally about 0.5$\;$nm. The STS spectra are repeatable at different positions in one grain. To reduce noise in the differential conductance spectra, a lock-in technique with an ac modulation of $0.1\;$mV at $987.5\;$Hz was typically used. \section{Results} \begin{figure} \includegraphics[width=8cm]{Fig1.EPS} \caption {(color online) The XRD pattern (symbols) of Bi$_4$O$_4$S$_3$ refined (red solid lines) by Topas (Bruker-D8). It is clear that the main phase is Bi$_4$O$_4$S$_3$, containing a Bi$_2$S$_3$ impurity marked by a blue star and Bi by a red star. The ratio among the three phases Bi$_4$O$_4$S$_3$:Bi$_2$S$_3$:Bi is about 80:15:5.} \label{fig1} \end{figure} The crystallinity of the sample was checked by x-ray diffraction (XRD) with a Bruker Advanced D8 diffractometer with Cu K$\alpha$ radiation. The XRD data were analyzed with the software packages Powder-X and Topas. The XRD pattern looks very similar to that reported by Mizuguchi et al. \cite{22}. The Rietveld fitting, performed with the Topas program, is shown in Fig.~\ref{fig1}, yielding an 80$\%$ volume fraction of Bi$_4$O$_4$S$_3$ with 20$\%$ impurities, mainly Bi$_2$S$_3$ (15$\%$) and Bi (5$\%$). \begin{figure} \includegraphics[width=8cm]{Fig2.EPS} \caption {(color online) (a) Temperature dependence of resistivity measured at three magnetic fields: $\mu_0H = $0, 6 and 14 T. It is clear that, besides a moderate magnetoresistance effect, a weak insulating behavior is induced by the magnetic field.
(b) The temperature dependence of magnetization near the SC transition measured with $H = 4.8\;$Oe in the field-cooled (FC) and zero-field-cooled (ZFC) processes.}\label{fig2} \end{figure} \begin{figure} \includegraphics[width=8cm]{Fig3.EPS} \caption {(color online) (a) Temperature dependence of resistivity in Bi$_4$O$_4$S$_3$ at zero field near the resistive transition. The onset transition temperature $T_\mathrm{c}^\mathrm{onset}$ at the crossing point of the normal state background (guided by a red dashed line) and the extrapolated line of the steep resistive transition part (guided by a blue dash-dot line) is $4.2\;$K. The resistance difference between the resistive curve and the normal state background extends to a very high temperature, as shown in the inset, which suggests that the SC fluctuation may be strong in this material. (b) An enlarged view of the temperature dependence of resistivity at magnetic fields of (from bottom to top) 0, 0.1 to 0.6 T with increments of 0.1 T; 0.8, 1, 1.5, and 2 to 7 T with increments of 1 T; and 9, 12, and 14 T. It is found that the bulk superconductivity can be quickly suppressed by the magnetic field, while the onset transition temperature changes only slightly, indicating a strong fluctuation effect. (c) Temperature dependence of the resistivity (as shown in (b)) normalized at 10 K. A kink can be clearly seen at about 4 K when the magnetic field is high and the bulk superconductivity is suppressed completely. The red arrowed line traces out the evolution from the SC onset transition in the low field region to a kink at high magnetic fields.}\label{fig3} \end{figure} In Fig.~\ref{fig2}(a) we present the temperature dependence of resistivity measured at three magnetic fields: $\mu_0H = $0, 6 and 14 T. In addition to the moderate magnetoresistance, one can see that a weak insulating behavior is induced by the magnetic field. This weak semiconducting behavior is, of course, counterintuitive for a normal state with Fermi-liquid characteristics.
A simple explanation would be that the insulating feature arises from an adjacent competing order: once the superconductivity is suppressed, the latter is promoted. However, we should mention that the insulating behavior actually starts at 25 K (at 6 and 14 T), which is far above the SC transition temperature here. One may argue that the minimum in the resistivity is caused by the Bi$_2$S$_3$ impurities, which show a minimum around 25 K \cite{27}, but this can be excluded because we do not see this phenomenon in zero magnetic field. Another possibility is that the conduction band has a very shallow band edge, as illustrated by the band structure calculations \cite{26}. When a magnetic field is applied, the density of states (DOS) of the spin-up and spin-down electrons will become asymmetric due to the Zeeman effect. Therefore, we have some polarized electrons, which induce the weak insulating behavior. Clearly, this insulating behavior needs to be further checked, preferably with single crystals in the future, and to be explained satisfactorily. In Fig.~\ref{fig2}(b) we present the magnetization data measured in the zero-field-cooled (ZFC) and field-cooled (FC) modes. The SC magnetic transition starts at about 3.6 K. The superconducting transition temperature in this paper is slightly lower than that in the previous report by Mizuguchi et al. \cite{22}, probably due to mutual doping between the O and S elements. From the resistive curve in the transition region, as shown in Fig.~\ref{fig3}(a), the critical temperature taken at $5\%$ of the normal state resistivity ($T_\mathrm{c}^\mathrm{zero}$ or $T_\mathrm{irr}$) is 3.7$\;$K. The onset transition temperature $T_\mathrm{c}^\mathrm{onset}$ determined from the crossing point of the normal state line and the extrapolation of the steep transition line is about 4.2$\;$K, while $T_\mathrm{c}(99\%\rho_\mathrm{n})$ taken at $99\%$ of the normal resistance is about 6.2$\;$K.
It should be noted that the excess conductivity region, in which the resistivity starts to deviate from the normal state line (inset of Fig.~\ref{fig3}(a)), extends to temperatures above 10$\;$K. This excess conductivity is usually regarded as the SC fluctuation from the residual Cooper pairs above the bulk $T_\mathrm{c}(99\%\rho_\mathrm{n})$. This superconducting fluctuation is actually expected from the band structure calculations, which foresee a low dimensionality of the electronic structure, and can be corroborated by the quickly broadened resistive transition under magnetic fields, as shown in Fig.~\ref{fig3}(b). One can see that the transition temperature with zero resistivity can be suppressed below 2 K by a magnetic field as low as 0.4 T, while the bulk superconductivity is suppressed completely by a magnetic field of 5 T. However, even if the bulk superconductivity is easily suppressed, a kink appears on the $\rho$ vs $T$ curve at a high magnetic field where no diamagnetization is observed. Following the onset transition of the resistivity, as shown in Fig.~\ref{fig3}(c), we can see that the kink has a very close relationship with the upper critical field $\mu_0H_\mathrm{c2}$ in the low field and high temperature region. Because it is really difficult to define the temperature below which $\rho_\mathrm{n}$ first deviates from its high temperature behavior, we use these kink positions to define the upper critical fields above 1 T. Surprisingly, this kink stays at about 4 K even with a magnetic field of 14 T. We interpret this kink as the temperature below which residual Cooper pairs exist in the system even when the bulk superconductivity is completely suppressed. Following the tendency of this kink, a very high critical field can be expected in the zero temperature limit, which certainly exceeds the Pauli limit given by $\mu_{0}H_\mathrm{p} = 1.84T_\mathrm{c}$ \cite{28}.
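As a quick back-of-envelope check of the Pauli-limit estimate quoted above (with $T_\mathrm{c}$ in K and $\mu_0 H_\mathrm{p}$ in T, using the onset $T_\mathrm{c}=4.2\;$K of this sample):

```python
# Pauli paramagnetic limit: mu0*H_p [T] = 1.84 * T_c [K]
T_c = 4.2                  # K, onset transition temperature
mu0_H_p = 1.84 * T_c       # ~7.7 T
```

The kink surviving at 14 T is thus already well above this estimate, consistent with the statement in the text.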
\begin{figure} \includegraphics[width=8cm]{Fig4.EPS} \caption {(color online) (a) Typical tunnelling spectra measured at different locations on the polycrystalline samples. The gap values judged from the peaks or humps on the spectra are marked by the arrows. One can see that the mean gap value is around 3$\;$meV, as guided by the vertical dashed lines. Some spectra show very large gap values and even two-gap features. (b) Statistics of the local SC gap sizes $\Delta$ from 400 spectra. The gap size follows a Gaussian distribution with the mean gap value $\overline{\Delta}=3\;$meV. Such a large gap value suggests unconventional superconductivity in Bi$_4$O$_4$S$_3$ with $T_\mathrm{c}^{\mathrm{onset}}=4.2\;$K. The largest SC gap can reach a value of about 10$\;$meV.} \label{fig4} \end{figure} To make a further analysis of the superconducting properties, we measured STS spectra on this sample. Several typical STS curves measured at 1.6$\;$K, below the bulk SC transition, are shown in Fig.~\ref{fig4}(a). Most of the spectra are symmetric with a very clear suppression of the DOS within a certain energy scale, and clear coherence peaks can be found on some spectra. However, the coherence peaks on most of the curves are somewhat broad and the zero-bias conductance values are remarkably large, which may be due to surface contamination of this polycrystalline sample. The gap values determined from the coherence peaks or from the kink positions next to the superconducting valleys (arrows in Fig.~\ref{fig4}(a)) are mainly 3$\;$meV for most of the spectra, while some of the spectra show much larger gap sizes or even two-gap features. In high-$T_\mathrm{c}$ superconductors, a bosonic mode, which appears as a peak feature at an energy outside the superconducting gap, is sometimes found by STS measurements \cite{29,30}. The second gap in Bi$_4$O$_4$S$_3$ resembles this bosonic mode feature.
However, such high-energy peaks occur quite rarely among the hundreds of measured spectra, so we regard them only as a possible second gap. In order to obtain the average value of the superconducting gap, we performed a statistical analysis of the gap sizes $\Delta$ taken from 400 spectra, presented in Fig.~\ref{fig4}(b). One can see that the mean gap value $\overline{\Delta}$ is about 3$\;$meV. Considering the bulk superconducting transition temperature $T_\mathrm{c}^\mathrm{onset}$ of 4.2$\;$K, we get the ratio $2\overline{\Delta}/k_\mathrm{B}T_\mathrm{c}\sim 16.6$, which is almost 5 times the value given by the BCS theory in the weak coupling regime. It is even higher than the values of most high-$T_\mathrm{c}$ superconductors. This suggests very strong coupling superconductivity in this material. Since the scattering is really strong in the polycrystalline sample, it is very difficult to judge the pairing symmetry from fits to the spectra. As shown in Fig.~\ref{fig4}(b), the gap size can extend to a very large value, e.g., exceeding 10$\;$meV. Such inhomogeneity of the SC gap sizes needs to be verified by other experimental tools, which may make this an interesting new material. \begin{figure} \includegraphics[width=8cm]{Fig5.EPS} \caption {(color online) (a) The evolution of the tunnelling spectra taken at temperatures from 1.6 K to 20 K. The spectra are displaced vertically for clarity. (b) The STS spectra normalized by the one measured in the normal state (at 20 K). One can see that the gapped feature vanishes at about 14$\;$K, which is much higher than the critical temperature for bulk superconductivity ($T_\mathrm{c}(99\%\rho_\mathrm{n})\sim 6\;$K), as shown by the blue curve.} \label{fig5} \end{figure} In Fig.~\ref{fig5}(a), we show the temperature evolution of the STS spectra obtained by warming the sample from 1.6$\;$K through $T_\mathrm{c}$ to 20$\;$K.
One can see that the superconducting feature, marked by the depression of the density of states near the Fermi energy, persists at temperatures above the bulk $T_\mathrm{c}(99\%\rho_\mathrm{n})\sim6\;$K. This feature disappears at temperatures above a fluctuation temperature $T_\mathrm{f}=14\;$K, leaving only a V-shaped background. Using the spectrum measured at $20\;$K as the background, we obtain the normalized curves at different temperatures shown in Fig.~\ref{fig5}(b). The superconducting feature weakens with increasing temperature and evolves into a continuous background at temperatures above $T_\mathrm{f}$. In conventional superconductors, the superconducting gapped features on tunnelling spectra vanish just above $T_\mathrm{c}$ \cite{31}. In contrast, such features in cuprates can persist at temperatures far above $T_\mathrm{c}$, which has been regarded as the pseudogap effect \cite{16}. Even in some iron pnictides, the gapped feature was observed to extend to a very high temperature \cite{32} and was explained by the presence of a possible pseudogap effect \cite{33}. The pseudogap effect can also be observed from the kink in the resistive curve in cuprates \cite{13}. Because we cannot find any trace of a pseudogap in the transport measurements, this high-temperature effect is attributed to the SC fluctuation instead of the pseudogap. In addition, this estimate is consistent with the excess conductivity at temperatures above the bulk $T_\mathrm{c}$. Using $T_\mathrm{f}=14\;$K as the pairing temperature, we get the ratio $2\overline{\Delta}/k_\mathrm{B}T_\mathrm{f}\sim5.0$, which is still a large value but comparable with the value calculated from the SC gap and the pseudogap temperature in cuprates \cite{3}. It should be noted that some SC gap values extend to very high values, e.g., larger than 7$\;$meV, which gives a much larger value of $2\overline{\Delta}/k_\mathrm{B}T_\mathrm{f}$.
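The two coupling ratios quoted in this and the preceding paragraphs follow from simple arithmetic, reproduced below as an illustration (3.53 is the standard weak-coupling BCS value of $2\Delta/k_\mathrm{B}T_\mathrm{c}$):

```python
# Gap-to-temperature ratios quoted in the text (arithmetic check).
K_B = 8.617333262e-5          # Boltzmann constant in eV/K
DELTA = 3.0e-3                # mean gap of 3 meV, in eV

def gap_ratio(delta_eV, T_kelvin):
    return 2.0 * delta_eV / (K_B * T_kelvin)

r_tc = gap_ratio(DELTA, 4.2)   # with T_c^onset = 4.2 K
r_tf = gap_ratio(DELTA, 14.0)  # with T_f = 14 K
print(f"2*Delta / (k_B T_c) = {r_tc:.1f}")   # ~16.6
print(f"2*Delta / (k_B T_f) = {r_tf:.1f}")   # ~5.0
print(f"ratio to weak-coupling BCS (3.53): {r_tc/3.53:.1f}")
```

The value 16.6 is indeed almost five times the weak-coupling BCS ratio, while using $T_\mathrm{f}$ instead of $T_\mathrm{c}$ brings it down to about 5.0.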
The detailed reason for this large energy gap remains unresolved. \begin{figure} \includegraphics[width=9cm]{Fig6.EPS} \caption {(color online) (a) The evolution of the tunnelling spectra at different magnetic fields up to 10$\;$T at 1.6$\;$K. The spectra are displaced vertically for clarity. The spectrum taken at 20$\;$K and 0$\;$T is also shown for comparison. (b) The STS curves normalized by the one measured in the normal state (at 20$\;$K and 0$\;$T). One can see that the suppression of the DOS remains at fields above the bulk upper critical field $H_\mathrm{c}(99\%\rho_\mathrm{n}) \sim5\;$T.} \label{fig6} \end{figure} Figure~\ref{fig6}(a) shows the STS spectra taken at different magnetic fields at the same temperature of $1.6\;$K. The bulk upper critical field $\mu_0H_\mathrm{c2}$ judged from the $99\%$ of the normal state resistance at $1.6\;$K is about $5\;$T. The suppression of the DOS on the spectra persists apparently without any variation when crossing this bulk critical field, and this is clearer in the spectra normalized by the background spectrum taken at 20$\;$K and $0\;$T, as shown in Fig.~\ref{fig6}(b). As described above, the gapped feature on the spectra existing above $T_\mathrm{c}(99\%\rho_\mathrm{n})$ is consistent with the picture of fluctuating superconductivity. Since the spectra at high magnetic fields are similar to those taken at zero field but at high temperatures, this suppression of the DOS near the Fermi energy observed above the bulk $H_\mathrm{c}(99\%\rho_\mathrm{n})$ can also be attributed to the SC fluctuation and preformed Cooper pairs. \begin{figure} \includegraphics[width=8cm]{Fig7.EPS} \caption {(color online) Phase diagram derived from the resistive transition curves and the STS data. A semi-log plot is used to make the phase diagram clearer. The transition point judged from the $99\%$ of the normal resistance gives rise to the upper critical field $H_\mathrm{c2}$.
The curve $H_\mathrm{c2}^\mathrm{kink}$ shows the points determined from the kink of the resistivity versus temperature, which denotes the superconducting fluctuation property. Such fluctuation behavior with excess conductivity is corroborated by the STS data and extends to as high as 14 K at 0 T. The dashed line is a guide to the eye.} \label{fig7} \end{figure} \section{Discussion} Next we present a phase diagram based on the transport and STS measurements in Fig.~\ref{fig7} and discuss the possible mechanism of superconductivity. The SC transition point of the critical field $H_\mathrm{c}(99\%\rho_n)$ is shown by the red filled circles. The bulk superconductivity is established in a very small area covered by the irreversibility line $T_\mathrm{irr}$ (blue up-triangles). The large area between them indicates a strong SC phase fluctuation. This is actually consistent with the theoretical expectation because the electronic system has a one-dimensional feature ($p_\mathrm{x}$ and $p_\mathrm{y}$). The bulk superconductivity is established between the one-dimensional fluctuating superconducting chains. The curve marked with $H_\mathrm{c}^\mathrm{onset}$ gives the upper critical field determined using the usual crossing point of the normal state background and the extrapolated line of the steep resistive transition part. The most puzzling point is the kink appearing in the $\rho$ vs. $T$ data at high magnetic fields. The curve marked with $H_\mathrm{c2}^\mathrm{kink}$ shows the critical field determined from the kink point of the resistive data shown in Fig.~\ref{fig3}(c), by following the trace of the arrowed red line there. We add the fluctuation temperature $T_\mathrm{f}$ from the tunnelling spectrum at zero magnetic field to the phase diagram, and obtain a wide fluctuation region at zero magnetic field.
Since this line follows very well the transition points marked by $H_\mathrm{c}(99\%\rho_\mathrm{n})$ in the low-field and high-temperature region, we naturally attribute it to the existence of residual Cooper pairs. If this kink can be interpreted as the onset of pairing, that would indicate a very strong pairing strength or gap, which is in turn supported by the tunnelling data. In a simple BCS picture, we have $H_\mathrm{c2}=(\pi\Phi_0/2\hbar^{2}v_\mathrm{F}^{2})\Delta^{2}$, where $\Phi_0$ is the flux quantum and $v_\mathrm{F}$ is the Fermi velocity. Such strong pairing certainly needs a reasonable cause, which goes beyond the simple phonon-mediated pairing picture. Taking into account the weak correlation effect of the Bi 6p electrons, some other novel mechanism, such as the valence fluctuation between Bi$^{2+}$ and Bi$^{3+}$, may play an important role in this new superconductor. \section{Conclusions} In summary, we performed resistivity and scanning tunnelling spectroscopy measurements on the new BiS$_2$-based superconductor Bi$_4$O$_4$S$_3$. A weak insulating behavior is induced in the normal state when a high magnetic field is applied. This can be induced either by an adjacent competing order, or by the very shallow $p_x$ and $p_y$ bands and small Fermi energy. A kink appears in the temperature dependence of the resistivity at all high magnetic fields when the bulk superconductivity is completely suppressed. This kink can be regarded as marking the presence of local pairing, or the upper critical field $H_\mathrm{c2}(T)$. The SC fluctuation region from the STS measurement extends to about 14 K although the bulk superconducting transition temperature is only about 3.7 K. The gapped feature near the Fermi energy can also extend to a high magnetic field ($\sim 5\;$T), which is consistent with the resistive measurements, again indicating a strong superconducting fluctuation.
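The quadratic dependence of $H_\mathrm{c2}$ on the gap in the BCS relation quoted in the Discussion can be illustrated numerically. The Fermi velocity below is a purely hypothetical value chosen for illustration (the text does not quote one), so the resulting field is only indicative:

```python
import math

PHI_0 = 2.067833848e-15      # flux quantum in Wb
HBAR  = 1.054571817e-34      # reduced Planck constant in J*s
EV    = 1.602176634e-19      # 1 eV in J

def hc2_bcs(delta_eV, v_fermi):
    """H_c2 = pi * Phi_0 * Delta^2 / (2 * hbar^2 * v_F^2), in tesla."""
    delta_J = delta_eV * EV
    return math.pi * PHI_0 * delta_J**2 / (2.0 * HBAR**2 * v_fermi**2)

V_F = 1.0e5                  # m/s -- hypothetical value, for illustration only
print(f"H_c2(Delta = 3 meV) = {hc2_bcs(3e-3, V_F):.1f} T")
# Doubling the gap quadruples the estimate (Delta^2 scaling):
print(hc2_bcs(6e-3, V_F) / hc2_bcs(3e-3, V_F))
```

A large gap thus pushes the orbital estimate up quadratically, consistent with the observation that the kink-defined critical field exceeds the Pauli limit.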
From the tunnelling spectra, a mean superconducting gap of 3$\;$meV is widely observed, which leads to a very high ratio of $2\Delta/k_\mathrm{B}T_\mathrm{c}\approx 16.6$, suggesting strong coupling superconductivity. We appreciate the useful discussions with WANG Fa, FU Liang, WANG QiangHua and LI JianXin. This work was supported by the 973 Project of the Ministry of Science and Technology of China (Grant Nos. 2011CBA001002, 2010CB923002, and 2012CB821403), the National Natural Science Foundation of China (Grant No. 11034011), the Program for New Century Excellent Talents in University (Grant No. NCET-12-0255), and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. $^{\dag}$[email protected], $^{*}$[email protected]
\section{Introduction} The Stark operator $H_0$ is a self-adjoint operator on $L^2(\mathbb{R})$ given by the potential $v(x)=-x$: \begin{equation}\label{Gu} H_0u=-u^{\prime\prime}-xu. \end{equation} The operator describes a charged quantum particle in a constant electric field. The Stark effect (named after Johannes Stark \footnote{It was independently discovered by the physicist Antonino Lo Surdo.}) originates from the interaction between a charge distribution (atom or molecule) and an external electric field. In many cases, the particle is also subjected to an additional electric potential $q$. For example, the hydrogen Stark effect is governed by an additional Coulomb potential. The Stark effect is an important subject in quantum theory, classical electrostatics and other parts of the physics literature \cite{solem1997variations,epstein1926stark,courtney1995classical,py1}. In mathematics, it has also attracted a lot of attention, see \cite{Kor,Kor1,Kor2,Graf1,Her2,Her1,Her3,Her4,Her5,Ya1,Ya2,Jen1,Jen2,Jen3}. In this paper, we will investigate a class of more general operators, which we call {\it Stark type} operators. Given $v_{\alpha}(x)=-x^{\alpha}$ for $x\geq 0$ with $0<\alpha<2$, let $\widetilde{v}_{\alpha}(x)$ be an extension of $v_{\alpha}(x)$ to $\mathbb{R}$ such that $\lim_{x\to -\infty}\widetilde{v}_{\alpha}(x)=\infty$. Let $H_0^{\alpha}=-D^2 +v_{\alpha}$, which is defined on $\mathbb{R}^+$ with some boundary condition at $x=0$, and $\widetilde{H}_0^{\alpha}=-D^2 +\widetilde{v}_{\alpha}$, which is defined on $\mathbb{R}$. We call $H_0^{\alpha}$ (or $\widetilde{H}_0^{\alpha}$) a Stark type operator. The perturbed Stark type operator is given by an additional potential: \begin{equation}\label{Gpwstark} H^{\alpha}u=H_0^{\alpha}u+qu( \text{ or } \widetilde{H}^{\alpha}u=\widetilde{H}_0^{\alpha}u+qu), \end{equation} where $H_0^{\alpha}=-D^2+v_{\alpha}$ (or $\widetilde{H}_0^{\alpha}=-D^2+\widetilde{v}_{\alpha}$) and $q$ is a decaying perturbation.
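As a quick numerical sanity check (not part of the paper's argument), the generalized eigenfunctions of the unperturbed Stark operator $H_0$ are Airy functions: $u(x)=\mathrm{Ai}(-(x+E))$ solves $-u^{\prime\prime}-xu=Eu$, since $\mathrm{Ai}^{\prime\prime}(z)=z\,\mathrm{Ai}(z)$. This can be verified with mpmath:

```python
import mpmath as mp

def u(x, E):
    """Candidate generalized eigenfunction of -u'' - x u = E u."""
    return mp.airyai(-(x + E))

x, E = mp.mpf('1.3'), mp.mpf('0.7')
# mp.airyai(z, 2) gives the second derivative Ai''(z); by the chain rule
# u''(x) = Ai''(-(x+E)), so the residual below should vanish.
upp = mp.airyai(-(x + E), 2)
residual = -upp - x * u(x, E) - E * u(x, E)
print(residual)   # ~0 up to working precision
```

These solutions decay only like $x^{-1/4}$ as $x\to\infty$ and are not square-integrable, consistent with $H_0$ having no eigenvalues.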
It is well known that $\sigma _{\rm ess}(H^{\alpha}_0)=\sigma _{\rm ac}(H^{\alpha}_0)=\mathbb{R}$ and that $H^{\alpha}_0$ does not have any eigenvalue. We are interested in criteria on the perturbation such that the associated perturbed Stark type operator has a single embedded eigenvalue, finitely many embedded eigenvalues or infinitely many embedded eigenvalues. We refer the readers to \cite{Her1,Her2} and references therein for embedded eigenvalues (resonances) of operators with Stark effect. In the following, we always assume $\lim_{x\to\infty}|q(x)|=0$ and $\lim_{x\to-\infty}|q(x)|=0$. For the single embedded eigenvalue problem, Vakulenko showed that if $q(x)=\frac{O(1)}{1+|x|^{\frac{1}{2}+\epsilon}}$ for some $\epsilon>0$, then the perturbed Stark operator $Hu=-u^{\prime\prime}-xu+qu$ has no eigenvalues in $L^2(\mathbb{R})$ \cite{vakulenko1986nonexistence}. Naboko and Pushnitskii proved that the perturbed Stark type operator $H^{\alpha}_0+q$ on $\mathbb{R}^+$ has no eigenvalues if $|q(x)|\leq \frac{C}{1+x^{1-\frac{\alpha}{2}}}$ with $C< 1-\frac{\alpha}{2}$ \cite{naboko1}. Is the bound $1-\frac{\alpha}{2}$ sharp? If not, what is the sharp bound? Before answering those questions, we want to mention the problem of embedded eigenvalues for the perturbed free Schr\"odinger operator $-D^2+V$. Let $a=\limsup_{x\to \infty} x|V(x)|$. By a result of Kato \cite{kato}, there is no eigenvalue $E$ with $E>a^2$, which holds for Schr\"odinger operators in any dimension. From the classical Wigner-von Neumann type functions \begin{equation*} V(x)=\frac{c}{1+x}\sin( kx+\phi), \end{equation*} we know that one cannot do better than $\frac{a^2}{4}$. In the one dimensional case, Atkinson and Everitt \cite{atk} obtained the optimal bound $\frac{4a^2}{\pi ^2}$; that is, there is no eigenvalue in $(\frac{4a^2}{\pi ^2},\infty)$, and there are examples with eigenvalues approaching this bound.
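For orientation, the three thresholds just mentioned can be compared directly in units of $a^2$: Kato's bound $a^2$, the Wigner-von Neumann scale $a^2/4$, and the sharp Atkinson-Everitt bound $4a^2/\pi^2$. A one-line check of their ordering (our illustration):

```python
import math

a = 1.0                          # normalize a = limsup x|V(x)|
kato  = a**2                     # no eigenvalues above a^2 (any dimension)
wvn   = a**2 / 4                 # scale reached by Wigner-von Neumann examples
sharp = 4 * a**2 / math.pi**2    # optimal 1-d bound, ~0.405 a^2

print(wvn, sharp, kato)          # 0.25  0.405...  1.0
assert wvn < sharp < kato
```

So the sharp one-dimensional threshold sits strictly between the Wigner-von Neumann scale and Kato's general bound.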
We refer the readers to Simon's paper for the full history \cite{simon2017tosio} and to a short note \cite{liu} for a complete proof. It is natural to ask what happens at the transition line $\frac{4a^2}{\pi ^2}$. A transition line is always hard to deal with since it cannot be approached from either side. The first purpose of this paper is to obtain the sharp spectral transition for the existence of eigenvalues of perturbed Stark type operators and also to explore what happens on the transition lines for both Schr\"odinger operators and Stark type operators. We should mention that some sharp results for the preservation of the absolutely continuous spectrum have been obtained \cite{christ2003absolutely,kiselev2000absolutely,killipimrn}. \begin{theorem}\label{Mainthm1} Suppose the potential $q$ satisfies \begin{equation*} \limsup_{x\to \infty}{x}^{1-\frac{\alpha}{2}}|q(x)|<\frac{2-\alpha}{4}\pi. \end{equation*} Then $ -u^{\prime\prime}-x^{\alpha}u+qu=Eu$ admits no $L^2(\mathbb{R}^+)$ solutions for any $E\in \mathbb{R}$. \end{theorem} The following result shows that the bound $\frac{2-\alpha}{4}\pi$ is optimal and can be achieved. \begin{theorem}\label{Mainthm2} For any $E\in \mathbb{R}$, $a\geq\frac{2-\alpha}{4}\pi$ and $\theta\in [0,\pi]$, there exist potentials $q(x)$ on $\mathbb{R}^+$ such that \begin{equation*} \limsup_{x\to \infty}{x}^{1-\frac{\alpha}{2}}|q(x)|=a, \end{equation*} and the eigen-equation $ -u^{\prime\prime}-x^{\alpha}u+qu=Eu$ has an $L^2(\mathbb{R}^+)$ solution with the boundary condition $\frac{u^{\prime}(0)}{u(0)}=\tan\theta$. \end{theorem} In the course of the proof of Theorem \ref{Mainthm2}, we also prove the following theorem, which covers the (missing) critical case for the Schr\"odinger operator.
\begin{theorem}\label{Schrcritical} For each pair $(\lambda,a)$ such that $\lambda= \frac{4a^2}{\pi^2}>0$ and any $\theta\in[0,\pi]$, there exist potentials $V$ such that $\limsup_{x\to \infty}x|V(x)|=|a| $ and the associated Schr\"odinger equation $-u^{\prime\prime}+Vu=\lambda u$ has an $L^2(\mathbb{R}^+)$ solution with the boundary condition $\frac{u^{\prime}(0)}{u(0)}=\tan\theta$. \end{theorem} \begin{remark} With some modifications in our constructions, we can make the potentials in Theorems \ref{Mainthm2} and \ref{Schrcritical} smooth. \end{remark} The proof of Theorems \ref{Mainthm1}, \ref{Mainthm2} and \ref{Schrcritical} is inspired by the Schr\"odinger case (see \cite{atk,eastham1982schrodinger}). The novelty here is that instead of using sign type functions $V(x)=\frac{c}{1+x}\text{ sgn } ( \sin (kx+\phi))$ in the constructions, we use sign type functions piecewise, namely $V_n(x)=\frac{c_n}{1+x}\text{ sgn } ( \sin (kx+\phi))\chi_{[a_n,b_n]}$, and glue them together. This piecewise construction allows us to address the transition line mentioned above. For the sharp transition of eigenvalues embedded into $(-2,2)$ for the discrete Schr\"{o}dinger operator, see \cite{jl3}. Define $P\subset \mathbb{R}$ as \begin{equation*} P=\{E\in\mathbb{R}: -u^{\prime\prime}-x^{\alpha}u+qu=Eu \text{ has an } L^2(\mathbb{R}^+) \text{ solution}\} . \end{equation*} $P$ is the collection of the eigenvalues of $H_0^{\alpha}+q$ over all possible boundary conditions at $0$. The next question is what the criterion is for finitely many embedded eigenvalues. We obtain the following. \begin{theorem}\label{Maintheoremapr3} Suppose the potential $q$ satisfies \begin{equation*} \limsup_{x\to \infty}{x}^{1-\frac{\alpha}{2}}|q(x)|=a.
\end{equation*} Then we have $$\# P\leq \frac{2a^2}{(2-\alpha)^2}.$$ \end{theorem} \begin{theorem}\label{Mainthm3} For any $\{ E_j\}_{j=1}^N\subset \mathbb{R}$ and $\{\theta_j\}_{j=1}^N\subset [0,\pi]$, there exist functions $q\in C^{\infty}[0,+\infty)$ such that \begin{equation}\label{Ggoalb} \limsup_{x\to \infty} {x}^{1-\frac{\alpha}{2}} |q(x)|\leq (2-\alpha)e^{2\sqrt{\ln N}}N, \end{equation} and for each $E_j$, $j=1,2,\cdots,N$, the eigen-equation $-u^{\prime\prime}-x^{\alpha}u+qu=E_ju$ has an $L^2(\mathbb{R}^+)$ solution with the boundary condition $\frac{u^{\prime}(0)}{u(0)}=\tan\theta_j$. \end{theorem} Since $e^{2\sqrt{\ln N}}$ is asymptotically smaller than $N^{\epsilon}$ for any $\epsilon>0$ as $N$ goes to infinity, we have \begin{corollary} For any $\{ E_j\}_{j=1}^N\subset \mathbb{R}$ and $\{\theta_j\}_{j=1}^N\subset [0,\pi]$, there exist functions $q\in C^{\infty}[0,+\infty)$ such that \begin{equation*} \limsup_{x\to \infty} {x}^{1-\frac{\alpha}{2}} |q(x)|\leq C(\epsilon)N^{1+\epsilon}, \end{equation*} and for each $E_j$, $j=1,2,\cdots,N$, the eigen-equation $-u^{\prime\prime}-x^{\alpha}u+qu=E_ju$ has an $L^2(\mathbb{R}^+)$ solution with the boundary condition $\frac{u^{\prime}(0)}{u(0)}=\tan\theta_j$. \end{corollary} \begin{remark}\label{Reop} \begin{itemize} \item In our forthcoming paper, we will prove that the bound in Theorem \ref{Maintheoremapr3} is sharp \cite{liusharpstark}. \item Theorems \ref{Maintheoremapr3} and \ref{Mainthm3} imply that $O(1)$ is the criterion for finitely many $L^2(\mathbb{R}^+)$ eigenvalues for the perturbed Stark type operator. For the Schr\"odinger operator, the answer is no: for any positive sequence $\{E_j\}$ satisfying $\sum_{j}\sqrt{E_j}<\infty$, Simon \cite{simdense} constructed a potential $V(x)=\frac{O(1)}{1+x}$ such that $-D^2+V$ has eigenvalues $\{E_j\}$.
\item In \cite{simdense}, the sum of Wigner-von Neumann type functions $\sum_{j=1}^N c\frac{\sin(\sqrt{E_j}x+\phi_j)}{1+x}$ is used to create positive eigenvalues $\{E_j\}$ for the perturbed free Schr\"odinger operator. However, after the Liouville transformation, the perturbed Stark type operator always has the ``eigenvalue'' 1. So it is hard to use Wigner-von Neumann type functions directly in the constructions in our situation. \item Our proof of Theorems \ref{Maintheoremapr3} and \ref{Mainthm3} is effective. Instead of $O(1)$, the explicit bounds $(2-\alpha)e^{2\sqrt{\ln N}}N$ and $\frac{2a^2}{(2-\alpha)^2}$ are obtained. \item Our constructions are very general. We only give the parameters specific values in the last step. \end{itemize} \end{remark} The proof of Theorem \ref{Maintheoremapr3} is motivated by \cite{kiselev1998modified}. However, the techniques here are more involved. The key idea of \cite{kiselev1998modified} is to show the almost orthogonality of $\frac{\theta(x,E_1)}{1+x}$ and $\frac{\theta(x,E_2)}{1+x}$ in the Hilbert space $ L^2([0,B],(1+x)dx)$ for all large $B$, where $\theta(x,E_1)$ ($\theta(x,E_2)$) is the Pr\"ufer angle with respect to the energy $E_1$ ($E_2$). However, the perturbed Stark (type) operator has its own difficulty. By the Liouville transformation, we can transfer the eigen-equation $Hu=Eu$ of the perturbed Stark (type) operator to the eigen-equation $(-D^2+V)u=u$ of a perturbed free Schr\"odinger operator. This means that all the new eigen-equations of the perturbed free Schr\"odinger operator share the common eigenvalue $1$ but with different potentials. It is very challenging to deal with common eigenvalues for the perturbed free Schr\"odinger operator or a common quasimomentum for the perturbed periodic Schr\"odinger operator, since the resonance phenomenon shows up. This is the reason why an assumption of nonresonance is needed, for example in \cite{ld1,krs}. In this paper, we overcome the difficulty with two new ingredients.
Firstly, we give a general estimate for oscillating functions, which generalizes the Wigner-von Neumann type functions. See the comments right after Lemma \ref{Keyle1}. We mention that the possible embedded eigenvalues for such (or more general) potentials can be determined (e.g. \cite{Luk13,Luk14}). Secondly, we take the second leading term of the evolution of the Pr\"ufer angle (the first leading term is 1) into consideration so that the resonance phenomenon can be well studied. Moreover, the two theorems on oscillatory integrals and almost orthogonality are universal. See Section \ref{UOA}. Let us say more about the construction part. For the perturbed Stark (type) operator, under a rational independence assumption on the set $\{E_j\}$, Naboko and Pushnitskii \cite{naboko1} proved Theorems \ref{Mainthm3} and \ref{Mainthm4} without quantitative bounds. There are more results for the perturbed free Schr\"odinger case $H=-D^2+V$. Naboko \cite{nabdense} and Simon \cite{simdense} constructed potentials for which the associated Schr\"odinger operator has given eigenvalues, with or without a rational independence assumption. By the Pr\"ufer transformation or a generalized Pr\"ufer transformation, many authors have considered (dense) eigenvalues embedded into the essential spectrum or absolutely continuous spectrum for the perturbed free Schr\"odinger operator, the perturbed periodic Schr\"odinger operator or the discrete Schr\"odinger operator \cite{krs,lukdcmv,kru,remling2000bounds}. Recently, by the combination of the Pr\"ufer transformation (generalized Pr\"ufer transformation) and piecewise potentials, Jitomirskaya-Liu and Liu-Ong constructed asymptotically flat (hyperbolic) manifolds, perturbed periodic operators and perturbed Jacobi operators with finitely or countably many embedded eigenvalues \cite{ld1,jl,jl3}. Here, we develop the piecewise potential techniques of \cite{ld1,jl,jl3} in several aspects.
Firstly, we give the universal constructions in an effective way. We give the single-piece constructions and obtain effective bounds in Section \ref{ESPC}. In Section \ref{UGC}, the universal gluing constructions are given and effective bounds are obtained as well. Secondly, as we mentioned before, we deal with the situation of resonant eigenvalues. Unlike the perturbed free or perturbed periodic Schr\"odinger operator, the perturbation here is very small since $q(x)=o(1)$ as $x\to \infty$; the Stark term $x^{\alpha}$ is much larger than the perturbation $q$ in this paper. It is even hard to imagine that, under suitable constructions, the perturbation will dominate the evolution of the equation. It turns out that the leading terms of the dominant contributions corresponding to all the energies are the same. So we need to tackle the second-order term to distinguish the eigen-equations among different energies. This is the same difficulty as in establishing the almost orthogonality among Pr\"ufer angles. After overcoming those difficulties, we are able to prove Theorem \ref{Mainthm3}. Moreover, we can construct potentials with infinitely many eigenvalues. See Theorem \ref{Mainthm4} below. We believe that our method has wide applications in the study of Schr\"odinger operators. \begin{theorem}\label{Mainthm4} Let $h(x)>0$ be any function on $(0,\infty)$ with $ \lim_{x\to \infty}h(x) = \infty$ and let $\{\theta_j\}\subset [0,\pi]$ be any sequence. Then for any given $\{ E_j\}_{j=1}^{\infty}\subset \mathbb{R}$, there exist functions $q\in C^{\infty}[0,+\infty)$ such that \begin{equation}\label{Ggoala} |q(x)|\leq \frac{h(x)}{1+{x}^{1-\frac{\alpha}{2}}}\quad \text{for } x>0, \end{equation} and for any $E_j$, the eigen-equation $-u^{\prime\prime}-x^{\alpha}u+qu=E_ju$ has an $L^2(\mathbb{R}^+)$ solution with boundary condition $\frac{u^{\prime}(0)}{u(0)}=\tan\theta_j$.
\end{theorem} Now we consider the operators $H_0^{\alpha}=-D^2 +v_{\alpha}$ on $\mathbb{R}^+$ with some fixed boundary condition at $x=0$ (or the operators $\widetilde{H}_0^{\alpha}=-D^2 +\widetilde{v}_{\alpha}$ on $\mathbb{R}$). In this setting, $E$ is an eigenvalue of $H_0^{\alpha}$ (or $\widetilde{H}_0^{\alpha}$) if and only if $H_0^{\alpha}u=Eu$ has an $L^2(\mathbb{R}^+)$ (or $L^2(\mathbb{R})$) solution. Based on the previous theorems and some additional arguments, we obtain a number of corollaries. \begin{corollary}\label{cor1} Suppose the potential $q$ satisfies \begin{equation*} \limsup_{x\to \infty}{x}^{1-\frac{\alpha}{2}}|q(x)|<\frac{2-\alpha}{4}\pi. \end{equation*} Then $H^{\alpha}=H_0^{\alpha}+q$ (or $\widetilde{H}^{\alpha}=\widetilde{H}_0^{\alpha}+q$) admits no eigenvalues. \end{corollary} \begin{corollary}\label{cor2} For any $E\in \mathbb{R}$ and $a\geq\frac{2-\alpha}{4}\pi$, there exist potentials $q(x)$ such that \begin{equation*} \limsup_{x\to \infty}{x}^{1-\frac{\alpha}{2}}|q(x)|=a, \end{equation*} and $H^{\alpha}=H_0^{\alpha}+q$ (or $\widetilde{H}^{\alpha}=\widetilde{H}_0^{\alpha}+q$) has an eigenvalue $E$. \end{corollary} \begin{corollary}\label{cor3} Suppose the potential $q$ satisfies \begin{equation*} \limsup_{x\to \infty}{x}^{1-\frac{\alpha}{2}}|q(x)|=a. \end{equation*} Then the total number of eigenvalues of $H^{\alpha}=H_0^{\alpha}+q$ (or $\widetilde{H}^{\alpha}=\widetilde{H}_0^{\alpha}+q$) is at most $\frac{2a^2}{(2-\alpha)^2}$. \end{corollary} \begin{corollary}\label{cor4} For any $\{ E_j\}_{j=1}^N\subset \mathbb{R}$, there exist functions $q\in C^{\infty}[0,+\infty)$ $(q\in C^{\infty}(\mathbb{R}))$ such that \begin{equation*} \limsup_{x\to \infty} {x}^{1-\frac{\alpha}{2}} |q(x)|\leq (2-\alpha)e^{2\sqrt{\ln N}}N, \end{equation*} and $H^{\alpha}=H_0^{\alpha}+q$ $(\widetilde{H}^{\alpha}=\widetilde{H}_0^{\alpha}+q)$ has eigenvalues $\{ E_j\}_{j=1}^N$.
\end{corollary} \begin{corollary}\label{cor5} Let $h(x)>0$ be any function on $(0,\infty)$ with $ \lim_{x\to \infty}h(x) = \infty$. Then for any given $\{ E_j\}_{j=1}^{\infty}\subset \mathbb{R}$, there exist functions $q\in C^{\infty}[0,+\infty)$ $(q\in C^{\infty}(\mathbb{R}))$ such that \begin{equation*} |q(x)|\leq \frac{h(x)}{1+{x}^{1-\frac{\alpha}{2}}}\quad \text{for } x>0, \end{equation*} and $H^{\alpha}=H_0^{\alpha}+q$ $(\widetilde{H}^{\alpha}=\widetilde{H}_0^{\alpha}+q)$ has eigenvalues $\{ E_j\}_{j=1}^{\infty}$. \end{corollary} Our paper is organized in the following way. In the first eight sections, we give the proofs of all the theorems only in the case $\alpha=1$. In Section \ref{Section:Small}, we give some preparations. In Section \ref{UOA}, we establish a universal oscillatory integral estimate and prove the almost orthogonality between different Pr\"ufer angles. In Section \ref{SE}, we prove Theorems \ref{Mainthm1} and \ref{Mainthm2}. In Section \ref{finitelymany}, we prove Theorem \ref{Maintheoremapr3}. In Section \ref{ESPC}, we give the construction of the potentials for a single piece together with effective bounds. In Section \ref{UGC}, we give the general method to glue the piecewise functions together, again with effective bounds. In Section \ref{Twoapp}, as two applications, we prove Theorems \ref{Mainthm3} and \ref{Mainthm4}, as well as all the corollaries. In Section \ref{General}, we point out the modifications needed so that our arguments work for general $\alpha\in(0,2)$. \section{Preparations}\label{Section:Small} Let $v$ be a positive function on $\mathbb{R}^+$ and consider the Schr\"odinger equation on $\mathbb{R}^+$, \begin{equation}\label{Gsch} -u^{\prime\prime}(x)-v(x)u(x)+q(x)u(x)=Eu(x). \end{equation} The Liouville transformation (see \cite{christ2003absolutely,naboko1}) is given by \begin{equation}\label{GLiou} \xi(x)=\int_0^x\sqrt{v(t)} dt, \quad \phi(\xi)=v(x(\xi))^{\frac{1}{4}}u(x(\xi)).
\end{equation} We define a weight function $p(\xi)$ by \begin{equation}\label{GWei} p(\xi)= \frac{1}{v(x(\xi))}. \end{equation} We also define a potential by \begin{equation}\label{GPQ} Q(\xi,E)= -\frac{5}{16}\frac{|v^{\prime}(x(\xi))|^2}{v(x(\xi))^3}+\frac{1}{4}\frac{v^{\prime\prime}(x(\xi))}{v(x(\xi))^2} +\frac{q(x(\xi))-E}{v(x(\xi))}. \end{equation} In the following, we use the Liouville transformation to carry out our proofs. We take $v(x)=x^{\alpha}$ for some $0<\alpha<2$ in the following arguments. As mentioned above, we first prove the case $\alpha=1$, that is, $v(x)=x$ for $x\geq 0$. Let $c=(\frac{3}{2})^{\frac{2}{3}}$. Under this assumption, one has \begin{equation}\label{GLiou1} \xi=\frac{2}{3}x^{\frac{3}{2}}, \quad \phi(\xi,E)=(\frac{3}{2})^{\frac{1}{6}}\xi^{\frac{1}{6}}u(x(\xi)), \end{equation} \begin{equation}\label{GWei1} p(\xi)= \frac{1}{c\xi^{\frac{2}{3}}}, \end{equation} and \begin{equation}\label{GPQ1} Q(\xi,E)= -\frac{5}{36\xi^2}+\frac{q(c\xi^{\frac{2}{3}})-E}{c\xi^{\frac{2}{3}}}. \end{equation} Notice that the potential $Q(\xi,E)$ depends on $q$ and $E$. Suppose $u\in L^2(\mathbb{R}^+)$ is a solution of \eqref{Gsch} with $v(x)=x$. It follows that $\phi$ satisfies \begin{equation}\label{Gschxi} -\frac{d^2\phi}{d\xi^2}+Q(\xi,E)\phi=\phi, \end{equation} and $\phi\in L^2(\mathbb{R} ^+,p(\xi)d\xi)$. This implies $\phi^{\prime}\in L^2(\mathbb{R} ^+,p(\xi)d\xi)$ \cite[Lemma 1]{naboko1}. Let us introduce the Pr\"{u}fer transformation. Let \begin{equation}\label{GPruf1} \phi(\xi,E)=R(\xi,E)\sin\theta(\xi,E), \end{equation} and \begin{equation}\label{GPruf} \frac{d \phi(\xi,E)}{d \xi}=R(\xi,E)\cos\theta(\xi,E). \end{equation} Thus we have \begin{equation}\label{GPrufRmar14} \frac{d\log R(\xi,E)}{d\xi}=\frac{1}{2}Q(\xi,E)\sin2\theta(\xi,E) \end{equation} and \begin{equation}\label{GPrufeTmar14} \frac{d\theta(\xi,E)}{d\xi}=1-Q(\xi,E)\sin^2\theta(\xi,E). \end{equation} We need one more lemma. See the Appendix for the proof.
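Both the constant in the transformed potential $Q(\xi,E)$ and the Pr\"ufer system above can be verified symbolically. The following sketch (ours, using sympy) checks that $\frac{5}{16}c^{-3}=\frac{5}{36}$ for $c=(3/2)^{2/3}$, and rederives $\theta'$ and $(\log R)'$ from the ansatz $\phi=R\sin\theta$, $\phi'=R\cos\theta$ together with the equation $\phi''=(Q-1)\phi$:

```python
import sympy as sp

# Constant in Q(xi, E): with c = (3/2)^(2/3) one has (5/16)/c^3 = 5/36.
c = sp.Rational(3, 2) ** sp.Rational(2, 3)
assert sp.simplify(sp.Rational(5, 16) / c**3 - sp.Rational(5, 36)) == 0

# Pruefer system: phi = R sin(theta), phi' = R cos(theta),
# and phi'' = (Q - 1) phi.  Differentiating the two ansatz relations
# gives a linear system for R' (Rp) and theta' (thp).
th, Q, R, Rp, thp = sp.symbols('theta Q R Rp thp', real=True)
eq1 = sp.Eq(Rp * sp.sin(th) + R * thp * sp.cos(th), R * sp.cos(th))
eq2 = sp.Eq(Rp * sp.cos(th) - R * thp * sp.sin(th), (Q - 1) * R * sp.sin(th))
sol = sp.solve([eq1, eq2], [Rp, thp], dict=True)[0]

dtheta = sp.simplify(sol[thp])        # expect 1 - Q sin^2(theta)
dlogR = sp.simplify(sol[Rp] / R)      # expect (Q/2) sin(2 theta)
assert sp.simplify(dtheta - (1 - Q * sp.sin(th)**2)) == 0
assert sp.simplify(dlogR - Q * sp.sin(2*th) / 2) == 0
```

This confirms the two Pr\"ufer equations used throughout the proofs below.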
\begin{lemma}\label{Keyle2} Suppose $\lim _{x\to -\infty}\widetilde{q}(x)=\infty$. Consider the equation \begin{equation}\label{Gnesc} -y^{\prime\prime}+\widetilde{q}(x)y=0. \end{equation} Then for any $M>0$, there is a solution of \eqref{Gnesc} and $x_0<0$ such that \begin{equation}\label{Gnesc1} |y(x)| \leq e^{-M|x|} \end{equation} for $x<x_0$. \end{lemma} \section{Oscillatory integral and almost orthogonality}\label{UOA} \begin{lemma}\label{Keyle1} Let $\beta_1 >0, \beta_2>0$ and $\gamma \neq 0$ be constants. Suppose $\beta_1+\beta_2>1$ and $\beta_2>\frac{1}{2}$. Suppose $\theta(x)$ is a solution of the following equation on $x>1$, \begin{equation}\label{Gfourier9} \frac{d\theta (x)}{dx}=\gamma+\frac{O(1)}{x^{\beta_1}}. \end{equation} Let $\beta=\min\{\beta_2,\beta_1+\beta_2-1,2\beta_2-1\}$. Then for any $1<a<b$, we have \begin{equation}\label{Gtheta1} \int_{a}^b \frac{\sin \theta(x)}{x^{\beta_2}}dx=O(\frac{1}{a^{\beta}}), \int_{a}^b \frac{\cos \theta(x)}{x^{\beta_2}}dx=O(\frac{1}{a^{\beta}}) \end{equation} and \begin{equation}\label{Gtheta2} \int_{a}^b\frac{|\sin 2\theta(x)|}{x }dx=\frac{2}{\pi}\ln \frac{b}{a}+\frac{O(1)}{a^{\beta_1}}. \end{equation} \end{lemma} \begin{proof} We only give the proof of \eqref{Gtheta1}. The proof of \eqref{Gtheta2} is similar. We can assume $a$ is large enough and $\gamma>0$. Let $i_0$ be the largest integer such that $2\pi i_0<\theta(a)$. By \eqref{Gfourier9}, there exist $x_0<x_1<x_2<\cdots<x_t<x_{t+1}$ such that $b$ lies in $[x_{t},x_{t+1})$ and \begin{equation}\label{tildetheta} {\theta} (x_i)= 2\pi i_0+ i\pi \end{equation} for $i=1,2,\cdots,t,t+1$. By \eqref{Gfourier9}, one has \begin{equation*} x_{i+1}-x_{i}=\frac{\pi}{ \gamma}+ \frac{O(1)}{x_i^{\beta_1}}, \end{equation*} and \begin{equation}\label{e2} x_i\geq x_0+\frac{i\pi}{2\gamma}. \end{equation} Similarly, for $x\in[x_i,x_{i+1})$, we have \begin{equation*} {\theta} (x)=2\pi i_0+i \pi +\gamma(x-x_i)+\frac{O(1)}{ x_i^{\beta_1}}.
\end{equation*} Thus, one has \begin{eqnarray} \nonumber && \int_{x_i}^{x_{i+1}}|\sin\theta(x) |dx\\ &=&\int_{0}^{\frac{\pi}{\gamma}}\sin( \gamma x)dx+ \frac{O(1)}{1+x_i^{\beta_1}} =\frac{2}{\gamma}+ \frac{O(1)}{x_i^{\beta_1}}. \label{e1} \end{eqnarray} Notice that $\sin\theta(x) $ changes sign at $x_i$. The integral also has some cancellation between $(x_{i-1},x_i)$ and $(x_{i },x_{i+1})$. Let $t^\prime\in\{t,t+1\}$ be such that ${t}^\prime $ is odd. By \eqref{e1}, we obtain \begin{eqnarray} \nonumber\int_{a}^b\frac{\sin\theta(x)}{x^{\beta_2}}dx&=&\frac{O(1)}{a^{\beta_2}}+ \int_{x_1}^{x_{t^\prime}}\frac{\sin\theta(x)}{x^{\beta_2}}dx \\ \nonumber &=&\frac{O(1)}{a^{\beta_2}}+O(1)\sum_{i=1}^{t+1}(\frac{1}{x_i^{\beta_2}}-\frac{1}{x_{i+1}^{\beta_2}})+\sum_{i=1}^{t+1}\frac{O(1)}{x_i^{\beta_1}}\frac{1}{x_i^{\beta_2}}\\ \nonumber &=&\frac{O(1)}{a^{\beta_2}}+O(1)\sum_{i=1}^{t+1}( \frac{1}{x_i^{2\beta_2}}+\frac{1}{x_i^{\beta_1+\beta_2}})\\ &=&\frac{O(1)}{a^{\beta_2}}+\frac{O(1)}{a^{2\beta_2-1}}+\frac{O(1)}{a^{\beta_1+\beta_2-1}},\label{ellestimate} \end{eqnarray} where the last equality holds by \eqref{e2}. By the same argument, we have $\int_{a}^b \frac{\cos \theta(x)}{x^{\beta_2}}dx=O(\frac{1}{a^{\beta}})$. This completes our proof. \end{proof} If we let $\beta_1=\infty$ in Lemma \ref{Keyle1}, \eqref{Gtheta1} reduces to the case of the Wigner-von Neumann type functions, which has been proved by many authors. See \cite{atkinson1954asymptotic,harris1975asymptotic} for example. The case that $\beta_2=1$ and $a=1$ in \eqref{Gtheta1} has been established in \cite{ld1}. Let \begin{equation}\label{Gvapr} \frac{q(c\xi^{\frac{2}{3}})}{c\xi^{\frac{2}{3}}}=V(\xi), \text{ for }\xi>0. \end{equation} \begin{lemma}\label{Keyleboud} Suppose $V(\xi)$ in \eqref{Gvapr} satisfies $V(\xi)=\frac{O(1)}{1+\xi}$. Suppose $E_1\neq E_2$.
Then the following estimate holds for $\xi>\xi_0>1$: \begin{equation*} \int_{\xi_0}^{\xi} \frac{\sin2\theta(x,E_1) \sin2\theta(x,E_2) }{ 1+x} dx= \frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} \end{lemma} \begin{proof} It suffices to prove \begin{equation*} \int_{\xi_0}^{\infty} \frac{\sin2\theta(\xi,E_1) \sin2\theta(\xi,E_2) }{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} Observe that by the product-to-sum identity, \begin{equation*} 2\sin2\theta(\xi,E_1)\sin2\theta(\xi,E_2)=\cos (2\theta(\xi,E_1)-2\theta(\xi,E_2))-\cos (2\theta(\xi,E_1)+2\theta(\xi,E_2)). \end{equation*} It suffices to prove that \begin{equation*} \int_{\xi_0}^{\infty} \frac{\cos (2\theta(\xi,E_1)-2\theta(\xi,E_2))}{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}},\int_{\xi_0}^{\infty} \frac{\cos (2\theta(\xi,E_1)+2\theta(\xi,E_2))}{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} By \eqref{GPrufeTmar14}, one has \begin{equation}\label{GPrufeTmar132} \frac{d(\theta(\xi,E_1)+\theta(\xi,E_2))}{d\xi}=2-Q(\xi,E_1)\sin^2\theta(\xi,E_1)-Q(\xi,E_2)\sin^2\theta(\xi,E_2). \end{equation} By \eqref{GPQ1}, \eqref{Gtheta1} and \eqref{GPrufeTmar132}, we have \begin{equation*} \int_{\xi_0}^{\infty} \frac{\cos (2\theta(\xi,E_1)+2\theta(\xi,E_2))}{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} Thus we only need to prove \begin{equation}\label{Gkeyformar131} \int_{\xi_0}^{\infty} \frac{\cos (2\theta(\xi,E_1)-2\theta(\xi,E_2))}{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}}.
\end{equation} By \eqref{GPrufeTmar14} again, one has \begin{eqnarray} \frac{d(\theta(\xi,E_1)-\theta(\xi,E_2))}{d\xi} &=& (-\frac{5}{36\xi^2}-V(\xi))\sin^2\theta(\xi,E_2)- (-\frac{5}{36\xi^2}-V(\xi))\sin^2\theta(\xi,E_1)\nonumber\\ &&+\frac{E_1}{c\xi^{\frac{2}{3}}}\sin^2\theta(\xi,E_1)-\frac{E_2}{c\xi^{\frac{2}{3}}}\sin^2\theta(\xi,E_2) \nonumber\\ &=& (-\frac{5}{36\xi^2}-V(\xi))\sin^2\theta(\xi,E_2)- (-\frac{5}{36\xi^2}-V(\xi))\sin^2\theta(\xi,E_1) \nonumber\\ &&- \frac{1}{2} \frac{E_1}{c\xi^{\frac{2}{3}}}\cos2\theta(\xi,E_1)+\frac{1}{2}\frac{E_2}{c\xi^{\frac{2}{3}}}\cos2\theta(\xi,E_2) + \frac{E_1-E_2}{2c\xi^{\frac{2}{3}}}. \end{eqnarray} Define \begin{equation*} \beta(\xi)=\frac{E_1}{2c\xi^{\frac{2}{3}}}\cos2\theta(\xi,E_1)-\frac{E_2}{2c\xi^{\frac{2}{3}}}\cos2\theta(\xi,E_2). \end{equation*} Let $f(1)=\theta(1,E_1)-\theta(1,E_2)$ and \begin{equation*} \frac{d f(\xi)}{d\xi}=(-\frac{5}{36\xi^2}-V(\xi))\sin^2\theta(\xi,E_2)- (-\frac{5}{36\xi^2}-V(\xi))\sin^2\theta(\xi,E_1)+ \frac{E_1-E_2}{2c\xi^{\frac{2}{3}}}. \end{equation*} Thus \begin{equation*} f(\xi)-(\theta(\xi,E_1)-\theta(\xi,E_2))=\int_1^{\xi} \beta(x)dx. \end{equation*} By \eqref{Gtheta1}, we have for some $\beta_0$, \begin{equation*} \int_1^{\infty} \beta(x)dx=\beta_0, \int_{\xi}^{\infty} \beta(x)dx=\frac{O(1)}{1+\xi^{\frac{1}{3}}} \end{equation*} and then \begin{equation*} \int_1^{\xi} \beta(x)dx=\beta_0+\frac{O(1)}{1+\xi^{ \frac{1}{3}}}. \end{equation*} Thus \begin{equation*} \theta(\xi,E_1)-\theta(\xi,E_2)=f(\xi)+\frac{O(1)}{1+\xi^{ \frac{1}{3}}}-\beta_0. \end{equation*} In order to prove \eqref{Gkeyformar131}, it suffices to prove that \begin{equation}\label{Gkeyformar132} \int_{\xi_0}^{\infty} \frac{\cos 2f(\xi)}{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}},\int_{\xi_0}^{\infty} \frac{\sin 2f(\xi)}{ 1+\xi} d\xi=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. 
\end{equation} By the change of variable $y=\xi^{\frac{1}{3}}$, one has \begin{equation}\label{Gmar15} \frac{d f(y)}{dy}=\frac{df}{d\xi}\frac{d\xi}{dy}=\frac{3}{2c}(E_1-E_2) +\frac{O(1)}{1+y}. \end{equation} By \eqref{Gtheta1} and \eqref{Gmar15}, we have \begin{equation*} \int_{\xi_0^{\frac{1}{3}}}^{\infty} \frac{\cos 2f(y)}{ 1+y} dy=\frac{O(1)}{\xi_0^{\frac{1}{3}}},\int_{{\xi_0^{\frac{1}{3}}}}^{\infty} \frac{\sin 2f(y)}{ 1+y} dy=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} This implies \eqref{Gkeyformar132}. We finish the proof. \end{proof} \section{Single embedded eigenvalue}\label{SE} \begin{proof}[\bf Proof of Theorem \ref{Mainthm1}] As mentioned before, we only consider $\alpha=1$. By the assumption of Theorem \ref{Mainthm1}, one has \begin{equation*} \limsup_{\xi\to \infty}\frac{1}{2}\xi |\frac{q(c\xi^\frac{2}{3})} {c\xi^{\frac{2}{3}}}|=d<\frac{\pi}{12}. \end{equation*} Let $\epsilon$ be a small positive number. Then there exists some $\xi_0>0$ such that for all $\xi>\xi_0$, \begin{equation*} \frac{1}{2}\xi |\frac{q(c\xi^\frac{2}{3})} {c\xi^{\frac{2}{3}}}|<d+\epsilon<\frac{\pi}{12}. \end{equation*} By \eqref{GPrufRmar14} and Lemma \ref{Keyle1} ($a=\xi_0$ and $b=\xi$), one has for large $\xi_0$ and $\xi>\xi_0$, \begin{eqnarray*} \log R(\xi,E)-\log R(\xi_0,E) &\geq& O(1)-(d+\epsilon)\int_{\xi_0}^{\xi}\frac{|\sin2\theta(t,E)|}{t}dt \\ &\geq & O(1)- \frac{2}{\pi}(d+\epsilon)\ln \xi. \end{eqnarray*} Thus \begin{equation}\label{Gmar311N} R(\xi,E) \geq \frac{1}{C \xi^{ \frac{2}{\pi}(d+\epsilon)}} \end{equation} for large $\xi$. Let us estimate the $L^2(\mathbb{R}^+)$ norm of $R(\xi,E)$. Direct computation implies that (provided $\epsilon$ is small enough) \begin{eqnarray*} R^2(\xi,E)p(\xi) &\geq& \frac{1}{C \xi^{\frac{2}{3}} \xi^{2 \frac{2}{\pi}(d+\epsilon)}} \\ &\geq & \frac{1}{C \xi}. \end{eqnarray*} Thus $ R(\xi,E)\notin L^2(\mathbb{R} ^+,p(\xi)d\xi)$. This contradicts $ \phi\in L^2(\mathbb{R} ^+,p(\xi)d\xi)$ and hence contradicts $u\in L^2(\mathbb{R}^+)$.
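The constant $\frac{2}{\pi}$ coming from \eqref{Gtheta2}, which drives the growth rate above, is simply the mean value of $|\sin|$ over a period. This can be confirmed numerically in the model case $\theta(x)=x$ (i.e. $\gamma=1$, $\beta_1=\infty$); the following midpoint-rule computation is an illustration only, not part of the argument:

```python
import numpy as np

# Midpoint-rule evaluation of  int_a^b |sin(2x)|/x dx  for theta(x) = x,
# compared with the leading term (2/pi) ln(b/a) from (Gtheta2).
a, b, n = 100.0, 10000.0, 5_000_000
dx = (b - a) / n
xs = a + (np.arange(n) + 0.5) * dx
integral = float(np.sum(np.abs(np.sin(2.0 * xs)) / xs) * dx)

leading = (2.0 / np.pi) * np.log(b / a)
print(integral, leading)   # the two values differ by O(1/a)
```

With $a=100$ the two numbers agree to a few digits, consistent with the $O(1)/a^{\beta_1}$ error term.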
\end{proof} \begin{proof}[\bf Proof of Theorem \ref{Mainthm2} for non-critical points] Fix any $E\in\mathbb{R}$. In the case of $H=-D^2+ {v}+q$, let $u$ be the solution of $Hu=Eu$ with the boundary condition of $H$ at $x=0$. In the case of $\widetilde{H}=-D^2+\widetilde{v}+q$, let $q=0$ for $x<0$ and let $u$ be the solution of $\widetilde{H}u=Eu$ such that $u$ satisfies \eqref{Gnesc1}. So $u\in L^2(-\infty,0]$. We employ the same notation as in the proof of Theorem \ref{Mainthm1}. We define $q(x)$ for $x>0$ by \begin{equation}\label{Defapr20} \frac{1}{2}\frac{q(c\xi^{\frac{2}{3}})}{c\xi^{\frac{2}{3}}}=-\frac{d}{ {\xi}}{\rm sgn}(\sin2\theta(\xi,E)), \end{equation} where ${\rm sgn}(\cdot) $ is the sign function and $d>\frac{\pi}{12}$ is a constant, which will be determined later. Substitute \eqref{Defapr20} into \eqref{GPrufeTmar14}, and solve the nonlinear system for $\theta$ with a proper boundary condition $\theta(1,E)=\theta_0$. It is not difficult to see that \eqref{GPrufeTmar14} has a unique piecewise smooth global solution by a standard ODE existence and uniqueness theorem. Thus $q$ is well defined and \begin{equation}\label{e6} \frac{d\log R(\xi,E)}{d\xi}= (- \frac{5}{72\xi^2}-\frac{E}{2c\xi^{\frac{2}{3}}})\sin2\theta(\xi,E)- d \frac{|\sin2\theta(\xi,E)|}{\xi}. \end{equation} By \eqref{e6} and Lemma \ref{Keyle1} ($a=\xi_0$ and $b=\xi$), one has for large $\xi_0$ and $\xi>\xi_0$, \begin{eqnarray*} \log R(\xi,E)-\log R(\xi_0,E) &\leq& O(1)-d\int_{\xi_0}^{\xi}\frac{|\sin2\theta(t,E)|}{t}dt \\ &\leq & O(1)-\frac{2}{\pi}d\ln \xi. \end{eqnarray*} Thus \begin{equation}\label{Gmar311} R(\xi,E) \leq \frac{C}{ \xi^{\frac{2}{\pi}d}} \end{equation} for large $\xi$. Thus for some small $\epsilon>0$, \begin{eqnarray*} R^2(\xi,E)p(\xi) &\leq& \frac{C}{\xi^{\frac{2}{3}} \xi^{ \frac{4}{\pi}d}} \\ &\leq & \frac{C}{ \xi^{1+\epsilon}}, \end{eqnarray*} since $d>\frac{\pi}{12}$. This implies $ R(\xi,E)\in L^2(\mathbb{R} ^+,p(\xi)d\xi)$ and then $u\in L^2(\mathbb{R}^+)$.
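The decay exponent $\frac{2}{\pi}d$ in \eqref{Gmar311} can also be observed numerically by integrating the Pr\"ufer system with the potential \eqref{Defapr20}. The following sketch uses purely illustrative choices ($d=0.5>\frac{\pi}{12}$, $E=0$, $\theta(\xi_0)=0.3$) and a fixed-step RK4 integrator; it is not part of the proof:

```python
import math

# Pruefer system for the potential (Defapr20) with alpha = 1:
#   q(c xi^{2/3})/(c xi^{2/3}) = -(2d/xi) sgn(sin 2theta),
#   theta'   = 1 - Q sin^2(theta),   (log R)' = (1/2) Q sin(2 theta),
# with Q = -5/(36 xi^2) - E/(c xi^{2/3}) - (2d/xi) sgn(sin 2theta).
d, E = 0.5, 0.0                         # illustrative parameters only
c = 1.5 ** (2.0 / 3.0)

def rhs(xi, theta, log_r):
    s2 = math.sin(2.0 * theta)
    sgn = (s2 > 0) - (s2 < 0)
    Q = -5.0 / (36.0 * xi * xi) - E / (c * xi ** (2.0 / 3.0)) - (2.0 * d / xi) * sgn
    return 1.0 - Q * math.sin(theta) ** 2, 0.5 * Q * s2

xi0, xi1, h = 50.0, 2000.0, 0.02
theta, log_r = 0.3, 0.0                 # theta(xi0) = 0.3, log R(xi0) = 0
xi = xi0
for _ in range(int(round((xi1 - xi0) / h))):
    k1t, k1r = rhs(xi, theta, log_r)
    k2t, k2r = rhs(xi + h/2, theta + h/2*k1t, log_r + h/2*k1r)
    k3t, k3r = rhs(xi + h/2, theta + h/2*k2t, log_r + h/2*k2r)
    k4t, k4r = rhs(xi + h, theta + h*k3t, log_r + h*k3r)
    theta += h/6 * (k1t + 2*k2t + 2*k3t + k4t)
    log_r += h/6 * (k1r + 2*k2r + 2*k3r + k4r)
    xi += h

# (Gmar311) predicts  log R(xi1) - log R(xi0) ~ -(2d/pi) log(xi1/xi0)
predicted = -(2.0 * d / math.pi) * math.log(xi1 / xi0)
print(log_r, predicted)
```

The computed $\log R$ tracks the predicted slope $-\frac{2}{\pi}d$ up to the $O(1)/\xi_0^{1/3}$ error allowed by Lemma \ref{Keyle1}.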
In the case of $\widetilde{H}$, we also have $u\in L^2(\mathbb{R})$. For any $a >\frac{\pi}{4}$ in Theorem \ref{Mainthm2}, let $d=\frac{a}{3}$. By the definition of \eqref{Defapr20}, we have \begin{equation*} \limsup_{x\to \infty}\sqrt{{x}}|q(x)|=a. \end{equation*} We finish the proof. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{Mainthm2} for the critical point] In this case $a=\frac{\pi}{4}$. Let $d_0=\frac{\pi}{12}$. We employ the same notation as in the proof of the non-critical case. Let $\epsilon_n=\frac{1}{n}$ and $a_n= 2^{\frac{4}{\pi}n^3}$. We define $q(x)$ for $x>0$ piecewise. For $\xi\in[a_n,a_{n+1})$, we define \begin{equation}\label{Def} \frac{1}{2}\frac{q(c\xi^{\frac{2}{3}})}{c\xi^{\frac{2}{3}}}=-\frac{d_0+\epsilon_n}{ {\xi}}{\rm sgn}(\sin2\theta(\xi,E)). \end{equation} Suppose $q(c\xi^{\frac{2}{3}})$ is defined for $\xi\in(0,a_n]$. Let $\theta_n=\theta(a_n,E).$ Substitute \eqref{Def} into \eqref{GPrufeTmar14}, and solve the nonlinear system for $\theta$ with the boundary condition $\theta(a_n,E)=\theta_n$. Thus we have for $a_n\leq\xi\leq a_{n+1}$, \begin{equation}\label{e6apr20} \frac{d\log R(\xi,E)}{d\xi}= (- \frac{5}{72\xi^2}-\frac{E}{2c\xi^{\frac{2}{3}}})\sin2\theta(\xi,E)- (d_0+\epsilon_n) \frac{|\sin2\theta(\xi,E)|}{\xi}. \end{equation} By \eqref{e6apr20} and Lemma \ref{Keyle1} ($a=a_n$ and $b=a_{n+1}$), one has \begin{eqnarray*} \log R(a_{n+1},E)-\log R(a_n,E) &\leq& \frac{O(1)}{a_n^{\frac{1}{3}}}-\frac{2}{\pi}(d_0+\epsilon_n)\ln a_{n+1}, \end{eqnarray*} and for $a_n\leq \xi \leq a_{n+1}$, \begin{equation*} \log R(\xi,E)-\log R(a_n,E) \leq \frac{O(1)}{a_n^{\frac{1}{3}}}-\frac{2}{\pi}(d_0+\epsilon_n)\ln \xi. \end{equation*} Thus, one has \begin{equation*} R(a_n,E)=O(1). \end{equation*} Moreover, for $a_n\leq \xi\leq a_{n+1}$, one has \begin{eqnarray*} R^2(\xi,E)p(\xi) &\leq& O(1)R^2(a_n,E)\frac{1}{\xi^{\frac{2}{3}} \xi^{ \frac{4}{\pi}(d_0+\epsilon_n)}} \\ &\leq & \frac{O(1)}{ \xi^{1+\frac{4}{\pi}\epsilon_n}}.
\end{eqnarray*} Direct computation shows that \begin{equation*} \int_{a_n}^{a_{n+1}} R^2(\xi,E)p(\xi)d\xi\leq \frac{O(1)}{\epsilon_na_n^{\frac{4}{\pi}\epsilon_n}}\leq O(1)\frac{n}{2^{n^2}}. \end{equation*} This implies $ R(\xi,E)\in L^2(\mathbb{R} ^+,p(\xi)d\xi)$ and then $u\in L^2(\mathbb{R}^+)$ ($u\in L^2(\mathbb{R})$). \end{proof} \section{Proof of Theorem \ref{Maintheoremapr3}}\label{finitelymany} \begin{lemma}\cite[Lemma 4.4]{kiselev1998modified}\label{Leapr7} Let $\{e_i\}_{i=1}^N$ be a set of unit vectors in a Hilbert space $\mathcal{H}$ so that \begin{equation*} \alpha=N\sup_{j\neq k}| \langle e_k,e_j\rangle|<1. \end{equation*} Then \begin{equation}\label{Gapr71} \sum_{i=1}^N|\langle g,e_i\rangle|^2\leq (1+\alpha)||g||^2. \end{equation} \end{lemma} \begin{proof}[\bf Proof of Theorem \ref{Maintheoremapr3}] Let \begin{equation*} V(\xi)= \frac{q(c\xi^{\frac{2}{3}})}{c\xi^{\frac{2}{3}}}. \end{equation*} By the assumption of Theorem \ref{Maintheoremapr3}, for any $M>\frac{2}{3}a$, we have \begin{equation*} | V(\xi)|\leq \frac{M}{1+\xi} \end{equation*} for large $\xi$. By shifting the operator, we can assume \begin{equation}\label{Gapr76} | V(\xi)|\leq \frac{M}{1+\xi} \end{equation} for all $\xi>0$. Suppose we have $N$ eigenvalues and denote them by $E_1,E_2,\cdots,E_N$. This implies that for $i=1,2,\cdots,N$, \begin{equation*} R(\xi,E_i)\in L^2(\mathbb{R} ^+,p(\xi)d\xi), \end{equation*} and then \begin{equation*} \sum_{i=1}^NR(\xi,E_i)\in L^2(\mathbb{R} ^+,p(\xi)d\xi). \end{equation*} Thus there exists a sequence $B_j\to \infty$ such that \begin{equation*} R^2(B_j,E_i)p(B_j)\leq \frac{1}{100}B_j^{-1}, \end{equation*} and then by \eqref{GWei1}, one has \begin{equation}\label{Gapr72} R(B_j,E_i)\leq B_j^{-\frac{1}{6}}, \end{equation} for all $i=1,2,\cdots,N$. By \eqref{Gapr72} and \eqref{GPrufRmar14}, we have \begin{equation}\label{Gapr73} \int_{1}^{B_j} \frac{1}{2}Q(\xi,E_i)\sin2\theta(\xi,E_i) d\xi= \int_{1}^{B_j}\frac{d}{d\xi} \log R(\xi,E_i)d\xi\leq -\frac{1}{6}\log B_j+O(1).
\end{equation} By Lemma \ref{Keyle1}, one has \begin{equation}\label{Gapr74} \int_{1}^{B_j}(-\frac{5}{36\xi^2}-\frac{E}{c\xi^{\frac{2}{3}}})\sin2\theta(\xi,E_i)d\xi=O(1). \end{equation} By \eqref{Gapr73} and \eqref{Gapr74}, we have \begin{equation}\label{Gapr75} \int_{1}^{B_j}V(\xi)\sin2\theta(\xi,E_i) d\xi \leq -\frac{1}{3}\log B_j+O(1). \end{equation} Now consider the Hilbert spaces \begin{equation*} \mathcal{H}_j=L^2([1,B_j],(1+\xi)d\xi). \end{equation*} In $\mathcal{H}_j$, by \eqref{Gapr76} we have \begin{equation}\label{Gapr77} ||V||_{ \mathcal{H}_j}^2\leq M^2\log (1+B_j). \end{equation} Let \begin{equation*} e^j_{i}(\xi)=\frac{1}{\sqrt{A_i^j}}\frac{\sin 2\theta(\xi,E_i)}{1+\xi}\chi_{[1,B_j]}(\xi), \end{equation*} where $A_i^j$ is chosen such that $e_i^j$ is a unit vector in $\mathcal{H}_j$. We have the following estimate, \begin{eqnarray} A_i^j &=& \int_1^{B_j}\frac{\sin^2 2\theta(\xi,E_i)}{1+\xi}d\xi \nonumber\\ &=& \int_1^{B_j}\frac{1}{2(1+\xi)}d\xi- \int_1^{B_j}\frac{\cos 4\theta(\xi,E_i)}{2(1+\xi)}d\xi\nonumber\\ &=& \frac{1}{2}\log B_j+O(1),\label{Gapr79} \end{eqnarray} since $\int_1^{B_j}\frac{\cos 4\theta(\xi,E_i)}{2(1+\xi)}d\xi=O(1)$ by Lemma \ref{Keyle1}. By Lemma \ref{Keyleboud}, we have for $i\neq k$, \begin{equation*} \int_1^{B_j} \frac{\sin 2\theta(\xi,E_i)\sin 2\theta(\xi,E_k)}{1+\xi}d\xi=O(1). \end{equation*} It yields that \begin{equation}\label{Gapr78} \langle e_i^j,e_k^j \rangle=\frac{O(1)}{\log B_j}. \end{equation} By \eqref{Gapr79} and \eqref{Gapr75}, \begin{equation}\label{Gapr710} \langle V,e^j_i \rangle_{\mathcal{H}_j}\leq -\frac{\sqrt{2}}{3}\sqrt{\log B_j}+O(1). \end{equation} By \eqref{Gapr71} and \eqref{Gapr78}, one has \begin{equation}\label{Gapr711} \sum_{i=1}^N |\langle V,e^j_i\rangle_{\mathcal{H}_j}|^2\leq (1+\frac{O(1)}{\log B_j})||V||_{\mathcal{H}_j}^2. \end{equation} By \eqref{Gapr710}, \eqref{Gapr711} and \eqref{Gapr77}, we have \begin{equation*} N\frac{2}{9}\log B_j\leq M^2 \log B_j+O(1).
\end{equation*} Letting $j\to \infty$, we get \begin{equation*} N \leq \frac{9}{2}M^2, \end{equation*} for any $M>\frac{2}{3}a$. This implies \begin{equation*} N \leq 2a^2. \end{equation*} \end{proof} \section{Effective single piece constructions}\label{ESPC} For the case of $\widetilde{H}$, we let $q(x)=0$ for $x<0$. In both cases, let \begin{equation}\label{Gv} \frac{q(c\xi^{\frac{2}{3}})}{c\xi^{\frac{2}{3}}}=V(\xi), \text{ for }\xi>0. \end{equation} Our goal is to construct $V(\xi)\approx \frac{1}{1+\xi}$ and then get $q(x)\approx\frac{1}{1+\sqrt{x}}$ by solving \eqref{Gv}. Denote \begin{equation}\label{GPQ1mar13} Q(\xi,E)= -\frac{5}{36\xi^2}-\frac{E}{c\xi^{\frac{2}{3}}}+V(\xi). \end{equation} Suppose we have constructed the function $q$ on $[0,x]$. Then we can define $u$ on $[0,x]$: let $u$ be the solution of $Hu=Eu$ on $[0,x]$ with some boundary condition at $0$. Under the Liouville transformation, $\phi$ satisfies \begin{equation}\label{Gschximar13} -\phi^{\prime\prime}+Q(\xi,E)\phi=\phi. \end{equation} Recall that we have \begin{equation}\label{GPrufR} \frac{d\log R(\xi,E)}{d\xi}=\frac{1}{2}Q(\xi,E)\sin2\theta(\xi,E) \end{equation} and \begin{equation}\label{GPrufeTmar13} \frac{d\theta(\xi,E)}{d\xi}=1-Q(\xi,E)\sin^2\theta(\xi,E). \end{equation} \begin{theorem}\label{Twocase} Fix $M>0$. Let $E\in \mathbb{R}$ and $ A=\{{E}_j\}_{j=1}^N$. Suppose $E\notin A$ and $\{E_j\}_{j=1}^N$ are distinct. Suppose $\theta_0\in[0,\pi]$. Let $\xi_1>\xi_0>b$. Then there exist a constant $C(E, A)$ (independent of $b$, $\xi_0$ and $\xi_1$) and a potential $ V(M,\xi,E,A,\xi_0,\xi_1,b,\theta_0)$ such that the following holds: \begin{description} \item[Potential] for $\xi_0\leq \xi \leq \xi_1$, ${\rm supp}( V)\subset(\xi_0,\xi_1)$, $ V\in C^{\infty}(\xi_0,\xi_1)$, and \begin{equation}\label{thm141} | V(M,\xi,E,A,\xi_0,\xi_1,b,\theta_0)|\leq \frac{4M}{\xi-b}. \end{equation} \item[Solution for $E$] Let $Q(\xi,E)$ be given by \eqref{GPQ1mar13}.
Then the solution of $(-D^2+Q(\xi,E))\phi=\phi$ with boundary condition $\frac{\phi^\prime(\xi_0)}{\phi(\xi_0)}=\tan\theta_0$ satisfies \begin{equation}\label{thm142} R(\xi_1,E)\leq (1+ \frac{CM}{(\xi_0-b)^{\frac{1}{3}}})(\frac{\xi_1-b}{\xi_0-b})^{-M} R(\xi_0,E) \end{equation} and for $\xi_0<\xi<\xi_1$, \begin{equation}\label{thm143} R(\xi,E)\leq (1+ \frac{CM}{(\xi_0-b)^{\frac{1}{3}}})R(\xi_0,E). \end{equation} \item[Solution for ${E}_j$] Let $Q(\xi,E_j)$ be given by \eqref{GPQ1mar13}. Then the solution of $(-D^2+ Q(\xi,E_j))\phi=\phi$ with any boundary condition at $\xi_0$ satisfies for $\xi_0<\xi\leq \xi_1$, \begin{equation}\label{thm144} R(\xi,{E}_j)\leq (1+ \frac{CM}{(\xi_0-b)^{\frac{1}{3}}}) R(\xi_0,{E}_j). \end{equation} \end{description} \end{theorem} \begin{proof} By changing $\xi$ to $\xi-b$, we may assume $b=0$. We consider the non-linear differential equation for $\xi>0$, \begin{equation}\label{Gnonlinear} \frac{d \theta(\xi,E,\xi_0,\theta_0)}{d\xi}=1-(-\frac{5}{36\xi^2}-\frac{E}{c\xi^{\frac{2}{3}}}-\frac{4M}{ 1+\xi}\sin2\theta)\sin^2\theta. \end{equation} Solving \eqref{Gnonlinear} on $[\xi_0,\infty)$ with the initial value $\theta(\xi_0)$ determined by the boundary condition $\frac{\phi^{\prime}(\xi_0)}{\phi(\xi_0)}=\tan \theta_0$, we get a unique solution. Let \begin{equation*} V(\xi)=-\frac{4M}{ 1+\xi}\sin2\theta(\xi,E,\xi_0,\theta_0). \end{equation*} Let $Q(\xi,E)$ be given by \eqref{GPQ1mar13}. We will prove that $V$ satisfies the conclusions of Theorem \ref{Twocase} after some modifications.
The solution $\phi(\xi,E)$ of \eqref{Gschximar13} satisfies \begin{eqnarray} \log R(\xi,E)- \log R(\xi_0,E) &=& -\int_{\xi_0}^{\xi} \frac{2M}{ 1+x}\sin^22\theta(x,E) dx+ \int_{\xi_0}^{\xi} (- \frac{5}{72x^2}-\frac{E}{2cx^{\frac{2}{3}}})\sin2\theta(x,E) dx\nonumber \\ &=& -\int_{\xi_0}^{\xi}\frac{M}{ 1+x}dx+ \int_{\xi_0}^{\xi} \frac{M}{ 1+x}\cos4\theta(x,E)dx\nonumber \\ && +\int_{\xi_0}^{\xi} ( -\frac{5}{72x^2}-\frac{E}{2cx^{\frac{2}{3}}})\sin2\theta(x,E) dx.\label{GPrufRmar13} \end{eqnarray} By \eqref{Gtheta1} and \eqref{GPrufeTmar13}, one has \begin{equation*} \int_{\xi_0}^{\xi} \frac{ 1}{ 1+x}\cos4\theta(x,E)dx=\frac{O(1)}{\xi_0^{\frac{2}{3}}}, \int_{\xi_0}^{\xi} (- \frac{5}{36x^2}-\frac{E}{cx^{\frac{2}{3}}})\sin2\theta(x,E) dx=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} By \eqref{GPrufRmar13}, we prove \eqref{thm142} and \eqref{thm143}. Let us move to the proof of \eqref{thm144}. The solution $\phi(\xi,E_j)$ satisfies \begin{eqnarray} \log R(\xi,E_j)- \log R(\xi_0,E_j) &=& \frac{1}{2}\int_{\xi_0}^{\xi} (-\frac{5}{36x^2}-\frac{E_j}{cx^{\frac{2}{3}}}-\frac{4M}{ 1+x}\sin2\theta(x,E)) \sin2\theta(x,E_j) dx\nonumber \\ &=& - \int_{\xi_0}^{\xi} \frac{2M}{ 1+x} \sin2\theta(x,E) \sin2\theta(x,E_j) dx\nonumber \\ && +\frac{1}{2}\int_{\xi_0}^{\xi} (- \frac{5}{36x^2}-\frac{E_j}{cx^{\frac{2}{3}}})\sin2\theta(x,E_j) dx\label{GPrufRmar131} \end{eqnarray} and \begin{equation}\label{GPrufeTmar131} \frac{d\theta(\xi,E_j)}{d\xi}=1-Q(\xi,E_j)\sin^2\theta(\xi,E_j). \end{equation} By \eqref{Gtheta1} and \eqref{GPrufeTmar131}, one has \begin{equation*} \int_{\xi_0}^{\xi} ( -\frac{5}{36x^2}-\frac{E_j}{cx^{\frac{2}{3}}})\sin2\theta(x,E_j) dx=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation*} Thus, in order to prove \eqref{thm144}, we only need to prove \begin{equation}\label{Gkeyformar13} \int_{\xi_0}^{\xi} \frac{\sin2\theta(x,E) \sin2\theta(x,E_j) }{ 1+x} dx=\frac{O(1)}{\xi_0^{\frac{1}{3}}}. \end{equation} By Lemma \ref{Keyleboud}, \eqref{Gkeyformar13} is true since $E\notin A$.
Thus $V$ satisfies the conclusions of the theorem except for the regularity. This can be achieved by modifying $V$ slightly near the two endpoints $\xi_0$ and $\xi_1$, while keeping all the bounds. \end{proof} \begin{remark} We can also obtain an explicit formula for $C$ which depends on $|E|$, $|E_j|$ and $|E-E_j|$, $j=1,2,3,\cdots,N$ (see \cite{liuwkb,liu2018absence}). However, those constants $C$ only change the $L^2$ norms by a factor, which does not influence the bounds of the potentials. \end{remark} \section{Universal gluing constructions}\label{UGC} Let $\{E_j\}_{j=1}^N$ be any $N$ different points in $\mathbb{R}$. We will use piecewise-defined functions to complete our construction. Let $B=\{E_j\}_{j=1}^N$ and $$S=100\max_{E_j\in B} \{C(E_j, B\backslash E_j)\},$$ where $C$ is given by Theorem \ref{Twocase}. Let $\{T_w\}_{w\geq 1}$ be a sequence of positive numbers and $J_w=1+N\sum_{j=1}^w T_j$, where $w\in \mathbb{Z}^+$. Let $M>0$ and $J_0=1$. Let $q(x)=0$ for $x\in[0,c]$ (that is, $V(\xi)=0$ for $\xi\in[0,1]$). We can define $u$ on $[0,c]$ as follows: let $u$ be the solution of $Hu=Eu$ on $[0,c]$ with the boundary condition at $0$. Thus we can define $\phi(\xi,E)$ for $\xi\in(0,1]$. Now we will define the function $V$ ($\text{supp}\, V\subset (1,\infty)$) and $\phi(\xi,E_j)$, $j=1,2,\ldots,N$, on $(1,J_w)$ by induction, such that \begin{enumerate}[1.]
\item $\phi(\xi,E_j)$ solves for $\xi\in (0,J_w)$ \begin{align}\label{eigenengj} \left( -\frac{d^2}{d\xi^2} +V(\xi)\right) \phi(\xi,E_j) =E_j \phi(\xi,E_j), \end{align} and satisfies the boundary condition \begin{equation}\label{1boundaryn} \frac{\phi^{\prime}(\frac{1}{2},E_j)}{\phi(\frac{1}{2},E_j)}=\tan\theta_j, \end{equation} \item $\phi(\xi,E_j)$ for $j=1,2,\cdots,N$ and $w\geq 1$, satisfies \begin{equation}\label{eigenjapr} R(J_{w},E_j) \leq (1+\frac{SM}{\sqrt[3]{J_{w-1}}})^{N} (\frac{J_{w-1}+T_{w}}{J_{w-1}})^{-M} R(J_{w-1},E_j), \end{equation} and also for $\xi\in[J_{w-1},J_w]$ \begin{equation}\label{eigenj} R(\xi,E_j) \leq (1+\frac{SM}{\sqrt[3]{J_{w-1}}})^{N} R(J_{w-1},E_j). \end{equation} \item $V(\xi)\in C^{\infty}(0,J_w]$ and for $\xi\in [J_{w-1},J_w]$, one has \begin{equation}\label{controlkr} | V(\xi) |\leq (1+\frac{(N-1)T_{w}}{J_{w-1}})\frac{4M }{\xi}. \end{equation} \end{enumerate} We proceed by an induction argument. Suppose we have completed the construction of $V(\xi)$ at step $w$. Accordingly, we can define $\phi(\xi,E_j)$ on $(1,J_w]$ for all $j=1,2,\cdots,N$ by \eqref{eigenengj} and \eqref{1boundaryn}. Applying Theorem \ref{Twocase} to $\xi_0=J_w$, $\xi_1=J_w+T_{w+1}$, $b=0$, $E=E_1$, $\tan\theta_0=\frac{\phi^\prime(J_w,E_1)}{\phi(J_w,E_1)}$ and $A= B\backslash \{E_1\}$, we can define $V(\xi,E_1,B\backslash \{E_1\},J_w,J_{w}+T_{w+1},0,\theta_0)$ on $\xi\in (J_w, J_w+T_{w+1}]$ and also $\phi(\xi,E_1)$ on $(J_w, J_w+T_{w+1}]$. Since the boundary condition matches at the point $ J_w$ (guaranteed by $\tan\theta_0=\frac{\phi^\prime(J_w,E_1)}{\phi(J_w,E_1)}$), $\phi(\xi,E_1)$ is well defined on $(1,J_w+T_{w+1}]$ and satisfies \eqref{eigenengj} and \eqref{1boundaryn}. We define $\phi(\xi,E_j) $ on $(0,J_{w}+T_{w+1})$ by \eqref{eigenengj} and \eqref{1boundaryn} for all $j=2,3,\cdots,N$. Thus $\phi(\xi,E_j)$ is well defined on $(1,J_w+T_{w+1}]$, and satisfies \eqref{eigenengj} and \eqref{1boundaryn} for all $j=1,2,\cdots,N$.
Moreover, letting $\xi_1=J_w+T_{w+1}$ in Theorem \ref{Twocase}, one has (by \eqref{thm142}) \begin{equation}\label{Gkstep1} R(J_w+T_{w+1},E_1) \leq (1+\frac{SM}{\sqrt[3]{J_w}})(\frac{J_w+T_{w+1}}{J_w})^{-M} R(J_w,E_1), \end{equation} and for all $\xi\in[J_w,J_w+T_{w+1}]$, we have (by \eqref{thm143}) \begin{equation}\label{Gkstep1new} R(\xi,E_1) \leq (1+\frac{SM}{\sqrt[3]{J_w}}) R(J_w,E_1). \end{equation} Suppose we have defined $V$ and $\phi(\xi,E_j)$ for all $j$ on $(0,J_w+tT_{w+1}]$ for some $t\leq N-1$. Let us give the definition on $(0,J_w+(t+1)T_{w+1}]$. Applying Theorem \ref{Twocase} to $\xi_0=J_w+tT_{w+1}$, $\xi_1=J_w+(t+1)T_{w+1}$, $b=tT_{w+1}$, $E=E_{t+1}$, $A=B\backslash E_{t+1}$ and $\tan \theta_0=\frac{\phi^\prime(J_w+tT_{w+1},E_{t+1})}{\phi(J_w+tT_{w+1},E_{t+1})}$, we can define ${V}(\xi,E_{t+1}, B\backslash E_{t+1}, J_w+tT_{w+1},J_w+(t+1)T_{w+1},tT_{w+1},\theta_0)$ on $\xi\in (J_w+tT_{w+1}, J_w+(t+1)T_{w+1})$. Similarly, we can define $\phi(\xi,E_j) $ on $(0,J_{w}+(t+1)T_{w+1}]$ for all $j=1,2,\cdots,N$. Moreover, letting $\xi_1=J_w+(t+1)T_{w+1}$ in Theorem \ref{Twocase}, one has \begin{equation}\label{Gkstept} R(J_w+(t+1)T_{w+1},E_{t+1}) \leq (1+\frac{SM}{\sqrt[3]{J_w}})(\frac{J_w+T_{w+1}}{J_w})^{-M} R(J_w+tT_{w+1},E_{t+1}), \end{equation} and also for $\xi\in [J_w+tT_{w+1},J_w+(t+1)T_{w+1}]$, one has \begin{equation}\label{Gksteptnew} R(\xi,E_{t+1}) \leq (1+\frac{SM}{\sqrt[3]{J_w}}) R(J_w+tT_{w+1},E_{t+1}). \end{equation} By induction, we can define $V(\xi)$ and $\phi(\xi,E_j)$ for all $j=1,2,\cdots,N$ on $(0,J_w+NT_{w+1}]=(0,J_{w+1}]$. Now we show that the definition satisfies the conditions \eqref{eigenengj}-\eqref{controlkr} at step $w+1$. Consider $R(\xi,E_j)$ for some $E_j\in B$. $R(\xi,E_j)$ decreases from point $J_w+(j-1)T_{w+1}$ to $J_w+jT_{w+1}$, and may increase from any point $J_w+(m-1)T_{w+1}$ to $J_w+mT_{w+1}$, $m=1,2,\cdots,N$ and $m\neq j$.
That is \begin{equation*} R(J_w+jT_{w+1},E_j)\leq (1+\frac{SM}{\sqrt[3]{J_w}})(\frac{J_w+T_{w+1}}{J_w})^{-M} R(J_w+(j-1)T_{w+1},E_{j}), \end{equation*} and for $m\neq j$, \begin{equation*} R(J_w+mT_{w+1},E_j)\leq (1+\frac{SM}{\sqrt[3]{J_w}}) R(J_w+(m-1)T_{w+1},E_{j}), \end{equation*} by Theorem \ref{Twocase}. Thus for $j=1,2,\cdots,N$, \begin{equation*} R(J_{w+1},E_j)\leq (1+\frac{SM}{\sqrt[3]{J_w}})^{N} (\frac{J_w+T_{w+1}}{J_w})^{-M} R(J_{w},E_j). \end{equation*} This leads to \eqref{eigenjapr}. By the same arguments, we have for $j=1,2,\cdots,N$ and $\xi\in[J_w,J_{w+1}]$, \begin{equation*} R(\xi,E_j)\leq (1+\frac{SM}{\sqrt[3]{J_w}})^{N} R(J_{w},E_j). \end{equation*} This implies (\ref{eigenj}) for $w+1$. By the construction of $V(\xi)$, we have for $\xi\in[J_w+tT_{w+1},J_w+(t+1)T_{w+1}]$ and $0\leq t\leq N-1$, \begin{equation}\label{b'1} |V(\xi)| \leq \frac{4M}{\xi-tT_{w+1}}. \end{equation} In order to prove \eqref{controlkr}, it suffices to show for all $\xi\in[J_w+tT_{w+1},J_w+(t+1)T_{w+1}]$, \begin{equation*} \frac{1}{\xi-tT_{w+1}}\leq (1+\frac{(N-1)T_{w+1}}{J_w}) \frac{1}{\xi}. \end{equation*} It suffices (we only need check $\xi=J_w+tT_{w+1}$) to prove \begin{equation*} \frac{1}{J_w}\leq (1+\frac{(N-1)T_{w+1}}{J_w}) \frac{1}{J_w+t T_{w+1}}. \end{equation*} Since $t\leq N-1$, we only need to show \begin{equation*} \frac{1}{J_w}\leq (1+\frac{(N-1)T_{w+1}}{J_w}) \frac{1}{J_w+(N-1) T_{w+1}}, \end{equation*} which is true by a direct calculation. \section{Proof of Theorems \ref{Mainthm3} and \ref{Mainthm4}, and all the corollaries}\label{Twoapp} \begin{proof}[\bf Proof of Theorem \ref{Mainthm3}] The case $N=1$ has been addressed in Theorem \ref{Mainthm2}. Suppose $N\geq 2$. Let $\epsilon=\frac{1}{\sqrt{\ln N}}$ and $M=\frac{1}{6}+\frac{1}{6\epsilon}$.
For $w\in \mathbb{Z}^+,$ let $T_w=N^{(1+\epsilon)w}$ so that $$J_w=1+N\sum_{i=1}^w N^{(1+\epsilon)i}=1+N\frac{N^{(1+\epsilon)(w+1)}-N^{1+\epsilon}}{N^{1+\epsilon}-1}.$$ It is easy to check that \begin{equation*} \lim_{w\to \infty}\frac{J_w+T_{w+1}}{J_w}=N^{\epsilon}-\frac{1}{N}+1. \end{equation*} For large $w$, say $w\geq w_0$, one has \begin{equation*} \frac{J_w+T_{w+1}}{J_w}\geq N^{\epsilon}+\frac{1}{4}. \end{equation*} Thus by \eqref{eigenjapr}, we have for $w\geq w_0$, \begin{equation*} R(J_{w},E_j) \leq (1+\frac{SM}{\sqrt[3]{J_w}})^{N}(N^{\epsilon}+\frac{1}{4})^{- M} R(J_{w-1},E_j), \end{equation*} and then \begin{equation*} R(J_{w},E_j) \leq (1+\frac{SM}{\sqrt[3]{J_w}})^{N(w-w_0)}(N^{\epsilon}+\frac{1}{4})^{- M(w-w_0)} R(J_{w_0},E_j). \end{equation*} By \eqref{eigenj}, we have for $\xi\in[J_{w},J_{w+1}]$, \begin{equation}\label{eigenjnew} R(\xi,E_j) \leq (1+\frac{SM}{\sqrt[3]{J_w}})^{N(w+1-w_0)}(N^{\epsilon}+\frac{1}{4})^{-M(w-w_0)} R(J_{w_0},E_j). \end{equation} Let $\delta_w=\frac{1}{\sqrt{w}}$ and $p_{\delta_w}=3-\delta_w$, and let $q_{\delta_w}$ be such that $\frac{1}{p_{\delta_w}}+\frac{1}{q_{\delta_w}}=1$. Then $q_{\delta_w}=\frac{3}{2}+\frac{\delta_w}{4-2\delta_w}$. By H\"older's inequality, one has \begin{equation}\label{Gapr142} \int_{J_{w+1}}^{J_{w+2}} R^2(\xi,E_j)p(\xi) d\xi\leq (\int_{J_{w+1}}^{J_{w+2}} R^{2p_{\delta_w}}(\xi,E_j) d\xi)^{\frac{1}{p_{\delta_w}}}(\int_{J_{w+1}}^{J_{w+2}}p(\xi)^{q_{\delta_w}} d\xi)^{\frac{1}{q_{\delta_w}}}. \end{equation} Direct computations show that \begin{eqnarray} \int_{J_{w+1}}^{J_{w+2}}p(\xi)^{q_{\delta_w}} d\xi &\leq & O(1)\frac{1}{\delta_w} \frac{1}{J_{w+1}^{\frac{1}{10}\delta_w}}\nonumber\\ &=& O(1) \frac{\sqrt{w}} { N^{\frac{1+\epsilon}{10} \sqrt{w} }} \nonumber\\ &=& O(1).\label{Gapr151} \end{eqnarray} It is easy to see \begin{equation}\label{Gapr152} (1+\frac{SM}{\sqrt[3]{J_{w+1}}})^{N2p_{\delta_w}w}=O(1).
\end{equation} By \eqref{eigenjnew} and \eqref{Gapr152}, one has \begin{eqnarray} \int_{J_{w+1}}^{J_{w+2}} R^{2p_{\delta_w}}(\xi,E_j) d\xi &\leq& (1+\frac{SM}{\sqrt[3]{J_{w+1}}})^{N2p_{\delta_w}(w+2-w_0)}(N^{\epsilon}+\frac{1}{4})^{-2p_{\delta_w} M(w+1-w_0)} R(J_{w_0},E_j)^{2p_{\delta_w}}\int_{J_{w+1}}^{J_{w+2}} d\xi\nonumber \\ &\leq& O(1)(N^{\epsilon}+\frac{1}{4})^{-2p_{\delta_w} M(w-w_0)} R(J_{w_0},E_j)^{2p_{\delta_w}} NT_{w+1} \nonumber \\ &\leq& O(1)(N^{\epsilon}+\frac{1}{4})^{-2p_{\delta_w} Mw} N^{(1+\epsilon)w}\nonumber \\ &=& O(1)(1+ \frac{1}{4N^{\epsilon}})^{-2p_{\delta_w} Mw} N^{-2\epsilon p_{\delta_w} Mw} N^{(1+\epsilon)w}\nonumber \\ &=& O(1)(1+ \frac{1}{4N^{\epsilon}})^{-2p_{\delta_w} Mw}N^{2\epsilon \delta_w Mw} N^{-6\epsilon Mw} N^{(1+\epsilon)w} \nonumber\\ &=& O(1)(1+ \frac{1}{4N^{\epsilon}})^{-2p_{\delta_w} Mw}N^{2\epsilon \delta_w Mw} ,\label{Gapr153} \end{eqnarray} where the last equality holds by the fact $6\epsilon M=1+\epsilon$. Direct computation shows \begin{eqnarray} \sum_{w=w_0}^{\infty}\left [(1+ \frac{1}{4N^{\epsilon}})^{-2p_{\delta_w} Mw}N^{2\epsilon \delta_w Mw}\right]^{\frac{1}{p_{\delta_w}}} &=& \sum_{w=w_0}^{\infty} \left[(1+ \frac{1}{4N^{\epsilon}})^{-2p_{\delta_w} Mw}N^{2\epsilon M\sqrt{w}}\right]^{\frac{1}{p_{\delta_w}}} \nonumber \\ &\leq & \sum_{w=w_0}^{\infty}\left[(1+ \frac{1}{4N^{\epsilon}})^{-4 Mw}N^{2\epsilon M\sqrt{w}}\right]^{\frac{1}{4}}\nonumber \\ &\leq &\sum_{w=w_0}^{\infty}(1+ \frac{1}{4N^{\epsilon}})^{- Mw}N^{\frac{\epsilon}{2} M\sqrt{w}}\nonumber \\ &<&\infty.\label{Gapr154} \end{eqnarray} By \eqref{Gapr142}, \eqref{Gapr151}, \eqref{Gapr153} and \eqref{Gapr154}, we have $R(\xi,E_j)\in L^2(\mathbb{R}^+, p(\xi)d\xi)$ for all $j=1,2,\cdots,N$.
This implies $E_j$ is an eigenvalue, $j=1,2,\cdots,N$. By the fact $\lim _{w\to\infty}\frac{T_{w+1}}{J_w}=N^{\epsilon}-\frac{1}{N}$, and \eqref{controlkr}, one has \begin{eqnarray} \limsup _{\xi\to \infty}\xi|V(\xi)|&\leq& 4(1+(N-1)(N^{\epsilon}-\frac{1}{N}))M \nonumber\\ &=& \frac{2}{3} (1+(N-1)(N^{\epsilon}-\frac{1}{N})) (1+\frac{1}{\epsilon})\nonumber\\ &\leq& \frac{2}{3} N^{1+\epsilon} (1+\frac{1}{\epsilon})\nonumber\\ &\leq& \frac{2}{3} N^{1+\epsilon} e^{\frac{1}{\epsilon}}\nonumber\\ &=& \frac{2}{3}e^{2\sqrt{\ln N}}N. \end{eqnarray} By \eqref{Gv}, we have \begin{equation*} \limsup _{\xi\to \infty}\sqrt{\xi}|q(\xi)|\leq e^{2\sqrt{\ln N}}N. \end{equation*} We finish the proof. \end{proof} Let $M=100$ and $K=100CM$ in Theorem \ref{Twocase}. We get \begin{proposition}\label{Twocase1} Let $E\in \mathbb{R}$ and $ A=\{{E}_j\}_{j=1}^k$. Suppose $E\notin A$ and $\{E_j\}_{j=1}^k$ are distinct. Suppose $\theta_0\in[0,\pi]$. Let $\xi_1>\xi_0>b$. Then there exist constants $K(E, A)$, $C(E, A)$ (independent of $b, \xi_0$ and $\xi_1$) and a potential $ V(\xi,E,A,\xi_0,\xi_1,b,\theta_0)$ such that for $\xi_0-b>K(E,A)$ the following holds: \begin{description} \item[Potential] for $\xi_0\leq \xi \leq \xi_1$, ${\rm supp}( V)\subset(\xi_0,\xi_1)$, $ V\in C^{\infty}(\xi_0,\xi_1)$, and \begin{equation*} | V(\xi,E,A,\xi_0,\xi_1,b,\theta_0)|\leq \frac{C(E, A)}{\xi-b}. \end{equation*} \item[Solution for $E$] Let $Q(\xi,E)$ be given by \eqref{GPQ1mar13}. Then the solution of $(-D^2+Q(\xi,E))\phi=\phi$ with boundary condition $\frac{\phi^\prime(\xi_0)}{\phi(\xi_0)}=\tan\theta_0$ satisfies \begin{equation*} R(\xi_1,E)\leq 2(\frac{\xi_1-b}{\xi_0-b})^{-100} R(\xi_0,E) \end{equation*} and for $\xi_0<\xi<\xi_1$, \begin{equation*} R(\xi,E)\leq 2R(\xi_0,E). \end{equation*} \item[Solution for ${E}_j$] Let $Q(\xi,E_j)$ be given by \eqref{GPQ1mar13}.
Then the solution of $(-D^2+ Q(\xi,E_j))\phi=\phi$ with any boundary condition at $\xi_0$ satisfies for $\xi_0<\xi\leq \xi_1$, \begin{equation*} R(\xi,{E}_j)\leq 2R(\xi_0,{E}_j). \end{equation*} \end{description} \end{proposition} \begin{proof}[\bf Proof of Theorem \ref{Mainthm4}] Once we have Proposition \ref{Twocase1}, we can prove Theorem \ref{Mainthm4} by the arguments in \cite{ld1,jl}. We omit the details here. \end{proof} \begin{proof}[\bf Proof of all the Corollaries] Corollaries \ref{cor1} and \ref{cor3} follow from Theorems \ref{Mainthm1} and \ref{Maintheoremapr3} respectively. Let $u(x,E) $ be the solution of $\widetilde{H}u=Eu$ on $(-\infty,0]$ such that $u\in L^2(-\infty,0]$ (this can be guaranteed by Lemma \ref{Keyle2}). Let $\theta_E=\frac{u^{\prime}(0,E)}{u(0,E)}$. Instead of using boundary condition $\theta $ in the previous arguments, we use $\theta_E$. Now Corollaries \ref{cor2}, \ref{cor4} and \ref{cor5} follow from Theorems \ref{Mainthm2}, \ref{Mainthm3} and \ref{Mainthm4}. \end{proof} \section{Proof of general cases}\label{General} In this section, we will adapt our proof for $\alpha=1$ to general $\alpha\in(0,2)$. Take $v(x)=v_{\alpha}(x)=x^{\alpha}$ in \eqref{Gsch}-\eqref{GPQ} and let $c_{\alpha}=(1+\frac{\alpha}{2})^{\frac{2}{2+\alpha}}$. In the general cases, we have \begin{equation}\label{GLiou1new} x=c_{\alpha}\xi^{\frac{2}{2+\alpha}}, \phi_{\alpha}(\xi,E)=c_{\alpha}^{\frac{\alpha}{4}}\xi^{\frac{\alpha}{2(2+\alpha)}}u(c_{\alpha}\xi^{\frac{2}{2+\alpha}}), \end{equation} \begin{equation}\label{GWei1new} p_{\alpha}(\xi)= \frac{1}{c_{\alpha}^{\alpha}\xi^{\frac{2\alpha}{2+\alpha}}}, \end{equation} and \begin{equation}\label{GPQ1oldnew} Q_{\alpha}(\xi,E)= -\frac{5}{4}\frac{\alpha^2}{(2+\alpha)^2}\frac{1}{\xi^2}+\frac{\alpha(\alpha-1)}{(2+\alpha)^2}\frac{1}{\xi^{2}}+\frac{q(c_{\alpha}\xi^{\frac{2}{2+\alpha}})-E}{c_{\alpha}^{\alpha}\xi^{\frac{2\alpha}{2+\alpha}}}.
\end{equation} Let \begin{equation}\label{Gvaprnew} V_{\alpha}(\xi)=\frac{q(c_{\alpha}\xi^{\frac{2}{2+\alpha}})}{c_{\alpha}^{\alpha}\xi^{\frac{2\alpha}{2+\alpha}}}. \end{equation} Then \begin{eqnarray*} Q_{\alpha}(\xi,E) &=& -\frac{5}{4}\frac{\alpha^2}{(2+\alpha)^2}\frac{1}{\xi^2}+\frac{\alpha(\alpha-1)}{(2+\alpha)^2}\frac{1}{\xi^{2}}- \frac{E}{c_{\alpha}^{\alpha}\xi^{\frac{2\alpha}{2+\alpha}}}+V_{\alpha}(\xi). \end{eqnarray*} Suppose $u\in L^2(\mathbb{R}^+)$ is a solution of \eqref{Gsch} with $v(x)=x^{\alpha}$. It follows that $\phi_{\alpha}$ satisfies \begin{equation}\label{Gschxinew} -\frac{d^2\phi_{\alpha}}{d\xi^2}+Q_{\alpha}(\xi,E)\phi_{\alpha}=\phi_{\alpha}. \end{equation} Now, all the quantities such as $Q_{\alpha}(\xi,E)$ and $c_{\alpha}$ depend on $\alpha$. In order to carry out the proof in a similar way, two components are essential: (1) all the oscillatory integral estimates still hold; (2) the computation of the $\alpha$-dependent constants in the main theorems. It is convenient to employ a slightly different Pr\"ufer transformation. This standard trick has been used to deal with Stark operators before (p.~10 in \cite{christ2003absolutely}). Let $H_{\alpha}(\xi,E)=- \frac{E}{c_{\alpha}^{\alpha}\xi^{\frac{2\alpha}{2+\alpha}}}$. The new Pr\"{u}fer transformation is given by \begin{equation}\label{GPruf1new} \sqrt{1-H_{\alpha}(\xi,E)}\phi_{\alpha}(\xi,E)=R_{\alpha}(\xi,E)\sin\theta_{\alpha}(\xi,E), \end{equation} and \begin{equation}\label{GPrufnew} \frac{d \phi_{\alpha}(\xi,E)}{d \xi}=R_{\alpha}(\xi,E)\cos\theta_{\alpha}(\xi,E).
\end{equation} By \eqref{Gschxinew}, we have \begin{equation}\label{GPrufRmar14new} \frac{d\log R_{\alpha}(\xi,E)}{d\xi}=\frac{1}{2}\frac{V_{\alpha}(\xi)}{\sqrt{1-H_{\alpha}(\xi,E)}}\sin2\theta_{\alpha}(\xi,E)+ O(\frac{1}{\xi^{1+\frac{2\alpha}{2+\alpha}}}) \end{equation} and \begin{equation}\label{GPrufeTmar14new} \frac{d\theta_{\alpha}(\xi,E)}{d\xi}=\sqrt{1-H_{\alpha}(\xi,E)}-\frac{V_{\alpha}(\xi)}{\sqrt{1-H_{\alpha}(\xi,E)}}\sin^2\theta_{\alpha}(\xi,E)+ O(\frac{1}{\xi^{1+\frac{2\alpha}{2+\alpha}}}). \end{equation} Since in this paper $ |V_{\alpha}(\xi)|\leq \frac{h(\xi)}{1+\xi}$ for some $h(\xi)$ with $h(\xi)\to \infty$ as $\xi\to \infty$, one has \begin{equation*} V_{\alpha}(\xi)=\frac{O(1)}{\xi^{1-\frac{\alpha}{2+\alpha}}}. \end{equation*} Finally, \eqref{GPrufRmar14new} and \eqref{GPrufeTmar14new} become \begin{equation}\label{GPrufRmar14new1} \frac{d\log R_{\alpha}(\xi,E)}{d\xi}=\frac{1}{2} V_{\alpha}(\xi) \sin2\theta_{\alpha}(\xi,E)+ O(\frac{1}{\xi^{1+\frac{\alpha}{2+\alpha}}}), \end{equation} and \begin{equation}\label{GPrufeTmar14new1} \frac{d\theta_{\alpha}(\xi,E)}{d\xi}=\sqrt{1-H_{\alpha}(\xi,E)}- V_{\alpha}(\xi) \sin^2\theta_{\alpha}(\xi,E)+ O(\frac{1}{\xi^{1+\frac{\alpha}{2+\alpha}}}). \end{equation} Since the tail $O(\frac{1}{\xi^{1+\frac{\alpha}{2+\alpha}}})$ in \eqref{GPrufRmar14new1} and \eqref{GPrufeTmar14new1} is integrable, it only changes the estimates by a constant factor. Under the new Pr\"ufer transformation and following the arguments of the proof for $\alpha=1$, it is not hard to verify that all the oscillatory integral estimates still hold. Now we explain the $\alpha$-dependent constants in the main theorems. There are two $\alpha$-dependent constants: the power decay rate (this constant is fixed in every theorem and equals $1-\frac{\alpha}{2}$) and the constants in front of $x^{1-\frac{\alpha}{2}}$.
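The exponent bookkeeping in \eqref{Gvaprnew} can be checked numerically: for a potential with critical decay $q(x)=Ax^{-(1-\frac{\alpha}{2})}$ one gets $\xi V_{\alpha}(\xi)\to \frac{2A}{2+\alpha}$, using $c_{\alpha}^{1+\frac{\alpha}{2}}=1+\frac{\alpha}{2}$. The following is a sketch with the illustrative values $A=1$ and $\alpha=0.7$ (both assumptions, not fixed by the paper).

```python
import math

# Numerical check of the Liouville-transform bookkeeping.
# A = 1.0 and alpha = 0.7 are illustrative assumptions.
A, alpha = 1.0, 0.7
c = (1 + alpha / 2) ** (2 / (2 + alpha))          # c_alpha

def V(xi):
    # V_alpha(xi) = q(c * xi^{2/(2+alpha)}) / (c^alpha * xi^{2 alpha/(2+alpha)})
    x = c * xi ** (2 / (2 + alpha))
    q = A * x ** (-(1 - alpha / 2))               # critical-decay potential
    return q / (c ** alpha * xi ** (2 * alpha / (2 + alpha)))

xi = 1.0e8
print(xi * V(xi), 2 * A / (2 + alpha))            # both ≈ 2A/(2+alpha)
assert abs(xi * V(xi) - 2 * A / (2 + alpha)) < 1e-6
# identity used above: c_alpha^{1+alpha/2} = 1 + alpha/2
assert abs(c ** (1 + alpha / 2) - (1 + alpha / 2)) < 1e-12
```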
By \eqref{GPrufRmar14new1} and the proof of $\alpha=1$, the critical case of $V_{\alpha}(\xi)$ is $\frac{O(1)}{\xi}$. By the relation \eqref{Gvaprnew}, it is easy to see that $ \frac{O(1)}{x^{1-\frac{\alpha}{2}}}$ is the critical case for $q(x)$. This explains where the constant $1-\frac{\alpha}{2}$ comes from. Now we are in a position to explain the constants in front of $x^{1-\frac{\alpha}{2}}$. Since the constants are different in different theorems, we need to check them one by one. Let us first check the constant $\frac{2-\alpha}{4} \pi $ in Theorems \ref{Mainthm1} and \ref{Mainthm2}. Here are the details. Suppose $q(x)\approx\frac{A}{ x^{1-\frac{\alpha}{2}}}$. By \eqref{Gvaprnew}, $ V_{\alpha}(\xi)\approx \frac{2A}{2+\alpha}\frac{1}{\xi}$. Similar to \eqref{Gmar311N} and \eqref{Gmar311}, \begin{equation}\label{Nov111} R^2_{\alpha}(\xi,E)\approx\xi^{-\frac{4A}{\pi(2+ \alpha)}}. \end{equation} Since the critical case of $R^2_{\alpha}(\xi,E)p_{\alpha}(\xi)$ is $\frac{O(1)}{\xi}$ ($\frac{1}{\xi^{1+\delta}}$ is integrable and $\frac{1}{\xi^{1-\delta}}$ is not integrable for $\delta>0$), we have that the critical case for $R^2_{\alpha}(\xi,E)$ is \begin{equation}\label{Nov110} R^2_{\alpha}(\xi,E) \approx \frac{O(1)}{p_{\alpha}(\xi)\xi}= O(1) \xi^{-\frac{2-\alpha}{2+\alpha}} . \end{equation} By \eqref{Nov111} and \eqref{Nov110}, we have $A=\frac{2-\alpha}{4}\pi.$ This finishes the check for Theorems \ref{Mainthm1} and \ref{Mainthm2}. For the rest of the constants in the main theorems, a similar check applies. Actually, from \eqref{Nov111} and \eqref{Nov110}, we can see that only the ratio $\frac{A}{2-\alpha}$ matters, and this fact also holds for the rest of the theorems. Since the ratio $\frac{A}{2-\alpha}$ does not depend on $\alpha$, comparing with the constants for $\alpha=1$ shows that the constants in all the theorems are correct.
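The exponent matching can also be verified numerically over a grid of $\alpha$ values (a sketch; the grid is an arbitrary illustrative choice): equating the decay rate $\frac{4A}{\pi(2+\alpha)}$ from \eqref{Nov111} with the critical rate $\frac{2-\alpha}{2+\alpha}$ from \eqref{Nov110} yields $A=\frac{2-\alpha}{4}\pi$, with $\frac{A}{2-\alpha}=\frac{\pi}{4}$ independent of $\alpha$.

```python
import math

# Equate the decay exponent 4A/(pi(2+alpha)) from (Nov111) with the
# critical exponent (2-alpha)/(2+alpha) from (Nov110) and solve for A.
# The grid of alpha values is an illustrative assumption.
for alpha in [0.3, 0.7, 1.0, 1.5, 1.9]:
    A = math.pi * (2 + alpha) / 4 * (2 - alpha) / (2 + alpha)
    assert math.isclose(A, (2 - alpha) * math.pi / 4)
    # the ratio A/(2-alpha) = pi/4 does not depend on alpha
    assert math.isclose(A / (2 - alpha), math.pi / 4)
print("checked: A = (2-alpha)*pi/4 and A/(2-alpha) = pi/4 for all alpha")
```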
\section{Introduction.} \indent As is well known, one can distinguish two regimes in the phase separation process following the formation of nuclei. The initial stage is the growth of these nuclei by the condensation of solute on their surface. The second stage is known as the {\it Ostwald Ripening (OR) \/} process, where particles flow from shrinking clusters to growing ones. The kinetics of domain growth in the late stages of diffusion-limited spinodal decomposition ($OR$) have been studied by a variety of methods \cite{1,2,3,4,5,6,7,8}. The {\it Lifshitz, Slyozov and Wagner (LSW) \/} theory predicts that the average droplet radius $R$ grows with time $t$ as $R(t)=\Gamma t^\alpha $, where $\Gamma =const.$ and $\alpha =1/3$, and that the distribution of droplet sizes reaches a material-independent universal form when properly scaled. Most simulations and experiments measure $\alpha $ exponents in the range $0.15$ to $0.25$ \cite{9,10,11,12,13,14,15,16,17,18,19,20,21,22}, below the theoretical value $\alpha =1/3$. Moreover, the measured size distributions are typically broader than the $LSW$ prediction. This discrepancy has been attributed either to diffusion effects at an interface, to the inadequacy of the mean-field description of the systems, or to insufficiently long simulation times. Several authors have developed improved theoretical models that take into account interaction effects \cite{3}. These models involve expansions in powers of the parameter $\sqrt{\phi }$ (where $\phi $ is the volume fraction of the minority phase), whose importance was first recognized by {\it Tokuyama \/} and {\it Kawasaki \/} \cite{23}.
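For orientation, the power law $R(t)=\Gamma t^{\alpha}$ can be recovered from radius-versus-time data by a log-log least-squares fit; the sketch below uses synthetic noiseless data with the assumed values $\Gamma=2$ and $\alpha=1/3$ (illustrative only, not data from the literature).

```python
import math

# Recover the coarsening exponent alpha from R(t) = Gamma * t^alpha
# by a least-squares fit in log-log coordinates.
# Gamma = 2.0 and alpha = 1/3 are illustrative assumptions.
Gamma, alpha = 2.0, 1.0 / 3.0
ts = [10.0 * k for k in range(1, 21)]
Rs = [Gamma * t ** alpha for t in ts]

xs = [math.log(t) for t in ts]
ys = [math.log(R) for R in Rs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(slope)  # ≈ 1/3, the LSW exponent
assert abs(slope - 1.0 / 3.0) < 1e-9
```

With noisy experimental radii the same fit yields the measured effective exponent, which is how values in the range $0.15$--$0.25$ are extracted.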
To first order in $\sqrt{\phi }$, interactions give rise to two types of corrections: direct correlations between droplet pairs, whereby small droplets are likely to be surrounded by large ones, as well as a ``medium polarization'' in which the rate of evolution of a droplet is not only a function of its radius $R$, but also of the droplets within a neighborhood of size $\xi $. The models reproduce the broadening of experimental distributions while predicting that correlations do not alter the value of the $LSW$ exponent $\alpha $, just as observed in experiments. Recent experimental results \cite{24} have revealed the above types of correlation effects in the two-dimensional coarsening process. Thus, according to the direct correlation effect, small (large) clusters are more likely to be found near large (small) ones. The other correlation effect, experimentally observed, is a medium polarization according to which the rate of change of the cluster size is determined not only by its size but also by the influence of others in its surroundings. Thus, the medium polarization around two nearby clusters promotes the accelerated shrinkage of one and growth of the other, the rates of shrinkage and growth being larger than if both clusters were isolated. Taking into account these correlation effects, it has been proven \cite{25} that within an {\it off-centre diffusion \/} approach \cite{26,27} the temporal law of the $OR$ process in two dimensions can be, under some circumstances, different from that predicted by the $LSW$ theory. The purpose of this paper is an examination of cluster growth where the dynamics is dominated by the coarsening process. As in the previous case of the $OR$ in two dimensions \cite{25}, we will suppose that the particle transport from the shrinking cluster to the growing one occurs by an {\it off-centre diffusion \/} mechanism \cite{26,27}.
The identification of the {\it Markovian chains \/} (as within the {\it Flux over Population Method \/} \cite{28}) is based on a local feature of the medium, according to which a small cluster (which disappears during the $OR$ process) is likely to be surrounded by larger ones (which grow by the incorporation of mass into them). In this way, the diffusion solution will be determined as a function of both the growing and shrinking cluster sizes. Also, it is shown that the transfer frequency of particles between the shrinking cluster and the growing one may acquire high values due to the medium polarization. Particular properties of the clusters are included in the model. As a result, the temporal power law of the cluster growth derived in this theoretical model differs again, as in the two-dimensional case, from that predicted by the {\it LSW \/} theory. Some experimental results on the growth of $Ag$ clusters embedded in a $KCl$ matrix will be analyzed by the present theory. It must be pointed out that the present approach to the three-dimensional $OR $ process can work only under the assumption that the correlation effects occur (especially, the direct correlation between the cluster sizes). This can be the case if the clusters nucleate and grow at dislocation lines and/or at grain boundaries (a situation frequently assumed for clusters embedded in a solid matrix \cite{29}). Far from elucidating the controversy regarding the general theory of the $OR$ process (especially the famous $\frac 13$ exponent), the {\it off-centre diffusion \/} approach to $OR$ at least gives a real way to account for the correlation effects. Moreover, it is distinguished from the other approaches in this branch by the fact that the clusters act as entities in themselves and, consequently, the temporal power law of $OR$ is derived in connection with their particular properties \cite{25}.
\topmargin=-0.5in \baselineskip=24pt \section{Off-centre Diffusion Approach of the $OR$ Process.} \indent As we have said in the previous section, according to the direct correlation effect within the two-dimensional $OR$ process experimentally observed by {\it Krichevsky\/} and {\it Stavans \/} \cite{24}, small (large) clusters are more likely to be found near large (small) ones. Supposing that this fact also holds in three dimensions, let us examine the effect of such a correlation on the diffusion process in the $OR$ phenomenon. Consequently, we will assume the situation shown in Fig. 1, where there exists a set of $N$ sites slightly displaced around the position of the large cluster and only $N_o$ sites ($N_o<N$) around the position of the small cluster. These off-centre sites (the ``kinks'' of the cluster surface) serve both for the particle motion on the cluster surface \cite{30} and as available sites for the particle transfer from the shrinking (small) cluster to the growing (large) cluster \cite{25,26,27}. In the following, we assume that the motion among the available sites of the same cluster is described by the frequency $p_o$, while the transfer frequency from the shrinking (small) cluster to the growing (large) one is $p$, with $p_o\gg p$. In this way, the particles leave the ``kink'' sites of the surface of the shrinking cluster and ``condense'' in the ``kink'' sites of the nearest-neighbour growing cluster. The equations for the $N_o$ concentrations $n_i(x,t)$, $i=\overline{ 1,N_o}$, can readily be written as: \begin{eqnarray} & &\frac{\partial n_1}{\partial t} = p_o \sum_{i=1}^{N_o} (n_i - n_1) + p \sum_{j=1}^{N} \left( n_j (x + \xi) - n_1 \right) \\ \nonumber & &\vdots \\ \nonumber & &\frac{\partial n_{N_o}}{\partial t} = p_o \sum_{i=1}^{N_o} (n_i - n_{N_o}) + p \sum_{j=1}^{N} \left( n_j (x + \xi) - n_{N_o} \right) . \nonumber \end{eqnarray} We perform the power-series expansion of the concentration function in (1).
Since, as usual in diffusion processes, we are interested only in slowly varying functions in time and space \cite{26,27}, we may neglect the terms containing the {\it first-order \/} derivatives in the series expansion of (1). Equations (1) can, therefore, be approximated by \begin{eqnarray} & &\frac{\partial n_1}{\partial t} = p_o \sum_{i=1}^{N_o} (n_i - n_1) + N \frac{1}{2} p \xi^2 \frac{\partial^2 n_1}{\partial x^2} \\ \nonumber & &\vdots \\ \nonumber & &\frac{\partial n_{N_o}}{\partial t} = p_o \sum_{i=1}^{N_o} (n_i - n_{N_o}) + N \frac{1}{2} p \xi^2 \frac{\partial^2 n_{N_o}}{\partial x^2} , \nonumber \end{eqnarray} whose Fourier transforms read \begin{eqnarray} & &n_{1q} \left(\omega - N\frac{1}{2}p\xi^2q^2 \right) + n_{2q}p_o + \ldots + n_{N_oq}p_o = 0 \\ \nonumber & &\vdots \\ \nonumber & &n_{1q}p_o + \ldots + n_{(N_o - 1)q}p_o + n_{N_oq} \left(\omega - N\frac{1}{2}p\xi^2q^2 \right) = 0 . \nonumber \end{eqnarray} Looking at the system of equations (2) and at their {\it Fourier transforms \/}, we can see that the problem amounts to finding the lowest eigenvalue of a system of equations which has the general matrix form \cite{27}, \begin{equation} A=p_oA_o+pA_1+A_2. \end{equation} Here, $A_o$ describes the diffusion among the off-centre sites belonging to the same cluster; $A_1$ corresponds to the {\it second-order \/} expansion of the concentration functions; and $A_2$ includes the {\it higher-order \/} contributions of the derivatives. We note that in the long-wavelength limit $A_2$ vanishes. The lowest eigenvalue can be obtained by perturbation theory \cite{27} and is given by \begin{equation} \omega =\overline{n}A_1n=\frac 12Np\xi ^2q^2, \end{equation} where $n$ is the (column) vector adjoint to the eigenvector \begin{equation} \overline{n}=N^{\frac{-1}2}\left( 1,1,\cdots ,1\right) . \end{equation} The diffusion solution is given by \begin{equation} n(x,t)=\frac{n_0}{\sqrt{2\pi Np\xi ^2t}}\cdot \exp {(-\frac{x^2}{2\cdot Np\xi ^2t})}.
\end{equation} This equation gives the particle number per unit length at the time $t$ and at the distance $x$ due to the diffusion of an initial $\delta $-form concentration of particles. $N$ is directly proportional to the cluster surface and, in a crude approximation, can be expressed by \begin{equation} N\approx \frac 12\cdot \frac{R^2}{a_o^2}, \end{equation} where $a_o$ is the atom radius and $R$ the radius of the growing cluster. The above equation establishes that, due to the geometrical obstructions (see Fig. 1), only half of the peripheral off-centre sites (the ``kinks'') are available to receive diffusing particles. The diffusing particles come, in the $OR$ process, from the shrinking cluster, and we must take its dissociation rate into account. As is well known, the cohesive energy per atom decreases with decreasing cluster size \cite{31} and, therefore, the dissociation rate for shrinking clusters becomes considerably greater in comparison with growing ones. This is important to gain physical insight into the $OR$ process. The shrinking or growing of a cluster begins from a critical radius that depends on its size. A common definition of the critical radius states that it is the radius of a droplet which is instantaneously neither growing nor shrinking. The dissociation rate is related, in the $RRK$ theory {\it (Rice, Ramsperger, Kassel) \/} \cite{32}, to both the thermal energy $E_o=E_o(T)$ ($ T$ stands for the temperature) and the dissociation energy of the particle $ E_D$, \begin{equation} K(T)=\nu \left[ \frac{E_o-E_D}{E_o}\right] ^{s-1}\,. \end{equation} Here, $\nu $ is the vibrational frequency and $s$ is the number of vibrational degrees of freedom of the cluster. It seems that the excitation of the cluster ultimately causes heating and dissociation, and that to a large extent the excitation mechanism is decoupled from the dissociation.
Thus, with such a simplification, the dissociation rate can be calculated by (9). In this way, we may find the total amount of dissociated particles from the shrinking (small) cluster during the thermal annealing as \begin{equation} n_o=W\cdot K(T)\cdot t, \end{equation} where $W$ accounts for the surface atoms of the cluster and $t$ is the time of the thermal annealing. Further, the $n_o$ entering equation (7) is replaced by the above amount. The other correlation effect, theoretically assumed in \cite{23} and experimentally observed in two-dimensional $OR$ \cite{24}, is a medium polarization according to which the rate of change of the cluster size is determined not only by its size but also by the influence of others in its surroundings. Thus, the medium polarization around two nearby clusters promotes the accelerated shrinkage of one and growth of the other, the rates of shrinkage and growth being larger than if both clusters were isolated. The medium polarization is due to the electrostatic interaction between the charges associated with each shrinking (negative charge) or growing (positive charge) cluster. This charge is proportional to the rate of change of the cluster area. The medium polarization consists in the appearance of an electrostatic potential \begin{equation} \Phi (r) = C \frac{e^{-\frac{r}{D}}}{r}, \end{equation} as a solution of the {\it Poisson - Boltzmann equation \/} \cite{33}. $C$ is a constant depending on the cluster size and $D$ is the {\it Debye length \/}. The {\it Debye length \/} is inversely proportional to $\sqrt {M}$, where $M$ is the number of clusters within the neighborhood of the reference cluster (the shrinking or growing cluster). Indeed, the activation energy for particle transfer from the shrinking cluster to the growing one is considerably lowered due to this electrostatic potential ($\Phi $), thereby enhancing the transfer frequency (see eq.
7) \begin{equation} p = \nu \exp{\left(-\beta \left( E_b - e \Phi \right) \right)}, \end{equation} where $E_b$ denotes the threshold energy for activation, $\nu$ is a prefactor, and $\beta = \left(K_BT \right)^{-1}$. In this way the medium polarization accelerates both the shrinkage of a small cluster and the growth of a large one. As we have said in the introduction, this correlation effect (as well as the former) can be properly understood for clusters embedded in a solid matrix only if the nucleation sites, which really promote the growth of clusters, occur at dislocation lines and/or at grain boundaries \cite{29}. Diffusing particles added to a growing cluster having an initial critical radius $R_o $ lead to an increase of its radius to $R$; \begin{equation} n = \rho \frac{4 \pi}{3} \left( R^3 - R_{o}^{3} \right) , \end{equation} where $\rho$ is the particle concentration in the cluster. Also, taking into account (7) we can express $n$ by \begin{equation} \int^{R}_{R_o} n\left( \xi - R, t\right) dR = \frac{WK(T)}{\sqrt{\frac{\pi }{a_{o}^{2}} p\xi^2 }} \sqrt{t} \cdot \int^{R}_{R_o} \frac {1}{R} \exp{% \left(-\frac{(\xi - R) ^2}{\frac{1}{a_{o}^{2}}p\xi^2t} \right)} dR , \end{equation} where $\xi$ stands for the separatrix between the shrinking cluster and the growing one. For large $t$, and since $\xi \approx R $ \cite{24}, the exponent vanishes and the above equation becomes \begin{equation} \frac{ R^3 - R_{o}^{3}}{\ln {\frac{R}{R_o}}} = \frac{WK(T)a_o}{\rho \sqrt{ \pi^3 p\xi^2 }} \sqrt{t}. \end{equation} The last equation gives the time $t$ for an increase of the cluster radius from a radius $R_o$ to $R$ by an {\it off-centre diffusion mechanism \/}. Indeed, the diffusion process is related to the transfer frequency $p$ and, as we have said, among the particular dependencies of $p$ we must take (11) into account. \section{Experimental.} \indent Metal clusters can be produced with ease in a solid matrix \cite{29}.
For example, electrolytical or additive colouring of alkali halide crystals containing a relatively high impurity concentration ($ \approx 10^{18}$ impurities per $cm^3$) leads directly to cluster formation \cite{29,35,36}. Another, more adequate method to study the kinetic aspects of the embedded clusters is the thermal annealing of alkali halide crystals containing negative metallic ions \cite{34,35}. This method affords better control of the cluster size, but it must be pointed out that obtaining the negative metallic ions is generally more difficult, this process requiring appropriate conditions related to external factors such as temperature and electric field as well as the filling factors; when a large impurity concentration ($\approx 10^{18}$) is used, only an insignificant amount of negative metallic centres is obtained. In the present paper, we show the experimental data for metallic clusters obtained by thermal annealing of the $KCl:Ag^{-}$ samples. $KCl$ single crystals containing $Ag^{+}$ ions in a concentration of $5\cdot 10^{17}$ $ions/cm^3$ have been grown by the Kyropoulos method in air. Under electrolytical colouring performed by a usual device in air at $573K$ and at $8000$ $V/cm$ we obtain samples containing $Ag^-$ negative metallic centres (see the initial (1) peak at 290 nm in Fig. 2a) as well as a few small silver clusters (see the initial (1) peak at 380 nm in Figs. 2a and b). Thermal annealing at a given temperature of the $KCl:Ag^{-}$ samples leads progressively both to the decrease of the $Ag^-$ amount and to the formation of more and more clusters (see the rising trend of the absorption curves). A possible scenario for the conversion of the $Ag^-$ ions towards $Ag^o$ centres and/or cluster states begins with $Ag^- + kT \leftrightarrow F + Ag^{o}_{i}$ (the $F$ means $F$ centre and the $Ag^{o}_{i}$ means an interstitial silver atom) \cite{34}.
The step following the above reaction should be the precipitation of the silver atoms. Also, during the thermal annealing, the clusters show a trend of increasing size. Experimentally, this can be observed through a change of the optical spectra: the absorption maximum shifts progressively towards higher wavelengths. In Fig. 2a the evolution of the optical spectra for the sample is shown with respect to the time of annealing at $800K$; the first absorption maxima are due to $Ag^-$ centres (290 nm), the second absorption maxima are due to silver clusters, and the third, very slight peaks are due to the $F$ centres (550 nm). Another set of optical spectra, corresponding to a thermal annealing at $920 K$, is shown in Fig. 2b. In this figure we have eliminated the absorption maxima corresponding to the $Ag^-$ centres. As we can observe, besides the formation of the silver clusters, the thermal treatment of the samples containing $Ag^-$ leads to the appearance of the $F$ centres (the second peak on the right side of the figures). The $F$ centre peak ($550 nm$) shows, as is well known and as one can observe in Figs. 2a and b, no shift during the thermal annealing. In contrast with the $F$ centre behaviour, the absorption maximum of the silver clusters moves, as we can see, towards higher wavelengths. Curiously enough, despite the fact that both samples arise from the same $KCl : Ag^-$ crystal, the thermal annealing at $920 K$ provides a better production and conservation of the $F$ centres (see Fig. 2b). However, long-time annealing will eventually destroy all the $F$ centres. From the optical spectra, the cluster radii are determined using electrodynamic ({\it Mie}) theory \cite{37}. Application of this theory to large metal clusters is successful, and a review of the method as well as more complementary features is given in the book of {\it Vollmer \/} and {\it Kreibig \/} \cite{37}.
The extinction cross section is given by \begin{equation} \sigma_{ext} = \frac{2 \pi}{k^2} \sum_{L=1}^{\infty} \left( 2L + 1 \right) Re \left(a_L + b_L \right), \end{equation} where $k$ is the wavevector and $a_L$ and $b_L$ are coefficients containing Bessel and Hankel functions which depend on the complex index of refraction of the particle, the real index of refraction of the surrounding medium, and the size parameter $x = k \cdot R$, where $R $ is the cluster radius. For clusters larger than about $10 nm$ the size dependence of the optical spectra is an {\it extrinsic cluster size effect \/} \cite{37} due to the electrodynamics of the excitation, which is governed only by the dimension of the particle relative to the wavelength of the light. Figs. 3a and b show the time of the thermal treatment and the corresponding increase of the cluster radii for the thermal annealing both at $800 K$ and at $920 K$. The shapes of the curves are identical, which means that the $F$ centres have no influence on the cluster size evolution. Consequently, in the following we will discuss only one set of experimental data ($920 K$). \section{Results.} \indent As is well known, one can distinguish two regimes in the cluster growth process. The initial stage of cluster increase, following the formation of nuclei, is, in our case, due to the conversion of the negative metallic centres (see the decreasing trend of the $Ag^{-}$ peak (270 nm)). Thus, the main stage of the phase separation proceeds as a uniform growth of a number of precipitate particles from the supersaturated matrix. In this way, during the thermal treatment, the cluster radius increases initially due to the addition of particles coming from the source of the solute ions. When this concentration has decreased, the increase of the cluster radius is due mainly to particle transport from small clusters to larger ones.
This is the second stage in the cluster growth process, which is known as the {\it Ostwald Ripening (OR) \/}. It must be pointed out that delimiting the boundary between the two growth regimes is difficult, but one supposes that the $OR$ regime usually begins before the solute concentration has decreased considerably \cite{29}. In Fig. 4 we have shown the theoretical curve (equation 15) derived within the {\it off-centre diffusion \/} approach, the theoretical curve corresponding to the $LSW$ theory, and the experimental curve for the increase of the cluster radius in the $OR$ stage for a thermal annealing at $920K$. We have approximated the start of the $OR$ process at around $R=16nm$. For the theoretical calculation (the {\it off-centre diffusion \/} approach) we have used appropriate values for the dissociation energy, $E_D=0.7eV$, and for the threshold energy of the particle transfer, $E_b-e\Phi =1eV$. One can say that there is an agreement between the theoretical curve derived within the {\it off-centre diffusion \/} approach and the experimental curve. This agreement becomes much better for larger radii ($R>18nm$). In contrast to the former, the agreement between the $LSW$ theory and the experiment is very good for $17nm<R<19nm$ and poorer elsewhere. \section{Conclusion.} \indent In summary, in the present paper we have analyzed how the correlation effects \cite{23,24} can be taken into account within an {\it off-centre diffusion approach \/} of the $OR$ process. The time dependence of the cluster growth derived by this theoretical approach, under the assumptions established in the introductory part, differs from that predicted by the {\it LSW theory \/} but agrees with most simulations and experiments in the sense that $\alpha <\frac {1}{3}$ \cite {9,10,11,12,13,14,15,16,17,18,19,20,21,22}.
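The temporal law implied by equation (15) can be illustrated numerically: writing it as $\frac{R^3-R_o^3}{\ln(R/R_o)}=B\sqrt{t}$ and inverting for $t(R)$ gives an effective exponent $\alpha_{\rm eff}=\frac{d\ln R}{d\ln t}\approx 0.2$, well below the $LSW$ value $\frac13$ and within the measured range $0.15$--$0.25$. The sketch below uses $B=1$, $R_o=16$ and the sample radii as illustrative values, not quantities fitted to the data.

```python
import math

# Effective growth exponent implied by Eq. (15):
# (R^3 - R_o^3)/ln(R/R_o) = B * sqrt(t).
# B = 1.0 and R_o = 16.0 are illustrative assumptions.
B, R_o = 1.0, 16.0

def t_of(R):
    # invert Eq. (15) for the time needed to reach radius R (R > R_o)
    return ((R**3 - R_o**3) / (B * math.log(R / R_o))) ** 2

R1, R2 = 100.0, 110.0
alpha_eff = (math.log(R2) - math.log(R1)) / \
            (math.log(t_of(R2)) - math.log(t_of(R1)))
print(alpha_eff)  # ≈ 0.20: slower coarsening than the LSW value 1/3
assert alpha_eff < 1.0 / 3.0
```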
It must be pointed out that, although the dynamics differs from that predicted by the $LSW$ theory, and the theoretical approach presented here seems to agree better with the experimental data for larger clusters ($R>18\,$nm), this fact does not invalidate the known theoretical results \cite{1,2,3,4,5,6,7,8}. A source of this difference could lie in stress effects in the host matrix and/or in the absence of correlation effects in the three-dimensional case. Consequently, experimental data extending beyond $R=20\,$nm and a careful study of the stress effect would be helpful. Before checking the agreement between theory and experiment further, we note that this approach is distinguished from others in this field by the fact that the clusters act as individual entities, thereby allowing the introduction of cluster properties in line with recent discoveries \cite{31}: the cohesive energy, the dissociation rate of the shrinking cluster, the mobility of the particles inside the cluster (which promotes the quasi-sphericity of the cluster shape), and the cluster kinks as the sites from which particles leave the surface of the shrinking cluster or at which these particles condense on the surface of the growing cluster. However, despite only a relative agreement between theory and experiment below $R = 18\,$nm, we may say that the results are encouraging for further pursuit of this {\it off-centre diffusion approach\/}. We may also say that a careful investigation of the transfer frequency $p$ (in which the explicit form of the electrostatic potential due to medium polarization should be taken into account) can improve the agreement between the experimental and theoretical data.\\ {\it Acknowledgements.\/} F. Despa is grateful to Professor M. Apostol for many useful discussions regarding the theoretical results. \newpage
\section{Introduction}\label{sec:intro} The solar photosphere and chromosphere are highly dynamic and exhibit a wealth of magnetic structures and phenomena. The underlying processes operate on small spatial and temporal scales and are a consequence of fundamental interactions between plasma, magnetic fields, and radiation. These interactions leave spectro-polarimetric imprints in absorption and emission lines, which form in the photosphere and chromosphere, that can be measured with sophisticated instrumentation. `Classically', there are two types of instruments: (i) slit-scanning spectrographs and (ii) narrow-band imagers scanning in wavelength through a spectral line. Both types can be equipped with polarimetric modulators. While grating spectrographs have the advantages of spectral integrity, high spectral resolution, and large spectral coverage, Fabry-Perot-interferometer (FPI) based narrow-band imagers have the advantages of image integrity, large fields-of-view, and short cadences. New technical developments aim to combine spectral and image integrity together with short cadences in integral field units (IFUs). Presently, two concepts look very promising: image slicers and micro-lens arrays \citep[see e.g.,][]{2019AdSpR..63.1389J, 2020A&A...634A.131K, 2022arXiv220614294D}. Here, we investigate the spectral integrity of an FPI-based narrow-band imager, which may suffer from scanning in wavelength through a spectral line, resulting in a finite acquisition time during which the solar scene may change \citep[as already noted in][]{10.1117/1.JATIS.3.4.045002}. We use the Visible Tunable Filter \citep[VTF,~][]{2014SPIE.9147E..0ES}, which is planned to be installed in 2023 at the Daniel K. Inouye Solar Telescope \citep[DKIST, ][]{2020SoPh..295..172R}. VTF is an imaging spectro-polarimeter for the wavelength range between 520 and 860 nm.
It is based on two FPIs\footnote{At first light, VTF will be equipped with only one FPI, but the 2nd FPI is manufactured and will be integrated soon after.} which scan the narrow-band images in wavelength, i.e., the spectral points of a solar line are not acquired simultaneously, but during a finite acquisition time. In general, the integration time at each wavelength step is determined by the desired signal-to-noise ratio, i.e., polarimetric sensitivity. A default measurement at full spatial resolution of the photospheric magnetic field with VTF in Fe\,I\,617.3\,nm takes 11\,s to reach the desired polarimetric accuracy at 11 wavelength points (see Sect.~\ref{sec:measurement}). VTF is designed to operate at the diffraction limit of DKIST, which has an aperture of 4\,m. That is, at 520\,nm, scales of 20\,km are resolved on the solar surface. On such small scales, dynamical processes in the solar photosphere potentially lead to spurious signals which spoil the measurement. \paragraph{Short time scales observed with GREGOR:} To demonstrate these short time scales in the solar photosphere, in Fig.~\ref{fig:gregor} we present images from GREGOR \citep{2012AN....333..796S, 2012AN....333..863B} taken with HiFI \citep{Denker_2018} close to disk center on June 30, 2019, in the G-band. The images are Speckle-reconstructed with KISIP \citep{woeger+al2008} by selecting the 100 best out of 500 images. \begin{figure} \includegraphics*{AA_2022_44640_fig1.pdf} \caption{\label{fig:gregor}Speckle-reconstructed images of a filigree region close to disk center observed with HiFI at GREGOR on June 30, 2019, in the G-band at 430\,nm. The upper large panel shows a large FOV at 08:22:21 UT. The lower four small panels show four subsequent snapshots $t=$\,0, 11, 23, \& 35\,s of a smaller FOV marked by the white box in the large FOV. Tick mark units are in arcsec. } \end{figure} The upper large panel of Fig.~\ref{fig:gregor} shows the observed active filigree region at 08:22:21 UT.
Four consecutive snapshots with a time lapse of about 11.5\,s are displayed in the four small panels for the region that is marked with the white box in the large panel. The small panels have a side width of 2 arcsec and contain $67^2$ pixels with a width of 0.03\,arcsec. The solar scene within those 35\,s is clearly not static, and changes on scales of 0.1 arcsec are already visible in consecutive images with a temporal spacing of 11\,s. The evolution is tiny, but clearly visible on small spatial scales. In this contribution, we investigate the effects of spurious signals that are expected from temporal evolution during the measurement process. The evolution is mimicked by a realistic magneto-hydrodynamic simulation of a plage region. The VTF measurement process is described in Sect.~\ref{sec:measurement}. The method we apply to mimic the measurement process, and how we describe the measurement error, is explained in Sect.~\ref{sec:method}. Our results are presented and discussed in Sect.~\ref{sec:results}. Section \ref{sec:conclusions} presents our conclusions and an important suggestion concerning the data pipeline of Fabry-Perot based spectro-polarimeters. \section{VTF measurement process}\label{sec:measurement} The VTF measurement process is described in \citet{2014SPIE.9147E..0ES} and summarised here as follows: The cameras of VTF have a pixel size of 12\,$\mu$m, which corresponds to 0.014\,arcsec and 10\,km on the Sun. This samples the diffraction-limited spatial resolution of 0.028\,arcsec at a wavelength of 520\,nm. To reach its spectral resolution of $\lambda/\triangle\lambda=100\,000$, the wavelength spacing amounts to 3\,pm at a wavelength of 600\,nm. For our study we use Fe\,I\,617.3\,nm (g=2.5), which has very similar properties to Fe\,I\,630.2\,nm. The VTF cameras are operated with a frame cycle time of 40\,ms, of which 25\,ms are used as exposure time.
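The plate-scale and resolution figures above follow from the classical $\lambda/D$ estimate and the approximate scale of $\sim$725\,km per arcsec at solar disk center. A quick sanity check (our own arithmetic, not part of the VTF design documents):

```python
RAD_TO_ARCSEC = 206264.8   # arcseconds per radian
KM_PER_ARCSEC = 725.0      # approximate solar scale at disk center

def diffraction_limit_arcsec(wavelength_m, aperture_m=4.0):
    """Angular resolution lambda/D of a telescope (DKIST: D = 4 m), in arcsec."""
    return wavelength_m / aperture_m * RAD_TO_ARCSEC

alpha = diffraction_limit_arcsec(520e-9)
print(alpha)                  # ~0.027 arcsec, matching the quoted 0.028 arcsec
print(alpha * KM_PER_ARCSEC)  # ~19 km, matching the quoted ~20 km on the Sun
```

The 0.014\,arcsec pixels thus Nyquist-sample the diffraction limit at 520\,nm.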
Simultaneously with each narrow-band image, a broad-band image is recorded in a separate channel of the instrument. The VTF can be operated in spectroscopic and spectro-polarimetric modes. For both modes, the number of scanning steps, the number of accumulations, and the binning size can be adjusted. The VTF Instrument Performance Calculator (IPC, Version 3.4)\footnote{\tt https://www.leibniz-kis.de/en/forschung/ wissenschaftliche-instrumentierung/vtf/ performance-calculator/} was developed to tune these free parameters and to estimate the resulting signal-to-noise ratio (SNR) as well as the duration of the measurement. Note that the IPC calculates the SNR for Stokes-$I$ and does not take into account polarimetric efficiencies. A default spectroscopic measurement with one accumulation at full spatial resolution with an SNR above 200 takes less than one second (0.88\,s) to measure, e.g., Fe\,I\,617.3\,nm with 11 wavelength scan steps (each scan step to tune the etalon takes one camera cycle time of 40\,ms). In the photosphere, changes of physical parameters on the time scale of one second are not expected within resolution elements as large as 20\,km. Spectro-polarimetric measurements take more time, on the one hand because four modulation states need to be measured instead of one, and on the other hand because the polarimetric signal in the spectral line is small and requires a large number of photons. To minimise seeing-induced cross talk, besides doing dual-beam polarimetry, the four modulation states are acquired consecutively with four single exposures. To increase the number of photons and reduce the photon noise, consecutive accumulations can be chosen at each wavelength position. With 6 accumulations, a $1\sigma$-noise level of $1/577 = 0.0017$ is reached (with the continuum intensity normalised to 1). To detect a signal, it should have a minimum amplitude of $3\sigma$, which corresponds to about 0.005.
This limits the minimum magnetic field strength which can be detected. In our case, computing synthetic line profiles of Fe\,I\,617.3\,nm for a quiet-Sun model and a spectral resolution of 100\,000, we find that a Stokes-$V$ amplitude of 0.005 is produced by a constant vertical field of $\sim$\,20\,G at disk center \citep[cf.,][]{2016PhDT.......566S,10.1117/1.JATIS.3.4.045002}. Note that a homogeneous horizontal field needs 175\,G to produce a $Q$-signal of 0.005. With a horizontal field strength of 100\,G, the $Q$-amplitude amounts to 0.0017 \citep[see also][]{2012A&A...547A..89B, 2013A&A...550A..98B}. \citet{2016SPIE.9908E..4NS} estimate the Doppler shift sensitivity to approximately 80\,m/s, taking into account photon noise and stray light. Note that non-parallelism of the etalon plates was not considered as a source of error in this estimate. With 6 accumulations, $6\times4=24$ single frames are recorded at each wavelength step, and 11 spectral points are measured in 11\,s, i.e., 1\,s per spectral point. This means that a single spectro-polarimetric VTF measurement with 6 accumulations takes 11\,s in total. The task of this contribution is to investigate whether, based on realistic numerical simulations, one expects dynamic changes in the photosphere within 11\,s, and if so, how much these changes affect the measurement. Note that many science cases will require a smaller noise level than 0.0017. With 12 accumulations, which take 21.6\,s, VTF reaches a $1\sigma$-noise level of 0.0012. 18 accumulations and 31\,s are needed to reach a noise level of $10^{-3}$. Obviously, measurements with longer scanning times are more affected by the dynamic evolution in the photosphere. But in this contribution, we assume that the measurement process takes 11 seconds for 11 scan steps, i.e., we imitate the case of 6 accumulations.
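The timing and noise figures quoted above follow from simple counting: each wavelength step records (number of modulation states) $\times$ (number of accumulations) frames at the 40\,ms camera cycle, and the photon noise scales with the inverse square root of the number of accumulated frames. A minimal sketch of this bookkeeping (our own, not the VTF IPC):

```python
import math

FRAME_CYCLE_S = 0.040   # camera frame cycle time (Sect. 2)
N_MOD_STATES = 4        # polarimetric modulation states
N_WAVELENGTHS = 11      # scan steps across Fe I 617.3 nm

def scan_time(n_accum):
    """Total acquisition time of one spectro-polarimetric line scan, in seconds."""
    return N_WAVELENGTHS * N_MOD_STATES * n_accum * FRAME_CYCLE_S

def noise_level(n_accum, ref_noise=1.0 / 577.0, ref_accum=6):
    """1-sigma noise, scaled from the quoted 6-accumulation value as 1/sqrt(N)."""
    return ref_noise * math.sqrt(ref_accum / n_accum)

print(scan_time(6))     # ~10.6 s, quoted as 11 s in the text
print(noise_level(12))  # ~0.0012
print(noise_level(18))  # ~0.0010
```

The small gap between 10.6\,s and the quoted 11\,s (and between 21.1\,s and 21.6\,s for 12 accumulations) is consistent with a small per-step overhead not modelled here.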
\section{Methods}\label{sec:method} To mimic the measurement process of VTF, we take consecutive snapshots of a realistic magnetohydrodynamic simulation of a solar plage region (cf.~Sect.~\ref{sec:muram}), synthesise Stokes profiles, and adapt them to the VTF spectral resolution and acquisition procedure (cf. Sect.~\ref{sec:firtez}). In Sect.~\ref{sec:normalisation} we introduce an alternative way to construct the line profiles, normalising each wavelength point to the local continuum intensity. As a result, we produce different sets of Stokes profiles from the 11 time steps of the simulation. In Sect.~\ref{sec:vfisv} we describe how these different sets of profiles are inverted to retrieve the physical parameters of line-of-sight velocity and vertical component of the magnetic field strength. To quantify the error of the measurement process, we need to define a reference or {\it true} map. As {\it true} maps, we use the maps that result from the inversion of profiles that are synthesised from time-averaged simulation boxes (averaged over all eleven snapshots). That is, these profiles are not affected by the finite acquisition time and therefore serve as reference. Finally, all different sets of maps are compared in Sect.~\ref{sec:results}. By comparing physical parameters between inverted maps from constructed profiles and the reference maps, we can estimate the error which is introduced by the finite acquisition time. \subsection{Solar plage simulation with MURAM}\label{sec:muram} \begin{figure} \includegraphics*{AA_2022_44640_fig2.pdf} \caption{\label{fig:muram} MURAM simulation snapshot. a) Vertical cut through the numerical box showing the modulus of the magnetic field strength, clipped between 10 and 2500\,G, with the color bar in logarithmic scale. b) Horizontal cut, at the height where the mean temperature is 5560\,K, of the vertical velocity in km/s, with positive values pointing upwards (blue-shifted towards the observer).
c) Same as b) for the vertical component of the magnetic field strength in kG.} \end{figure} As a typical target for VTF observations we take a plage region with a spatially averaged vertical magnetic field strength of 200\,G. Such a simulation was performed with MURAM by Matthias Rempel \citep[private communications, and see][]{2014ApJ...789..132R}. With a grid cell size of $8\times8\times8$\,km$^3$, the simulation reaches the spatial resolution of VTF@DKIST. The simulation box has $768\times768\times384$ cells. The box reaches from the upper photosphere into the solar interior. The upper end of the box lies 704\,km above the average $\tau=1$ level. As the lateral box sides have periodic boundary conditions, the magnetic flux through each depth layer is the same and constant in time. The internal numerical time step of the simulation is around 0.1\,s. We limit our analysis to snapshots with a temporal cadence of 1\,s, i.e., 10 numerical time steps. Changes within 10 time steps can be considered small for our purpose, but contribute to an increase of the measurement errors. For this work, we assume that it is sufficient to have a snapshot sequence with a cadence of 1\,s. This is the same cadence that VTF needs for each wavelength step when using 6 accumulations (SNR$=577$). In Fig.~\ref{fig:muram}a, we display a vertical cut of the absolute magnetic field strength of the computational box. It displays a strong vertical flux concentration in the middle of the box at $x=3$\,Mm. There, the local field strength exceeds 3\,kG. For the horizontal layer at which the spatially averaged temperature is 5560\,K, we show the vertical components of the velocity and the magnetic field strength in Figs.~\ref{fig:muram}b \& c. This horizontal layer roughly corresponds to the formation height of Fe\,I 630.25\,nm, such that these maps can be compared with the maps that we retrieve after line synthesis, convolution with the VTF spectral resolution, resampling, and Milne-Eddington inversion.
\subsection{Synthetic line profiles with FIRTEZ adapted to VTF}\label{sec:firtez} From a number of suitable photospheric absorption lines we chose to analyse Fe\,I\,617.3\,nm.\footnote{This line has very similar properties to the iron lines at 630\,nm and at 525.0\,nm \citep{2003ASPC..307..131G}. Initially, the intention of this work was to compare VTF measurements with HMI onboard SDO. The focus changed, and we continued to use Fe\,I\,617.3\,nm. Fe\,I\,617.3\,nm has the advantage of being surrounded by a clean continuum.} VTF will use Fe\,I 630.25\,nm for the first observations as long as only one etalon is available, but Fe\,I\,617.3\,nm will be available as soon as the second etalon (Fabry-Perot interferometer) is in place. To synthesise Stokes $I$, $Q$, $U$ \& $V$ of Fe\,I\,617.3\,nm along vertical lines-of-sight in the MURAM box, we use FIRTEZ-dz \citep{2019A&A...629A..24P}. FIRTEZ is chosen because it operates on the geometrical scale, which is intrinsic to the simulation box. As the conversion into optical depth along the line-of-sight is not needed, the computation time for the line synthesis is very short. We calculate the set of Stokes profiles for each horizontal pixel, yielding maps with 768 by 768 pixels. For the line synthesis the spectral resolution is assumed to be infinite and is only limited by the spectral sampling. We use a spectral sampling of 0.4\,pm, and compute the range from $-40$\,pm to $+40$\,pm. \begin{figure} \centering \includegraphics[width=0.5\textwidth-0.5\columnsep]{AA_2022_44640_fig3.png} \caption{\label{fig:convolution} Stokes $I$, $Q$, $U$, and $V$ profiles from upper left to lower right panels, respectively. The red line denotes synthesised profiles with an intrinsic sampling of 0.4\,pm. The green line in the lower left panel sketches the shape of the VTF transmission curve. Blue crosses and lines mimic the 11 VTF measurement points as a result of convolving the synthetic profiles with the VTF transmission curve.
} \end{figure} \paragraph{Spectral convolution and spectral sampling:} The spectral resolution of VTF follows from the properties of the first Fabry-Perot interferometer. The spectral transmission profile \citep[see e.g., Sect.~3.4.4 in][]{2002tsai.book.....S} can be computed for small angles of incidence from its reflectance, $R=0.95$, and its cavity separation, $d=0.55$\,mm \citep[][]{Kentischer+al2012, 2014SPIE.9147E..0ES, sigwarth2017}. A prefilter is assumed to suppress the side peaks and to transmit only the central peak. The transmission profile of the central peak corresponds to a spectral resolution of 100\,000 and has a full width at half maximum of 5.7\,pm. All FIRTEZ profiles are convolved with this transmission profile and resampled to the VTF spectral step width of $\triangle\lambda=3.15$\,pm. A set of sample profiles is depicted in Fig.~\ref{fig:convolution}: The synthetic lines of Stokes $I$, $Q$, $U$, and $V$ of a sample pixel are drawn in red. The blue lines display the same profiles after accounting for the spectral resolution of VTF, and the blue crosses mark the spectral sampling of VTF. To sketch the shape of the tuneable VTF transmission profile, it is plotted as a green line in the Stokes $U$ panel. Thus, in the end we have Stokes profiles (at the spectral resolution and sampling of VTF) for a set of 11 snapshots with a temporal cadence of 1\,s. \paragraph{Temporal sampling:} To mimic the VTF line scan, we compose the Stokes profiles with one wavelength step per consecutive snapshot, i.e., we produce a set of profiles with 11 wavelength points ($t=0,1,\ldots, 10$\,s). We do this in two modes: straight and rabbit. In the {\it straight} mode the wavelength step increases monotonically by one: $\lambda_{i+1} = \lambda_i + \triangle\lambda$.
In the {\it rabbit} mode we jump from blue wing to red wing, successively approaching the line core: $\lambda_{i+1} = \lambda_i + 2(\lambda_{ic}-\lambda_i)$ and $\lambda_{i+2}=\lambda_{i}+\triangle\lambda$, with $\lambda_{ic}$ being the wavelength step closest to the spatially averaged line core. The idea of the rabbit mode is to sample both line wings as closely in time as possible. Herewith, one expects that the Doppler shift error from the finite acquisition time is minimised. \paragraph{Seeing and solar evolution:} The rabbit mode was devised for VTF also to minimise the effects of seeing. The seeing is partially corrected by the adaptive optics system. The resulting degree of image correction varies on scales of seconds and shorter. With the rabbit mode, one attempts to record both line wings with similar seeing. However, the effects of seeing are not analysed in this work. We ignore seeing effects, and only consider the effects of solar evolution during the line scan. \begin{figure} \centering \includegraphics*[width=0.5\textwidth-0.5\columnsep, bb=50 0 600 400]{AA_2022_44640_fig4.png} \caption{ \label{fig:temporal_change} For an example pixel we plot the Stokes profiles for $t=0\,$s (blue) and $t=10\,$s (red). The black curve shows the profile from a temporal average of all 11 snapshots.} \end{figure} \paragraph{Reference profile and temporal changes:} In order to quantify the systematic measurement errors, we need to define a reference. We use profiles from temporally averaged snapshots for each pixel as the reference, i.e., we average all 11 snapshots and perform the line synthesis on the averaged box. Alternatively, one can average the synthesised profiles from the 11 snapshots and do the inversion on the time-averaged profiles. We checked that both approaches lead to identical results within the noise, which is on the same scale as the difference between two consecutive snapshot maps.
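For concreteness, the two scan orders can be written out as index sequences over the 11 wavelength points (index 0 = bluest point, index 10 = reddest point, line core at index 5). This small helper is our own illustration of the stepping rules described above:

```python
def straight_order(n):
    """Monotonic scan: indices 0, 1, ..., n-1."""
    return list(range(n))

def rabbit_order(n):
    """Alternate between blue and red wing, closing in on the line core.

    Implements lambda_{i+1} = lambda_i + 2*(lambda_ic - lambda_i) (mirror about
    the core) followed by lambda_{i+2} = lambda_i + dlambda (one step inward),
    for an odd number n of wavelength points with the core at index n // 2.
    """
    core = n // 2
    order, step = [], 0
    while step < core:
        order += [step, 2 * core - step]  # blue-wing point, mirrored red-wing point
        step += 1
    return order + [core]                 # finish at the line core
```

For 11 points the rabbit sequence is 0, 10, 1, 9, 2, 8, 3, 7, 4, 6, 5, so each pair of opposite wing points is recorded only one second apart.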
In Fig.~\ref{fig:temporal_change} we illustrate the temporal change of the profiles. We select a sample pixel and plot the Stokes profiles retrieved from the first snapshot (blue line), from the last snapshot (red line), and the temporally averaged profiles from all 11 snapshots (black line). In the upper left panel, it is seen that the intensity of the left-most wavelength point drops from some 1.28 to some 1.02, i.e., by more than 25\%, during the measurement process across 11 wavelength points. \subsection{Profile composition with normalisation}\label{sec:normalisation} \begin{figure} \begin{center} \includegraphics{AA_2022_44640_fig5.pdf} \end{center} \caption{\label{fig:cont_diff} Upper panel: continuum image of the first snapshot, with intensities between 0.6 and 1.45 and the average intensity normalised to one. The lower panel displays the difference between the continuum images of snapshot 1 and snapshot 11. The difference map is clipped at differences of $\pm10$\,\%. } \end{figure} Viewing animations of the snapshots, one sees that magnetic concentrations, which are visible as bright points in the inter-granular lanes, are shuffled around by the granular flow field. As a consequence, the continuum intensity of an inter-granular pixel can change significantly during the wavelength scan. This is demonstrated in Fig.~\ref{fig:cont_diff}, in which we display the normalised continuum intensity of the first snapshot and the difference map between the first (snap1) and last (snap11) snapshots. The difference map is clipped at absolute differences of 0.1. We find that in 13\% of all pixels the intensity difference is larger than $0.05$, and in 4\% of all pixels the difference is larger than 0.1. The standard deviation amounts to 0.034. In comparison, the standard deviation of intensity differences between the first (snap1) and second (snap2) snapshots amounts to 0.006.
If the continuum intensity, for example, decreases continuously during a straight wavelength scan, an analysis of the correspondingly composed Stokes-$I$ profile would yield different intensity levels of the blue and red continua. This would be interpreted as an additional line absorption in the red wing, i.e., as an enhanced red-shift. This example suggests that it might be advantageous to normalise the line absorption to the continuum. Since we have the full simulated profiles at each time step (= wavelength step), it is possible to do this. Obviously, this is not straightforward for the VTF measurement. However, as VTF is operated with a simultaneous broad-band channel, this normalisation could be performed using the broad-band intensity as it varies during the line scan at a given spatial pixel. \begin{figure} \centering \includegraphics*{AA_2022_44640_fig6.pdf} \caption{\label{fig:composition} Exemplary comparison of Stokes-$I$ and $V$ profiles in the upper and lower panels, respectively: straight composed (red) simulating the VTF measurement, straight normalised (blue), and time-averaged (black). The green line in the upper panel denotes the continuum intensity at each wavelength (= time) step.} \end{figure} In Fig.~\ref{fig:composition} we illustrate how the straight profile is normalised, and compare the different Stokes-$I$ profile types for the same spatial pixel as in Fig.~\ref{fig:temporal_change}. The black line displays the time-averaged profile. The red line displays the straight VTF measurement mode. The rabbit mode behaves similarly to the straight mode and is not displayed. The green line denotes the continuum intensity variation during the measurement process. From these continuum points, the temporal mean is calculated.
This mean value, $\langle I({\rm cont}_{j}) \rangle_{j=1,\ldots,11}$, is used to normalise each wavelength point of the straight profile (red line): \begin{equation} \label{eq:norm} I_{\rm normalised}(\lambda_{i}) = \frac{ I(\lambda_{i}) }{I({\rm cont}_{i})} \cdot \langle I({\rm cont}_{j}) \rangle_{j=1,\ldots,11}. \end{equation} This normalised profile is drawn as a blue line in Fig.~\ref{fig:composition}. It is seen that the normalised straight profile becomes very similar to the time-averaged profile. Therefore, one expects that the normalised profiles lead to physical parameters that are closer to the reference (time-averaged) parameters. For pixels in which the continuum intensity is constant in time, the straight, normalised, and time-averaged profiles are identical. \subsection{Milne-Eddington inversion with VFISV}\label{sec:vfisv} To compare the differently composed Stokes profiles, we perform Milne-Eddington inversions and compare inverted maps of the line-of-sight velocities, $v_{\rm los}$, and of the vertical (line-of-sight) component of the magnetic field strength, $B_{\rm los}$, for the different profile types. The Milne-Eddington approximation assumes that $v_{\rm los}$ and $B$, and hence $B_{\rm los}$, are constant along the line-of-sight across the solar photosphere. Inspecting the numerical simulation (see, e.g., the vertical cut of the field strength in Fig.~\ref{fig:muram}), this is clearly not the case, and these quantities change significantly along the line-of-sight. However, in order not to introduce additional error sources due to sophisticated inversions, we limit our analysis to Milne-Eddington inversions. As inversion code, we use the ``\underline{V}ery \underline{F}ast \underline{I}nversion of the \underline{S}tokes \underline{V}ector'' \citep[VFISV, ][]{2011SoPh..273..267B}.
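Equation~(\ref{eq:norm}) amounts to a per-wavelength rescaling by the instantaneous continuum, followed by restoring the mean continuum level. A minimal sketch (the array names are ours):

```python
import numpy as np

def normalise_profile(I_line, I_cont):
    """Continuum normalisation of a composed line scan:
    I_norm(lambda_i) = I(lambda_i) / I(cont_i) * <I(cont)>.

    I_line : intensities I(lambda_i) recorded at consecutive time steps
    I_cont : simultaneous continuum intensities I(cont_i), e.g. from the
             broad-band channel
    """
    I_line = np.asarray(I_line, dtype=float)
    I_cont = np.asarray(I_cont, dtype=float)
    return I_line / I_cont * I_cont.mean()
```

For a continuum that is constant in time the profile is returned unchanged, consistent with the statement above that straight, normalised, and time-averaged profiles then coincide.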
This code was originally devised to invert data from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO), and was later adapted to data acquired with slit spectrographs\footnote{\texttt{gitlab.leibniz-kis.de/borrero/vfisv\_spec}} as, e.g., for GRIS@GREGOR \citep[][]{2012AN....333..872C}, and to data acquired with Fabry-Perot systems\footnote{\texttt{gitlab.leibniz-kis.de/borrero/vfisv\_fpi}}. In the latter version, the VTF transmission file can be selected such that it is taken into account in the inversion process. Note that we use the identical transmission profile as in the convolution of the synthetic profiles in Sect.~\ref{sec:firtez}. As free parameters, we use $\eta_0$, the field inclination $\gamma$, the field azimuth $\phi$, the damping $a$, the Doppler width $\triangle\lambda_{\rm D}$, the field strength $B$, the line-of-sight velocity $v_{\rm los}$, the source function continuum, and the source function gradient. The magnetic filling factor is set to unity and not used as a free parameter. \section{Results and discussions}\label{sec:results} \subsection{Inverted maps for reference (time-averaged) profiles}\label{sec:ref_maps} \begin{figure} \centering \includegraphics*{AA_2022_44640_fig7.pdf} \caption{\label{fig:ref_map} Maps of the line-of-sight velocity, $v_{\rm los}$ (upper panel), and the vertical component of the magnetic field strength, $B_{\rm los}$ (lower panel), inverted with VFISV from the reference (time-averaged) Stokes profiles.
As these maps are simulated observables, we use arcsec instead of km for the spatial dimension.} \end{figure} \begin{figure} \includegraphics*{AA_2022_44640_fig8.pdf} \caption{\label{fig:ref_hist} Histograms for maps of the line-of-sight velocity with a bin size of 115\,m/s (upper panel), and of the vertical component of the magnetic field strength with a bin size of 30\,G (lower panel), for the inverted maps of the reference (time-averaged) profiles.} \end{figure} Inverted maps for $v_{\rm los}$ and $B_{\rm los}\!=\!B\cdot \cos(\gamma)$ of the reference (time-averaged) profiles are displayed in Fig.~\ref{fig:ref_map}. These maps correspond to the average along the line-of-sight, but for consistency, they can be compared to the horizontal geometrical cuts at the mean temperature of 5560\,K in Figs.~\ref{fig:muram}b and c. In both figures, we use the same color code and clipping values. The maps are obtained after line synthesis with FIRTEZ on the time-averaged numerical box, convolution with the instrument transmission profile (Sect.~\ref{sec:firtez}), and VFISV inversion (Sect.~\ref{sec:vfisv}). These maps reflect the ideal theoretical expectation of what VTF@DKIST will measure in a plage region with an average vertical magnetic field strength of 200\,G. It is therefore of interest to plot the histogram distributions of $v_{\rm los}$ and $B_{\rm los}$ in Fig.~\ref{fig:ref_hist}. Both distributions are asymmetric. The up-flow velocities do not reach values as high as the down-flow velocities, but occupy a larger area at small absolute values. Opposite magnetic polarity is present and reaches values of down to $-400$\,G. The distribution of the positive vertical component of the magnetic field peaks at some 1300\,G, and $B_{\rm los}$ reaches values of up to 2.4\,kG.
\subsection{The average of the vertical magnetic field strength} The horizontally averaged vertical magnetic field strength (the magnetic flux through a horizontal cut) in the simulation box is constant at all depth layers and at all times due to the periodic boundary conditions on the sides of the box. It was chosen to be 200\,G. This number can be compared with maps of $B_{\rm los}$ for the various types of profiles: Averaging $B_{\rm los}$ across the inverted maps, we obtain $\langle B_{\rm los}\rangle\!=\!208$\,G for the reference profiles, 203\,G for the straight VTF profiles, 206\,G for the rabbit VTF profiles, 208\,G for the straight normalised VTF profiles, and 208\,G for the profiles of each individual snapshot. The difference of 8\,G between the numerical box value and the reference map is ascribed to the different geometrical formation heights. We surmise that strong field concentrations are associated with smaller densities, due to magnetic evacuation (magnetic pressure). Hence, in areas of magnetic field concentration, the opacity is reduced and the line forms in deeper geometrical layers in which the field strength is stronger. This effect could explain why the average value $\langle B_{\rm los}\rangle$ in the inverted reference map is larger by 8\,G than in the numerical box at constant geometrical height. In any case, the difference is small and not significant for our study. More interesting for our study is the finding that the normalised straight VTF profiles yield the same $\langle B_{\rm los}\rangle$ as the reference profiles, while the straight VTF profiles differ by 5\,G. This difference is small, but confirms the expectation (Fig.~\ref{fig:composition} in Sect.~\ref{sec:normalisation}) that the normalised VTF profiles are closer to the reference (time-averaged) profiles. \begin{figure} \centering \includegraphics{AA_2022_44640_fig9.pdf} \caption{\label{fig:diff_vlos} Difference maps for $v_{\rm los}$ in m/s.
From top to bottom: difference between the straight VTF map and the reference map (a); same for the straight normalised VTF map (b); difference between the first snapshot 'snap1' and the second snapshot 'snap2' (c); same for the eleventh snapshot 'snap11' (d).} \end{figure} \begin{figure} \centering \includegraphics{AA_2022_44640_fig10.pdf} \caption{\label{fig:diff_Bver} Same as Fig.~\ref{fig:diff_vlos} for $B_{\rm los}$ in Gauss.} \end{figure} \subsection{Oscillation of the numerical box} Like the Sun, the MURAM box exhibits 5-minute oscillations that are visible in maps of $v_{\rm los}$. Therefore the average line-of-sight velocity is not constant. The particular snapshots that we analyse change from a mean velocity of 41\,m/s in the first snapshot, by some 2\,m/s per second, to 18\,m/s in the eleventh snapshot. Hence, the spatially averaged $v_{\rm los}$ changes by 21\,m/s during the analysed VTF measurement process. \subsection{Difference maps for maps of $v_{\rm los}$ \& $B_{\rm los}$} As explained in Sect.~\ref{sec:intro}, the purpose of this study is to quantify the measurement error due to a finite measurement time of 11\,s. To determine the error we compare simulated VTF maps to the reference (time-averaged) maps by computing difference maps. In the upper two panels of Figs.~\ref{fig:diff_vlos} and \ref{fig:diff_Bver}, we display the difference maps of the straight VTF profiles and of the straight normalised VTF profiles with respect to the reference maps. For $v_{\rm los}$, the differences are clipped at $\pm 200$\,m/s, and for $B_{\rm los}$ at $\pm150$\,G. By visual inspection it is seen that the differences are smaller for the straight normalised profiles compared to the straight VTF profiles. The improvement due to the normalisation is clearly seen in $v_{\rm los}$ and less prominent, but still visible, in $B_{\rm los}$. These differences are quantified in the next subsection.
The lower two panels in Figs.~\ref{fig:diff_vlos} and \ref{fig:diff_Bver} illustrate the solar evolution during the measurement process by displaying the difference maps between the first and second snapshot (separated by one second, snap1 \& snap2) and between the first and eleventh snapshot (separated by ten seconds, snap1 \& snap11). As expected, the differences between snap1 and snap11 are much larger than those between snap1 and snap2. The differences between snap1 and snap2 can have two causes: solar evolution and/or sensitivity of the inversion process. As the solar evolution on a time scale of one second is expected to be insignificant, we ascribe these differences mostly to the sensitivity of the inversion process, which is limited by the spectral resolution and the Milne-Eddington approximation. \subsection{Error quantification} \begin{figure} \includegraphics{AA_2022_44640_fig11.pdf} \caption{\label{fig:histograms} Histograms of difference maps for the line-of-sight velocity, $\triangle v_{\rm los}$, in the upper panel (a) and for the vertical component of the magnetic field strength, $\triangle B_{\rm los}$, in the lower panel (b). The histograms include pixels that have $100\,{\rm G} < |B_{\rm los}^{\rm ref}| < 2500\,{\rm G}$. This corresponds to 25\% of all pixels. The differences are taken for different cases: red line $\rightarrow$ straight VTF and reference map; blue line $\rightarrow$ normalised straight VTF and reference map; green line $\rightarrow$ rabbit VTF and reference map; black line $\rightarrow$ first and second snapshot; yellow line $\rightarrow$ first and last (eleventh) snapshot.} \end{figure} In order to quantify the visual impression of the difference maps, we compute histograms and plot them in Fig.~\ref{fig:histograms}. As the deviations are largest in the inter-granular lanes and in the filigree (magnetic) regions, we consider only pixels in which the vertical line-of-sight field strength of the reference map exceeds 100\,G.
We ignore 67 pixels with $B_{\rm los} > 2500$\,G as outliers. The histograms then include 25\% of the pixels, i.e., 146\,529 out of 589\,824 pixels. The different line colours correspond to the different cases: \begin{itemize} \item ('straight VTF' $-$ 'reference') $\rightarrow$ red line \item ('normalised straight VTF' $-$ 'reference') $\rightarrow$ blue line \item ('rabbit VTF' $-$ 'reference') $\rightarrow$ green line \item ('snapshot 1' $-$ 'snapshot 2') $\rightarrow$ black line \item ('snapshot 1' $-$ 'snapshot 11') $\rightarrow$ yellow line \end{itemize} The histograms of the $v_{\rm los}$ maps are plotted with a bin width of 10\,m/s in Fig.~\ref{fig:histograms}a. The distributions peak close to vanishing differences. The peak height and the width are a quantitative measure of the measurement error: the larger the peak and the smaller the width, the smaller the measurement error. The visual impression of Fig.~\ref{fig:diff_vlos} is reflected in the distributions in terms of peak height and width: 'snap1 - snap2' has the largest peak and the smallest width, i.e., the smallest errors, and 'snap1 - snap11' has the smallest peak and the largest width, i.e., the largest errors. The histogram 'normalised straight - reference' indicates smaller errors than 'straight - reference'. It is also seen that the performance of the 'rabbit' mode (green line) is not better than that of the 'straight' mode (red line). The histograms of the $B_{\rm los}$ maps are plotted with a bin width of 5\,G in Fig.~\ref{fig:histograms}b. Again, the black line ('snap1 - snap2') has the largest peak (at 30\,521). The ordinate is clipped at a value of 12\,000 to increase the visibility of the blue line ('normalised straight') and the red line ('straight'). The latter distributions peak at 11\,716 and 10\,177, respectively, i.e., again, the 'normalised straight' mode performs better than the 'straight' mode. The green line ('rabbit' mode) is worse and has a peak at 8\,263.
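The construction of these masked histograms can be sketched in a few lines (a minimal illustration; the array names, thresholds, and bin width are stand-ins for the actual inversion products, not our production pipeline):

```python
import numpy as np

def masked_histogram(diff_map, b_ref, b_min=100.0, b_max=2500.0, bin_width=5.0):
    """Histogram of a difference map, restricted to pixels whose reference
    |B_los| lies between b_min and b_max (all values in Gauss)."""
    mask = (np.abs(b_ref) > b_min) & (np.abs(b_ref) < b_max)
    values = diff_map[mask]
    # symmetric bins of the given width, centred on vanishing differences
    edge = bin_width * (np.ceil(np.max(np.abs(values)) / bin_width) + 1.0)
    bins = np.arange(-edge, edge + bin_width, bin_width)
    counts, edges = np.histogram(values, bins=bins)
    return counts, edges, int(mask.sum())
```

With the $0.1$--$2.5$\,kG mask of Fig.~\ref{fig:histograms} this selection keeps about 25\% of the pixels; the peak height and width of `counts` then quantify the error as discussed in the text.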
As expected, the differences between snap1 and snap11 are the largest, such that the central peak of smallest differences occurs only 4\,276 times. The histogram distributions in Fig.~\ref{fig:histograms} are not Gaussian. In a Gaussian distribution, 68.27\% of all values would lie within one standard deviation, $1\sigma$, i.e., the full width including 68.27\% corresponds to $2\cdot\sigma$, and at that level the peak value is reduced by a factor of 1.65. Analogously, we determine the width, $w_{\rm full}$, of the distribution that includes 68.27\% of all values, and then quantify the error as $1\sigma := w_{\rm full}/2$. \begin{table} \caption{\label{tab:vlosmaps} Half of full distribution width, $w_{\rm full}/2$, corresponding to $1\sigma$, for difference maps of $v_{\rm los}$ in m/s. The bin width is set to 10 m/s.} \begin{tabular}{c|c|c|c|c} $\triangle v_{\rm los}$ & $w_{\rm full}/2$ & $w_{\rm full}/2$ & $w_{\rm full}/2$ & $w_{\rm full}/2$ \\ unit & [m/s] & [m/s] & [m/s] & [m/s] \\ \hline 'straight' & 67 & 46 & 181 & 201 \\ {\!'norm.~str.'\!} & 36 & 25 & 88 & 94 \\ 'rabt' & 55 & 39 & 146 & 159 \\ 's1-s2' & 18 & 12 & 49 & 55 \\ 's1-s11' & 195 & 142 & 432 & 470 \\\hline px fract. & {100\%} & {75 \%} & {25 \%} & {20\%} \\ $\!|B_{\rm los}|$ mask\! & {\!all pixel}\! & {$\!0\!-\!100$G\! } & {$\!0.1\!-\!2.5$kG\!} & {$\!0.2\!-\!2.5$kG\!} \\ \end{tabular} \end{table} \begin{table} \caption{\label{tab:Bvermaps} Same as Tab.~\ref{tab:vlosmaps} for deviations of $B_{\rm los}$ in G. For the calculation of $w_{\rm full}$, the bin width is set to 1 Gauss.} \begin{tabular}{c|c|c|c|c} $\triangle B_{\rm los}$ & $w_{\rm full}/2$ & $w_{\rm full}/2$ & $w_{\rm full}/2$ & $w_{\rm full}/2$ \\ unit & [G] & [G] & [G] & [G] \\ \hline 'straight' & 4 & 2 & 48 & 61 \\ {\!'norm.~str.'\!} & 3 & 1 & 36 & 44 \\ 'rabt' & 3 & 1 & 84 & 109 \\ 's1-s2' & 2 & 1 & 10 & 12 \\ 's1-s11' & 16 & 7 & 91 & 105 \\\hline px fract. & {100\%} & {75 \%} & {25 \%} & {20\%} \\ $\!|B_{\rm los}|$ mask\! & {\!all pixel}\!
& {$\!0\!-\!100$G\! } & {$\!0.1\!-\!2.5$kG\!} & {$\!0.2\!-\!2.5$kG\!} \\ \end{tabular} \end{table} \begin{table} \caption{\label{tab:binning} Half of full distribution width, $w_{\rm full}/2$, corresponding to $1\sigma$, for four different binning cases using the $B_{\rm los}$-mask $0.1-2.5$\,kG. Values are given in m/s. The bin width is set to 10 m/s.} \begin{tabular}{c|c|c|c|c} binning & 1 by 1 & 2 by 2 & 3 by 3 & 4 by 4 \\ \hline 'straight' & 181 & 169 & 159 & 148 \\ {\!'norm.~str.'\!} & 88 & 77 & 68 & 61 \\ 'rabt' & 146 & 134 & 121 & 114 \\ 's1-s2' & 49 & 44 & 39 & 35 \\ 's1-s11' & 432 & 409 & 378 & 349 \\ \hline \end{tabular} \end{table} These $w_{\rm full}/2$-values are given in Tabs.~\ref{tab:vlosmaps} \& \ref{tab:Bvermaps} for $v_{\rm los}$ and $B_{\rm los}$, respectively. We determine the distribution widths for four different mask criteria: all pixels (first column), pixels with $0<|B_{\rm los}|<0.1$\,kG (second column, 75\%), pixels with $0.1<|B_{\rm los}|<2.5$\,kG (third column, 25\%), and pixels with $0.2<|B_{\rm los}|<2.5$\,kG (fourth column, 20\%). \subsubsection{Errors in $v_{\rm los}$ maps} The Doppler shift sensitivity of VTF was estimated (see Sect.~\ref{sec:measurement}) to be some 80\,m/s. This includes the noise level. The $1\sigma$-value of the 'straight' mode is $1\sigma=67$\,m/s if no mask is applied, which is of the same order of magnitude as the VTF sensitivity. For magnetic pixels (last two columns in Tab.~\ref{tab:vlosmaps}), the $1\sigma$-error is more than twice the noise level: $1\sigma=181$\,m/s for $|B_{\rm los}|>100$\,G and 201\,m/s for $|B_{\rm los}|>200$\,G. Hence, the $1\sigma$-errors from solar evolution are substantial for a measurement span of 11\,s. In the 'normalised straight' mode, the $1\sigma$-error decreases substantially, by roughly a factor of two: $1\sigma=36$\,m/s if no mask is applied, 88\,m/s for $|B_{\rm los}|>100$\,G, and 94\,m/s for $|B_{\rm los}|>200$\,G.
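The width estimate used for these tables can be implemented compactly; the percentile-based version below is a stand-in for the histogram procedure described above (a sketch, not the production code):

```python
import numpy as np

def one_sigma_width(values, fraction=0.6827):
    """Half of the central interval that contains `fraction` of all
    values, i.e. w_full / 2 as used in Tabs. 1-3."""
    tail = 100.0 * (1.0 - fraction) / 2.0          # 15.865 for fraction = 0.6827
    p_lo, p_hi = np.percentile(values, [tail, 100.0 - tail])
    return 0.5 * (p_hi - p_lo)
```

For a Gaussian sample this estimate reduces to the usual standard deviation, which is the consistency check behind the factor 1.65 quoted above.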
The errors of the rabbit mode are slightly smaller than those of the straight mode, but not as small as those of the normalised straight mode. Comparing the errors of 's1-s2' and 's1-s11' demonstrates that the temporal evolution is significant: the differences between the first two snapshots are small, much smaller than the VTF sensitivity. They are probably mostly caused by the low spectral resolution, which does not allow for a more accurate determination. However, the errors between the first and the eleventh snapshot increase by a factor of eight or more: from $1\sigma=18$\,m/s to 195\,m/s for all pixels, and from $1\sigma=49$\,m/s to 432\,m/s for the pixels with $|B_{\rm los}|>100$\,G. We also computed the histograms for spatially binned data and list the results in Tab.~\ref{tab:binning} for the magnetic pixels with $B_{\rm los}$ between 0.1 and 2.5\,kG. We find that the $1\sigma$-errors of the straight mode decrease from $1\sigma=181$\,m/s for no binning, to $1\sigma=169$\,m/s for two by two binning, and to $1\sigma=148$\,m/s for four by four binning. Hence, even with a four by four binning the time-evolution error significantly exceeds the VTF sensitivity if the measurement is done in the straight mode. A four by four binning would correspond to the diffraction limit of a 1\,m-aperture telescope. Note that, in case the measurement objective is to measure only velocities and no magnetic fields, the non-polarimetric Doppler mode can be chosen, in which one accumulation suffices to measure the Stokes-$I$ line profile. This measurement only takes 0.9\,s (VTF IPC, v.3.4, see Sect.~\ref{sec:measurement}), and deviations due to solar evolution can be safely neglected. \subsubsection{Errors in $B_{\rm los}$ maps} In Tab.~\ref{tab:Bvermaps} the error values characterising the distribution of the $\triangle B_{\rm los}$ map are listed. These errors are to be compared to the VTF measurement sensitivity of 20\,G, which results from the photon noise (see Sect.~\ref{sec:measurement}).
It is seen that the $1\sigma$ errors for the entire map and for weak-field pixels ($<100$\,G) are negligible for the straight and normalised straight modes. However, within the magnetic filigree the time-evolution errors are substantial. Considering all pixels with $B_{\rm los}$ between 100 and 2500\,G, the errors amount to 48\,G in the straight mode and to 36\,G in the normalised straight mode, which is significantly larger than the 20\,G error due to noise. Interestingly, the rabbit mode has an error of 84\,G and thus performs worse than both the straight and the normalised straight mode. It seems that the Milne-Eddington inversion can deal better with profiles in which neighbouring wavelength points are recorded close in time. It is not straightforward to explain this behaviour. The differences between two consecutive snapshots, 's1-s2', are negligible and smaller than 20\,G in all cases. The time evolution is well reflected in the differences between the first and the eleventh snapshot, 's1-s11'. These differences correspond to an error of $1\sigma=91$\,G for the magnetic pixels exceeding 100\,G. Hence, for filigree and magnetic features which are shuffled around in the inter-granular lanes, the fields are strong enough, and more accumulations and longer effective exposure times will not improve the measurement. Due to their dynamic behaviour and strong fields, it may even be better to measure them with fewer accumulations, i.e., shorter effective exposure times. For measuring magnetic fields in and above granules, the situation is different: the evolution time scale is longer and the magnetic fields are weaker. Hence, for granular magnetic fields more accumulations might be wanted to improve the signal-to-noise ratio. However, one should be aware that a homogeneous horizontal field of 100\,G produces a $Q$-amplitude of only some 0.0017 for Fe\,I 617.3\,nm.
\subsection{Implication for data pipelines}\label{sec:pipeline} We have seen that the errors due to solar evolution can in all cases be reduced by normalising the narrow-band intensities to the instantaneous continuum intensity during the measurement, as indicated by Eq.~(\ref{eq:norm}). This can be done in our case, because we compute the whole line profile and its continuum for each snapshot. However, a Fabry-Perot instrument measures only one wavelength point at a time. But instruments like VTF, CRISP, CHROMIS, \& IBIS simultaneously measure a broad-band image at a neighbouring wavelength. These broad-band images can be used to mimic the temporal variation of the continuum intensity. To our knowledge, this is not currently done in existing Fabry-Perot data pipelines. In order to apply Eq.~(\ref{eq:norm}), one needs to scale the broad-band images to the continuum intensity of the narrow-band images. As seen in Fig.~\ref{fig:convolution}, a complication may exist if a true continuum wavelength point is not part of the measurement sequence. Hence, in practice two approaches could do the job: (1) Add one more wavelength point to include a true continuum point outside the line. This point can then be used to determine the scaling factor between the broad-band intensity and the narrow-band continuum intensity. (2) In case the continuum intensity cannot be measured, e.g., because the line is too broad for the narrow pre-filter\footnote{The pre-filter is needed to suppress side peaks from the etalon(s).}, one would compute the average measured profile of a quiet Sun region, fit a quiet Sun profile to it, and determine the continuum intensity from the fitted profile to scale the broad-band intensities. \section{Conclusions}\label{sec:conclusions} We investigate the measurement errors that arise due to temporal changes during the measurement process.
Already with the 1.5\,m telescope GREGOR (see Fig.~\ref{fig:gregor}), it can be seen that magnetic features in inter-granular lanes are shuffled around on a time scale of 11\,s. This is also seen in MURAM numerical simulations of a region with a mean vertical field strength of 200\,G (see Fig.~\ref{fig:cont_diff}). Based on these simulations, we mimic the measurement process of VTF and determine the measurement error by defining the 'truth' to be the temporal average over the time that elapses during the measurement. We find that granules are less affected and inter-granular areas with magnetic features are strongly affected. Depending on the science objective, it might be advisable to adapt the effective exposure time in the measurement: for measuring strong fields in the inter-granular lanes, only a few accumulations may suffice, which reduces the measurement error as the effective exposure time is shortened. For weaker fields within granules, more accumulations can be afforded as the evolution time scale is longer. And for predominantly horizontal (perpendicular to the line of sight) fields, more accumulations are necessary, because the amplitude of linear polarisation in horizontal fields is much weaker than that of circular polarisation in vertical fields. A key result of this investigation is the finding that a normalisation of the narrow-band intensities with the temporally averaged continuum intensity, following Eq.~(\ref{eq:norm}) (see Fig.~\ref{fig:composition} for an illustration), leads to a significant reduction of the measurement errors. In Sect.~\ref{sec:pipeline}, we discuss how this normalisation could be integrated in existing pipelines, taking advantage of the simultaneously measured broad-band image. \begin{acknowledgements} We thank Matthias Rempel for providing the MURAM simulations, and Markus Schmassmann for making the data accessible and for software to read the data. We thank Philip Lindner for providing the GREGOR data used in Section 1.
\end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Josephson effects in atomic Bose-Einstein condensates (BECs) receive considerable attention in experimental and theoretical studies as a prominent manifestation of quantum coherence on a macroscopic scale. These effects have been initially studied for coupled condensates in double-well traps \cite{Zapata1998,Smerzi1997,Raghavan1999,Albiez2005,Levy2007,LeBlanc2011} and coherently coupled spinor condensates \cite{PhysRevA.59.R31,PhysRevLett.105.204101}. In recent years Josephson effects were also observed and analyzed in more exotic systems such as fermionic superfluids \cite{Valtolina2015}, polariton condensates \cite{abbarchi2013macroscopic}, spin-orbit coupled BECs \cite{Wang2018,PhysRevLett.120.120401} and condensates with attractive interaction \cite{Spagnolli2017}. A common theoretical description of two coupled BECs is based on the bosonic Josephson equations. This model has been thoroughly studied theoretically \cite{Smerzi1997,Raghavan1999}. Various experiments confirm the predictions of this model, which include small-amplitude (plasma) oscillations, large-amplitude anharmonic oscillations and quantum self trapping \cite{Albiez2005,Levy2007,PhysRevLett.105.204101}. A formal derivation of this model relies on the two-mode description of the system, whereby each of the two condensates retains its density profile and a homogeneous phase. Such an approximation is valid if the coupling between the two BECs is much weaker than the energy required to create collective excitations inside each condensate. If this requirement is not fulfilled then internal collective excitations can be generated and influence the Josephson dynamics of the system. In order to describe the interference between internal and mutual collective motions of the condensates it is necessary to go beyond the simplified picture of the Josephson equations. 
Moreover, the structure of the collective excitation spectrum, and consequently the result of such an interference, may be considerably different depending on the geometric properties of the system. Existing theoretical studies address this question for certain specific types of collective excitations and trap geometries, such as phonon excitations in toroidal condensates \cite{Bidasyuk2016} and higher modes of the harmonic trap \cite{PhysRevA.89.023614}. However, various aspects of collective excitations in coupled BECs and their relation to the tunneling transport have yet to be fully explored. In the present work we analyze the collective excitations and the small-amplitude Josephson oscillations in a system of two highly anisotropic BECs. Two elongated condensates are placed parallel to each other, forming a so-called long Josephson junction (LJJ). Such Josephson junctions were extensively studied for superconductors \cite{RevModPhys.51.101,PhysRevB.53.6622,PhysRev.164.538} and in atomic BECs, mostly in the context of Josephson vortex dynamics \cite{PhysRevA.71.011601,Gil_Granados_2019}. Our interest in these systems is largely inspired by the experimental results of Ref.~\cite{LeBlanc2011}, where the populations of the wells were shown to oscillate at several distinct frequencies, indicating interference with internal collective excitations generated inside each condensate. In order to describe the Josephson dynamics on the level of collective excitations in the system we need to relate the general physical picture of the Josephson equations to the Bogoliubov-de Gennes formalism. Following the developments in Refs.~\cite{Danshita2005,Paraoanu2001,Burchianti2017}, the Josephson plasma oscillations can be related to the lowest dipole mode in the two-well system. Along a similar line of arguments, we show here how multiple Bogoliubov modes are involved in the low-energy Josephson dynamics of the anisotropic system.
We model the dynamics of the long bosonic Josephson junction in the mean-field framework of the two-dimensional Gross-Pitaevskii equation. The observed dynamical picture is complemented with an analysis of the collective excitation spectrum based on the Bogoliubov-de Gennes approach. We show how the specific structure of the Bogoliubov modes may lead to various physically relevant effects in the collective dynamics. In addition to enriched Josephson dynamics, we identify, for a certain range of barrier heights, an unexpected quasi-degeneracy of the low-lying Bogoliubov modes with localization of the corresponding quasiparticles at the edges of the junction. Finally, we develop a simplified one-dimensional hydrodynamic model of the long bosonic Josephson junction that is able to qualitatively reproduce all important features of the full simulations. \section{Simulations of long Bose-Josephson junctions} First we briefly describe simulations of the dynamics of trapped condensed atoms in a bosonic LJJ. The trap potential is approximated by an anisotropic harmonic confinement in all spatial dimensions with a Gaussian barrier in the $y$ direction, \begin{equation} V(x,y,z) = \frac{m}{2}(\omega_x^2 x^2 + \omega_y^2 y^2 + \omega_z^2 z^2) + V_b \,\mathrm{e}^{-2 y^2 / \lambda^2}. \label{eqn:pot} \end{equation} The barrier height $V_b$ is a tunable parameter to control the coupling between the two condensate clouds forming in this potential. This setup is patterned after the experimental one described in Ref.~\cite{LeBlanc2011}. We limit our considerations to values of $V_b$ larger than the chemical potential $\mu$ of the system, such that classical hydrodynamic flow across the barrier is impossible and the system remains in the tunneling regime. The harmonic part of the potential is assumed to be cigar shaped ($\omega_x \ll \omega_{y,z}$). The barrier cuts the cigar into two parts along the $(x,z)$ plane parallel to the long axis of the trap.
In such a configuration the trapping in the $z$ direction remains harmonic. Disregarding the dynamics in this direction, we may reduce the description of the system to two dimensions using a Gross-Pitaevskii equation (GPE) \begin{equation} \label{eqn:GPE} \mathrm{i}\hbar\pdv{t}\Psi\left(\vb*{r},t\right) = \left(-\frac{\hbar^2 \laplacian}{2m} + V\left(\vb*{r}\right) + g \abs{\Psi}^2 \right) \Psi\left(\vb*{r},t\right), \end{equation} with $\vb*{r} = (x,y)$. The condensate wave function $\Psi\left(\vb*{r},t\right)$ is normalized to the total number of atoms in the system, $\int \dd{\vb*{r}} \abs{\Psi\left(\vb*{r},t\right)}^2 = N$, which is a conserved quantity. The non-linear interaction strength in two dimensions is then given by \cite{Bao2003} \begin{equation} \label{eqn:interaction_g} g = \frac{4\pi\hbar^2 a}{m} \sqrt{\frac{m \omega_z}{2\pi\hbar}}, \end{equation} where $a$ is the s-wave scattering length and $m$ the mass of a condensate atom. We here consider $^{87}\mathrm{Rb}$ atoms with $a = 5.819~\mathrm{nm}$ and $m = 86.91~\mathrm{u}$ and choose $\omega_x = 2\pi\cdot 1~\mathrm{Hz}$, $\omega_y = \omega_z = 2\pi\cdot 50~\mathrm{Hz}$, a barrier $1/e^2$ half-width $\lambda = 3.5~\mu\mathrm{m}$, and a total number of atoms $N = 2.0 \cdot 10^4$. However, most of our results should be qualitatively similar for any bosonic LJJ in the tunneling regime. For the calculations presented in this section we use a barrier amplitude $V_b/h = 375~\mathrm{Hz}$. The ground state stationary solution of the GPE (\ref{eqn:GPE}) has the shape of two nearly one-dimensional parallel condensates (see Fig.~\ref{fig:config}). We find a chemical potential $\mu/h= 273~\mathrm{Hz}$ and consequently $V_b / \mu = 1.37$. Note that the chemical potential is still considerably larger than the trapping frequencies. Therefore, the mean-field treatment based on the GPE (\ref{eqn:GPE}) is justified.
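For orientation, the numerical value of the effective two-dimensional coupling follows directly from Eq.~(\ref{eqn:interaction_g}) and the parameters listed above; the following sketch evaluates it in SI units (only standard values of $\hbar$ and the atomic mass unit are assumed):

```python
import numpy as np

hbar = 1.054571817e-34    # J s
amu  = 1.66053906660e-27  # kg, atomic mass unit

# Parameters quoted in the text for 87Rb
a       = 5.819e-9            # s-wave scattering length, m
m       = 86.91 * amu         # atomic mass, kg
omega_z = 2.0 * np.pi * 50.0  # transverse trap frequency, rad/s

# Effective 2D interaction strength, Eq. (3)
g = (4.0 * np.pi * hbar**2 * a / m) * np.sqrt(m * omega_z / (2.0 * np.pi * hbar))
print(g)   # in J m^2 (energy times area)
```

The result is of order $10^{-45}~\mathrm{J\,m^2}$, which together with $N$ and the trap geometry fixes the chemical potential quoted below.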
\begin{figure}[tbp] \includegraphics[width=\linewidth]{setup.eps} \caption{Two-dimensional ground state particle density $\abs{\Psi}^2$ (in arbitrary units) of the long bosonic Josephson junction. Note the different scales of the $x$ and $y$ axes.} \label{fig:config} \end{figure} Josephson effects in bosonic systems lead to particle transfer between the wells. The relative population imbalance $Z(t)$ is the dynamical quantity which we use to quantify these effects, \begin{equation}\label{eqn:Zdef} Z(t) = \frac{N_1(t) - N_2(t)}{N}, \end{equation} where $N_1(t)$ and $N_2(t)$ denote the number of atoms in each well ($N_1(t) + N_2(t) = N$). These populations are related to the solutions $\Psi\left(\vb*{r},t\right)$ of the GPE. They are defined as integrals of the particle density over each well, \begin{equation}\label{eqn:N12def} N_j(t) = \int_{A_j}\dd{\vb*{r}} \abs{\Psi\left(\vb*{r},t\right)}^2, \end{equation} where $j=1,2$ and $A_j$ is the area subtended by well $j$. The second dynamical quantity of interest is the phase difference between the condensates, \begin{equation} \varphi(t) = \theta_2(t) - \theta_1(t), \end{equation} where each phase is defined by averaging over the corresponding well, \begin{equation} \theta_j(t) = \arg{\int_{A_j}\dd{\vb*{r}}\Psi\left(\vb*{r},t\right)}. \end{equation} One may show that this quantity is canonically conjugate to the population imbalance. The time evolution of the population imbalance and relative phase of two BECs is often accurately described by the Josephson equations \cite{Smerzi1997,Raghavan1999} \begin{align} &\dot{Z} = - J\sqrt{1-Z^2} \sin(\varphi), \label{eqn:TMM_sym_Zdot}\\ &\dot{\varphi} = \Lambda Z + J \frac{Z}{\sqrt{1-Z^2}} \cos(\varphi). \label{eqn:TMM_sym_phasedot} \end{align} Here, $\Lambda$ describes the intra-well interactions and $J$ characterizes the coupling between the two condensates. These quantities may be expressed in terms of the solutions of the GPE.
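On a numerical grid the definitions (\ref{eqn:Zdef})--(\ref{eqn:N12def}) and the well phases reduce to sums over the two half planes $y<0$ and $y>0$. A minimal sketch (the index order `[x, y]` and the uniform grid spacings are assumptions of this illustration):

```python
import numpy as np

def imbalance_and_phase(psi, y, dx, dy):
    """Population imbalance Z, Eqs. (4)-(5), and relative phase phi,
    Eqs. (6)-(7), for psi sampled on a grid; well 1 is the half plane y < 0."""
    density = np.abs(psi)**2
    left = y < 0.0                        # columns belonging to well 1
    n1 = density[:, left].sum() * dx * dy
    n2 = density[:, ~left].sum() * dx * dy
    z = (n1 - n2) / (n1 + n2)
    theta1 = np.angle(psi[:, left].sum())
    theta2 = np.angle(psi[:, ~left].sum())
    return z, theta2 - theta1
```

Applied at each stored time step of a GPE simulation, this yields the time series $Z(t_m)$ and $\varphi(t_m)$ analyzed below.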
A derivation of the Josephson equations with explicit definitions for $\Lambda$ and $J$ based on the two-mode model may be found in the \hyperref[app:TMM]{Appendix}. The ratio between $\Lambda$ and $J$ defines the physical region of validity of this model. It is expected to be valid in the region $1 \ll \Lambda/J \ll N^2$, which is also called the `Josephson regime' \cite{PitaevskiiStringari2016,Leggett2001}. The value of this ratio is strongly dependent on the barrier height. For the range of configurations considered in the present study this ratio ranges from $\Lambda/J \approx 7$ ($V_b/\mu=1$) to $\Lambda/J \approx 9800$ ($V_b/\mu=2$), which is well inside the Josephson regime. The Josephson equations (\ref{eqn:TMM_sym_Zdot},\ref{eqn:TMM_sym_phasedot}) have been successfully applied to describe various double-well experiments \cite{Albiez2005,Levy2007,PhysRevLett.105.204101}. Here we will use these equations as a basic reference and analyze their limitations when applied to anisotropic condensates. In the case of a small initial population imbalance $\abs{Z_0}\ll 1$ or initial phase difference $\varphi_0\ll\pi/2$, the equations (\ref{eqn:TMM_sym_Zdot},\ref{eqn:TMM_sym_phasedot}) reduce to a harmonic oscillator equation with the frequency \begin{equation}\label{eqn:plasma_frequency} \omega_p = \sqrt{J(\Lambda + J)}, \end{equation} known as the plasma frequency. Thus the model predicts harmonic oscillations for small initial imbalances or phase differences. The value of the plasma frequency marks another important limitation of the model based on the system (\ref{eqn:TMM_sym_Zdot},\ref{eqn:TMM_sym_phasedot}). In order for the Josephson oscillations to be decoupled from the internal oscillations inside each condensate it is necessary that $\omega_p \ll \omega_{x,y,z}$ \cite{PitaevskiiStringari2016}. This condition is not fulfilled for our system and therefore we can expect the effects of intra-well collective excitations to influence the Josephson dynamics.
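These statements are easy to check by integrating Eqs.~(\ref{eqn:TMM_sym_Zdot},\ref{eqn:TMM_sym_phasedot}) directly; the sketch below uses illustrative values of $\Lambda$ and $J$ (not those of our trap) and verifies that a small initial imbalance oscillates at the plasma frequency (\ref{eqn:plasma_frequency}):

```python
import numpy as np

Lam, J = 100.0, 1.0   # illustrative couplings, Lam/J well inside the Josephson regime

def rhs(s):
    """Right-hand side of the Josephson equations (8)-(9)."""
    z, phi = s
    dz   = -J * np.sqrt(1.0 - z * z) * np.sin(phi)
    dphi = Lam * z + J * z / np.sqrt(1.0 - z * z) * np.cos(phi)
    return np.array([dz, dphi])

def rk4(s, dt, nsteps):
    """Classical 4th-order Runge-Kutta integration of the (z, phi) pair."""
    traj = [s.copy()]
    for _ in range(nsteps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        traj.append(s.copy())
    return np.array(traj)

omega_p = np.sqrt(J * (Lam + J))      # plasma frequency, Eq. (10)
dt = 2.0 * np.pi / omega_p / 1000.0   # 1000 steps per plasma period
# one plasma period starting from Z_0 = 0.01, phi_0 = 0
traj = rk4(np.array([0.01, 0.0]), dt, 1000)
```

After one period $2\pi/\omega_p$ the imbalance returns to its initial value, confirming the harmonic small-amplitude limit of the two-mode model.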
For larger initial population imbalance or phase difference the Josephson equations predict the development of anharmonic large amplitude oscillations and the transition to the self-trapped regime at the critical population imbalance $Z_{cr}=2\sqrt{J/\Lambda(1-J/\Lambda)}$ \cite{Raghavan1999}. \begin{figure}[tbp] \begin{center} \includegraphics[width=\linewidth]{Zphi_t.eps} \caption{(a) Population imbalance $Z(t)$ and (b) relative phase $\varphi(t)$ as a function of time extracted from a GPE simulation. (c) The Fourier spectrum $|\tilde{Z}(\omega)|$ of the population imbalance (in arbitrary units). The vertical red line indicates the plasma frequency $\omega_p$ predicted by the two-mode model.} \label{fig:Zsequences} \end{center} \end{figure} In order to study the near-equilibrium dynamics of the junction we perform numerical simulations using the GPE (\ref{eqn:GPE}). The initial state is prepared close to the ground state with an initial imbalance $Z_0 = 0.01$, within the plasma oscillation regime. In a real experiment such an initialization can be realized with an additional linear offset potential that is switched off at $t=0$. In our simulations the system evolves freely for $5~\mathrm{s}$ and the population imbalance $Z(t)$ is determined at equally spaced times $t_m$. The resulting time series $Z(t)$ is shown in Fig.~\ref{fig:Zsequences}a, clearly revealing population oscillations that are not governed by a single frequency. We analyze the spectral properties of the oscillations by taking the discrete Fourier transform (DFT) of the time series $Z(t_m)$, \begin{equation} \tilde Z(\omega_n) = \sum_{m=0}^{M} Z(t_m) \mathrm{e}^{-\mathrm{i} \omega_n t_m}, \quad \omega_n=2\pi n/t_M, \end{equation} where $M$ is the total number of time steps in the simulation. The Fourier spectrum $\tilde Z(\omega)$ shows several well separated peaks (Fig.~\ref{fig:Zsequences}c). We consider then $Z(t)$ to be a superposition of several harmonic oscillations.
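The subsequent peak extraction from $|\tilde Z(\omega_n)|$ can be sketched as follows (the acceptance threshold for a local maximum is an ad hoc choice of this illustration):

```python
import numpy as np

def dft_peaks(z, dt, threshold=0.1):
    """Angular frequencies of well-separated peaks in |Z~(omega)|, Eq. (15):
    local maxima exceeding `threshold` times the global maximum."""
    spec = np.abs(np.fft.rfft(z))
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(z), d=dt)
    is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) \
              & (spec[1:-1] > threshold * spec.max())
    idx = np.where(is_peak)[0] + 1
    return omega[idx], spec[idx]
```

For a synthetic superposition of two cosines this recovers both frequencies; applied to the simulated $Z(t_m)$ it returns the peak positions discussed below.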
We extract the frequencies of these distinct oscillations simply by locating the maxima in the spectrum. It is worth noticing that none of the detected frequencies can be related to the plasma frequency predicted by the two-mode model. In order to investigate this finding further, we vary the initial population imbalance and the initial phase difference. The results plotted in Fig.~\ref{fig:InitialZ} show that multiple peaks in the spectrum can be observed at arbitrarily small initial imbalance or phase difference. Their positions remain stable in the regions $Z_0\ll Z_{cr}$ and $\varphi_0\ll\pi/2$. Furthermore, the frequencies get smaller with increasing $Z_0$ or $\varphi_0$, as is similarly the case for the plasma frequency \cite{Raghavan1999,Martinez2018}. Finally, in the regime of large-amplitude oscillations distinct frequencies can hardly be distinguished and the DFT spectrum does not provide useful information. In the rest of the paper we will only consider the regime of small oscillations with $Z_0 = 0.01$ and $\varphi_0 = 0$, where the frequencies are well defined and stable. \begin{figure}[tbp] \begin{center} \includegraphics[width=\linewidth]{Initscan.eps} \caption{Frequencies corresponding to peaks in the Fourier spectrum $\tilde{Z}$ of the population imbalance as a function of the initial imbalance $Z_0$ (left) and the initial phase difference $\varphi_0$ (right). The vertical red line in the left panel indicates the critical population imbalance $Z_{cr} = 0.19$ for the onset of the self-trapping regime predicted by the two-mode model.} \label{fig:InitialZ} \end{center} \end{figure} \section{Bogoliubov-de-Gennes formalism} The results of the previous section show that several modes with different frequencies contribute to the population oscillations in the system. Furthermore, the number of these modes and their frequencies are almost independent of the oscillation amplitude.
This indicates that non-linear mode mixing effects are negligible in the considered regime. Nevertheless, such a multi-mode population oscillation spectrum is a clear indication that intra-well collective excitations influence the tunneling dynamics. In order to further analyze how different collective excitations influence the Josephson dynamics we employ the Bogoliubov-de Gennes formalism \cite{PitaevskiiStringari2016,Danshita2005}. We first write the condensate wave function in the form \begin{equation}\label{eqn:perturbed_gs} \Psi\left(\vb*{r},t\right) = \mathrm{e}^{-\frac{\mathrm{i}\mu t}{\hbar}} \left[\psi_0\left(\vb*{r}\right) + \delta\psi\left(\vb*{r},t\right) \right], \end{equation} where $\psi_0$ is the stationary ground state, $\mu$ is the corresponding chemical potential and $\delta\psi\left(\vb*{r},t\right)$ is a perturbation of the form \begin{equation} \label{eqn:Bogoliubov_ansatz} \delta\psi\left(\vb*{r},t\right) = \sum_k c_k \left[ \mathrm{e}^{-\mathrm{i}\omega_k t} u_k\left(\vb*{r}\right) + \mathrm{e}^{\mathrm{i}\omega_k t} v_k^{\ast}\left(\vb*{r}\right) \right], \end{equation} with the additional condition that the perturbation is small ($c_k \ll \sqrt{N}$). After inserting this ansatz into the GPE (\ref{eqn:GPE}) and keeping only the terms linear in $c_k$ we obtain the familiar system of Bogoliubov-de Gennes equations \begin{equation}\label{eqn:BdG} \begin{aligned} \hbar\omega_k u_k = (\hat{H}_0 + 2g \abs{\psi_0}^2 -\mu) u_k + g \psi_0^2 v_k, \\ -\hbar\omega_k v_k = (\hat{H}_0 + 2g \abs{\psi_0}^2 -\mu) v_k + g \psi_0^2 u_k, \end{aligned} \end{equation} where \begin{equation} \hat{H}_0 = -\frac{\hbar^2\laplacian}{2m} + V\left(\vb*{r}\right). \end{equation} The usual normalization condition for the Bogoliubov modes reads \[ \int d\mathbf{r} \left( |u_k|^2 - |v_k|^2 \right) = 1. \] This linear system of equations can be solved numerically in order to obtain the eigenmodes and their corresponding frequencies $\omega_k$.
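To illustrate the structure of the eigenproblem (\ref{eqn:BdG}), the following one-dimensional finite-difference sketch solves it in dimensionless oscillator units ($\hbar=m=\omega=1$). For brevity it is run in the non-interacting limit $g=0$, where the positive frequencies must reduce to integer multiples of the trap frequency; the full two-dimensional problem is treated in the same way with the actual $g$ and $\psi_0$.

```python
import numpy as np

# Dimensionless units: hbar = m = omega_trap = 1.
n, L = 400, 12.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Central finite-difference Laplacian
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2

H0 = -0.5 * lap + np.diag(0.5 * x**2)      # harmonic trap Hamiltonian

g = 0.0                                    # non-interacting limit for this check
E, states = np.linalg.eigh(H0)
psi0 = states[:, 0] / np.sqrt(dx)          # ground state, normalized to one particle
mu = E[0]

A = H0 + np.diag(2.0 * g * psi0**2) - mu * np.eye(n)
B = np.diag(g * psi0**2)
M = np.block([[A, B], [-B, -A]])           # BdG matrix of Eq. (18)
omega = np.sort(np.linalg.eigvals(M).real)
positive = omega[omega > 1e-8]             # physical branch; spectrum comes in +/- pairs
```

For $g=0$ the lowest positive frequencies come out as $1, 2, 3, \ldots$ in units of the trap frequency, as they must.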
We solve the system (\ref{eqn:BdG}) for different values of the barrier height $V_b$. The calculated spectrum as a function of $V_b$ is presented in Fig.~\ref{fig:BarrierScan}. We also perform full GPE simulations with barrier heights in the same range to compare the frequencies extracted from the time series $Z(t)$ with the Bogoliubov spectrum. \begin{figure*}[tbp] \includegraphics[width=\linewidth]{bdg_main.eps} \caption{Spectrum of elementary excitations of the long Josephson junction. Solid lines of different colors correspond to the Bogoliubov modes of different symmetry: A --- transversely symmetric (blue lines), B --- transversely antisymmetric but longitudinally symmetric (green lines) and C --- antisymmetric in both directions (red lines). The insets show the spatial distribution of the lowest excitations of each of the three types (scale of the color bar is in arbitrary units). The corresponding modes are marked on the main plot with the letter on the right. Black crosses correspond to the peak positions in the Fourier spectrum $\tilde Z(\omega)$ obtained from the full GPE simulations. The purple dashed line is the estimate of the Josephson plasma frequency $\omega_p$ based on the two-mode model. } \label{fig:BarrierScan} \end{figure*} As one can see in Fig.~\ref{fig:BarrierScan}, only some of the Bogoliubov modes correlate with the spectrum of population oscillations. In order to understand this fact, let us examine how the linear excitations of the form (\ref{eqn:Bogoliubov_ansatz}) enter into the dynamics of the population imbalance of the two-well system. To this end we introduce the following operator representation of the population imbalance \[ Z = \frac{1}{N} \langle \Psi | \hat Z | \Psi \rangle, \qquad \hat Z\left(\vb*{r}\right) = \left\{\begin{aligned} 1,\quad & \mathbf{r}\in A_1 (y<0),\\-1,\quad & \mathbf{r}\in A_2 (y>0),\end{aligned}\right. \] which is valid for any wave function $\Psi$.
We can now insert here the wave function of the form (\ref{eqn:perturbed_gs},\ref{eqn:Bogoliubov_ansatz}). We keep only terms up to first order in $c_k$ and assume (without loss of generality) that $\psi_0\left(\vb*{r}\right)$, $u_k\left(\vb*{r}\right)$ and $v_k\left(\vb*{r}\right)$ are purely real functions. Taking into account that $\langle \psi_0 |\hat Z| \psi_0 \rangle=0$, we obtain \begin{equation}\label{eq:zt_bdg} Z(t) = \frac{2}{N} \sum_k c_k D_k \cos(\omega_k t). \end{equation} Here \begin{equation} \label{eq:d2_bdg} D_k = \langle \psi_0 | \hat Z | u_k + v_k \rangle \end{equation} is the excitation amplitude of the $k$th Bogoliubov mode by the operator $\hat Z$. In other words, the set of coefficients $D_k$ essentially characterizes the condensate response to a small perturbation proportional to $\hat Z$ \cite{PitaevskiiStringari2016,PhysRevA.56.587}. Equation (\ref{eq:zt_bdg}) shows that a populated Bogoliubov mode with index $k$ results in a transfer of population with the frequency $\omega_k$, but only if the coefficient $D_k$ is non-zero. To identify which Bogoliubov modes may correspond to non-zero values of $D_k$, we consider the symmetry properties of these modes. Both the potential (\ref{eqn:pot}) and the ground state are symmetric with respect to reflections in the longitudinal ($x$) and transverse ($y$) directions. Therefore the solutions of (\ref{eqn:BdG}) are expected to possess certain reflection symmetries in these directions as well. In Fig.~\ref{fig:BarrierScan} we distinguish three subsets of levels (plotted with different colors) based on their symmetry properties. These states are (A) transversely symmetric, (B) transversely antisymmetric but longitudinally symmetric and (C) antisymmetric in both directions. In the following we refer to these states as A, B and C for brevity.
From the structure of Eq.~(\ref{eq:d2_bdg}) one can predict that $D_k$ may be non-zero only if $u_k+v_k$ is symmetric in $x$ and antisymmetric in $y$. Consequently, the states of type B form the only subset of modes that can possibly contribute to the Josephson dynamics identified by $Z(t)$. This argument is confirmed by the results in Fig.~\ref{fig:BarrierScan}, which show a nearly perfect match of the spectra extracted from the GPE dynamics with the Bogoliubov modes of type B. The above symmetry arguments provide necessary conditions for any of the collective excitations to be traceable in the population imbalance $Z(t)$. However, these conditions are by no means sufficient. The number of non-zero amplitudes $D_k$ can only be determined by calculating them numerically or by measuring them. Such calculations show that there is always only a finite number of essentially non-zero values in this set. In Fig.~\ref{fig:dkcoefs} we show the values of $D_k$ and $\omega_k$ corresponding to the lowest type B excitations for the same barrier heights used in the previous section. Evidently, only the five lowest excitations show amplitudes $D_k$ significantly larger than zero. One may also notice a similarity between Fig.~\ref{fig:dkcoefs} and the spectrum $\tilde Z(\omega)$ presented in the lower panel of Fig.~\ref{fig:Zsequences}. This shows that the spectrum of population oscillations can in general be predicted by analyzing the Bogoliubov modes of the system. This also means that the number of observable modes is a property of the system and does not depend on the initial perturbation imposed in the numerical simulation.
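The symmetry selection rule for $D_k$ can be illustrated with a toy overlap integral in a single transverse coordinate; the Gaussian and Hermite-like profiles below are illustrative stand-ins for the actual BdG solutions, not outputs of Eq.~(\ref{eqn:BdG}).

```python
import numpy as np

# Toy check of the selection rule for D_k = <psi0| Z |u_k + v_k>: with a
# symmetric psi0(y) and Z = +1 (y < 0), -1 (y > 0), the overlap vanishes
# unless (u_k + v_k) is antisymmetric in y.
y = np.linspace(-10, 10, 2001)
dy = y[1] - y[0]
psi0 = np.exp(-y**2 / 2)             # symmetric model "ground state"
Z = -np.sign(y)                      # +1 for y < 0, -1 for y > 0

f_sym = np.exp(-y**2 / 2)            # transversely symmetric mode (type A-like)
f_asym = y * np.exp(-y**2 / 2)       # transversely antisymmetric mode (type B-like)

D_sym = np.sum(psi0 * Z * f_sym) * dy    # vanishes by symmetry
D_asym = np.sum(psi0 * Z * f_asym) * dy  # non-zero (equals -1 analytically)
```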
\begin{figure}[tbp] \includegraphics[width=\linewidth]{dkcoefs} \caption{Excitation amplitudes $D_k$ for the 10 lowest collective excitations of type B calculated with the barrier height $V_b/h = 375\,\mathrm{Hz}$.} \label{fig:dkcoefs} \end{figure} \section{General properties of the Bogoliubov spectrum} We now investigate further the complex structure of the calculated excitation spectrum. The first obvious observation is that the barrier affects the excitations of different symmetries differently. In particular, the excitations of type A, which are symmetric in the transverse direction, are practically unaffected by changes of the barrier height. These solutions are nodeless in $y$ and represent purely longitudinal excitations. They originate from the collective excitations of a single one-dimensional BEC and are well approximated by a simple analytical formula based on the hydrodynamic approximation \cite{Menotti2002} (see Fig.~\ref{fig:CompBdG} for a direct comparison) \begin{equation}\label{eq:1d-sym-spectrum} \omega = \omega_x \sqrt{\frac{k(k+1)}{2}}. \end{equation} The modes of types B and C are strongly affected by the barrier height, but asymptotically they converge to their symmetric counterparts and are expected to match them exactly in the limit of two completely uncoupled condensates. In Fig.~\ref{fig:levelsplit1} we show the level spacings $\Delta\omega = \omega_k^{(B,C)} - \omega_k^{(A)}$ between the levels of type A and the corresponding levels of types B and C. We see that in the high-barrier limit all level spacings decay exponentially with growing $V_b/\mu$. Interestingly, all the decay rates are related to parameters defined in the two-mode model. The lowest pair of levels converges at the rate of the plasma frequency $\omega_p$, but all the other pairs converge at the same rate as the coupling parameter $J$.
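As a quick worked example, Eq.~(\ref{eq:1d-sym-spectrum}) gives the following frequency ratios for the four lowest longitudinal modes; the $k=1$ value is the dipole mode at exactly $\omega_x$.

```python
import numpy as np

# Frequency ratios omega/omega_x = sqrt(k(k+1)/2) for k = 1..4
k = np.arange(1, 5)
ratios = np.sqrt(k * (k + 1) / 2.0)   # [1.0, 1.732..., 2.449..., 3.162...]
```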
In the limit of a very high barrier, $V_b/\mu > 1.9$, where $\omega_p < \omega_x$, the higher modes disappear from the $Z(t)$ dynamics and the lowest oscillation frequency converges to the estimated value of the plasma frequency $\omega_p$ (see Fig.~\ref{fig:BarrierScan}). This indicates that the validity of the Josephson equations (\ref{eqn:TMM_sym_Zdot},\ref{eqn:TMM_sym_phasedot}) is restored in this region. \begin{figure}[tbp] \includegraphics[width=\linewidth]{levsplit-1} \caption{Level spacings $\Delta\omega = \omega_k^{(B,C)} - \omega_k^{(A)}$ between the levels of type A and the corresponding levels of types B and C. The plasma frequency $\omega_p$ (red dashed line) and coupling strength $J$ (green dashed line) are shown for comparison. Note the logarithmic scale of the vertical axis, which means that a straight line on the figure corresponds to an exponential decay of the curve.} \label{fig:levelsplit1} \end{figure} Levels of types B and C also show an unexpected degeneracy in the region of intermediate barrier heights. The spacings between these levels are shown in Fig.~\ref{fig:levelsplit2}. We show them in both linear and logarithmic scales to highlight that for all of the affected levels the minimal level spacing is observed at the same barrier height, around $V_b/\mu \approx 1$. This quasi-degeneracy arises due to the coupling between the two condensates. \begin{figure}[tbp] \includegraphics[width=0.5\linewidth]{levsplit-01}\includegraphics[width=0.5\linewidth]{levsplit-02} \caption{Level spacings $\Delta\omega = \omega_k^{(C)} - \omega_k^{(B)}$ shown in linear (left panel) and logarithmic (right panel) scale.} \label{fig:levelsplit2} \end{figure} In order to understand the origin of this degeneracy we look into the distribution of the tunneling current initiated by these excitations. The transverse tunneling current density can be written in the following form \[ J_y(x,t) = \frac{\hbar}{2 m \mathrm{i}} \int dy \left( \Psi^* \partial_y \Psi - \Psi \partial_y \Psi^* \right).
\] Inserting here the wave function (\ref{eqn:perturbed_gs},\ref{eqn:Bogoliubov_ansatz}) we get \[ J_y(x,t) = \sum_k c_k j_y^{(k)}(x) \sin(\omega_k t), \] where \[ j_y^{(k)}(x) = \frac{\hbar}{m} \int dy \left[ \psi_0 \partial_y (u_k - v_k) - (u_k - v_k) \partial_y \psi_0 \right] \] is the characteristic current density associated with $k$'th Bogoliubov mode. These current densities for the lowest Bogoliubov modes of types B and C are shown in Fig.~\ref{fig:current}. We see that in the region of quasi-degeneracy the tunneling current in these modes is localized at the edges of the junction and two edge-localized excitations basically decouple from each other. A similar effect was also described for the superconducting LJJs \cite{PhysRev.164.538} and attributed to the Meissner effect in the junction. One may also see from Fig.~\ref{fig:BarrierScan}, that these edge-localized states are always involved in the dynamics of population imbalance $Z(t)$ (the crosses cover all regions of quasi-degeneracy). \begin{figure}[tbp] \includegraphics[width=\linewidth]{current12} \includegraphics[width=\linewidth]{current11} \caption{Tunneling current density distribution corresponding to the lowest Bogoliubov excitations of type B (blue lines) and C (red lines). Top panel corresponds to barrier height $V_b/h = 325 \mathrm{Hz}$ ($V_b/\mu \approx 1.26$), bottom panel --- $V_b/h = 600 \mathrm{Hz}$ ($V_b/\mu \approx 1.93$). Vertical dashed lines on both panels show the Thomas-Fermi boundary of the system.} \label{fig:current} \end{figure} \section{Effective one-dimensional model of elementary excitations} So far all of our descriptions for the bosonic LJJ have been in two spatial dimensions. However, the geometry of the system suggests that we can consider it as two coupled one-dimensional systems. For the symmetric modes the one-dimensional treatment of the system is quite simple and leads to the analytical expression (\ref{eq:1d-sym-spectrum}). 
However, for the spectrum of antisymmetric modes no analytical expressions exist, to the best of our knowledge. The model developed in Ref.~\cite{Abad2013} describes dispersion relations of coupled infinite homogeneous 1D condensates, so while it captures some essential properties of the spectra, it cannot reproduce the observed level degeneracy and edge-localized states. In the following we develop a simplified one-dimensional hydrodynamic model that is still able to reproduce qualitatively the behavior of the antisymmetric Bogoliubov modes. Our aim is to reduce the system to two coupled one-dimensional condensates. The reduction scheme that we propose is similar in spirit to the usual two-mode model. It relies on the additional assumption that the dimensions can be separated, so that the long axis of the trap $x$ does not couple to the short axis $y$. In each well the wave function can be written as a product state and the total wave function reads \begin{align} \Psi(x,y,t) = \Psi_1(x,t) \chi_1(y) + \Psi_2(x,t) \chi_2(y). \end{align} We assume the solutions in the $y$ dimension, $\chi_{1,2}(y)$, to be time independent. Additionally, we define $\chi_1$ and $\chi_2$ such that they are real, orthogonal and normalized to unity. Our trapping potential is separable, $V(x,y)/\hbar = \tilde{V}_x(x) + \tilde{V}_y(y)$. Inserting this ansatz into the GPE (\ref{eqn:GPE}) leads to a system of two coupled one-dimensional equations \begin{align}\label{eqn:1DC-GPE} &\mathrm{i} \pdv{t} \Psi_1(x,t) = \left(-\frac{\hbar}{2m} \partial_x^2 + \tilde{V}_x + g_{1D} \abs{\Psi_1}^2 \right) \Psi_1 \notag\\ &- \left(K + F(\abs{\Psi_1}^2 + \abs{\Psi_2}^2)\right) \Psi_2 - F(\Psi_1^\ast \Psi_2 + \Psi_2^\ast \Psi_1) \Psi_1 \notag\\ &+ M \left(\abs{\Psi_2}^2 \Psi_1 + (\Psi_1^\ast \Psi_2 + \Psi_2^\ast \Psi_1) \Psi_2\right), \end{align} where for the second component the indices are interchanged.
The coefficients that appear in these equations are defined as follows \begin{align} g_{1D} &= \frac{g}{\hbar} \int \dd{y} \chi_1^4, \\ K &= - \int\dd{y} \left(-\frac{\hbar}{2m} \chi_1 \partial_y^2 \chi_2 + \chi_1 \tilde{V}_y \chi_2 \right), \\ F &= - \frac{g}{\hbar} \int\dd{y} \chi_1^3 \chi_2, \\ M &= \frac{g}{\hbar} \int\dd{y} \chi_1^2 \chi_2^2. \end{align} In the Josephson tunneling regime the overlap between the functions $\chi_1$ and $\chi_2$ is small and we can safely assume $M \ll F \ll g_{1D}$. Then the equations simplify to \begin{align}\label{eqn:1DC-GPE-red} &\mathrm{i} \pdv{t} \Psi_1(x,t) = \left(-\frac{\hbar}{2m} \partial_x^2 + \tilde{V}_x + g_{1D} \abs{\Psi_1}^2 \right) \Psi_1 \notag\\ &- \left(K + F(\abs{\Psi_1}^2 + \abs{\Psi_2}^2)\right) \Psi_2, \notag\\ &\mathrm{i} \pdv{t} \Psi_2(x,t) = \left(-\frac{\hbar}{2m} \partial_x^2 + \tilde{V}_x + g_{1D} \abs{\Psi_2}^2 \right) \Psi_2 \notag\\ &\shoveright{- \left(K + F(\abs{\Psi_1}^2 + \abs{\Psi_2}^2)\right) \Psi_1.} \end{align} These equations permit a one-dimensional description of the double-well system. They are similar to the commonly used model for two-component condensates with coherent coupling. However, they contain additional terms proportional to $F$, which in general cannot be neglected. A detailed analysis of the effects of these additional terms is beyond the scope of the present work. We only mention here that $K$ is in general not sign-definite and neglecting the nonlinear terms proportional to $F$ may lead to unphysical results. This problem is usually rectified by taking $K$ as a phenomenological positive-definite parameter or by imposing additional approximations (see e.g. Ref.~\cite{PhysRevA.81.025602} for a discussion and references). In order to further simplify our model, we recall that for the parameters considered in the present work the 1D system is deeply in the Thomas-Fermi regime. This suggests that a hydrodynamic approximation may be valid to describe the system.
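The assumed hierarchy $M \ll F \ll g_{1D}$ can be verified numerically with Gaussian model profiles for $\chi_{1,2}$; these are illustrative stand-ins for the actual transverse well functions, and the width and separation below are hypothetical.

```python
import numpy as np

# Overlap integrals entering g_1D, F and M (up to the common factor g/hbar
# and signs), for normalized Gaussians centred at -/+ y0 with width sigma.
y = np.linspace(-12, 12, 4001)
dy = y[1] - y[0]
sigma, y0 = 1.0, 2.0                    # half-separation y0 = 2 sigma

chi1 = (np.pi * sigma**2) ** -0.25 * np.exp(-(y + y0) ** 2 / (2 * sigma**2))
chi2 = (np.pi * sigma**2) ** -0.25 * np.exp(-(y - y0) ** 2 / (2 * sigma**2))

g_int = np.sum(chi1**4) * dy            # ~ g_1D
F_int = np.sum(chi1**3 * chi2) * dy     # ~ |F|, suppressed by exp(-3 y0^2 / (2 sigma^2))
M_int = np.sum(chi1**2 * chi2**2) * dy  # ~ M, suppressed by exp(-2 y0^2 / sigma^2)
```

For this separation $F_{\rm int}/g_{\rm int}=e^{-6}\approx2.5\cdot10^{-3}$ and $M_{\rm int}/F_{\rm int}=e^{-2}\approx0.14$, so the ordering $M \ll F \ll g_{1D}$ holds and sharpens rapidly with increasing separation.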
We rewrite the equations (\ref{eqn:1DC-GPE}) with the usual Madelung decomposition $\Psi_1(x,t) = \sqrt{\rho_1} \mathrm{e}^{\mathrm{i}\theta_1}$, where $\rho_1(x,t)$ is the density and $\theta_1(x,t)$ is the phase, which gives \begin{align}\label{eqn:Hydro} \dot{\rho}_1 = \;& -\frac{\hbar}{m} \partial_x\left(\rho_1 \partial_x \theta_1 \right) \notag\\ & - 2 \left(K + F(\rho_1 + \rho_2)\right) \sqrt{\rho_1 \rho_2} \sin(\theta_2 - \theta_1), \notag\\ \dot{\theta}_1 = \;& - \frac{\hbar}{2m} \left(\partial_x \theta_1\right)^2 - \tilde{V}_x - g_{1D}\rho_1 \notag\\ & +\left(K + F(\rho_1 + \rho_2)\right)\sqrt{\frac{\rho_2}{\rho_1}} \cos(\theta_2-\theta_1), \end{align} where we neglected the quantum pressure terms of the form $\partial_x(\sqrt{\rho})/\sqrt{\rho}$. The equations of the other component have interchanged indices. To study the antisymmetric low energy excitations we consider a stationary state for each well and we add a perturbation with an opposite sign in each well: \begin{align}\label{eq:hydro-ansatz} &\rho_1(x,t) = \rho_0(x) + \delta\rho(x,t), && \theta_1(x,t) = -\tilde{\mu} t - \delta\theta(x,t), \notag\\ &\rho_2(x,t) = \rho_0(x) - \delta\rho(x,t), && \theta_2(x,t) = -\tilde{\mu} t + \delta\theta(x,t), \end{align} where $\tilde{\mu} = \mu/\hbar$. The Thomas-Fermi ground state density is \begin{equation}\label{eq:TF-GS} \rho_0(x) = \frac{\tilde{\mu}-\tilde{V}_x(x)+K}{g_{1D}} \end{equation} in the region where $\tilde{\mu}+K>V_x$ and zero otherwise. With this ansatz we linearize the coupled equations, \begin{align} \pdv{t} \delta\rho = &\; \frac{\hbar}{m} \partial_x \left(\rho_0 \partial_x \delta\theta \right) - 4 \left(K + 2F \rho_0\right) \rho_0 \delta\theta,\\ \pdv{t} \delta\theta = &\; \left(g_{1D} + \frac{K}{\rho_0}\right) \delta\rho. 
\end{align} Combining the two equations we get one second-order equation for $\delta\rho$ \begin{align} \pdv[2]{t}\delta\rho = \;& \frac{\hbar}{m} \partial_x \left((g_{1D}\rho_0 + K) \partial_x \delta\rho - K \frac{\delta\rho\, \partial_x \rho_0}{\rho_0} \right) \notag\\ & - 4\left(K + 2F\rho_0\right) \left(g_{1D}\rho_0 + K\right) \delta\rho. \end{align} The term proportional to $(\partial_x\rho_0)/\rho_0$ can be safely neglected in the Thomas-Fermi regime. We finally assume the solutions to be periodic in time $\delta\rho(x,t) = \cos(\omega t) \delta\rho(x)$, which leads to the following equation \begin{align} \label{eqn:Hydro1Dfinal} -\omega^2 \delta\rho = \;& \frac{\hbar}{m} \partial_x \left((g_{1D}\rho_0 + K) \partial_x \delta\rho \right) \notag\\ & - 4\left(K + 2F\rho_0\right) \left(g_{1D}\rho_0 + K\right) \delta\rho. \end{align} This equation provides a hydrodynamic approximation for the transversely antisymmetric collective excitations of the system. Using the explicit expression for the Thomas-Fermi ground state density (\ref{eq:TF-GS}) this equation can be easily recast in terms of the chemical potential and the trap potential, which makes it numerically tractable. In some limiting cases the equation (\ref{eqn:Hydro1Dfinal}) can be solved analytically. In particular, in the limit of uncoupled condensates ($K \rightarrow 0, F \rightarrow 0$) it is straightforward to show that the spectrum (\ref{eq:1d-sym-spectrum}) is reproduced as is expected in this limit. Also if one considers the excitations of the same shape as the ground state ($\delta\rho(x,t) = Z(t) \rho_0$) then the result of the two-mode model is recovered and after integrating out the spatial dimension we get a single frequency $\omega = \omega_p$ identical to Eq.~(\ref{eqn:plasma_frequency}). In figure \ref{fig:CompBdG} we compare numerical solutions of equation (\ref{eqn:Hydro1Dfinal}) with the Bogoliubov spectrum of the two-dimensional system. 
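A minimal finite-difference sketch of Eq.~(\ref{eqn:Hydro1Dfinal}) in the uncoupled limit $K=F=0$ is given below, in rescaled units $\hbar=m=\omega_x=1$ with the Thomas-Fermi radius set to unity (the grid size is arbitrary). In this limit the operator reduces to the Legendre operator and the computed frequencies must reproduce Eq.~(\ref{eq:1d-sym-spectrum}).

```python
import numpy as np

# Solve -omega^2 drho = d/dx[ c(x) d drho/dx ] with c(x) = (L^2 - x^2)/2,
# i.e. Eq. (Hydro1Dfinal) for K = F = 0, where (hbar/m) g_1D rho0 plays the
# role of c(x).  A flux-conservative scheme with c = 0 at the TF edges
# handles the degenerate boundary automatically.
N, L = 1000, 1.0
dx = 2 * L / N
xe = -L + np.arange(N + 1) * dx            # cell edges
c = np.maximum(0.5 * (L**2 - xe**2), 0.0)  # vanishes exactly at the TF radius

main = -(c[:-1] + c[1:]) / dx**2
off = c[1:-1] / dx**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # symmetric tridiagonal

lam = np.sort(-np.linalg.eigvalsh(A))      # ascending: k(k+1)/2 = 0, 1, 3, 6, ...
omega = np.sqrt(np.clip(lam, 0.0, None))
```

The lowest non-trivial frequencies come out as $1$, $\sqrt{3}$, $\sqrt{6}$, i.e. $\omega_x\sqrt{k(k+1)/2}$; reinstating $K,F>0$ turns this into a solver of the same type as used for the comparison in Fig.~\ref{fig:CompBdG}.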
The parameters $g_{1D}$, $K$ and $F$ were estimated based on the ground-state GPE solutions. The two approaches show reasonable agreement, which is better in the high-barrier region. Specific features of the spectrum, such as the formation of quasi-degenerate pairs of levels, are also reproduced within the developed hydrodynamic approximation. This result justifies the general validity of this approach and the applicability of the derived Eq.~(\ref{eqn:Hydro1Dfinal}). \begin{figure}[tbp] \includegraphics[width=\linewidth]{bdg_vs_hydro.eps} \caption{Spectrum of collective excitations in the hydrodynamic approximation based on the equations (\ref{eq:1d-sym-spectrum}) (solid yellow lines) and (\ref{eqn:Hydro1Dfinal}) (solid red lines) compared to the full solution of the Bogoliubov-de Gennes equations (\ref{eqn:BdG}) (dashed blue lines). } \label{fig:CompBdG} \end{figure} Using the ansatz (\ref{eq:hydro-ansatz}) we can also write the population imbalance in a form analogous to (\ref{eq:zt_bdg}) \[ Z = \frac{2}{N} \sum_k D_k \cos(\omega_k t), \] where the coefficients $D_k$ have a similar meaning to the excitation amplitudes discussed in the previous section and are defined by the solutions of Eq.~(\ref{eqn:Hydro1Dfinal}) as follows \[ D_k = \int dx\, \delta\rho_k. \] The values of these coefficients are shown in Fig.~\ref{fig:dkcoefs_hydro}. We see that they also qualitatively reproduce the results in Figs.~\ref{fig:dkcoefs} and~\ref{fig:Zsequences}. A number of additional zero-valued coefficients in the figure correspond to excitations of type C, which are also included in the spectrum of Eq.~(\ref{eqn:Hydro1Dfinal}). \begin{figure}[tbp] \includegraphics[width=\linewidth]{dkcoefs_hydro} \caption{Excitation amplitudes $D_k$ in the hydrodynamic approximation (in arbitrary units). The barrier height is $V_b/h = 375\,\mathrm{Hz}$.} \label{fig:dkcoefs_hydro} \end{figure} \section{Conclusions} In the present work we investigated the low-energy dynamics of long bosonic Josephson junctions. In simulations based on the time-dependent Gross-Pitaevskii equation we observe oscillations of the population imbalance consisting of multiple well-defined frequencies that persist even for arbitrarily small initial perturbations. In order to understand this behavior we analyze the spectrum of elementary excitations obtained from the Bogoliubov-de Gennes equations. By analyzing the condensate response to the imposed population imbalance we find the Bogoliubov modes that can contribute to the population transfer. This allows us to explain and predict the multi-mode spectrum of population oscillations observed in the dynamical simulations. From the general structure of the excitation spectrum in the region of intermediate barrier heights we discover the development of a quasi-degeneracy of low-lying levels. We connect this phenomenon with the localization of the corresponding modes at the edges of the junction. The geometry of the LJJ allows such edge-localized excitations to decouple and form degenerate pairs. Quite interestingly, these edge-localized oscillations are also always observed in the dynamics of the population imbalance. We also developed an effective one-dimensional model of two coupled condensates that combines a hydrodynamic approximation in the longitudinal direction and quantum tunneling in the transverse direction of the barrier. This simplified model is shown to qualitatively reproduce all essential features of the excitation spectrum obtained using the Bogoliubov-de Gennes analysis: the excitation frequencies are in good agreement over a wide range of barrier heights. The model also correctly predicts multiple frequencies for the population dynamics of the condensate.
\section{Introduction} \label{sec:Introduction} The third observational run (O3) of the aLIGO and VIRGO consortium identified numerous gravitational-wave (GW) events, the majority of which are binary black-hole (BBH) mergers. Over the past decades a large body of theoretical studies has been carried out in order to identify the evolutionary channels leading to the mergers of two compact objects and to predict (before the LIGO era), and currently explain, the observed rate of mergers and their properties \citep[e.g.][]{Belczynski2002,Belczynski2004,Belczynski2007,Belczynski2008,Belczynski2016,Antonini2012,Dominik2012,Antognini2014,Antonini2014,deMink2015,Petrovich2017}. In the second observational run (O2) $11$ GW mergers were detected by aLIGO and VIRGO. These include $10$ mergers of binary black-holes (BBHs) and a single merger of a binary neutron star (NS). All of the detections were consistent with zero eccentricity. The inferred BBH-merger rate from O2 (in the local Universe) is $R_{{\rm BBH}}=9.7-101{\rm Gpc^{-3}yr^{-1}}$; the merger rate of binary neutron stars is $R_{{\rm BNS}}=110-3840{\rm Gpc^{-3}yr^{-1}}$; and the upper limit on the BH-NS merger rate is $R_{{\rm BHNS}}<600{\rm Gpc^{-3}yr^{-1}}$ \citep{Abbott2019}. Four main evolutionary channels have been proposed in the context of GW mergers. The first deals with collisional mergers in dense environments such as galactic centers or globular clusters \citep[e.g.][]{Rodriguez2016,Rodriguez2018,Fragione2018,Banerjee2018,Hamers2018,Leigh2018,Samsing2014}, where binary mergers are catalyzed by strong interactions with stars in these dense environments. In such environments, strong binary-single and binary-binary interactions harden compact binaries (drive them to shorter periods) and excite their eccentricities.
Such models predict GW-production rates in the range of $2-20{\rm Gpc^{-3}yr^{-1}}$. The second evolutionary channel deals with the isolated evolution of initially massive close binary stars \citep[e.g.][]{Belczynski2008,Belczynski2016,Dominik2012,Dominik2015}. In this scenario massive close binaries interact strongly through one or two common-envelope (CE) phases, in which the interaction of a star with the envelope of an evolved companion leads to its inspiral inside the envelope and the production of a short-period binary. A fraction of the post-CE binaries are sufficiently close to merge via GW emission within a Hubble time. A different merger path is through the \textquotedblleft chemically homogeneous channel\textquotedblright{} \citep{Mandel2016}. The large uncertainties in the initial conditions of the binaries, the evolution in the common-envelope phase, the natal kick experienced by NSs/BHs at birth, and the mass-loss processes of massive stars give rise to a wide range of expected GW-source production rates, in the range $\sim10^{-2}-10^{3}{\rm Gpc^{-3}yr^{-1}}$. The third evolutionary channel is mergers induced by the secular evolution of triple systems, either in the field \citep[e.g.][]{Antonini2016,Antonini2017,Silsbee2017} or in nuclear clusters and/or massive clusters \citep[e.g.][]{Antonini2012,Petrovich2017,Samsing2018,Hoang2018,Fragione2019,Hamilton2019}. In this channel the secular and/or semi-secular perturbations by a third companion (Lidov-Kozai evolution, \citealp{Lidov1962,koz62}; semi-secular evolution, \citealp{Antonini2012}) can drive BBHs into high eccentricities such that they merge within a Hubble time; the rates expected from this channel are $\sim0.5-15{\rm Gpc^{-3}yr^{-1}}$. The fourth channel \citep{Michaely2019} involves wide binaries in the field (SMA $>1000{\rm AU}$) perturbed by flyby encounters, following similar ideas regarding the formation of blue stragglers through stellar mergers of wide binaries studied by \citet{Kaib2014}.
We found that frequent interactions with random stars can change the eccentricity of wide binaries and in some cases excite sufficiently high eccentricities, leading to the merger of the binary via GW emission before the next flyby occurs. The predicted rate from this channel, for spiral galaxies, is $\sim1-10{\rm Gpc^{-3}yr^{-1}}$. Here we follow up and extend this channel to study flyby perturbations of wide \emph{triples} in the field. One of the important properties of GW mergers that can potentially distinguish between the different channels is the eccentricity of the merged binary in the aLIGO/VIRGO band. With current observatories only eccentricities greater than $\sim0.1$ at a GW frequency of $\sim10{\rm Hz}$ \citep{Harry2010} are detectable; such events are termed ``eccentric mergers''. To date, no eccentric merger has been detected, and it is important to understand all evolutionary paths that may lead to an eccentric merger and their expected rates. It was suggested that eccentric mergers are rare among mergers of isolated binaries, but that dynamical interactions in dense environments \citep[e.g.][]{Samsing2014,Rodriguez2016} could give rise to a non-negligible rate of eccentric mergers. Another observable property of GW mergers is the measured spin alignment. Current observations suggest a preference for either an isotropic spin distribution or low spin magnitudes for the observed systems \citep{Will2014}. Dynamical channels are expected to produce isotropic spin orientations, while isolated-binary channels are more likely to produce spins that are preferentially aligned with the orbit. As we briefly discuss below, the TBH channel is likely to produce an isotropic distribution (with some possible caveats), more similar to the dynamical channels. Finally, with the expectation of much more data from the coming runs of aLIGO, the delay-time distribution (DTD) of GW mergers could also become an important constraining property.
The DTD gives the rate of events as a function of time since the formation of the BHs, or essentially since star formation. The merger times of BBHs can span from years \citep{Michaely2018} up to a Hubble time, with different channels predicting different DTDs. In this manuscript we expand our understanding of the fourth channel and extend it to the study of dynamical interactions of wide \emph{triples} in the field (\emph{not} in dense stellar environments). We calculate the GW merger rate from this channel, characterize the expected eccentricity distribution, and discuss the expected spin alignment from the channel. The paper is structured as follows: in section \ref{sec:Wide-triples-in} we briefly describe the interaction of wide TBH systems in the field and calculate the rate at which these systems become unstable due to flyby interactions. In section \ref{sec:Unstable-triples} we describe the dynamics of unstable triples and calculate the resulting galactic GW-merger rate and the rate of eccentric GW mergers. In section \ref{sec:GW-volume-rates} we compute the corresponding cosmological merger rate observable by LIGO. We discuss our results in section \ref{sec:Discussion} and summarize (section \ref{sec:Summary}). \section{Wide triples in the field} \label{sec:Wide-triples-in} In the following we describe the dynamics of wide triples perturbed by random flybys of field stars. A more extended mathematical description of some aspects of such interactions can be found in our previous papers \citep{Michaely2016,Michaely2019}. In what follows we highlight the main aspects of the mathematical model and the key differences of this work, which focuses on wide triples, compared with \citet{Michaely2019}, where we studied wide binaries. We first describe the interaction qualitatively in subsection \ref{subsec:Qualitative-description}, followed by a quantitative treatment in subsection \ref{subsec:Quantitative-description}.
\subsection{Qualitative description} \label{subsec:Qualitative-description} Several studies \citep{Kaib2014,Michaely2016,Michaely2019} showed that the cumulative interactions of wide systems ($a\gtrsim1000{\rm AU}$) with field stars through flyby encounters can considerably change their (outer-orbit) pericenter distances, mainly through the excitation of the wide-binary eccentricity, somewhat similar to the case of stars interacting with massive black holes in galactic nuclei \citep{Lightman1977,Merritt2013}. A fraction of these systems might interact tidally \citep{Michaely2016} or inspiral through GW emission \citep{Michaely2019}. Here we focus on wide \emph{triple}-BHs (TBHs) in hierarchical configurations, where, for simplicity, we consider only equal-mass BHs. In such triples, the inner binary consists of two components of masses $m_{1}$ and $m_{2}$, with the inner semi-major axis (SMA) and inner eccentricity denoted by $a_{1}$ and $e_{1}$, respectively; for simplicity we consider only circular inner binaries, $e_{1}=0$ (which might be expected at least for the relatively more compact binaries, if they evolved through a common-envelope evolution phase). The third BH, $m_{3}$, and the center of mass of the inner binary form the outer binary of the triple, with the outer SMA denoted by $a_{2}$, where we only consider cases with $a_{2}\gg a_{1}$. For illustration see Figure \ref{fig:Illustration-of-hierarchical}. We note that in this manuscript we neglect Lidov-Kozai effects and the effects of mass loss on such secular evolution \citep{Lidov1962,koz62,Naoz2016,Michaely2014} because we are focusing on wide systems with $a_{2}>1000{\rm AU}$.
For these systems the Lidov-Kozai timescale is \begin{equation} \tau_{{\rm LK}}\approx\frac{P_{2}^{2}}{P_{1}}\approx4.7\cdot10^{12}{\rm yr}\left(\frac{a_{2}}{10^{4}{\rm AU}}\right)^{3}\left(\frac{a_{1}}{0.1{\rm AU}}\right)^{-\frac{3}{2}}\left(\frac{M}{30M_{\odot}}\right)^{-1}\left(\frac{M_{b}}{20M_{\odot}}\right)^{\frac{1}{2}}, \end{equation} where $M\equiv m_{1}+m_{2}+m_{3}$ is the total mass of the TBH and $M_{b}\equiv m_{1}+m_{2}$ is the total mass of the inner binary; $\tau_{{\rm LK}}$ is typically much larger than a Hubble time, although it could affect TBHs with wider inner binaries and/or closer outer binaries (the latter, however, would be less affected by the flyby encounters discussed here). \textbf{In future work we intend to explore the regime where the two timescales overlap and where flybys might potentially excite Lidov-Kozai oscillations.} For these wide systems a flyby can change the eccentricity of the outer binary such that the pericenter distance satisfies $q=a_{2}\left(1-e_{2}\right)\lesssim a_{1}$. Namely, the third BH passes within the inner binary SMA, effectively giving rise to a strong binary-single encounter and a chaotic evolution of the now unstable triple. In this case the binary-single encounter resembles the binary-single encounters occurring in dense cluster environments, with expected outcomes similar to those studied in that context (e.g. \citealp{Heggie1975,Hills1975,Samsing2014,Stone2019b} and references therein). In other words, perturbed wide field triples provide an effective channel for converting isolated field evolution into cluster-like dynamical interactions. There are two relevant timescales for this part of the model. First, the interaction timescale, $t_{{\rm int}}\equiv b/v_{{\rm enc}}$, between the TBH and the flyby field star, where $b$ is the closest approach of the flyby to the triple system and $v_{{\rm enc}}$ is the velocity at infinity of the flyby with respect to the triple center of mass.
Second, the outer binary orbital period, $P_{2}$. We restrict ourselves to the impulsive regime where $t_{{\rm int}}\ll P_{2}$. In the next subsection (\ref{subsec:Quantitative-description}) we calculate the rate at which hierarchical triples are turned into unstable triples as a function of the inner SMA. A fraction of all systems that undergo a dynamical instability phase, $f_{{\rm merger}}\left(a_{1}\right)$, merge during the resonant interaction phase; we term this the prompt-merger channel (see section \ref{subsec:Calculating-the-merger}), while the majority are disrupted, with one of the BHs ejected and the other two forming a typically more compact remnant binary. Some of these can later inspiral through GW emission and merge in less than a Hubble time; we term this channel for GW-sources the delayed-merger channel. For both channels we consider the expected rates, and characterize the expected eccentricity distribution in the LIGO band and the fraction of eccentric mergers, $f_{{\rm eccentric}}\left(a_{1}\right)$. We find $f_{{\rm merger}}\left(a_{1}\right)$ and $f_{{\rm eccentric}}\left(a_{1}\right)$ in section \ref{subsec:Calculating-the-merger}. \begin{figure} \includegraphics[width=0.9\columnwidth]{Illustration}\caption{\label{fig:Illustration-of-hierarchical}Illustration of a hierarchical TBH system, $a_{1}\ll a_{2}$. The inner binary is circular, while the eccentricity of the outer binary follows a thermal distribution, $f\left(e_{2}\right)=2e_{2}.$} \end{figure} \subsection{Quantitative description} \label{subsec:Quantitative-description} As mentioned earlier, here we briefly review the loss-cone analysis used to estimate the TBH destabilization rates due to flyby encounters. A more detailed discussion of the loss-cone analysis in this context can be found in our previous papers. Consider a large ensemble of wide TBHs.
All BH masses are taken to be equal, $m_{1}=m_{2}=m_{3}=10M_{\odot}$ (the total mass is denoted by $M$), with inner SMA $a_{1}$ and outer SMA $a_{2}$. The distribution of $a_{1}$ is log-uniform, $\propto1/a_{1}$, and the outer binary SMA satisfies $a_{2}>10^{3}{\rm AU}.$ The inner binary is set to be circular, $e_{1}=0$, while the eccentricity distribution of the outer binary is assumed to be thermal, $f\left(e\right)de=2ede.$ The ensemble is embedded in the field, where the stellar number density is given by $n_{*}$ and the typical velocity dispersion, $\sigma_{v}$, is set to be the relative encounter velocity, $v_{\rm enc}$. In the following we derive the fraction of the ensemble that interacts with the flyby field stars sufficiently strongly that the pericenter of the outer binary passes within the inner binary SMA, namely $q\leq a_{1}$; this is potentially a conservative assumption, as the triple could be destabilized even at larger pericenter separations (e.g. \citealt{Mardling2001}). We find the dependence of this fraction on the outer SMA, $a_{2}$, and the field number density, $n_{*}.$ Moreover, we account for outer binary ionization from the random interactions with flyby stars. We make use of the loss-cone analysis. We first define the loss-cone fraction, $F_{q}$, which is the fraction of systems for which $q\leq a_{1}$. The condition $q=a_{1}$ defines the critical eccentricity $e_{c}$ beyond which the TBH would destabilize, namely \begin{equation} a_{2}\cdot\left(1-e_{c}\right)=a_{1}, \end{equation} which corresponds to $e_{c}=1-a_{1}/a_{2}$, so that \begin{equation} F_{q}=\int_{e_{c}}^{1}2ede\approx\frac{2a_{1}}{a_{2}}. \end{equation} We note that $F_{q}\ll1$. When a TBH is in the loss cone, $m_{3}$ enters the inner binary within an outer orbital period, $P_{2}$, and the triple destabilizes; it is then lost from the ensemble of TBHs, and may contribute to the production of GW sources as we discuss below.
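The loss-cone fraction is simple to verify numerically; the following sketch (Python, with illustrative parameter values of our own choosing) checks the analytic $F_{q}\approx2a_{1}/a_{2}$ against a direct draw from the thermal eccentricity distribution:

```python
import math
import random

def loss_cone_fraction(a1, a2):
    """Exact loss-cone fraction for a thermal outer eccentricity:
    F_q = int_{e_c}^{1} 2e de = 1 - e_c^2, with e_c = 1 - a1/a2,
    which reduces to ~2*a1/a2 for a1 << a2."""
    e_c = 1.0 - a1 / a2
    return 1.0 - e_c**2

# Monte Carlo cross-check: a thermal f(e) = 2e is sampled as e = sqrt(u).
random.seed(1)
a1, a2 = 1.0, 1e4                    # AU (illustrative)
N = 10**6
hits = sum(1 for _ in range(N)
           if a2 * (1.0 - math.sqrt(random.random())) <= a1)
print(hits / N, loss_cone_fraction(a1, a2), 2 * a1 / a2)   # all ~2e-4
```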
Systems whose orbits are close to the loss-cone regime could potentially be perturbed into it, replenishing the loss cone after the next flyby interaction. In order to calculate the fraction of such systems out of the entire ensemble, we calculate the smear cone: the average size of the phase-space region an outer binary can occupy after an impulsive interaction with a flyby star. The smear cone is defined by $\theta=\left\langle \Delta v\right\rangle /v_{k}$, where $v_{k}$ is the Keplerian velocity of the outer binary at the average separation, $\left\langle r\right\rangle =a_{2}\left(1+1/2e^{2}\right)$. Because $F_{q}\ll1$ we approximate $e\rightarrow1$, namely $v_{k}=\left(GM/3a_{2}\right)^{1/2}$, where $G$ is Newton's constant. The change in velocity is $\Delta v\approx3Ga_{2}m_{p}/v_{{\rm enc}}b^{2}$ \citep{Hills1981,Michaely2019}, where $m_{p}$ is the mass of the flyby perturber. Following \citet{Michaely2019}, the size of the smear cone is \begin{equation} F_{s}=\frac{\pi\theta^{2}}{4\pi}=\frac{27}{4}\left(\frac{m_{p}}{M}\right)^{2}\left(\frac{GM}{a_{2}v_{{\rm enc}}^{2}}\right)\left(\frac{a_{2}}{b}\right)^{4}.\label{eq:SmearCone} \end{equation} The ratio of the smear cone to the loss cone is the fraction of the loss cone filled after a single flyby: \begin{equation} \frac{F_{s}}{F_{q}}=\frac{27}{8}\left(\frac{m_{p}}{M}\right)^{2}\left(\frac{GM}{a_{2}v_{{\rm enc}}^{2}}\right)\left(\frac{a_{2}}{b}\right)^{4}\left(\frac{a_{2}}{a_{1}}\right). \end{equation} In the case where the loss cone is continuously fully replenished, $F_{q}=F_{s}$, the timescale for the loss-cone replenishment becomes comparable to the timescale for the loss-cone depletion, i.e. the outer orbital period, $P_{2}$.
Therefore the rate of depletion, which is a function of the size of the loss cone, is \begin{equation} \dot{L}_{{\rm Full}}=\frac{F_{q}}{P_{2}}\propto a_{2}^{-5/2}a_{1}, \end{equation} which is independent of the local stellar density, $n_{*}$, and scales linearly with the inner binary SMA, $a_{1}$. Therefore the depletion rate decreases with increasing outer SMA in the full loss-cone regime. On the other hand, for the case where $F_{s}<F_{q}$, namely for tighter outer binaries, which are less susceptible to change by random flyby interactions (\ref{eq:SmearCone}), the loss cone is not completely full at all times, and one needs to consider the so-called empty loss-cone regime. In this case the depletion rate depends on the rate at which systems are kicked into the loss cone. Specifically, $f=n_{*}\sigma v_{{\rm enc}}$, where $\sigma=\pi b^{2}$ is the geometric cross-section of the random flyby interaction. In this case the typical timescale for the depletion is the timescale for entering the loss cone, namely $T_{{\rm empty}}=1/f$. As we showed previously, $f$ can be written as \citep{Michaely2016,Michaely2019} \begin{equation} f=n_{*}\pi\sqrt{\frac{27}{8}\left(\frac{m_{p}}{M}\right)^{2}\frac{GMa_{2}^{4}}{a_{1}}}. \end{equation} The critical SMA for which the two timescales are equal, i.e. for which the depletion rate equals the rate of systems entering the loss cone \citep{Michaely2019}, is given by \begin{equation} a_{{\rm crit}}=\left(\frac{2}{27\pi^{4}}\left(\frac{M}{m_{p}}\right)^{2}\frac{a_{1}}{n_{*}^{2}}\right)^{1/7}. \end{equation} Using $a_{{\rm crit}}$ we can calculate the fraction of systems that enter the loss cone in both regimes: $a_{2}<a_{{\rm crit}}$, the empty loss cone; $a_{2}>a_{{\rm crit}}$, the full loss cone. The loss-cone fraction, $F_{q}$, represents the fraction of systems that are lost from the ensemble after the relevant timescale; therefore $\left(1-F_{q}\right)$ is the surviving fraction.
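The closed form for $a_{{\rm crit}}$ follows from setting $f\cdot P_{2}=1$; the $\left(M/m_{p}\right)^{2}$ factor is dimensionless, as required for $a_{{\rm crit}}^{7}$ to carry units of ${\rm length}^{7}$. A short self-consistency check (Python sketch with illustrative values of our own choosing):

```python
import math

G = 4 * math.pi**2                   # AU^3 / (M_sun yr^2)
PC_IN_AU = 206_265.0

def f_losscone(a1, a2, n_star, M, m_p):
    """Rate [1/yr] of flybys kicking the outer orbit into the loss cone:
    f = n_* pi sqrt( 27/8 (m_p/M)^2 G M a2^4 / a1 )."""
    return n_star * math.pi * math.sqrt(27/8 * (m_p/M)**2 * G * M * a2**4 / a1)

def P2(a2, M):
    """Outer orbital period [yr]."""
    return 2 * math.pi * math.sqrt(a2**3 / (G * M))

def a_crit(a1, n_star, M, m_p):
    """Closed form from f(a_crit) * P2(a_crit) = 1."""
    return (2 / (27 * math.pi**4) * (M / m_p)**2 * a1 / n_star**2)**(1/7)

M, m_p, a1 = 30.0, 0.6, 1.0          # M_sun, M_sun, AU (illustrative)
n_star = 0.1 / PC_IN_AU**3           # 0.1 pc^-3 converted to AU^-3
ac = a_crit(a1, n_star, M, m_p)
print(f"a_crit ~ {ac:.2e} AU")
# At a_crit the replenishment and depletion timescales coincide:
print(f_losscone(a1, ac, n_star, M, m_p) * P2(ac, M))   # ~1
```

Note that $G$ cancels in the product $f\cdot P_{2}$, which is why $a_{{\rm crit}}$ depends only on the mass ratio, $a_{1}$, and $n_{*}$.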
For the empty loss-cone regime this timescale is $T_{{\rm empty}}=1/f$, while for the full loss cone the timescale is $P_{2}$. We can write the fraction of systems that enter the loss cone as a function of time, $t$, as \begin{equation} L\left(a_{1},a_{2},n_{*}\right)_{{\rm empty}}=1-\left(1-F_{q}\left(a_{1},a_{2}\right)\right)^{t\cdot f}. \end{equation} In the limit where $F_{q}tf\ll1$ we can expand this equation to leading order and get \begin{equation} L\left(a_{1},a_{2},n_{*}\right)_{{\rm empty}}=F_{q}tf. \end{equation} Note that the fraction of systems lost in the empty loss-cone regime is proportional to $F_{q}f$, namely \begin{equation} L_{{\rm empty}}\propto F_{q}f\propto a_{2}a_{1}^{1/2}. \end{equation} Specifically, the fraction grows with the outer SMA for $a_{2}<a_{{\rm crit}}$, unlike in the full loss-cone regime. This means that the loss rate peaks for TBHs with an outer SMA of $a_{{\rm crit}}.$ For the full loss cone we follow the same treatment with \begin{equation} L\left(a_{1},a_{2},n_{*}\right)_{{\rm full}}=1-\left(1-F_{q}\left(a_{1},a_{2}\right)\right)^{t/P_{2}}, \end{equation} and after the expansion we get \begin{equation} L\left(a_{1},a_{2},n_{*}\right)_{{\rm full}}=F_{q}t/P_{2}. \end{equation} Our treatment so far neglected the ionization process for wide systems in collisional environments. Taking the ionization into account by using the half-life treatment of \citet{Bahcall1985}, where the half-life is defined to be \begin{equation} t_{1/2}=0.00233\frac{v_{{\rm enc}}}{Gm_{p}n_{*}a_{2}}, \end{equation} we get for the empty loss cone \begin{equation} \begin{split} L\left(a_{1},a_{2},n_{*}\right)_{{\rm empty}} & =\tau F_{q}f\left(1-e^{-t/\tau}\right)\\ & =\tau\frac{2a_{1}}{a_{2}}n_{*}\pi\sqrt{\frac{27}{8}\left(\frac{m_{p}}{M}\right)^{2}\frac{GMa_{2}^{4}}{a_{1}}}\left(1-e^{-t/\tau}\right), \end{split} \label{eq:empty} \end{equation} where $\tau=t_{1/2}/\ln2$.
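The empty-loss-cone branch, eq. (\ref{eq:empty}), can be evaluated directly; a Python sketch (with the parameter values used in Figure \ref{fig:Probability-of-becoming}, except for $a_{1}$, which we set to an illustrative $1\,{\rm AU}$):

```python
import math

G = 4 * math.pi**2                   # AU^3 / (M_sun yr^2)
PC_IN_AU = 206_265.0
KMS = 0.2109                         # 1 km/s in AU/yr

def L_empty(a1, a2, n_star, v_enc, M=30.0, m_p=0.6, t=1e10):
    """Empty-loss-cone destabilization probability with ionization,
    L = tau * F_q * f * (1 - exp(-t/tau)); all lengths in AU, times in yr."""
    F_q = 2 * a1 / a2
    f = n_star * math.pi * math.sqrt(27/8 * (m_p/M)**2 * G * M * a2**4 / a1)
    tau = 0.00233 * v_enc / (G * m_p * n_star * a2) / math.log(2)
    return tau * F_q * f * (1 - math.exp(-t / tau))

n_star = 0.1 / PC_IN_AU**3           # 0.1 pc^-3 in AU^-3
v_enc = 50 * KMS                     # 50 km/s
print(L_empty(1.0, 1e3, n_star, v_enc))   # grows with a2 below a_crit
print(L_empty(1.0, 1e4, n_star, v_enc))
```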
For the full loss cone we get \begin{equation} \begin{split} L\left(a_{1},a_{2},n_{*}\right)_{{\rm full}} & =\tau\frac{F_{q}}{P_{2}}\left(1-e^{-t/\tau}\right)\\ & =\tau\frac{2a_{1}}{a_{2}}\left(\frac{GM}{4\pi^{2}a_{2}^{3}}\right)^{1/2}\left(1-e^{-t/\tau}\right). \end{split} \label{eq:full} \end{equation} We emphasize that in both regimes the loss-cone fraction grows with the inner SMA, $a_{1}$. We can identify the loss fraction with the probability for a TBH to become unstable due to flyby interactions. Figure \ref{fig:Probability-of-becoming} shows a representative case for the probability of becoming unstable as a function of the outer SMA, at a specific time during the evolution and for a specific field environment. Equipped with these equations we turn to calculate the fraction of GW mergers that occur following the strong encounter between the outer third companion and the inner binary (i.e. the now unstable triple), catalyzed by the flyby perturbations. \begin{figure} \includegraphics[width=0.8\columnwidth]{Probability_sma}\caption{\label{fig:Probability-of-becoming}The probability of becoming unstable, namely of the outer pericenter distance reaching $q_{2}=a_{2}\left(1-e_{2}\right)\protect\leq a_{1}$, due to flyby interactions. The plot is calculated for the following parameters: $t=10{\rm Gyr}$, $n_{*}=0.1{\rm pc^{-3}},$ $v_{{\rm enc}}=50{\rm kms^{-1}}$. The highest probability is attained at $a_{{\rm crit}}$. The full loss-cone regime corresponds to $a>a_{{\rm crit}}$ and the empty loss-cone regime to $a<a_{{\rm crit}}.$} \end{figure} \section{Unstable triples} \label{sec:Unstable-triples} In this section we describe the dynamics of unstable triples. We follow closely the treatment of \citet{Samsing2014,Samsing2018b} in the context of binary-single encounters. It is well known that the three-body problem is not integrable, and therefore we cannot predict the end result of any specific triple system.
However, in a statistical manner we can predict the end state of a binary-single encounter \citep{Stone2019b,Samsing2014,Heggie1975}. Binary-single encounters are an important astrophysical source of unstable triples. The physics of binary-single encounters has been studied mainly in dense stellar environments such as globular clusters or galactic nuclei. A binary-single interaction is considered close when the single star passes within the binary SMA, or specifically within the sphere of influence of the binary. In this situation the gravitational interaction between every pair of masses is comparable in strength and the outcome is chaotic. For such close interactions two outcomes are possible. The first is a direct interaction (DI), where only one gravitational interaction takes place and the result is a tighter binary and an escaper. Note that the binary could be either the same as in the initial condition, in which case it is called a \textit{flyby}, or different, in which case it is called an \textit{exchange}. The second is an intermediate state (IMS), where the system goes through many (of the order of $\left\langle N_{{\rm IMS}}\right\rangle =20$ for our case; \citealt{Samsing2017}) binary-single encounters, where each time the orbital characteristics (SMA and eccentricity) are drawn from the available phase-space volume set by the system's angular momentum and energy budget. Keep in mind that once the binary orbital properties are set, conservation of angular momentum and energy sets the trajectory of the bound third star until the next binary-single scatter. The end state of the multiple binary-single scatterings is a tight binary and an escaper. From the GW perspective a merger can occur either \emph{promptly}, between scattering events during the IMS, or later, after an end state is reached, when one of the BHs escapes, leaving behind a more compact, likely eccentric binary.
The remnant binary would eventually inspiral and merge through GW emission on a timescale typically much longer than the dynamical time. A fraction of the latter mergers occur in less than a Hubble time and these would contribute to the rate of detectable GW sources; we term this GW-sources channel the \emph{delayed-merger} channel. In the following we calculate the rate of mergers and the eccentricity distribution of the merged systems in both cases. \subsection{Binary-single encounters and the production of prompt GW-mergers} In this subsection we describe the mathematical modeling of the IMS. In the following we consider only equal-mass BHs of $10M_{\odot}$ each. The initial binary is circular with SMA $a_{1}$, and the third BH interacts with the binary via consecutive binary-single encounters. In each encounter the probability of forming a temporary binary from any two out of the three BHs is uniform. The eccentricity, $e_{{\rm IMS}}$, is drawn from a thermal distribution, namely $f\left(e\right)de=2ede$. The SMA is determined by the energy budget, which is approximated by equation 12 of \citet{Samsing2018b}, \begin{equation} \frac{m_{1}m_{2}}{2a_{1}}=\frac{m_{i}m_{j}}{2a_{{\rm IMS}}}+\frac{m_{ij}m_{k}}{2a_{{\rm bs}}},\label{eq:energy budget} \end{equation} where $a_{{\rm IMS}}$ is the SMA of the temporary binary, $a_{{\rm bs}}$ is the temporary SMA of the outer binary, $\left\{ i,j,k\right\}$ are the randomized indices after the interaction, and $m_{ij}=m_{i}+m_{j}$ is the mass of the temporary binary. From eq.
(\ref{eq:energy budget}) we can express the SMA of the third bound BH, \begin{equation} a_{{\rm bs}}=a_{1}\left(\frac{m_{ij}m_{k}}{m_{1}m_{2}}\right)\left(\frac{a'}{a'-1}\right),\label{eq:a_bs} \end{equation} where \begin{equation} a'\equiv\frac{a_{{\rm IMS}}}{a_{c}}\ {\rm and}\ a_{c}\equiv a_{1}\frac{m_{i}m_{j}}{m_{1}m_{2}}.\label{eq:a_definitions} \end{equation} We note that in our equal-mass case $a_{c}=a_{1}$ and therefore $a'$ is just $a_{{\rm IMS}}/a_{1}$. In order to estimate the available phase space for the IMS we estimate the upper (lower) bound $a'_{{\rm U}}$ $\left(a'_{{\rm L}}\right)$ of $a'$. The lower bound of $a'$ is trivially \begin{equation} a_{{\rm L}}'\approx1,\label{eq:a_L} \end{equation} while the upper bound should mark the point at which the resonant triple can no longer be described as an IMS (a binary and a bound single); this occurs when $a_{{\rm bs}}\approx a_{{\rm IMS}}$. \citet{Samsing2018b} finds that one way of estimating $a'_{{\rm U}}$ is by comparing the tidal force, $F_{{\rm tid}}$, exerted by the third BH with the binary gravitational binding force, $F_{{\rm bin}}$. In the high-eccentricity limit we find \begin{equation} F_{{\rm tid}}\approx\frac{1}{2}\frac{Gm_{ij}m_{k}}{a_{{\rm bs}}^{2}}\frac{a_{{\rm IMS}}}{a_{{\rm bs}}}, \end{equation} \begin{equation} F_{{\rm bin}}\approx\frac{1}{4}\frac{Gm_{i}m_{j}}{a_{{\rm IMS}}^{2}}. \end{equation} We set $a'_{{\rm U}}$ by the condition \begin{equation} \frac{F_{{\rm tid}}}{F_{{\rm bin}}}=0.5, \end{equation} which translates to \begin{equation} a'_{{\rm U}}=1+\left(\frac{1}{2}\frac{m_{k}}{\mu_{{\rm ij}}}\right)^{2/3},\label{eq:a_U} \end{equation} where $\mu_{ij}\equiv m_{i}m_{j}/(m_{i}+m_{j})$ is the reduced mass of the IMS binary. The values of $a'$ are distributed uniformly between $a'_{{\rm L}}$ and $a'_{{\rm U}}$ and the eccentricity distribution is thermal \citep{Heggie1975,Hut1985,Rodriguez2018}.
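For equal masses these bounds are easy to evaluate explicitly; a minimal sketch (Python, equal-mass case only):

```python
def a_prime_upper(m_i, m_j, m_k):
    """Upper bound a'_U = 1 + (m_k / (2*mu_ij))**(2/3), eq. (a_U)."""
    mu_ij = m_i * m_j / (m_i + m_j)
    return 1 + (0.5 * m_k / mu_ij)**(2/3)

def a_bound_single(a1, a_prime, m=10.0):
    """SMA of the bound third BH, eq. (a_bs), equal-mass case
    (so m_ij*m_k/(m1*m2) = 2 and a_c = a1)."""
    m_ij, m_k = 2 * m, m
    return a1 * (m_ij * m_k / (m * m)) * a_prime / (a_prime - 1)

# Equal 10 Msun BHs: mu_ij = 5 Msun, so a'_U = 1 + 1 = 2,
# and at a' = a'_U the bound single sits at a_bs = 4 * a1.
print(a_prime_upper(10, 10, 10))          # 2.0
print(a_bound_single(1.0, 2.0))           # 4.0 AU for a1 = 1 AU
```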
Next we can calculate the orbital timescale for the third companion, $t_{{\rm iso}}$, to come back for the next binary-single encounter. During the time in between scattering events the temporary binary can potentially merge via GW emission if its merger timescale, $t_{{\rm merger}}$, is shorter than $t_{{\rm iso}}$. The orbital period is simply the Keplerian orbital period with $a_{{\rm bs}}$; combining it with eq. (\ref{eq:a_bs}) and eq. (\ref{eq:a_definitions}) we get \begin{equation} t_{{\rm iso}}=2\pi\frac{a_{1}^{3/2}}{\sqrt{GM}}\left(\frac{m_{ij}m_{k}}{m_{1}m_{2}}\right)^{3/2}\left(\frac{a'}{a'-1}\right)^{3/2}.\label{eq:t_iso} \end{equation} The merger timescale for eccentric binaries is given by \citep{Pet64} \begin{equation} t_{{\rm merger}}\approx\frac{768}{425}T_{c}\left(a_{{\rm IMS}}\right)\left(1-e_{{\rm IMS}}^{2}\right)^{7/2},\label{eq:t_merger} \end{equation} where $T_{c}=a_{{\rm IMS}}^{4}/\left(4\beta\right)$ is the merger timescale for a circular orbit and $\beta=64G^{3}m_{i}m_{j}\left(m_{i}+m_{j}\right)/\left(5c^{5}\right)$, where $c$ is the speed of light. \subsubsection{Calculating the merger fraction} \label{subsec:Calculating-the-merger} In order to find the fraction of systems that merge during the IMS as a function of the initial SMA, $a_{1}$, we perform a numerical calculation. In order to save computation time we do not perform direct N-body simulations; instead, we sample, in a Monte-Carlo approach, the IMS orbital distributions from \citet{Samsing2014,Samsing2017} and check whether or not they lead to a merger. We use MATLAB to sample 20 values of $a_{1}$ equally spaced in log in the range $\left(10^{-2}{\rm AU},10^{2}{\rm AU}\right)$. For each value of $a_{1}$ we simulate $N_{{\rm tot}}=10^{5}$ scattering experiments, where in each scattering experiment there are $N_{{\rm IMS}}=20$ instances in which a temporary binary is created, bound to a third BH on a Keplerian orbit.
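In outline, a single realization of this procedure looks as follows (a simplified Python sketch of the scheme, not the actual MATLAB code; equal-mass case, reduced $N_{{\rm tot}}$ for runtime, so the returned fractions carry sampling noise):

```python
import math
import random

G = 6.674e-11; C = 2.998e8; MSUN = 1.989e30; AU = 1.496e11   # SI units

def t_merger(a, e, m=10 * MSUN):
    """Peters (1964) eccentric merger time [s]:
    (768/425) * a^4/(4*beta) * (1 - e^2)^(7/2)."""
    beta = 64 / 5 * G**3 * m * m * (2 * m) / C**5
    return 768 / 425 * a**4 / (4 * beta) * (1 - e**2)**3.5

def t_iso(a1, ap, m=10 * MSUN):
    """Keplerian period of the bound single, eq. (t_iso), equal-mass case."""
    return 2 * math.pi * a1**1.5 / math.sqrt(G * 3 * m) * 2**1.5 * (ap / (ap - 1))**1.5

def f_merger_MC(a1_au, N_tot=20_000, N_IMS=20, seed=3):
    """Fraction of scattering experiments that end in a prompt (IMS) merger."""
    rng = random.Random(seed)
    a1 = a1_au * AU
    a_U = 2.0                                  # a'_U for equal masses, eq. (a_U)
    mergers = 0
    for _ in range(N_tot):
        for _ in range(N_IMS):
            ap = rng.uniform(1.0 + 1e-6, a_U)  # a' ~ U(a'_L, a'_U)
            e = math.sqrt(rng.random())        # thermal eccentricity
            if t_merger(ap * a1, e) < t_iso(a1, ap):
                mergers += 1                   # prompt merger; experiment ends
                break
    return mergers / N_tot

print(f_merger_MC(0.1), f_merger_MC(10.0))     # fraction decreases with a1
```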
For each iteration of the IMS we randomly choose the binary orbital properties: $a_{{\rm IMS}}$ is drawn from a uniform distribution of $a'$ in the range $\left(a'_{{\rm L}},a'_{{\rm U}}\right)$, see equations (\ref{eq:a_L}) and (\ref{eq:a_U}), and the eccentricity, $e_{{\rm IMS}}$, is drawn from a thermal distribution. Next, we calculate $t_{{\rm iso}}$ from eq. (\ref{eq:t_iso}) and compare it to $t_{{\rm merger}}$ from eq. (\ref{eq:t_merger}). If $t_{{\rm merger}}<t_{{\rm iso}}$ we count it as an IMS merger and check whether it is an eccentric merger in the LIGO band, see subsection \ref{subsec:Calculating-eccentric-mergers}. If $t_{{\rm merger}}>t_{{\rm iso}}$ we randomize the binary and single again, until we reach $N_{{\rm IMS}}$ iterations. Additionally we record all $t_{{\rm iso}}$ in order to calculate the merger time since the beginning of the scattering experiment. In the case where no merger occurs during the resonant phase we record the final end state, to eventually obtain the distribution of the orbital parameters in such cases (see subsection \ref{subsec:Post-resonance-state}). $f_{{\rm merger}}\left(a_{1}\right)$ is then just the number of mergers divided by the total number of systems considered, $N_{{\rm tot}}$. The results are presented in Figure \ref{fig:f_merger}. We find a power-law relation between $f_{{\rm merger}}$ and $a_{1}$; the fitted function is \begin{equation} f_{{\rm merger}}\left(a_{1}\right)=0.00165\times a_{1}^{-0.7123}.\label{eq:merger_fit} \end{equation} We note that the fraction scales with the inner SMA with a power smaller than unity, $f_{{\rm merger}}\propto a_{1}^{-0.7123}.$ \begin{figure} \includegraphics[width=0.9\columnwidth]{f_merger_vs_a_inner}\caption{\label{fig:f_merger} The fraction of systems that merged, $f_{{\rm merger}}$, during the resonant phase as a function of the initial SMA, $a_{1}$.
For every $a_{1}$ we simulated $10^{5}$ binary-single scattering experiments; in each experiment we use $N_{{\rm IMS}}=20$ scattering events in which we randomize the temporary binary orbital elements (see text) and check whether this temporary IMS leads to a merger. Black dots: the calculated fraction from our numerical experiment. Blue solid line: the best fit to a power law.} \end{figure} \subsubsection{Calculating the fraction of eccentric mergers} \label{subsec:Calculating-eccentric-mergers} In order to find $f_{{\rm eccentric}}\left(a_{1}\right)$ we simulate the evolution of each binary we flagged as an IMS merger. We use the well-known equations of motion for the SMA and eccentricity from \citet{Pet64}, \begin{equation} \frac{da}{dt}=-\frac{64}{5}\frac{G^{3}m_{1}m_{2}\left(m_{1}+m_{2}\right)}{c^{5}a^{3}\left(1-e^{2}\right)^{7/2}}\left(1+\frac{73}{24}e^{2}+\frac{37}{96}e^{4}\right), \end{equation} \begin{equation} \frac{de}{dt}=-e\frac{304}{15}\frac{G^{3}m_{1}m_{2}\left(m_{1}+m_{2}\right)}{c^{5}a^{4}\left(1-e^{2}\right)^{5/2}}\left(1+\frac{121}{304}e^{2}\right). \end{equation} Additionally we calculate the approximate peak gravitational-wave frequency following \citet{Wen2003}, \begin{equation} f_{{\rm peak}}\left(a_{{\rm IMS}},e_{{\rm IMS}}\right)=\frac{1}{\pi}\sqrt{\frac{Gm_{ij}}{a_{{\rm IMS}}^{3}}}\frac{\left(1+e_{{\rm IMS}} \right)^{1.1954}}{\left(1-e_{{\rm IMS}}^{2}\right)^{1.5}}. \end{equation} We consider a binary merger to be an eccentric merger if the eccentricity, $e_{{\rm IMS}}$, is greater than $0.1$ when the GW peak frequency is $f_{{\rm peak}}=10{\rm Hz}$ \citep[e.g.][]{Rodriguez2018}. In Figure \ref{fig:Fraction_eccentric} we present our calculated eccentric merger fraction as a function of the initial SMA, $a_{1}$.
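This evolution can be sketched with a crude adaptive Euler integrator (Python; the initial conditions below are hypothetical, chosen only to show one binary that reaches $10\,{\rm Hz}$ while still eccentric and one that circularizes first):

```python
import math

G = 6.674e-11; C = 2.998e8; MSUN = 1.989e30; AU = 1.496e11   # SI units

def f_peak(a, e, m_bin=20 * MSUN):
    """Wen (2003) peak GW frequency [Hz] for total binary mass m_bin."""
    return math.sqrt(G * m_bin / a**3) / math.pi * (1 + e)**1.1954 / (1 - e**2)**1.5

def ecc_at_10Hz(a0, e0, m=10 * MSUN):
    """Integrate the Peters (1964) da/dt, de/dt until f_peak = 10 Hz; return e."""
    a, e = a0, e0
    pref = G**3 * m * m * (2 * m) / C**5
    while f_peak(a, e) < 10.0:
        da = -64/5 * pref / (a**3 * (1 - e**2)**3.5) * (1 + 73/24*e**2 + 37/96*e**4)
        de = -304/15 * e * pref / (a**4 * (1 - e**2)**2.5) * (1 + 121/304*e**2)
        # step = 0.1% of the faster of the two instantaneous decay times
        dt = 1e-3 * min(a / -da, e / -de if e > 1e-8 else float("inf"))
        a, e = a + da * dt, max(e + de * dt, 0.0)
    return e

# Hypothetical binaries with a0 = 0.02 AU: only the near-radial one
# reaches the band (10 Hz) with e > 0.1.
print(ecc_at_10Hz(0.02 * AU, 0.9995))   # still eccentric (> 0.1)
print(ecc_at_10Hz(0.02 * AU, 0.9))      # largely circularized (< 0.1)
```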
The relation between the eccentric fraction and the initial SMA is well described by a power law, \begin{equation} f_{{\rm eccentric}}\left(a_{1}\right)=0.0006\times a_{1}^{-0.942}.\label{eq:eccentric_fit} \end{equation} In Figure \ref{fig:ecc_distribution} we present the eccentricity distribution at $10{\rm Hz}$ for the entire sample, weighted by the inner SMA distribution. We find that $\sim78\%$ of all mergers in the IMSs are eccentric in the aLIGO band. \begin{figure} \includegraphics[width=0.9\columnwidth]{f_eccentric_vs_a_inner}\caption{\label{fig:Fraction_eccentric}The fraction of eccentric mergers, $f_{{\rm eccentric}}$, during the resonant phase as a function of the initial SMA, $a_{1}$, for the same setup as in Figure \ref{fig:f_merger}. Black dots: the calculated fraction from our numerical experiment. Blue solid line: the best fit to a power law.} \end{figure} \begin{figure} \includegraphics[width=1.0\columnwidth]{eccentricity_distribution_10Hz_infig}\caption{\label{fig:ecc_distribution}The eccentricity distribution at $10{\rm Hz}$ GW frequency for our entire sample. The main plot shows the distribution of $\log e$; hence all values greater than $-1$ correspond to eccentric mergers, $\sim78\%$. This distribution is weighted by the distribution of the initial SMA, $a_{1}.$ The inset shows the same distribution presented in $\log\left(1-e\right)$, i.e. focusing on the most eccentric part of the distribution; the most probable value corresponds to $\log\left(1-e\right)\approx-3.5$.} \end{figure} \subsection{The post-encounter states and the production of delayed mergers} \label{subsec:Post-resonance-state} In the following we study the production of delayed mergers in cases where no prompt merger occurs during the resonant encounter, and a remnant compact binary is formed with $a_{{\rm delay}}<a_{1}$ (while the third BH is ejected from the system).
It was shown in \citet{Stone2019b} that the energy distribution of the remnant binary scales as \begin{equation} E_{{\rm delay}}\propto\left|E_{1}\right|^{-4}, \end{equation} where $E_{{\rm delay}}$ is the energy of the remnant binary and $E_{1}=-Gm_{1}m_{2}/(2a_{1})$ is the initial binary energy. Additionally, the eccentricity of the remnant binary, $e_{{\rm delay}}$, is drawn from a thermal distribution \citep{Stone2019b}. For every system that did not merge promptly during the IMS we follow the GW-inspiral evolution of the remnant binary and calculate the merger time using eq. (\ref{eq:t_merger}), and the eccentricity at $f_{{\rm peak}}=10{\rm Hz}$ following the same treatment as described in subsection \ref{subsec:Calculating-eccentric-mergers} for the prompt mergers; the only difference is that such binaries are followed up to a Hubble time, as their evolution is not restricted by the next encounter (i.e. by the isolation time). The left panel of Figure \ref{fig:ecc_dist_endstate} shows the fraction of systems that merge within a Hubble time, and the eccentricity distribution for our sample, weighted by the distribution of the inner SMA, is presented in the right panel. We note that the eccentricity distribution is very similar to those of \citet{Rodriguez2018,Samsing2018c}, apart from the somewhat lower fraction of eccentric mergers, as these mergers do not include the prompt mergers discussed before. The combined distribution of both the prompt and delayed mergers is comparable with that found by \citet{Rodriguez2018}, as might be expected given that the final distribution in both cases is generally determined by the outcomes of binary-single encounters. We found a Gaussian fit to the merger fraction of the delayed mergers, \begin{equation} \log f_{{\rm delay}}=-3.237\times e^{-\left(\frac{\log a_{1}-2.212}{1.964}\right)^{2}}, \label{eq:f_delay} \end{equation} and find that the total fraction of eccentric delayed mergers is $f_{{\rm delay,ecc}}\approx1\%$.
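The fitted form, eq. (\ref{eq:f_delay}), is straightforward to evaluate; a short sketch (Python, taking both logarithms as base 10, which is our reading of the fit):

```python
import math

def f_delay(a1):
    """Delayed-merger fraction fit, eq. (f_delay):
    log10(f) is a Gaussian in log10(a1) [a1 in AU]."""
    log_f = -3.237 * math.exp(-((math.log10(a1) - 2.212) / 1.964)**2)
    return 10.0**log_f

# The fit approaches unity at small a1, consistent with compact isolated
# binaries (a1 < 0.1 AU) merging within a Hubble time regardless of the IMS.
for a1 in (0.01, 0.1, 1.0, 100.0):
    print(a1, f_delay(a1))
```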
We note that the majority of the prompt mergers are eccentric because the merger time is limited to the isolation time $t_{{\rm iso}}\ll t_{{\rm Hubble}}$, and only the most initially eccentric binaries, having small pericenters, can merge on such short timescales. In contrast, we find a smaller fraction of eccentric delayed mergers because the merger time is instead limited by $t_{{\rm Hubble}}$, allowing binaries that merge on these longer timescales to also circularize by the time they reach the aLIGO band. We emphasize that isolated binaries with SMA smaller than $0.1{\rm AU}$ will merge within a Hubble time; we therefore expect the fitted function from equation (\ref{eq:f_delay}) to saturate near unity at the lower end of $a_{1}$. The merger rate we calculate for these binaries is therefore effectively included in the isolated-binary channels discussed by others \citep[and others]{Belczynski2016,Belczynski2008,Dominik2012}. Nevertheless, the IMS evolution could give rise to higher initial eccentricities of such binaries and thereby (on average) shorter merger times and potentially higher detected eccentricities, even if it does not affect the total merger rate. Although this is generally the case in terms of longer delay times, the relatively low fraction of eccentric mergers coming from these (small initial SMA) mergers indicates that they are effectively indistinguishable from the isolated-binary case, at least in the aLIGO band. One should note that they may still present significantly higher eccentricities at earlier stages, potentially observable by future space-based GW detectors. The latter aspects, however, are beyond the scope of the current paper.
\begin{figure*} \includegraphics[width=1.04\columnwidth]{f_ES_vs_a_inner} \includegraphics[width=1.04\columnwidth]{ES_eccentricity_distribution_10Hz}\caption{\label{fig:ecc_dist_endstate}Left plot: The fraction of delayed mergers, $f_{{\rm delay}}$, as a function of the initial SMA, $a_{1}$, out of $10^{4}$ binary-single experiments. A merger is flagged when the merger timescale is less than the Hubble time, $t_{{\rm Hubble}}.$ Black dots: the calculated fraction from our numerical experiment. Blue solid line: the best fit to a Gaussian in the variable $\log a_{1}$. Right plot: The eccentricity distribution at $10{\rm Hz}$ for all delayed mergers, weighted by the initial SMA ($a_{1}$) distribution. The plot is similar to those of \citet{Rodriguez2016,Samsing2017}. Only $\sim0.3\%$ of the entire population is eccentric at $\sim10{\rm Hz}$.} \end{figure*} \section{Volumetric rates of GW-sources from the TBH channel} \label{sec:GW-volume-rates} In this section we calculate the volumetric rate of GW mergers from TBHs in both the prompt- and delayed-merger channels. We make use of equations (\ref{eq:empty}) and (\ref{eq:full}) to calculate the fraction of TBHs that become unstable due to flybys, and then combine them with the merger fractions described above, $f_{{\rm merger}}$ and $f_{{\rm ecc}}$ (from equations (\ref{eq:merger_fit}) and (\ref{eq:eccentric_fit})). Since the different properties of spiral and elliptical galaxies affect the rates (through the different stellar number densities and velocity dispersions in the different types of galaxies), we calculate the merger rate for both typical spiral and typical elliptical galaxies. For the spiral-galaxy case we model a Milky Way (MW)--like galaxy stellar density similar to that considered in \citet{Michaely2016}. Let $dN(r)=n_{*}\left(r\right)\cdot2\pi\cdot r\cdot h\cdot dr$ be the number of stars in a region $dr$ (with scale height $h$), located at distance $r$ from the center of the Galaxy.
Additionally, we model the Galactic stellar density in the Galactic disk as \begin{equation} n_{*}\left(r\right)=n_{0}e^{-\left(r-r_{\odot}\right)/R_{l}},\label{eq:MW_galaxy} \end{equation} where $n_{0}=0.1{\rm pc}^{-3}$ is the stellar density near our Sun, $R_{l}=2.6{\rm kpc}$ is the Galactic length scale \citep{Juric2008} and $r_{\odot}=8{\rm kpc}$ is the distance of the Sun from the Galactic center. The mass of the perturber is taken to be $0.6M_{\odot}$, which is the average mass of a star in the galaxy. The velocity dispersion is set by the flat rotation curve of the galaxy, namely $\sigma=50{\rm kms^{-1}}$. For elliptical galaxies we take the density profile from \citet{Hernquist1990} and translate it to a stellar number density given an average stellar mass of $0.6M_{\odot}$: \begin{equation} \tilde{n}_{*}\left(r\right)=\frac{M_{{\rm galaxy}}}{2\pi r}\frac{r_{*}}{\left(r+r_{*}\right)^{3}},\label{eq:elliptical} \end{equation} where $r_{*}=1{\rm kpc}$ is the scale length of the galaxy and $M_{{\rm galaxy}}=10^{11}M_{\odot}$ is the total stellar mass (not the total mass) of the galaxy. The velocity dispersion for the typical elliptical galaxy we consider is $\sigma=160{\rm kms^{-1}}.$ Figure \ref{fig:Numebr-stellar-density} shows the stellar density profiles of the two galaxy prototypes. \begin{figure} \includegraphics[width=1.02\columnwidth]{stellar_densities_galaxies}\caption{\label{fig:Numebr-stellar-density}Stellar number density as a function of distance. The blue solid line represents the Milky Way galaxy. The red dashed line represents a typical elliptical galaxy.} \end{figure} Next, we estimate the fraction of TBHs out of the entire stellar population, $f_{{\rm TBH}}$. The fraction of BHs out of the entire stellar population is $f_{{\rm primary}}\approx10^{-3}$ \citep{Kroupa2001}.
We assume all stars with masses greater than $20M_{\odot}$ turn into BHs without any natal kicks \citep[and others]{Belczynski2016,Mandel2016a} and that the triple fraction of all BH progenitors is $f_{{\rm triple}}=0.25$ \citep{Sana2014}. For the mass-ratio distribution in the inner binary, $Q_{{\rm inner}}$, we consider a uniform distribution $Q_{{\rm inner}}\in\left(0.3,1\right)$ \citep{Moe2016}; this translates to a fraction $f_{{\rm secondary}}\approx0.53$ of the inner binary companions also forming BHs. Additionally, we can compute the fraction of the inner binaries (accounting for their total main-sequence masses) that have a third BH-progenitor companion; in this case we use a different mass-ratio distribution, $Q_{{\rm outer}}\propto M_{{\rm binary}}^{-2}$ \citep{Moe2016}, as appropriate for wide-separation systems, and consider the range $Q_{{\rm outer}}\in\left(0.3,1\right)$, where $M_{{\rm binary}}$ is the total mass of the inner binary. This translates to a fraction of tertiaries forming BHs of $f_{{\rm tertiary}}\approx0.76$. Hence the total fraction of TBHs in the stellar population is \begin{equation} f_{{\rm TBH}}=f_{{\rm primary}}\times f_{{\rm secondary}}\times f_{{\rm tertiary}}\times f_{{\rm triple}}\approx1\times10^{-4}. \end{equation} The SMA distributions $f_{a_{1}}$ and $f_{a_{2}}$ are taken to be log-uniform, where the inner binary SMA ranges over $a_{1}\in\left(0.1{\rm AU,}100{\rm AU}\right)$ while $a_{2}\in\left(10^{3}{\rm AU},10^{5}{\rm AU}\right)$. Following \citet{Michaely2019,Perets2012a,Igoshev2019} we take the overall fraction of triple systems that are wider than $10^{3}{\rm AU}$ to be $f_{{\rm wide}}=0.2$. \subsection{Volumetric prompt-merger rates} In the following we calculate the volumetric prompt-merger rate and the volumetric eccentric prompt-merger rate.
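Before evaluating the rate integrals, the model ingredients assembled so far (the two density profiles and the population fractions) can be collected in one place (Python sketch; the function names are ours, for illustration):

```python
import math

# Population fractions from this section; the product gives f_TBH ~ 1e-4.
f_primary, f_secondary, f_tertiary, f_triple = 1e-3, 0.53, 0.76, 0.25
f_TBH = f_primary * f_secondary * f_tertiary * f_triple

def n_spiral(r_kpc, n0=0.1, r_sun=8.0, R_l=2.6):
    """MW-like exponential disk number density [pc^-3], eq. (MW_galaxy)."""
    return n0 * math.exp(-(r_kpc - r_sun) / R_l)

def n_elliptical(r_kpc, M_gal=1e11, r_star=1.0, m_avg=0.6):
    """Hernquist (1990) profile, eq. (elliptical), divided by the mean
    stellar mass to give a number density [pc^-3]; radii in kpc."""
    r, rs = r_kpc * 1e3, r_star * 1e3            # convert to pc
    return M_gal / (2 * math.pi * r) * rs / (r + rs)**3 / m_avg

print(f_TBH)               # ~1e-4
print(n_spiral(8.0))       # 0.1 pc^-3 at the solar radius, by construction
print(n_elliptical(1.0))   # the elliptical model is denser at small radii
```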
\subsubsection*{Spiral galaxies} For a MW-like galaxy the rate of BH prompt GW-mergers from perturbed wide TBHs in the field is \begin{equation} \Gamma_{{\rm MW}}=\int_{0.5{\rm kpc}}^{15{\rm kpc}}\int_{10^{3}{\rm AU}}^{10^{5}{\rm AU}}\int_{10^{-1}{\rm AU}}^{10^{2}{\rm AU}}\frac{L_{{\rm merger}}\left(a_{1},a_{2},n_{*}\right)}{10{\rm Gyr}}da_{1}da_{2}dN\left(r\right)\label{eq:MW_merger_IMS} \end{equation} \[ \approx0.017\,{\rm Myr^{-1}}, \] where $L_{{\rm merger}}\equiv L\left(a_{1},a_{2},n_{*}\right)f_{a_{1}}f_{a_{2}}f_{{\rm TBH}}f\left(a_{1}\right)_{{\rm merger}}$. In order to translate this rate into the volumetric merger rate in spiral galaxies, $R_{{\rm spiral}}$, we follow \citet{Belczynski2016} and calculate $R_{{\rm spiral}}=10^{3}n_{{\rm spiral}}\times\Gamma_{{\rm MW}}$, to get \begin{equation} R_{{\rm spiral}}\approx0.2\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}, \end{equation} where $n_{{\rm spiral}}=0.0116\,{\rm Mpc^{-3}}$ is the local density of MW-like galaxies \citep{Kopparapu2008} and we define \begin{equation} F_{{\rm model}}\equiv\left(\frac{f_{{\rm primary}}}{10^{-3}}\right)\left(\frac{f_{{\rm secondary}}}{0.53}\right)\left(\frac{f_{{\rm tertiary}}}{0.76}\right)\left(\frac{f_{{\rm wide}}}{0.2}\right)\left(\frac{f_{{\rm triple}}}{0.25}\right) \end{equation} to express how the results depend on our model assumptions. Moreover, we can calculate the eccentric merger rate from this channel simply by substituting $f_{{\rm merger}}$ with $f_{{\rm ecc}}$ from figure \ref{fig:ecc_distribution}, to find \begin{equation} \Gamma_{{\rm MW,ecc}}=\int_{0.5{\rm kpc}}^{15{\rm kpc}}\int_{10^{3}{\rm AU}}^{10^{5}{\rm AU}}\int_{10^{-1}{\rm AU}}^{10^{2}{\rm AU}}\frac{L_{{\rm ecc}}}{10{\rm Gyr}}da_{1}da_{2}dN\left(r\right)\label{eq:MW_ec_IMS} \end{equation} \[ \approx0.01\,{\rm Myr^{-1}}, \] where $L_{{\rm ecc}}\equiv L_{{\rm merger}}\left(a_{1},a_{2},n_{*}\right)f_{a_{1}}f_{a_{2}}f_{{\rm TBH}}f\left(a_{1}\right)_{{\rm eccentric}}$.
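The conversion $R_{{\rm spiral}}=10^{3}n_{{\rm spiral}}\times\Gamma_{{\rm MW}}$ is a pure unit rescaling, from a per-galaxy rate in ${\rm Myr^{-1}}$ times a galaxy density in ${\rm Mpc^{-3}}$ to ${\rm Gpc^{-3}yr^{-1}}$; a minimal sketch with the quoted numbers:

```python
def volumetric_rate(gamma_per_Myr, n_gal_per_Mpc3):
    """Per-galaxy rate [Myr^-1] x galaxy density [Mpc^-3] -> [Gpc^-3 yr^-1].
    The prefactor 10^3 = (10^9 Mpc^3 per Gpc^3) x (10^-6 Myr per yr)."""
    return 1.0e3 * n_gal_per_Mpc3 * gamma_per_Myr

R_spiral = volumetric_rate(0.017, 0.0116)  # ~0.2 (x F_model) Gpc^-3 yr^-1
```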
Hence, \begin{equation} R_{{\rm spiral,ecc}}\approx0.12\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}},\label{eq:R_ecc_IMS} \end{equation} and the fraction of eccentric mergers from this channel is $\sim78\%$. \subsubsection*{Elliptical galaxies} Following a similar procedure, we now calculate the prompt-merger rate for elliptical galaxies. Taking Eq. (\ref{eq:elliptical}), \begin{equation} \Gamma_{{\rm elliptical}}=\int_{0.5{\rm kpc}}^{30{\rm kpc}}\int_{10^{3}{\rm AU}}^{10^{5}{\rm AU}}\int_{10^{-1}{\rm AU}}^{10^{2}{\rm AU}}\frac{L_{{\rm merger}}\left(a_{1},a_{2},\tilde{n}_{*}\right)}{10{\rm Gyr}}da_{1}da_{2}dN\left(r\right) \end{equation} \[ \approx0.03\,{\rm Myr^{-1}}. \] Next we input the number density of elliptical galaxies in the local universe, $n_{{\rm elliptical}}\approx0.1\,{\rm Mpc^{-3}}$ \citep{Samsing2014}, and get \begin{equation} R_{{\rm elliptical}}=3.2\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}},\label{eq:R_merger_elliptical_IMS} \end{equation} and for the eccentric mergers we get \begin{equation} R_{{\rm elliptical,ecc}}=1.2\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}.\label{eq:R_eccentric_elliptical_IMS} \end{equation} Adding the contributions from both spiral and elliptical galaxies, we get a total volumetric prompt-merger rate of \begin{equation} R_{{\rm resonant}}=R_{{\rm spiral}}+R_{{\rm elliptical}}\approx3.4\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}, \end{equation} and a volumetric eccentric prompt-merger rate of \begin{equation} R_{{\rm resonant,ecc}}=R_{{\rm spiral,ecc}}+R_{{\rm elliptical,ecc}}\approx1.2\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}.\label{eq:Resosant_ecc} \end{equation} \subsection{Volumetric delayed merger rates} In the following we calculate the volumetric delayed-merger rate and the volumetric eccentric delayed-merger rate.
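Summing the two galaxy types is then straightforward; a minimal arithmetic check of the totals quoted above (all rates in units of $F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}$):

```python
R_spiral, R_elliptical = 0.2, 3.2            # prompt-merger rates
R_spiral_ecc, R_elliptical_ecc = 0.12, 1.2   # eccentric prompt-merger rates

R_resonant = R_spiral + R_elliptical          # ~3.4, as quoted
R_resonant_ecc = R_spiral_ecc + R_elliptical_ecc
```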
Following the same procedure described in subsection \ref{subsec:Post-resonance-state}, we calculate the merger rate by substituting $f_{{\rm merger}}$ with $f_{{\rm delay}}$ from equation (\ref{eq:f_delay}). As done in the previous section, we calculate the merger rate of systems with $a_{1}\in\left(10^{-1}\,{\rm AU},10^{2}\,{\rm AU}\right)$ for both types of galaxies. For spiral galaxies we find \begin{equation} \Gamma_{{\rm spiral,delay}}\approx0.9\,{\rm Myr^{-1}}, \end{equation} which leads to \begin{equation} R_{{\rm spiral,delay}}\approx9.2\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}. \end{equation} For elliptical galaxies we find \begin{equation} \Gamma_{{\rm elliptical,delay}}\approx1.5\,{\rm Myr^{-1}}, \end{equation} which translates to \begin{equation} R_{{\rm elliptical,delay}}\approx150\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}. \end{equation} In total we compute a volumetric rate of \begin{equation} R_{{\rm delay}}=R_{{\rm spiral,delay}}+R_{{\rm elliptical,delay}}\approx160\times F_{{\rm model}}{\rm Gpc^{-3}yr^{-1}}. \end{equation} Unlike the prompt mergers, only $\sim0.3\%$ of delayed mergers end up eccentric at $10\,{\rm Hz}$. Therefore we expect a volumetric rate of only $\sim0.5\times F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}$ eccentric mergers from this channel. \section{Discussion} \label{sec:Discussion} \subsection{Model assumptions} The progenitor model for BBH GW-mergers presented here is based on several assumptions; in the following we address each of them. \textbf{Natal kicks}. We cannot emphasize this point enough: the \textit{most} critical assumption we make is that the BHs discussed here receive no natal kick at birth. The importance of this assumption, as discussed in more depth in \citet{Michaely2019}, is that ultra-wide binaries/triples are highly susceptible to disruption by such kicks.
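The delayed-channel totals follow the same arithmetic; as a check with the quoted components (again in units of $F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}$):

```python
R_spiral_delay, R_elliptical_delay = 9.2, 150.0
R_delay = R_spiral_delay + R_elliptical_delay   # ~160

f_ecc_at_10Hz = 0.003                           # ~0.3% eccentric at 10 Hz
R_delay_ecc = f_ecc_at_10Hz * R_delay           # ~0.5
```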
The binding energy of the outer binary in the triple is very small, and natal kicks with velocities comparable to, or higher than, the typical orbital velocity of the outer orbit would disrupt the triple, significantly decreasing the number of potential TBH progenitors, as discussed in \citet{Silsbee2017}. Currently, BH natal kicks are poorly constrained \citep{Repetto2012,Repetto2017}. However, there is some evidence that BHs are formed following failed supernovae (SNe) \citep{Ertl2015,Adams2017}. In the failed-SN scenario a large amount of fallback material is accreted onto the newly formed compact object and suppresses any natal kick, as the BH forms through direct collapse \citep{Fryer1999}. In fact, most if not all other theoretical models that can potentially reproduce the inferred high rates of BBH GW-mergers follow similar assumptions, and also assume no or low-velocity natal kicks for BHs, or no/low natal kicks for higher-mass BHs that form through direct collapse \citep[e.g.][where many of the dynamical models make use of the same assumptions]{Belczynski2016,Belczynski2008,Belczynski2007}. We generally follow the same approach. \textbf{Equal BH masses}. Here we considered only TBHs composed of equal-mass BH components. This simplistic assumption is made as a first step in developing this model. In the future we will expand the mathematical formalism to account for unequal masses. Nevertheless, we do not expect the rate to change dramatically \citep[e.g.][]{Samsing2018b} when unequal BHs are considered. Here we briefly discuss possible implications of our model for the mass function of the GW-mergers. The masses of the components of the inner binaries are likely to be correlated, as discussed in the assumptions regarding the rate calculations in section \ref{sec:GW-volume-rates}, and generally to be more similar to those expected from the isolated-binaries channel, where short-period binaries serve as GW-source progenitors.
The outer third component might be expected to be randomly drawn from a regular mass function, as it forms almost in isolation, given the large separation from the inner binary (although in case it was dynamically captured, e.g. \citet{Perets2012a}, it would have some preference for higher masses). Since typically the less massive component is ejected in binary-single encounters, the dynamics will systematically give rise to overall higher-mass BHs taking part in mergers, compared with the BH mass function of single, or even isolated binary, BHs. A more detailed prediction, however, will require further study of the binary-single encounter dynamics of unequal-mass TBHs. \textbf{Inner binary SMA boundaries}. We set the lower boundary of the inner SMA to $a_{1}=0.1\,{\rm AU}$, because the merger time via GW emission of a circular binary with $m_{1}=m_{2}=10M_{\odot}$ at $a_{1}=0.1\,{\rm AU}$ is $t_{{\rm merger}}\approx10^{10}\,{\rm yr}$. Hence, binaries with SMAs smaller than $0.1\,{\rm AU}$ may merge in isolation even without any perturbations, and thereby our model would not increase the GW-merger rates originating from such short-period binaries. Nevertheless, as discussed above, such binaries which are part of TBHs might still evolve through the triple instability we present here, in which case their merger characteristics will differ; in particular their eccentricities might be higher, they will have a shorter DTD, and they should not generally show spin-orbit alignment. For completeness, we present in table \ref{tab:merger_Rates_for_e-2} the volumetric merger rates accounting for binaries with initial SMA down to $a_{1}=10^{-2}\,{\rm AU}$, in order to compare with our results. \begin{table*} \caption{\label{tab:merger_Rates_for_e-2} The volumetric merger rates for the case where $a_{1}\in\left(10^{-2}\,{\rm AU},10^{2}\,{\rm AU}\right)$.
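The boundary value quoted above, $t_{{\rm merger}}\approx10^{10}\,{\rm yr}$ for a circular $10+10\,M_{\odot}$ binary at $a_{1}=0.1\,{\rm AU}$, follows from the standard circular-orbit GW inspiral (Peters 1964) time, $t=5c^{5}a^{4}/\left[256\,G^{3}m_{1}m_{2}\left(m_{1}+m_{2}\right)\right]$; a sketch in SI units (constants hard-coded for illustration):

```python
G, C = 6.674e-11, 2.998e8                 # gravitational constant, light speed
MSUN, AU, YR = 1.989e30, 1.496e11, 3.156e7  # solar mass [kg], AU [m], year [s]

def t_gw_circular_yr(a_AU, m1_msun, m2_msun):
    """GW merger time of a circular binary (Peters 1964), in years."""
    a = a_AU * AU
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    t_s = 5.0 * C**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))
    return t_s / YR

t_boundary = t_gw_circular_yr(0.1, 10.0, 10.0)  # ~1e10 yr, cf. the text
```

The steep $t\propto a_{1}^{4}$ dependence is what makes $0.1\,{\rm AU}$ an effective Hubble-time boundary.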
Effectively these numbers include the isolated-binary rates, hence they do not represent an \emph{additional} contribution to the merger rates from the TBH channel.} \begin{tabular}{|c|c|c|c|} \hline & prompt mergers $[F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}]$ & eccentric mergers $[F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}]$ & delayed mergers $[F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}]$\tabularnewline \hline \hline spirals & $\sim0.6$ & $\sim0.5$ & $\sim15$\tabularnewline \hline elliptical & $\sim9.1$ & $\sim8.6$ & $\sim250$\tabularnewline \hline \end{tabular} \end{table*} \textbf{Volumetric rate calculation}. In order to calculate the volumetric rate we assume galaxy number densities of $n_{\rm spiral}=0.0116\,{\rm Mpc^{-3}}$ and $n_{\rm elliptical}\approx0.1\,{\rm Mpc^{-3}}$ for spiral and elliptical galaxies, respectively. Furthermore, we assume the MW is the prototype of spiral galaxies, with a velocity dispersion of $\sim50\,{\rm km\,s^{-1}}$ and a total mass of $10^{11}M_\odot$, and that the model we present in equation (\ref{eq:elliptical}) is the prototype for ellipticals, with a total mass of $10^{11}M_\odot$. This assumption may change the merger rate significantly for different galaxy prototypes. Specifically, for elliptical galaxies with a total mass of $\sim5\times10^{10}M_{\odot}$ the rate decreases by an order of magnitude. The sensitivity of our results to the specific model for a prototype galaxy motivates us to explore the issue in future research. Moreover, in \citet{Michaely2019} we calculated the merger rate from wide binary systems solely for spiral galaxies. Following this work we argue that the rate of mergers from wide BBH systems is dominated by elliptical rather than spiral galaxies. We will explore this scenario elsewhere. \subsubsection{Delay time distribution (DTD)} \citet{Michaely2019} showed that the DTD for wide binaries is uniform in time.
A priori, one would expect that the DTD for the TBH case might be more complicated due to the additional inspiral timescale during the resonant encounter or the later inspiral of the delayed mergers. In figure \ref{fig:DTD} we present the inspiral times of the prompt mergers. We see that the merger times are very short, $t_{{\rm merger}}<10^{6}\,{\rm yr}$, which would hardly affect the overall uniform DTD for the initial production of the destabilized TBHs, and can be generally neglected in that context. The DTD for the prompt-GE channel is therefore expected to be generally uniform in time, similar to the ultra-wide binary channel discussed in \citet{Michaely2019}. \begin{figure} \includegraphics[width=1.00\columnwidth]{t_merger_dist}\caption{\label{fig:DTD}The distribution of merger times in the resonant phase. The merger time is $t_{{\rm merger}}\lesssim P_{2}$, hence we regard mergers from the resonant stage as prompt once the TBH becomes unstable.} \end{figure} The inspiral times of the delayed mergers are different (see figure \ref{fig:f_merger}). The distribution is dominated by a peak at $\sim10^{6}\,{\rm yr}$, approximately corresponding to the merger time of an initially circular binary with $a_{1}=0.01\,{\rm AU}$, which is the most heavily weighted value given the assumed log-uniform distribution of the SMAs. Considering a larger lower bound, $a_{1}>0.01\,{\rm AU}$, the peak would shift and be centered around $t\left(a_{1}\right)_{{\rm merger}}$ from equation (\ref{eq:t_merger}), up to $a_{1}=0.1\,{\rm AU}$, which corresponds to a merger time of $\sim10^{10}\,{\rm yr}$, the upper cutoff. The shape to the right of the peak effectively traces the SMA distribution, $f_{a}$. \begin{figure} \includegraphics[width=1.0\columnwidth]{t_ES_merger_dist}\caption{\label{fig:The-inspiral-time} The inspiral time for all delayed mergers weighted by the distribution of the inner SMA, $a_{1}$.
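The location of the peak can be checked with the $t\propto a_{1}^{4}$ GW-inspiral scaling, anchored on the reference value quoted earlier in the text ($t\approx10^{10}\,{\rm yr}$ for a circular $10+10\,M_{\odot}$ binary at $0.1\,{\rm AU}$); a minimal sketch:

```python
T_REF_YR, A_REF_AU = 1.0e10, 0.1   # reference circular 10+10 Msun binary

def t_merger_yr(a_AU):
    """Circular-binary GW merger time from the a^4 scaling."""
    return T_REF_YR * (a_AU / A_REF_AU) ** 4

t_peak = t_merger_yr(0.01)   # ~1e6 yr: the DTD peak discussed above
```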
The black solid line is to guide the eye, with a slope that corresponds to $t_{{\rm delay,merger}}^{1/4}$, and the black dashed line corresponds to $t_{{\rm delay,merger}}^{-1/4}$.} \end{figure} In this case the DTD would be slightly affected by the additional merger timescale of the shorter-period binaries, and only somewhat changed in the tail of the distribution at long merger times, giving rise to some modulation of the DTD and leading to a slightly decreasing DTD function. We should also briefly note that at the early stages of galaxy formation, in particular in disk galaxies, the stellar densities and the number of BHs are initially low compared to our basic model assumptions, which account only for fully formed galaxies. However, this should hardly affect the observable GW-sources, most of which would not originate from such early times. \subsubsection{Spin distribution} The dynamical process of multiple binary-single interactions effectively samples the phase space chaotically, such that the end-state inclination distribution is close to isotropic \citep{Stone2019b}. It is likely that this holds for the resonant phase as well, where multiple, though fewer, encounters occur, although this assumption needs to be better verified in future studies. Overall we expect to find, similarly to dynamical mergers in dense environments, an isotropic distribution of the orbital inclination and therefore an isotropic spin-orbit alignment distribution. In that respect, the current findings of a preference for either an isotropic spin distribution or low spin magnitudes for the observed systems \citep{Will2014} are consistent with our suggested channel. \subsubsection{Eccentric mergers distributions} It was previously shown that eccentric mergers can originate from dynamical channels in dense cluster environments \citep[e.g.][]{Samsing2017,Rodriguez2018}, which predict a volumetric rate of eccentric mergers of $\sim0.2-1\,{\rm Gpc^{-3}yr^{-1}}$.
As we show here, eccentric mergers can also arise from the wide-TBH channel in the field. We find in equation (\ref{eq:Resosant_ecc}) a volumetric rate of $\sim5\times F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}$ eccentric mergers from the prompt-merger channel, which dominates the contribution of eccentric mergers in this TBH scenario. Additionally, we present their distribution in the inset of figure \ref{fig:ecc_distribution} and expect a peak of the distribution at $\log\left(1-e_{1}\right)\approx-3.5$. These extremely high eccentricities correspond to a unique signal in aLIGO/VIRGO. Moreover, we find that $0.3\%$ of the delayed mergers we studied are eccentric at $10\,{\rm Hz}$ (see figure \ref{fig:ecc_dist_endstate}). This corresponds to a rate of $\sim0.3-0.8\times F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}$. Combining the prompt-merger and delayed-merger contributions, we find an overall rate of eccentric mergers of \begin{equation} R_{{\rm eccentric,TBH}}\approx1-10\times F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}. \end{equation} \section{Summary} \label{sec:Summary} In this paper we extended our previous study of BBH GW-source formation from ultra-wide binaries perturbed by random flyby encounters in the field, and studied ultra-wide triples. We calculated the merger rate and eccentric merger rate of BBHs originating from this channel and find this to potentially be one of the main channels for BBH GW sources. Wide TBH systems are gravitationally perturbed by random flybys of stars in the field, giving rise to a random walk of their outer-binary angular momentum. As a result, a fraction of the TBH outer binaries become highly eccentric and their pericenter is sufficiently decreased to give rise to a strong encounter between the outer TBH component and the inner binary, destabilizing the system and driving it into an effective binary-single resonant encounter, similar to the encounters that drive the dynamical formation channels of GW-sources in dense stellar clusters.
Consequently, the TBHs evolve through a sequence of many binary-single encounters, during which two of the BHs might merge, in what we term prompt-GE mergers. Alternatively, one of the BHs could be ejected, leaving behind a remnant, more compact BBH. In the latter case, sufficiently compact remnant BBHs would inspiral and merge through GW emission, and contribute to the formation of detectable GW sources, in what we term the delayed-merger channel. We find the total volumetric rate of systems that merge via GW emission from both channels to be \begin{equation} R_{{\rm IMS,merger}}\approx50-150\times F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}} \end{equation} and an eccentric GW-merger volumetric rate of \begin{equation} R_{{\rm IMS,eccentric}}\approx1-10\times F_{{\rm model}}\,{\rm Gpc^{-3}yr^{-1}}, \end{equation} comparable to and consistent with the currently inferred rate of BBH GW-mergers, and consistent with the current non-detection of eccentric mergers, given the still low statistics. We do expect, however, a few eccentric mergers to be detected over the coming few years, once the cumulative number of identified GW-sources is of the order of several hundreds. We also predict the spin-orbit alignment of the GW mergers from this channel to generally be isotropic, and a close-to-uniform delay time distribution, with a significant contribution from both early- and late-type galaxies, and a preference for galaxies with higher velocity dispersions, which are more favorable for field interactions with wide TBHs. \section*{Acknowledgements} EM thanks Johan Samsing for helpful and enlightening discussions that led to this project. HBP acknowledges support from the Kingsley distinguished-visitor program at Caltech, where some of this work was done. \textbf{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} Thermal rectification~\cite{starr1936,terraneo2002a,RevModPhysLibaowen} is an anomalous heat transfer phenomenon in which the heat flux prefers to flow in one direction, with higher thermal conductivity in that direction. It has received much attention since its first experimental observation by Starr~\cite{starr1936}. Many studies, including numerical simulations and experimental measurements~\cite{RevModPhysLibaowen,yang2012a,liu2019a,roberts2011a}, have been carried out to better understand the underlying thermal transport mechanisms. In the past two decades, most attention has been paid to phonon management of energy transport in nanoscale thermal systems~\cite{RevModPhysLibaowen}, such as thermal diodes~\cite{li_thermal_2004} or thermal logic gates~\cite{PhysRevLett.99.177208}. A theoretical model was first developed by Terraneo et al.~\cite{terraneo2002a} for a thermal rectifier in $2002$, in which the rectifying effect is obtained by acting on the parameters that control the nonlinearity of the lattice. Although this model is far from a realistic implementation, it opens the possibility of proposing thermal devices of practical relevance. Li et al.~\cite{li_thermal_2004} demonstrated a thermal diode model that works over a wide range of system parameters by coupling two nonlinear one-dimensional lattices. Following these theoretical predictions, in $2006$ Chang et al.~\cite{chang_solid-state_2006} demonstrated nanoscale solid-state thermal rectification, in which high-thermal-conductivity carbon and boron nitride nanotubes were mass-loaded externally and inhomogeneously with heavy molecules. The resulting nanoscale system yields asymmetric axial thermal conductance, with greater heat flow in the direction of decreasing mass density.
Along this line, many simulation investigations and experiments~\cite{wang2017} have focused on various asymmetric nanostructures~\cite{yang2012a,liu2019a,yousefzadi_nobakht_thermal_2018} or lattice systems~\cite{ai2011a,ai2011,zhong2009}, including mass-graded and shape-changing (tapered, tailored) structures, such as carbon nanocones~\cite{yang_carbon_2008}, single-walled carbon nanohorns~\cite{wu2008}, graphene nanoribbons~\cite{hu2009,yang_thermal_2009,wang2014} and graphene nanojunctions~\cite{ouyang2010}. Several common mechanisms~\cite{RevModPhysLibaowen,liu2019a} are usually invoked to explain thermal rectification. One of them is the mismatch of the phonon spectra of the two connected materials or asymmetric nanostructures. Calculations of the vibrational density of states indicate that the phonon spectra overlap varies when the direction of the temperature gradient is switched~\cite{yang_thermal_2009,liu2019a,roberts2011a}. This difference can be significant in nanoscale asymmetric thermal systems. As the phonon mean free path becomes comparable to the characteristic length of the thermal system, phonon ballistic scattering~\cite{ouyang2010,miller2009} and local edge scattering~\cite{wang2017} also become important factors for the thermal conductivity. The asymmetric phonon scattering finally leads to asymmetric heat conduction. Apart from these mechanisms, another necessary condition~\cite{go2010} for thermal rectification is that the thermal conductivity of the material or structure be a nonseparable function of both position and temperature, which is also one of the mechanisms invoked to explain thermal rectification in bulk materials. Peyrard~\cite{peyrard2006} and Dames~\cite{dames2009} noted that thermal rectification can be realized in bulk materials by selecting materials with suitable properties or different temperature-dependent thermal conductivities.
Beyond theoretical analysis, experimental progress in bulk materials has also been made by Kobayashi et al.~\cite{kobayashi2009} and Sawaki et al.~\cite{sawaki2011}. The former prepared an oxide thermal rectifier made of two cobalt oxides with different thermal conductivities~\cite{kobayashi2009}, while the latter investigated thermal rectification in a bulk material with a pyramid shape to elucidate the shape dependence of the thermal rectification~\cite{sawaki2011}. Beyond heat transfer in nanoscale thermal systems or bulk materials, thermal rectification may also occur as phonon transport crosses over from the ballistic to the diffusive regime. The phonon mean free path in silicon or graphene usually spans $3$-$4$ orders of magnitude. This indicates that, even for a given characteristic length, phonon transport crosses over from the ballistic to the diffusive regime~\cite{RevModPhys.90.041002,RevModPhysLibaowen}. For many asymmetric geometries with heat transfer in the ballistic-diffusive regime, the thermal conductivity usually depends on both position and temperature~\cite{cahill2003nanoscale,cahill2014nanoscale} and is nonseparable~\cite{go2010}. In addition, many previous studies~\cite{RevModPhys.90.041002,cahill2003nanoscale,cahill2014nanoscale} show that the length-dependent thermal conductivity changes rapidly when the characteristic length is comparable to the phonon mean free path. Hence, there is great potential to realize thermal rectification in the ballistic-diffusive regime. One widely used method to realize thermal rectification in this regime is changing the spatial configuration of the thermal system. Wang et al.~\cite{wang_monte_2010} investigated phonon transport in single silicon nanowires with variable cross-section. The results show that nanowires with a tapered cross-section decelerate the thermal flux, while those with an incremental cross-section have the opposite influence. However, the simulations are limited to below $100$~nm. D.
Jou et al.~\cite{criado-sancho2013,criado-sancho2013a} analysed thermal rectification in inhomogeneous or composite thermal systems from the macro- to the nanoscale~\cite{carlomagno2016}, for example nanoporous/bulk silicon devices~\cite{criado-sancho2012}. The rectification is attributed to the significant decrease of the thermal conductivity as the characteristic length of the thermal system decreases from the macro- to the nanoscale while the phonon boundary scattering increases. Arora et al.~\cite{arora2017} studied thermal rectification in selectively restructured graphene by introducing vacancy defects in a portion of the graphene. They find that the thermal rectification is mainly a function of the lengths of the defective and nondefective regions and the volume percentage of defects, and is mostly independent of defect size. A longer (of the order of $10\,\mu$m) nondefective side, coupled to a shorter (of the order of $100$~nm) defective side, can lead to large thermal rectification. However, the manufacturing process of such restructured graphene materials is more expensive and complex compared to silicon. In this study, the radial thermal rectification in a homogeneous concentric silicon ring from the ballistic to the diffusive regime is investigated, motivated by the nonuniform radial thermal conductivity~\cite{yang_nanoscale_2015,li2019} and the radial thermal rectification in graphene~\cite{yousefi2019} and helium II~\cite{saluto2018}. The paper is organized as follows. In Sec.~\ref{sec:method} we introduce the schematic of the concentric silicon ring and the basic theory of the phonon Boltzmann transport equation (BTE). In Sec.~\ref{sec:results}, the numerical results on the radial thermal rectification are presented and analysed. Finally, conclusions are drawn in Sec.~\ref{sec:conclusion}.
\section{Structure and phonon BTE} \label{sec:method} Figure~\ref{silicondiskpdf} shows the simulated silicon thermal system, bounded by two concentric circular boundaries with different radii $R_i$ and $R_o$, where $R_i<R_o$. In what follows, the characteristic length of the thermal system is defined as $L=R_o-R_i$. The temperatures of the inner and outer boundaries are fixed at $T_i=T_0 \left(1+ \Delta/2 \right)$ and $T_o=T_0 \left(1 - \Delta/2 \right)$, respectively, where $T_0$ and $\Delta$ are the average temperature and the normalized temperature difference between the two boundaries. For $\Delta \neq 0$, there is a temperature gradient along the radial direction. The total heat flux across the circle with radius $r$ is $Q(r)= 2\pi r \bm{q} \cdot \mathbf{n}$, where $R_i \leq r \leq R_o $, $\bm{q}$ is the heat flux and $\mathbf{n}$ is the unit vector along the radial direction from the inner boundary to the outer boundary. At steady state, $Q$ is a constant due to energy conservation. \begin{figure} \centering \includegraphics[scale=0.3,viewport=150 0 750 600,clip=true]{disk.pdf} \caption{Schematic of the concentric silicon ring. The characteristic length of the thermal system is $L=R_o-R_i$.} \label{silicondiskpdf} \end{figure} For $\Delta >0$, the heat flux flows from the inner to the outer boundary and the associated macroscopic variables $W$ ($T,~Q,~\bm{q}$, etc.) are labeled '$W_+$'. For $\Delta<0$, the heat flux flows in the opposite direction and the associated macroscopic variables are labeled '$W_-$'. According to previous studies, as $|\Delta|$ increases, thermal rectification~\cite{li_thermal_2004,RevModPhysLibaowen} may occur, i.e., $|Q_+| \neq |Q_-|$.
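In the diffusive limit with a constant conductivity, the steady radial problem in this annular geometry has the classic logarithmic temperature profile, which makes the constancy of $Q(r)=2\pi r\,\bm{q}\cdot\mathbf{n}$ explicit; a small sketch (2D ring, per unit thickness, constant $k$ assumed purely for illustration):

```python
import math

def T_profile(r, Ri, Ro, Ti, To):
    """Steady diffusive temperature in an annulus: (1/r) d/dr (r dT/dr) = 0."""
    return Ti + (To - Ti) * math.log(r / Ri) / math.log(Ro / Ri)

def Q_total(r, Ri, Ro, Ti, To, k=1.0):
    """Q(r) = -2 pi r k dT/dr; independent of r at steady state."""
    dT_dr = (To - Ti) / (r * math.log(Ro / Ri))
    return -2.0 * math.pi * r * k * dT_dr
```

The factor of $r$ cancels in `Q_total`, reproducing the energy-conservation statement above.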
In order to investigate the thermal rectification in silicon from tens of nanometers to tens of microns, numerical simulations are performed based on the phonon Boltzmann transport equation (BTE) under the single-mode relaxation time approximation~\cite{ChenG05Oxford,kaviany_2008,MazumderS01MC}, i.e., \begin{equation} \bm{v} \cdot \nabla f = \frac { f^{0}({T}_{\text{loc}})-f }{\tau(T) }, \label{eq:BTE11} \end{equation} where $f=f(\bm{x},\bm{s},\omega,p)$ is the phonon distribution function, with space vector $\bm{x}$, unit directional vector $\bm{s}$ in 3D coordinates, phonon angular frequency $\omega$ and polarization $p$. $\bm v=\nabla_{\bm{K}} {\omega}$ is the group velocity calculated from the phonon dispersion, where $\bm{K}$ is the wave vector, assumed to be isotropic. The optical phonon branches are not considered due to their small contribution to the thermal conduction. Approximate quadratic polynomial dispersions~\cite{pop2004analytic} are used to represent the dispersion relation~\cite{brockhouse1959lattice} of the acoustic phonon branches in monocrystalline silicon. $\tau=\tau(T)$ is the effective relaxation time, a combination of the various intrinsic phonon scattering mechanisms (including impurity, normal (N) and umklapp (U) scattering) based on Matthiessen's rule~\cite{holland1963analysis,kaviany_2008}, where $T$ is the temperature, to be discussed later. Here, the empirical formulas for the impurity, N and U scattering are used, which can be found in Ref.~\cite{terris2009modeling}. $f^{0}$ is the local equilibrium state at the pseudo-temperature ${T}_{\text{loc}}$, satisfying the Bose-Einstein distribution~\cite{ChenG05Oxford,kaviany_2008}, i.e., $$f^{0} (T)= f_{\text{BE}} (T)= \frac{1}{ \exp(\hbar\omega/k_{B}T)-1 } ,$$ where $\hbar$ is the reduced Planck constant and $k_B$ is the Boltzmann constant.
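The combination of the scattering channels via Matthiessen's rule amounts to adding scattering rates; a minimal sketch (the three relaxation-time values below are placeholders for illustration, not the fitted experimental formulas cited in the text):

```python
def effective_tau(taus):
    """Matthiessen's rule: 1/tau_eff = sum_i 1/tau_i."""
    return 1.0 / sum(1.0 / t for t in taus)

# placeholder impurity / normal / umklapp relaxation times [s]
tau_imp, tau_N, tau_U = 1.0e-9, 5.0e-10, 2.0e-10
tau_eff = effective_tau([tau_imp, tau_N, tau_U])
```

The effective relaxation time is always shorter than each individual channel, as expected for independent scattering processes.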
The pseudo-temperature ${T}_{\text{loc}}$ is introduced to ensure the energy conservation of the scattering term, i.e., \begin{equation} \sum_{p}\int \int_{4\pi} \frac{\hbar \omega D}{4\pi} \frac { f^{0}(T_{\text{loc}})-f }{\tau(T)} d{\Omega}d{\omega}=0, \label{eq:scatteringterm} \end{equation} where $D$ is the phonon density of states, and $d{\Omega}$ and $d{\omega}$ denote integration over the solid angle and the frequency space, respectively. In systems out of equilibrium, the temperature $T$ can be defined in terms of an equilibrium distribution with the same energy density, namely the equivalent equilibrium temperature~\cite{ChenG05Oxford}, i.e., \begin{equation} \sum_{p}\int \int_{4\pi} \frac{\hbar \omega D}{4\pi} \left( f_{\text{BE}}(T)-f \right) d{\Omega}d{\omega}=0. \label{eq:thermodynamicterm} \end{equation} The heat flux $\bm{q}$ is calculated by \begin{equation} \bm{q} =\sum_{p}\int \int_{4\pi} \bm{v} \hbar \omega f D/4{\pi} d{\Omega}d{\omega}. \label{eq:heatflux} \end{equation} An implicit synthetic scheme~\cite{ADAMS02fastiterative} is used to solve Eq.~\eqref{eq:BTE11}; details can be found in Ref.~\cite{ZHANG20191366}. To ensure numerical accuracy, $30164 \times 1152 \times 40$ cells are used to discretize the physical space, solid angle space and frequency space, respectively. In addition, isothermal thermalizing boundary conditions~\cite{li2019} are implemented on the inner and outer boundaries. \section{Results and discussions} \label{sec:results} Previous work~\cite{li2019} based on the phonon gray model shows that the radial heat transport is dominated by two parameters: the radius ratio of the two concentric boundaries (${R_i}/{R_o}$) and the ratio of the phonon mean free path to the characteristic length. Hence, the thermal rectification in the ballistic-diffusive regime is investigated for different characteristic lengths $L$, temperature ranges ($T_0,\Delta$) and ratios $R_i/R_o$.
First, the thermal rectification in the ballistic and diffusive limits is derived theoretically. The radial local thermal conductivity $k$ is introduced and defined as \begin{align} k &= \frac{-Q}{2\pi r \frac{dT}{dr} }, \label{eq:conductivity} \\ \frac{dT}{Q} &= -\frac{1}{2\pi k} d(\ln{r}) , \label{eq:integral} \end{align} where $k=k(R_i/R_o,L,r,T,T_0, \Delta)$ depends on the geometry $(R_i/R_o,~L)$ and temperature range $(T_0,~\Delta)$ of the thermal system as well as on the spatial position $r$ and temperature $T(r)$. When the characteristic length is much larger than the phonon mean free path, the heat transfer is in the diffusive regime and the radial thermal conductivity depends only on the local temperature for a given system geometry, i.e., $k=k(T)$~\cite{rudramoorthy2010heat}. Then, integrating Eq.~\eqref{eq:integral} from $r=R_i$ to $r=R_o$ for $\Delta>0$ and $\Delta<0$, respectively, we can derive~\cite{go2010} (for $\Delta>0$, all macroscopic physical quantities carry a `$+$' subscript; for $\Delta<0$, a `$-$' subscript) \begin{align} \frac{1}{Q_+} \int_{T_i}^{T_o} k(T) dT &= \int_{R_i}^{R_o} -\frac{1}{2\pi } d(\ln{r}), ~~\Delta>0, \\ \frac{1}{Q_-} \int_{T_i}^{T_o} k(T) dT &= \int_{R_i}^{R_o} -\frac{1}{2\pi } d(\ln{r}), ~~\Delta<0, \\ \Longrightarrow |Q_+| &= |Q_-|. \label{eq:integraldiffusive} \end{align} When the characteristic length is much smaller than the phonon mean free path, the heat transfer is in the ballistic regime, in which phonon-phonon intrinsic scattering is rare. Any phonon emitted from one boundary travels directly to the other boundary without change of energy or momentum. The total heat flux can then be calculated by~\cite{li2019,olfe1968} \begin{align} Q_{+} &= F(T_i)-F(T_o) ,~\Delta>0, \\ Q_{-} &= F(T_i)-F(T_o) ,~\Delta<0, \\ \Longrightarrow |Q_+| &= |Q_-|, \label{eq:integralballistic} \end{align} where \begin{align} F(T)=0.5 \pi R_i \sum_{p}\int \hbar \omega D|\bm{v} | f_{\text{BE}}(T) d{\omega} .
\end{align} In addition, it can be observed that the total heat flux in the ballistic limit is independent of the phonon mean free path. In short, there is no thermal rectification in the diffusive and ballistic limits. In the ballistic-diffusive regime, the thermal rectification is more complicated and there is no analytical solution. Figure~\ref{rectification2} shows the thermal rectification ratio ($\text{REC}$) at different $|\Delta|$ ($0.001-1.0$) for different characteristic lengths $L$, where $R_i/R_o=0.2$ and the thermal rectification ratio is defined as \begin{align} \text{REC}= \frac{ \left|Q_+ \right| -\left| Q_- \right|}{ \left|Q_-\right|}. \end{align} It can be observed that as $|\Delta|$ increases from $0.001$ to $1.0$, the thermal rectification ratio $\text{REC}$ increases regardless of the characteristic length. In addition, the heat flux prefers to flow from the inner to the outer boundary, which is opposite to previous results in graphene predicted by molecular dynamics~\cite{yousefi2019}. Unlike previous molecular dynamics simulations, in which the phonon density of states along the radial direction is nonuniform at the nanoscale, at the BTE scale the asymmetric atomic details and phonon wave nature are not accounted for, and the phonon density of states is constant for a given phonon frequency and polarization. Furthermore, it is interesting to find that for a given temperature difference $|\Delta|$, the thermal rectification ratio with $L=400$nm is larger than that with $L=40$nm or $L=4\mu$m, as shown in~\cref{rectification2}. Given that there is no thermal rectification in the ballistic and diffusive limits, we can conclude that for a given $|\Delta|$, as the characteristic length increases from $40$nm to $4\mu$m, there is at least one extremum of the thermal rectification ratio.
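The diffusive-limit result $|Q_+| = |Q_-|$ of Eq.~\eqref{eq:integraldiffusive} holds for any conductivity that depends on temperature alone, because reversing the boundary temperatures only flips the sign of $\int k(T)\,dT$. A quick numerical check, using a hypothetical $k(T) \propto 1/T$ law (not the silicon data of the paper) and a trapezoidal quadrature:

```python
import math

def q_diffusive(k, T_inner, T_outer, R_i, R_o, n=10_000):
    """Total radial heat flow in the diffusive limit,
    Q = 2*pi * int_{T_o}^{T_i} k(T) dT / ln(R_o/R_i), via the trapezoidal rule."""
    dT = (T_inner - T_outer) / n
    integral = sum(0.5 * (k(T_outer + i * dT) + k(T_outer + (i + 1) * dT)) * dT
                   for i in range(n))
    return 2.0 * math.pi * integral / math.log(R_o / R_i)

# any temperature-dependent conductivity works; k ~ 1/T mimics Umklapp-limited transport
k = lambda T: 150.0 * (300.0 / T)

Q_plus = q_diffusive(k, 600.0, 150.0, 0.2, 1.0)   # hot inner boundary (Delta > 0)
Q_minus = q_diffusive(k, 150.0, 600.0, 0.2, 1.0)  # hot outer boundary (Delta < 0)
# |Q_plus| equals |Q_minus|: no rectification when k depends on T alone
```

Swapping the two boundary temperatures reverses the sign of the heat flow but leaves its magnitude unchanged, so REC vanishes in this limit.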
In order to predict the extremum more accurately, numerical simulations are implemented for characteristic lengths $L$ from tens of nanometers to tens of microns, where $|\Delta|=1.0$, $R_i/R_o=0.2$, as shown in~\cref{systemsizeDT} and Table~\ref{rectificationdata}. It can be observed that for $T_0=300\text{K}$, as the characteristic length increases, the thermal rectification ratio increases gradually to a maximum value and then decreases. The maximum is reached when the characteristic length is $140$nm$-160$nm. \begin{table} \caption{The specific numerical data of the thermal rectification ratio ($\text{REC}$) with different $T_0$, $L$ and $R_i/R_o$, where $|\Delta |=1.0 $, as shown in~\cref{systemsizeDT} and~\cref{GDT3}.}\vskip 0.2cm \centering \begin{tabular}{|*{6}{c|}} \hline \multicolumn{4}{|c|}{ $T_0=300 \text{K}$ } & \multicolumn{2}{c|}{ $T_0=200 \text{K}$ } \\ \hline \multicolumn{2}{|c|}{ $R_i/R_o=0.2$ } & \multicolumn{2}{c|}{ $R_i/R_o=0.6$ } & \multicolumn{2}{c|}{ $R_i/R_o=0.2$ } \\ \hline $L$(nm) & $\text{REC}$ & $L$(nm) & $\text{REC}$ & $L$(nm) & $\text{REC}$ \\ \hline 40 & 0.208 & 20 & 0.123 & 80 & 0.123 \\ \hline 120 & 0.303 & 40 & 0.155 & 120 & 0.159 \\ \hline 140 & 0.307 & 60 & 0.162 & 200 & 0.202 \\ \hline 160 & 0.308 & 80 & 0.159 & 400 & 0.242 \\ \hline 200 & 0.304 & 100 & 0.155 & 600 & 0.250 \\ \hline 400 & 0.263 & 200 & 0.127 & 800 & 0.248 \\ \hline 800 & 0.209 & 400 & 0.09 & 1200 & 0.240 \\ \hline 4000 & 0.123 & 2000 & 0.04 & 2000 & 0.223 \\ \hline \end{tabular} \label{rectificationdata} \end{table} \begin{figure} \centering \subfloat[]{\label{rectification2}\includegraphics[width=0.4\textwidth]{rec_si-eps-converted-to.pdf}}~~ \subfloat[]{\label{systemsizeDT}\includegraphics[width=0.4\textwidth]{Tsidelta-eps-converted-to.pdf}}~~\\ \caption{(a) The distributions of the thermal rectification ratio $\text{REC}$ at different $|\Delta|$ with different characteristic length $L=R_o-R_i$, where $T_0=300\text{K}$, $R_i/R_o=0.2$.
(b) The distributions of the thermal rectification ratio with different temperature $T_0$ ($300\text{K}$, $200\text{K}$) and characteristic length $L=R_o-R_i$, where $|\Delta|=1.0$, $R_i/R_o=0.2$.} \label{systemsizeDT22} \end{figure} \begin{figure} \centering \subfloat[]{\label{idea1}\includegraphics[width=0.6\textwidth]{ideakk.pdf}}~~\\ \subfloat[]{\label{idea2}\includegraphics[width=0.6\textwidth]{begin.pdf}}~~\\ \caption{(a) Schematic of the distributions of the effective thermal conductivity $\overline{k}$ with different characteristic length $L$ for a given temperature $T$ in silicon~\cite{li2019,ChenG05Oxford,terris2009modeling}. (b) A schematic of how to realize the thermal rectification in the ballistic-diffusive regime. If we can change the phonon mean free path as the direction of the temperature gradient changes, the profiles of the length-dependent heat flux $|Q|$ or effective thermal conductivity $\overline{k}$ may be stretched or shrunk along the horizontal direction. $+$ and $-$ represent the heat flows from the inner to the outer or in the opposite direction, respectively. } \label{rectification} \end{figure} We suppose that one of the reasons for these phenomena is the size effect, namely, that the effective thermal conductivity $\overline{k}$ in silicon decreases as the system size decreases~\cite{cahill2014nanoscale,cahill2003nanoscale}. Besides, the effective thermal conductivity profiles change most rapidly when the system size is comparable to the phonon mean free path~\cite{li2019,ChenG05Oxford}, as shown in~\cref{idea1}. On the other hand, for a given system size, as the temperature increases, momentum-destroying phonon-phonon intrinsic scattering occurs more frequently, so that the effective thermal conductivity decreases. Hence, in the ballistic-diffusive regime, both the phonon boundary scattering and the phonon-phonon intrinsic scattering play an important role in the heat transfer.
Furthermore, for the concentric ring geometry, previous studies~\cite{yang_nanoscale_2015,li2019} have proven that even for a given system size and temperature ($T_0,~\Delta \rightarrow 0$), the local radial thermal conductivity depends on the radius in the ballistic-diffusive regime, i.e., $$k=k(r).$$ This is totally different from the cross-plane heat transfer, in which the local thermal conductivity is independent of position~\cite{MajumdarA93Film}. Then, when $|\Delta|$ is large, the local radial thermal conductivity may be a nonseparable function of both the radius $r$ and the temperature $T(r)$, i.e., $$k=k(T,r)=k(T(r),r).$$ If so, it is possible to realize thermal rectification based on the theory in Ref.~\cite{go2010}. So how can the thermal rectification be realized? As shown in~\cref{idea2}, if we can change the phonon mean free path of the thermal system in the ballistic-diffusive regime, the length-dependent total heat flux $Q$ or effective thermal conductivity $\overline{k}$ profiles may be stretched or shrunk along the horizontal direction. For a given characteristic length, a small change strongly affects the heat flux, especially in the ballistic-diffusive regime, which results in $|Q_{+}| \neq |Q_{-}|$. Hence, one of the key points to realizing the thermal rectification is to make the average phonon mean free path of the concentric silicon ring differ as the direction of the temperature gradient changes, i.e., $\overline{\lambda}_{+} \neq \overline{\lambda}_{-}$, which is also the main starting point of this work.
\begin{figure} \centering \subfloat[]{\label{Temperatureradial}\includegraphics[width=0.46\textwidth]{DT1_T300-eps-converted-to.pdf}}~~ \subfloat[]{\label{GDTTL}\includegraphics[width=0.41\textwidth]{TTL-eps-converted-to.pdf}}~~\\ \caption{$+$ and $-$ represent the heat flows from the inner to the outer or in the opposite direction, respectively. (a) The radial distributions of the temperature with different characteristic length, where $|\Delta|=1.0$, $T_0=300$K, $r^*=\left( \ln(r)-\ln(R_i) \right)/ \left( \ln(R_o)-\ln(R_i) \right)$. (b) The distributions of average temperature $\overline{T}$ with different characteristic length $L=R_o-R_i$ and temperature $T_0$ ($300\text{K}$, $200\text{K}$), where $|\Delta|=1.0$, $ \overline{T}= \frac{ \int_0^1 {T(r^*)} d(r^*) }{ \int_0^1 d(r^*) } $, $R_i/R_o=0.2$. } \label{idea} \end{figure} In order to test our conjecture, the temperature distributions along the radial direction for different characteristic lengths are predicted, as shown in~\cref{Temperatureradial}, where $T_0=300$K, $|\Delta|=1.0$, $r^*=\left( \ln(r/R_i) \right)/ \left( \ln(R_o/R_i) \right)$. It can be observed that with the increase of the characteristic length, the average temperature $\overline{T}$ increases as $\Delta>0$ but decreases as $\Delta <0$, where $$ \overline{T}= \frac{ \int_0^1 {T(r^*)} d(r^*) }{ \int_0^1 d(r^*) }. $$ Previous studies have demonstrated that the average phonon mean free path $\overline{\lambda}(\overline{T})$ in silicon decreases as the temperature increases~\cite{Glassbrenner64conductivity,terris2009modeling,zhang_discrete_2019}, where \begin{align} \overline{\lambda}(\overline{T})&=\frac{ \sum_{p} \int { \hbar \omega D \frac{\partial f_{\text{BE}} }{\partial{T}} } | \bm{v} | \tau d\omega } { \sum_{p} \int { \hbar \omega D \frac{\partial f_{\text{BE}} }{\partial{T}} } d\omega }. \label{eq:lambdaeq} \end{align} In other words, the average phonon mean free path with $\Delta<0$ is smaller than that with $\Delta >0$.
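Eq.~\eqref{eq:lambdaeq} is a heat-capacity-weighted average of $|\bm{v}|\tau$ over all modes, since $\hbar\omega D\,\partial f_{\text{BE}}/\partial T$ is the mode heat capacity. A sketch of the quadrature follows; the mode samples fed to it are hypothetical placeholders, whereas a real evaluation would use the silicon dispersion and the $\tau(T)$ formulas cited above.

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J/K

def dfbe_dT(omega, T):
    """d f_BE / dT: the temperature derivative of the Bose-Einstein occupation."""
    x = HBAR * omega / (KB * T)
    return (x / T) * math.exp(x) / (math.exp(x) - 1.0) ** 2

def mean_free_path(T, modes):
    """Eq. (lambdaeq): heat-capacity-weighted average of |v|*tau over all modes.

    `modes` is a list of (omega, D, v, tau) quadrature samples covering all
    branches, with the d_omega weight folded into D.
    """
    num = den = 0.0
    for omega, D, v, tau in modes:
        c = HBAR * omega * D * dfbe_dT(omega, T)  # mode heat capacity weight
        num += c * abs(v) * tau
        den += c
    return num / den
```

Because the weights involve $\partial f_{\text{BE}}/\partial T$, raising the temperature shifts weight toward high-frequency modes with short $\tau$, which is why $\overline{\lambda}$ decreases with temperature.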
Usually, for a given characteristic length, the larger the average phonon mean free path in silicon, the larger the thermal conductivity, which explains why the heat flux prefers to flow from the inner to the outer side. The average temperature and phonon mean free path for $\Delta>0$ and $\Delta <0$ are also calculated based on Eq.~\eqref{eq:lambdaeq}. As shown in~\cref{GDTTL} and Table~\ref{TaverageTable}, with the increase of the characteristic length, the average temperature increases for $\Delta >0$ while it decreases for $\Delta <0$. When the characteristic length is very large ($L=40\mu$m), the average temperature for $\Delta<0$ is close to that for $\Delta>0$, approaching the heat transfer in the diffusive regime. For $T_0=300\text{K}$ and $40\text{nm} \leq L \leq 40 \mu$m, we calculate that if $\Delta> 0$, $ 220 \text{K} \leq \overline{T}_{+} \leq 265 \text{K}$ and $ 232\text{nm} \leq \overline{\lambda}_{+} \leq 300 $nm, while if $\Delta< 0$, $265 \text{K} \leq \overline{T}_{-} \leq 375 \text{K}$ and $152 \text{nm} \leq \overline{\lambda}_{-} \leq 232 \text{nm}$. As $L$ increases, $\overline{T}_{-}- \overline{T}_{+}$ decreases gradually and goes to zero, which is consistent with the heat transfer in the diffusive regime. Figure~\ref{GDTQQ} shows the distributions of the average heat flux $\overline{q}$ for different characteristic lengths, where $\overline{q}= \left| \frac{ \int_{R_i}^{R_o} Q(r)dr }{ \int_{R_i}^{R_o} 2\pi r dr } \right|$, $R_i/R_o=0.2$. It can be observed that for $T_0=300\text{K},~ \Delta>0$, as the characteristic length increases across the crossover from the ballistic to the diffusive regime, the heat flux decreases, and the main drop happens in the ballistic-diffusive regime. Similar phenomena are observed for $T_0=300\text{K},~ \Delta<0$. What differs is that for $\Delta <0$, the average temperature is higher, so that $\overline{\lambda}_{-}<\overline{\lambda}_{+}$.
It can be seen that, because $\overline{\lambda}_{-}<\overline{\lambda}_{+}$, as the characteristic length $L$ increases, the heat flux $\overline{q}_{-}$ starts its dramatic decrease before $\overline{q}_{+}$ does. Besides, $\overline{q}_{-}$ converges before $\overline{q}_{+}$ once the characteristic length is large enough. In other words, the numerical profiles are stretched along the horizontal direction, so that there is thermal rectification in the ballistic-diffusive regime, consistent with our original conjecture shown in~\cref{idea2}. \begin{table} \caption{The specific numerical data of the average temperature along the radial direction ($\overline{T}$) with different $T_0$, $L$ and $R_i/R_o$, where $|\Delta |=1.0 $, $ \overline{T}= \frac{ \int_0^1 {T(r^*)} d(r^*) }{ \int_0^1 d(r^*) } $, as shown in~\cref{GDTTL} and~\cref{RTTL}.}\vskip 0.2cm \centering \begin{tabular}{|*{9}{c|}} \hline \multicolumn{6}{|c|}{ $T_0=300 \text{K}$ } & \multicolumn{3}{c|}{ $T_0=200 \text{K}$ } \\ \hline \multicolumn{3}{|c|}{ $R_i/R_o=0.2$ } & \multicolumn{3}{c|}{ $R_i/R_o=0.6$ } & \multicolumn{3}{c|}{ $R_i/R_o=0.2$ } \\ \hline $L$(nm) & $\overline{T}_{-}$(K) & $\overline{T}_{+}$(K) & $L$(nm) & $\overline{T}_{-}$(K) & $\overline{T}_{+}$(K) & $L$(nm) & $\overline{T}_{-}$(K) & $\overline{T}_{+}$(K) \\ \hline 40 & 374 & 220 & 60 & 331 & 265 & 40 & 260 & 149 \\ \hline 120 & 354 & 230 & 100 & 323 & 267 & 120 & 251 & 155 \\ \hline 200 & 344 & 235 & 200 & 311 & 267 & 200 & 245 & 159 \\ \hline 400 & 331 & 240 & 400 & 300 & 266 & 400 & 236 & 163 \\ \hline 800 & 318 & 244 & 1000 & 287 & 265 & 800 & 227 & 167 \\ \hline 4000 & 287 & 255 & 2000 & 280 & 265 & 4000 & 203 & 172 \\ \hline 40000 & 265 & 261 & \multicolumn{3}{c|}{ } & 20000 & 185 & 174 \\ \hline \end{tabular} \label{TaverageTable} \end{table} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Q1Q2-eps-converted-to.pdf}~~\\ \caption{The distributions of heat flux $\overline{q}$ along the radial direction with
different characteristic length $L=R_o-R_i$ and temperature $T_0$ ($300\text{K}$, $200\text{K}$), where $|\Delta|=1.0$, $\overline{q}= \left| \frac{ \int_{R_i}^{R_o} Q(r)dr }{ \int_{R_i}^{R_o} 2\pi r dr } \right|$, $R_i/R_o=0.2$. $+$ and $-$ represent the heat flows from the inner to the outer or in the opposite direction, respectively.} \label{GDTQQ} \end{figure} The thermal rectification with different temperature $T_0$ is also investigated, where $R_i/R_o=0.2$ and $|\Delta| =1.0$. As the temperature decreases from $T_0=300\text{K}$ to $T_0=200\text{K}$, as shown in~\cref{systemsizeDT}, it can be observed that the length-dependent thermal rectification ratio profiles shift toward larger characteristic lengths. There is still a maximum of the thermal rectification ratio, but its associated characteristic length increases. Besides, as the temperature decreases, the maximum thermal rectification ratio decreases. Similar to the analysis above, according to Table~\ref{TaverageTable} and Eq.~\eqref{eq:lambdaeq}, for $T_0=200\text{K}$ and $40\text{nm} \leq L \leq 20 \mu$m, if $\Delta> 0$ we predict that $ 149 \text{K} \leq \overline{T}_{+} \leq 175 \text{K}$ and $ 420 \text{nm} \leq \overline{\lambda}_{+} \leq 545 $nm, while if $\Delta< 0$, $185 \text{K} \leq \overline{T}_{-} \leq 260 \text{K}$ and $235 \text{nm} \leq \overline{\lambda}_{-} \leq 385 \text{nm}$. Since $\overline{\lambda}_{-}/L \approx 1$ and $\overline{\lambda}_{+}/L \approx 1$, the heat transfer is in the ballistic-diffusive regime. The difference in phonon mean free path finally leads to different total heat fluxes, as shown in~\cref{GDTQQ}.
\begin{figure} \centering \subfloat[]{\label{GDT3}\includegraphics[width=0.4\textwidth]{Rsidelta-eps-converted-to.pdf}}~~ \subfloat[]{\label{RTTL}\includegraphics[width=0.4\textwidth]{RTTL-eps-converted-to.pdf}}~~\\ \caption{The distributions of the thermal rectification ratio $\text{REC}$ (a) and the average temperature in the domain $\overline{T}$ (b) with different characteristic length $L=R_o-R_i$ and $R_i/R_o$ ($0.2,~0.4,~0.6$), where $T_0=300\text{K},~|\Delta|=1.0$, $\overline{T}= \frac{ \int T(r^*)d(r^*) }{ \int d(r^*) } $. $+$ and $-$ represent the heat flows from the inner to the outer or in the opposite direction, respectively. } \label{RIOGDT} \end{figure} Besides the temperature range ($T_0,~\Delta$), the thermal rectification may be related to the geometry of the thermal system, since $k=k(R_i/R_o,L,r,T,T_0, \Delta)$. For a given temperature range, i.e., $T_0=300\text{K},~|\Delta|=1.0$, we compare the thermal transport phenomena with different radius ratios of the two concentric boundaries: $R_i/R_o=0.2,~0.4,~0.6$. Numerical simulations are implemented with different $L$ and the results are shown in~\cref{GDT3}. It can be observed that, for a given characteristic length, as $R_i/R_o$ increases, the thermal rectification ratio decreases. The maximum thermal rectification ratio decreases from $31\%$ to $16\%$ as $R_i/R_o$ increases from $0.2$ to $0.6$. The distributions of the average temperature in the domain $\overline{T}$ are also shown in~\cref{RTTL} and Table~\ref{TaverageTable}. It can be observed that for a given characteristic length $L$, as $R_i/R_o$ increases, $\overline{T}_{+}$ increases while $\overline{T}_{-}$ decreases. In other words, $\overline{\lambda}_{+}$ decreases while $\overline{\lambda}_{-}$ increases, so that $\overline{\lambda}_{+}-\overline{\lambda}_{-}$ decreases.
Actually, for a given characteristic length, as $R_i/R_o$ increases, the geometric asymmetry along the radial direction decreases, so that the difference between the thermal resistance near the inner boundary and that near the outer boundary caused by the phonon boundary scattering decreases. As $R_i/R_o \rightarrow 1.0$, $\overline{\lambda}_{-} \rightarrow \overline{\lambda}_{+}$. The heat transfer in the concentric ring along the radial direction then approaches the cross-plane heat transfer~\cite{MajumdarA93Film}, in which there is no thermal rectification. \section{Conclusion} \label{sec:conclusion} In this study, the radial thermal rectification in a concentric silicon ring from the ballistic to the diffusive regime is investigated based on the phonon Boltzmann transport equation. Analytical solutions prove that there is no thermal rectification in the ballistic and diffusive limits. In the ballistic-diffusive regime, the heat flux prefers to flow from the inner boundary to the outer boundary ($\Delta>0$). Furthermore, as the characteristic length increases from tens of nanometers to tens of microns, the thermal rectification ratio first increases and then decreases gradually towards zero. This is because for $\Delta>0$ the average temperature is lower, which leads to a larger phonon mean free path $\overline{\lambda}_{+}$ compared to that for $\Delta<0$ ($\overline{\lambda}_{-}$). The difference in the average phonon mean free path leads to a stretching or contraction of the length-dependent heat flux profiles, especially in the ballistic-diffusive regime. The difference in heat flux finally results in thermal rectification. In addition, the effects of the temperature and of the radius ratio of the two concentric boundaries are investigated. As the temperature decreases, the maximum thermal rectification ratio decreases.
As the radius ratio of the inner and outer boundaries increases, the thermal rectification ratio decreases for a given characteristic length, because the difference between $\overline{\lambda}_{+}$ and $\overline{\lambda}_{-}$ decreases. The present study offers an idea for realizing thermal rectification in homogeneous materials in the ballistic-diffusive regime by stretching or contracting the length-dependent thermal conductivity or heat flux profiles. \section*{Conflict of interest} There is no conflict of interest. \section*{Acknowledgments} This work was supported by the National Key Research and Development Plan (No. 2016YFB0600805) and the National Natural Science Foundation of China (Grants No. 11602091). \section*{References} \bibliographystyle{IEEEtr}
\section{Introduction} The formation, fragmentation and dynamical evolution of molecular clouds is still a matter of debate, with the astrophysical community divided among those favouring short evolutionary time scales set by turbulence \citep[e.g.][]{Hartmann2001, Hartmann2012} and those who take into account magnetic fields and their retarding effects in the dynamics \citep[e.g.][]{WM1997, Mouschovias2006}. The only way to settle the debate is to compare results from theory and simulations inclusive of magnetic fields and non-ideal magnetohydrodynamic effects with detailed observations, preferably toward relatively simple and quiescent regions with no significant stellar feedback, to reduce the number of unknowns in the models. Observers can probe internal motions and kinematics thanks to high spectral resolution observations of molecular tracers \citep[e.g.][]{Goodman1993, Caselli2002a, Pineda2010Coherence, Hacar2013}. Such studies have shown that cloud cores in nearby low-mass star forming regions are thermally supported, while turbulent non-thermal motions start to dominate in the sharp transition region between the core and the surrounding parent molecular cloud. Distant high-mass star forming regions, away from active sites of star formation, also show moderately supersonic motions when viewed with high spectral and angular resolution instruments \citep[e.g.][]{Henshaw2014}. Thus, spectroscopic observations are unique tools to study the dynamical evolution of molecular clouds, and simulations need to reproduce these basic observational results. The choice of the right tracer to study the various parts of a molecular cloud is set by our understanding of astrochemical processes, which also play a crucial role in the evolution of the cloud. In fact, molecules such as CO and its isotopologues act as efficient cooling agents, allowing a region to attain the temperatures required for star formation \citep[e.g.][]{Goldsmith2001}.
The ionisation fraction, set by an interplay of cosmic-ray ionisation and ion-molecule chemical reactions \citep[e.g.][]{Guelin1977, Caselli1998}, also has a profound effect on the evolution of a magnetised molecular cloud \citep[e.g.][]{Shu1987}: a high ionisation fraction within a magnetised region prevents the collapse of the cloud due to frequent collisions between neutrals and ions; conversely, at low ionisation fractions, neutrals can slip past the ions, allowing collapse to occur. As shown by \citet{BB2012}, the ionisation fraction within a cloud has a great effect on how the cloud will fragment. High ionisation not only increases the timescale for collapse, but also the lengthscale for fragmentation, while low ionisation allows smaller structures to form. Application of a step-like ionisation profile based on the results of photochemical studies by \citet{Ruffle1998} revealed a two-stage fragmentation process which can form subparsec cores within a $\sim$pc size clump. Conversely, applying a cosmic-ray-only ionisation profile results in the formation of only subparsec cores \citep{BB2012}. That being said, few simulations take into account the specific chemistry of a molecular cloud when examining its evolution and the formation of stars. \citet[][hereafter Paper I]{BB2014} presented the results of non-ideal magnetohydrodynamic (MHD) simulations of the two-stage fragmentation model. Specifically, \citetalias{BB2014} explored the effects of a step-like ionisation profile and of microturbulence via ongoing density perturbations on core collapse, in an effort to determine the necessary parameters for the two-stage fragmentation process to occur. To that end, they only presented the density and mass-to-flux ratio ($\mu$) results and the physical parameters of the clumps and cores formed.
Although the application of these ionisation profiles is by no means a full treatment of the complex chemistry within a molecular cloud, use of such a profile represents a first step toward including the effect of the chemistry on the evolution of a molecular cloud. In this paper, we expand the analysis of these simulations to explore the effects of the ionisation profile on the velocity structure and subsequent core formation within molecular clouds. As in \citetalias{BB2014}, these simulations assume microturbulent perturbations. Here, we focus on the effects of the ionisation profile on the velocity structure and molecular line profiles. A following paper will expand this analysis to fully turbulent models. The rest of the paper is organized as follows. Section 2 describes the numerical code and models, focusing on the details pertinent to the analysis goals of this paper. Section 3 looks at the velocity structure within the clumps/cores of each model. Section 4 presents analysis of synthetic spectra and the effects of the ionisation profiles on the velocity dispersions and centroid velocities. Finally, Sections 5 and 6 discuss and summarise the results and trends revealed in the previous sections. \section{Simulations} \subsection{Numerical Code} We explore the kinematics and velocity structures within clump-core complexes in partially ionised, isothermal, magnetic interstellar molecular clouds. The simulations were performed using the non-ideal MHD IDL code developed by \citet{BC2004}, including the additional ionisation profiles described in \citetalias{BB2014}. This code assumes planar clouds with infinite extent in the $x$- and $y$-directions and a local vertical half thickness $Z$. A full description of the assumptions, nonaxisymmetric equations and formulations can be found in \citet{BC2004, CB2006, Basu2009a, Basu2009b}; however, for convenience, we highlight those essential for the analysis within this paper.
The model assumes a magnetic field that threads the cloud perpendicular to the $xy$ plane and includes the effects of ambipolar diffusion. The timescale for collisions between neutral particles and ions is \begin{equation} \tau_{ni} = 1.4 \left(\frac{m_i +m_{H_2}}{m_i} \right) \frac{1}{n_i\langle\sigma w\rangle_{iH_2}}, \label{tni} \end{equation} where $m_{i}$ is the ion mass, $m_{H_2}$ is the mass of molecular hydrogen, $n_{i}$ is the number density of ions, and $\langle\sigma w\rangle_{iH_2}$ is the neutral-ion collision rate. Assuming collisions between H$_{2}$ and HCO$^+$, the neutral-ion collision rate is $1.69\times 10^{-9}$ cm$^{3}$ s$^{-1}$ \citep{MM1973}. The factor of 1.4 in Equation~\ref{tni} accounts for neglecting the inertia of helium in calculating the slowing-down time of the neutrals by collisions with ions \citep{CB2006, MC1999}. The threshold for collapse within a molecular cloud is regulated by the normalized mass-to-flux ratio of the background reference state, \begin{equation} \mu_{0} \equiv 2\pi G^{1/2}\frac{\sigma_{n,0}}{B_{\rm ref}}, \end{equation} where $(2\pi G^{1/2})^{-1}$ is the critical mass-to-flux ratio for gravitational collapse in the adopted model \citep{CB2006}, $\sigma_{n,0}$ is the initial mass column density and $B_{\rm ref}$ is the constant, uniform magnetic field strength of the background reference state far away from the sheet. In the limit where $\tau_{ni} \rightarrow 0$, frequent collisions between the neutral particles and ions couple the neutrals to the magnetic field; that is, the medium is flux frozen. Under these conditions, subcritical regions ($\mu_{0} < 1$) are supported by the magnetic field and only supercritical regions ($\mu_{0} > 1$) may collapse within a finite time frame. Non-zero values of $\tau_{ni}$ are inversely dependent on the ion number density and therefore on the degree of ionisation for a fixed neutral density.
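Equation~\ref{tni} is straightforward to evaluate; a minimal sketch, assuming HCO$^+$ ions ($m_i = 29$ amu) and the quoted H$_2$-HCO$^+$ collision rate (the ion density passed in is an illustrative input, not a value from the simulations):

```python
def tau_ni(n_i, m_i_amu=29.0):
    """Neutral-ion collision time in seconds, Eq. (tni).

    n_i: ion number density in cm^-3; m_i_amu: ion mass in amu (29 for HCO+).
    The factor 1.4 accounts for the neglected inertia of helium.
    """
    m_h2 = 2.0         # amu, molecular hydrogen
    sigma_w = 1.69e-9  # cm^3 s^-1, <sigma w> for H2-HCO+ (McDaniel & Mason 1973)
    return 1.4 * (m_i_amu + m_h2) / m_i_amu / (n_i * sigma_w)
```

As the text notes, $\tau_{ni}$ scales inversely with the ion density: halving the ionisation fraction at fixed neutral density doubles the collision time and weakens the neutral-field coupling.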
Finally, the model is characterized by several dimensionless free parameters including a dimensionless form of the initial neutral-ion collision time ($\tau_{ni,0}/t_{0}~\equiv~2\pi G\sigma_{n,0}\tau_{ni,0}/c_{s}$) and a dimensionless external pressure ($\tilde{P}_{\rm ext} \equiv 2 P_{\rm ext}/\pi G \sigma^{2}_{n,0}$). Here, $c_{s}~=~(k_{B} T/m_{n})^{1/2}$ is the isothermal sound speed; $k_{B}$ is the Boltzmann constant, $T$ is the temperature in Kelvin, and $m_{n}$ is the mean mass of a neutral particle ($m_{n}~=~2.33$ amu). We normalize column densities by $\sigma_{n,0}$, length scales by $L_{0}~=~c_{s}^{2}/2\pi G \sigma_{n,0}$ and time scales by $t_{0}~=~c_{s}/2\pi G \sigma_{n,0}$. Based on these parameters, typical values of the units used and other derived quantities are \begin{eqnarray} \nonumber\sigma_{n,0} &=& \frac{3.63\times 10^{-3}}{(1+\tilde{P}_{\rm ext})^{1/2}}\left(\frac{n_{n,0}}{10^3 \rm ~cm^{-3}}\right)^{1/2}\left(\frac{T}{10 ~\rm K}\right)^{1/2} \rm g~cm^{-2},\\ &&\\ c_{s} &=& 0.188\left(\frac{T}{10 ~\rm K}\right)^{1/2} \rm km~s^{-1},\\ t_{0} &=& 3.98\times 10^5\left(\frac{10^3 \rm~ cm^{-3}}{n_{n,0}}\right)^{1/2}(1 + \tilde{P}_{\rm ext})^{1/2}~\rm yr,\label{time}\\ \nonumber L_{0} &=& 7.48\times 10^{-2} \left(\frac{T}{10 ~\rm K}\right)^{1/2}\times\\ &&\left(\frac{10^3 \rm ~cm^{-3}}{n_{n,0}}\right)^{1/2}(1 + \tilde{P}_{\rm ext})^{1/2}~\rm pc,\label{length} \end{eqnarray} where $n_{n,0}$ is the initial neutral number density. For our analysis, we assume a dimensionless external pressure $\tilde{P}_{\rm ext} = 0.1$ ($P_{\rm ext}/k_{B} \approx 10^{3}$ cm$^{-3}$ K) and a temperature $T = 10$ K. By assuming this value for $\tilde{P}_{\rm ext}$, we are neglecting the effect of surface gravity waves, which act to reduce the fragmentation length and time scales \citep[see][]{CB2006,Basu2009b}.
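The unit conversions above (Eqs.~\ref{time} and \ref{length}, together with the $\sigma_{n,0}$ and $c_{s}$ expressions) are easy to package; a sketch with the fiducial parameters used here ($n_{n,0} = 10^3$ cm$^{-3}$, $T = 10$ K, $\tilde{P}_{\rm ext} = 0.1$) as defaults:

```python
import math

def code_units(n_n0=1e3, T=10.0, P_ext_tilde=0.1):
    """Physical values of the code units from the scaling relations above.

    n_n0: initial neutral number density (cm^-3); T: temperature (K).
    Returns (sigma_n0 [g cm^-2], c_s [km/s], t_0 [yr], L_0 [pc]).
    """
    f = math.sqrt(1.0 + P_ext_tilde)
    sigma_n0 = 3.63e-3 / f * math.sqrt(n_n0 / 1e3) * math.sqrt(T / 10.0)
    c_s = 0.188 * math.sqrt(T / 10.0)
    t_0 = 3.98e5 * math.sqrt(1e3 / n_n0) * f
    L_0 = 7.48e-2 * math.sqrt(T / 10.0) * math.sqrt(1e3 / n_n0) * f
    return sigma_n0, c_s, t_0, L_0
```

Note that $t_0$ and $L_0$ carry a factor $(1+\tilde{P}_{\rm ext})^{1/2}$ while $\sigma_{n,0}$ carries its inverse, so a higher external pressure shrinks the column density unit but stretches the length and time units.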
Our clouds are assumed to be evolving in isolation (i.e., not embedded within a cloud complex or adjacent to a hotter region) so that high external pressures would not be expected. \subsection{Model Parameters} \label{model} The analysis presented in this paper expands upon that presented in \citetalias{BB2014}. As such, the models presented here include four simulations previously presented in that paper and two additional models that were performed to round out the data set. As described in \citetalias{BB2014}, these simulations assume an initially diffuse cloud with an initial background column density which corresponds to a visual extinction $A_{V,0} = 1$ mag. Using the prescription of \citet{Pineda2010} \citep[see also][]{BB2012} and assuming a mean molecular weight of 2.33 amu, the resulting conversion between visual extinction and mass column density is \begin{equation} \sigma_{n} =3.638\times 10^{-3} (A_{V}/\rm mag)~\rm g~cm^{-2}. \label{av2sigma} \end{equation} All simulations begin with an initial linear column density perturbation, $\delta\sigma_{n}/\sigma_{n,0}$, which is a normally distributed random variable with mean equal to zero and standard deviation $A$. The random value of $\delta\sigma_{n}/\sigma_{n,0}$ for each pixel is then added to the background column density in that pixel. To ensure that these perturbations do not artificially bias the simulation toward a particular fragmentation scale, all wavelengths are sampled and assigned to the region, i.e., they are white noise perturbations. For some simulations, \begin{figure*} \centering \includegraphics[width = 0.33\textwidth]{f1a.eps} \includegraphics[width = 0.33\textwidth]{f1b.eps} \includegraphics[width = 0.33\textwidth]{f1c.eps}\\ \includegraphics[width = 0.33\textwidth]{f1d.eps} \includegraphics[width = 0.33\textwidth]{f1e.eps} \includegraphics[width = 0.33\textwidth]{f1f.eps} \caption{Column density enhancement maps of clump/core regions.
Each panel is a zoom-in of the full $64\pi L_{0}~\times~64\pi L_{0}$ region that focuses on the region containing the most evolved clump/core in each model. Contours show the visual extinction value in 2 magnitude steps starting at $A_{V} = 2$. Panels are organized to depict increasing frequency of perturbations (initial perturbation only (left), perturbations every 10$t_{0}$ (middle), and perturbations every 5$t_{0}$ (right)), with the top row depicting models with the step-like ionisation profile and the bottom row depicting models with the cosmic-ray-only ionisation profile. Panels show each model at the last time of the respective simulations. Top row (from left to right): Model I ($t/t_{0} = 143.6$), Model II ($t/t_{0} = 80.7$), Model III ($t/t_{0} = 70.0$). Bottom row (from left to right): Model IV ($t/t_{0} = 67.3$), Model V ($t/t_{0} = 40.9$), Model VI ($t/t_{0} = 35.4$). Axes are in units of $L_{0}$ (1~$L_{0}$ = 0.075 pc or 2.54 pixels).} \label{sigmaoverlays} \end{figure*} subsequent perturbations are applied at specific intervals ($\Delta t_{sp}/t_{0}$). All simulations are performed on a 512 $\times$ 512 periodic box. The box size is 64$\pi L_{0}$, which is much larger than the preferred fragmentation scale ($4\pi L_{0}$) in the non-magnetic limit. For $T$ = 10~K and $\sigma_{n,0} = 3.638\times 10^{-3}$~g~cm$^{-2}$, $L_{0}~=~0.075$ pc. This translates to a box size of 15.16 pc or a pixel size of 0.0296 pc. For this analysis, all simulations assume perturbations with standard deviation about the mean of $A = 0.03$ and an initial mass-to-flux ratio $\mu_{0} = 1.1$. All other assumptions are the same as described in \citetalias{BB2014}. The initial parameters for the specific models can be found in Table~\ref{models}. The ionisation profiles quoted indicate the initial neutral-ion collision time.
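The initial white-noise perturbation setup described above can be sketched as follows; this is a minimal illustration assuming the $A_{V}$-to-$\sigma_{n}$ conversion of Equation~(\ref{av2sigma}), with the function name and fixed random seed being our own choices:

```python
import numpy as np

AV2SIGMA = 3.638e-3   # g cm^-2 per magnitude of visual extinction

def initial_column_density(n_pix=512, A_V0=1.0, A=0.03, seed=0):
    """Uniform background column density plus white-noise perturbations.

    Each pixel gets an independent normal deviate delta_sigma/sigma_0 with
    zero mean and standard deviation A, so the perturbation power spectrum
    is flat and no fragmentation scale is preferred.
    """
    rng = np.random.default_rng(seed)
    sigma_bg = AV2SIGMA * A_V0                        # background [g cm^-2]
    delta = rng.normal(0.0, A, size=(n_pix, n_pix))   # delta_sigma / sigma_0
    return sigma_bg * (1.0 + delta)
```

For the ongoing-perturbation models, a fresh realisation of `delta` would simply be added every $\Delta t_{sp}/t_{0}$.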
The SL profile (step-like) results in an initially almost flux frozen medium (i.e., $\tau_{ni,0}/t_{0} = 0.001$) while the CR profile results in a longer neutral-ion collision time ($\tau_{ni,0}/t_{0} = 0.2$). All simulations run until any pixel within the simulation equals or exceeds $\sigma_{n}/\sigma_{n,0} = 10$. \begin{deluxetable}{cccc} \tablecaption{Simulation Parameters} \tablewidth{0pt} \tablehead{ \colhead{} & \colhead{\citetalias{BB2014}} & \colhead{Ionisation} & \colhead{}\\ \colhead{Model} & \colhead{Model} & \colhead{Profile\tablenotemark{*}} & \colhead{$\Delta t_{sp}/t_{0}$\tablenotemark{\dag}} } \startdata I & A & SL & $\infty$ \\ II & C & SL & 10 \\ III & B & SL & 5 \\ IV & -- & CR & $\infty$ \\ V & G & CR & 10 \\ VI & -- & CR & 5 \enddata \tablenotetext{*}{Indicates the initial value of $\tau_{ni,0}/t_{0}$: $\tau_{ni,0}/t_{0}~=~0.001$ (SL) or $\tau_{ni,0}/t_{0}~=~0.2$ (CR)\\ \dag~~$\Delta t_{sp}/t_{0}$ is the time between subsequent perturbations in dimensionless units. } \label{models} \end{deluxetable} \subsection{Aims and Regions of Interest} \label{aims} Studies and observations \citep[][among others]{Kirk2009, Pineda2010Coherence, Walsh2004, Walsh2007} of star forming regions have found several properties regarding the kinematics of prestellar cores in relation to their surroundings. Specifically, cores are observed to have little internal turbulence \citep{BM1989, Jijina1999}, smaller velocity dispersions than the surrounding material \citep{BM1989, Goodman1998, Jijina1999, Pineda2010Coherence}, and small relative motions with respect to the surrounding material \citep{Walsh2004, Walsh2007}. \citet{Kirk2009} looked at the effect of turbulence and magnetic fields on the line widths of synthetic spectra created from thin disk simulations of molecular clouds. These simulations were performed using the same IDL MHD code as described above; however, they only consider the CR ionisation profile.
In this paper, we explore the effect of the assumed ionisation profile on the velocity field for simulations with ongoing column density perturbations. Specifically, we are interested in the regions of the simulation where clumps and/or cores have formed by the end of each run. From analysis of velocity maps within the plane, we aim to determine how the ionisation profile shapes the velocity field. In addition, we create synthetic spectra assuming uniform optically thin conditions and constant fractional abundances of species representative of the cloud envelope (CO) and core (N$_{2}$H$^{+}$) to determine whether the cores formed within the simulations conform to the three kinematic properties of observed cores. Figure~\ref{sigmaoverlays} shows the density enhancement maps for each model. Each panel shows a zoom-in of the full simulation, focusing on the region containing the main clump or core within that model. The contours show the visual extinction in steps of 2 mag starting at $A_{V} = 2$ mag. Each panel is taken at the respective endpoint of each simulation (i.e., when one pixel within the full simulation reaches or exceeds $\sigma_{n}/\sigma_{n,0} = 10$) as indicated by the times in the caption. Figure~\ref{xi} shows the contours of the ionisation fraction overlaid on column density enhancement maps for two representative models: Model II (top) for the step-like ionisation profile and Model V (bottom) for the CR-only ionisation profile. The contour levels are indicated in the figure caption. As shown in the figure, the ionisation structure for both models follows the column density structure, with the highest density regions exhibiting the lowest values of $\chi_{i}$. However, there are some noticeable differences that are direct consequences of the profile shape.
For Model V, we see that the ionisation contours are more or less evenly spaced, while for Model II, there is evidence of steep gradients in ionisation surrounding the various levels of structure within the clump, including the clump itself. As expected, the entire core within Model V is encompassed by a low ionisation contour ($\chi_{i} = 1.0\times 10^{-7}$). In contrast, for Model II, the lowest ionisation contours outline the core envelopes while the clumps and low density gas are outlined by significantly higher ionisation contours. This is direct evidence of two-stage fragmentation, where the larger fragmentation length scales occur at larger ionisation fractions and smaller length scales require smaller ionisation fractions \citepalias[see][]{BB2014}. \begin{figure} \centering \includegraphics[width = 0.48\textwidth]{f2a.eps}\\ \includegraphics[width = 0.48\textwidth]{f2b.eps} \caption{Ionisation contours overlaid on column density enhancement maps for Model II (top) and Model V (bottom). Model II Contour levels: ($1.0\times 10^{-8}$, $5.0\times 10^{-8}$, $1.0\times 10^{-7}$, $5.0\times 10^{-7}$, $1.0\times 10^{-6}$, $5.0\times 10^{-6}$, $1.0\times 10^{-5}$, $5.0\times 10^{-5}$, $1.0\times 10^{-4}$). Model V Contour levels: ($5.0\times 10^{-8}$, $7.5\times 10^{-8}$, $1.0\times 10^{-7}$) (inner to outer, respectively).} \label{xi} \end{figure} \section{Velocity Structure within Molecular Clouds} Observations of the velocity within a molecular cloud are restricted along the line of sight. Our simulations give us the opportunity to look at the velocity structure of the models within the plane of the sky. Figure~\ref{velocity} shows the velocity maps of the six models for the same regions depicted in Figure~\ref{sigmaoverlays}. The contours show the same column density enhancement levels as those depicted in Figure~\ref{sigmaoverlays} and are plotted to show the location of the clump/core structures in each model.
Note the differences in the velocity ranges as denoted by the color bar for each panel. As shown, each model exhibits regions of high and low velocity; however, we see that Models II and III have the largest velocity range while Models V and VI have the smallest. Looking at the models individually, we see that for Model I, the core regions exhibit the lowest velocity with two high velocity lobes on either side. For Models II and III, the addition of the ongoing perturbations changes the velocity structure dramatically. In these two cases, the low velocity region (hereafter referred to as the velocity valley) occurs in the center of the clump with high velocity streamers that exist on the outer edges. Specifically, the highest velocities tend to occur in the lower density gas. Finally, for the three models with the CR-only ionisation profile (Models IV - VI), the addition of perturbations still results in a chaotic velocity field, but to a lesser degree than in the step-like ionisation models. The larger degree of ambipolar diffusion throughout the simulations seems to result in a simpler velocity gradient with high velocities on one side of the core and low velocities on the opposite side. The distinct difference between the velocity structures formed under the different ionisation conditions is directly due to the assumed ionisation profile. Models that assume a step-like ionisation profile keep the low density regions at nearly flux frozen conditions, thus preventing collapse. Conversely, models that assume a CR-only ionisation profile allow for collapse to occur even at lower densities. Looking at Models II and V, for example, the difference in the velocity field between these two models is due to the fact that in Model II, the velocity magnitude increases in regions with steep gradients in the ionisation fraction.
This velocity enhancement is caused by flows of material from high to low ionisation fraction regions as the ability for neutrals to slip past the magnetic field lines increases. With the lower ionisation fraction in Model V (or higher density regions of Model II), the gradients in the ionisation fraction are not as steep and therefore do not induce high velocities. \begin{figure*} \centering \includegraphics[width = 0.33\textwidth]{f3a.eps} \includegraphics[width = 0.33\textwidth]{f3b.eps} \includegraphics[width = 0.33\textwidth]{f3c.eps}\\ \includegraphics[width = 0.33\textwidth]{f3d.eps} \includegraphics[width = 0.33\textwidth]{f3e.eps} \includegraphics[width = 0.33\textwidth]{f3f.eps} \caption{Velocity maps at final time for each model with column density enhancement contours. Panels and contours depict the same models and levels as Figure~\ref{sigmaoverlays}, respectively.} \label{velocity} \end{figure*} \begin{figure*} \centering \includegraphics[width = 0.33\textwidth]{f4a.eps} \includegraphics[width = 0.33\textwidth]{f4b.eps} \includegraphics[width = 0.33\textwidth]{f4c.eps}\\ \includegraphics[width = 0.33\textwidth]{f4d.eps} \includegraphics[width = 0.33\textwidth]{f4e.eps} \includegraphics[width = 0.33\textwidth]{f4f.eps} \caption{Absolute value momentum maps ($|p|$) with column density enhancement contours. Panels and contours depict the same models and levels as Figure~\ref{sigmaoverlays}.} \label{p_sigmaoverlays} \end{figure*} Figure~\ref{p_sigmaoverlays} shows the mass-weighted velocity (momentum) maps with overlaid visual extinction contours for each of the six models. Again note the different color scales for each panel. Comparing the top row to the bottom, we see that the models with the step-like ionisation profile exhibit a larger momentum range than models with the CR-only ionisation profile. As with the velocity maps, the addition of ongoing perturbations acts to distort the momentum fields.
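The momentum maps shown in Figure~\ref{p_sigmaoverlays} are simply mass-weighted velocities; a minimal sketch of how such maps can be computed from the column density and in-plane velocity fields (the array names are illustrative, not from the simulation code):

```python
import numpy as np

def momentum_maps(sigma_n, v_x, v_y):
    """Mass-weighted velocity ("momentum per unit area") maps.

    sigma_n  : column density map [g cm^-2]
    v_x, v_y : in-plane velocity component maps [km s^-1]
    Returns the signed components (p_x, p_y) and the magnitude |p|.
    """
    p_x = sigma_n * v_x
    p_y = sigma_n * v_y
    return p_x, p_y, np.hypot(p_x, p_y)
```

Contouring the signed $p_{x}$ and $p_{y}$ fields then reveals the line along which each component changes sign, i.e., where the flows converge.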
Looking closely at the panels for Models II and III (top middle and top right, respectively), we see that the largest momentum gradients seem to occur on the periphery of the cores. \begin{figure*} \centering \includegraphics[width = 0.48\textwidth]{f5a.eps} \includegraphics[width = 0.48\textwidth]{f5b.eps}\\ \includegraphics[width = 0.48\textwidth]{f5c.eps} \includegraphics[width = 0.48\textwidth]{f5d.eps} \caption{Density enhancement maps with $p_{x}$ (left) and $p_{y}$ (right) momentum contours for Model I (top row) and Model III (bottom row). Cyan contours indicate motions in the negative direction (toward the observer) and red contours indicate motions in the positive direction (away from observer). } \label{sigma_poverlays} \end{figure*} Finally, Figure~\ref{sigma_poverlays} shows examples of density enhancement maps with overlaid $p_{x}$ and $p_{y}$ contours for Model I (top row) and Model III (bottom row). As depicted in all four panels, there exists an imaginary line where the contours switch from positive to negative. This indicates the convergence point for the gravitationally induced flows which form the clump/core. By comparing the two different momentum maps (Figures~\ref{p_sigmaoverlays}~\&~\ref{sigma_poverlays}) with the density enhancement and velocity maps in Figures~\ref{sigmaoverlays}~\&~\ref{velocity}, respectively, we can determine the behaviour of the region. First, the components of the momenta show that the location of the clumps in all of the models occurs where the momentum switches signs from positive to negative. Second, as noted above, the largest momentum gradients occur on the periphery of the cores. On closer inspection, we see that these high momentum regions only occur on one side of the cores, indicating the direction of flow for the material from the surroundings onto the forming cores.
\section{Synthetic Spectra} In order to see if the cores present in the simulations follow the three observed trends outlined in Section~\ref{aims}, we must perform spectral analysis on each of the models and as such must produce synthetic spectral observations of the cores and surrounding gas. Observations of star forming regions are three-dimensional in nature, where the third dimension is the line of sight velocity ($V_{los}$). In principle, our simulations are two-dimensional; however, they do have a finite thickness in the $z$-direction. Therefore, if we assume an observer is looking at the sheet edge-on, we can still extract synthetic spectra for two lines of sight, either along the $x$-axis or along the $y$-axis. For this orientation, we define the center of the full simulation region ($x,y = 0,0$) to be analogous to the center of a compass with $+y$ defined as north. Lines of sight that are parallel to the $y$-axis originate from the southern extent of the simulation and terminate at the northern extent. Likewise, lines of sight parallel to the $x$-axis originate from the western extent and terminate at the eastern extent. Based on this, we refer to these two orthogonal lines of sight as NS and EW, respectively, where the last letter indicates the location of the observer. To compare the kinematic properties of the simulated cores to the observed properties outlined in Section~\ref{aims}, we need synthetic spectra of the neutral and ionic gas components of both the cores and surrounding low density gas. The inclusion of ambipolar diffusion in the simulations allows us to examine the synthetic spectra for both the neutral and ionic components of the medium. When comparing to observations, we must choose the appropriate tracers that our simulations will correspond to.
Based on the termination conditions of the simulations, the final range of visual extinctions within our models runs from $<$~1 mag to just above 10 mag, which corresponds to column densities $N = 9\times 10^{20}$~cm$^{-2}~-~9\times 10^{21}$~cm$^{-2}$. As we are restricting our analysis to the clump-core regions, we must assume tracers that are appropriate for the densities within these regions. Ammonia (NH$_{3}$) and diazenylium (N$_{2}$H$^{+}$) are excellent neutral and ionic tracers, respectively, for regions with molecular hydrogen densities in the range of 10$^{4}$ cm$^{-3}$ - 10$^{5}$ cm$^{-3}$ and $A_{V} = 3 - 9.5$ mag \citep{Tafalla2002, Caselli2002a}. As shown by the panels in Figure~\ref{sigmaoverlays}, the clump/core regions are encompassed within the $A_{V} = 2$ mag contour with the most structure appearing within the $A_{V} = 4$ mag contour, where the average volume density is above $4.4\times 10^{3} \rm~cm^{-3}$, thus indicating that the majority of our region of interest would indeed be detected by these two tracers. For the low-density gas, we assume the majority of the neutral particles are carbon monoxide (CO) and the majority of ions are H$^{13}$CO$^{+}$. No chemistry is included in the models, so constant abundances of the various species are assumed, as well as optically thin conditions. The effects of the inclusion of a simplified chemistry and radiative transfer will be discussed in a future paper. \subsection{Linewidth and Centroid Velocity Analysis} \subsubsection{Method} With the above points in mind, we construct synthetic spectra by assuming Gaussian line shapes for each pixel such that the line of sight velocity in each pixel (i.e., $V_{x}$ or $V_{y}$ depending on orientation) is the mean velocity for the pixel.
The width of the line is dependent on the thermal and non-thermal velocities, i.e., \begin{equation} FWHM = \sqrt{\Delta V_{NT}^{2} + \Delta V_{T}^{2}}, \end{equation} where $\Delta V_{T}^{2} = 8 \ln 2~c_{s}^{2}$ is the thermal velocity component and $\Delta V_{NT}^{2} = \Delta V_{obs}^{2} - \Delta V_{T}^{2}$ is the non-thermal velocity component with $\Delta V_{obs}$ corresponding to the observed linewidth \citep{Myers1991}. We assume that each pixel is small enough that its gas is purely thermal, i.e., there is no intrinsic non-thermal component within a pixel, so that any observed non-thermal motion reflects the resolved kinematics along the line of sight. Based on this assumption, the width of the Gaussian is \begin{equation} FWHM = \sqrt{\Delta V_{T}^{2}} = \sqrt{8 \ln 2~c_{s}^{2}}. \end{equation} The height of the Gaussian is given by the FWHM scaled by the neutral or ionic column density. The total spectral line is then constructed by summing up all individual components along the line of sight. For our analysis, we look at spectra for lines of sight both on and off source. For the on-source spectra, we create spectra for the two perpendicular lines of sight (NS and EW) discussed in the previous section. For each line of sight, we create four separate spectra: a low density component and core component for both the neutral and ionic gas, respectively, assuming that the high and low density components are traced by the four molecules discussed above. For this analysis, the low density gas (LDG) is defined as regions along the line of sight (los) with visual extinction $2~<~A_{V}~<~7$ mag and the cores are defined as regions along the los with visual extinction $A_{V} > 7$ mag. The sources for which we create these spectra are listed in Table~\ref{coordinates}. For each model, the MC designation refers to the ``Main Core,'' which is the most evolved core (i.e., the one that caused the simulation to terminate; see Section~\ref{model}).
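The construction just described (one thermally broadened Gaussian per pixel, centred on that pixel's line-of-sight velocity, weighted by its column density, and summed along the line of sight) can be sketched as follows; the velocity grid and peak normalisation are our own choices:

```python
import numpy as np

def synthetic_spectrum(v_los, sigma_col, c_s=0.188, n_chan=400):
    """Optically thin synthetic line profile for one line of sight.

    v_los     : line-of-sight velocity of each pixel [km s^-1]
    sigma_col : column density of each pixel (sets each Gaussian's height)
    c_s       : isothermal sound speed [km s^-1]; each component is purely
                thermal, i.e. FWHM = sqrt(8 ln 2) c_s
    """
    v = np.linspace(v_los.min() - 5.0 * c_s, v_los.max() + 5.0 * c_s, n_chan)
    comp = sigma_col[:, None] * np.exp(
        -0.5 * ((v[None, :] - v_los[:, None]) / c_s) ** 2)
    line = comp.sum(axis=0)          # sum of all pixel components along the los
    return v, line / line.max()      # strongest channel scaled to unity
```

Restricting `v_los` and `sigma_col` to pixels in a given $A_{V}$ range yields the separate LDG and core spectra.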
Models II and III also exhibit a second well-evolved core in the south east (SE) region of the clump. For these models, we investigate this core in addition to the MC in order to determine how the spectra may change with contamination from gas originating from other regions of the clump. \begin{deluxetable}{lcrr} \tablecaption{Core Designations} \tablewidth{0pt} \tablehead{ \colhead{Model} & \colhead{Designation} & \colhead{$x$} & \colhead{$y$} } \startdata I & MC & -26.5 & -53.5 \\ II & MC & 17.6 & -52.0 \\ II & SE & 32.0 & -57.8 \\ III & MC & 15.2 & -55.1 \\ III & SE & 34.0 & -57.4 \\ IV & MC & 0.4 & 39.1 \\ V & MC & 85.5 & 62.1 \\ VI & MC & 85.2 & 62.5 \enddata \label{coordinates} \end{deluxetable} \subsubsection{Results: Core and LDG Properties} \begin{figure*} \includegraphics[width = 0.76\textwidth, angle=-90]{f6.eps} \caption{Examples of synthetic spectra for Model I (upper left), Model III (upper right), Model IV (lower left), and Model VI (lower right). Plots show the synthetic lines and Gaussian fits for the low density gas (outer Gaussian) and core (inner Gaussian) as indicated by the legend. Symbols indicate the ``observed'' values while the solid lines show the Gaussian fits. All spectra are scaled such that the strongest line in each panel corresponds to a scaled column density of unity. All panels show a NS los through the main core in each individual model (see Table~\ref{coordinates}) and assume a low density gas range 2 mag $< A_{V} <$ 7 mag and a lower core threshold of $A_{V} = 7$ mag.} \label{spectra} \end{figure*} Figure~\ref{spectra} shows examples of the synthetic spectra for Models I (upper left), III (upper right), IV (lower left), and VI (lower right). Each panel shows the spectra for the low-density gas (outer Gaussian) and the component of the core (inner Gaussian) for the assumed visual extinction ranges. 
The open circles show the synthetically generated spectra for the neutral low density gas (black), neutral core gas (red), ionic low density gas (green) and ionic core gas (blue). The correspondingly colored solid lines show the Gaussian fits for each of the four species. All spectra were scaled such that the strongest line in each panel corresponds to a column density of unity. The data represented by the black and red symbols (low density and core neutral gas, respectively) are not visible on the plot because they are obscured by the green and blue symbols (low density and core ionic gas, respectively). This indicates that for both ionisation profiles, the motions of the ions and neutrals are very similar. Comparing the models that assume a step-like ionisation profile (top row) to those with a cosmic-ray-only profile (bottom row), we see that both the core and low density envelope within models with the step-like ionisation profile exhibit large non-thermal contributions to the linewidth while the CR-only models show evidence of only slight non-thermal contributions. \begin{figure*} \includegraphics[width = 0.76\textwidth, angle = -90]{f7.eps} \caption{Resulting Gaussian parameters for both neutral particles and ions for all models and lines of sight. Symbols depict values for the assumed low density gas (LDG) or core tracers as indicated by the legends. The solid symbols depict models with the step-like ionisation profile while open symbols depict models with the CR-only ionisation profile. The LDG includes gas within the visual extinction range 2 mag $<~A_{V}~<$ 7 mag, while core gas includes gas with $A_{V}~>~7$ mag. Top panels: velocity dispersion ($\sigma_{V}$). Bottom panels: centroid velocity ($\mu$). The black and red dotted lines indicate the thermal velocity dispersion ($\sigma_{T}$) for each respective molecule assuming a temperature of 10 K. The dashed line shows $\mu = 0$ for visual reference.
} \label{gaussianparams} \end{figure*} Figure~\ref{gaussianparams} shows the results of fitting Gaussians to each of the synthetic spectra. Panels show the velocity dispersion ($\sigma_{v}$, top) and the centroid velocity ($\mu$, bottom) for the LDG (2 mag $<~A_{V}~<$ 7 mag, left) and core gas ($A_{V}~>~7$ mag, right). Here we have scaled the fitted parameters to the dimensional values assuming the mean mass for the non-thermal component and the mass of the molecular tracer for the thermal component. As discussed above, we assume the low density neutral gas, low density ionic gas, neutral core gas and ionic core gas are traced by CO, H$^{13}$CO$^{+}$, NH$_{3}$ and N$_{2}$H$^{+}$, respectively. The solid symbols depict models with the step-like ionisation profile while open symbols depict models with the CR-only ionisation profile. The black and red dotted lines in the upper panels indicate the thermal velocity dispersion for each molecule assuming a temperature of 10 K. The dashed line in the lower panels depicts the $\mu = 0$ line for visual reference. Focusing on the top panels first, we note that in general, the low-density gas has a larger dispersion than the core gas. This is especially evident for the models with the step-like ionisation profile. This is likely due to the fact that with the step-like ionisation profile, the low-density gas is kept near flux freezing throughout the entire simulation. As such, the non-thermal component added by the microturbulent perturbations is not able to dissipate as efficiently as in the CR-only models. The neutral gas is expected to have slightly larger dispersions than the ionic component (see next paragraph). This should be reflected in larger NH$_{3}$ line widths compared to the N$_{2}$H$^{+}$ line widths, if the two species do indeed trace the same gas \citep[see][for possible exceptions]{Friesen2010, Tafalla2004}. Looking at the bottom panels, we see no discernible trend between the LDG and core gas.
The centroid velocity of ions and neutrals within the high-density gas can differ significantly depending on the line of sight and model. \begin{figure} \includegraphics[width = 0.38\textwidth, angle = -90]{f8.eps} \caption{Difference between neutrals and ions for low density gas and cores. Symbols show the difference (neutral - ion) for the visual extinction ranges indicated in the legend. Solid symbols indicate models with the step-like ionisation profile while open symbols indicate models with CR-only ionisation profile. Left Panel: Difference in the velocity dispersion. Right Panel: Difference in the centroid velocity.} \label{comparison1} \end{figure} Figure~\ref{comparison1} highlights the difference between the Gaussian parameters of the neutrals and ions for each of the core lines of sight studied. The left panel shows the difference between the velocity dispersion of the neutrals and ions ($\sigma_{ni}$) while the right panel shows the difference between the centroid velocity of the neutrals and the ions ($\mu_{ni}$). The solid symbols depict models with the step-like ionisation profile while the open symbols depict models with the CR-only ionisation profile. Looking at the left panel first, we see that for all models, the difference between the neutrals and ions is always positive. This indicates that the neutrals have larger velocity dispersions than the ions within both the LDG and cores regardless of the ionisation profile. This is because gravitationally driven motions affect neutrals more strongly than the ions. In addition, we see that in all cases, the difference between the neutrals and ions is larger for the cores than for the low-density gas. Also evident is a distinct split between the two different ionisation profiles. For both ionisation profiles, there does not seem to be a discernible trend with increasing frequency of perturbations.
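The quantities compared here can also be extracted directly from the spectra: for a near-Gaussian line, the first and second moments of the profile reproduce the centroid $\mu$ and dispersion $\sigma_{v}$, from which the neutral-minus-ion differences follow. A sketch (we use moments here where the paper uses Gaussian fits; for Gaussian lines the two agree):

```python
import numpy as np

def line_moments(v, spectrum):
    """Centroid (mu) and velocity dispersion (sigma_v) of a line profile."""
    w = spectrum / spectrum.sum()
    mu = np.sum(w * v)
    sigma_v = np.sqrt(np.sum(w * (v - mu) ** 2))
    return mu, sigma_v

def neutral_ion_differences(v, neutral_line, ion_line):
    """(sigma_ni, mu_ni): neutral-minus-ion dispersion and centroid offsets."""
    mu_n, s_n = line_moments(v, neutral_line)
    mu_i, s_i = line_moments(v, ion_line)
    return s_n - s_i, mu_n - mu_i
```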
Looking at the right hand panel, we see that the difference between centroid velocities of the neutral particles and ions for the LDG is small while the difference for the core gas shows larger deviations relative to the low density gas for some lines of sight. \begin{figure} \centering \includegraphics[width = 0.37\textwidth,angle=-90]{f9.eps} \caption{Gaussian parameters as a function of position. Top row shows the variation in the standard deviation for neutral particles ($\sigma_{v,n}$) while bottom row shows the variation in the centroid velocity of the neutral particles ($\mu_{n}$). Both values are scaled to the sound speed ($c_{s}$). Left column shows the scans across the main core in Model I for NS lines of sight. Right column shows the scans across the full clump region in Model III for EW lines of sight. Blue lines shows the values for all gas with $A_{V}~>~2 $ mag while black lines show the values for all gas with $A_{V}~>~5$ mag. Vertical dashed lines indicate the locations of the main core (red) and SE core (green) for each of the models/lines of sight. The vertical purple line shows the location of a medium density region at $(x,y) = 14.8,~-51.95$ (north of the MC) denoted as N in the legend. } \label{GaussianI} \end{figure} \subsubsection{Results: Region properties} In addition to the individual lines of sight through the center of the cores, we performed scans across the clumps/cores for both the NS and EW lines of sight to see how the standard deviation or velocity dispersion ($\sigma_{v,n}$) and centroid velocity ($\mu_{n}$) of the neutral particles change as a function of position in step-like models only. For this analysis, we consider all material above 2 mag and above 5 mag to probe overall motions across the cloud. Figure~\ref{GaussianI} shows examples of some of the situations that can arise within forming cores. 
Here we show a scan across the clump/core in Model I along the NS los (left), and scans across the clump in Model III along the EW los (right). For both models, the top row shows the variation in the neutral velocity dispersion ($\sigma_{v,n}$) while the bottom row shows the variation in the neutral centroid velocity ($\mu_{n}$). The dashed lines indicate the locations of the main core (red) and SE core (green) for each of the models/lines of sight. The purple dashed line shows the location of a medium density region just north of the MC in Model III. The blue data points include all gas along the line of sight with $A_{V} > 2$ mag while the black data points include all gas with $A_{V} > 5$ mag. The two plots for Model I depict a best-case scenario within our simulations. Looking at the change in variance across the clump, we see that the amount of dispersion increases as we get closer to the central core and then decreases again as we move away, creating a Gaussian-like profile. The degree of variance is larger for the $A_{V} > 2$ mag data than the $A_{V} > 5$ mag data. This shows that the majority of the dispersion comes from the low density gas, which agrees with the trends shown in Figure~\ref{gaussianparams}. All across the core (defined by the $A_{V} > 5$ curve) there is a clear drop in velocity dispersion compared to the lower density material surrounding the core. This is reminiscent of the transition to coherence found toward low-mass dense cores \citep{Pineda2010Coherence}. At the location of the core itself, there is a very small local minimum indicating that the dispersion within the core is smaller than in its immediate surroundings, by 0.39\% and 0.36\% for the $A_{V} > 5$ mag and $A_{V} > 2$ mag data, respectively. Looking at the lower left panel, we see that the variation in the centroid velocity reveals that it decreases monotonically as one scans across the region.
This decrease reveals that to the left of the clump/core, most of the material is moving in the positive direction while to the right, most of the material is moving in the negative direction. At the location of the core itself, there is a distinct drop as the direction of the dominating velocity changes signs. The decreasing trend observed in the centroid velocity is a result of the angle between the line of sight and the orientation of the clump. In this model, the NS and EW lines of sight are both approximately at a 45 degree angle to the major axis of the clump and therefore at a 45 degree angle to the in-flowing material. This results in material moving in the positive direction dominating the sample on one side of the core and material moving in the negative direction dominating on the other side. If the lines of sight were parallel and perpendicular to the clump major axis, the positive and negative velocity contributions would cancel each other out, resulting in a flat line. Comparing the two data sets, we see there is a negligible difference between the $A_{V} > 2$ mag and $A_{V} > 5$ mag data for $\mu_{n}$. The right hand column in Figure~\ref{GaussianI} shows the scans for Model III along the EW lines of sight. Compared to Model I, this model shows the effect of having multiple structures within a clump and along the line of sight. Looking at the upper right hand panel, we see that the trends in the two data sets are different. In the region between y = -57 and y = -56, we see a spike in the $A_{V} > 5$ mag data that is larger than in the $A_{V} > 2$ mag data. This would indicate that there is a large variance in the $A_{V} > 5$ mag gas within this region that is diluted by the low density gas. Likewise, between y = -55 and y = -52, there is a large peak in the $A_{V} > 2$ mag gas that is not as pronounced in the $A_{V} > 5$ mag gas.
Looking at the scan of the central velocity, we see that the more complicated clump/core structure of this model no longer results in a monotonically decreasing trend. Again, as with the dispersion plot, we see large differences between the two data sets. Comparing the data trends in Figure~\ref{GaussianI} to the density enhancement, velocity and momentum maps (see Figures~\ref{sigmaoverlays},~\ref{velocity},~\ref{p_sigmaoverlays}~\&~\ref{sigma_poverlays}), we can start to pick out some of the features visible in the maps. First, the locations of the cores appear to coincide with either a valley or sharp gradient for both the $\sigma_{v,n}/c_{s}$ and $\mu_{n}/c_{s}$ values. With respect to the variance ($\sigma_{v,n}$) this would once again indicate a transition to coherence. This is even evident at the medium density region (N) designated by the purple dashed line. Second, the bottom right hand panel has a sharp increase between y = -57 and y = -55 in the $A_{V} > 5$ mag gas that shows a distinct switch in the direction of velocity. This is consistent with the velocity and momentum maps of the region (see Figures~\ref{velocity}~\&~\ref{sigma_poverlays}) which show a velocity switch within this region. \section{Discussion} \subsection{Velocity Structures} As shown in the velocity maps for Models II and III (Figure~\ref{velocity}, top middle and top right, respectively), we see that the cores that have formed within the clump seem to exist at the edge of the velocity valley. This is also observed in the velocity maps for Models IV - VI although to a lesser extent. Conversely, Model I shows that the position of the velocity valley coincides with the position of the core. The question is whether the cores always form within a velocity valley. Unlike observations, with simulations we have the advantage of being able to look back in time.
Figure~\ref{TimelapseII} shows an example of the time-lapse of the velocity structure for Model II at times $t/t_{0} = 75.1, 78.1, 79.1, 79.6, 80.1~\&~80.7$ (from top left to bottom right, respectively). The contours show the density enhancement in 1 magnitude levels starting at $A_{V} = 2$ mag that correspond to the time of the velocity map. From these maps, we see that the extent of the velocity valley decreases over time. Looking at the visual extinction contours across the six times, the formation of the clump and subsequent formation of the cores at later times is evident. If we look at the location of the three high density regions in the last panel (Region 1: $x_{1},y_{1} = 17.6,-52.0$, Region 2: $x_{2},y_{2} = 32.0,-57.8$, and Region 3: $x_{3},y_{3} = 21.7, -52.8$) and compare them to the previous times, we see that at early times, the regions which end up forming these three cores are initially entirely within the velocity valley and then migrate toward the edge of the velocity valley as they develop. This migration is due to the fact that the clump (and therefore the velocity valley) is contracting around these three regions. Comparing the final panel of Figure~\ref{TimelapseII} to the previous three panels, we can see that the positions of these three cores do not significantly change over the course of their formation (the magnitude of the core velocity is on the order of 0.07 pc/Myr). This implies that the cores do indeed seem to form within the velocity valley. This behaviour is also exhibited in the other five models (not shown). We also note that the cores are elongated along the flow with the densest regions occurring closer to the velocity valley. This results in an elongated shape with the head pointing toward the velocity valley. This is reminiscent of the cometary shapes of starless cores observed by \citet[][see Figure 14]{Crapsi2005}.
\begin{figure*} \centering \includegraphics[width = 0.33\textwidth]{f10a.eps} \includegraphics[width = 0.33\textwidth]{f10b.eps} \includegraphics[width = 0.33\textwidth]{f10c.eps}\\ \includegraphics[width = 0.33\textwidth]{f10d.eps} \includegraphics[width = 0.33\textwidth]{f10e.eps} \includegraphics[width = 0.33\textwidth]{f10f.eps} \caption{Velocity maps for Model II for six times (in order from top left to bottom right: $t/t_{0}$ = 75.1, 78.1, 79.1, 79.5, 80.1 and 80.6). The contours show the visual extinction in 1 magnitude steps starting at $A_{V} = 2$ mag at the same time as the velocity map.} \label{TimelapseII} \end{figure*} Based on the above, the following scenario regarding the formation of the clumps/cores emerges. As mentioned earlier, the components of the momentum reveal that in all models the clumps form where two flows converge (see Figure~\ref{sigma_poverlays}). This is consistent with the observations and simulation evidence of \citet{Schneider2010} and \citet{DB2011}. This convergence of flows creates the velocity valley (see Figure~\ref{velocity}). For the models which only form a single clump/core (Models I, IV, V and VI), the clump collapses and thus the velocity valley shrinks until a single core forms, causing the core to form directly over the velocity valley. This is due to the initial parameters of these models, specifically the lack of perturbations in Model I and the CR-only ionisation profile for Models IV - VI. Higher resolution simulations with box size reduced to $8\pi L_{0}$ (not shown) confirm that subfragmentation does not occur within the core for these models. For Models II and III, the scenario becomes more complicated since they both show evidence of the two stage fragmentation \citepalias[c.f.][]{BB2014}. For these two models the high ionisation fraction within the low density gas causes a parsec size clump to form around the initially large velocity valley. 
As the clump collapses, the density increases and the velocity valley shrinks. This allows a second fragmentation event to occur in regions where the density has risen enough to cause the ionisation fraction to drop, allowing cores to form. Given the velocity structure, i.e. low velocity in the centre of the clump and higher velocity on the outskirts, the cores can only form from material flowing in from the outskirts of the clump. As the cores form from this in-flowing material, the clump continues to collapse on its own timescale \citep{BB2012}, resulting in cores coinciding with the periphery of the velocity valley. Thus the velocity valley may be at the origin of the ``transition to coherence'' widely observed in dense cores. For Models II and III, the in-flowing material can be mildly supersonic, resulting in supersonic relative velocities between the two flows. This could result in shocks occurring at the junction point between the converging flows; at present, however, the code assumes isothermality throughout the evolution. \subsection{Synthetic Spectra} \subsubsection{Comparison with Previous Work} As mentioned in Section~\ref{aims}, star forming regions show three kinematic properties of starless cores in relation to their surroundings. First, cores are observed to have little internal turbulence \citep{BM1989,Jijina1999}, i.e., the velocity dispersion is dominated by thermal motions. This property is evident in all of our simulations, as shown by Figure~\ref{gaussianparams}. Second, cores have smaller velocity dispersions than the surrounding material \citep{BM1989, Goodman1998, Jijina1999, Pineda2010Coherence}. Our simulations show evidence of this property, as depicted by the dips in the $\sigma_{v}$ scans across cores (cf.\ Figure~\ref{GaussianI}); however, contamination along the line of sight and the small sampling of the core region itself make spotting such a dip non-trivial.
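The line parameters discussed here, a centroid $\mu$ and a dispersion $\sigma$, can be extracted from a spectrum by a least-squares Gaussian fit. The following is only a minimal sketch with a synthetic noisy line profile and arbitrary units; it is not the actual pipeline used to produce the figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, mu, sigma):
    """Single-component Gaussian line profile."""
    return amp * np.exp(-0.5 * ((v - mu) / sigma) ** 2)

# Synthetic line: centroid 0.1, dispersion 0.2, with weak noise (all illustrative)
v = np.linspace(-2.0, 2.0, 400)
rng = np.random.default_rng(0)
spectrum = gaussian(v, 1.0, 0.1, 0.2) + 0.01 * rng.standard_normal(v.size)

# Fit recovers (amplitude, centroid, dispersion); p0 is a rough initial guess
(amp, mu, sigma), _ = curve_fit(gaussian, v, spectrum, p0=(1.0, 0.0, 0.5))
print(mu, abs(sigma))  # close to the input values 0.1 and 0.2
```

Scanning such fits across positions on the map yields the $\mu_{n}$ and $\sigma_{n}$ profiles shown in Figure~\ref{GaussianI}.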
A previous study of synthetic spectra by \citet{Kirk2009} was able to show that the cores in their simulations follow the first two properties (i.e., they had little internal turbulence and smaller velocity dispersions than the surrounding material); however, they were not able to show evidence of the third (i.e., cores show small relative motions with respect to the surrounding material). As described by \citet{Walsh2004}, the diagnostic for determining the relative motion of the core to the surroundings is comparing the difference between the centroid velocity of the core and the low density gas to the linewidths of typical chemical tracers. Small relative motions are indicated by small differences in line center velocities, similar to the line width of N$_{2}$H$^{+}$, while large relative motions would be indicated by shifts in centroid velocity comparable to the broader CO line widths. Simulations by \citet{Ayliffe2007} found regions where the high density tracers would have larger line widths than the low density tracers, thus contradicting \citet{Walsh2004}; however, they concede that including chemical evolution in their calculations yields results that agree better with the observations. The ionisation profile within our simulations provides a rudimentary treatment of chemical evolution within the cloud. As discussed previously, we assume the neutral LDG component is CO while the ions in the cores correspond to N$_{2}$H$^{+}$. For our cores to have low relative motions compared to the surroundings, the spread in the difference between the core centroid velocity and the LDG centroid velocity must be smaller than the linewidth of the core gas. \begin{figure} \includegraphics[width = 0.38\textwidth, angle = -90]{f11.eps} \caption{Relative motion of core with respect to low density gas for all synthetic spectra.
Filled regions between the dashed and dotted lines indicate the area covered by the average FWHMs of the CO (blue + pink) and the N$_{2}$H$^{+}$ (pink only) linewidths, respectively. The solid line shows where zero deviation occurs and is plotted for visual reference. Solid black dots depict models with the step-like ionisation profile while open black dots depict models with the CR-only ionisation profile. Red dots indicate four cases with the step-like ionisation profile that exhibit larger deviations than the other spectra.} \label{motions} \end{figure} Figure~\ref{motions} shows the comparison of the centroid velocity of the core minus the centroid velocity of the LDG ($\mu(N_{2}H^{+}) - \mu(CO)$) for all synthetic spectra. Solid black dots indicate models with the step-like ionisation profile while open black dots indicate models with the CR-only ionisation profile. The filled regions within the dashed and dotted lines indicate the extremes of the average CO (blue + pink) and N$_{2}$H$^{+}$ (pink only) linewidths, respectively. As one can see, the difference is relatively small in most cases and falls well within the boundaries defined by the N$_{2}$H$^{+}$ linewidth. This agrees with the observations of \citet{Walsh2004} as well as the concession regarding chemical evolution by \citet{Ayliffe2007}. The four red points in Figure~\ref{motions} indicate outliers that, although they fall within the core linewidth boundaries, deviate greatly compared to the other models. Looking at the orientations of the lines of sight for these four points, we see that they all occur for EW lines of sight within Models II and III. Indeed, these lines of sight have shown the tendency to deviate compared to the other models in Figure~\ref{comparison1}~(right panel). Initial analysis of this phenomenon assumed it was due to contamination by a second source along the line of sight; however, this postulate is not consistent for all the lines of sight that show large deviations.
For example, MII MC NS goes through two distinct high density regions and does not show a large deviation in $\mu(N_{2}H^{+}) - \mu(CO)$, while MII SE EW has very little contamination along the line of sight but does show a large deviation. Closer examination of the conditions for these four lines of sight reveals that they all intersect the imaginary line that defines where the direction of the velocity switches abruptly from positive to negative, and that the contributions from low and high density gas on either side of this line are asymmetric. This asymmetry is key to the large deviations in $\mu(N_{2}H^{+}) - \mu(CO)$. To illustrate, we compare Model I (which shows very little deviation) to Models II and III (which show evidence of significant deviation). In Model I, both lines of sight intersect the velocity switch line; however, the centroid difference, as shown in Figure~\ref{motions}, is small. This is due to mostly symmetric contributions of low and high density gas moving in both directions along the line of sight, which cancel each other out. The small deviation observed in Figure~\ref{motions} indicates a slight asymmetry between the motions of high density gas with respect to low density gas along the line of sight. Conversely, for Models II and III, the gas along the EW lines of sight is highly asymmetric about the velocity switch line, causing the majority of the high density gas to be moving in one direction while the low density gas is moving in the opposite direction, resulting in a large deviation as shown in Figure~\ref{motions}. Given that the spectra are mass weighted, the sign of the deviation reveals which side of the velocity line the core is on: a positive deviation corresponds to being in the foreground along the line of sight moving away from the observer, while a negative deviation corresponds to being in the background moving toward the observer. This explains why the phenomenon does not show up for MII MC NS.
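The sign argument above can be illustrated with a toy line of sight. In this sketch (all cell values are illustrative), the density-weighted centroid of all gas stands in for the CO measurement and the centroid of the dense cells only stands in for N$_2$H$^+$; a dense core placed asymmetrically on one side of the velocity switch produces a nonzero centroid difference.

```python
import numpy as np

# Cells along one line of sight crossing the velocity switch:
# v_los changes sign at the switch; density decides which tracer sees the cell.
density = np.array([1.0, 1.0, 1.0, 8.0, 8.0, 1.0])   # dense "core" cells on one side
v_los   = np.array([-0.3, -0.3, -0.2, 0.2, 0.3, -0.3])

dense = density > 5.0                                 # stand-in for the dense-gas threshold
mu_co = np.average(v_los, weights=density)            # all gas -> CO proxy
mu_n2hp = np.average(v_los[dense], weights=density[dense])  # dense gas only -> N2H+ proxy

# Positive difference: dense gas dominated by material moving away from the observer
print(mu_n2hp - mu_co)
```

Making the low and high density contributions symmetric about the switch drives this difference toward zero, which is the Model I situation described above.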
For this line of sight, both the cores and the cloud are moving in the same direction with very little deviation between the two. Observations of star forming regions (or any astronomical region) are two dimensional by nature, with information about the third dimension filled in by the velocity along the line of sight. By measuring the deviations of $\mu(N_{2}H^{+}) - \mu(CO)$ from zero as shown in Figure~\ref{motions}, one can start to construct a three dimensional picture of what a clump or other high density region may look like. In our simulations, we have the luxury of being able to see what the velocity and density look like in the same plane. For example, an observer of the region in Model III may consider the two clumps to be independent of each other; however, by looking at the velocity map we can see that these two clumps are actually part of one larger clump, since they share the same velocity valley. If they were independent, they would each have their own individual velocity valley. This type of analysis is not possible in observations. However, observations of a significant deviation in the mean velocities for the low density and high density gas along the line of sight have been reported \citep[e.g.,][]{Henshaw2013}. We interpret this as the presence of a velocity switch along the line of sight with the high density gas in the foreground. As indicated above, observations of this phenomenon require two very specific conditions. First, the line of sight has to intersect the imaginary line where the velocity abruptly changes sign. Second, there needs to be an asymmetry between the low- and high-density gas along the line of sight. Therefore, this large deviation in $\mu(N_{2}H^{+}) - \mu(CO)$ is not expected to be detected in all observations.
\citet[][Figure 2, second panel]{Walsh2004} shows a small number of sources which have larger deviations in $\mu(N_{2}H^{+}) - \mu(CO)$ that still lie within the confines of the N$_{2}$H$^{+}$ line width, similar to those within this study. Our simulations are more appropriate for quiescent low mass star forming regions than for high-mass star forming regions \citep[e.g.][]{Henshaw2013}, where $\mu(N_{2}H^{+}) - \mu(CO) = 0.18\pm0.04$ was found. \subsubsection{Effect of Ionisation Profile} The analysis in the previous sections has shown that the ionisation profile has a great effect on the synthetic spectra. Figures~\ref{spectra}~-~\ref{comparison1} highlight the various differences that arise due to the different ionisation profiles. The most obvious difference is the broadening of the spectra of the low-density gas. As discussed above, this broadening is directly due to the flux-frozen conditions set up by the ionisation fraction within the low-density gas. Because movement of neutrals across the magnetic field is inhibited, the non-thermal component that is introduced by the microturbulent perturbations cannot dissipate as efficiently as in the CR-only models. In addition, as shown by the left hand panel of Figure~\ref{comparison1}, the ionisation profile has an effect on the difference in dispersions for the neutral and ion components of the gas, with the step-like ionisation resulting in smaller differences within the cores and low density gas than observed for models with the CR-only profile. This again would be a direct consequence of the nearly flux-frozen conditions during the early stages of evolution for the models with the step-like ionisation profile. Frequent collisions between the neutrals and ions within the gas with high ionisation fraction would result in similar velocities for each species, while the ability of neutrals to slip past ions at lower ionisation fractions allows the neutrals to obtain a larger velocity than the ions.
For the CR-only ionisation profiles, the neutrals are always able to slip past the ions and thus obtain a larger velocity even in the low density gas. Analysis of the velocity structure via Figure~\ref{velocity} revealed that the models with the step-like ionisation profile produce larger contraction velocities than the CR-only models. This indicates that large velocities require the presence of a high ionisation fraction while a low ionisation fraction is more conducive to quiescent conditions. This is due to the fact that regions with high ionisation fractions have larger fragmentation lengthscales. These regions, therefore, amass material from further away than the lower ionised regions, resulting in larger velocities. The transition from high to low ionisation in the step-like models could explain the transition to coherence within cores as observed by \citet{Goodman1998} and \citet{Pineda2010Coherence}, among others. A closer look at the core region analysed in \citet{Pineda2010Coherence} shows a gradient in density between the core and surrounding gas, with the highest density occurring within the quiescent coherent region. If we assume that the ionisation fraction within the quiescent region is low while the surrounding region has a higher ionisation fraction, then the transition to coherence is simply the transition from a UV-dominated ionisation fraction to a lower, CR-dominated ionisation fraction, and the coherent zone would mark the physical region within which the ionisation fraction drops. A study of the deuteration fraction \citep{Caselli2002b} within the core and surrounding high density gas could give an indication of the ionisation fractions within these two regions to confirm this theory. If this is the case, comparing the thickness of the transition zone to the column density maps will help constrain the parameters of the step-like ionisation profile.
In the future, we plan to include a simple chemical network and implement a variation in the cosmic-ray ionisation flux across the cloud \citep[as in][]{Padovani2011}, to more accurately determine the ionisation fraction across the molecular cloud and study the effects on the dynamical evolution. \section{Summary} Based on the above analysis and discussion, there are several results which we summarise below. \begin{itemize} \item Analysis of the density and velocity structures for all models show that clumps form at the intersection of converging flows. These flows are formed due to gravitational instability rather than a pre-existing turbulent flow. Velocity data reveals a large low velocity region (velocity valley) that occurs at the center of the clump due to the convergence of the flows. \item Cores form within the velocity valley. For the monolithic collapse (Model I) the center of the core coincides with the center of the velocity valley. For models with subfragmentation, the cores form on the periphery of the valley from material that is flowing into the valley. \item CR-only models exhibit subsonic to transonic contraction velocities while the step-like ionisation profile models exhibit supersonic contraction velocities. Regions with high ionisation fractions have larger fragmentation lengthscales and amass material from further away, resulting in larger velocities. Therefore observed high velocities require high ionisation fractions while the quiescent nature of cores requires lower ionisation profiles. The step-like ionisation profile naturally leads to supersonic in-fall speeds since it allows for steep gradients in the ionisation fraction to occur as the profile steps down from high to low ionisation. These steep gradients allow for velocity enhancements which are caused by flows of material from high to low ionisation fraction regions as the ability for neutrals to slip past the magnetic field lines increases. 
The observed transition to coherence could be the transition from high to low ionisation fractions. As such, it is very important to consider the influence of the ionisation fraction and chemistry on the evolution of molecular clouds. \item Analysis of the synthetic spectra shows that the low density gas spectra have larger dispersions than the high density spectra for models with the step-like ionisation profile. In addition, the line widths are consistently larger for neutral particles than ions in both the low- and high-density gas. \item We compared the difference between the centroid velocities of the core gas (traced by $N_{2}H^{+}$) and the low density gas (traced by CO) to the linewidths of these two tracers. Results show that the difference for all models was well within the limits defined by the linewidth of the $N_{2}H^{+}$ gas. This indicates that all cores show non-ballistic motions with respect to the surrounding gas, which agrees with the observations presented by \citet{Walsh2004, Walsh2007}. \item Large deviations in the difference between the centroid velocities of $N_{2}H^{+}$ and CO ($\mu_{N_{2}H^{+}}$ and $\mu_{CO}$, respectively) coincide with lines of sight that intersect the convergence of the two flows that make up the clump. Such a variation requires that the line of sight intersect the velocity switch and that the region have an asymmetry between the contributions of the low and high density gas along the line of sight. Instances of this within observations would give insight into the hidden structure along the line of sight. \end{itemize} \section*{Acknowledgments} SB was supported by a Natural Sciences and Engineering Research Council (NSERC) Discovery Grant. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 320620-PALs.
\section{Introduction} \label{sec:Intro} Recurring events which have a large impact on the system and are infrequent and irregular are known in the literature as extreme events~\cite{ansmann2013extreme, karnatak2014route}. Due to their occurrence in a large class of physical systems~\cite{RevModPhys.73.1067, Feigenbaum2001, dobson2007, bunde2002science}, a large body of research has been devoted to understanding such events in specific systems like rogue waves in oceans~\cite{akhmediev2009waves, chabchoub2011rogue, chabchoub2012super} and coupled laser systems~\cite{bonatto2011deterministic, akhmediev2016roadmap, pisarchik2011rogue, dal2013extreme}, harmful algal blooms in marine ecosystems~\cite{bialonski2015data, bialonski2016phytoplankton}, epileptic seizures in the brain~\cite{lehnertz2008epilepsy, Lehnertz2006} and adverse weather conditions like floods, droughts and cyclones. Additionally, studies using theoretical models have shown that extreme events can be generated via various mechanisms including incoherent background of interacting waves~\cite{kim2003statistics}, noise-induced attractor hopping~\cite{reinoso2013extreme, zamora2013rogue}, pulse-coupled small world networks~\cite{rothkegel2014irregular}, inhomogeneous networks of oscillators~\cite{ansmann2013extreme, karnatak2014route} and delay coupled relaxation oscillators~\cite{PhysRevE.95.062219}. For some of those systems, parameter regions have been identified in which several attractors can coexist. Particularly interesting is the coexistence of attractors containing extreme events and attractors exhibiting only regular motion. In those cases, it depends crucially on the initial conditions whether extreme events occur. The structure of the basins of attraction is essential for assessing the risk of the emergence of extreme events, but this question has rarely been addressed in the literature.
For several decades, significant research has been done in the field of basin structures and their boundaries in general dynamical systems. Some of the interesting basin structures include fractal basins~\cite{VANDERMEER2001265, grebogi2012chaos, MCDONALD1985125}, Wada basins~\cite{NUSSE1996242, doi:10.1142/S0218127496000035}, intermingled basins~\cite{SOMMERER1996243,PhysRevE.54.2489, PhysRevE.52.R3313} and riddled basins~\cite{doi:10.1142/S0218127492000446, PhysRevLett.73.3528, OTT1994384}. Various studies have analyzed the specific conditions which lead to the emergence of each of these basin types~\cite{PhysRevLett.50.935, KENNEDY1991213, ott1994blowout, PhysRevLett.71.4134}. For the study presented here, riddled basins are of particular interest. A basin is said to be riddled if every neighborhood of every point of the basin contains points from another basin of attraction~\cite{ASHWIN1994126, 0951-7715-9-3-006}. An important consequence of this property is that, if the basin of an attractor is riddled, an arbitrarily small perturbation of any initial condition from the basin of attraction of this particular attractor can make the system converge to another attractor. It is known that riddled basins are often formed in systems with certain symmetries which manifest themselves as invariant manifolds of the system~\cite{CAZELLES2001301}. In previous studies, riddled basins have been found in a variety of systems including simple maps~\cite{PhysRevE.57.2713, PhysRevE.56.6393}, electronic circuits~\cite{0305-4470-28-3-001} and instantaneously coupled chaotic oscillators~\cite{PhysRevLett.73.3528, doi:10.1063/1.4954022}. Here we present the occurrence of riddled basins in delay-coupled relaxation oscillators. This very complicated basin structure leads to an extremely high sensitivity of the system with respect to perturbations.
The latter property can have a strong impact on the system's dynamics since the attractor possessing the riddled basin is the one containing extreme events. In this paper, we present a system of delay-coupled FitzHugh Nagumo (FHN) oscillators which have been recently shown to exhibit extreme events~\cite{PhysRevE.95.062219} and investigate the emergence of various types of multistability, i.e.\ the coexistence of different attractors and their respective basins of attraction. In particular, we show that this system exhibits riddled basins of attraction in the parameter regime where extreme events are observed. After introducing the model in Sec.~\ref{sec:Model}, we discuss various regimes of multistability in Sec.~\ref{sec:Multistability}. We show that the basins of attraction become more and more complex as the coupling strength is increased. In particular, we identify a riddled basin of attraction belonging to an attractor exhibiting extreme events in Sec.~\ref{sec:Characteristics} by extending the concept of final state sensitivity to infinite-dimensional dynamical systems and providing further evidence for the riddled structure by classifying points as interior or boundary points. We underline the consequences of a riddled basin structure in a system exhibiting extreme events in the conclusions (Sec.~\ref{sec:Conclusions}). \section{The Model} \label{sec:Model} We consider a pair of FHN units $(i=1,2)$, which are coupled to each other using two time delayed diffusive couplings. 
If the coupling strengths of the system are given by $M_1$ and $M_2$, and the respective time delays by $\tau_1$ and $\tau_2$, then the dynamical equations governing the system read \begin{equation} \begin{aligned} \dot{x}_i &= x_i(a-x_i)(x_i-1)-y_i + \sum_{k=1,2} M_k (x_j^{(\tau_k)}-x_i) \\ \dot{y}_i &= bx_i-cy_i + \sum_{k=1,2 }M_k (y_j^{(\tau_k)}-y_i), \label{eq:Definition} \end{aligned} \end{equation} where $x_j^{(\tau_k)}=x_j \left( t-\tau_k \right)$, $y_j^{(\tau_k)}=y_j \left( t-\tau_k \right)$ and $i \neq j$. The two FHN units possess identical parameters $a$, $b$ and $c$. For our investigations, we fix them at $a=-0.025$, $b=0.00652$ and $c=0.02$. These parameter values correspond to the regime where, in the absence of coupling, each FHN unit executes oscillatory behavior in the long term. \begin{figure} \includegraphics[width=0.95\linewidth]{Fixed_Points.pdf} \caption{Bifurcation diagram showing the position of the fixed points for varying coupling strength $M$. The green hollow circles represent unstable fixed points and the blue solid diamonds represent their stable counterparts. The points of fold and Hopf bifurcations are marked by `F' and `H' respectively.} \label{fig:FixedPoints} \end{figure} Having identical internal parameters for FHN units implies the existence of an invariant manifold defined by $x_1^{(\tau)}=x_2^{(\tau)}$; $y_1^{(\tau)}=y_2^{(\tau)}$ for all $\tau \in \left[ 0, \text{max}\left\{ \tau_k \right\} \right]$. This manifold corresponds to the complete synchrony of the two units and partitions the phase space of the system in two symmetric halves. Note that this symmetry is particularly reflected in the position of the fixed points of this system in phase space. Hence, if $\left( x_1^*, y_1^*, x_2^*, y_2^* \right)$ is a fixed point of the system outside the synchronization manifold, then $\left( x_2^*, y_2^*, x_1^*, y_1^* \right)$ is also a fixed point with the same stability.
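For readers who wish to experiment with Eq.~\ref{eq:Definition}, a fixed-step Euler scheme with a history buffer is sufficient for qualitative exploration. The sketch below uses the internal parameters from the text, but the coupling values, step size, initial history and integration time are illustrative, and the integrator is far cruder than a production delay-differential-equation solver.

```python
import numpy as np

# Internal parameters from the text; coupling values are illustrative
a, b, c = -0.025, 0.00652, 0.02
M = [0.01, 0.002]            # M_1, M_2 (illustrative)
tau = [80.0, 65.0]           # tau_1, tau_2
dt = 0.1
lags = [int(t / dt) for t in tau]

steps = 5000
n_hist = max(lags)
# state[n] = (x1, y1, x2, y2); constant initial history, asymmetric between units
state = np.tile([0.3, 0.01, -0.2, 0.01], (n_hist + steps + 1, 1))

def rhs(now, delayed):
    """Right-hand side of Eq. (1); delayed[k] is the state at t - tau_k."""
    dx = np.empty(4)
    for i, j in ((0, 2), (2, 0)):   # x-indices of unit i and the other unit j
        coup_x = sum(M[k] * (delayed[k][j] - now[i]) for k in range(2))
        coup_y = sum(M[k] * (delayed[k][j + 1] - now[i + 1]) for k in range(2))
        dx[i] = now[i] * (a - now[i]) * (now[i] - 1) - now[i + 1] + coup_x
        dx[i + 1] = b * now[i] - c * now[i + 1] + coup_y
    return dx

for n in range(n_hist, n_hist + steps):
    delayed = [state[n - L] for L in lags]
    state[n + 1] = state[n] + dt * rhs(state[n], delayed)

print(state[-1])  # final (x1, y1, x2, y2)
```

The constant initial history mirrors the phase-space slices used later, in which the history equals the initial condition.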
A useful way to exploit the symmetry of the system is to transform Eq.~\ref{eq:Definition} to new coordinates, $X_{1,2}=\frac{x_1 \pm x_2}{2}$ and $Y_{1,2}=\frac{y_1 \pm y_2}{2}$. In these coordinates, $\left( X_1, Y_1 \right)$ denotes the position of the projection of a general point $\left( X_1, Y_1, X_2, Y_2 \right)$ on the synchronization manifold and $\left( X_2, Y_2 \right)$ represents the separation between the point and the synchronization manifold. Moreover, if we define $X_j^{(\tau_k)}=X_j \left( t-\tau_k \right)$, any point on the synchronization manifold can be represented by $X_2^{(\tau)}=Y_2^{(\tau)}=0$ for all $\tau \in \left[ 0, \text{max}\left\{ \tau_k \right\} \right]$. Here, we use the transformed coordinates in figures to distinguish between attractors located on the synchronization manifold and the ones outside the synchronization manifold. Moreover, we note that, due to the form of coupling used in Eq.~\ref{eq:Definition}, it can be shown that the position of the fixed points depends neither on the time delays nor on the individual coupling strengths. Instead, it only depends on the sum of coupling strengths, $M=M_1+M_2$. Explicit computation yields several fixed points (see Fig.~\ref{fig:FixedPoints}) among which the origin is the only one present for the whole interval of coupling strength, even for zero coupling. As the coupling strength increases from zero, a pair of unstable fixed points appears on either side of the synchronization manifold via a fold bifurcation \textbf{F}. Thereafter, one of the fixed points on each side of the manifold stabilizes through a reverse Hopf bifurcation \textbf{H}. Since we are mainly interested in the parameter regions in which multistability occurs, we focus on that interval of coupling strength in which two stable fixed points outside the synchronization manifold exist. However, we also consider the transition in which those two fixed points become stable.
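The coordinate transformation can be written as a one-line helper; as a quick sanity check, a synchronized state has vanishing transverse components (the numerical values below are arbitrary test inputs):

```python
def sync_coords(x1, y1, x2, y2):
    """Split a state into on-manifold (X1, Y1) and transverse (X2, Y2) parts."""
    return (x1 + x2) / 2, (y1 + y2) / 2, (x1 - x2) / 2, (y1 - y2) / 2

# A fully synchronized state (x1 = x2, y1 = y2) lies on the manifold:
X1, Y1, X2, Y2 = sync_coords(0.4, 0.1, 0.4, 0.1)
print(X2, Y2)  # 0.0 0.0
```

Plotting $(X_2, Y_2)$ against time (or against $X_1$) then directly measures the distance of a trajectory from the synchronization manifold, which is how the figures distinguish on-manifold from off-manifold attractors.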
The dynamical properties of the system with no stable fixed points are studied in detail in our previous work~\cite{PhysRevE.95.062219}. \section{Multistability and the Structure of Basins of Attraction} \label{sec:Multistability} In accordance with our aim, we analyze different regimes of multistability in the system described in Eq.~\ref{eq:Definition} while the coupling parameters are varied. For the sake of simplicity, we keep two of the coupling parameters fixed at $M_1=0.01$ and $\tau_1=80$ throughout the article and discuss the changes in dynamics as the parameters $M_2$ and $\tau_2$ are varied. To that end, we first fix $\tau_2=65$ and vary $M_2$. The effects of varying $\tau_2$ with fixed $M_2$ will be discussed briefly later in the article. The numerical simulations presented in this article were performed using the Python package JITCDDE~\cite{JITCDDE}, which integrates systems of delay-differential equations using a modified form~\cite{SHAMPINE2001441} of the Bogacki-Shampine Runge-Kutta method. Varying $M_2$ leads to various regimes, each of which is characterized by its own set of coexisting attractors and their corresponding basins of attraction. Due to the large number of attractors encountered during the parameter sweep, we denote the $i^{\text{th}}$ attractor encountered by the symbol \A{i} and its corresponding basin of attraction by \B{i}. Also note that, since the phase space of a continuous time-delayed system is infinite dimensional, it is not possible to show the entire phase space in a diagram. We therefore show only slices of the phase space where the initial history of the system is identical to the initial conditions of the system: $x_i(\tau)=x_i(0)$ and $y_i(\tau)=y_i(0)$ for $-\text{max}\left\{ \tau_1, \tau_2 \right\} \le \tau < 0$. This reduces the dimension of the slice to four, which is further reduced to two by choosing $y_1(0)=y_2(0)=0.01$.
The structure of the basins of attraction in other slices of the phase space will be discussed in a later section of the article. \begin{figure} \includegraphics[width=\linewidth]{Synchronization_Manifold_2.pdf} \caption{Phase space representation of the various classes of attractors obtained on the synchronization manifold upon varying the coupling strength $M_2$. Small coupling $(M_2=0)$: limit cycle corresponding to mixed mode oscillations shown in blue; intermediate coupling $(M_2=0.00247)$: chaotic attractor corresponding to extreme events shown in red; large coupling $(M_2=0.0026)$: limit cycle corresponding to small amplitude oscillations shown in green. The inset shows a close-up view of the dynamics in the neighborhood of the origin. Other coupling parameters: $M_1=0.01$, $\tau_1=80$, $\tau_2=65$.} \label{fig:Synch} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{Regimes_1.pdf} \caption{Two dimensional slices of the phase space showing the basins of attraction at various values of the coupling strength $M_2$. For each panel, the slice is taken at $y=y_1=y_2=0.01$. Other coupling parameters: $M_1=0.01$, $\tau_1=80$, $\tau_2=65$. The color code indicates the different basins of attraction \B1,$\ldots$,\B7 corresponding to the different attractors \A1,$\ldots$,\A7 (see text).} \label{fig:Regimes} \end{figure*} \subsection{Regime 1} For $M_2=0$, the total coupling strength $M=M_1+M_2=0.01$ is still less than the minimum coupling required for the stabilization of the non-trivial fixed points (see Fig.~\ref{fig:FixedPoints}). The global attractor of the system, \A{1}, is the limit cycle on the synchronization manifold corresponding to mixed mode oscillations (see Fig.~\ref{fig:Synch}: blue curve) and therefore all initial conditions converge to it. This leads to the trivial basin structure \B{1} shown in Fig.~\ref{fig:Regimes}a. The structure of \B{1} remains unchanged until $M_2 \approx 0.0003$ (or correspondingly $M \approx 0.0103$).
\subsection{Regime 2} If $M_2$ is increased beyond $0.0003$, the reverse Hopf bifurcation stabilizes a pair of fixed points --- \A{2} and \A{3} --- placed symmetrically on either side of the synchronization manifold and makes the system tri-stable. The two new attractors form tongue-shaped basins --- \B2 and \B3 --- in the slice of phase space (see Fig.~\ref{fig:Regimes}b for an example). The trajectories starting within these tongue-shaped regions do not approach the synchronization manifold during the transient and converge directly to either of the fixed points. Note that the diagonal line in the displayed slice of phase space represents the synchronization manifold of the system. Therefore, pairs of initial conditions which are symmetrically placed with respect to the diagonal either converge to the attractor on the synchronization manifold \A{1} (and belong to the basin \B{1}); or converge to the pair of fixed points \A{2} and \A{3} (and belong to the basins \B{2} and \B{3}, respectively), which themselves are symmetric with respect to the synchronization manifold. This makes the basins of attraction symmetric, as expected from the symmetry of the system under consideration. While the majority of the points belonging to \B{2} and \B{3} are contained in the tongue-like structures, there are initial conditions which appear scattered in the area outside the tongues and yet converge to the fixed points \A{2} and \A{3}. As $M_2$ is increased further, the number of points belonging to \B{2} and \B{3} scattered outside the tongue-like structures increases (see Fig.~\ref{fig:Regimes}c). \subsection{Regime 3} \begin{figure} \includegraphics[width=0.95\linewidth]{Dynamics.pdf} \caption{Various representations of the dynamics observed between $M_2 \approx 0.00245$ and $M_2 \approx 0.00255$. The attractors of the system are shown in a three dimensional projection of the phase space in (a).
They include the red chaotic attractor \A1 on the invariant synchronization manifold shown in gray and the pair of stable fixed points \A2 and \A3. The time evolution of typical trajectories converging to the attractors \A1 and \A3 is shown in (b) and (c), respectively. Further phase space representations of the trajectories in (b) and (c) are shown in (d) and (e), respectively, where the transformed coordinates $\left( X_1, X_2 \right)$ are plotted. While the red trajectory converges quickly to the synchronization manifold, the green one comes close to the manifold and diverges away from it multiple times before converging to the fixed point.} \label{fig:Multistable} \end{figure} \begin{figure} \includegraphics[width=0.95\linewidth]{ExEv_Basin.pdf} \caption{Basin of attraction corresponding to extreme events (navy blue) in Regime 3 as seen in a two-dimensional slice of the phase space. This is a plot of that basin of attraction only, as compared to the plot of the basins of all coexisting attractors shown in Fig.~\ref{fig:Regimes}d.} \label{fig:ExEv_Basin} \end{figure} So far, the trajectories on the synchronization manifold execute mixed mode oscillations comprising several small-amplitude oscillations followed by a single large-amplitude oscillation or event. While the number of small-amplitude oscillations between two consecutive events increases as $M_2$ is increased through Regimes 1 and 2, the overall dynamics on the synchronization manifold remains periodic in these regimes. Therefore, the inter-event intervals remain constant in time throughout the long-term trajectory. However, upon increasing $M_2$ beyond $0.00245$, we enter Regime 3 and the limit-cycle corresponding to the mixed mode oscillations undergoes a period-adding cascade to become a chaotic attractor \A{4} (see Fig.~\ref{fig:Synch}: red curve).
In particular, the small-amplitude oscillations between two successive events become highly chaotic, which results in an extremely high irregularity of the inter-event intervals. These rare, recurrent and highly irregular events in such systems are known as extreme events and have been analyzed in detail in our recent work~\cite{PhysRevE.95.062219}. The transition to Regime 3 is also accompanied by the emergence of an extremely rich structure of the basins of attraction. One of the distinct qualitative changes which occurs during the transition from Regime 2 to Regime 3 is the significant increase in the number of points which are in \B{2} and \B{3} but not inside the tongues. As can be clearly seen in Fig.~\ref{fig:Regimes}d, the phase space now seems to comprise two distinct types of regions: the `pure' regions, where neighboring points belong to only one particular basin of attraction; and the `mixed' regions, where neighboring points may belong to any of the three basins of attraction. Notably, the pure regions seem to contain only points belonging to \B{2} or \B{3} and not to \B{4}. In other words, the entire basin \B{4} seems to be contained in the mixed regions of the phase space, which is illustrated by plotting only the points of \B{4} in Fig.~\ref{fig:ExEv_Basin}. The quantitative characteristics of these regions will be discussed in more detail in the following section. The emergence of those two distinct types of regions in the phase space --- denoted as `pure' and `mixed' --- also impacts the transients of the trajectories which do not start on the synchronization manifold. A trajectory which starts in one of the pure regions converges to the corresponding fixed point without repeatedly approaching the neighborhood of the synchronization manifold, hence yielding a relatively short transient.
The trajectories starting in the mixed regions, however, may approach the neighborhood of the synchronization manifold many times and trace out the close proximity of the chaotic attractor on the synchronization manifold before being ejected and finally converging to one of the stable fixed points. This leads to possibly very long transients where the dynamics of a trajectory --- which will eventually converge to a fixed point --- closely resembles the extreme event dynamics of a trajectory which has converged to the chaotic attractor on the synchronization manifold (see Fig.~\ref{fig:Multistable}). \subsection{Regime 4} On increasing the coupling strength beyond $M_2 \approx 0.00255$, the attractor on the synchronization manifold changes from the chaotic set \A{4} to a small-amplitude limit cycle \A{5} (see Fig.~\ref{fig:Synch}: green curve). This changes the long-term motion for trajectories starting on the synchronization manifold from small chaotic oscillations interspersed with irregularly appearing large events to a periodic oscillation with small amplitude. Initial conditions not starting on the synchronization manifold may still converge either to the non-trivial fixed points \A{2} and \A{3}, or to the attractor on the synchronization manifold \A{5}. The structure of the basins of attraction for this regime is shown in Fig.~\ref{fig:Regimes}e. Note that the phase space still has the mixed regions, which now mostly consist of points from the basins \B{2} or \B{3}. Nevertheless, points belonging to \B{5} can still be found in parts of the mixed regions on either side of the diagonal. This correlates with the numerical observation that, although a large majority of the initial conditions starting away from the synchronization manifold converge to either \A{2} or \A{3}, there are initial conditions not on the synchronization manifold which converge to \A{5}.
Note that as the coupling strength $M_2$ is increased, the periodicity of the limit-cycle decreases due to a reverse period-doubling cascade. \subsection{Regime 5} \begin{figure} \includegraphics[width=0.95\linewidth]{Rich_Dynamics.pdf} \caption{Various representations of the dynamics of the system observed between $M_2 \approx 0.00285$ and $M_2 \approx 0.00360$. The attractors of the system are shown in a three dimensional projection of the phase space in (a). They include the green limit cycle \A1 on the invariant synchronization manifold shown in gray, the pair of stable fixed points \A2 and \A3; and the blue and red non-synchronized chaotic attractors \A4 and \A5. Phase space representations of trajectories on the attractors \A1, \A4 and \A5 in the transformed coordinates $\left( X_1, X_2 \right)$ are plotted in (b), (c) and (d), respectively.} \label{fig:VeryMultistable} \end{figure} If the coupling strength is increased beyond $M_2 \approx 0.00285$, an additional pair of chaotic attractors, \A{6} and \A{7}, appears on either side of the synchronization manifold. The system therefore now contains a total of 5 co-existing attractors (see Fig.~\ref{fig:VeryMultistable}): a small amplitude limit-cycle \A{5}, a pair of non-trivial fixed points \A{2} and \A{3}, and a pair of non-synchronized chaotic attractors \A{6} and \A{7}. A trajectory that converges to \A{6} or \A{7} executes nearly synchronous small-amplitude oscillations interspersed with single, highly asynchronous large-amplitude oscillations. In phase space, the trajectory remains extremely close to the synchronization manifold during the small-amplitude oscillations and diverges away from it during the large-amplitude oscillations. Note that, although the trajectory exhibits dynamics similar to the attractor containing extreme events, i.e.
it comprises many small-amplitude oscillations followed by a large-amplitude oscillation, the dynamics in this case cannot be classified as extreme events, as the events are neither irregular nor rare enough. The emergence of \A{6} and \A{7} leads to an additional richness of the structure of the basins of attraction (see Fig.~\ref{fig:Regimes}f). Similar to the previous two regimes, the phase space appears to be partitioned into mixed and pure regions. However, each of the pure regions in this regime can also belong to either of the chaotic attractors \A{6} or \A{7} in addition to the previously present stable fixed points. The mixed regions, on the other hand, contain points belonging to the basins of all attractors in the system, including the small-amplitude limit cycle \A{5} on the synchronization manifold. \subsection{Regime 6} Beyond $M_2 \approx 0.00360$, the chaotic invariant sets outside the synchronization manifold are no longer stable and the only attractors which remain in the system are the fixed points \A{2} and \A{3}, and the small-amplitude limit cycle \A{5}. This is qualitatively similar to Regime~2, with the attractor on the synchronization manifold being the limit-cycle corresponding to small-amplitude instead of mixed mode oscillations. This similarity in the nature of the attractors is also reflected in the basin structure (see Fig.~\ref{fig:Regimes}g). The basins of the fixed points consist mostly of the tongue-like structures emanating from the synchronization manifold, and additional isolated points scattered elsewhere in phase space. The rest of the phase space forms the basin of attraction of the limit cycle \A{5}. Note that there are no `mixed' regions in phase space anymore. Moreover, the number of points which belong to the basins \B{2} and \B{3} and yet do not belong to the tongue-like structures decreases as the coupling strength is increased up to $M_2 \approx 0.0042$ (see Fig.~\ref{fig:Regimes}h).
On increasing $M_2$ even further, the system exhibits more interesting dynamics, including small-amplitude chaotic oscillations and the stabilization of the origin. However, a detailed analysis of the system in these regimes is beyond the scope of this paper. \begin{figure*} \includegraphics[width=\linewidth]{Bifurcation.png} \caption{Bifurcation diagrams showing the Poincar\'{e} section (obtained by fixing $x_1=x_2$ and $y_1=y_2=0.01$) of the trajectories on the synchronization manifold for varying coupling parameters $M_2$ and $\tau_2$. Parameters: (a) $\tau_2=65$, (b) $M_2=0.00245$. Common parameters: $M_1=0.01$, $\tau_1=80$.} \label{fig:Bifurcation} \end{figure*} The changes in the dynamics on the synchronization manifold presented in this section can be summarized by the bifurcation diagram in Fig.~\ref{fig:Bifurcation}. As the coupling strength increases, the stable limit-cycle corresponding to the mixed mode oscillations, \A1, on the synchronization manifold undergoes a period-adding cascade to eventually become the chaotic attractor \A4 containing extreme events. This chaotic attractor loses stability and a high-periodicity small-amplitude limit cycle, \A5, emerges, which thereafter undergoes a reverse period-doubling cascade to finally become a period-one limit-cycle. The bifurcation diagram for varying time-delay $\tau_2$ with fixed coupling strength $M_2$ is also plotted in Fig.~\ref{fig:Bifurcation}. The latter is qualitatively similar to the first one except for the direction of the changes in the dynamics. In other words, the qualitative changes observed in the system as the coupling strength is increased follow the same order as the qualitative changes observed as the time-delay is decreased. This implies that the dynamical regimes and the corresponding basin structures described in this section can be obtained by varying either of the coupling parameters or even a combination of both.
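The Poincar\'{e} sections underlying such bifurcation diagrams can be extracted from a sampled trajectory by recording level crossings with linear interpolation. The following is a generic sketch of that step only; the choice of section signal and sampling are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def poincare_values(s, v, sec):
    # Record the linearly interpolated values of the companion signal v
    # at upward crossings of the section signal s through the level `sec`.
    s = np.asarray(s, dtype=float)
    v = np.asarray(v, dtype=float)
    out = []
    for i in range(len(s) - 1):
        if s[i] < sec <= s[i + 1]:                 # upward crossing
            w = (sec - s[i]) / (s[i + 1] - s[i])   # interpolation weight
            out.append((1 - w) * v[i] + w * v[i + 1])
    return np.array(out)
```

Collecting these section values for each parameter value and plotting them against the parameter yields a bifurcation diagram of the kind shown in Fig.~\ref{fig:Bifurcation}.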
\section{Characteristics of Basins of Attraction} \label{sec:Characteristics} In the last section, we noted that the basin structure in Regimes 3, 4 and 5 partitions the phase space into pure and mixed regions. Here, we demonstrate that in these regimes the basins of attraction of certain attractors possess a riddled structure and hence are fundamentally different from those in Regimes 1, 2 and 6, where mixed regions do not exist in the phase space. We also highlight that such a property can have crucial consequences for the dynamics, particularly when the occurrence of extreme events is involved. In order to show that the basins in Regimes 3, 4 and 5 are riddled, we first compare the structure of the basin boundaries in Regimes 3, 4 and 5 with those of Regimes 2 and 6. We thereby emphasize that in Regimes 3, 4 and 5, there are regions in phase space where an arbitrarily small perturbation of the initial conditions can lead to a trajectory converging to a different attractor. \begin{figure*} \includegraphics[width=0.9\linewidth]{Pure_Mixed_1.pdf} \caption{Two dimensional slices of the phase space at $y=y_1=y_2=0.01$, showing interior points (shown in white) and boundary points (shown in black) of the basins of attraction in various dynamical regimes. For the particular Regimes 2, 3, 4, 5 and 6, the coupling strengths used are $M_2=0.0024$, $0.00247$, $0.0026$, $0.0029$ and $0.0038$, respectively (as in Fig.~\ref{fig:Regimes}).} \label{fig:Pure_Mixed} \end{figure*} We start our analysis by assigning all points in phase space to two categories with regard to their position in their respective basins of attraction: interior points and boundary points. A point is said to be an interior point if all the points in its infinitesimal neighborhood belong to the same basin of attraction as the point under consideration. All other points are classified as boundary points.
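On a discrete mesh of basin labels, this interior/boundary classification amounts to checking whether any nearest neighbor carries a different label. The following sketch implements that check; the toy label array in the test is an assumption standing in for the simulated $512 \times 512$ mesh.

```python
import numpy as np

def boundary_mask(labels):
    # A mesh point is a boundary point if any of its four nearest
    # neighbors carries a different basin label.
    labels = np.asarray(labels)
    mask = np.zeros(labels.shape, dtype=bool)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        mask |= np.roll(labels, shift, axis=(0, 1)) != labels
    # np.roll wraps around; leave the outermost frame unclassified.
    mask[0, :] = mask[-1, :] = False
    mask[:, 0] = mask[:, -1] = False
    return mask
```

For a label array split into two half-planes, the mask marks exactly the two columns adjacent to the dividing line, i.e. a one dimensional boundary curve; for a riddled basin the mask instead fills two dimensional regions.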
While the exact classification of points is not possible in numerical computations, since we always deal with a finite resolution, we approximate this classification by constructing a mesh of $512 \times 512$ points spanning the two dimensional slice of the phase space. Thereafter, we assume that the nearest neighbors of each point in the mesh belong to its infinitesimal neighborhood. The results obtained are presented in Fig.~\ref{fig:Pure_Mixed}. The accuracy of the method can be increased by starting with a finer mesh. However, it was verified using a $1024 \times 1024$ grid that the results obtained are qualitatively identical to the results presented here. Let us first analyze the phase space in Regimes 2 and 6 (see Fig.~\ref{fig:Regimes}). Most of the phase space is comprised of continuous two dimensional regions belonging to a particular basin of attraction. However, the phase space also contains numerous isolated points, each of which belongs to a particular basin of attraction, say of attractor \A{i}, but is surrounded completely by points belonging to a different basin of attraction, say of attractor \A{j}. Note that all the points surrounding the isolated point belong to the basin of attraction of the same attractor \A{j}, which, however, is different from the attractor \A{i} corresponding to the isolated point. From Fig.~\ref{fig:Pure_Mixed}, it can be seen that for Regimes 2 and 6, the boundary points (colored black) form one dimensional curves separating regions of interior points (colored white) belonging to different basins of attraction. Boundary points also mark the isolated points and their immediate neighbors. \begin{figure} \includegraphics[width=\linewidth]{Bar_Plot.pdf} \caption{Characteristics of the boundary points of the basins of attraction in various regimes. The fractions of points $f_{BA,BP}$ in the various basins of attraction which are boundary points are shown in (a) as bar plots.
Panel (b) shows a stacked histogram depicting the composition of the set of all boundary points in terms of the basins of attraction to which each of the boundary points belongs. The slices of the basins of attraction used for the analysis are taken from Fig.~\ref{fig:Regimes}.} \label{fig:Bar_Plot} \end{figure} The situation is evidently different in Regimes 3, 4 and 5, where the boundary points seem to fill up two dimensional regions in phase space, which are colored in black. Notably, the boundary points appear to cover the mixed regions of the phase space entirely, whereas the interior points cover the pure regions. This implies that every point in the mixed region has at least one point in its immediate neighborhood that belongs to a different basin of attraction than itself. Therefore, for any trajectory starting from the mixed region which converges to a particular attractor, there exists an infinitesimally small perturbation of that initial condition which would push the trajectory across a basin boundary and cause it to converge to a different attractor. Plotting the fraction of boundary points in each basin of attraction in Regimes 2 through 6 (see Fig.~\ref{fig:Bar_Plot}a), it can be inferred that the basins of attraction of attractor \A4 in Regime 3, and of attractor \A5 in Regimes 4 and 5, are completely contained in the mixed regions of the phase space, as they are entirely composed of boundary points. This indicates that attractor \A4 in Regime 3 and attractor \A5 in Regimes 4 and 5 have riddled basins of attraction, as each point belonging to the basins of attraction of these attractors has in its immediate neighborhood a point belonging to the basin of attraction of another attractor. The significance of a riddled basin in Regime 3 is greatly increased, as the attractor possessing the riddled basin corresponds to the occurrence of extreme events. The basin in consideration, \B4, is riddled by the basins \B2 and \B3, which correspond to the fixed point attractors.
Note also that an initial condition in \B2 or \B3 which belongs to the mixed region of the phase space exhibits a long transient during which it closely traces the chaotic attractor corresponding to the occurrence of extreme events before converging to one of the fixed points. This underlines an important property of the system under consideration: (a) any initial condition in the mixed region of the phase space can potentially exhibit extreme events for a long time, if not perpetually; and (b) due to the riddled nature of the basins of attraction, even a very small perturbation of initial conditions in the mixed region can change the system from exhibiting extreme events as a transient behavior to exhibiting extreme events forever. Although a stable chaotic attractor corresponding to extreme event generation does not exist in Regimes 4 and 5, having a riddled basin structure in these regimes is still important, as long transients which closely resemble extreme events may be observed for trajectories starting from the mixed regions of the phase space. Additionally, we note that Regime 5 contains two stable chaotic attractors \A6 and \A7. While the events exhibited by the trajectories converging to these attractors are not irregular enough to be classified as extreme events, they are still recurrent and have a significantly larger amplitude than the typical oscillation of the system, which may considerably affect the system. In order to illustrate this point, we plot in Fig.~\ref{fig:Bar_Plot}b stacked histograms showing the fractions of boundary points which belong to the respective basins of attraction. From Fig.~\ref{fig:Bar_Plot}a and Fig.~\ref{fig:Bar_Plot}b, it can be inferred that not only does a major fraction of \B6 and \B7 belong to the mixed regions of phase space, but also that the points belonging to \B6 and \B7 constitute a large fraction of all points in the mixed region.
This implies: (a) if behavior exhibiting a regular occurrence of events is the desired state of the system, the choice of plausible initial conditions is restricted to the small pure regions in \B6 and \B7, since other initial conditions in \B6 or \B7 are in the mixed region and hence are vulnerable to small perturbations; and (b) an initial condition in the mixed region of the phase space is most likely to result in regular behavior containing frequently occurring events. \begin{figure*} \includegraphics[width=0.9\linewidth]{Slices.pdf} \caption{The top row shows two dimensional slices of the phase space, color coded for the different basins of attraction in the system, for varying values of $y=y_1=y_2$ when the coupling strength is fixed at $M_2=0.00247$ (as in Fig.~\ref{fig:Regimes}). The bottom row shows the classification of all points as interior or boundary points. The color code for the various basins of attraction is the same as in Fig.~\ref{fig:Regimes} and Fig.~\ref{fig:Pure_Mixed}.} \label{fig:Slices} \end{figure*} In order to ensure that the observations regarding the riddled nature of the basins of attraction are not a manifestation of the specific choice of the slice in phase space used for obtaining the basins of attraction, we present plots of the basins of attraction in other slices in Fig.~\ref{fig:Slices}. These slices are obtained by choosing various values of $y=y_1=y_2$ for $M_2=0.00247$ (Regime 3). From the figure, we observe that although the size and shape of the tongues change as $y$ is varied, the qualitative structure of the basins remains consistent. Again we observe a partitioning of the phase space into pure and mixed regions, and the basin of attraction of the attractor \A4 corresponding to extreme events is completely contained in the mixed regions. A similar analysis of the phase space in the other regimes also reveals results which are in agreement with those presented previously in this section.
\begin{figure} \includegraphics[width=\linewidth]{Pair_Invariance.pdf} \caption{Basins of attraction in the two dimensional projection of the phase space shown in (a) and a zoomed-in view of a portion of its mixed region shown in (b). The color code for (a) and (b) is the same as in Fig.~\ref{fig:Regimes}. The fraction of uncertain pairs $f(\varepsilon)$ for the region in (b) is plotted against the distance $\varepsilon$ between the points of a pair in (c) using the solid red dots. The dashed line gives the best fit for the points in red.} \label{fig:Pair_Invariance} \end{figure} A well-known method to compute the dimension of a fractal basin boundary is the computation of the uncertainty exponent $\alpha$~\cite{GREBOGI1983415}. In order to compute the exponent $\alpha$ in our system, we choose a part of the mixed region in the two dimensional slice of the phase space (for the zoomed-in version of the selected region with a resolution of $10^{-12}$, see Fig.~\ref{fig:Pair_Invariance}(b)). We then choose a distance $\varepsilon$ and randomly select 1000 pairs of initial conditions from the region such that the distance between the members of each pair is $\varepsilon$. For varying values of $\varepsilon$, we then plot the fraction $f(\varepsilon)$ of pairs of initial conditions for which the two initial conditions of a pair converge to different attractors. The expected relation between $f(\varepsilon)$ and $\varepsilon$ for a fractal basin boundary is \begin{equation} f(\varepsilon) \sim \varepsilon^\alpha, \label{eq:Uncertainty} \end{equation} where the uncertainty exponent $\alpha$ is the difference between the dimension of the state space and the dimension of the basin boundary. Though this method of final state sensitivity has been developed only for systems defined in a finite dimensional phase space, we believe that it also provides similar insights into the basin structure of systems in an infinite dimensional phase space.
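The final-state-sensitivity procedure just described can be sketched as follows: sample pairs of initial conditions at separation $\varepsilon$, record the fraction whose members converge to different attractors, and fit the slope of $\log f$ against $\log \varepsilon$. The `classify` function maps initial conditions to attractor labels; the toy classifier in the test (a straight basin boundary, for which $\alpha = 1$ exactly) is an assumption used only to make the sketch executable.

```python
import numpy as np

def uncertainty_exponent(classify, eps_values, n_pairs=5000, seed=0):
    rng = np.random.default_rng(seed)
    fractions = []
    for eps in eps_values:
        # Random base points in a reference box, partners at distance eps.
        p = rng.uniform(-1.0, 1.0, size=(n_pairs, 2))
        ang = rng.uniform(0.0, 2 * np.pi, size=n_pairs)
        q = p + eps * np.column_stack([np.cos(ang), np.sin(ang)])
        # A pair is "uncertain" if its members reach different attractors.
        fractions.append(np.mean(classify(p) != classify(q)))
    # Slope of log f(eps) vs log eps estimates the uncertainty exponent.
    slope, _ = np.polyfit(np.log(eps_values), np.log(fractions), 1)
    return slope
```

For a riddled basin the fitted slope is close to zero, i.e. $f(\varepsilon)$ barely decreases as $\varepsilon$ shrinks, whereas for a smooth one dimensional boundary in a two dimensional slice it is close to one.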
Our analysis shows that for the system under consideration, $\alpha = 7.476 \times 10^{-7}$ for $M_2=0.00247$ (see Fig.~\ref{fig:Pair_Invariance}). This value is very close to zero, implying that the dimension of the basin boundary is approximately equal to the dimension of the state space. This is in accordance with the results shown in Fig.~\ref{fig:Pure_Mixed}, where the boundary points seemed to span a two dimensional region in the two dimensional slice of the phase space. Our results also agree with previous studies of riddled basins of attraction, where the uncertainty exponent $\alpha$ has been reported to be approximately zero~\cite{doi:10.1063/1.4954022}. Although the coupling strength chosen for Fig.~\ref{fig:Pair_Invariance} is in Regime 3, the results for Regimes 4 and 5, where riddled basins are also observed, are similar to the ones presented here. \section{Conclusions} \label{sec:Conclusions} In this study, we have explored in detail the various regimes of multistability and the structure of the corresponding basins of attraction exhibited by a system of two identical FitzHugh-Nagumo units connected to each other by two coupling delays. In our analysis, we have focused on a parameter interval which includes the regime where this system exhibits extreme events. Depending on the coupling strength, we obtain up to 5 different co-existing attractors. Due to the symmetry of the system, one of the attractors is located on the synchronization manifold, while the other attractors lie outside this manifold. We find that the basin structure of the system becomes progressively richer and more complex as we approach the parameter regime where extreme events are observed. While many basins of attraction are fractal, we also find basins of attraction which are riddled. The significance of this result is increased by the fact that one of the riddled basins corresponds to the extreme event dynamics.
To classify these basins as riddled, we compute the uncertainty exponent, which is found to be very close to zero, giving a strong indication of a riddled basin. Although riddled basins have been reported previously in many systems, our investigation is, to the best of our knowledge, the first evidence of a riddled basin in an infinite dimensional system such as a delay-coupled system. Additionally, we have shown that the method of final state sensitivity, which was originally developed for finite dimensional systems, can be successfully employed in the case of infinite-dimensional systems, where the computation of basins of attraction is particularly difficult. In agreement with the findings of previous studies, we show that in the case of a riddled basin the phase space can be divided into pure and mixed regions. A crucial aspect of our analysis is that one of the basins which shows riddling belongs to an attractor which contains extreme events. This basin of attraction is completely confined to the mixed regions of the phase space. This has an important consequence for the overall dynamics: While any trajectory starting from the pure regions in phase space leads to a safe dynamics far away from extreme events, the trajectories starting in the mixed regions of phase space may or may not converge to the state containing extreme events. Initial conditions in the mixed region are extremely sensitive with respect to perturbations: already very tiny perturbations would be sufficient to push the trajectory to a dynamics which contains extreme events. Therefore, there is a high risk of ending up in a state of extreme events, and it is not predictable which initial conditions lead to such a state. \section*{Acknowledgments} The authors would like to thank G. Ansmann, A. Choudhary, P. H\"ovel, E. Knobloch, K. Lehnertz, C. Masoller, E. Sch\"oll, S. Wieczorek and J. A. Yorke for fruitful discussions and critical suggestions.
This work was supported by the Volkswagen Foundation (Grant No.~88459). The simulations were performed at the HPC Cluster CARL, located at the University of Oldenburg (Germany) and funded by the DFG through its Major Research Instrumentation Program (INST 188/157-1 FUGG) and the Ministry of Science and Culture (MWK) of the State of Lower Saxony.
\section{Introduction} \label{sec:intro} Call-by-value is a common evaluation strategy of many functional programming languages, whether full-fledged or fragments of proof assistants. Such languages and their evaluation strategies can be formalised operationally in terms of an underlying lambda calculus and its reduction strategies. As shown in \cite{Plo75}, the classic lambda calculus $\lamK$ \cite{Bar84} is inadequate to formalise call-by-value evaluation as defined by Landin's SECD abstract machine. The adequate calculus is the lambda-value calculus $\lambda_{\vv}$. The pure (and untyped) version \cite{RP04} is the core that remains after stripping away built-in primitives whose main purpose is to facilitate the encoding of programs as terms of the calculus. Hereafter we write $\lambda_{\vv}$ for the pure version. Unfortunately, the lambda-value calculus, and by extension its pure version, are considered defective on several fronts for formalising call-by-value evaluation at large, and many alternative calculi have been proposed with various aims, \emph{e.g.}\ \cite{FF86,HZ09,Mog91,EHR91,AK10,AK12,AP12}. We do not wish to propose yet another calculus. These proposals vary the calculus to fit an intended call-by-value model, but this is one of the choices for investigations on full abstraction. The other is to vary the model to fit the intended calculus \cite[p.1]{Cur07}. The questions are: What does $\lambda_{\vv}$ model? Is its import larger than call-by-value evaluation under SECD? To answer these questions and avoid `the mismatch between theory [the calculus] and practice [the model]' \cite[p.2]{Abr90} we have to first address the open problem of whether $\lambda_{\vv}$ has a `standard theory'. A central piece of a standard theory is the notion of solvability which is synonymous with operational relevance. Let us elaborate these ideas first and discuss their utility further below. 
Recall that a lambda calculus consists of a set of terms and of proof-theories for conversion and reduction of terms. Conversion formalises intensional (computational) equality and reduction formalises directed computation. A term converts/reduces to another term (both terms are in a conversion/reduction relation) iff this fact can be derived in the conversion/reduction proof-theory (Section~\ref{sec:prelim} illustrates). The relations must be confluent for the proof-theory to be consistent. In the calculus the reduction relation is full-reducing and `goes under lambda'. It is possible to reason algebraically at any scope where free variables (which stand for unknown operands in that scope) occur. Operational equivalence can be established for `arbitrary terms, not necessarily closed nor of observable type' \cite[p.3]{Cur07}. Solvability is a basic concept in lambda calculus. It appears 18 pages after the definition of terms in the standard reference \cite{Bar84} (terms are defined on page 23 and solvability on page 41). Solvability was first studied in \cite{Bar71,Bar72,Wad76} and stems from the realisation that not all diverging terms (\ie\ terms whose reduction does not terminate) are operationally irrelevant (\ie\ meaningless, useless, of no practical use, etc.). For a start, not all of them are equal: an inconsistent proof-theory results from extending the conversion proof-theory with equations between all diverging terms. Indeed, some diverging terms can be applied to suitable operands such that the application converges to a definite final result of the calculus (a `normal form' in the jargon). For other diverging terms the application diverges no matter to how many or to which operands they are applied. Solvable terms are therefore terms from which a normal form can be obtained when used as functions. The name `solvable' stems from their characterisation as solutions to a conversion. By definition, terms that directly convert to a normal form are solvable.
In contrast, unsolvable terms are the terms that are operationally irrelevant. A consistent proof-theory results from extending the conversion proof-theory with equations between all unsolvables. This consistent extension is satisfied by well-known models where unsolvables correspond to the least-defined element of the model. Any further extension that includes the equations between unsolvables and is consistent is called \emph{sensible} in the jargon. Finally, solvable terms can be characterised operationally: there is a reduction strategy named `head reduction' that converges iff the input term is solvable. To summarise: $\lamK$ has a definition of solvability synonymous with operational relevance, a sensible extended proof-theory, sensible models (\ie\ models of the sensible extension), and an operational characterisation of solvables. All these ingredients are referred to in \cite[p.2]{Abr90} as a `standard theory'. However, in that work $\lamK$'s standard theory is criticised as a basis for functional programming languages because program results are not normal forms, there are no canonical initial models, etc. (Strictly speaking, however, $\lamK$ is as unfit as Turing Machines as a basis for practical programming languages.) A `lazy' lambda calculus is proposed which is closer to a non-strict functional programming language, but that divorces solvability from operational relevance. The latter is modified according to the notion of `order of a term' \cite{Lon83}. Broadly, the order is the supremum (ordinal) number of operands accepted by the term in the following inductive sense: if the term converts to $\lambda x.M$ then it accepts $n+1$ operands where $n$ is the number of operands accepted by $M$. Otherwise the term has order 0. Operationally irrelevant terms are only the unsolvables of order 0. Other unsolvables are operationally relevant and the extended proof-theory that equates unsolvables of order $n>0$ is inconsistent. 
Following similar steps, \cite{PR99,EHR91,EHR92,RP04} describe a call-by-value calculus with a proof-theory induced by operational equivalence of terms under SECD reduction. A definition of solvability, called $v$-solvability, is proposed for $\lambda_{\vv}$. This definition is unsatisfactory because it does not adapt $\lamK$'s original definition of solvable term, namely, `the application of the term to suitable operands converts to a normal form'. It adapts a derived definition, namely, `the application of the term to suitable operands converts to the identity term'. The two definitions are equivalent in $\lamK$ but not in $\lambda_{\vv}$. Consequently, $v$-solvability does not capture operational relevance in $\lambda_{\vv}$, some normal forms of $\lambda_{\vv}$ (definite results) are $v$-unsolvable, and the extended proof-theory is not sensible. Moreover, the operational characterisation of $v$-solvables involves a reduction strategy of $\lamK$, not of $\lambda_{\vv}$, and the notion of order used is not defined in terms of $\lambda_{\vv}$'s conversion in a way analogous to \cite{Lon83}. The blame is put on $\lambda_{\vv}$'s nature, and continues to be so placed in recent related work \cite{AP12,Gue13,CG14,GPR15}. We show that $\lambda_{\vv}$ does indeed have a standard theory. First we revisit the original definition of solvability in $\lamK$ and generalise it by connecting it with the notion of effective use of an arbitrary (closed or open) term. We then revisit $v$-solvability and show that it does not capture operational relevance in $\lambda_{\vv}$ but rather `transformability', \ie\ the ability to send a term to a chosen value. (Values are not definite results of $\lambda_{\vv}$ but a requirement for confluence.) We introduce $\lambda_{\vv}$-solvability as the ability to use the term effectively.
Our \mbox{$\lambda_{\vv}$-solvability} captures transformability and `freezability', \ie\ the ability to send a term to a normal form, albeit not one of our choice. The intuition is that terms can also be solved by sending them to normal forms that differ operationally from divergent terms at a point of potential divergence. The link between solvability and effective use is a definition of order that uses $\lambda_{\vv}$'s conversion, and a Partial Genericity Lemma which states that $\lambda_{\vv}$-unsolvables of order $n$ are generic (can be replaced by any term) for orders greater than or equal to $n$. The $\lambda_{\vv}$-unsolvables of the same order can be equated without loss of consistency, and so we construct a consistent extension which we call $\mathcal{V}$. Our proof of the Partial Genericity Lemma is based on the proof of $\lamK$'s Genericity Lemma presented in \cite{BKV00} that uses origin tracking. An ingredient of the proof is the definition of a complete reduction strategy of $\lambda_{\vv}$ which we call `value normal order' because we have defined it by adapting to $\lambda_{\vv}$ the results in \cite{BKKS87} relative to the complete `normal order' strategy of $\lamK$. Value normal order relies on what we call `chest reduction' and `ribcage reduction' in the spirit of the anatomical analogy for terms in \cite{BKKS87}. The last two strategies illustrate that standard reduction sequences fall short of capturing all complete strategies of $\lambda_{\vv}$, and that a result analogous to `quasi-needed reduction is normalising' \cite[p.208]{BKKS87} is missing for $\lambda_{\vv}$. An operational characterisation of solvables in terms of a reduction strategy of $\lambda_{\vv}$ is complicated but we believe possible (Section~\ref{sec:operational-characterisation}).
To summarise, our contributions are: a definition of solvability in $\lambda_{\vv}$ that is synonymous with operational relevance, the Partial Genericity Lemma, the reduction strategies value normal order, chest reduction and ribcage reduction, and finally the sensible proof-theory where unsolvables of the same order are equated. The standard theory of $\lambda_{\vv}$ has practical consequences other than reducing the mismatch between theory and practice, or the operational formalisation of call-by-value. Terms with the same functional result that may have different sequentiality under different reduction strategies can be distinguished operationally. Models for sequentiality exist \cite{BC82}. The full-reducing and open-terms aspect of the calculus has applications in program optimisation by partial evaluation and type checking in proof assistants \cite{Cre90}, in the \textsc{PoplMark} challenge \cite{MMM05}, in reasoning within local open scopes \cite{Cha12}, etc. The computational overhead incurred by proofs-by-reflection can be mitigated by reducing terms fully \cite{GL02}. Finally, that some non-terminating terms (unsolvables) can be equated without loss of consistency is of interest to proof assistants with a non-terminating full-reducing programmatic fragment, \emph{e.g.}\ \cite{ACPPW08}. This paper can be read by anyone able to follow the basic conventional lambda calculus notions and notations that we recall in Section~\ref{sec:prelim}. The first part of the paper provides the necessary exegesis and intuitions on $\lamK$, $\lambda_{\vv}$, solvability, effective use, $v$-solvability, and introduces our $\lambda_{\vv}$-solvability. The more technical second part involves the proof of the Partial Genericity Lemma and the consistent proof-theory. Some background material and routine proofs are collected in the appendix. References to the latter are labelled `App.' followed by a section number.
\section{Overview of \texorpdfstring{$\lamK$}{lambda-K} % and \texorpdfstring{$\lambda_{\vv}$}{lambda-V}} \label{sec:prelim} This preliminaries section is of necessity terse. Save for the extensive use of EBNF grammars to define sets of terms, we follow the definitions and notational conventions of \cite{Bar84,HS08} for $\lamK$ and of \cite{Plo75} for $\lambda_{\vv}$. The book \cite{RP04} collects and generalises both calculi. The set of lambda terms is $\Lambda::=x~|~(\lambda x.\Lambda)~|\ (\Lambda\,\Lambda)$ where `$x$' stands for an element of a countably infinite set of variables, a symbol we overload in grammars as the non-terminal for that set. Uppercase, possibly primed letters $M$, $M'$, $N$, etc., will stand for terms. In words, a term is a variable, or an abstraction $(\lambda x.M)$ with bound variable $x$ and body $M$, or the application $(MN)$ of an operator $M$ to an operand $N$. We follow the common precedence and association convention where applications associate to the left and application binds stronger than abstraction. Hence, we can drop parentheses and write $(\lambda x.x\,y)\,p\,q\,(\lambda x.x)$ rather than $((((\lambda x.(x\,y))p)q)(\lambda x.x))$, and we can write $\Lambda::=x~|~\lambda x.\Lambda~|\ \Lambda\,\Lambda$, and $\lambda x.M$, and $MN$. For brevity we write $\lambda x_1\ldots x_n.M$ instead of $\lambda x_1.\lambda x_2.\ldots\lambda x_n.M$. We write $\mathrm{FV}$ for the function that delivers the set of free variables of a term. We assume the notions of bound and free variable and write $\equiv$ for the identity relation on terms modulo renaming of bound variables.\footnote{We are following here the convention of Appendix~C in \cite{Bar84}, not to be confused with the `Barendregt convention' or `hygiene rule' of \cite{Bar90} where bound variables and free variables must differ.} For example, $\lambda x.xz \equiv \lambda y.yz$. We also abuse $\equiv$ to define abbreviations, \emph{e.g.}\ $\Term{I}\equiv \lambda x.x$.
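As a running illustration (our own encoding, not part of the paper), lambda terms can be represented as tagged tuples, with $\mathrm{FV}$ computed by the usual recursion on the grammar:

```python
def Var(x): return ('var', x)          # x
def Lam(x, b): return ('lam', x, b)    # lambda x. b
def App(m, n): return ('app', m, n)    # m n

def fv(t):
    """FV(t): the set of free variables of a term."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return fv(t[2]) - {t[1]}
    return fv(t[1]) | fv(t[2])

# lambda x. x y has the single free variable y; I = lambda x. x is closed
print(fv(Lam('x', App(Var('x'), Var('y')))))    # {'y'}
print(fv(Lam('x', Var('x'))) == set())          # True
```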
Like \cite{CF58,HS08}, we write $\cas{N}{x}{M}$ for the capture-avoiding substitution of $N$ for the free occurrences of $x$ in $M$. We write $\Lambda^0$ for the set of closed lambda terms, \ie\ terms $M$ such that $\mathrm{FV}(M)=\emptyset$. We use the same postfix superscript for the operation on a set of terms that delivers the subset of closed terms. The set of values (non-applications) is $\mathsf{Val}::=x~|~\lambda x.\Lambda$. The set of closed values is $\mathsf{Val}^0$ and consists of closed abstractions. A context $\ctx{C}[\enspace]$ is a term with one hole, \emph{e.g.}\ $\ctx{C}[\enspace] \equiv \lambda x.[\enspace]$. Plugging a term into the hole may involve variable capture, \emph{e.g.}\ $\ctx{C}[\lambda y.x] \equiv \lambda x.\lambda y.x$. The conversion/reduction proof-theories of $\lamK$ and $\lambda_{\vv}$ can be presented as instances of the Hilbert-style proof-theory shown in Fig.~\ref{fig:lambda-calculi}, which is parametric (\emph{cf.}\ \cite{RP04}) on the set $\mathbb{P}$ of permissible operands $N$ in the contraction rule ($\beta$) that describes the conversion/reduction of the term $(\lambda x.B)N$, that is, the application of an abstraction (a function) to an operand. Operands are arbitrary terms in $\lamK$ but restricted to values in $\lambda_{\vv}$, which means that $\lambda_{\vv}$ has fewer conversions/reductions than $\lamK$.
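The capture-avoiding substitution $\cas{N}{x}{M}$ can be sketched as follows, under the assumption (ours, not the paper's) that fresh names $v_0, v_1, \ldots$ are available for renaming binders:

```python
import itertools

def Var(x): return ('var', x)
def Lam(x, b): return ('lam', x, b)
def App(m, n): return ('app', m, n)

def fv(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return fv(t[2]) - {t[1]}
    return fv(t[1]) | fv(t[2])

def fresh(avoid):
    """First name v0, v1, ... not in `avoid` (an assumed naming scheme)."""
    return next(f'v{i}' for i in itertools.count() if f'v{i}' not in avoid)

def subst(n, x, m):
    """[N/x]M: substitute n for the free occurrences of x in m, avoiding capture."""
    if m[0] == 'var':
        return n if m[1] == x else m
    if m[0] == 'app':
        return App(subst(n, x, m[1]), subst(n, x, m[2]))
    y, b = m[1], m[2]
    if y == x:                        # x is bound here: nothing free to substitute
        return m
    if y in fv(n) and x in fv(b):     # rename the binder to avoid capturing y
        z = fresh(fv(n) | fv(b))
        y, b = z, subst(Var(z), y, b)
    return Lam(y, subst(n, x, b))

# [y/x](lambda y. x): the binder is renamed so the free y is not captured
print(subst(Var('y'), 'x', Lam('y', Var('x'))))   # ('lam', 'v0', ('var', 'y'))
```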
\begin{figure}[ht] \begin{mathpar} \inferrule*[Left=($\beta$)]% { N \in \mathbb{P}}% {(\lambda x.B)N = \cas{N}{x}B}% \and % \inferrule*[Left=($\mu$)]% {N = N'}% {M\,N = M\,N'}% \and % \inferrule*[Left=($\nu$)]% {M = M'}% {M\,N = M'\,N}% \and% \inferrule*[Left=($\xi$)]% {B = B'}% {\lambda x.B = \lambda x.B'}% \\ \inferrule*[Left=($\rho$)]% { }% {M = M}% \and% \inferrule*[Left=($\tau$)]% {M = N \quad N = P}% {M = P}% \and% \inferrule*[Left=($\sigma$)]% {M = N}% {N = M}% \end{mathpar} \vspace{0.5cm} \begin{tabular}{l|l|l|l} Theory & $=$ & $\mathbb{P}$ & discarded rules \\ \hline\hline $\lamK$ conversion & $=_{\beta}$ & $\Lambda$ & none \\ $\lamK$ multiple-step reduction & $\mrel{\beta}$ & $\Lambda$ & $\sigma$ \\ $\lamK$ single-step reduction & $\rel{\beta}$ & $\Lambda$ & $\rho$, $\tau$, $\sigma$ \\ $\lambda_{\vv}$ conversion & $=_{\beta\va}$ & $\mathsf{Val}$ & none \\ $\lambda_{\vv}$ multiple-step reduction & $\mrel{\beta\va}$ & $\mathsf{Val}$ & $\sigma$ \\ $\lambda_{\vv}$ single-step reduction & $\rel{\beta\va}$ & $\mathsf{Val}$ & $\rho$, $\tau$, $\sigma$ \end{tabular} \caption{$\lamK$ and $\lambda_{\vv}$ proof-theories.} \label{fig:lambda-calculi} \end{figure} \begin{figure}[ht] \begin{tabular}{lllll} Set &&& Description & Abbreviation in the text\\ \hline\hline $\Lambda$ & $::=$ & $x~|~\lambda x.\Lambda~|\ \Lambda\,\Lambda$ & lambda terms \\ $\mathsf{Val}$ & $::=$ & $x~|~\lambda x.\Lambda$ & values \\ $\mathsf{Neu}$ & $::=$ & $x\,\Lambda\,\{\Lambda\}^*$ & $\lamK$ neutrals \\ $\mathsf{NF}$ & $::=$ & $\lambda x.\mathsf{NF}\ |\ x\,\{\mathsf{NF}\}^*$ & $\lamK$ normal forms & {\mbox{$\betaK$-nf}}s (singular \mbox{$\betaK$-nf}) \\ $\HNF$ & $::=$ & $\lambda x.\HNF~|~x\,\{\Lambda\}^*$ & head normal forms & {hnf}s (singular hnf) \\ $\mathsf{Neu}\V$ & $::=$ & $\mathsf{Neu}~|~\mathsf{Block}\,\{\Lambda\}^*$ & $\lambda_{\vv}$ neutrals \\ $\mathsf{Block}$ & $::=$ & $(\lambda x.\Lambda)\mathsf{Neu}\V$ & blocks \\ $\V\NF$ & $::=$ & $x\ |\ \lambda x.\V\mathsf{NF}\ |\ 
\mathsf{Stuck}$ & $\lambda_{\vv}$ normal forms & {\mbox{$\betaV$-nf}}s (singular \mbox{$\betaV$-nf}) \\ $\mathsf{Stuck}$ & $::=$ & $x\,\V\NF\,\{\V\NF\}^*$ & stucks \\ & $|$ & $\mathsf{Block}\mathsf{NF}\,\{\V\NF\}^*$ & \\ $\mathsf{Block}\mathsf{NF}$ & $::=$ & $(\lambda x.\V\NF)\,\mathsf{Stuck}$ & blocks in \mbox{$\betaV$-nf} \end{tabular} \caption{Sets of terms.} \label{fig:lam-sets} \end{figure} \begin{figure}[ht] \begin{tabular}{llll} Abbreviation & Term & has \mbox{$\betaK$-nf} & has \mbox{$\betaV$-nf} \\ \hline\hline $\Term{I}$ & $\lambda x.x$ & yes & yes \\ $\Term{K}$ & $\lambda x.\lambda y.x$ & yes & yes \\ $\Term{\Delta}$ & $\lambda x.xx$ & yes & yes \\ $\Term{\Omega}$ & $\Term{\Delta}\DELTA$ & no & no \\ $\Term{U}$ & $\lambda x.\Term{B}$ & no & yes \\ $\Term{B}$ & $(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}$ & no & yes \end{tabular} \caption{Glossary of particular terms.} \label{fig:lam-glossary} \end{figure} In $\lambda_{\vv}$ the rule ($\beta$) restricted to operand values is named ($\betaV$). The term $(\lambda x.B)N$ is called a $\beta$-redex iff $N\in\Lambda$, and a $\betaV$-redex iff $N\in\mathsf{Val}$. A term is a $\beta$-normal-form (hereafter abbrev. \mbox{$\betaK$-nf}) iff it has no $\beta$-redexes. A term is a \mbox{$\betaV$-nf}\ iff it has no $\betaV$-redexes. The inference rules are: compatibility ($\mu$) ($\nu$) ($\xi$), reflexivity ($\rho$), transitivity ($\tau$), and symmetry ($\sigma$). The table underneath names the proof-theory obtained, and the relation symbol, for given $\mathbb{P}$ and rules. The conversion relation includes the reduction relation. A term $M$ has a \mbox{$\betaK$-nf}\ $N$ when $M =_{\beta} N$ and $N$ is a \mbox{$\betaK$-nf}. A term $M$ has a \mbox{$\betaV$-nf}\ $N$ when $M =_{\beta\va} N$ and $N$ is a \mbox{$\betaV$-nf}. A term $M$ has a value when $M =_{\beta\va} N$ and $N\in\mathsf{Val}$. 
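The difference between the two contraction rules is easy to state operationally: a $\beta$-redex accepts any operand, a $\betaV$-redex only a value. A small sketch, in our own tuple encoding of terms:

```python
def Var(x): return ('var', x)
def Lam(x, b): return ('lam', x, b)
def App(m, n): return ('app', m, n)

def is_value(t):
    """Val ::= x | lambda x.Lambda, i.e. the non-applications."""
    return t[0] in ('var', 'lam')

def is_beta_redex(t):
    """(lambda x.B) N with N an arbitrary term (P = Lambda)."""
    return t[0] == 'app' and t[1][0] == 'lam'

def is_betaV_redex(t):
    """(lambda x.B) N with N a value (P = Val)."""
    return is_beta_redex(t) and is_value(t[2])

I = Lam('x', Var('x'))
t1 = App(I, I)           # operand is a value: both a beta- and a betaV-redex
t2 = App(I, App(I, I))   # operand I I is an application: beta-redex only
print(is_beta_redex(t1), is_betaV_redex(t1))   # True True
print(is_beta_redex(t2), is_betaV_redex(t2))   # True False
```

The second example shows concretely why $\lambda_{\vv}$ has fewer conversions/reductions than $\lamK$.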
All proof-theories are consistent (not all judgements are derivable) due to confluence (a term has at most one \mbox{$\betaK$-nf}\ and at most one \mbox{$\betaV$-nf}). Fig.~\ref{fig:lam-sets} defines sets of terms and Fig.~\ref{fig:lam-glossary} defines abbreviations of terms used in the following sections. A full table of sets of terms and abbreviations of terms is provided in App.~\ref{app:full-gloss}. Observe that every term of $\Lambda$ has the form $\lambda x_1\ldots x_n.\,H\,M_1\cdots M_m$ where $n\geq0$, $m\geq0$, and $M_1\in\Lambda$, \ldots, $M_m\in\Lambda$. The head term $H$ is either a `head variable' $x$ (which may or may not be one of $x_1 \ldots x_n$) or an application $(\lambda x.B)N$ (which is a redex iff $N\in\mathbb{P}$). The set $\mathsf{Neu}$ of \emph{neutrals} of $\lamK$ contains applications $x\,M_1\cdots M_n$ with $n\geq 1$. The expression $\{\Lambda\}^*$ in the grammar stands for zero or more occurrences of $\Lambda$. The applications associate as $(\ldots ((x\,M_1)M_2)\cdots M_n)$ according to the standard convention. The set $\mathsf{NF}$ of {\mbox{$\betaK$-nf}}s consists of abstractions with bodies in \mbox{$\betaK$-nf}, free variables, and neutrals in \mbox{$\betaK$-nf}. According to the grammar, every \mbox{$\betaK$-nf}\ has the form $\lambda x_1 \ldots x_n.x\,N_1\cdots N_m$ where $n\geq0$, $m \geq 0$, $N_1\in\mathsf{NF}$, \ldots, $N_m\in\mathsf{NF}$, and $x$ may or may not be one of $x_1\ldots x_n$. The set $\HNF$ of head normal forms (abbrev. {hnf}s) consists of terms that differ from {\mbox{$\betaK$-nf}}s in that $N_1\in\Lambda$, \ldots, $N_m\in\Lambda$. Clearly, $\mathsf{NF}\subset\HNF$. 
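The head form $\lambda x_1\ldots x_n.\,H\,M_1\cdots M_m$ described above can be computed by peeling binders and then the application spine. The sketch below uses our own tuple encoding and function names:

```python
def Var(x): return ('var', x)
def Lam(x, b): return ('lam', x, b)
def App(m, n): return ('app', m, n)

def decompose(t):
    """Split t as lambda x1...xn. H M1...Mm, returning (binders, head, args).

    The head H is a variable or an application (lambda x.B)N, as in the text."""
    binders = []
    while t[0] == 'lam':
        binders.append(t[1])
        t = t[2]
    args = []
    while t[0] == 'app':
        args.append(t[2])
        t = t[1]
    args.reverse()
    if t[0] == 'var' or not args:    # head variable (or a bare abstraction body)
        return binders, t, args
    # leftmost operator is an abstraction: the head is the application (lambda x.B) M1
    return binders, App(t, args[0]), args[1:]

I = Lam('x', Var('x'))
Delta = Lam('x', App(Var('x'), Var('x')))
Omega = App(Delta, Delta)

# lambda x. x I Omega: head variable x, arguments I and Omega (so it is a hnf)
print(decompose(Lam('x', App(App(Var('x'), I), Omega))))
# I y z: the head is the redex I y, with remaining argument z
print(decompose(App(App(I, Var('y')), Var('z'))))
```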
Some examples: $\lambda x.\Term{I}$ is a \mbox{$\betaK$-nf}\ and a hnf, $\lambda x.\Term{I}\Term{\Delta}$ is not a \mbox{$\betaK$-nf}\ (it contains the $\beta$-redex $\Term{I}\,\Term{\Delta}$) nor a hnf\ (it has no head variable), $\lambda x.\,x\,\Term{I}\Term{\Delta}$ is not a \mbox{$\betaK$-nf}\ but it is a hnf, and both $x\,(\lambda x.\,\Term{I})$ and $x\,\Term{\Omega}$ are neutrals, with only the first in \mbox{$\betaK$-nf}. The set $\mathsf{Neu}\V$ of neutrals of $\lambda_{\vv}$ contains the neutrals $\mathsf{Neu}$ of $\lamK$ and blocks applied to zero or more terms. The set $\mathsf{Block}$ of \emph{blocks} contains applications $(\lambda x.B)N$ where $N\in\mathsf{Neu}\V$. These are applications that do not convert to a $\betaV$-redex and are therefore blocked. (Our blocks differ from the `head blocks' of \cite[p.8]{RP04} and the `pseudo redexes' of \cite[p.4]{HZ09} which require $N\not\in\mathsf{Val}$ and so include terms like $(\lambda x.B)(\Term{I}\,\Term{I})$ that convert to a $\betaV$-redex.) The set $\V\mathsf{NF}$ of {\mbox{$\betaV$-nf}}s contains variables, abstractions in \mbox{$\betaV$-nf}, and \emph{stuck terms} (`stucks' for short) which are neutrals of $\lambda_{\vv}$ in \mbox{$\betaV$-nf}. The set $\mathsf{Stuck}$ of stucks contains $\mathsf{Neu}$ neutrals of $\lamK$ in \mbox{$\betaV$-nf}\ and blocks in \mbox{$\betaV$-nf}. According to the grammar, every \mbox{$\betaV$-nf}\ has the form $\lambda x_1 \ldots x_n.H\,Z_1\cdots Z_m$ with $n\geq0$, $m\geq 0$, $Z_1\in\V\mathsf{NF}$, \ldots, $Z_m\in\V\mathsf{NF}$, and $H$ either a variable or a block in \mbox{$\betaV$-nf}. Some examples: $x\,\Term{\Omega}$ is a neutral not in \mbox{$\betaV$-nf}, $x\,\Term{\Delta}$ is a neutral in \mbox{$\betaV$-nf}\ (a stuck), $(\lambda x.y)(x\,\Term{\Omega})$ is a block not in \mbox{$\betaV$-nf}, and $(\lambda x.y)(x\,\Term{\Delta})$ is a block in \mbox{$\betaV$-nf}\ (a stuck). A \emph{reduction strategy} of $\lamK$ (resp. 
of $\lambda_{\vv}$) is a partial function that is a subrelation of $\mrel{\beta}$ (resp. of $\mrel{\beta\va}$\,). A reduction strategy is \emph{complete} with respect to a notion of irreducible term when the strategy delivers the irreducible term iff the input term has one, diverging otherwise. A reduction strategy is \emph{full-reducing} when the notion of irreducible term is a \mbox{$\betaK$-nf}\ (resp. \mbox{$\betaV$-nf}). The Quasi-Leftmost Reduction Theorem \cite[Thm.~3.22]{HS08} states, broadly, that any reduction strategy of $\lamK$ that eventually contracts the leftmost redex is full-reducing and complete. One such well-known strategy is leftmost reduction \cite{CF58}, also known as leftmost-outermost reduction (when referring to the redex's position in the abstract syntax tree of the term) or, more commonly, as normal order. The Standardisation Theorem \cite[Thm.~3]{Plo75} guarantees that there are full-reducing and complete strategies of $\lambda_{\vv}$. One such strategy is described in \cite{RP04} and discussed in Section~\ref{sec:value-normal-order}. \section{Solvability reloaded} \label{sec:lamK-solv} As explained in the introduction, a term is solvable iff a normal form can be obtained from it when used as a function. Solvability is usually defined first for closed terms and then extended to open terms. \begin{defi}[\textsc{SolN}] A term $M\in\Lambda^0$ is solvable in $\lamK$ iff there exists $N\in\mathsf{NF}$ and there exist operands $N_1\in\Lambda$, \ldots, $N_k\in\Lambda$ with $k\geq0$ such that $M\,N_1\,\cdots\,N_k =_{\beta} N$. \end{defi} This definition is the seminal one on page~87 of \cite{Bar71}.\footnote{The provisos $M\in\Lambda^0$ and $k\geq0$ are implicit in the original definition due to the context of the thesis (closed-term models) and its subscript convention. They are explicit in later definitions \cite{Bar72,Wad76,Bar84}. The order of existential quantifiers is immaterial. 
The original definition says `$M\,N_1\cdots N_k$ has a \mbox{$\betaK$-nf}' which as explained in Section~\ref{sec:prelim} is the same as `converts to a \mbox{$\betaK$-nf}'. In \cite{Bar84} the requirement on $N$ is immaterially changed from being a \mbox{$\betaK$-nf}\ to having a \mbox{$\betaK$-nf}.} In words, a closed term is solvable iff it converts to a \mbox{$\betaK$-nf}\ when used in operator position at the top level. If the term is or has a \mbox{$\betaK$-nf}\ then it is trivially solvable by choosing $k=0$. Let us illustrate with examples that also explain the focus on closed terms. First, take the diverging closed term $\Term{\Omega}$ (an abbreviation of $\Term{\Delta}\DELTA$, \ie\ $\Term{\Omega}\equiv\Term{\Delta}\DELTA\equiv(\lambda x.xx)(\lambda x.xx)$). A \mbox{$\betaK$-nf}\ cannot be obtained from it no matter to how many or to which operands it is applied, \emph{e.g.}\ $(\Term{\Delta}\DELTA)N_1 \cdots N_k =_{\beta} ((\lambda x.x\,x)\Term{\Delta})N_1 \cdots N_k =_{\beta} (\Term{\Delta}\DELTA)N_1 \cdots N_k =_{\beta}\ldots$ is an infinite loop. Terms like $\Term{\Omega}$ are operationally irrelevant. Now take the closed terms $\lambda x.x\,\Term{I}\,\Term{\Omega}$ and $\lambda x.x\,\Term{K}\,\Term{\Omega}$. Both terms diverge and yet both deliver a \mbox{$\betaK$-nf}\ when applied to suitable operands. For example, $(\lambda x.x\,\Term{I}\,\Term{\Omega})\Term{K} =_{\beta} \Term{I}$, and $(\lambda x.x\,\Term{K}\, \Term{\Omega})\Term{K} =_{\beta} \Term{K}$. The {\mbox{$\betaK$-nf}}s obtained from such diverging function terms are different; therefore they have different operational behaviour and cannot be equated. More precisely, a proof-theory with judgements $M = N$ can be obtained by taking the conversion proof-theory (if $M =_{\beta} N$ then $M = N$) and adding the equation $\lambda x.x\,\Term{I}\,\Term{\Omega} = \lambda x.x\,\Term{K}\,\Term{\Omega}$.
However, this extended proof-theory is inconsistent because the false equation $\Term{I} = \Term{K}$ is then provable. The focus on closed terms is because some open terms contain neutral terms (Section~\ref{sec:prelim}) that block applications \cite{Wad76}. For example, take the neutral $x\,\Term{\Omega}$ and apply it to operands: $(x\,\Term{\Omega})N_1\cdots N_k$. The conversion to \mbox{$\betaK$-nf}\ is impossible: the free variable $x$ blocks the application to the operands, so any conversion eventually has to convert the diverging subterm $\Term{\Omega}$. (Similarly, in $x\,y\,\Term{\Omega}$ the neutral subterm $x\,y$ blocks the application.) However, a free variable stands for some operator, so substituting a closed operator for the variable may yield a solvable term. For example, substitute $\Term{K}\,\Term{I}$ for $x$ and choose $k=0$, then $\Term{K}\,\Term{I}\,\Term{\Omega} =_{\beta} \Term{I}$. Traditionally, open terms are defined as solvable iff the closed term resulting from such substitutions is solvable. We postpone the discussion to Section~\ref{sec:open-open} where we show that fully closing is excessive in $\lamK$. In Section~\ref{sec:v-solv} we show that it is counterproductive for defining solvability in $\lambda_{\vv}$. We conclude this section with the role of solvables in the development of a standard theory. Solvable terms are approximations of totally defined terms. They are `at least partially defined'~\cite{Wad76}. In contrast, unsolvable terms are `hereditarily'~\cite{Bar71} or `totally'~\cite{Wad76} undefined, and can be equated without loss of consistency. More precisely, given the set of equations $\mathcal{H}_0=\{M = N\ |\ M,N\in\Lambda^0\ \text{unsolvable} \}$, a consistent extended proof-theory $\mathcal{H}$ results from adding $\mathcal{H}_0$'s equations as axioms to $\lamK$ (\ie\ $\mathcal{H} = \mathcal{H}_0+\lamK$) \cite{Bar84}.
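The conversions used in the examples above ($(\lambda x.x\,\Term{I}\,\Term{\Omega})\Term{K} =_{\beta} \Term{I}$, $\Term{K}\,\Term{I}\,\Term{\Omega} =_{\beta} \Term{I}$, and the looping $\Term{\Omega}$) can be checked mechanically with a leftmost (normal-order) reducer. The sketch below is ours, not the paper's: substitution is naive, which is safe here because the substituted operands are closed, and the `fuel` bound is an assumed device for detecting the loop.

```python
def Var(x): return ('var', x)
def Lam(x, b): return ('lam', x, b)
def App(m, n): return ('app', m, n)

def subst(n, x, m):
    # naive substitution: adequate here because the substituted operands are closed
    if m[0] == 'var':
        return n if m[1] == x else m
    if m[0] == 'lam':
        return m if m[1] == x else Lam(m[1], subst(n, x, m[2]))
    return App(subst(n, x, m[1]), subst(n, x, m[2]))

def step(t):
    """One leftmost-outermost beta-step, or None if t is a beta-nf."""
    if t[0] == 'app':
        if t[1][0] == 'lam':                       # leftmost redex at the root
            return subst(t[2], t[1][1], t[1][2])
        s = step(t[1])
        if s is not None:
            return App(s, t[2])
        s = step(t[2])
        return None if s is None else App(t[1], s)
    if t[0] == 'lam':                              # normal order goes under lambda
        s = step(t[2])
        return None if s is None else Lam(t[1], s)
    return None

def normal_order(t, fuel=50):
    """Reduce to beta-nf, or return None when the fuel bound is exhausted."""
    for _ in range(fuel):
        s = step(t)
        if s is None:
            return t
        t = s
    return None

I = Lam('x', Var('x'))
K = Lam('x', Lam('y', Var('x')))
Delta = Lam('x', App(Var('x'), Var('x')))
Omega = App(Delta, Delta)

# the diverging term lambda x. x I Omega is solved by the operand K ...
print(normal_order(App(Lam('x', App(App(Var('x'), I), Omega)), K)) == I)  # True
# ... whereas Omega only loops, so the fuel runs out:
print(normal_order(Omega) is None)                                        # True
# substituting K I for the blocking x solves the neutral x Omega:
print(normal_order(App(App(K, I), Omega)) == I)                           # True
```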
A consistent extension where unsolvables are equated (\ie\ contains $\mathcal{H}$) is called \emph{sensible}. A consistent extension that does not equate solvables and unsolvables is called \emph{semi-sensible}. There are standard models that satisfy $\mathcal{H}$, with unsolvables corresponding to the least elements of the model \cite{Bar72,Bar84}. By extension, such models are called \emph{sensible models} \cite[p.505]{Bar84}. Solvable terms can be characterised operationally: there is a reduction strategy of $\lamK$ called `head reduction' that converges iff the input term is solvable. (Solvability, like having \mbox{$\betaK$-nf}, is semi-decidable.) More precisely, solvable terms exactly correspond to terms with hnf, and head reduction delivers a hnf\ iff the input term has one, diverging otherwise \cite{Wad76,Bar84}. (In the technical jargon, head reduction is said to be \emph{complete} with respect to hnf.) \subsection{Other equivalent definitions of solvability} \label{sec:eq-defs} There are two other equivalent defi\-nitions of solvability that use different equations \cite{Bar72,Wad76,Bar84}. \begin{defi}[\textsc{SolI}] A term $M\in\Lambda^0$ is solvable in $\lamK$ iff there exist operands $N_1\in\Lambda$, \ldots, $N_k\in\Lambda$ with $k\geq0$ such that $M\,N_1\,\cdots\,N_k =_{\beta} \Term{I}$. \end{defi} \begin{defi}[\textsc{SolX}] A term $M\in\Lambda^0$ is solvable in $\lamK$ iff for all $X\in\Lambda$ there exist operands $N_1\in\Lambda$, \ldots, $N_k\in\Lambda$ with $k\geq0$ such that $M\,N_1\,\cdots\,N_k =_{\beta} X$. \end{defi} In words, a closed term is solvable iff it is convertible by application to the identity term or to any given term. Definition \textsc{SolI} is \emph{de facto} in most presentations. These definitions are equivalent to \textsc{SolN} (capture the same set of solvables) because of two properties that hold in $\lamK$. The first is stated in the following lemma. 
\begin{lem}[Lemma~4.1 in \cite{Wad76}] \label{lem:4.1Wad76} If $M\in\Lambda^0$ has a \mbox{$\betaK$-nf}\ then for all $X\in\Lambda$ there exist operands $X_1\in\Lambda$, \ldots, $X_k\in\Lambda$ with $k\geq 0$ such that $M\,X_1\,\cdots\,X_k =_{\beta} X$. \end{lem} In words, a closed term with \mbox{$\betaK$-nf}\ can be converted by application to any given term. This lemma is the link between \textsc{SolN}'s \emph{existential} property of having a \mbox{$\betaK$-nf}\ and \textsc{SolX}'s \emph{universal} property of converting to any term. The shape of a \mbox{$\betaK$-nf}\ is the key to this link, as the proof of the lemma illustrates. \begin{proof}[Proof of Lemma~\ref{lem:4.1Wad76}] As explained in Section~\ref{sec:prelim}, a \mbox{$\betaK$-nf}\ has the form $\lambda x_1\ldots x_n.\,x\,N_1\cdots N_m$ with $n\geq 0$, $m\geq 0$, and $N_1\in\mathsf{NF}$, \ldots, $N_m\in\mathsf{NF}$. Since $M$ is closed, its \mbox{$\betaK$-nf}\ $M'$ has $n>0$ with $x$ one of the $x_i$. Lemma~\ref{lem:4.1Wad76} holds by choosing $k=n$, $X_j$ arbitrary for $j\not=i$, and $X_i\equiv \Term{K}^m X$, with $\Term{K}^m$ the term that takes $m+1$ operands and returns the first one. Thus, \mbox{$M\,X_1 \cdots {(\Term{K}^mX)}_i\cdots X_n =_{\beta} X$} holds because $M =_{\beta} M'$ and $M'\,X_1 \cdots {(\Term{K}^mX)}_i\cdots X_n =_{\beta} (\Term{K}^mX)N'_1 \cdots N'_m =_{\beta} X$, with $N'_i$ the result of substitutions on $N_i$. \end{proof} The link between \textsc{SolI} and \textsc{SolX} is provided by the property that for all $X\in\Lambda$ the conversion $\Term{I}\,X =_{\beta} X$ holds \cite[p.171ff]{Bar84}. We provide here an explicit proof. \begin{lem} \label{lem:equiv-solvs} The solvability definitions \textsc{SolN}, \textsc{SolI}, and \textsc{SolX} are equivalent in $\lamK$.
\end{lem} \begin{proof} We use different operand symbols and subscripts to distinguish the equations: \begin{align} M\,N_1\,\cdots\,N_k & =_{\beta} N \tag*{\textsc{SolN}} \\ M\,Y_1\,\cdots\,Y_l & =_{\beta} \Term{I} \tag*{\textsc{SolI}} \\ M\,Z_1\,\cdots\,Z_j & =_{\beta} X \tag*{\textsc{SolX}} \end{align} We first prove \textsc{SolX} iff \textsc{SolN}: From \textsc{SolX} we prove \textsc{SolN} by choosing $k=j$, $N_i\equiv Z_i$, and $X$ the \mbox{$\betaK$-nf}\ $N$. Conversely, given \textsc{SolN} then $MN_1\cdots N_k$ has a \mbox{$\betaK$-nf}, so by Lemma~\ref{lem:4.1Wad76} we have that for all $X\in\Lambda$ the conversion $M\,N_1\cdots N_k\,X_1\cdots X_{k'} =_{\beta} X$ holds. Then \textsc{SolX} follows by choosing $j=k+k'$, $Z_1 \equiv N_1$, \ldots, $Z_k \equiv N_k$, $Z_{k+1} \equiv X_1$, \ldots, $Z_j \equiv X_{k'}$. We now prove \textsc{SolX} iff \textsc{SolI}: From \textsc{SolX} we prove \textsc{SolI} by choosing $l=j$, $Y_i\equiv Z_i$, and $X\equiv\Term{I}$. Conversely: \begin{tabular}{lll} (a) & $M\,Y_1\,\cdots\,Y_l =_{\beta} \Term{I}$ & \textsc{SolI} \\ (b) & $M\,Y_1\cdots Y_l\,X =_{\beta} \Term{I}\,X$ & by ($\nu$) on (a) with any $X$ \\ (c) & $\Term{I} X =_{\beta} X$ & by ($\beta$) \\ (d) & $M\,Y_1\cdots Y_l\,X =_{\beta} X$ & by ($\tau$) on (b),(c) \end{tabular}\\ \noindent Then, \textsc{SolX} holds by choosing $j=l+1$, $Z_1 \equiv Y_1$, \ldots, $Z_{j-1}\equiv Y_l$, $Z_j \equiv X$. \end{proof} Bear in mind that although all definitions are equivalent, \textsc{SolI} and \textsc{SolX} are possible because of properties that hold in $\lamK$, and therefore \textsc{SolI} and \textsc{SolX} are secondary. As we shall see in Section~\ref{sec:v-solv}, the analogue in $\lambda_{\vv}$ of Lemma~\ref{lem:4.1Wad76} does not hold, nor do the analogues of \textsc{SolI}, \textsc{SolX}, and Lemma~\ref{lem:equiv-solvs}. Adapting \textsc{SolI} or \textsc{SolX} to that calculus will leave solvable terms behind.
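The proof of Lemma~4.1 is constructive, so its recipe can be tested on a small instance. The sketch below (our own names and tuple encoding, with a fuel-bounded normal-order reducer as an assumed harness) builds $\Term{K}^m \equiv \lambda y_0 y_1 \ldots y_m.\,y_0$ and checks that for the closed \mbox{$\betaK$-nf}\ $M \equiv \lambda x.\,x\,\Term{I}\,\Term{K}$ (so $n=1$ and $m=2$) the application $M\,(\Term{K}^2 X)$ converts to a chosen $X$:

```python
def Var(x): return ('var', x)
def Lam(x, b): return ('lam', x, b)
def App(m, n): return ('app', m, n)

def subst(n, x, m):
    # naive substitution: adequate here because all substituted operands are closed
    if m[0] == 'var':
        return n if m[1] == x else m
    if m[0] == 'lam':
        return m if m[1] == x else Lam(m[1], subst(n, x, m[2]))
    return App(subst(n, x, m[1]), subst(n, x, m[2]))

def step(t):
    """One leftmost-outermost beta-step, or None if t is a beta-nf."""
    if t[0] == 'app':
        if t[1][0] == 'lam':
            return subst(t[2], t[1][1], t[1][2])
        s = step(t[1])
        if s is not None:
            return App(s, t[2])
        s = step(t[2])
        return None if s is None else App(t[1], s)
    if t[0] == 'lam':
        s = step(t[2])
        return None if s is None else Lam(t[1], s)
    return None

def normal_order(t, fuel=50):
    for _ in range(fuel):
        s = step(t)
        if s is None:
            return t
        t = s
    return None

def Km(m):
    """K^m = lambda y0 y1 ... ym. y0: takes m+1 operands and returns the first."""
    t = Var('y0')
    for i in range(m, 0, -1):
        t = Lam(f'y{i}', t)
    return Lam('y0', t)

I = Lam('x', Var('x'))
K = Lam('x', Lam('y', Var('x')))

# M = lambda x. x I K: a closed beta-nf with head variable x (n = 1) and m = 2 args
M = Lam('x', App(App(Var('x'), I), K))
X = I                        # the lemma lets us pick any X; take X = I
print(normal_order(App(M, App(Km(2), X))) == X)   # True
```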
\subsection{Open terms, and open and non-closing contexts} \label{sec:open-open} Solvability has been typically extended to open terms by requiring at least one closed substitution instance or all closures of the open term\footnote{A closed substitution instance of $M$ is a closed term resulting from substituting closed terms for all the free variables of $M$. A closure of $M$ is a term $\lambda x_1 \ldots x_n.M$ such that $\mathrm{FV}(M)=\{x_1,\ldots,x_n\}$. Since different closures differ only on the order of prefix lambdas, if one closure is solvable then all other closures are too by passing the operands to the closure in the appropriate order. Substitutions and closures are connected by the $\beta$-rule.} to be solvable~\cite{Wad76,Bar72,Bar84}. As we discussed in Section~\ref{sec:lamK-solv}, neutral terms are the reason for closing. Substituting closed operators for the blocking free variables of neutrals may yield solvable terms. For example, $\cas{\Term{K}\,\Term{I}}{x}{(x\,\Term{\Omega})}\equiv\Term{K}\,\Term{I}\,\Term{\Omega}$ is trivially solvable according to \textsc{SolN} by choosing $k=0$. Similarly, the closure $\lambda x.x\,\Term{\Omega}$ is solvable by choosing $k=1$ and $N_1\equiv\Term{K}\,\Term{I}$. A traditional definition of solvability for open and closed terms uses a `head context' to close the term before passing the operands \cite{Wad76} (head contexts are defined on page 491 and solvability with head contexts on page 503). \begin{defi}[\textsc{SolH}] A term $M\in\Lambda$ is solvable in $\lamK$ iff there exists $N\in\mathsf{NF}^0$ and there exists a head context $\ctx{H}[\enspace] \equiv ((\lambda x_1\ldots x_n.[\enspace])C_1\cdots C_n)N_1 \cdots N_k$ with $n\geq0$, $k\geq0$, $\mathrm{FV}(M)=\{x_1,\ldots,x_n\}$, $C_1\in\Lambda^0$, \ldots, $C_n\in\Lambda^0$, and $N_1\in\Lambda^0$, \ldots, $N_k\in\Lambda^0$ such that $\ctx{H}[M] =_{\beta} N$. 
\end{defi} In words, the head context forces the closed $C_i$ to be substituted for all the free variables (if there are any) of the term placed within the hole. The resulting closed substitution instance is then at the top-level operator position where it is applied to the closed $N_i$ operands. The top-level operator position is a `head' position (Section~\ref{sec:prelim}), hence the name of the context. Since $\ctx{H}[\enspace]$ is a \emph{closed and closing} context, the \mbox{$\betaK$-nf}\ $N$ has to be closed too. In \cite{PR99}, \textsc{SolH} and \textsc{SolI} are combined and the conversion is $\ctx{H}[M] =_{\beta} \Term{I}$. However, using a closed and closing context is excessive. The nature of solvability and the previous definitions do not require it. To begin with, an open term that is or has a \mbox{$\betaK$-nf}\ is, by its very nature, solvable. For other open terms not every free variable has to be substituted, only the blocking ones that prevent solving the term. In all the previous definitions the $N_i$ operands are arbitrary, and so the requirement that the $N_i$ be closed in $\ctx{H}[\enspace]$ can be dropped. Since in \textsc{SolI} both $M$ and $\Term{I}$ are closed, the open $N_i$ or their open subterms must be eventually discarded in the conversion to $\Term{I}$. But in \textsc{SolN} the \mbox{$\betaK$-nf}\ $N$ is arbitrary too, so not every open $N_i$ operand or open subterm therein has to be discarded. A less restrictive definition is perfectly possible: \begin{defi}[\textsc{SolF}] A term $M\in\Lambda$ is solvable in $\lamK$ iff there exists $N\in\mathsf{NF}$ and there exists a \emph{function context} $\ctx{F}[\enspace] \equiv (\lambda x_1\ldots x_n.[\enspace])N_1\cdots N_k$ with $n\geq 0$, $k\geq0$, and $N_1\in\Lambda$, \ldots, $N_k\in\Lambda$ such that $\ctx{F}[M] =_{\beta} N$. \end{defi} This definition is closer to \textsc{SolN}.
The function context can be \emph{open} and \emph{non-closing}: $N$ and $N_i$ may be open, and not every free variable of $M$ need be substituted. For example, $x\,\Term{\Omega}$ is solved by the open function context $(\lambda x.[\enspace])(\Term{K}\,N)$ where $N$ is an open \mbox{$\betaK$-nf}. And $x\,y\,\Term{\Omega}$ is solved by the non-closing function context $(\lambda x.[\enspace])\Term{K}$ which does not close $y$. \begin{lem}[Generalisation of Lemma~\ref{lem:4.1Wad76}] \label{lem:GenOf4.1Wad76} If $M\in\Lambda$ has a \mbox{$\betaK$-nf}\ then for all $X\in\Lambda$ there exists a function context $\ctx{F}[\enspace]$ such that $\ctx{F}[M] =_{\beta} X$. \end{lem} \begin{proof} The \mbox{$\betaK$-nf}\ of $M$ has the form $\lambda x_1\ldots x_n.x\,N_1\cdots N_m$ with $n\geq0$, $m\geq 0$ and $N_1\in\mathsf{NF}$, \ldots, $N_m\in\mathsf{NF}$. If $x\in\mathrm{FV}(M)$ the lemma holds by choosing $\ctx{F}[\enspace] \equiv (\lambda x.[\enspace])(\Term{K}^m X)X_1\cdots X_n$ with $X_i$ arbitrary and $\Term{K}^m$ the term that takes $m+1$ operands and returns the first one. If $x\not\in\mathrm{FV}(M)$ then $x$ is $x_i$ for some $i$. The lemma holds by choosing $\ctx{F}[\enspace] \equiv [\enspace] X_1\cdots X_{i-1}(\Term{K}^m X)X_{i+1}\cdots X_n$. \end{proof} Let us note that the lemma also holds with the proviso relaxed to `$M$ has a hnf'. \begin{thm} \label{thm:equiv-solvs-open} In $\lamK$ the solvability definitions \textsc{SolH} and \textsc{SolF} are equivalent. \end{thm} Intuitively, if we have a solving head context then we have a solving function context because function contexts subsume head contexts. And if we have a solving function context then we can construct a solving head context by carefully closing the former and the \mbox{$\betaK$-nf}. 
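The $\Term{K}^m$ construction in the proof of Lemma~\ref{lem:GenOf4.1Wad76} can be sketched natively in Python. This is an illustrative sketch only; the encoding of terms as host-language closures and the helper names (`Kn`, `absorb`) are ours:

```python
# Illustrative sketch: K^m as a curried Python function of m+1 operands
# that returns the first one (our own encoding, not the formal terms).
def Kn(m):
    def absorb(x, remaining):
        # after capturing x, discard `remaining` further operands
        return x if remaining == 0 else (lambda _: absorb(x, remaining - 1))
    return lambda x: absorb(x, m)

# K^2 X X1 X2 = X:
assert Kn(2)("X")("junk1")("junk2") == "X"

# The lemma's construction: if M's beta-K-nf is \a. x N1 (head variable x
# free, so m = 1), then substituting K^1 X for x and supplying one dummy
# operand sends M to any chosen X.
M = lambda x: lambda a: x("N1")              # M as a function of its free variable
F = lambda hole: hole(Kn(1)("X"))("dummy")   # the solving function context
assert F(M) == "X"
```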
The proof of Thm.~\ref{thm:equiv-solvs-open} is somewhat long, so we have put it in App.~\ref{app:open-open} together with an accompanying example illustrating the construction of a solving head context from a solving function context. As we shall see in Section~\ref{sec:lamV-solv}, the analogue in $\lambda_{\vv}$ of Thm.~\ref{thm:equiv-solvs-open} does not hold. Adapting \textsc{SolH} to that calculus will leave solvable terms behind. \subsection{Solvability and effective use} \label{sec:effective-use-lamK} As noted in \cite{PR99} there is a more general definition of solvability that connects the notions of `operational relevance' and `effective use' of a term. A term is effectively used when it is eventually used as an operator. The term is operationally relevant iff it then delivers a final result, which in $\lamK$ is a \mbox{$\betaK$-nf}. In all previous solvability definitions, the term to solve is placed at the top-level operator position and thus it is effectively used. If it were placed at other positions then it may be eventually used as an operator or it may be trivially used (discarded). If placed at an operand position that is never discarded, never gets to an operator position where it is applied to operands, and is returned as the final result, then the term is effectively used. It is as if the term were placed within an empty function context. Thus, a final result is in operator position, is effectively used, and is operationally relevant. An unsolvable term cannot be effectively used to deliver a \mbox{$\betaK$-nf}: `unsolvable terms can never have a nontrivial effect on the outcome of a reduction' \cite[p.506]{Wad76}. More precisely, if $M$ is unsolvable then for all $X$, $M\,X$ is unsolvable \cite[Cor.~8.34]{Bar84}. Unsolvable terms that are not effectively used are generic: they can be substituted by arbitrary terms. This is formalised by the so-called Genericity Lemma.
The following statement of the Lemma is a combination of the versions in \cite[Prop.~14.3.24]{Bar84} and \cite[Cor.~5.5]{Wad76} (both collected in App.~\ref{app:effective-use-lamK} for ease of reference). These versions use arbitrary contexts $\ctx{C}[\enspace]$ because $\ctx{C}[M]$ is more general than $M\,X$. The latter is a particular case of the former for $\ctx{C}[\enspace] \equiv [\enspace]\,X$. With the context, the term plugged into the hole may eventually appear in operator position. \begin{lem}[Genericity Lemma] \label{lem:lamK-genericity-lemma} Let $M\in \Lambda$ and $N\in\mathsf{NF}$. $M$ is unsolvable in $\lamK$ implies that for all contexts $\ctx{C}[\enspace]$, if $\ctx{C}[M] =_{\beta} N$ then for all $X\in\Lambda$ it is the case that $\ctx{C}[X]=_{\beta} N$. In formal logic: \begin{displaymath} M\ \textrm{unsolvable} \Rightarrow (\forall\ctx{C}[\enspace].\, \ctx{C}[M] =_{\beta} N \Rightarrow (\forall X\in\Lambda.\,\ctx{C}[X] =_{\beta} N)) \end{displaymath} \end{lem} In words, if plugging an unsolvable term in a given arbitrary context converts to a \mbox{$\betaK$-nf}\ then plugging any other term also converts to that \mbox{$\betaK$-nf}. The unsolvable is not used effectively in the context. Although the lemma is stated as an implication, it is actually an equivalence because the negation of the consequent is a necessary condition for `$M$ solvable' by the \textsc{SolF} definition of solvability. Clearly, if $M$ is solvable then there exists $\ctx{C}[\enspace]\equiv\ctx{F}[\enspace]$ such that $\ctx{F}[M]=_{\beta} N$, and by the shape of $\ctx{F}[\enspace]$ it is not the case that for all $X\in\Lambda$, $\ctx{F}[X]=_{\beta} N$. Take for instance $\ctx{F}[\Term{\Omega}]$ which diverges. (Note that if $M$ is solvable and $\ctx{C}[M] =_{\beta} N$ holds then $\ctx{C}[X]=_{\beta} N$ should not hold for terms $X$ that are not convertible to $M$ unless $M$ is not effectively used in $\ctx{C}[\enspace]$.) 
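The situation described by the Genericity Lemma can be mimicked with thunked (call-by-name-style) operands in Python. This is an informal sketch under our own encoding, not the lemma itself:

```python
# Illustrative sketch: operands are passed as thunks so that unsolvable
# terms are not forced unless effectively used (our own encoding).
delta = lambda t: t(t)
omega = lambda: delta(delta)    # forcing this thunk diverges

# C[ ] = (\x. I)[ ]: the hole is discarded, never effectively used.
C = lambda x: "I"               # never forces its thunked argument
assert C(omega) == "I"          # unsolvable plugged in: nf still reached
assert C(lambda: "anything") == "I"   # any other term gives the same nf

# By contrast, a context that forces (effectively uses) its hole
# distinguishes the unsolvable from other terms:
C2 = lambda x: x()
try:
    C2(omega)                   # diverges: Omega is effectively used
    reached_nf = True
except RecursionError:
    reached_nf = False
assert not reached_nf
```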
The lemma is a definition of solvability when read as the inverse equivalence: \begin{displaymath} M\ \textit{solvable} \Leftrightarrow (\exists \ctx{C}[\enspace].\,\ctx{C}[M] =_{\beta} N \wedge \neg(\forall X\in\Lambda.\,(\ctx{C}[X] =_{\beta} N))) \end{displaymath} The following definition simply moves $N$ from the proviso into the formula. \begin{defi}[\textsc{SolC}] A term $M\in\Lambda$ is solvable in $\lamK$ iff there exists a context $\ctx{C}[\enspace]$ such that $\ctx{C}[M]=_{\beta} N$ for some $N\in\mathsf{NF}$ and not for all $X\in\Lambda$ it is the case that $\ctx{C}[X]=_{\beta} N$. \end{defi} In words, $M$ solvable means there exists a context that uses $M$ effectively to deliver a \mbox{$\betaK$-nf}. Function contexts are just one possible type of context applicable in \textsc{SolC}. \section{Call-by-value and pure \texorpdfstring{$\lambda_{\vv}$}{lambda-V}} \label{sec:pure-lamV} In call-by-value functional programming languages, the evaluation of application expressions $e_1\,e_2$ can be broadly described in `big-step' fashion as follows. The operator expression $e_1$ is first evaluated to a `value' $v_1$ where `value' means here a first-class final result of the language. Functions are first-class values in such languages and their bodies are compiled, not evaluated. (In the SECD machine the corresponding abstraction is not reduced: SECD reduction is `weak', meaning it does not `go under lambda'.) The operand expression $e_2$ is next evaluated to a value $v_2$. Finally, the result of passing $v_2$ to $v_1$ is evaluated. Evaluation diverges at the point where the first sub-evaluation diverges. Evaluation may halt due to a run-time error. The order of evaluation matters w.r.t. the point of divergence or halting.\footnote{Some languages prefer to evaluate $e_2$ before $e_1$, or instead of binary applications consider applications with multiple operands, evaluating the latter in left-to-right or right-to-left fashion.
Some languages eschew divergence and run-time errors by means of a strong yet expressive type discipline.} In pure $\lambda_{\vv}$, an application $M\,N$ can be reduced to \mbox{$\betaV$-nf}\ in several ways with the restriction that if $M$ is an abstraction or reduces to an abstraction, say $\lambda x.B$, and $N$ is a value or reduces to a value, say $V$, then the redex application $(\lambda x.B)V$ can be reduced in one step to $\cas{V}{x}{B}$, with reduction continuing on the result of the substitution. Either the abstraction $\lambda x.B$, or the value $V$, or both may be fully reduced to \mbox{$\betaV$-nf}\ depending on the reduction strategy. If $N$ is not a value or does not reduce to a value then $(\lambda x.B)N$ is a neutral which may only be reduced to a stuck. Abstractions are values, and so are free variables because they range over values as discussed in more detail below. Terms can be open, reduction may `go under lambda' with free variables possibly occurring within that scope, and final results are not values but {\mbox{$\betaV$-nf}}s. The rationale behind the restricted reduction/conversion and the definition of values is not merely to model call-by-value but to uphold confluence, which is a \emph{sine qua non} property of the calculus because it underpins the consistency of the proof-theories. Intuitively, the rationale is \emph{to preserve confluence by preserving potential divergence}. To preserve confluence, applications cannot be passed as operands unless given the opportunity to diverge first. This point is fundamental to understanding our approach to solvability for $\lambda_{\vv}$ and so the rest of this section elaborates on it. In $\lambda_{\vv}$ the reduction relation $\mrel{\beta\va}$ is confluent \cite[App.~A2]{HS08}. Confluence applies even for terms without \mbox{$\betaV$-nf}. The implication is that terms have at most one \mbox{$\betaV$-nf}, and so terms with different \mbox{$\betaV$-nf}\ are not $\betaV$-reducible/convertible.
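The evaluation order just described can be observed directly in a strict host language. In the Python sketch below (our own encoding; Python's strict evaluation plays the role of call-by-value), $\Term{K}\,\Term{I}\,\Term{\Omega}$ diverges, whereas delaying operands as thunks recovers the call-by-name outcome $\Term{I}$:

```python
delta = lambda t: t(t)
K = lambda a: lambda b: a
I = lambda v: v

# Call-by-value: the operand Omega = Delta Delta is evaluated before being
# passed, so K I Omega diverges even though K discards its second operand.
try:
    K(I)(delta(delta))
    converged = True
except RecursionError:
    converged = False
assert not converged

# Delaying operands as thunks recovers the call-by-name behaviour of
# lambda-K, where K I Omega converts to I.
K_thunked = lambda a: lambda b: a()     # operands are thunks, forced on use
assert K_thunked(lambda: I)(lambda: delta(delta)) is I
```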
Not every $\betaV$-reduction/conversion is provable and the reduction/conversion proof-theory is consistent. The proof of confluence requires substitutivity which is the property that reduction/conversion is preserved under substitution, \emph{e.g.}\ if $M =_{\beta\va} N$ then $\cas{L}{x}{M} =_{\beta\va} \cas{L}{x}{N}$. In $\lambda_{\vv}$, permissible operands and subjects of substitutions cannot be applications, whether arbitrary or in \mbox{$\betaV$-nf}. Otherwise, substitutivity and confluence would not hold. (This is explained in \cite[p.135-136]{Plo75}, see App.~\ref{app:pure-lamV} for a detailed discussion.) Substitutivity requires the proviso $L\in\mathsf{Val}$ which explains why free variables are members of $\mathsf{Val}$, namely, because they range over members of $\mathsf{Val}$. For illustration, the neutral $x\,\Term{\Delta}$ cannot be passed in applications such as $(\lambda x.y)(x\,\Term{\Delta})$ because whether it diverges depends on what value $x$ is. For example, substituting the value $\Term{I}$ for $x$ yields $(\lambda x.y)(\Term{I}\,\Term{\Delta})$ which converges to $y$. But substituting the value $\Term{\Delta}$ for $x$ yields $(\lambda x.y)(\Term{\Delta}\DELTA)$ which diverges. Applications must be given the opportunity to diverge before being passed, not only to model call-by-value but because whether a neutral converges depends on which values are substituted for its free variables. The same goes for stucks: in the above examples $x\,\Term{\Delta}$ is actually a stuck. \subsection{Neutrals, stucks, and sequentiality} \label{sec:neutrals-seq} Before moving on we must recall that the nesting and order of neutrals confer the sequentiality character to $\lambda_{\vv}$. 
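Before looking at nesting, the basic dependence of a neutral such as $x\,\Term{\Delta}$ on the value substituted for its blocking variable can be replayed in strict Python (an informal sketch; the encoding of terms as host-language closures is ours):

```python
delta = lambda t: t(t)
I = lambda v: v

# (\x.y)(x Delta), abstracted over the blocking variable x:
neutral = lambda x: (lambda _: "y")(x(delta))

# x := I : I Delta converges to the value Delta, so the whole term
# converges to y.
assert neutral(I) == "y"

# x := Delta : Delta Delta diverges, and so does the whole term.
try:
    neutral(delta)
    diverged = False
except RecursionError:
    diverged = True
assert diverged
```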
Take the following neutrals adapted from \cite[p.25]{Mil90} and assume $V$ and $W$ are closed values: \begin{displaymath} \begin{array}{lll} \Term{L}_1 & \equiv & (x\,V)(y\,W) \\ \Term{L}_2 & \equiv & (\lambda z.z(y\,W))(x\,V) \\ \Term{L}_3 & \equiv & (\lambda z.(x\,V)z)(y\,W) \end{array} \end{displaymath} Respectively substituting values $X$ and $Y$ for $x$ and $y$ we get: \begin{displaymath} \begin{array}{lll} \Term{L}_1' & \equiv & (X\,V)(Y\,W) \\ \Term{L}_2' & \equiv & (\lambda z.z(Y\,W))(X\,V) \\ \Term{L}_3' & \equiv & (\lambda z.(X\,V)z)(Y\,W) \end{array} \end{displaymath} If all $\Term{L}_i'$ have a \mbox{$\betaV$-nf}\ then it is the same one and the instances are convertible. But different reduction sequences differ on the order in which $(X\,V)$ and $(Y\,W)$ are reduced in $\Term{L}_2'$ and $\Term{L}_3'$ and thus on which order is the same as in $\Term{L}_1'$. Under SECD reduction $(X\,V)$ is reduced before $(Y\,W)$ in $\Term{L}_1'$ and $\Term{L}_2'$ whereas in $\Term{L}_3'$ the order is reversed. However, in a reduction sequence where abstraction bodies are reduced before operands, $(X\,V)$ is reduced before $(Y\,W)$ in $\Term{L}_1'$ and $\Term{L}_3'$ whereas in $\Term{L}_2'$ the order is reversed.\footnote{In this example we have in mind a complete reduction sequence. There is a complete reduction strategy of $\lambda_{\vv}$ that goes under lambda in such `spine' fashion (Section~\ref{sec:value-normal-order}).} Suppose operators and operands were reduced in separate processors. If we instead substitute for $x$ a value $X$ such that $X\,V$ converts to a stuck, then we can tell on which processor reduction got stuck first. If we substitute for $y$ a value $Y$ such that $Y\,W$ diverges, then one processor would diverge whereas the other would get stuck.
As another example consider the following terms where now $V$ and $W$ are closed values in \mbox{$\betaV$-nf}: \begin{displaymath} \begin{array}{lll} \Term{L}_4 & \equiv & (\lambda z.V\,W)(xx) \\ \Term{L}_5 & \equiv & (\lambda z.(\lambda y.y\,W))(xx)V \end{array} \end{displaymath} Observe that $\Term{L}_5$ is a \mbox{$\betaV$-nf}\ whereas $\Term{L}_4$ is not. If $V\,W$ converges to a \mbox{$\betaV$-nf}\ $N$ then $(\lambda z.N)(xx)$ is a \mbox{$\betaV$-nf}\ different from $\Term{L}_5$. If $V\,W$ diverges then $\Term{L}_4$ diverges but $\Term{L}_5$ does not (it is a \mbox{$\betaV$-nf}). Let us now play with substitutions for the blocking variable $x$. Substitute in $\Term{L}_4$ and $\Term{L}_5$ a closed value $X$ for $x$ such that $XX$ converges to a value: \begin{displaymath} \begin{array}{lll} \Term{L}_4' & \equiv & (\lambda z.V\,W)(XX) \\ \Term{L}_5' & \equiv & (\lambda z.(\lambda y.y\,W))(XX)V \end{array} \end{displaymath} In the case where $V\,W$ converges to a \mbox{$\betaV$-nf}\ $N$ then $\Term{L}_4'$ and $\Term{L}_5'$ converge to $N$, but in $\Term{L}_4'$ whether $(V\,W)$ is reduced before $(XX)$ depends on whether the reduction strategy goes first under lambda, whereas in $\Term{L}_5'$ the term $(XX)$ is reduced first with that same strategy. In the case where $V\,W$ diverges, whether $\Term{L}_4'$ diverges before reducing $(XX)$ also depends on whether the reduction strategy goes first under lambda, whereas in $\Term{L}_5'$ the term $(XX)$ is reduced first with that same strategy. Thus, $\Term{L}_4$ and $\Term{L}_5$ are operationally distinguishable. For example, the concrete instantiations $(\lambda z.\Term{I}\I)(xx)$ and $(\lambda z.(\lambda y.y\,\Term{I}))(xx)\Term{I}$ are operationally distinguishable (here $V\equiv\Term{I}$, $W\equiv\Term{I}$, and $\Term{I}\I$ converges to a \mbox{$\betaV$-nf}). Neutral terms differ on the point at which a free variable pops up, that is, on the point of potential divergence. 
Stucks are only fully reduced neutrals that keep that point of divergence. Terms with neutrals that may convert to the same \mbox{$\betaV$-nf}\ when placed in the same closed context are nonetheless operationally distinguishable when placed in an open context. And the choice of substitutions for the blocking free variables is important. Keep this in mind when reading the following sections. \section{An overview of \texorpdfstring{$v$}{v}-solvability} \label{sec:v-solv} Solvability for $\lambda_{\vv}$ is first studied in \cite{PR99} where a definition of \mbox{$v$-solvable} term is introduced which adapts to $\lambda_{\vv}$ the \textsc{SolI} definition of solvability for $\lamK$. \begin{defi}[$v$-solvability] \label{def:v-solv} A term $M$ is $v$-solvable in $\lambda_{\vv}$ iff there exist closed values $N_1\in\mathsf{Val}^0$, \ldots, $N_k\in\mathsf{Val}^0$ with $k \geq 0$ such that $(\lambda x_1\ldots x_n.M)N_1\cdots N_k =_{\beta\va} \Term{I}$ where $\mathrm{FV}(M)=\{x_1,\ldots,x_n\}$. \end{defi} The definition can be stated alternatively in terms of the head contexts of Section~\ref{sec:open-open} by requiring the $C_i$'s and $N_i$'s in the head contexts to be closed values instead of closed terms. The provisos $N_i\in\mathsf{Val}^0$ could have been omitted because they are required by the $\betaV$-conversion to the closed value~$\Term{I}$. In line with the discussion in Section~\ref{sec:open-open}, an open head context whose free variables are discarded in the conversion can also be used, and so it is in \cite[p.9]{AP12}. Adapting \textsc{SolI} to $\lambda_{\vv}$ instead of \textsc{SolN} is surprising because, as anticipated in Section~\ref{sec:eq-defs}, the two properties that justify the equivalence between \textsc{SolI} and \textsc{SolN} in $\lamK$ do not hold in $\lambda_{\vv}$. (And as discussed in Section~\ref{sec:open-open}, the use of a closed and closing head context is excessive, but more on this below.) 
First, $\Term{I}\,X =_{\beta\va} X$ holds iff $X$ has a value. Assuming such proviso, the \textsc{SolX} equivalent of Def.~\ref{def:v-solv} is that a term is $v$-solvable iff it is convertible by application not to any term but to any \emph{value}. Indeed, if $M$ is $v$-solvable then \mbox{$(\lambda x_1\ldots x_n.M)N_1\cdots N_k =_{\beta\va} \Term{I}$} and, by compatibility, $(\lambda x_1\ldots x_n.M)N_1\cdots N_k\,X =_{\beta\va} \Term{I}\,X$ for any $X\in\Lambda$. The conversion $(\lambda x_1\ldots x_n.M)N_1\cdots N_k\,X =_{\beta\va} X$ is obtained by transitivity with $\Term{I}\,X =_{\beta\va} X$ iff $X$ has a value. Second, the adaptation of Lemma~\ref{lem:4.1Wad76} to $\lambda_{\vv}$ does not hold. \begin{stmt}[Adapts Lemma~\ref{lem:4.1Wad76} to $\lambda_{\vv}$] \label{prop:4.1Wad76-lamV} If $M\in\Lambda^0$ has a \mbox{$\betaV$-nf}\ then for all $X\in\Lambda$ there exist operands $X_1\in\Lambda$, \ldots, $X_k\in\Lambda$ with $k\geq 0$ such that $M\,X_1 \cdots X_k =_{\beta\va} X$. \end{stmt} This statement does not hold even with $X_i$ and $X$ values, whether open or closed. The controversial term $\Term{U}\equiv\lambda x.(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}$ mentioned in \cite{PR99} is one possible counter-example. (Notice the close resemblance to the term $\Term{L}_5$ in Section~\ref{sec:neutrals-seq}.) This term is a closed value and a \mbox{$\betaV$-nf}. It is an abstraction with a stuck body. There is no operand $X_1$, let alone further operands, that lets us convert $\Term{U}$ to any given $X$ whether arbitrary, a value, or a closed value. Suppose $X_1\in\mathsf{Val}^0$. Then $\Term{U}\,X_1$ converts to $(\lambda y.\Term{\Delta}) (X_1\,\Term{I})\Term{\Delta}$. If $(X_1\,\Term{I})$ diverges then the latter diverges. If $(X_1\,\Term{I})$ converts to a closed value $V$ then $(\lambda y.\Term{\Delta})\,V\Term{\Delta}$ converts to $\Term{\Delta}\DELTA\equiv\Term{\Omega}$ which diverges. 
However, $\Term{U}\,X_1$ converts to a \mbox{$\betaV$-nf}\ if $(X_1\,\Term{I})$ converts to a stuck. But the shape of the \mbox{$\betaV$-nf}, namely $(\lambda y.\Term{\Delta})(\ldots)\Term{\Delta}$, is determined by the shape of $\Term{U}$. Only the concrete \mbox{$\betaV$-nf}\ obtained depends on the choice of \emph{open} value $X_1$ that generates the stuck. For example: $X_1\equiv\lambda x.z\,\Term{I}$ leads to $(\lambda y.\Term{\Delta})(z\,\Term{I})\Term{\Delta}$ whereas $X_1\equiv\lambda x.(\lambda x.x)(z\,\Term{K})$ leads to $(\lambda y.\Term{\Delta})((\lambda x.x)(z\,\Term{K}))\Term{\Delta}$, etc. We cannot send $\Term{U}$ to any arbitrary \mbox{$\betaV$-nf}. The only degree of freedom is $X_1$. The term $\Term{U}$ is controversial because, although a \mbox{$\betaV$-nf}, it is considered operationally equivalent to $\lambda x.\Term{\Omega}$ in \cite{PR99}. Certainly, $\Term{U}\,X_1$ and $(\lambda x.\Term{\Omega})X_1$ diverge for all $X_1\in\mathsf{Val}^0$. But as illustrated in the last paragraph, $\Term{U}$ and $\lambda x.\Term{\Omega}$ are operationally distinguishable in an open context: there exists $X_1\in\mathsf{Val}$ such that $\Term{U}\,X_1$ converts to a \mbox{$\betaV$-nf}, but there is no $X_1\in\mathsf{Val}$ such that $(\lambda x.\Term{\Omega})X_1$ converts to a \mbox{$\betaV$-nf}. The difference between $\Term{U}$ and $\lambda x.\Term{\Omega}$ is illustrated by the old chestnut `toss a coin, heads: you lose, tails: toss again'. We can pass a value to $\Term{U}$ either to diverge immediately or to postpone divergence, but this choice is not possible for $\lambda x.\Term{\Omega}$, which diverges whatever value is passed. And since $\Term{U}$ is a \mbox{$\betaV$-nf}, it should be by definition solvable in $\lambda_{\vv}$. The restriction of operands to elements of $\mathsf{Val}^0$ is natural in the setting of SECD's weak reduction of closed terms where final results are closed values.
This is the setting considered in \cite{PR99} where the proof-theory is not $\lambda_{\vv}$'s but consists of equations `$M = N$ iff $M$ and $N$ are operationally equivalent under SECD reduction'. However, $v$-solvability (Def.~\ref{def:v-solv}) is defined for $\lambda_{\vv}$ and its proof-theory, not the alternative pure-SECD-theory. Several problems arise. First, closed values such as $\Term{U}$ and $\lambda x.\Term{\Omega}$ which are definite results of SECD are $v$-unsolvable, so $v$-solvability is not synonymous with operational relevance. Second, there is a $v$-unsolvable $\Term{U}$ that is nevertheless a \mbox{$\betaV$-nf}\ of $\lambda_{\vv}$. As discussed in the introduction, the blame is mistakenly put on $\lambda_{\vv}$, not on $v$-solvability. The operational relevance of final results is partly recovered in \cite[p.21]{PR99} by adapting to $v$-unsolvables the notion of order of a term \cite{Lon83,Abr90} in the following fashion: a \mbox{$v$-unsolvable} $M$ is of order $n$ iff it reduces under the so-called `inner machine' to $\lambda x_1\ldots x_n.B$ where $n$ is maximal. That is, $M$ reduces to a value with $n$ lambdas. If $M$ has order $0$ then it does not reduce to a value. If $M$ has order $n>0$ then $M$ accepts $n-1$ operands and reduces to a value. For example, $\Term{\Omega}$ has order 0, and $\lambda x.\Term{\Omega}$ and $\Term{U}$ have order 1. With this notion of order, definite results include $v$-solvables and \mbox{$v$-unsolvables} of order $n>0$. This corresponds with the behaviour of SECD. The \mbox{$v$-unsolvables} of order $0$ denote the least element of the model $\Model{H}$ of \cite{EHR92} and can be equated without loss of consistency. However, the `inner machine' is a call-by-value reduction strategy of $\lamK$. It performs $\beta$-reduction, reducing redexes when the operand is not a value. 
Furthermore, $v$-unsolvables of order $n>0$, which according to \cite{PR99} are operationally irrelevant because no arbitrary result can be obtained from them, are definite results. These \mbox{$v$-unsolvables} cannot be consistently equated \cite{PR99} and thus the model $\Model{H}$ is not sensible. Moreover, it is not semi-sensible since some $v$-solvables can be equated to $v$-unsolvables (Thm.~5.12 in \cite[p.22]{PR99}). Finally, the operational characterisation of \mbox{$v$-solvability}, namely having a $v$-hnf, is given by the so-called `ahead machine' which is also a reduction strategy of $\lamK$, not of $\lambda_{\vv}$. The reason why $v$-solvability does not capture operational relevance in $\lambda_{\vv}$ is that it is based on \textsc{SolI}, which requires the \emph{universally} (any $X$) quantified Statement~\ref{prop:4.1Wad76-lamV} to hold. The solution lies in adapting to $\lambda_{\vv}$ the \emph{existentially} (has some \mbox{$\betaV$-nf}) quantified \textsc{SolN} definition with open and non-closing contexts. As we shall see, there are two ways to solve a term in $\lambda_{\vv}$. One is to apply it to suitable values to obtain any given value (or closed value as in \mbox{$v$-solvability}). We call this \emph{transforming} the application. Another is to pass suitable values to obtain some \mbox{$\betaV$-nf}. We call this \emph{freezing} the application. Terms like $\Term{U}$ cannot be transformed but can be frozen. In \cite[p.36]{RP04} it is the open body of $\Term{U}$, \ie\ $\Term{B}\equiv(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}$, that is considered operationally equivalent to $\Term{\Omega}$. Now, $\Term{B}$ is not a value, but it is a \mbox{$\betaV$-nf}, a definite result of $\lambda_{\vv}$. The difference between $\Term{B}$ and $\Term{\Omega}$ lies in the value substituted for $x$.
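This difference can be checked mechanically with a naive $\betaV$-reducer. The Python sketch below is our own simplified encoding, not part of the formal development: variables and abstractions are the values, redexes fire only on value operands, substitution is naive (which suffices for these capture-free examples), and a fuel bound stands in for divergence detection:

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a).  Values are
# variables and abstractions.
def subst(t, x, s):
    """Naive substitution t[x := s]; safe for the capture-free examples below."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def vstep(t):
    """One beta-V step (leftmost), or None if t is a beta-V normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam' and a[0] in ('var', 'lam'):   # operand is a value
            return subst(f[2], f[1], a)
        for i in (1, 2):                               # else reduce inside
            r = vstep(t[i])
            if r is not None:
                return ('app', r, a) if i == 1 else ('app', f, r)
        return None
    if t[0] == 'lam':
        r = vstep(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def vnormalize(t, fuel=50):
    for _ in range(fuel):
        r = vstep(t)
        if r is None:
            return t
        t = r
    return None   # no beta-V-nf found within fuel: (apparent) divergence

I = ('lam', 'u', ('var', 'u'))
D = ('lam', 'w', ('app', ('var', 'w'), ('var', 'w')))          # Delta
B = ('app', ('app', ('lam', 'y', D), ('app', ('var', 'x'), I)), D)

# Substituting the open value \v.z I for x freezes B to a stuck beta-V-nf:
stuck = vnormalize(subst(B, 'x', ('lam', 'v', ('app', ('var', 'z'), I))))
print(stuck is not None)              # True: a beta-V normal form is reached

# Substituting any closed value (e.g. I) for x makes B diverge:
print(vnormalize(subst(B, 'x', I)))   # None
```

With the open value the reducer halts at $(\lambda y.\Term{\Delta})(z\,\Term{I})\Term{\Delta}$, the stuck described above; with any closed value the fuel runs out, mirroring divergence.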
The intuition is best expressed using the following experiment paraphrased from \cite[p.4]{Abr90}: \begin{quote} Given [an arbitrary] term, the only experiment of depth $1$ we can do is to evaluate [weakly] and see if it converges to some abstraction [or to some neutral subsequently closed to some abstraction] $\lambda x.M_1$. If it does so, we can continue the experiment to depth 2 by supplying [an arbitrary value $N_1$ that may be open] as input to $M_1$, and so on. Note that what the experimenter can observe at each stage is only the \emph{fact} of convergence, not which term lies under the abstraction. [Note that the term \emph{reports} the need to provide a value for the blocking free variable by closing the neutral to an abstraction.] \end{quote} \section{Introducing \texorpdfstring{$\lambda_{\vv}$}{lambda-V}-solvability} \label{sec:lamV-solv} We have seen that terms like the $\Term{L}'_i$ of Section~\ref{sec:neutrals-seq}, or $\Term{U}$ and $\lambda x.\Term{\Omega}$ in the previous section, are operationally distinguishable in open contexts. We thus define solvability in $\lambda_{\vv}$ by adapting \textsc{SolF} to that calculus. \begin{defi}[\textsc{SolF$_\textsl{V}$}] A term $M\in\Lambda$ is solvable in $\lambda_{\vv}$ iff there exists $N\in\V\NF$ and there exists a function context $\ctx{F}[\enspace]$ such that $\ctx{F}[M]=_{\beta\va} N$. \end{defi} Notice that operands in function contexts may be values if so wished. Hereafter we abbreviate `$M$ is solvable in $\lambda_{\vv}$' as `$M$ is $\lambda_{\vv}$-solvable'. The set of $\lambda_{\vv}$-solvables is a proper superset of the union of the set of terms with \mbox{$\betaV$-nf}\ and the set of \mbox{$v$-solvables}. A witness example is $\Term{T}_1 \equiv (\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}(x(\lambda x.\Term{\Omega}))$. This term has no \mbox{$\betaV$-nf}.
This term is not $v$-solvable: there is no closed and closing head context sending $\Term{T}_1$ to $\Term{I}$, or to a closed value, or to a closed \mbox{$\betaV$-nf}. However, the function context $\ctx{F}[\enspace]\equiv(\lambda x.[\enspace])(\lambda x.z\,\Term{I})$ sends $\Term{T}_1$ to the \mbox{$\betaV$-nf}\ $(\lambda y.\Term{\Delta})(z\,\Term{I})\Term{\Delta}(z\,\Term{I})$. Therefore $\Term{T}_1$ is \mbox{$\lambda_{\vv}$-solvable.} Notice that $\Term{T}_1$ has $\Term{B}$ as a subterm, with the blocking variable $x$ of $\Term{B}$ the same blocking variable of the neutral $x(\lambda x.\Term{\Omega})$. The use of the same blocking variable illustrates that the function context in \textsc{SolF}$_\textsl{V}$ has to be open. There is no closed function context (nor head context) sending $\Term{T}_1$ to a \mbox{$\betaV$-nf}\ since substituting a closed value for $x$ would make $\Term{B}$ diverge. In contrast, the free variable $z$ in $\ctx{F}[\enspace]$ above is key to producing a stuck. We anticipated in Section~\ref{sec:open-open} that adapting \textsc{SolH} to $\lambda_{\vv}$ leaves solvable terms behind. The terms $\Term{U}$ and $\Term{T}_1$ are two witness examples. We now connect $\lambda_{\vv}$-solvability and operational relevance with effective use in $\lambda_{\vv}$, as we did for $\lamK$ in Section~\ref{sec:effective-use-lamK}. To this end we adapt to $\lambda_{\vv}$ the notion of `order of a term' \cite{Lon83}. \begin{defi}[Order of a term in $\lambda_{\vv}$] \label{def:oder-term-lamV} A term $M\in\Lambda$ is of order $0$ iff there is no $N$ such that $M=_{\beta\va}\lambda x.N$. A term $M\in\Lambda$ is of order $n+1$ iff $M=_{\beta\va}\lambda x.N$ and $N$ is of order $n$. In the limit, \ie\ when no maximal natural $k$ exists such that $M=_{\beta\va}\lambda x_1\ldots x_k.N$, we say $M$ is of order $\omega$. \end{defi} This definition differs from the one in \cite[p.21]{PR99}.
The latter is for \mbox{$v$-unsolvables} and uses the `inner machine' which is a reduction strategy of $\lamK$ (Section~\ref{sec:v-solv}). Ours is for arbitrary terms (not just $\lambda_{\vv}$-unsolvables) and uses $\betaV$-conversion. The order of a term is thus an ordinal ranging over the finite ordinals (\ie\ the naturals) and the first limit ordinal $\omega$. An example of a term of order $\omega$ is $\Term{Y}\,\Term{K}$ where $\Term{Y}$ is Curry's fixed-point combinator (see Prop.~2.7.(iv) in \cite[p.6]{Abr90} and Ex.~2 in \cite[p.502]{Wad76}). The term $\Term{Y}\,\Term{K}$ $\betaV$-converts to $\lambda x_1\ldots x_k.\Term{Y}\,\Term{K}$ with $k$ arbitrarily large. Notice that a term of order $\omega$ has no \mbox{$\betaV$-nf}\ and is $\lambda_{\vv}$-unsolvable. With this notion of order at hand we can now state our version of \textsc{SolC} for $\lambda_{\vv}$. \begin{defi}[\textsc{SolC$_\textsl{V}$}] \label{def:solvV} A term $M\in\Lambda$ of order $n$ is solvable in $\lambda_{\vv}$ iff there exists a context $\ctx{C}[\enspace]$ such that $\ctx{C}[M]=_{\beta\va} N$ for some $N\in\V\mathsf{NF}$, and not for all $X\in\Lambda$ of order $m\geq n$ it is the case that $\ctx{C}[X]=_{\beta\va} N$. \end{defi} Note that $X\in\mathsf{Val}$ is allowed by the definition. As was the case in $\lamK$ (Section~\ref{sec:effective-use-lamK}), the piece that lets us obtain \textsc{SolC}$_\textsl{V}$ from \textsc{SolF}$_\textsl{V}$ is a genericity lemma which in $\lambda_{\vv}$ has to take into account the order of $\lambda_{\vv}$-unsolvables. \begin{lem}[Partial Genericity Lemma] \label{lem:partial-genericity} Let $M\in\Lambda$ be of order $n$ and $N\in\V\NF$. $M$ is $\lambda_{\vv}$-unsolvable implies that for all contexts $\ctx{C}[\enspace]$, if $\ctx{C}[M]=_{\beta\va} N$ then for all $X\in\Lambda$ of order $m\geq n$ it is the case that $\ctx{C}[X]=_{\beta\va} N$. \end{lem} We postpone the proof to Section~\ref{sec:partial-genericity-lemma} and focus here on the intuitions.
The lemma tells us that $\lambda_{\vv}$-unsolvables of order $n$ are \emph{partially} generic, \ie\ they are generic for terms of order $m \geq n$. A $\lambda_{\vv}$-solvable can be used effectively to produce a \mbox{$\betaV$-nf}\ therefore \mbox{$\lambda_{\vv}$-solvability} is synonymous with operational relevance. However, not all $\lambda_{\vv}$-unsolvables are totally undefined. Only $\lambda_{\vv}$-unsolvables of order $0$ are totally undefined. A $\lambda_{\vv}$-unsolvable of order $n$ cannot be used effectively to produce a \mbox{$\betaV$-nf}, but it can be used trivially (discarded) after receiving at most $n-1$ operands. Hence, it is partially defined. For example, take $M\equiv\lambda x.\lambda y.\Term{\Omega}$. This term is $\lambda_{\vv}$-unsolvable of order $2$. The context $\ctx{C}[\enspace]\equiv(\lambda x.(\lambda y.\Term{I})(x\,\Term{\Delta}))[\enspace]$ uses $M$ first `administratively' (\ie\ passes $\Term{\Delta}$ to it) and then `trivially' (\ie\ discards the result) such that $\ctx{C}[M]=_{\beta\va} \Term{I}$. Replacing $M$ with a totally undefined term like $\Term{\Omega}$ would make $\ctx{C}[\Term{\Omega}]$ diverge. But since $\ctx{C}[\enspace]$ uses $M$ only up to passing one argument, $M$ could be replaced by any term $X$ of order $2$ and still $\ctx{C}[X]=_{\beta\va}\Term{I}$. The Partial Genericity Lemma is stated as an implication but, as was the case with Lemma~\ref{lem:lamK-genericity-lemma}, it is an equivalence. Clearly, if $M$ is $\lambda_{\vv}$-solvable then there exists $\ctx{C}[\enspace]\equiv\ctx{F}[\enspace]$ such that $\ctx{F}[M]=_{\beta\va} N$, and by the shape of $\ctx{F}[\enspace]$ it is not the case that for all $X\in\Lambda$ of order $m\geq n$, $\ctx{F}[X]=_{\beta\va} N$. Take for instance $\ctx{F}[\lambda x_1\ldots x_m.\Term{\Omega}]$ which diverges. Stated as an equivalence, the Partial Genericity Lemma coincides with \textsc{SolC}$_\textsl{V}$ when read in the inverse. 
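The syntactic part of the notion of order above can be exercised mechanically. The following Python sketch uses a purely illustrative tuple encoding of terms (our own, not part of the development); counting leading lambdas only gives a lower bound of the order, since the actual definition is up to $\betaV$-conversion, which is undecidable:

```python
# Illustrative sketch: terms encoded as nested tuples
#   ('var', name) | ('lam', name, body) | ('app', fun, arg).
# Counting leading lambdas gives a *lower bound* on the order of a term;
# the order proper is defined up to beta_V-conversion.

def syntactic_order(term):
    """Number of leading lambdas of a term as written."""
    n = 0
    while term[0] == 'lam':
        n += 1
        term = term[2]
    return n

# Delta = lambda x.x x, Omega = Delta Delta (order-0 lambda_V-unsolvable)
DELTA = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
OMEGA = ('app', DELTA, DELTA)

# The running example lambda x.lambda y.Omega is of order 2
M = ('lam', 'x', ('lam', 'y', OMEGA))
assert syntactic_order(OMEGA) == 0
assert syntactic_order(M) == 2
```

For terms already spelled out as $\lambda x_1\ldots x_k.N$ with $N$ not $\betaV$-convertible to an abstraction, the bound is exact; for a term of order $\omega$ such as $\Term{Y}\,\Term{K}$ no such syntactic count terminates at the order.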
Pure $\lambda_{\vv}$ still has `functional character' \cite{CDV81,EHR92} but its notion of operational relevance takes into account trivial uses of terms that occur inside operands of other terms up to administratively passing them a number of operands. More precisely, if a term occurs inside the operand of another term then it has `negative polarity'. Otherwise it has `positive polarity'. The import of polarity for operational relevance is inherent to the duality between call-by-name and call-by-value \cite{CH00}. Subterms with positive polarity are used effectively. Subterms with negative polarity may or may not occur eventually with positive polarity, in which case they would respectively be used effectively or trivially (perhaps after receiving some operands). The partially generic terms may only be used trivially (up to order $n$) to produce a \mbox{$\betaV$-nf}\ if they occur with negative polarity. Partially generic terms can be equated according to their order without loss of consistency. More precisely, given the set \begin{displaymath} \mathcal{V}_0 = \{M = N~|~M,N\in\Lambda^0\ \text{are $\lambda_{\vv}$-unsolvables of the same order}\} \end{displaymath} a consistent extended proof-theory $\mathcal{V}$ results from adding $\mathcal{V}_0$'s equations as axioms to $\lambda_{\vv}$ (\ie\ $\mathcal{V} = \mathcal{V}_0+\lambda_{\vv}$). The consistency of $\mathcal{V}$ is proved in Section~\ref{sec:consistent-lamV-theory}. We say that a consistent extension where $\lambda_{\vv}$-unsolvables of the same order are equated (\ie\ contains $\mathcal{V}$) is \emph{$\omega$-sensible}. Since the operational experiments that we have in mind (Sections~\ref{sec:neutrals-seq} and~\ref{sec:v-solv}) distinguish sequentiality features, no $\omega$-sensible \emph{functional models} (\emph{e.g.}, models that are solutions to the domain equation ${D \cong [D \to_\bot D]}$ for strict functions) seem to exist.
However, we conjecture the existence of $\omega$-sensible models that may resemble the `sequential algorithms' of \cite{BC82}. The notion of operational relevance in $\lambda_{\vv}$ that we advocate calls for increased `separating capabilities' (in the spirit of \cite{Cur07}) that $\omega$-sensible models would exhibit. Such capabilities are not present in existing models for `lazy' call-by-value (\emph{e.g.}, the model H in \cite{EHR92} based on the solution to the domain equation ${D \cong [D \to_\bot D]_\bot}$ for lifted strict functions). We also conjecture that existing functional models could be constructed from $\omega$-sensible models via some quotient that blurs the differences in sequentiality. As for the operational characterisation of $\lambda_{\vv}$-solvables, that is, a reduction strategy of $\lambda_{\vv}$ that terminates iff the input term is \mbox{$\lambda_{\vv}$-solvable}, we postpone the discussion to Section~\ref{sec:operational-characterisation}. \section{Towards the Partial Genericity Lemma} \label{sec:partial-genericity-lemma} Our proof of the Partial Genericity Lemma is based on the proof of $\lamK$'s Genericity Lemma presented in \cite{BKV00} that uses origin tracking. Given a reduction sequence $M\rel{\beta}\ldots\rel{\beta} N$ with $N\in\mathsf{NF}$, origin tracking traces the symbols in $N$ back to a prefix of $M$ (\ie\ a `useful' part) which is followed by a lower part (\ie\ the `garbage') that does not affect the result $N$. The tracking mechanism employs a refinement of Lévy-labels \cite{Lev75}. In our case the reduction sequence is $M\rel{\beta\va}\ldots\rel{\beta\va} N$ with $N\in\V\NF$. Instead of tracking the symbols in $N$ back to the useful part in $M$, we mark as garbage a predefined subterm in $M$, namely, the $\lambda_{\vv}$-unsolvable of order $n$ that we want to test for partial genericity. We track this subterm forwards and check that it is discarded in the reduction sequence before passing $n$ operands to it.
To this end we need two main ingredients: (i) a reduction strategy that is complete with respect to \mbox{$\betaV$-nf}\ (Section~\ref{sec:value-normal-order}) and (ii) a tracking mechanism that keeps count of the number of operands that are passed to a predefined subterm (Section~\ref{sec:labelling}). We prove that the predefined term is discarded by the complete reduction strategy after receiving at most $n-1$ operands (Section~\ref{sec:pgl-stated}). Confluence allows us to generalise from the reduction strategy to any reduction sequence ending in \mbox{$\betaV$-nf}. \subsection{Value normal order} \label{sec:value-normal-order} The first ingredient we need is a reduction strategy of reference that is complete with respect to \mbox{$\betaV$-nf}. We define one such strategy and call it \emph{value normal order} because we have defined it by adapting to $\lambda_{\vv}$ the results in \cite{BKKS87} relative to the complete \emph{normal order} strategy of $\lamK$ mentioned in Section~\ref{sec:prelim}. Those results are collected in App.~\ref{app:head-and-head-spine} for ease of reference. In this section we introduce their analogues for $\lambda_{\vv}$. The unacquainted reader may find it useful to read App.~\ref{app:head-and-head-spine} and this section in parallel. We note in advance that value normal order is not quite the same strategy as the complete reduction strategy of $\lambda_{\vv}$ named $\rel{\Gamma}^p$ that is obtained as an instantiation of the `principal reduction machine' of \cite{RP04}. The latter reduces the body and operator of a block in right-to-left fashion whereas value normal order uses the more natural left-to-right order (see Section~\ref{sec:related-work} for details). This difference does not affect completeness because both strategies entail standard reduction sequences (a notion defined in \cite[p.137]{Plo75} for the applied $\lambda_{\vv}$ and adapted to pure $\lambda_{\vv}$ in Def.~\ref{def:v-standard} below).
For every $\lambda_{\vv}$ reduction sequence from $M$ to $N$, there exists a standard reduction sequence that starts at $M$ and ends at $N$. A reduction strategy that entails standard reduction sequences and that arrives at a \mbox{$\betaV$-nf}\ is complete. And standard reduction sequences are not unique (Section~\ref{sec:complete-standard}). Normal order can be defined as follows. The \emph{active components} of a term \cite[Def.~2.3]{BKKS87} (\ie\ the maximal subterms that are not in hnf) are considered in left-to-right fashion and reduced by \emph{head reduction} \cite[Def.~8.3.10]{Bar84}. At the start, the input term is the only active component if it is not a hnf. Once a hnf\ is reached, its active components occur as subterms inside a `frozen' \mbox{$\betaK$-nf}\ context. Every time the hnf\ of an active component is reached, the subsequent active components in it (if any) are recursively considered in left-to-right fashion. We define value normal order by adapting this pattern to $\lambda_{\vv}$. In particular, we adapt the definition of needed redex, of active component, and of head reduction, whose analogue we have called `chest reduction' following the convention of \cite[Sec.~4]{BKKS87} of considering the abstract syntax tree of a term and an anatomical analogy for terms. First we adapt the notion of needed redex \cite[p.212]{BKKS87} to $\lambda_{\vv}$: \begin{defi}[Needed redex in $\lambda_{\vv}$] \label{def:betaV-needed} Let $M\in\Lambda$ and let $R$ be a \mbox{$\betaV$-redex} in $M$. $R$ is needed iff every reduction sequence of $M$ to \mbox{$\betaV$-nf}\ contracts (some residual of) $R$. \end{defi} The \emph{chest} and \emph{ribcage} of a term provide progressively better approximations to the set of needed $\betaV$-redexes of a term. The chest of the term contains the head of the term and the outermost \emph{ribs}, that is, all the nodes connected by application nodes to the head of the term save for the \emph{rib ends}.
The rib ends are the nodes descending through lambda nodes from the ribs. The ribcage of a term consists of the head spine and the ribs connected to the head spine, that is, all the nodes connected by application nodes to the head spine of the term save for the rib ends. Fig.~\ref{fig:chest-ribcage} illustrates these notions with an example that is further developed after the following formal definition of chest and ribcage. In Def.~\ref{def:chest-ribcage} below we define the functions $\textup{bv}$, $\textup{ch}$, and $\textup{rc}$. The last two underline respectively the chest and the ribcage of a term. Both rely on the auxiliary function $\textup{bv}$, which is related to call-by-value as explained further below. \begin{defi}[Chest and ribcage] \label{def:chest-ribcage} Functions $\textup{ch}$ and $\textup{rc}$ underline the chest and the ribcage of a term respectively. \begin{displaymath} \begin{array}{rcl} \textup{bv}(x) &=& \underline{x}\\ \textup{bv}(\lambda x.B) &=& \underline{\lambda x}.B\\ \textup{bv}(M\,N) &=& \textup{bv}(M)\textup{bv}(N)\\[4pt] \textup{ch}(x) &=& \underline{x}\\ \textup{ch}(\lambda x.B) &=& \underline{\lambda x}.\textup{ch}(B)\\ \textup{ch}(M\,N) &=& \textup{bv}(M)\textup{bv}(N)\\[4pt] \textup{rc}(x) &=& \underline{x}\\ \textup{rc}(\lambda x.B) &=& \underline{\lambda x}.\textup{rc}(B)\\ \textup{rc}(M\,N) &=& \textup{rc}(M)\textup{bv}(N) \end{array} \end{displaymath} A $\betaV$-redex is chest (resp. ribcage) if the outermost lambda of it is underlined by function $\textup{ch}$ (resp. $\textup{rc}$). \end{defi} Function $\textup{bv}$ underlines the outermost lambda of the $\betaV$-redexes that are reduced by the call-by-value strategy of pure $\lambda_{\vv}$ (Def.~\ref{def:call-by-value}). This strategy differs from its homonym in \cite[p.136]{Plo75} which is for an applied version of the calculus. See \cite{Fel87,Ses02,RP04} for details on the difference. The chest and ribcage $\betaV$-redexes realise the idea that operands in applications must be reduced to a value.
\begin{figure} \begin{center} \begin{tikzpicture} [ level distance=1cm, level 2/.style={sibling distance=2.5cm}, level 3/.style={sibling distance=1.5cm}, level 4/.style={sibling distance=1.5cm}, level 5/.style={sibling distance=1.5cm}, chest/.style={very thick}, ribcage/.style={dotted}, norm/.style={thin,solid} ] \begin{scope} \node (lx) {$\lambda x$} child[chest]{ node (a1) {$@$} child{ node (a2) {$@$} child{ node (ly) {$\lambda y$} child[ribcage]{ node (a3) {$@$} child{ node (y) {$y$} } child{ node (a4) {$@$} child{ node (lz) {$\lambda z$} child[norm]{ node (m1) {$M_1$} }} child{ node (x3) {$x$} }}}} child{ node (x1) {$x$} }} child{ node (a3) {$@$} child{ node (lt) {$\lambda t$} child[norm]{ node (m2) {$M_2$} }} child{ node (x2) {$x$}}}}; \end{scope} \end{tikzpicture} \end{center} \caption{Chest (thick edges) and ribcage (thick edges and dotted edges) of the term $\lambda x.(\lambda y.y((\lambda z.M_1)x))x((\lambda t.M_2)x)$.} \label{fig:chest-ribcage} \end{figure} As an example, consider the term whose abstract syntax tree is depicted in Fig~\ref{fig:chest-ribcage}. The chest (thick edges in the figure) is underlined in $\underline{\lambda x.(\lambda y}.y((\lambda z.M_1)x)) \underline{x((\lambda t}.M_2) {\color{black}\underline{{\color{white}\underline{{\color{black}x}}}}})$. The ribcage (thick edges and dotted edges) is underlined in $\underline{\lambda x.(\lambda y.y((\lambda z}.M_1)\underline{x))x ((\lambda t}.M_2) {\color{black}\underline{{\color{white} \underline{{\color{black}x}}}}})$. The subterms $M_1$ and $M_2$ are the rib ends. The subterms $(\lambda y.y((\lambda z.M_1)x))x$ and $(\lambda t.M_2)x$ are both chest and ribcage $\betaV$-redexes. (The former is also a head and head-spine $\beta$-redex, and the latter is neither head nor head-spine.) The subterm $(\lambda z.M_1)x$ is a ribcage $\betaV$-redex but it is neither a chest \mbox{$\betaV$-redex}, nor a head or head spine $\beta$-redex. 
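The underlining functions $\textup{bv}$, $\textup{ch}$, and $\textup{rc}$ transcribe directly into code. The following Python sketch (the tuple encoding and the path representation are our own, purely illustrative) returns, instead of an underlined term, the set of paths of the underlined nodes, and checks the example of Fig.~\ref{fig:chest-ribcage} with $M_1$ and $M_2$ taken to be variables for concreteness:

```python
# Terms: ('var',x) | ('lam',x,B) | ('app',M,N). A path is a tuple of child
# indices (0 = body/operator, 1 = operand). Each function returns the set
# of paths of the nodes it underlines, mirroring the definition above.

def bv(t, p=()):
    if t[0] == 'var':
        return {p}
    if t[0] == 'lam':
        return {p}                      # underline the lambda, skip the body
    return bv(t[1], p + (0,)) | bv(t[2], p + (1,))

def ch(t, p=()):
    if t[0] == 'var':
        return {p}
    if t[0] == 'lam':
        return {p} | ch(t[2], p + (0,))
    return bv(t[1], p + (0,)) | bv(t[2], p + (1,))

def rc(t, p=()):
    if t[0] == 'var':
        return {p}
    if t[0] == 'lam':
        return {p} | rc(t[2], p + (0,))
    return rc(t[1], p + (0,)) | bv(t[2], p + (1,))

# The term of the figure: lambda x.(lambda y.y((lambda z.M1)x))x((lambda t.M2)x)
LZ = ('lam', 'z', ('var', 'm1'))
LY = ('lam', 'y', ('app', ('var', 'y'), ('app', LZ, ('var', 'x'))))
LT = ('lam', 't', ('var', 'm2'))
T  = ('lam', 'x', ('app', ('app', LY, ('var', 'x')),
                          ('app', LT, ('var', 'x'))))

# (lambda y...)x and (lambda t...)x are chest (hence also ribcage) redexes:
assert (0, 0, 0) in ch(T) and (0, 1, 0) in ch(T)
# (lambda z.M1)x is a ribcage beta_V-redex but not a chest one:
assert (0, 0, 0, 0, 1, 0) in rc(T)
assert (0, 0, 0, 0, 1, 0) not in ch(T)
```

Since $\textup{rc}$ recurses wherever $\textup{bv}$ and $\textup{ch}$ stop, every node underlined by $\textup{ch}$ is also underlined by $\textup{rc}$ (`ch(T) <= rc(T)` in the encoding above), matching the claim that the ribcage refines the chest approximation.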
We now define call-by-value and chest reduction using a (context-based) reduction semantics~\cite{Fel87} which is a handy device for defining small-step reduction strategies. It consists of EBNF-grammars for terms, irreducible forms, and reduction contexts, together with a contraction rule for redexes within context holes. The reduction strategy is defined by the iteration of single-step reductions which consist of (i) uniquely decomposing the term into a reduction context plus a redex within the hole, (ii) contracting the redex within the hole, and (iii) recomposing the resulting term. The iteration terminates iff the term is irreducible. Call-by-value is the strategy that contracts the leftmost $\betaV$-redex that is not inside an abstraction \cite[p.42]{Fel87}. Chest reduction is the strategy that contracts the leftmost chest $\betaV$-redex. Observe that the reduction contexts of chest reduction contain the reduction contexts of call-by-value. \begin{defi}[Call-by-value strategy] \label{def:call-by-value} The call-by-value strategy $\rel{\textsl{V}}$ is defined by the following reduction semantics: \begin{displaymath} \begin{array}{rcl} \ctx{BV}[\enspace] &::=&[\enspace]~|~\ctx{BV}[\enspace]\,\Lambda~|~ \V\W\NF\,\ctx{BV}[\enspace]\\ \V\W\NF &::=& \mathsf{Val}~|~ \Neu\W \\ \Neu\W &::=& x\,\V\W\NF\,\{\V\W\NF\}^* ~|~ \Block\W\,\{\V\W\NF\}^* \\ \Block\W &::=& (\lambda x.\Lambda)\,\Neu\W \\ \multicolumn{3}{l}{\ctx{BV}[(\lambda x.B)N] \rel{\textsl{V}} \ctx{BV}[\cas{N}{x}{B}] \quad\quad\textup{with}\ N\in\mathsf{Val}} \end{array} \end{displaymath} \end{defi}\medskip \noindent The set $\V\W\NF$ of $\betaV$-weak-normal-forms ({vwnf}s for short) consists of the terms that do not have $\betaV$-redexes except under abstraction. It contains values and neutrals in vwnf.
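The decompose-contract-recompose loop of the call-by-value strategy can be sketched in a few lines. The Python below is an illustrative rendering under our own tuple encoding; substitution is naive (not capture-avoiding), which suffices for the capture-free examples used here:

```python
# Terms: ('var',x) | ('lam',x,B) | ('app',M,N).

def is_value(t):
    return t[0] in ('var', 'lam')

def subst(t, x, v):
    """Naive substitution [v/x]t (not capture-avoiding)."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def cbv_step(t):
    """Contract the leftmost beta_V-redex not under an abstraction;
    return None if t is a beta_V-weak-normal-form (vwnf)."""
    if t[0] != 'app':
        return None                      # variables and abstractions: vwnfs
    m, n = t[1], t[2]
    s = cbv_step(m)
    if s is not None:                    # context BV[] Lambda
        return ('app', s, n)
    s = cbv_step(n)
    if s is not None:                    # context VWNF BV[]
        return ('app', m, s)
    if m[0] == 'lam' and is_value(n):    # the redex within the hole
        return subst(m[2], m[1], n)
    return None                          # a neutral or a block: stuck

I = ('lam', 'w', ('var', 'w'))
assert cbv_step(('app', I, I)) == I      # (lambda w.w) I reduces to I
# (lambda w.w)(z I) is stuck: the operand z I is a neutral, not a value
assert cbv_step(('app', I, ('app', ('var', 'z'), I))) is None
```

When `cbv_step` returns `None` on an application whose operator is an abstraction but whose operand is a neutral, the term is a block, matching the grammar of $\Neu\W$ and $\Block\W$ above.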
\begin{defi}[Chest reduction] \label{def:chest-reduction} The chest-reduction strategy $\rel{\stgy{ch}}$ is defined by the following reduction semantics: \begin{displaymath} \begin{array}{l} \ctx{CH}[\enspace]\ ::=\ [\enspace]\ |\ \ctx{BV}[\enspace]\,\Lambda\ |\ \V\W\NF\,\ctx{BV}[\enspace]\ |\ \lambda x.\ctx{CH}[\enspace] \\ \\ \ctx{CH}[(\lambda x.B)N] \rel{\stgy{ch}} \ctx{CH}[\cas{N}{x}{B}] \quad\quad\textup{with}\ N\in\mathsf{Val} \end{array} \end{displaymath} \end{defi}\medskip \noindent The set $\CH\NF ::= x~|~\lambda x.\CH\NF~|~\Neu\W$ of chest normal forms ({chnf}s for short) consists of variables, abstractions with body in chnf, and neutrals in vwnf. A chest normal form has the following shape: \begin{displaymath} \lambda x_1\ldots x_n.(\lambda y_p.B_p) (~\cdots((\lambda y_1.B_1)(z\,W_1^0\cdots W_{m_0}^0)W_1^1\cdots W_{m_1}^1) \cdots~)W_1^p\cdots W_{m_p}^p \end{displaymath} where $n\geq 0$, $p\geq 0$, $m_0\geq 0$, $m_1\geq 0$, \ldots, $m_p\geq 0$, and the $W_i^j$ are in vwnf. We say that $M\,W_1^j\cdots W_{m_j}^j$ is an \emph{accumulator}, where $M$ is its leftmost operator which is either a variable or a block. The operand of the block in an accumulator could be, in turn, an accumulator, and accumulators are nested in this way, where the innermost one has a variable as its leftmost operator. We call this variable the \emph{blocking variable}, which is variable $z$ in the term above. The term $\Term{T}_1\equiv(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}(x(\lambda x.\Term{\Omega}))$ introduced in Section~\ref{sec:lamV-solv} is an example of a chnf\ that has no \mbox{$\betaV$-nf}.
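Since the chest contexts extend the call-by-value contexts only by the production $\lambda x.\ctx{CH}[\enspace]$, chest reduction amounts to call-by-value pushed under leading lambdas. The following Python sketch (again under our own illustrative tuple encoding, with the call-by-value auxiliaries repeated so it runs standalone) shows a term that is a vwnf but not a chnf:

```python
# Terms: ('var',x) | ('lam',x,B) | ('app',M,N).

def is_value(t):
    return t[0] in ('var', 'lam')

def subst(t, x, v):
    """Naive substitution [v/x]t (fine for the capture-free examples here)."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def cbv_step(t):
    """Leftmost beta_V-redex not under an abstraction; None on vwnfs."""
    if t[0] != 'app':
        return None
    m, n = t[1], t[2]
    s = cbv_step(m)
    if s is not None:
        return ('app', s, n)
    s = cbv_step(n)
    if s is not None:
        return ('app', m, s)
    if m[0] == 'lam' and is_value(n):
        return subst(m[2], m[1], n)
    return None

def chest_step(t):
    """Leftmost chest beta_V-redex; None on chnfs. Chest contexts extend
    call-by-value contexts with the production lambda x.CH[]."""
    if t[0] == 'lam':
        s = chest_step(t[2])
        return None if s is None else ('lam', t[1], s)
    return cbv_step(t)

T = ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))
assert cbv_step(T) is None                            # T is a vwnf...
assert chest_step(T) == ('lam', 'x', ('var', 'x'))    # ...but not a chnf
```

The term $\lambda x.(\lambda y.y)x$ is irreducible by call-by-value (the redex sits under the abstraction) yet has a chest redex, so `chest_step` contracts it to $\lambda x.x$.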
\begin{defi}[Ribcage reduction] \label{def:ribcage-reduction} The ribcage-reduction strategy $\rel{\stgy{rc}}$ is defined by the following reduction semantics: \begin{displaymath} \begin{array}{l} \ctx{RC}[\enspace]\ ::=\ [\enspace]\ |\ \ctx{RC}[\enspace]\,\Lambda\ |\ \V\W\NF\,\ctx{BV}[\enspace]\ |\ \lambda x.\ctx{RC}[\enspace] \\ \\ \ctx{RC}[(\lambda x.B)N] \rel{\stgy{rc}} \ctx{RC}[\cas{N}{x}{B}] \quad\quad\textup{with}\ B\in\CH\NF\ \textup{and}\ N\in\mathsf{Val} \end{array} \end{displaymath} \end{defi}\medskip \noindent Ribcage reduction delivers a chnf\ if the term has one. (A term can convert to several $\betaV$-convertible {chnf}s that differ in the rib ends.) The only difference with respect to chest reduction is that ribcage reduction contracts the body of a $\betaV$-redex to chnf\ before contracting the \mbox{$\betaV$-redex}. \begin{defi}[Active components in $\lambda_{\vv}$] \label{def:active-components-lamV} The $\lambda_{\vv}$-active components of $M\in\Lambda$ are the maximal subterms of $M$ that are not in chnf. \end{defi} Paraphrasing \cite[p.195]{BKKS87} to the $\lambda_{\vv}$ case: \begin{quote} The word ``active'' refers to the fact that the [$\lambda_{\vv}$-active] components are embedded in a context which is ``frozen'', \ie\ a [\mbox{$\betaV$-nf}] when the holes are viewed as variables. (This frozen context of $M$ is the trivial empty context if $M$ is not a [chnf].) \end{quote} A \mbox{$\betaV$-nf}\ has no $\lambda_{\vv}$-active components. The \mbox{$\lambda_{\vv}$-active} components of a term are disjoint. For example, the $\lambda_{\vv}$-active components of $\lambda x.x(\lambda y.\Term{I}\,\Term{I})(\lambda z.(\lambda t.z)(x\,y)(\Term{I}\,\Term{I}))$ are the subterms $\lambda y.\Term{I}\,\Term{I}$ and $\lambda z.(\lambda t.z)(x\,y)(\Term{I}\,\Term{I})$. Value normal order is defined in terms of chest reduction as follows.
The $\lambda_{\vv}$-active components of the term are considered in left-to-right fashion and reduced by chest reduction. (The following lines paraphrase the ones for normal order written at the beginning of the section.) At the start, the input term is the only $\lambda_{\vv}$-active component if it is not a chnf. Once a chnf\ is reached, the \mbox{$\lambda_{\vv}$-active} components in it (if any) are subterms inside a `frozen' \mbox{$\betaV$-nf}\ context. Every time the chnf\ of a $\lambda_{\vv}$-active component is reached, the subsequent $\lambda_{\vv}$-active components in it (if any) are recursively considered in left-to-right fashion. \begin{defi}[Value normal order] \label{def:value-normal-order} The value normal order strategy $\rel{\stgy{vn}}$ is defined by the following reduction semantics: \begin{displaymath} \ctx{A}[\ctx{CH}[(\lambda x.B)N]]\rel{\stgy{vn}} \ctx{A}[\ctx{CH}[\cas{N}{x}{B}]] \quad\quad\textup{with}\ N\in\mathsf{Val} \end{displaymath} where $\ctx{CH}[\enspace]$ is a chest reduction context and $\ctx{CH}[(\lambda x.B)N]$ is the leftmost $\lambda_{\vv}$-active component of $\ctx{A}[\ctx{CH}[(\lambda x.B)N]]$, \ie\ either $\ctx{A}[\enspace]\equiv[\enspace]$ and $\ctx{CH}[(\lambda x.B)N]$ is not in chnf, or $\ctx{A}[\enspace]\not\equiv[\enspace]$ and $\ctx{A}[\ctx{CH}[(\lambda x.B)N]]$ is a chnf\ such that every subterm at the left of $\ctx{CH}[(\lambda x.B)N]$ is in \mbox{$\betaV$-nf}. \end{defi} We now adapt to pure $\lambda_{\vv}$ the notion of standard reduction sequence in \cite[p.137]{Plo75}. \begin{defi}[Standard reduction sequence in $\lambda_{\vv}$] \label{def:v-standard} A standard reduction sequence (abbrev. SRS) is a sequence of terms defined inductively as follows: \begin{enumerate} \item \label{it:variable} Any variable $x$ is a SRS. \item \label{it:prepend-value} If $N_2,\ldots,N_k$ is a SRS and $N_1 \rel{\textsl{V}} N_2$, then $N_1, \ldots,N_k$ is a SRS.
\item \label{it:lambda} If $N_1,\ldots,N_k$ is a SRS then $\lambda x.N_1, \ldots,\lambda x.N_k$ is a SRS. \item \label{it:applicative} If $M_1,\ldots,M_j$ and $N_1,\ldots,N_k$ are SRS then $M_1\,N_1,\ldots,M_j\,N_1,\ldots,M_j\,N_k$ is a SRS. \end{enumerate} \end{defi} \begin{thm} \label{thm:vnor-standard} Value normal order entails a SRS. \end{thm} \begin{proof} The reduction contexts of value normal order (Def.~\ref{def:value-normal-order}) are of the shape $\ctx{A}[\ctx{CH}[\enspace]]$ where $\ctx{CH}[\enspace]$ is a chest-reduction context (Def.~\ref{def:chest-reduction}) and, if $R$ is the next $\betaV$-redex to be contracted, then $\ctx{CH}[R]$ is the leftmost $\lambda_{\vv}$-active component of $\ctx{A}[\ctx{CH}[R]]$. The reduction contexts for value normal order can be broken down further into $\ctx{A}[\lambda x_1\ldots x_n.\ctx{BV}[\enspace]]$, where $n\geq 0$ and $\ctx{BV}[\enspace]$ is a call-by-value reduction context (Def.~\ref{def:call-by-value}). Def.~\ref{def:v-standard}(\ref{it:prepend-value}) says that any reduction sequence entailed by the reduction contexts $\ctx{BV}[\enspace]$ of $\rel{\textsl{V}}$ is standard. Def.~\ref{def:v-standard}(\ref{it:lambda}) says that these reduction sequences can be lifted to any number of surrounding lambdas, and so it ensures that chest-reduction contexts $\lambda x_1\ldots x_n.\ctx{BV}[\enspace]$ are standard. The step of locating the leftmost $\lambda_{\vv}$-active component of $\ctx{A}[\enspace]$ is standard by Def.~\ref{def:v-standard}(\ref{it:lambda}) and Def.~\ref{def:v-standard}(\ref{it:applicative}). \end{proof} \subsection{Labelling for counting operands} \label{sec:labelling} The second ingredient for the proof of the Partial Genericity Lemma is a tracking mechanism that counts the number of operands that have been passed to a particular term. 
Following \cite{Klo80,BKV00} we define this tracking by introducing a \emph{lambda calculus labelling} \cite[Def.~8.4.26]{Ter03} that specifies a generalised notion of descendant. Def.~\ref{def:counting} defines the labelling $\mathfrak{C}$ for counting. The labels range over $\{\varepsilon\}\cup\mathbb{N}$, \ie\ either an empty count $\varepsilon$ or a count $c\geq 0$. When non-empty, the count of the operator in a redex is increased, assigned to the body of the redex, and then the redex is contracted (\ie\ the operand is substituted for the free occurrences of the formal parameter in the body of the redex). \begin{defi}[Counting labelling] \label{def:counting} Let the labels $\mathbb{L}=\{\varepsilon\}\cup\mathbb{N}$ be the union of the empty count and the natural numbers. The counting labelling $\mathfrak{C}$ and the bisimulation $\mathbb{C}$ are defined by mutual induction as follows: \begin{itemize} \item The labelled terms $\mathfrak{C}(\Lambda)$ are labelled variables $x^\ell$ (with $\ell \in \mathbb{L}$), labelled abstractions $(\lambda x.B)^\ell$ (with $B$ a labelled term), and labelled applications $(M\,N)^\ell$ (with $M$ and $N$ labelled terms). The following statements about bisimulation $\mathbb{C}$ hold: \begin{itemize} \item $x\mathbb{C}x^\ell$. \item If $B\mathbb{C}B'$ then $(\lambda x.B)\mathbb{C}(\lambda x.B')^\ell$. \item If $M\mathbb{C}M'$ and $N\mathbb{C}N'$ then $(M\,N)\mathbb{C}(M'\,N')^\ell$. \end{itemize} \item Suppose $B\mathbb{C}B'$ and $N\mathbb{C}N'$ with the $\beta$-rule of the form $(\lambda x.B)N \rel{\beta} \cas{N}{x}{B}$. Let $B'\equiv C^{\ell_1}$. Consider the $\beta_{\mathfrak{c}}$-rule \begin{displaymath} ((\lambda x.C^{\ell_1})^{\ell_2}N')^{\ell_3} \rel{\beta_{\mathfrak{c}}} \left\{ \begin{array}{ll} \cas{N'}{x}{(C^c)}&\text{if}\ \ell_1=c\\ \cas{N'}{x}{(C^{c+1})}&\text{if}\ \ell_2=c\\ \cas{N'}{x}{(C^c)}&\text{if}\ \ell_3=c\\ \cas{N'}{x}{(C^\varepsilon)} &\text{if}\ \ell_1,\ell_2,\ell_3=\varepsilon \\ \end{array} \right.
\end{displaymath} where the capture avoiding substitution function for labelled terms (defined below) preserves the label of the subject of the substitution: \begin{displaymath} \begin{array}{rcl} \cas{T^{\ell_1}}{x}{(x^{\ell_2})}&=&T^{\ell_1}\\ \cas{T^{\ell_1}}{x}{((\lambda x.B^{\ell_2})^{\ell_3})}&=& (\lambda x.\cas{T^{\ell_1}}{x}{(B^{\ell_2})})^{\ell_3}\\ \cas{T^{\ell_1}}{x}{((M^{\ell_2}\,N^{\ell_3})^{\ell_4})}&=& ((\cas{T^{\ell_1}}{x}{(M^{\ell_2})})(\cas{T^{\ell_1}}{x}{(N^{\ell_3})}))^{\ell_4} \end{array} \end{displaymath} If $\ell_2=c$, rule $\beta_{\mathfrak{c}}$ increments the count of the abstraction and assigns it to the body $C$ before performing the substitution. (Below we show that if some of the $\ell_1$, $\ell_2$, and $\ell_3$ are non-empty, the first three alternatives of the $\beta_{\mathfrak{c}}$-rule coincide.) We set $\beta\mathbb{C}\beta_{\mathfrak{c}}$. \end{itemize} \end{defi}\smallskip \noindent The definition of labelled terms is extended to contexts $\mathfrak{C}(\ctx{C}[\enspace])$ in the trivial way, observing that the hole $[\enspace]$ in a labelled context does not carry any label. When no confusion arises, we will omit the epithet `labelled' for terms and contexts. Initially, all subterms have empty count $\varepsilon$ except for a particular subterm. \begin{defi} Function $\mathfrak{c}$ takes a term $M$ and delivers $M'$ such that $M\mathbb{C}M'$ and where all the subterms of $M'$ have empty count $\varepsilon$. For example, \begin{displaymath} \mathfrak{c}((\lambda x.\lambda y.x)z(\lambda x.x))= (((\lambda x.(\lambda y.x^\varepsilon)^\varepsilon)^\varepsilon z^\varepsilon)^\varepsilon(\lambda x.x^\varepsilon)^\varepsilon)^\varepsilon \end{displaymath} The labelling function $\mathfrak{c}$ is extended to contexts in the trivial way. \end{defi} Typically, we would assign the non-empty count `0' to the unsolvable subterm that we wish to trace. 
\begin{defi} Function $\mathfrak{s}$ selects a subterm $M$ in $\ctx{C}[M]$, assigning count $0$ to $M$ and empty count everywhere else in $\ctx{C}[M]$, including the proper subterms of $M$. \begin{displaymath} \mathfrak{s}(\ctx{C}[\enspace],M)=\ctx{C}'[M'^0] \quad\quad\text{where}\ \mathfrak{c}(\ctx{C}[\enspace])\equiv\ctx{C}'[\enspace]\ \text{and}\ \mathfrak{c}(M)\equiv M'^\varepsilon \end{displaymath} Notice that $\ctx{C}[M]\mathbb{C}(\mathfrak{s}(\ctx{C}[\enspace],M))$. When no confusion arises, we write $\mathfrak{s}(\ctx{C}[M])$ instead of $\mathfrak{s}(\ctx{C}[\enspace],M)$. \end{defi} Labelling $\mathfrak{C}$ serves two different purposes. It tracks some unsolvable with non-empty count, and it counts the operands that have been passed to it. Consider the $\beta_{\mathfrak{c}}$-reduction step $((\lambda x.B^{\varepsilon})^cN')^{\varepsilon} \rel{\beta_{\mathfrak{c}}} \cas{N'}{x}{(B^{c+1})}$ with $B\not\equiv x$ and $c$ a non-empty count. We are interested in counting the number of operands passed to operator $\lambda x.B$, and thus the second line of $\beta_{\mathfrak{c}}$ assigns the non-empty count $c+1$ to the body $B$ in the substitution instance $\cas{N'}{x}{(B^{c+1})}$. Notice that the tracking implemented by $\mathfrak{C}$ differs from the conventional notion of descendant \cite{Klo80,Bar84,KOV99}. In the example above, the term $\cas{N'}{x}{(B^{c+1})}$ would be a trace of $(\lambda x.B^\varepsilon)^c$ if $B\not\equiv x$. And similarly, if the label to be preserved was that of the application, as the \mbox{$\beta_{\mathfrak{c}}$-reduction} step $((\lambda x.B^\varepsilon)^\varepsilon N')^c \rel{\beta_{\mathfrak{c}}} \cas{N'}{x}{(B^c)}$ illustrates, then the term $\cas{N'}{x}{(B^c)}$ would be a trace of $((\lambda x.B^\varepsilon)^\varepsilon N')^c$ if $B\not\equiv x$.
But the \emph{traces} $\cas{N'}{x}{(B^{c+1})}$ and $\cas{N'}{x}{(B^c)}$ could never be \emph{descendants} in the conventional sense of $(\lambda x.B^\varepsilon)^c$ and $((\lambda x.B^\varepsilon)^\varepsilon N')^c$ (respectively), because according to the labelled $\beta$-reduction of \cite[p.19]{Klo80} the labels of the operator and of the redex (\ie\ the $c$ in each of the examples above) vanish. We distinguish our more refined tracking from the conventional notion of descendant by using `trace' and `origin' for the former and `descendant' and `ancestor' for the latter. Notice that all the descendants of $M$ in $\mathfrak{s}(\ctx{C}[M])$ are traces (\ie\ have non-empty count), but not all the traces of $M$ in $\mathfrak{s}(\ctx{C}[M])$ are descendants. The counting labelling $\mathfrak{C}$ can be applied to $\lambda_{\vv}$ by restricting rule $\beta_{\mathfrak{c}}$ above with $N'\in\mathfrak{C}(\mathsf{Val})$. We call the restricted rule $\beta_{\mathfrak{c}\vv}$ and set $\betaV\mathbb{C}\beta_{\mathfrak{c}\vv}$. The definition of $\lambda_{\vv}$-solvable, of order of a term, and of value normal order is extended to labelled terms in the trivial way. Our counting labelling captures accurately the number of operands that are passed to the tracked unsolvable. That is, when tracking $M$ (an unsolvable of order $n$) in $\mathfrak{s}(\ctx{C}[M])$, all the traces $M_t^c$ in the $\beta_{\mathfrak{c}}$-reduction sequence are unsolvables of order $n-c$. In order to prove this invariant we first need to show that unsolvability and `order of an unsolvable' are preserved by substitution. This result holds respectively for $\lamK$ and $\lambda_{\vv}$, by taking the definitions of solvability and of `order of a term' in the corresponding calculus: for $\lamK$, the usual definition of solvability and the `order of a term' in \cite{Lon83}; for $\lambda_{\vv}$, Def.~\ref{def:solvV} and order of a term in Section~\ref{sec:lamV-solv}. 
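The counting behaviour of $\beta_{\mathfrak{c}\vv}$ can be illustrated end to end on the order-$2$ unsolvable $\lambda x.\lambda y.\Term{\Omega}$ and the discarding context of Section~\ref{sec:lamV-solv}. The Python below is a minimal sketch under our own tuple encoding (label stored last, `None` for the empty count $\varepsilon$); when several labels are non-empty we apply the $\ell_2$ alternative first, which is harmless since the invariant makes the alternatives coincide:

```python
# Labelled terms: ('var',x,l) | ('lam',x,B,l) | ('app',M,N,l),
# with l = None (empty count) or a natural number.

def is_value(t):
    return t[0] in ('var', 'lam')

def subst(t, x, v):
    """Naive substitution; keeps the label of the subject v on variables."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v), t[3])
    return ('app', subst(t[1], x, v), subst(t[2], x, v), t[3])

def betacv_step(t):
    """One beta_cV step under call-by-value contexts; None when stuck."""
    if t[0] != 'app':
        return None
    m, n, l3 = t[1], t[2], t[3]
    s = betacv_step(m)
    if s is not None:
        return ('app', s, n, l3)
    s = betacv_step(n)
    if s is not None:
        return ('app', m, s, l3)
    if m[0] == 'lam' and is_value(n):
        x, body, l2 = m[1], m[2], m[3]
        l1 = body[-1]
        if l2 is not None:
            new = l2 + 1          # an operand is passed: increment the count
        elif l1 is not None:
            new = l1
        elif l3 is not None:
            new = l3
        else:
            new = None
        return subst(body[:-1] + (new,), x, n)
    return None

def labels(t):
    ls = {t[-1]}
    if t[0] == 'lam':
        ls |= labels(t[2])
    elif t[0] == 'app':
        ls |= labels(t[1]) | labels(t[2])
    return ls

# Track M = lambda u.lambda v.Omega (order 2) with count 0 in the context
# (lambda f.(lambda y.I)(f Delta))[]; all other labels are empty.
D = ('lam', 'z', ('app', ('var', 'z', None), ('var', 'z', None), None), None)
I = ('lam', 'w', ('var', 'w', None), None)
OMEGA = ('app', D, D, None)
M = ('lam', 'u', ('lam', 'v', OMEGA, None), 0)        # the tracked subterm
C_M = ('app',
       ('lam', 'f',
        ('app', ('lam', 'y', I, None),
                ('app', ('var', 'f', None), D, None), None), None),
       M, None)

seen, t = set(), C_M
while True:
    seen |= labels(t)
    s = betacv_step(t)
    if s is None:
        break
    t = s
assert t == I                        # the result is I, as in Section 'solv'
assert 1 in seen and 2 not in seen   # the trace got exactly 1 = n-1 operands
```

During the run the trace reaches count $1$ (after being passed $\Term{\Delta}$) and is then discarded by $(\lambda y.\Term{I})$, never reaching count $2$, in line with the invariant $n_0=c+n$ stated below.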
We present the result for $\lambda_{\vv}$ first, since it is the novel one. The result for $\lamK$ follows straightforwardly by adapting the proof of the former. \begin{lem}[Order of a $\lambda_{\vv}$-unsolvable is preserved by substitution] \label{lem:subst-pres-order-lamV} Let $M\in\Lambda$ be a \mbox{$\lambda_{\vv}$-unsolvable} of order $n$. For every $N\in\mathsf{Val}$, the substitution instance $\cas{N}{x}{M}$ is a \mbox{$\lambda_{\vv}$-unsolvable} of order $n$. \end{lem} \begin{proof} We distinguish two cases: \begin{enumerate} \item $M$ is of order $\omega$. Then $M=_{\beta\va}\lambda y_1\ldots y_k.B$ with $k$ arbitrarily large. If $x=y_i$ for some $i\leq k$, then by substitutivity and by definition of the substitution function $\cas{N}{x}{M}=_{\beta\va}\lambda y_1\ldots y_k.B$. If $x\not=y_i$ for every $i\leq k$, then by substitutivity of $=_{\beta\va}$ and by definition of the substitution function $\cas{N}{x}{M}=_{\beta\va}\lambda y_1\ldots y_k.\cas{N}{x}{B}$ and we are done. \item $M$ is of order $n<\omega$. Then $M=_{\beta\va}\lambda y_1\ldots y_n.B$ with $B$ of order $0$, and we reason by substitutivity of $=_{\beta\va}$ and by definition of the substitution function. If $x=y_i$ for some $i\leq n$ then $\cas{N}{x}{M}=_{\beta\va}\lambda y_1\ldots y_n.B$ and the lemma holds. If $x\not=y_i$ for every $i\leq n$, since $B$ is $\lambda_{\vv}$-unsolvable of order $0$ and by the definitions of \mbox{$\lambda_{\vv}$-solvability} and of the substitution function, it suffices to show that $\cas{N}{x}{B}$ is of order $0$. We proceed by \emph{reductio ad absurdum}. Assume that $\cas{N}{x}{B}$ is of order $m>0$. Then $\cas{N}{x}{B}=_{\beta\va}\lambda z_1\ldots z_m.C$. If $x=z_j$ for some $j\leq m$, then by substitutivity and by definition of the substitution function $M=_{\beta\va}\lambda y_1\ldots y_nz_1\ldots z_m.C$, which contradicts the assumptions.
If $x\not=z_j$ for every $j\leq m$, then $C\equiv \cas{N}{x}{B'}$ for some $B'$, and by substitutivity and by definition of the substitution function $M=_{\beta\va}\lambda y_1\ldots y_nz_1\ldots z_m.B'$, which also contradicts the assumptions and we are done.\qedhere \end{enumerate} \end{proof} \begin{lem}[Order of a $\lamK$-unsolvable is preserved by substitution] \label{lem:subst-pres-order-lamK} Let $M\in\Lambda$ be a \mbox{$\lamK$-unsolvable} of order $n$. For every $N\in\Lambda$, the substitution instance $\cas{N}{x}{M}$ is a \mbox{$\lamK$-unsolvable} of order $n$. \end{lem} \begin{proof} By adapting the proof of Lemma~\ref{lem:subst-pres-order-lamV} to $\lamK$ in a straightforward way. \end{proof} The invariant stated in Lemmata~\ref{lem:lamV-labelling} and \ref{lem:lamK-labelling} below ensures that, even if several of the $\ell_1$, $\ell_2$, and $\ell_3$ in the $\beta_{\mathfrak{c}}$-rule above are non-empty, all the alternatives coincide and thus $\beta_{\mathfrak{c}}$-reduction is confluent. \begin{lem}\label{lem:lamV-labelling} Let $M_0\in\Lambda$ be $\lambda_{\vv}$-unsolvable of order $n_0$. Every trace of $M_0$ with non-empty count $c$ in any $\beta_{\mathfrak{c}\vv}$-reduction sequence starting at $\mathfrak{s}(\ctx{C}[M_0])$ is $\lambda_{\vv}$-unsolvable of order $n$ such that $n_0=c+n$.\footnote{We assume the standard conventions on ordinal number arithmetic \cite{Sie65}. The successor of an ordinal $\alpha$ is $\alpha+1$. Addition is non-commutative and left-cancellative, that is, let $n$ be a finite ordinal, then $n+\omega=0+\omega=\omega$. Only left subtraction is definable, \ie\ $\alpha-\beta=\gamma$ iff $\beta\leq\alpha$ and $\gamma$ is the unique ordinal such that $\alpha=\beta+\gamma$.} \end{lem} \begin{proof} By definition, only the traces of $M_0$ (we refer to them as $M_t$) have non-empty count $c$.
We prove that the contractum of a $\beta_{\mathfrak{c}\vv}$-redex preserves the invariant $n_0=c+n$ (recall that ordinal addition is only left-cancellative) for each labelled trace $M_t$ with non-empty count $c$ and order $n$. We consider any $\beta_{\mathfrak{c}\vv}$-reduction sequence and proceed by induction on the position in the sequence of the term in which the $\beta_{\mathfrak{c}\vv}$-redex occurs. (The general case coincides with the base case, except for the small differences pinpointed in Cases~2, 3, and 4 below.) Consider the $\beta_{\mathfrak{c}\vv}$-redex $R\equiv(\lambda x.B)N$ with $N\in\mathfrak{C}(\mathsf{Val})$ occurring at step $s$ that is contracted in order to produce the reduct at step $s+1$. We focus on each occurrence (if any) of $M_t$ with non-empty count $c$ in $R$ and distinguish the following cases: \begin{enumerate} \item $R\equiv (\lambda x.\ctx{C}[M_t])N$. The contractum is $C\equiv\ctx{C}'[\cas{N}{x}{M_t}]$ where $\ctx{C}'[\enspace]\equiv\cas{N}{x}{(\ctx{C}[\enspace])}$. By Lemma~\ref{lem:subst-pres-order-lamV}, if $\ctx{C}[\enspace]\equiv[\enspace]$ then the occurrence of $\cas{N}{x}{M_t}$ in the contractum is $\lambda_{\vv}$-unsolvable of order $n$ and the lemma holds. (Notice that the first line of rule $\beta_{\mathfrak{c}}$ of Def.~\ref{def:counting} takes care of preserving the count $c$ of the redex's body $M_t$ if $\ctx{C}[\enspace]\equiv[\enspace]$.) Otherwise the order and count of the occurrences of $M_t$ in $\cas{N}{x}{(\ctx{C}[M_t])}$ are trivially preserved and the lemma follows. \item $R\equiv M_t\,N$. Then $M_t\equiv(\lambda x.B)^c$ with $B$ $\lambda_{\vv}$-unsolvable of order $n-1$. By Lemma~\ref{lem:subst-pres-order-lamV}, $\cas{N}{x}{B^{c+1}}$ is $\lambda_{\vv}$-unsolvable of order $n'=n-1$ and the lemma holds. Notice that left-subtraction allows for the limit case when both $n$ and $n'$ are infinite ordinals (\ie\ $\omega=n=1+n'=1+\omega=\omega$).
This is enough for the base case (\ie\ $s=1$), but for the general case there can be an overlap with Case~1 if some trace $M'_t$ of $M_0$ occurs in $B^c$. The lemma follows as in Case~1 except if $\ctx{C}[\enspace]\equiv[\enspace]$, because the first and the second lines of rule $\beta_{\mathfrak{c}}$ of Def.~\ref{def:counting} produce a critical pair. But we show that both alternatives coincide. Let $M'_t$ with non-empty count $c'$ be $\lambda_{\vv}$-unsolvable of order $n'$. By the induction hypothesis, the invariant holds for $M'_t$ (\ie\ $n_0=c'+n'$). In the limit case (\ie\ $n_0=\omega$) both $M_t\equiv\lambda x.M'_t$ and $M'_t$ have infinite order (\ie\ $n=n'=\omega$) and then $n_0=c'+n'=c+n$ and the lemma follows. In the finite case, $n-n'=1$ and then $n_0=c'+n'=c+n'+1$ and $c'=c+1$. Therefore both alternatives for rule $\beta_{\mathfrak{c}\vv}$ coincide and the lemma follows. \item $R\equiv M_t$. Then $M_t\equiv((\lambda x.B)N)^c$ with $N\in\mathfrak{C}(\mathsf{Val})$. For the base case the lemma follows because the third line of $\beta_{\mathfrak{c}}$ of Def.~\ref{def:counting} preserves the count of the $\beta_{\mathfrak{c}\vv}$-redex. For the general case there can be overlap with Cases~1 and 2, and the lemma follows because the different alternatives for $\beta_{\mathfrak{c}}$ coincide by the induction hypothesis, as was illustrated in Case~2 above. \item $R\equiv(\lambda x.B)(\ctx{C}[M_t])$. For each occurrence of $x$ in $B$, the order and count of the corresponding copy of $M_t$ are trivially preserved by the definition of the substitution function (Def.~\ref{def:counting}) and the lemma holds. This is enough for the base case. For the general case there can be an overlap with Cases~1, 2, and 3, and the lemma follows because the different alternatives for $\beta_{\mathfrak{c}}$ coincide by the induction hypothesis, as was illustrated in Case~2 above.\qedhere \end{enumerate} \end{proof} \begin{lem}\label{lem:lamK-labelling} Let $M_0\in\Lambda$ be $\lamK$-unsolvable of order $n_0$.
Every trace of $M_0$ with non-empty count $c$ in any $\beta_{\mathfrak{c}}$-reduction sequence starting at $\mathfrak{s}(\ctx{C}[M_0])$ is $\lamK$-unsolvable of order $n$ such that $n_0=c+n$. \end{lem} \begin{proof} By adapting the proof of Lemma~\ref{lem:lamV-labelling} to $\lamK$ in a straightforward way. \end{proof} \subsection{Generalised statement and illustration of the proof} \label{sec:pgl-stated} We generalise the statement of the Partial Genericity Lemma we gave in Lemma~\ref{lem:partial-genericity} to provide a proof by induction on the length of the reduction sequence of value normal order. First, we take Lemma~\ref{lem:partial-genericity} and pull out the universal quantifier `for all contexts $\ctx{C}[\enspace]$' from the consequent of the implication. We take value normal order (Def.~\ref{def:value-normal-order}) and the counting labelling $\mathfrak{C}(\lambda_{\vv})$ (Def.~\ref{def:counting}). We take $M$, $N$, and $\ctx{C}[\enspace]$ in Lemma~\ref{lem:partial-genericity} and subscript them with a 0 to indicate that $M_0\in\mathfrak{C}(\Lambda)$ is the initial labelled $\lambda_{\vv}$-unsolvable, $N_0\in\mathfrak{C}(\V\NF)$ is the labelled normal form, and $\ctx{C}_0[\enspace]$ is the initial labelled context such that $\mathfrak{s}(\ctx{C}[M])=\ctx{C}_0[M_0]$, $\ctx{C}_0[M_0] =_{\beta_{\mathfrak{c}\vv\!}} N_0$ and $N\mathbb{C}N_0$. (We also rename $n$ to $n_0$ for uniformity.) The generalised theorem reads as follows. \begin{thm} \label{thm:generalised-thm} Let $M'\in\mathfrak{C}(\Lambda)$ be of order $n'\leq n_0$ and let $\ctx{C}'[\enspace]$ be a labelled context. Suppose that $\ctx{C}'[M']$ is a labelled reduct in the value-normal-order reduction sequence of $\ctx{C}_0[M_0]$ and that $M'$ has non-empty count. If $\ctx{C}'[M']=_{\beta_{\mathfrak{c}\vv\!}} N_0$ then for all terms $X$ of order $m\geq n_0$ it is the case that $\ctx{C}_0[\mathfrak{c}(X)]=_{\beta_{\mathfrak{c}\vv\!}} N_0$.
\end{thm} This theorem coincides modulo $\mathbb{C}$ bisimilarity with Lemma~\ref{lem:partial-genericity} by taking $\ctx{C}'[\enspace]\equiv\ctx{C}_0[\enspace]$, $M'\equiv M_0$, and $n' = n_0$. In that case $\ctx{C}'[M']\equiv\ctx{C}_0[M_0]$ is the first term in the reduction sequence and $M'$ has non-empty count $0$ in $\ctx{C}'[M']$. \begin{proof}[Proof of Thm.~\ref{thm:generalised-thm}] For brevity, we drop the $\mathfrak{C}$ and $\mathfrak{c}$ from the sets of terms and from the reduction rule respectively. Recall from Def.~\ref{def:value-normal-order} that the terms in a value-normal-order reduction sequence have the shape $\ctx{A}[\ctx{CH}[R]]$, where $R$ is the next $\betaV$-redex to be contracted, $\ctx{CH}[\enspace]$ is a chest-reduction context, and $\ctx{A}[\enspace]$ is the context in which the leftmost \mbox{$\lambda_{\vv}$-active} component $A\equiv\ctx{CH}[R]$ occurs. We focus on the traces of $M_0$ that pop up in the value-normal-order reduction sequence of $\ctx{C}'[M']$. We proceed by induction on the length of the value-normal-order reduction sequence of $\ctx{C}'[M']$. The base case is when all the traces of $M_0$ (\ie\ all the subterms with non-empty count) that occur in $\ctx{C}'[M']$ (the $M'$ itself is one of these traces since it has non-empty count) are discarded in the next value-normal-order reduction step. That is, the hole in $\ctx{C}'[\enspace]$, and every other trace of $M_0$, lie inside the operand of the next \mbox{$\betaV$-redex} $R$, \ie\ $\ctx{C}'[M']\equiv\ctx{A}[\ctx{CH}[R]]$, and $R\equiv(\lambda x.B)(\ctx{C}_1[M'])$ such that $x$ does not occur free in $B$ and $\ctx{C}_1[M']$ is a value that contains all the traces of $M_0$. The next reduct is $\ctx{A}[\ctx{CH}[B]]$. 
There is no $\ctx{C}''[\enspace]$ such that $\ctx{C}''[M']\mrel{\betaV}\ctx{A}[\ctx{CH}[B]]$ and such that the value-normal-order reduction sequence of $\ctx{C}''[M']$ is of length less than the value-normal-order reduction sequence of $\ctx{C}'[M']$, which explains why this is the base case. Since $M'$ is a trace of $M_0$ and $M_0$ has count $0$ in $\ctx{C}_0[M_0]$, if the count of $M'$ is greater than $0$ then by Def.~\ref{def:counting} this can only be the result of a reduction step $\ctx{A}[\ctx{CH}[(\lambda x.B^c)N]]\rel{\stgy{vn}}\ctx{A}[\ctx{CH}[\cas{N}{x}{(B^{c+1})}]]$ with $B^c$ a trace of $M_0$ with non-empty count $c$. Had the count of the contractum $\cas{N}{x}{(B^{c+1})}$ reached $n_0$, then by Lemmata~\ref{lem:subst-pres-order-lamV} and \ref{lem:lamV-labelling} the contractum would be $\lambda_{\vv}$-unsolvable of order $0$ and $\ctx{A}[\ctx{CH}[\cas{N}{x}{(B^{c+1})}]]$ would have diverged under value normal order. But this contradicts the assumption $\ctx{C}'[M']=_{\beta\va} N_0$. Therefore $M'$ has count at most $n_0-1$ and order greater than $0$, \ie\ $M'\in\mathsf{Val}$. If $M_0$ were replaced by a term $X$, the trace of $X$ would have reached at most count $n_0-1$. Therefore, for any $X\in \Lambda$ of order greater than or equal to $n_0$ it is the case that $\ctx{C}_0[X]=_{\beta\va} N_0$, and the theorem follows. Now we proceed with the general case. We analyse the cases: \begin{enumerate} \item No trace of $M_0$ pops up, \ie\ $\ctx{C}'[M']$ is not of the form $\ctx{A}[\ctx{CH}[M_t]]$ where $M_t$ is a $\betaV$-redex with non-empty count $c$. Let $R$ be the next $\betaV$-redex to be contracted, \ie\ $\ctx{A}[\ctx{CH}[R]]$. Either $R\equiv(\lambda x.B)(\ctx{C}_1[M'])$ with $x$ not free in $B$ and this case matches the conditions of the base case and we are done, or contracting $R$ does not discard all the traces of $M_0$ that occur in $\ctx{C}'[M']$.
Let $R'$ be the contractum of $R$ and $M_t$ be one of the traces of $M_0$ that occurs in $\ctx{A}[\ctx{CH}[R']]$, \ie\ $\ctx{A}[\ctx{CH}[R']]\equiv\ctx{C}_2[M_t]$ and $M_t$ has non-empty count $c$ (it is immaterial for the proof which of the existing traces of $M_0$ one picks). By an argument similar to the one in the base case, the count of $M_t$ is at most $n_0-1$. The theorem holds for $\ctx{C}_2[\enspace]$ and $M_t$ by the induction hypothesis. \item A trace of $M_0$ pops up, \ie\ $\ctx{C}'[M']$ is of the form $\ctx{A}[\ctx{CH}[M_t]]$ where $M_t$ is a $\betaV$-redex with non-empty count $c$. $M_t$ is the next redex to be contracted, and thus it is of the shape $((\lambda x.B)N)^c$ with $N\in\mathsf{Val}$. By Def.~\ref{def:counting} the contractum of $M_t$ is $\cas{N}{x}{(B^c)}$, which has count $c$ by Lemma~\ref{lem:lamV-labelling}. The theorem holds for $\ctx{A}[\ctx{CH}[\enspace]]$ and $\cas{N}{x}{(B^c)}$ by the induction hypothesis.\qedhere \end{enumerate} \end{proof}\medskip \noindent The following example illustrates the proof. (Remember we are dropping the $\mathfrak{C}$ and $\mathfrak{c}$ from the sets of terms and from the reduction rule respectively.) Consider the context $\ctx{C}_0[\enspace]\equiv(\lambda x.(\lambda y.\Term{I})(x\,x))[\enspace]$ and the $\lambda_{\vv}$-unsolvable $M_0\equiv(\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0$ of order 2 and with non-empty count $0$. The conversion $\ctx{C}_0[M_0]=_{\beta\va} \Term{I}$ holds, where $\Term{I}\in\V\NF$. The proof proceeds by induction on the length of the value-normal-order reduction sequence of $\ctx{C}_0[M_0]$. We analyse this reduction sequence and check that $\Term{I}$ is reached when replacing $M_0$ by a generic term $X$ of order $m\geq 2$. The first $\betaV$-redex to be contracted is $\ctx{C}_0[M_0]$. Not all traces of $M_0$ are discarded in the next reduct and we are at the sub-case of the general case where no trace of $M_0$ pops up in the reduction sequence.
\begin{displaymath} \begin{array}{rcl} &&(\lambda x.(\lambda y.\Term{I})(x\,x)) (\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0\\[4pt] &\rel{\stgy{vn}}& \textup{\scriptsize$\left\{ \begin{array}{l} \ctx{A}[\ctx{CH}[\enspace]]\equiv[[\enspace]]{\color{white}^0}\\[2pt] R\equiv(\lambda x.(\lambda y.\Term{I})(x\,x)) (\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0 \end{array}\right\} $}\\[10pt] &&(\lambda y.\Term{I}) ((\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0 (\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0) \end{array} \end{displaymath} The remaining reduction sequence has length less than the reduction sequence of $\ctx{C}_0[M_0]$ and the property holds for $M_1\equiv M_0$ and $\ctx{C}_1[\enspace]\equiv(\lambda y.\Term{I})([\enspace](\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0)$ by the induction hypothesis. (Alternatively, we could have picked the rightmost trace of $M_0$ and the property would also hold for $M_1\equiv M_0$ and $\ctx{C}_1[\enspace]\equiv(\lambda y.\Term{I})((\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0[\enspace])$.) The next $\betaV$-redex to be contracted is the leftmost occurrence of $M_0$ in $\ctx{C}_1[M_1]$. Not all the traces of $M_0$ are discarded in the next reduct and we are at the sub-case of the general case where a trace of $M_0$ pops up in the reduction sequence. 
\begin{displaymath} \begin{array}{rcl} &&(\lambda y.\Term{I}) ((\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0 (\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0)\\[4pt] &\rel{\stgy{vn}}& \textup{\scriptsize$\left\{ \begin{array}{l} \ctx{A}[\ctx{CH}[\enspace]]\equiv [(\lambda y.\Term{I})([\enspace](\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0)] {\color{white}^0}\\[2pt] R\equiv(\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0 \end{array}\right\} $}\\[10pt] &&(\lambda y.\Term{I}) ((\lambda x.\lambda y.x\,\Term{\Omega})^0 (\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0) \end{array} \end{displaymath} The trace converts to $(\lambda x.\lambda y.x\,\Term{\Omega})^0$, which is $\lambda_{\vv}$-unsolvable of order $2$. The remaining reduction sequence has length less than the reduction sequence of $\ctx{C}_1[M_1]$ and the property holds for $M_2\equiv(\lambda x.\lambda y.x\,\Term{\Omega})^0$ and $\ctx{C}_2[\enspace]\equiv(\lambda y.\Term{I})([\enspace](\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0)$ by the induction hypothesis. The next $\betaV$-redex to be contracted is the rightmost occurrence of $M_0$ in $\ctx{C}_2[M_2]$. Again, we are at the sub-case of the general case where a trace of $M_0$ pops up in the reduction sequence. \begin{displaymath} \begin{array}{rcl} &&(\lambda y.\Term{I}) ((\lambda x.\lambda y.x\,\Term{\Omega})^0 (\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0)\\[4pt] &\rel{\stgy{vn}}& \textup{\scriptsize$\left\{ \begin{array}{l} \ctx{A}[\ctx{CH}[\enspace]]\equiv [(\lambda y.\Term{I})((\lambda x.\lambda y.x\,\Term{\Omega})^0[\enspace])] {\color{white}^0}\\[2pt] R\equiv(\Term{I}(\lambda x.\lambda y.x\,\Term{\Omega}))^0 \end{array}\right\} $}\\[10pt] &&(\lambda y.\Term{I}) ((\lambda x.\lambda y.x\,\Term{\Omega})^0 (\lambda x.\lambda y.x\,\Term{\Omega})^0) \end{array} \end{displaymath} The trace converts to $(\lambda x.\lambda y.x\,\Term{\Omega})^0$, which is $\lambda_{\vv}$-unsolvable of order $2$.
The remaining reduction sequence has length less than the reduction sequence of $\ctx{C}_2[M_2]$. The property holds for $M_3\equiv(\lambda x.\lambda y.x\,\Term{\Omega})^0$ and $\ctx{C}_3[\enspace]\equiv(\lambda y.\Term{I})((\lambda x.\lambda y.x\,\Term{\Omega})^0[\enspace])$ by the induction hypothesis. (Alternatively, we could have picked the leftmost occurrence of $(\lambda x.\lambda y.x\,\Term{\Omega})^0$ as the trace of $M_0$ and the property would also hold for $M_3$ as before and $\ctx{C}_3[\enspace]\equiv(\lambda y.\Term{I})([\enspace](\lambda x.\lambda y.x\,\Term{\Omega})^0)$.) The next $\betaV$-redex to be contracted is $(\lambda x.\lambda y.x\,\Term{\Omega})^0 (\lambda x.\lambda y.x\,\Term{\Omega})^0$ (\ie\ it is not a trace of $M_0$ itself, but it has the traces of $M_0$ both as the operator and as the operand). Not all the traces of $M_0$ are discarded in the next reduct and we are at the sub-case of the general case where no trace of $M_0$ pops up in the reduction sequence. \begin{displaymath} \begin{array}{rcl} &&(\lambda y.\Term{I}) ((\lambda x.\lambda y.x\,\Term{\Omega})^0 (\lambda x.\lambda y.x\,\Term{\Omega})^0)\\[4pt] &\rel{\stgy{vn}}& \textup{\scriptsize$\left\{ \begin{array}{l} \ctx{A}[\ctx{CH}[\enspace]]\equiv [(\lambda y.\Term{I})[\enspace]] {\color{white}^0}\\[2pt] R\equiv(\lambda x.\lambda y.x\,\Term{\Omega})^0 (\lambda x.\lambda y.x\,\Term{\Omega})^0 \end{array}\right\} $}\\[10pt] &&(\lambda y.\Term{I}) (\lambda y.(\lambda x.\lambda y.x\,\Term{\Omega})^0\Term{\Omega})^1 \end{array} \end{displaymath} This step increases the count of the operator to $1$, which is now \mbox{$\lambda_{\vv}$-unsolvable} of order $1$. The next redex discards all the traces of $M_0$, neither of which has reached count $2$. We are at the base case.
\begin{displaymath} \begin{array}{rcl} &&(\lambda y.\Term{I}) (\lambda y.(\lambda x.\lambda y.x\,\Term{\Omega})^0\Term{\Omega})^1\\[4pt] &\rel{\stgy{vn}}& \textup{\scriptsize$\left\{ \begin{array}{l} \ctx{A}[\ctx{CH}[\enspace]]\equiv[[\enspace]] {\color{white}^0}\\[2pt] R\equiv(\lambda y.\Term{I}) (\lambda y.(\lambda x.\lambda y.x\,\Term{\Omega})^0\Term{\Omega})^1 \end{array}\right\} $}\\[10pt] &&\Term{I} \end{array} \end{displaymath} Indeed, the property holds by replacing $M_0$ with any $X$ of order $m\geq 2$. Consider $X\equiv(\lambda x.\lambda y.M)$ with $M\in\Lambda$. The reduction sequence becomes: \begin{displaymath} \begin{array}{l} (\lambda x.(\lambda y.\Term{I})(x\,x))(\lambda x.\lambda y.M)\rel{\stgy{vn}} (\lambda y.\Term{I})((\lambda x.\lambda y.M)(\lambda x.\lambda y.M))\\ \rel{\stgy{vn}} (\lambda y.\Term{I})(\lambda y.\cas{(\lambda x.\lambda y.M)}{x}{M})\rel{\stgy{vn}}\Term{I} \end{array} \end{displaymath} \subsection{Complete strategies of %
\texorpdfstring{$\lambda_{\vv}$}{lambda-V} that are not standard} \label{sec:complete-standard} Standard reduction sequences are not unique \cite[Sec.1.5]{HZ09}. To this we add that not every complete reduction sequence that only contracts needed redexes is standard! There are reduction strategies of $\lambda_{\vv}$ which only contract needed redexes but do not entail standard reduction sequences. This fact is the analogue in $\lambda_{\vv}$ of the result in \cite{BKKS87} about spine strategies of $\lamK$. We shall see an example in Def.~\ref{def:ribcage-reduction} below. To illustrate the non-uniqueness of standard reduction sequences, consider the term $M\equiv (\lambda x.(\lambda y.z\,y)\Term{I})((\lambda y.z\,y)\Term{K})$ that converts to the stuck $(\lambda x.z\,\Term{I})(z\,\Term{K})$.
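Both this conversion and the generic-replacement example above can be replayed mechanically. The Python sketch below is ours and purely illustrative: it contracts the leftmost $\betaV$-redex, also under binders, and uses capture-naive substitution, which is safe for these particular terms. All names (\texttt{normalize}, \texttt{stuck}, etc.) are our own assumptions, not notation from the text.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: Any

@dataclass(frozen=True)
class App:
    fun: Any
    arg: Any

def is_value(t):
    # Values of lambda_V: variables and abstractions.
    return isinstance(t, (Var, Lam))

def subst(t, x, v):
    # Capture-naive substitution t[v/x]; fine for the examples below.
    if isinstance(t, Var):
        return v if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fun, x, v), subst(t.arg, x, v))
    return t if t.var == x else Lam(t.var, subst(t.body, x, v))

def step(t):
    # One leftmost beta_V step, also under lambda; None at a beta_V-nf.
    if isinstance(t, App):
        if isinstance(t.fun, Lam) and is_value(t.arg):
            return subst(t.fun.body, t.fun.var, t.arg)
        s = step(t.fun)
        if s is not None:
            return App(s, t.arg)
        s = step(t.arg)
        if s is not None:
            return App(t.fun, s)
        return None
    if isinstance(t, Lam):
        s = step(t.body)
        return None if s is None else Lam(t.var, s)
    return None

def normalize(t, fuel=50):
    while fuel > 0:
        s = step(t)
        if s is None:
            return t
        t, fuel = s, fuel - 1
    raise RuntimeError("no beta_V-nf reached within fuel")

I = Lam("i", Var("i"))
K = Lam("a", Lam("b", Var("a")))
zy = Lam("y", App(Var("z"), Var("y")))          # lambda y. z y
M = App(Lam("x", App(zy, I)), App(zy, K))       # the term M above
stuck = App(Lam("x", App(Var("z"), I)), App(Var("z"), K))
# Genericity example: C0[X] with X = lambda x. lambda y. w, of order >= 2.
X = Lam("x", Lam("y", Var("w")))
C0_X = App(Lam("x", App(Lam("y", I), App(Var("x"), Var("x")))), X)
```

With these definitions, `normalize(M)` returns the stuck term $(\lambda x.z\,\Term{I})(z\,\Term{K})$, and `normalize(C0_X)` returns $\Term{I}$, as the genericity example predicts for any $X$ of order $m\geq 2$.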
The reduction sequence is standard and ends in $M$'s \mbox{$\betaV$-nf}: \begin{displaymath} (\lambda x.(\lambda y.z\,y)\Term{I})((\lambda y.z\,y)\Term{K}) \rel{\textsl{V}} (\lambda x.(\lambda y.z\,y)\Term{I})(z\,\Term{K}) \rel{\betaV} (\lambda x.z\,\Term{I})(z\,\Term{K}) \end{displaymath} The first $\rel{\textsl{V}}$ step is a call-by-value step, which is standard by Def.~\ref{def:v-standard}(\ref{it:prepend-value}). The second $\rel{\betaV}$ step is standard by Def.~\ref{def:v-standard}(\ref{it:applicative}), Def.~\ref{def:v-standard}(\ref{it:lambda}), and Def.~\ref{def:v-standard}(\ref{it:prepend-value}). However, the following alternative reduction sequence is also standard and also ends in $M$'s \mbox{$\betaV$-nf}: \begin{displaymath} (\lambda x.(\lambda y.z\,y)\Term{I})((\lambda y.z\,y)\Term{K}) \rel{\betaV} (\lambda x.z\,\Term{I})((\lambda y.z\,y)\Term{K}) \rel{\betaV} (\lambda x.z\,\Term{I})(z\,\Term{K}) \end{displaymath} The first $\rel{\betaV}$ step is standard by Def.~\ref{def:v-standard}(\ref{it:applicative}), Def.~\ref{def:v-standard}(\ref{it:lambda}), and Def.~\ref{def:v-standard}(\ref{it:prepend-value}). The second $\rel{\betaV}$ step is standard by Def.~\ref{def:v-standard}(\ref{it:applicative}) and Def.~\ref{def:v-standard}(\ref{it:prepend-value}). Ribcage reduction (Def.\ref{def:ribcage-reduction}) is complete with respect to chnf\ and only contracts needed redexes. The definition of value normal order (Def.~\ref{def:value-normal-order}) can be modified to use ribcage reduction instead of chest reduction for $\lambda_{\vv}$-active components. The resulting strategy is full-reducing and complete with respect to \mbox{$\betaV$-nf}, but it does not entail a standard reduction sequence. For example, consider the term $N\equiv(\lambda x.(\lambda y.x)z)\Term{I}$ which converts to the \mbox{$\betaV$-nf}\ $\Term{I}$. 
Ribcage reduction entails the reduction sequence \begin{displaymath} (\lambda x.(\lambda y.x)z)\Term{I} \rel{\stgy{rc}} (\lambda x.x)\Term{I} \rel{\stgy{rc}} \Term{I} \end{displaymath} This reduction sequence is not standard, although the steps, in isolation, are standard. The first is standard by Def.~\ref{def:v-standard}(\ref{it:applicative}), Def.~\ref{def:v-standard}(\ref{it:lambda}), and Def.~\ref{def:v-standard}(\ref{it:prepend-value}). The second is standard by Def.~\ref{def:v-standard}(\ref{it:prepend-value}). However, none of the rules of Def.~\ref{def:v-standard} allow us to prepend the first step to the standard reduction sequence consisting of the second step. Standard reduction sequences to \mbox{$\betaV$-nf}\ fall short of capturing all complete strategies of $\lambda_{\vv}$. In \cite[p.208]{BKKS87} they generalise the Quasi-Leftmost Reduction Theorem \cite[Thm.~3.22]{HS08} and show that `quasi-needed reduction is normalising'. An analogous result is missing for $\lambda_{\vv}$ (Section~\ref{sec:conclusion-future-work}). \subsection{An operational characterisation of % \texorpdfstring{$\lambda_{\vv}$}{lambda-V}-solvability?} \label{sec:operational-characterisation} Although analogous to head reduction and similar in spirit, chest reduction does not provide an operational characterisation of $\lambda_{\vv}$-solvability. The term $\Term{T}_1\equiv(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}(x(\lambda x.\Term{\Omega}))$ introduced in Section~\ref{sec:lamV-solv} and the term $\Term{T}_2\equiv(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}(\lambda x.\Term{\Omega})$ are {chnf}s that are not \mbox{$\lambda_{\vv}$-solvable}. The diverging subterm $\lambda x.\Term{\Omega}$ cannot be discarded because $(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}$ is not transformable. 
Although $(\lambda y.\Term{\Delta})(x\,\Term{I})\Term{\Delta}$ is trivially freezable into a \mbox{$\betaV$-nf}, there is no context $\ctx{C}[\enspace]$ that transforms that term to some term that could discard the trailing $\lambda x.\Term{\Omega}$ and obtain a \mbox{$\betaV$-nf}. The $\lambda_{\vv}$-solvables are `more reduced' than {chnf}s. This brings us to the question of the existence of an operational characterisation of $\lambda_{\vv}$-solvables, that is, a reduction strategy of $\lambda_{\vv}$ that terminates iff the input term is $\lambda_{\vv}$-solvable. We believe that such a strategy exists but that it cannot be compositional, because it requires non-local information about the shape of the term to decide on the next $\betaV$-redex (Section~\ref{sec:conclusion-future-work}). \section{The consistent %
\texorpdfstring{$\lambda_{\vv}$}{lambda-V}-theory %
\texorpdfstring{$\mathcal{V}$}{V}} \label{sec:consistent-lamV-theory} We adapt \cite[Def.~4.1.1]{Bar84} and say that a $\lambda_{\vv}$\emph{-theory} is a consistent extension of a conversion proof-theory of $\lambda_{\vv}$. In this section we prove the consistency of the $\lambda_{\vv}$-theory $\mathcal{V}$ introduced in Section~\ref{sec:lamV-solv}. The proof proceeds in similar fashion to the proof of consistency of the $\lamK$-theory $\mathcal{H}$ introduced in Section~\ref{sec:lamK-solv}. The latter proof is detailed in \cite[Sec.~16.1]{Bar84} and employs some technical machinery introduced in \cite[Sec.~15.2]{Bar84}. We prove the consistency of $\mathcal{V}$ in similar fashion, save for the use of a shorter proof technique in a particular lemma. We ask the reader to read this section in parallel with \cite[Sec.~16.1]{Bar84} and \cite[Sec.~15.2]{Bar84}. The reader also needs to recall the definition of `notion of reduction' \cite[p.50ff]{Bar84} and `substitutive' binary relation \cite[p.55ff]{Bar84}.
Rule $\betaV$ is a notion of reduction from which relations $\rel{\beta\va}$, $\mrel{\beta\va}$, and $=_{\beta\va}$ are generated (Section~\ref{sec:prelim}). The structure of this section is as follows: We first define ${\OMEGA_{\vv}}$-reduction that sends $\lambda_{\vv}$-unsolvables of order $n$ to a special symbol $\Term{\Omega}_n$. We then consider the notion of reduction $\betaV\cup{\OMEGA_{\vv}}$ which, paraphrasing \cite[p.388]{Bar84}, is interesting because it analyses provability in $\lambda_{\vv}$. We define ${\betaV\OmV}$-reduction as the compatible, reflexive, and transitive closure of $\betaV\cup{\OMEGA_{\vv}}$, and prove that it is a {\scriptsize\vv}-substitutive relation. At this point the storyline differs from \cite{Bar84} in that we introduce the notion of complete ${\OMEGA_{\vv}}$-development of a term, and use the Z property \cite{Oos08} to prove that $\betaV\cup{\OMEGA_{\vv}}$ is Church-Rosser (${\betaV\OmV}$-reduction is confluent). Finally, we define $\mathcal{V}$ and the notion of $\omega$-sensibility, and prove that $\mathcal{V}$ is generated by $\betaV\cup{\OMEGA_{\vv}}$. The consistency of $\mathcal{V}$ (Thm.~\ref{thm:Hv-lamV-theory}) follows from the confluence of ${\betaV\OmV}$-reduction. \begin{defi} \label{def:omv-red} The ${\OMEGA_{\vv}}$-reduction, $\mrel{{\OMEGA_{\vv}}}$, is the compatible, reflexive, and transitive closure of the notion of reduction \begin{displaymath} {\OMEGA_{\vv}} = \{(M,\Term{\Omega}_n)\ |\ M\ \text{$\lambda_{\vv}$-unsolvable\ of order $n$ and}\ M \not\equiv \Term{\Omega}_n\} \end{displaymath} where $\Term{\Omega}_n$ stands for the term $\lambda x_1\ldots x_n.\Term{\Omega}$ (if $n\not=\omega$), or the term $\Term{Y}\,\Term{K}$ (if $n=\omega$). Notice that $\Term{Y}\,\Term{K}$ does not have a \mbox{$\betaV$-nf}\ and that it reduces to $\lambda x_1\ldots x_k.\Term{Y}\,\Term{K}$ with $k$ arbitrarily large. The ${\OMEGA_{\vv}}$-conversion, $=_{{\OMEGA_{\vv}}}$, is the symmetric closure of $\mrel{{\OMEGA_{\vv}}}$. 
\end{defi} \begin{defi} The ${\betaV\OmV}$-reduction, $\mrel{{\betaV\OmV}}$, is the compatible, reflexive, and transitive closure of the notion of reduction $\betaV\cup{\OMEGA_{\vv}}$. The ${\betaV\OmV}$-conversion, $=_{\betaV\OmV}$, is the symmetric closure of $\mrel{{\betaV\OmV}}$. \end{defi} \begin{defi} \label{lem:v-substitutive} Let $M,N\in\Lambda$ and $V\in\mathsf{Val}$. A binary relation $R$ is {\scriptsize\vv}-substitutive iff $R(M,N)$ implies $R(\cas{V}{x}{M},\cas{V}{x}{N})$. \end{defi} \begin{lem} \label{lem:notion-red-v-substitutive} If $R$ is {\scriptsize\vv}-substitutive, then $\rel{R}$, $\mrel{R}$, and $=_{R}$ are {\scriptsize\vv}-substitutive. \end{lem} \begin{proof} Straightforward by structural induction on the derivations of $\rel{R}$, $\mrel{R}$, and $=_{R}$, respectively (\ie\ by considering the sets $\{\mu,\nu,\xi\}$, $\{\mu,\nu,\xi,\rho,\sigma\}$, or $\{\mu,\nu,\xi,\rho,\sigma,\tau\}$, respectively, from the rules in Section~\ref{sec:prelim}). \end{proof} \begin{lem} The notion of reduction $\betaV$ is {\scriptsize\vv}-substitutive. \end{lem} \begin{proof} Thm.~1 in \cite[p.135]{Plo75} states that $=_{\beta\va}$ is \mbox{{\scriptsize\vv}-substitutive} in the applied $\lambda_{\vv}$. By an argument similar to the proof of that theorem it is straightforward to prove that the $\betaV$-rule is \mbox{{\scriptsize\vv}-substitutive}. \end{proof} \begin{lem} The relations $\rel{\beta\va}$, $\mrel{\beta\va}$, and $=_{\beta\va}$ are \mbox{{\scriptsize\vv}-substitutive}. \end{lem} \begin{proof} Trivial by Lemma~\ref{lem:notion-red-v-substitutive} above. \end{proof} \begin{lem} \label{lem:union-v-substitutive} Let $R_1$ and $R_2$ be two notions of reduction that are {\scriptsize\vv}-substitutive. The union $R_1\cup R_2$ is {\scriptsize\vv}-substitutive. \end{lem} \begin{proof} Trivial, by considering $R_1$ or $R_2$ individually. 
\end{proof} \begin{lem}[${\betaV\OmV}$ is {\scriptsize\vv}-substitutive] \label{lem:bvomv-substitutive} Let $M,N\in\Lambda$ and $V\in\mathsf{Val}$. $M\rel{{\betaV\OmV}}N$ implies $\cas{V}{x}{M}\rel{{\betaV\OmV}}\cas{V}{x}{N}$. \end{lem} \begin{proof} By Lemma~\ref{lem:union-v-substitutive}, it is enough to prove that ${\OMEGA_{\vv}}$ is {\scriptsize\vv}-substitutive. Let $M \rel{{\OMEGA_{\vv}}} \Term{\Omega}_n$. By Lemma~\ref{lem:subst-pres-order-lamV}, the substitution instance $\cas{V}{x}{M}$ is \mbox{$\lambda_{\vv}$-unsolvable} of order $n$ for any $V\in\mathsf{Val}$. By Def.~\ref{def:omv-red} above, $\cas{V}{x}{M} \rel{{\OMEGA_{\vv}}} \Term{\Omega}_n$ and $\Term{\Omega}_n\equiv\cas{V}{x}{\Term{\Omega}_n}$ because all the $\Term{\Omega}_n$ (including ${\OMEGA_\omega}$) are closed terms. \end{proof} \begin{lem} The relations $\mrel{{\betaV\OmV}}$ and $=_{{\betaV\OmV}}$ are \mbox{{\scriptsize\vv}-substitutive}. \end{lem} \begin{proof} Trivial by Lemma~\ref{lem:notion-red-v-substitutive} above. \end{proof} \begin{defi} Let $M,N\in\Lambda$. $M$ and $N$ are \mbox{$\lambda_{\vv}$-solvably} equivalent, $M\sim_{s_\textsl{V}} N$, iff for every context $\ctx{C}[\enspace]$, $\ctx{C}[M]$ is $\lambda_{\vv}$-unsolvable of order $n$ iff $\ctx{C}[N]$ is $\lambda_{\vv}$-unsolvable of order $n$. Relation $\sim_{s_\textsl{V}}$ is reflexive, symmetric, and transitive, and hence it is an equivalence relation. \end{defi} \begin{lem} \label{lem:solv-equiv} Let $M,N\in\Lambda$. \begin{enumerate} \item[\textup{(1)}] $M =_{\beta\va} N$ implies $M \sim_{s_\textsl{V}} N$. \item[\textup{(2)}] $M =_{\OMEGA_{\vv}} N$ implies $M \sim_{s_\textsl{V}} N$. \end{enumerate} \end{lem} \begin{proof} First consider (1). Since $=_{\beta\va}$ is compatible, for any context $\ctx{C}[\enspace]$ we have $\ctx{C}[M]=_{\beta\va} \ctx{C}[N]$, and (1) trivially follows. Now consider (2).
Since $\sim_{s_\textsl{V}}$ is an equivalence relation, it is enough to show that $M\sim_{s_\textsl{V}}\Term{\Omega}_n$ for $M$ $\lambda_{\vv}$-unsolvable of order $n$. Suppose $\ctx{C}[M]$ is $\lambda_{\vv}$-solvable. Then there exists a function context $\ctx{F}[\enspace]$ such that $\ctx{F}[\ctx{C}[M]]=_{\beta\va} N$ for some $N \in \V\NF$. By the Partial Genericity Lemma (Lemma~\ref{lem:partial-genericity}) then $\ctx{F}[\ctx{C}[\Term{\Omega}_n]]=_{\beta\va} N$. Similarly, $\ctx{C}[\Term{\Omega}_n]$ being $\lambda_{\vv}$-solvable implies $\ctx{C}[M]$ is $\lambda_{\vv}$-solvable, and (2) follows. \end{proof} \begin{rem} We write ${\OMEGA_{\vv}}(M)$ for the ${\OMEGA_{\vv}}$-normal-form (abbrev. ${\OMEGA_{\vv}}$-nf) of the term $M$. \end{rem} \begin{lem} \label{lem:omv-normal-form} Every term has a unique ${\OMEGA_{\vv}}$-nf. \end{lem} \begin{proof} The maximal ${\OMEGA_{\vv}}$-redexes are mutually disjoint. By replacing them by the appropriate $\Term{\Omega}_n$s, no new ${\OMEGA_{\vv}}$-redexes are created, since $U_n \sim_{s_\textsl{V}} \Term{\Omega}_n$ for $U_n$ $\lambda_{\vv}$-unsolvable of order $n$. The \mbox{${\OMEGA_{\vv}}$-nf} is unique since ${\OMEGA_{\vv}}$-reduction is Church-Rosser. \end{proof} The complete ${\OMEGA_{\vv}}$-development of a term defined below adapts the notion of complete development of a term \cite[Sec.4.5,p.106]{Ter03} to ${\betaV\OmV}$-reduction. \begin{defi} \label{def:complete-omv-development} The complete ${\OMEGA_{\vv}}$-development $M\cdv{\Term{\Omega}}$ of a term $M$ consists of the complete development of the ${\OMEGA_{\vv}}$-nf of $M$, \ie\ $M\cdv{\Term{\Omega}} = ({\OMEGA_{\vv}}(M))\cdv{}$ \end{defi} \begin{lem}[Confluence of $\mrel{{\OMEGA_{\vv}}}$] \label{lem:omv-church-roser} The relation $\mrel{{\OMEGA_{\vv}}}$ is Church-Rosser. \end{lem} \begin{proof} It is enough to prove that $(\rel{{\OMEGA_{\vv}}}\cup\equiv)$ has the diamond property. 
Consider $M\rel{{\OMEGA_{\vv}}}M_1$ by contracting the \mbox{${\OMEGA_{\vv}}$-redex} $U_1$, and $M\rel{{\OMEGA_{\vv}}}M_2$ by contracting the ${\OMEGA_{\vv}}$-redex $U_2$. We analyse the cases: \begin{enumerate} \item $U_1$ and $U_2$ are disjoint. The lemma trivially holds. \item $U_1$ and $U_2$ overlap. Let $U_1$, a $\lambda_{\vv}$-unsolvable of order $m$, be a superterm of $U_2$, a $\lambda_{\vv}$-unsolvable of order $n$. The diagram \begin{center} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (a) [matrix of math nodes, row sep=3em, column sep=3em,text height=1.5ex,text depth=0.25ex] { M \equiv \ctx{C}_1[U_1] \equiv \ctx{C}_1[\ctx{C}_2[U_2]] & M_2 \equiv \ctx{C}_1[\Term{\Omega}_m]\\ M_1 \equiv \ctx{C}_1[\ctx{C}_2[\Term{\Omega}_n]] & \ctx{C}_1[\Term{\Omega}_m] \\}; \path[->,font=\scriptsize] (a-1-1) edge node[above] {$U_1$} (a-1-2) edge node[right] {$U_2$} (a-2-1) (a-2-1) edge node[above] {$\ctx{C}_2[\Term{\Omega}_n]$} (a-2-2); \path[triple] (a-1-2) to (a-2-2); \path[thirdline] (a-1-2) to (a-2-2); \end{tikzpicture} \end{center} commutes because $\ctx{C}_2[\Term{\Omega}_n] \sim_{s_\textsl{V}} U_1$ holds by Lemma~\ref{lem:solv-equiv} above.\qedhere \end{enumerate} \end{proof} \begin{lem}[Confluence of ${\betaV\OmV}$] \label{lem:conf-bo} ${\betaV\OmV}$-reduction is Church-Rosser. 
\end{lem} \begin{proof} It is enough to prove that $\rel{{\betaV\OmV}}$ has the Z property \cite{Oos08}: \begin{center} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (a) [matrix of math nodes, row sep=4em, column sep=4em,text height=1.5ex,text depth=0.25ex] { M & N \\ M\cdv{\Term{\Omega}} & N\cdv{\Term{\Omega}} \\}; \path[->,font=\scriptsize] (a-1-1) edge node[below,pos=0.7]{${\betaV\OmV}$} (a-1-2); \path[->,dashed,font=\scriptsize] (a-1-2) edge node[sloped,above,pos=1]{$*$} node[sloped,below,pos=0.7]{${\betaV\OmV}$} (a-2-1); \path[->,dashed,font=\scriptsize] (a-2-1) edge node[above,pos=1]{$*$} node[below,pos=0.7]{${\betaV\OmV}$} (a-2-2); \end{tikzpicture} \end{center} There are two cases, $M \rel{{\OMEGA_{\vv}}} N$ and $M \rel{\betaV} N$: \begin{enumerate} \item Case $M \rel{{\OMEGA_{\vv}}} N$. It follows that \mbox{${\OMEGA_{\vv}}(M) \equiv {\OMEGA_{\vv}}(N)$} and $M\cdv{\Term{\Omega}} \equiv N\cdv{\Term{\Omega}}$. Therefore \mbox{$N \mrel{{\betaV\OmV}} M\cdv{\Term{\Omega}}$} and $M\cdv{\Term{\Omega}} \mrel{{\betaV\OmV}} N\cdv{\Term{\Omega}}$ and so the lemma follows. \item Case $M \rel{\betaV} N$. Let $R$ be the $\betaV$-redex contracted in $M \rel{\betaV} N$. Let $\mathsf{S}$ be the set of maximal ${\OMEGA_{\vv}}$-redexes in $M$. If $R$ is disjoint with $\mathsf{S}$ then $M\cdv{\Term{\Omega}} \equiv N\cdv{\Term{\Omega}}$ and the lemma follows as in the previous case. If $R$ is not disjoint with some $U \in\mathsf{S}$ then we consider the sub-cases: \begin{enumerate} \item Sub-case $U \equiv \ctx{C}[R]$ is $\lambda_{\vv}$-unsolvable of order $n$. Let $R'$ be the contractum of $R$. By Lemma~\ref{lem:solv-equiv} above we have ${\OMEGA_{\vv}}(\ctx{C}[R]) \equiv {\OMEGA_{\vv}}(\ctx{C}[R'])$ and ${\OMEGA_{\vv}}(M) \equiv {\OMEGA_{\vv}}(N)$. Therefore $M\cdv{\Term{\Omega}} \equiv N\cdv{\Term{\Omega}}$. \item Sub-case $R \equiv (\lambda x.B)\ctx{C}[U]$ is $\lambda_{\vv}$-solvable with $B$ disjoint with $\mathsf{S}$. 
Let $n$ be the order of $U$. The following diagram \begin{center} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (a) [matrix of math nodes, row sep=2em, column sep=4em,text height=1.5ex,text depth=0.25ex] { M & N \\ \ctx{C}'[(\lambda x.B)\ctx{C}[U]] & \ctx{C}'[\cas{\ctx{C}[U]}{x}{B}\,]\\ \ctx{C}'[(\lambda x.B)\ctx{C}[\Term{\Omega}_n]] & \\ \ctx{C}'[\cas{\ctx{C}[\Term{\Omega}_n]}{x}{B}] & \\ M\cdv{\Term{\Omega}} & N\cdv{\Term{\Omega}} \\}; \path[->,font=\scriptsize] (a-1-1) edge node[below,pos=0.7]{$\betaV$} (a-1-2); \path[triple] (a-1-1) to (a-2-1); \path[thirdline] (a-1-1) to (a-2-1); \path[triple] (a-1-2) to (a-2-2); \path[thirdline] (a-1-2) to (a-2-2); \path[->,font=\scriptsize] (a-2-1) edge node[left,pos=0.7]{${\OMEGA_{\vv}}$} (a-3-1); \path[->,font=\scriptsize] (a-3-1) edge node[left,pos=0.7]{$\betaV$} (a-4-1); \path[->,font=\scriptsize] (a-2-2) edge node[sloped,below,pos=0.7]{${\OMEGA_{\vv}}$} (a-4-1); \path[->,font=\scriptsize] (a-4-1) edge node[right,pos=1]{$*$} node[left,pos=0.7]{${\betaV\OmV}$} (a-5-1); \path[->,font=\scriptsize] (a-2-2) edge node[right,pos=1]{$*$} node[left,pos=0.94]{${\betaV\OmV}$} (a-5-2); \path[triple] (a-5-1) to (a-5-2); \path[thirdline] (a-5-1) to (a-5-2); \end{tikzpicture} \end{center} commutes because $\ctx{C}'[\cas{\ctx{C}[\Term{\Omega}_n]}{x}{B}] \mrel{{\betaV\OmV}} M\cdv{\Term{\Omega}} \equiv N\cdv{\Term{\Omega}}$, since $B$ and $\mathsf{S}$ are disjoint. \item Sub-case $R \equiv (\lambda x.\ctx{C}[U])V$ is $\lambda_{\vv}$-solvable with $V\in\mathsf{Val}$ not necessarily disjoint with~$\mathsf{S}$. Let $n$ be the order of $U$. 
The following diagram \begin{center} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (a) [matrix of math nodes, row sep=2em, column sep=2.5em,text height=1.5ex,text depth=0.25ex] { M & N \\ \ctx{C}'[(\lambda x.\ctx{C}[U])V] & \ctx{C}'[\cas{V}{x}{(\ctx{C}[U])}]\\ \ctx{C}'[(\lambda x.\ctx{C}[\Term{\Omega}_n])V] & \ctx{C}'[\cas{V}{x}{(\ctx{C}[\Term{\Omega}_n])}]\\ \ctx{C}'[(\lambda x.\ctx{C}[\Term{\Omega}_n]){\OMEGA_{\vv}}(V)] & \\ \ctx{C}'[\cas{{\OMEGA_{\vv}}(V)}{x}{(\ctx{C}[\Term{\Omega}_n])}] & \\ M\cdv{\Term{\Omega}} & N\cdv{\Term{\Omega}} \\}; \path[->,font=\scriptsize] (a-1-1) edge node[below,pos=0.7]{$\betaV$} (a-1-2); \path[triple] (a-1-1) to (a-2-1); \path[thirdline] (a-1-1) to (a-2-1); \path[triple] (a-1-2) to (a-2-2); \path[thirdline] (a-1-2) to (a-2-2); \path[->,font=\scriptsize] (a-2-1) edge node[left,pos=0.7]{${\OMEGA_{\vv}}$} (a-3-1); \path[->,font=\scriptsize] (a-2-2) edge node[left,pos=0.7]{${\OMEGA_{\vv}}$} (a-3-2); \path[->,font=\scriptsize] (a-3-1) edge node [right,pos=1]{$*$} node[left,pos=0.7]{${\OMEGA_{\vv}}$} (a-4-1); \path[->,font=\scriptsize] (a-3-2) edge node[above,pos=1]{$*$} node[sloped,below,pos=0.7]{${\OMEGA_{\vv}}$} (a-5-1); \path[->,font=\scriptsize] (a-4-1) edge node[left,pos=0.7]{$\betaV$} (a-5-1); \path[->,font=\scriptsize] (a-5-1) edge node[right,pos=1]{$*$} node[left,pos=0.7]{${\betaV\OmV}$} (a-6-1); \path[->,font=\scriptsize] (a-3-2) edge node[right,pos=1]{$*$} node[left,pos=0.94]{${\betaV\OmV}$} (a-6-2); \path[triple] (a-6-1) to (a-6-2); \path[thirdline] (a-6-1) to (a-6-2); \end{tikzpicture} \end{center} commutes because \begin{displaymath} \ctx{C}'[\cas{V}{x}{(\ctx{C}[\Term{\Omega}_n])}] \mrel{{\OMEGA_{\vv}}} \ctx{C}'[\cas{{\OMEGA_{\vv}}(V)}{x}{(\ctx{C}[\Term{\Omega}_n])}] \end{displaymath} which holds by (i) Prop.~2.1.17(ii) in \cite{Bar84}, (ii) $\ctx{C}[\Term{\Omega}_n]$ and $\mathsf{S}\setminus \{U\}$ being disjoint, and (iii) ${\OMEGA_{\vv}}$ being {\scriptsize\vv}-substitutive.\qedhere
\end{enumerate} \end{enumerate} \end{proof} \begin{defi} We say that any theory containing $\mathcal{V}$ (Def.~\ref{def:theory-Hv} below) is \mbox{$\omega$-sensible} (and by extension, any model satisfying $\mathcal{V}$ is \mbox{$\omega$-sensible}). \end{defi} \begin{defi}[Consistent theory] Let $\mathcal{T}$ be a set of equations between terms. $\mathcal{T}$ is consistent, ${\mathrm{Con}}(\mathcal{T})$, iff $\mathcal{T}$ does not prove every closed equation, \ie\ \begin{displaymath} \mathcal{T} \not\vdash M=N\ \text{for some}\ M,N\in\Lambda^0 \end{displaymath} \end{defi} \begin{defi}[$\lambda_{\vv}$-theory] Let $\mathcal{T}$ be a set of closed equations between terms. $\mathcal{T}$ is a $\lambda_{\vv}$-theory iff ${\mathrm{Con}}(\mathcal{T})$ and \begin{displaymath} \mathcal{T}=\{M=N~|~M,N\in\Lambda^0\ \text{and}\ \lambda_{\vv} + \mathcal{T} \vdash M=N\} \end{displaymath} \end{defi} \begin{prop} The theory of $\betaV$-convertible closed terms, $\lambda_{\vv}$, is a $\lambda_{\vv}$-theory. Observe that $\lambda_{\vv}$ is consistent by confluence of \mbox{$\betaV$-reduction}. \end{prop} \begin{defi}[Theory $\mathcal{V}$] \label{def:theory-Hv} Let $\mathcal{V}_0$ be the following set of equations: \begin{displaymath} \mathcal{V}_0 = \{M = N~|~M,N\in\Lambda^0\ \text{$\lambda_{\vv}$-unsolvable of the same order $n$}\} \end{displaymath}\smallskip \noindent The theory $\mathcal{V}$ is the set of equations: \begin{displaymath} \mathcal{V}=\{M=N~|~M,N\in\Lambda^0\ \text{and}\ \lambda_{\vv} + \mathcal{V}_0 \vdash M=N\} \end{displaymath} \end{defi} \begin{lem} \label{lem:bo-generates-hv} ${\betaV\OmV}$-reduction generates $\mathcal{V}$, \ie\ \begin{displaymath} \mathcal{V} \vdash M = N\ \text{iff}\ M =_{\betaV\OmV} N\ \text{with}\ M,N\in\Lambda^0 \end{displaymath} \end{lem} \begin{proof} We first consider the direction ($\Longrightarrow$).
If ${\mathcal{V}_0 \vdash M = N}$ then $M \rel{{\OMEGA_{\vv}}} \Term{\Omega}_n$ and $N \rel{{\OMEGA_{\vv}}} \Term{\Omega}_n$ because both $M$ and $N$ are \mbox{$\lambda_{\vv}$-unsolvable} of order $n$. Consequently, for all axioms $M_0 = N_0$ in the set $\mathcal{V}_0$ that generates $\mathcal{V}$, ${M_0 =_{{\betaV\OmV}} N_0}$ holds, and then $M =_{\betaV\OmV} N$ follows by compatibility, reflexivity, symmetry and transitivity. Now for the direction ($\Longleftarrow$). The theory $\mathcal{V}$ is generated by $\lambda_{\vv} + \mathcal{V}_0$, and so each $\betaV$- or ${\OMEGA_{\vv}}$-reduction step is provable in $\mathcal{V}$. \end{proof} \begin{thm} \label{thm:Hv-lamV-theory} $\mathcal{V}$ is a $\lambda_{\vv}$-theory. \end{thm} \begin{proof} By Def.~\ref{def:theory-Hv} and because ${\mathrm{Con}}(\mathcal{V})$ by Lemmata~\ref{lem:bo-generates-hv} and \ref{lem:conf-bo}. \end{proof} \section{Related work} \label{sec:related-work} We have commented at length from the introduction onwards on the relevant related work on solvability in $\lamK$ and $\lambda_{\vv}$. We only comment here briefly on several outstanding points and on other work of related interest. As mentioned in Section~\ref{sec:value-normal-order}, value normal order is not the same strategy as the complete reduction strategy of $\lambda_{\vv}$ named $\rel{\Gamma}^p$ that is obtained as an instantiation of the `principal reduction machine' of \cite[p.70]{RP04}. The principal reduction machine is actually a template of small-step reduction strategies that is parametric on a set of permissible operands and a set of irreducible terms. The complete reduction strategy $\rel{\Gamma}^p$ is obtained by instantiating the template with the set of permissible operands fixed to $\mathsf{Val}$ and the set of irreducible terms fixed to $\V\NF$ (in \cite{RP04} $\mathsf{Val}$ is called $\Gamma$ and $\V\NF$ is called $\Gamma$-NF).
Value normal order differs from $\rel{\Gamma}^p$ when reducing a term $(\lambda x.B)N$ where $N$ converts to a neutral. In $\rel{\Gamma}^p$ the operand $N$ is reduced to the neutral $N'$ using call-by-value so that $(\lambda x.B)N'$ is a block. At this point $\rel{\Gamma}^p$ keeps reducing $N'$ fully to \mbox{$\betaV$-nf}\ before reducing $B$ fully to \mbox{$\betaV$-nf}. In contrast, value normal order proceeds in left-to-right fashion with the block $(\lambda x.B)N'$, first reducing $B$ fully to \mbox{$\betaV$-nf}\ and then reducing $N'$ fully to \mbox{$\betaV$-nf}. The left-to-right order is the regular one, at least in all the strategies cited in this paper. We have defined value normal order as the $\lambda_{\vv}$ analogue of $\lamK$'s normal order, following the results in \cite{BKKS87}. At any rate, reducing blocks left-to-right or right-to-left does not affect completeness. Both $\rel{\Gamma}^p$ and value normal order entail standard reduction sequences (Def.~\ref{def:v-standard}) and are therefore complete (this is shown for $\rel{\Gamma}^p$ in \cite[p.11]{RP04}). The $\lambda_{\betaV}^*$ calculus of \cite[Def.~11]{EHR91,EHR92} is a calculus with partial terms. There is a unique constant $\Omega$ that represents `bottom'. The calculus has reduction rules $M\,\Omega \rel{} \Omega$ and $\Omega\,M \rel{} \Omega$ which capture preservation of unsolvability by application (Section~\ref{sec:effective-use-lamK}). In \cite[p.508]{Wad76} we find conversion rules $\Omega\,M = \Omega$ and $\lambda x.\Omega = \Omega$, now in the context of $\lamK$. In both approaches $\Omega$ is uniquely used as `bottom'. However, we have considered infinitely many bottoms with different orders, and have followed in Section~\ref{sec:consistent-lamV-theory} the syntactic approach of \cite{Bar84} where $\Term{\Omega}$ is a term (not a constant representing `bottom') and $M \rel{} \Term{\Omega}$ when $M$ is unsolvable.
The $\Term{\Omega}_n$ of Section~\ref{sec:consistent-lamV-theory} are terms. The computational lambda calculus of \cite{Mog91} adds the equations $\Term{I}\,X = X$ and $(\lambda x.y\,x)X = y\,X$, for all $X\in\Lambda$, as axioms to the proof-theory. These equations affect sequentiality (Section~\ref{sec:neutrals-seq}). The occurrence of a free variable can be seen as the result of implicitly applying the `opening' operation to a locally-nameless representation of a program (a closed term) \cite{Cha12}. In the local scope, operational equivalence is refined by considering open and non-closing contexts (Section~\ref{sec:open-open}) that disclose the differences in sequentiality. After that, the program can be recovered by the `closing' operation. The Genericity Lemma (Section~\ref{sec:effective-use-lamK}) conforms with the axiomatic framework for meaningless terms of \cite{KOV99}. The axioms for $\lamK$ state that meaningless terms are closed under reduction and substitution (Axioms 1 and 3) and that if $M$ is meaningless then $M\,N$ is meaningless, \ie\ $M$ cannot be used as a function (Axiom 2). For $\lamK$, Axioms~1, 2, and 3 are enough to prove the Genericity Lemma and the consistency of the proof-theory extended with equations between all meaningless terms. However, in $\lambda_{\vv}$ there is partiality in meaninglessness, \ie\ not all meaningless terms are bottom. The analogues of the axioms have to be order-aware. In particular, Lemma~\ref{lem:subst-pres-order-lamV} is the order-aware analogue of Axiom~3. The analogue of Axiom~1 is trivial: just consider $=_{\beta\va}$. As for Axiom~2, if $M$ is \mbox{$\lambda_{\vv}$-unsolvable} of order $n$, then $M\,N$ (with $N\in \mathsf{Val}$) is \mbox{$\lambda_{\vv}$-unsolvable} of order $n-1$. However, if $N\not\in\mathsf{Val}$, then $M\,N$ is $\lambda_{\vv}$-unsolvable of order $0$. We leave the proof of the analogy as future work.
\section{Conclusions and future work} \label{sec:conclusion-future-work} The presupposition of $v$-solvability (Section~\ref{sec:v-solv}) is that terms with \mbox{$\betaV$-nf}\ that are not transformable to a value of choice (such as $\Term{B}$ and $\Term{U}$) are observationally equivalent to terms without \mbox{$\betaV$-nf}\ that are also not transformable to a value of choice (such as $\Term{\Omega}$ and $\lambda x.\Term{\Omega}$), and that all of them are operationally irrelevant and meaningless. This gives rise to an inconsistent $\lambda_{\vv}$-theory. We have shown that these terms can be separated operationally and that this conforms to $\lambda_{\vv}$'s nature. Neutral terms differ at the point of potential divergence, \ie\ at the blocking variable which has to be given the opportunity to be substituted by an arbitrary value according to $\lambda_{\vv}$'s principle of `preserving confluence by preserving potential divergence' (Section~\ref{sec:pure-lamV}). The actual choice of values for blocking variables lets us separate terms with the same functional result that nonetheless have different sequentiality, or may have different sequentiality when using a different complete reduction strategy. The functional models of $\lambda_{\vv}$ do not have such separating capabilities, but functional models are not the only possible models. We have to follow the other line of investigation, namely, to `vary the model to fit the intended calculus'. Models that capture sequentiality exist, and we believe there are \mbox{$\omega$-sensible} models that resemble the sequential algorithms of \cite{BC82} (Section~\ref{sec:lamV-solv}). As discussed in Section~\ref{sec:complete-standard}, standard reduction sequences fall short of capturing all complete strategies of $\lambda_{\vv}$. A result analogous to $\lamK$'s `quasi-needed reduction is normalising' \cite[p.208]{BKKS87} is missing for $\lambda_{\vv}$. 
We are currently developing the analogue for $\lambda_{\vv}$ of quasi-needed reduction, and the proof that it is normalising. As discussed in Section~\ref{sec:operational-characterisation}, we believe it is possible to give an operational characterisation of $\lambda_{\vv}$-solvability, \ie\ a reduction strategy of $\lambda_{\vv}$ that terminates iff the input term is \mbox{$\lambda_{\vv}$-solvable}. But we believe it cannot be compositional because it requires non-local information about the shape of the term to decide which is the next $\betaV$-redex. We have a preliminary implementation that uses a mark-test-and-contract algorithm. Terms with positive polarity are tested for transformability and terms with negative polarity are tested for valuability. In order to test we keep a sort of stratified environment that references the operands in the nested accumulators of a chnf. The environment grows as reduction proceeds inside the body of nested blocks, where a table of lexical offsets defines what is visible at each layer. The $\betaV$-redexes are marked for contraction, but are only contracted after testing the $\lambda_{\vv}$-solvability of the subterm in which they occur. Our implementation can be refined using the `linear blocking structure' of the sequent term calculus \cite{Her95,CH00,San07}. The blocking structure of {chnf}s (\ie\ the structure of nested blocks around the blocking variable) becomes a linear structure when injecting the {chnf}s into their sequent-term representation. The sequent-term representation seems promising to develop the analogue of Böhm trees in $\lambda_{\vv}$. Let us illustrate this by adopting the untyped lambda-Gentzen calculus of \cite{San07} ($\lambda^{\mathsf{Gtz}}$ for short). 
Assume the injection $\widehat{\ }:\CH\NF \to \Lambda^{\mathsf{Gtz}}$ and consider the shape of a chnf\ from Section~\ref{sec:value-normal-order}: \begin{displaymath} \lambda x_1\ldots x_n.(\lambda y_p.B_p) (~\ldots((\lambda y_1.B_1)((z\,W_0^0)W_1^0\cdots W_{m_0}^0) W_1^1\cdots W_{m_1}^1)\ldots~) W_1^p\cdots W_{m_p}^p \end{displaymath} This shape is injected into the sequent term: \[\eqalign{ \lambda x_1\ldots x_n. (z[\widehat{W^0_0}])[\widehat{W^0_1},\ldots,\widehat{W^0_{m_0}}] @(y_1)(&\widehat{B_1}[\widehat{W^1_1},\ldots,\widehat{W^1_{m_1}}]\cr &@(y_2)(\ldots (y_p)(\widehat{B_p}[\widehat{W^p_1},\ldots,\widehat{W^p_{m_p}}])\ldots)) } \] The $\lambda^{\mathsf{Gtz}}$ representation reflects the blocking structure of the nested blocks and accumulators in linear fashion, where the blocking variable $z$ appears in the leftmost position, and each accumulator in the trailing context `unblocks' the subsequent accumulator. \section*{Acknowledgement} A preliminary version of this work was presented at the 11th International Workshop on Domain Theory and Applications (Domains XI), Paris, 10th September 2014. We are grateful to Thomas Ehrhard and Alberto Carraro for their insightful comments during the workshop. We also thank Beniamino Accattoli and Flavien Breuvart for the stimulating discussions in the early stages of this work. Our gratitude to Luca Aceto at the Icelandic Centre of Excellence in Theoretical Computer Science, and to Manuel Carro and Manuel Hermenegildo at the IMDEA Software Institute of Madrid, for providing an excellent supporting environment. We are also indebted to Luca for his thoughtful review and encouragement. \bibliographystyle{alpha}
\section{Introduction} For carbon dioxide (CO$_2$), an important industrial chemical, numerous interaction potentials (IPs) have been proposed; among molecular models it is surpassed perhaps only by water in the amount of computer attention it has attracted. There are the parametric fits to the {\em ab initio} potential energy surface of the dimer, such as the IPs of Steinebrunner and coworkers,\cite{steinebrunner98} of Bukowski and colleagues,\cite{bukowski99} and of Bock {\em et al.} \cite{bock00} The most successful and widely used IPs, however, have been fitted against bulk properties known from observation, such as the vapor-liquid equilibrium \cite{potoff98,vrabec01,zhang05} (VLE) or the crystal lattice parameters \cite{murthy81,*murthy83}, and still others against experimental properties of the dilute gas, such as the second virial coefficient. \cite{koide74,macrury76} One may ask why this is so: the {\em ab initio} IPs employ a great number of fitting parameters, not always of clear physical origin, and interfacing them with other IPs, for instance when studying mixtures, is technically difficult because suitable combining equations are not known for the many parameters. Moreover, this problem is not unique to the {\em ab initio} IPs: some empirical IPs also use truncated series expansions \cite{koide74, macrury76, vrabec01} for both angular and radial functions in a way which makes it difficult to interface them with IPs of radial site-site interactions. Hence, such IPs can find little use outside simulations of the neat liquid. On the other hand, great success has been enjoyed by the simple site-site interaction formula of one Lennard-Jones interaction center and one atomic point charge centered on every atomic site\cite{murthy81, *murthy83, harris95, potoff98, zhang05}.
Restricting ourselves to the rigid models, these can all be regarded as descendants of the original\cite{murthy81, *murthy83} IP due to Murthy, Singer and MacDonald (MSM). These IPs have the very appealing property that they can be readily interfaced (``mixed'') with existing force fields, many of which share the exact same mathematical form. Nature is not kind enough, however, to allow such a simplified description of the CO$_2$ molecule at no cost. Even though the simple, rigid model with three Lennard-Jones centers and one point quadrupole developed by Merker {\em et al.}\cite{merker10} obtains very good experimental agreement for a wide variety of properties, it suffers, like the EPM-2 model,\cite{harris95} from disagreement with experiment in more ``basic'' properties such as the carbon-oxygen bond distance. With respect to the VLE envelope, the most successful such simple MSM-type model to date is the IP due to Zhang and Duan \cite{zhang05} (ZD). Nevertheless, despite its high accuracy in this property, I report in this Note that the microstructure of the dimer and the temperature dependence of the second, $B_2(T)$, and third, $B_3(T)$, virial coefficients are of much poorer quality. It should be pointed out, also, that the results presented in Ref.~\onlinecite{zhang05} have been called into question.\cite{merker08} One striking omission from the published CO$_2$ IPs is that of many-body effects. That the CO$_2$ molecule lacks an electric dipole moment may have dissuaded investigators from looking in this direction, but in the work leading up to this Research Note, extensive trials indicated that it is not possible with a non-polarizable IP of MSM-type to fit $B_2(T)$ while keeping the experimental agreement of the VLE envelope, at least not if the experimental bond distance and electric quadrupole moment are to be kept intact.
The decision was then made in favor of a polarizable IP, to be described in this Note, but the extra cost that the high-resolution solution of the electric field equations incurs, even for a single polarization site, makes simulations of the vapor-liquid coexistence prohibitively expensive for parametrization purposes. Instead, the temperature dependence of the fluid density at 200 bar was used as a test for the many-body (concentrated phase) properties, and $B_2(T)$ for the two-body (dilute phase) properties. Further developments then introduced a three-body dispersion interaction of Axilrod-Teller type,\cite{axilrod43} and $B_3(T)$ as a parametrization target. As a test of the validity of the IP, numerous other properties are calculated without input into the parametrization procedure. The model introduced in this work goes by the moniker of Gaussian Charge Polarizable Carbon Dioxide (GCPCDO), because of its great similarity to the highly successful GCPM water,\cite{paricaud05} from which it borrows most of the essential equations. This Note is organized as follows. First, Section \ref{sec1} gives a description of the mathematical form of the IP, together with the details of the calculations and the targeted properties of the parametrization. Then, in Section \ref{sec2}, the results are presented and discussed. Finally, a brief recapitulation of the main points is given in Section \ref{sec3}.
\section{Model and computational details} \label{sec1} \subsection{Electrostatic interaction} We compute the interaction between partial charges $q_\alpha$ and $q_\beta$ at a separation of $r_{\alpha \beta}$ through the formula \begin{eqnarray} u_\mathrm{q}(r_{\alpha \beta}) = \frac {q_\alpha q_\beta} {r_{\alpha \beta}} \eta(r_{\alpha \beta}, \tau_\alpha, \tau_\beta) \label{eq:chgenergy} \end{eqnarray} Here $\eta(r_{\alpha \beta}, \tau_{\alpha}, \tau_{\beta})$ is a function that assures that the electrostatic interactions remain finite at all separations; for large $r_{\alpha \beta}$ it approaches unity. Physically, this corresponds to charges distributed over a finite volume in space, and in this way we avoid the singularity of the potential at zero charge separation. For point charges, $\eta$ is identical to unity and Eq.~(\ref{eq:chgenergy}) reduces to the classical Coulomb law. We choose the following form for the $\eta$-function, \begin{equation} \eta(r_{\alpha \beta}, \tau_\alpha, \tau_\beta) = \mathrm{erf} \left (\frac {r_{\alpha \beta}} {\sqrt{2(\tau_\alpha^2 + \tau_\beta^2)}} \right ) \end{equation} which corresponds to the physical case of two Gaussian charge distributions of standard deviations $\tau_\alpha$ and $\tau_\beta$ interacting with each other.\cite{chialvo98} For now, we shall not concern ourselves with the general case of many-body interaction, as given by both ``static'' and ``dynamic'' electron correlation, but exclusively take care of the ``static'' correlation arising from the average electric field around each molecule, {\em i. e.} electronic induction effects. Furthermore, we do not carry this analysis beyond the dipole induction, {\em i. e.} we consider only the gradient of the electric potential.
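As an illustration, Eq.~(\ref{eq:chgenergy}) is straightforward to evaluate. The following Python sketch is not part of the model code; the charges, widths and unit convention are illustrative assumptions (the Coulomb constant is taken as absorbed into the charges).

```python
import math

def coulomb_gauss(q_a, q_b, r, tau_a, tau_b):
    """Damped Coulomb energy between two Gaussian charge clouds.

    For large r the erf factor tends to unity and the classical Coulomb
    law is recovered; for r -> 0 the energy remains finite, approaching
    q_a*q_b*sqrt(2/pi)/sqrt(tau_a**2 + tau_b**2), so only r > 0 is
    required here.
    """
    eta = math.erf(r / math.sqrt(2.0 * (tau_a ** 2 + tau_b ** 2)))
    return q_a * q_b / r * eta
```

At separations much larger than the charge widths the function is numerically indistinguishable from the point-charge interaction, so only the short-range behavior is modified by the smearing.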
An induced dipole is hence positioned at the center of mass of molecule $l$ and is given by \begin{equation} \vec \mu_l = \vec E_l \left ( \alpha_{\bot} + |\cos(\theta)| (\alpha_\parallel - \alpha_{\bot}) \right ) \end{equation} where $\vec E_l$ is the electrostatic field at that point, $\alpha_\bot$ the polarizability perpendicular to the molecular axis and $\alpha_\parallel$ that parallel to the same. Both $\alpha_\bot$ and $\alpha_\parallel$ are known from theory \cite{maroulis03} and their average agrees in magnitude with that ascertained in experiment, \cite{bridge66, chrissanthopoulos00} but the interpolation between them has been chosen merely for convenience. $\theta$ is the angle between the molecular axis and the direction of $\vec E_l$, the electric field at site $l$. Because of the uncertainty in the precise form of the polarization matrix, and the approximations involved with the rigid rotor, the polarizabilities have been rounded to only two significant digits. In turn, the electric field is computed as the sum of the contributions due to the permanent charges and that due to the other induced dipoles, \begin{equation} \vec E_l = \vec E_l^\mathrm q + \vec E_l^\mathrm \mu \end{equation} where $\vec E_l^\mathrm q$ is the electric field at the center of molecule $l$ due to all the charges on the other molecules, and $\vec E_l^\mathrm \mu$ is the electric field at the same point, but due to all the other induced dipoles.
These are given by \begin{equation} \vec E_l^\mathrm q = \sum_{m \neq l} \sum_{\beta \in m} \frac {q_{\beta} (\vec r_l - \vec r_\beta)} {r_{l\beta}^3} \left (\eta(r_{l\beta}, \tau_l, \tau_\beta) - r_{l\beta} \frac {\partial} {\partial r_{l\beta}} \eta(r_{l\beta}, \tau_l, \tau_\beta) \right ) \end{equation} and \begin{equation} \vec E_l^\mathrm \mu = \sum_{m \neq l} \mathbf T_{lm} \vec \mu_m \end{equation} where the tensor $\mathbf T_{lm}$ is obtained from the Hessian of equation (\ref{eq:chgenergy}), with -- following Paricaud {\em et al.} \cite{paricaud05} -- the charge width parameter of the molecular center equal to that of the center atom. If the dipoles are converged, the extra energy of interaction due to the polarization is given by \cite{ahlstrom89} \begin{equation} U_\mathrm{pol} = -\frac 1 2 \sum_{l=1}^N \vec \mu_l \cdot \vec E_l^\mathrm{q} \end{equation} \subsection{Dispersion interaction} First of all, a modified Buckingham exp-6 potential is adopted between atoms $\alpha$ and $\beta$, \begin{equation} u_{\mathrm{disp},2}(r_{\alpha \beta}) = \frac {\epsilon_{\alpha \beta}} {1 - 6 / \gamma_{\alpha \beta}} \left [\frac {6} {\gamma_{\alpha \beta}} \exp \left \{ \gamma_{\alpha \beta} \left (1 - \frac {r_{\alpha \beta}} {\sigma_{\alpha \beta}} \right ) \right \} - \left ( \frac {\sigma_{\alpha \beta}} {r_{\alpha \beta}} \right )^6 \right ] \end{equation} This represents the pairwise additive part of the dispersion interaction as well as the steric repulsion between atoms. Also for this interaction, a hard core is introduced at $r_{\alpha \beta} = 0.57 \sigma_{\alpha \beta}$ to avoid the spurious behavior of this potential at short range. This is the same hard-core cutoff used by Paricaud and coworkers \cite{paricaud05} in the GCPM water model. Investigations indicated that the results are not very sensitive to the shortening of this hard-core radius to $0.35 \sigma_{\alpha \beta}$, but the speed of simulation is.
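For concreteness, the exp-6 pair term together with the hard core at $0.57\sigma_{\alpha\beta}$ can be sketched in a few lines of Python; this is an illustrative sketch rather than the simulation code, with the hard core represented by an infinite energy.

```python
import math

def exp6(r, eps, sigma, gamma, core_frac=0.57):
    """Modified Buckingham exp-6 pair energy with a hard core.

    In this parametrization the minimum sits at r = sigma with depth
    -eps (for any gamma > 6); the hard core at core_frac*sigma removes
    the spurious turnover of the exponential at very short range.
    """
    if r < core_frac * sigma:
        return math.inf
    prefactor = eps / (1.0 - 6.0 / gamma)
    return prefactor * ((6.0 / gamma) * math.exp(gamma * (1.0 - r / sigma))
                        - (sigma / r) ** 6)
```

The identity $u_{\mathrm{disp},2}(\sigma_{\alpha\beta}) = -\epsilon_{\alpha\beta}$ holds exactly for any $\gamma_{\alpha\beta} > 6$, which makes the function easy to check numerically when fitting.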
Here $\epsilon_{\alpha \beta}$, $\gamma_{\alpha \beta}$ and $\sigma_{\alpha \beta}$ are atomic interaction parameters, related to the well depth, steepness and position, respectively, of the dispersion interaction. Second, a modified Axilrod-Teller term is added for all molecular triples. This is the triple-dipole dispersion correction to the van der Waals interaction which was first published by Axilrod and Teller. \cite{axilrod43} In the derivation by Axilrod,\cite{axilrod51} spherical polarizabilities are assumed, but for anisotropically polarizable molecules, such as CO$_2$, his result does not hold. To investigate this case, we quickly recapitulate Axilrod's derivation\cite{axilrod51} in which we dispose of the assumption of spherical polarizabilities. For each of the three molecules $l$, $m$ and $n$, we define a local coordinate system where the $z$-axis is normal to the plane of the molecular centers, the $x$-axis parallel to the bisector of the angle spanned by the two other molecules and the $y$-axis mutually orthogonal to the $x$- and $z$-axes. Hence, the $z$-axes of the three local coordinate systems all coincide, but the $x$- and $y$-axes need not. Assume now that the electronic structure of each molecule is independent and described by a wavefunction that factorizes into separable $x$-, $y$- and $z$-contributions in its local coordinate system, {\em i. e.} for molecule $l$, \begin{equation} \psi_l(x_l, y_l, z_l) = X(x_l) Y(y_l) Z(z_l) \end{equation} We write the third-order perturbation correction to the ground-state energy, $W_0'''$, which in Axilrod's notation is (Eq.~[5a] in Ref.~\onlinecite{axilrod51}), \begin{equation} W_0''' = \mathop{\sum_{j}}_{j \neq k, 0} \mathop{\sum_{k}}_{k \neq 0} \frac{H_{0j}' H_{jk}' H_{k0}'} {(W_j^0 - W_0^0)(W_k^0 - W_0^0)} \end{equation} Here $H'_{jk}$ is the matrix element of the dipole perturbation for states $j$ and $k$, and $W_j^0$ is the energy of state $j$.
Invoking the closure approximation, \cite{atkins05} this equation may be approximately recast as \begin{equation} W_0''' \approx \frac 1 {\nu'} \mathop{\sum_{j}}_{j \neq k, 0} \mathop{\sum_{k}}_{k \neq 0} {H_{0j}' H_{jk}' H_{k0}'} \equiv u_{\mathrm{disp},3}(l, m, n) \label{pert} \end{equation} where \begin{equation} \nu' = \langle (W_j^0 - W_0^0)(W_k^0 - W_0^0) \rangle_{jk} \end{equation} and $\langle \ldots \rangle_{jk}$ signifies the arithmetic average over $j$ and $k$. Eq.~(\ref{pert}) serves to define the three-body correction to the dispersion energy that we will use. Formally, the set of matrix elements $\{H_{jk}'\}$ covers all possible excited states, but we shall assume contributions to the sum only from the first excited orbital of each symmetry for each molecule. That is, the three lowest excited states of the arbitrary molecule $l$ are assumed to be $X^*(x_l) Y(y_l) Z(z_l)$, $X(x_l) Y^*(y_l) Z(z_l)$ and $X(x_l) Y(y_l) Z^*(z_l)$, where the asterisk denotes the next higher-energy orbital. Because the molecules share a common $z$-axis, only mixed excited states for the $x$- and $y$-components between the molecules contribute to the sum over states. Hence, the matrix elements for the sequences of possible excitations are exhaustively given in Table I of Ref.~\onlinecite{axilrod51}, and the explicit form of these matrix elements is provided in Eq.~(29) of the same reference, except that the common factor $M^2$ is no longer applicable because the transition dipole moment is no longer the same for the different components.
Instead, given the two molecules $l$ and $m$, excited in their $x$ and $y$ orbitals, respectively, we write the corresponding matrix element ({\em cf.} Eq.~[29] in Ref.~\onlinecite{axilrod51}) \begin{equation} (x_m, y_l) = (M_{x_m} M_{y_l} / (4 \pi \varepsilon R_{ml}^3))(2 \cos \frac 1 2 \gamma_m \sin \frac 1 2 \gamma_l - \sin \frac 1 2 \gamma_m \cos \frac 1 2 \gamma_l) \label{eq:matel} \end{equation} where $M_{x_m}$ denotes the expectation value of the transition dipole moment along the $x$-axis of molecule $m$ and $\gamma_m$ the angle defined by the molecules $l$, $m$ and $n$ with its apex in molecule $m$. All the other matrix elements follow by analogy with Ref.~\onlinecite{axilrod51}. We have written Eq.~(\ref{eq:matel}) in a general form, with $\varepsilon$ being the permittivity of the medium in which the molecules are dispersed. It is most reasonable to take this as the permittivity of free space, since, because of the very short-range nature of the interaction (it tapers off as the inverse ninth power of distance), it is unreasonable to assume that a homogeneous medium of CO$_2$ molecules can be accommodated between the interacting molecules. The final approximation is to replace the $M$-factors by the square root of the corresponding polarizabilities. Collecting the constants of proportionality in the common prefactor of Eq.~(\ref{pert}), which is then seen to have dimensions of reciprocal energy, we treat it as a fitting parameter and denote it by $1 / \nu$. With anisotropic polarizabilities, the sum over states in Eq.~(\ref{pert}) does not simplify to the simple form given in the original references \cite{axilrod43, axilrod51} and the complicated closed-form expression will not be reproduced here. In any case, since it involves sines and cosines it is not optimal from a computational point of view; it is much more efficient, in terms of the total number of floating-point operations, to calculate the truncated sum over states directly.
To this end, the half-angle formulas are used to rewrite the matrix elements, such as the one in Eq.~(\ref{eq:matel}), in terms of dot products and square roots, which are much more efficient in terms of CPU cycles than trigonometric functions. The polarizabilities are calculated, as before, by the interpolation \begin{equation} \alpha(\theta) = \alpha_\bot + |\cos(\theta)|(\alpha_\parallel - \alpha_\bot) \end{equation} where $\theta$ is, once again, the angle to the molecular axis. The limiting polarizabilities are taken to be the same as the static ones. In total, after self-consistent solution of the induced dipoles, the energy of interaction among $N$ molecules is given by \begin{equation} U = \sum_{l=1}^N\sum_{m>l}^N \sum_{\alpha \in l} \sum_{\beta \in m} \left (u_\mathrm{q} (r_{\alpha \beta}) + u_{\mathrm{disp},2}(r_{\alpha \beta}) \right ) - \frac 1 2 \sum_{l=1}^N \vec \mu_l \cdot \vec E_l^\mathrm q + \sum_{l=1}^N \sum_{m>l}^N \sum_{n>m}^N u_{\mathrm{disp},3}(l,m,n) \end{equation} The analytical gradient of this expression is very involved, with the chain rule giving factors proportional to the gradient of the polarizability. Consequently, when the gradient has been needed, for instance in energy minimization, it has been calculated numerically using the finite-difference approximation. \subsection{Parametrization strategy} \subsubsection{Gas-phase properties} A number of parameters have not been optimized, but simply assigned from plausible experimental values in the literature.
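The two ingredients just described, the half-angle rewriting and the polarizability interpolation, can be sketched in a few lines of Python. The function names are illustrative, and the half-angle form assumes $0 \le \gamma \le \pi$, as holds for the interior angles of the triangle of molecular centers.

```python
import math

def alpha_interp(cos_theta, alpha_perp, alpha_par):
    """Orientation-dependent polarizability:
    alpha(theta) = alpha_perp + |cos(theta)| * (alpha_par - alpha_perp)."""
    return alpha_perp + abs(cos_theta) * (alpha_par - alpha_perp)

def half_angle(cos_gamma):
    """Return (cos(gamma/2), sin(gamma/2)) from cos(gamma) alone.

    cos(gamma) is available as a dot product of unit vectors, so the
    matrix elements need only square roots, no trigonometric calls.
    """
    c = math.sqrt(0.5 * (1.0 + cos_gamma))
    s = math.sqrt(0.5 * (1.0 - cos_gamma))
    return c, s
```

This is the sense in which the trigonometric factors of Eq.~(\ref{eq:matel}) reduce to dot products and square roots.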
Thus, the bond length is fixed at 1.161 \AA, midway between published values of 1.160 \AA\ and 1.162 \AA\ by experimental groups,\cite{harmony79, granar86} and the partial charges on the atoms are chosen to reproduce the experimental quadrupole moment.\cite{buckingham63,harries70} To further reduce the number of free parameters, the charge width of the oxygen atom, $\tau_\mathrm O$, was set equal to 0.610 \AA, the value of the oxygen atom in GCPM water,\cite{paricaud05} and the corresponding quantity for carbon was scaled according to the ratio of the $\sigma$ parameters ({\em vide infra}). These parameters were not optimized. Moreover, I introduced the additional constraint $\gamma_{\alpha \alpha} = \gamma_{\beta \beta} = \gamma_{\alpha \beta}$, and for the remaining parameters, the following ``mixing rules'' were adopted for unlike interactions of the modified Buckingham exp-6 potential, \begin{eqnarray} \epsilon_{\alpha \beta} & = & \frac {2 \epsilon_{\alpha \alpha} \epsilon_{\beta \beta}} {\epsilon_{\alpha \alpha} + \epsilon_{\beta \beta}} \label{eqn:mix1} \\ \sigma_{\alpha \beta} & = & \frac 1 2 \left ( \sigma_{\alpha \alpha} + \sigma_{\beta \beta} \right ) \label{eqn:mix2} \end{eqnarray} Eq.~(\ref{eqn:mix1}) can be justified by appeal to the London formula,\cite{eisenschitz30} in which the harmonic average of the ionization energy is taken. Normally, it is the geometric average of the polarizabilities in said equation that lends its mathematical form to the combining rule for the $\epsilon$-parameters. However, the precise form of the mixing rules is of secondary concern, as they serve mainly to reduce the parameter space that has to be fit, and provided the atomic interaction parameters do not turn out to be very different from each other, the result will be insensitive to reasonable choices of mixing rules.
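As a quick numerical illustration of these combining rules (a sketch, not the parametrization code), applied to the final atomic parameters of Table \ref{tab:param}:

```python
def mix_eps(eps_a, eps_b):
    """Harmonic-mean combining rule for the well depth, Eq. (mix1)."""
    return 2.0 * eps_a * eps_b / (eps_a + eps_b)

def mix_sigma(sig_a, sig_b):
    """Arithmetic-mean combining rule for the size parameter, Eq. (mix2)."""
    return 0.5 * (sig_a + sig_b)

# Final GCPCDO atomic parameters (eps in K, sigma in Angstrom)
eps_co = mix_eps(67.72, 71.34)    # carbon-oxygen well depth
sig_co = mix_sigma(3.193, 3.347)  # carbon-oxygen size parameter
```

Because the harmonic mean never exceeds the arithmetic mean, $\epsilon_\mathrm{CO}$ lies slightly below the average of the two atomic well depths.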
A test set of potentials with predefined $\sigma$ values was investigated, covering a broad range but with the ratio $\sigma_\mathrm O / \sigma_\mathrm C$ of the oxygen sigma value, $\sigma_\mathrm O$, to the carbon sigma value, $\sigma_\mathrm C$, held fixed at $1.0483$. This value was arbitrarily chosen early in the development of the model and never subjected to revision. For each such class of IPs, the $\epsilon$-parameters were manually tuned in trial-and-error fashion to obtain a good fit of $B_2(T)$ against experimental data and a reasonable binding energy and geometry of the dimer structure. $B_2(T)$ was calculated by the Mayer sampling method \cite{singh04} over $5 \times 10^8$ Monte Carlo steps. Typically, with this number of Monte Carlo steps, the resulting uncertainty, calculated from the block average method \cite{flegal07} with blocks of $10^5$ cycles, is of the order of one percent. The induced dipoles were iteratively solved to within a tolerance of $3 \times 10^{-10}$ D at all times. In the course of this parametrization, it was found that the $\gamma$-parameter plays a crucial role. This parameter controls the steepness of the modified Buckingham potential, without affecting the well depth or location. Small values lead to a flat minimum, and $B_2(T)$ turned out to be very sensitive to this parameter. Thus, it was found that, to fit $B_2(T)$, this parameter had to be increased from its initial estimate of 12.75 (taken from GCPM water \cite{paricaud05}) to 15.50. Later in the development of the model, it was found necessary, in order to reproduce the liquid densities, to also include the three-body dispersion in the energy expression.
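For reference, the quantity being fitted here reduces, for a spherically symmetric pair potential, to a one-dimensional integral over the Mayer $f$-function; it is the orientational average over rigid-molecule configurations that makes Mayer-sampling Monte Carlo necessary for CO$_2$. A sketch of the spherical case (illustrative only, not the method used in this work), checked below against the exact hard-sphere result $B_2 = 2 \pi d^3 / 3$:

```python
import math

def b2_spherical(pair_u, beta, r_max=10.0, n=100000):
    """B2 = -2*pi * integral_0^inf (exp(-beta*u(r)) - 1) r^2 dr
    for a spherically symmetric pair potential, by midpoint quadrature."""
    h = r_max / n
    acc = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        mayer_f = math.exp(-beta * pair_u(r)) - 1.0
        acc += mayer_f * r * r
    return -2.0 * math.pi * acc * h

# Hard spheres of unit diameter: u = inf inside contact, 0 outside
hard_sphere = lambda r: math.inf if r < 1.0 else 0.0
```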
Because the $\nu$-parameter is completely independent of all dimer properties, including $B_2(T)$, it was fitted independently so that $B_3(T)$ coincided with the data of Dushek {\em et al.}\cite{dushek90} It was not possible, with this single parameter, to also reproduce the data of Holste {\em et al.},\cite{holste87} something which lends credence to the former measurements. The value obtained, $\nu = 2.82 \times 10^4$ K, is reasonable in that it is of the same order of magnitude as the Axilrod-Teller coefficient obtained for argon, for which this coefficient is $2.1 \times 10^4$ K in the appropriate units.\cite{leonard75} \subsubsection{Bulk simulations} The fluid densities were extracted from isothermal-isobaric Metropolis Monte Carlo simulations \cite{metropolis53} for an ensemble of 200 molecules in periodic boundary conditions of cubic symmetry over $M = 2.5 \times 10^7$ steps, run in parallel over five independent Markov chains. Standard errors of the mean were estimated from the block average method \cite{flegal07} with $\sqrt{M}$ blocks. For the modified Buckingham potential, the neglect of interactions beyond the cutoff was compensated for by correcting the energy by \begin{equation} U_\mathrm{LRC} = u_\mathrm{LRC}^\mathrm{CC} + 4 u_\mathrm{LRC}^\mathrm{CO} + 4 u_\mathrm{LRC}^\mathrm{OO} \end{equation} where $u_\mathrm{LRC} = \frac \rho 2 \int_{r_\mathrm{c}} ^{\infty} 4 \pi r^2 U_\mathrm{exp-6}(r) \mathrm d r$ and the superscripts indicate between which two atom types the interaction is computed. Here $\rho$ is the number density of molecules.
The integral in question can be computed analytically, which yields \begin{equation} u_\mathrm{LRC} = \frac {2 \pi \epsilon \rho} {1 - 6/\gamma} \left [ \frac {6 \sigma} {\gamma^4} \left (r_c^2 \gamma^2 + 2 r_c \gamma \sigma + 2 \sigma^2 \right ) \exp \left (\gamma - \frac {r_c \gamma} {\sigma} \right ) - \frac {\sigma^6} {3 r_c^3} \right ] \end{equation} For the Axilrod-Teller terms, in light of their very strong distance dependence, no long-range correction was deemed necessary. For the electrostatic interaction, a long-range correction was introduced for the induced dipole using the Onsager expression for the dipole reaction field,\cite{onsager36} with the dielectric constant taken from the Clausius-Mossotti equation fitted against experimental measurements of the dielectric constant.\cite{keyes30} This may seem very approximate but was done for two principal reasons. First, lattice summation techniques such as the Ewald sum \cite{frenkel01} introduce assumptions of periodicity which are unsuitable for the uncorrelated nature of the molecules in a disordered phase. Second, the quadrupolar interaction terms decay sufficiently fast with distance that their sum is absolutely convergent. On the assumption of uncorrelated molecules beyond the cutoff radius, the long-range correction vanishes. Last but not least, the exact same procedure was used by Paricaud {\em et al.} in their simulations of GCPM water.\cite{paricaud05} As a technical side note, it should be pointed out that Ewald summation for Gaussian charges is complicated by the fact that the required Fourier transform for the reciprocal-space sum is not analytically tractable. Using the reciprocal-space sum for point charges, with due corrections in direct space and a long cutoff radius, is an alternative, but an inefficient one. Each trial move consisted of randomly displacing and rotating from one up to four molecules. Interaction cutoffs were introduced at half the box length.
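The tail correction can be verified numerically: the sketch below implements the closed form of $(\rho / 2) \int_r^\infty 4 \pi s^2 U_\mathrm{exp-6}(s) \, \mathrm d s$ obtained by term-by-term integration (the middle term of the bracket is $2 r_c \gamma \sigma$, as dimensional consistency requires) and compares it, over a finite interval, with Simpson quadrature. Parameter values are illustrative only.

```python
import math

def exp6(r, eps, gamma, sigma):
    """Modified Buckingham exp-6 pair potential (no hard core needed for r > r_c)."""
    pref = eps / (1.0 - 6.0 / gamma)
    return pref * ((6.0 / gamma) * math.exp(gamma * (1.0 - r / sigma))
                   - (sigma / r) ** 6)

def u_lrc(r, rho, eps, gamma, sigma):
    """Closed form of (rho/2) * int_r^inf 4*pi*s^2 * exp6(s) ds."""
    pref = 2.0 * math.pi * eps * rho / (1.0 - 6.0 / gamma)
    rep = (6.0 * sigma / gamma ** 4) \
        * (r * r * gamma * gamma + 2.0 * r * gamma * sigma + 2.0 * sigma * sigma) \
        * math.exp(gamma - gamma * r / sigma)
    disp = sigma ** 6 / (3.0 * r ** 3)
    return pref * (rep - disp)

def numeric_tail(r_lo, r_hi, rho, eps, gamma, sigma, n=20000):
    """(rho/2) * int_{r_lo}^{r_hi} 4*pi*s^2 exp6(s) ds by Simpson's rule (n even)."""
    h = (r_hi - r_lo) / n
    total = 0.0
    for i in range(n + 1):
        s = r_lo + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * 4.0 * math.pi * s * s * exp6(s, eps, gamma, sigma)
    return 0.5 * rho * total * h / 3.0
```

The finite-interval identity $u_\mathrm{LRC}(r_c) - u_\mathrm{LRC}(R) = (\rho/2)\int_{r_c}^{R} 4 \pi s^2 U_\mathrm{exp-6}(s)\,\mathrm d s$ avoids having to truncate the improper integral in the check.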
For the three-body dispersion, the cutoff criterion was interpreted to mean that all three interacting molecules had to be within the cutoff of one another. Simple mixing \cite{eyert96} with a mixing factor of 10\% was used to enforce and speed convergence. Because of the increased computational load, the induced dipoles were solved to within a tolerance of $3.4 \times 10^{-5}$ D per molecule. This is still a tighter tolerance than in many other reported simulations of polarizable IPs: Ren and Ponder \cite{ren03} report simulations on liquid water with their AMOEBA force field where the dipoles are solved to within $10^{-2}$ D precision; Paricaud {\em et al.}\cite{paricaud05} solve the induced dipoles to within $5 \times 10^{-5}$ D in their simulations of GCPM water. However, further relaxation of the tolerance is unnecessary, as the greater part of the simulation time is spent not on the self-consistent solution of the electrostatic field equations, but (about 70\%) on the evaluation of $\sum_{lmn} u_{\mathrm{disp},3}(l,m,n)$. This computation can be significantly sped up with shorter interaction cutoffs. Two types of bulk simulations were performed: a series of $NpT$-simulations to determine the density and constant-pressure heat capacity of the model fluid, and $NVT$-simulations to determine its radial distribution functions. Together with the calculation of the virial coefficients, these served to parametrize the model. All parameters except for $\nu$ were decided for the potential model lacking the $u_3$ term, which has no effect on the dimer binding energy, its geometry or the second virial coefficient. Selection among candidates of this pairwise dispersion model was effected by comparing densities from the $NpT$-simulations with experiment, although the long-range correction was not fully developed at this time.
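Before turning to the results, the simple-mixing update used for the dipole iterations can be illustrated on a scalar caricature of the induced-dipole equations, $\mu = \alpha (E^\mathrm{q} + T \mu)$. The function and the parameter values in the check are illustrative only.

```python
def solve_mu_mixing(alpha, e_q, t_coupling, mix=0.1, tol=1e-12, max_iter=100000):
    """Fixed-point iteration mu <- (1 - mix)*mu + mix*alpha*(e_q + t_coupling*mu).

    `mix` is the mixing factor (10% in the text); small values damp
    oscillations at the cost of slower convergence.
    """
    mu = 0.0
    for _ in range(max_iter):
        mu_next = alpha * (e_q + t_coupling * mu)
        if abs(mu_next - mu) < tol:
            return mu_next
        mu = (1.0 - mix) * mu + mix * mu_next
    raise RuntimeError("induced-dipole iteration did not converge")
```

For $|\alpha T| < 1$ the iteration converges to the exact fixed point $\mu = \alpha E^\mathrm{q} / (1 - \alpha T)$.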
Subsequent introduction of the long-range correction led to an increase of the computed densities, later corrected by the introduction of the three-body dispersion interaction. The final parameters, extracted from these tests, are listed in Table \ref{tab:param}. \begin{table} \caption{Interaction parameters and experimental observables for the GCPCDO model and, where appropriate, experimental reference values.} \footnotetext[1]{Ref. \onlinecite{bridge66}} \footnotetext[2]{Ref. \onlinecite{harmony79}} \footnotetext[3]{Ref. \onlinecite{granar86}} \footnotetext[4]{Ref. \onlinecite{buckingham63}} \footnotetext[5]{Ref. \onlinecite{harries70}} \label{tab:param} \begin{ruledtabular} \begin{tabular}{c r r} Parameter & GCPCDO & Exp. \\ \hline $\sigma_\mathrm O$ / \AA & $3.347$ & \\ $\sigma_\mathrm C$ / \AA & $3.193$ & \\ $\epsilon_\mathrm O$ / K & $67.72$ & \\ $\epsilon_\mathrm C$ / K & $71.34$ & \\ $\nu$ / K & $2.82 \times 10^4$ & \\ $\gamma_\mathrm O$ & $15.50$ & \\ $\gamma_\mathrm C$ & $15.50$ & \\ $\tau_\mathrm O$ / \AA & $0.6100$ & \\ $\tau_\mathrm C$ / \AA & $0.5819$ & \\ $q_\mathrm O$ / $e$ & $-0.3321$ & \\ $q_\mathrm C$ / $e$ & $0.6642$ & \\ $\alpha_\bot$ / \AA$^3$ & $1.95$ & $1.929$\footnotemark[1] \\ $\alpha_\parallel$ / \AA$^3$ & $4.05$ & $4.038$\footnotemark[1] \\ Bond length / \AA & $1.161$ & $1.160$\footnotemark[2] \\ & & $1.162$\footnotemark[3] \\ Quadrupole / ($e$ \AA$^2$) & $-0.90$ & $-0.85$\footnotemark[4] \\ & & $-0.90$\footnotemark[5] \\ & & $-0.96$\footnotemark[5] \\ \end{tabular} \end{ruledtabular} \end{table} \section{Results and discussion} \label{sec2} \subsection{Virial coefficients} In addition to the calculations made to parametrize the GCPCDO IP, in the investigations leading up to its formulation I also investigated some other CO$_2$ IPs.
The VLE envelopes of these models have all been thoroughly investigated in Ref.~\onlinecite{zhang05}, in which it was established that the ZD potential exhibits near-perfect experimental agreement in this property. While this excellent agreement might not be reproducible in every aspect,\cite{merker08} the critical temperature and density nonetheless cannot lie very far from their experimental counterparts, because the relative deviations between the coexistence densities reported in Ref. \onlinecite{zhang05} and Ref. \onlinecite{merker08} decrease and approach the experimental values when nearing the experimental critical temperature. Moreover, the variation in $B_2(T)$ between the models is not very great, except possibly for the TraPPE IP,\cite{potoff01} which, interestingly, is the non-polarizable, MSM-type IP that exhibits the best experimental agreement for both $B_2(T)$ and $B_3(T)$. For the GCPCDO IP, for which a good fit to $B_2(T)$ was one of the goals of the parametrization, the fit is very good, with the relative error never exceeding 5\%. The results are reported for a select number of temperatures in Table \ref{tab:b2}. \begin{table} \caption{Second virial coefficients for the ZD, MSM, EPM2, TraPPE and GCPCDO IPs, as well as experimental results, at a select number of temperatures. For the computed results, bracketed numbers indicate the estimated standard error of the mean in the last digit from the Mayer sampling\cite{singh04} Monte Carlo integration.} \label{tab:b2} \begin{ruledtabular} \begin{tabular}{l r r r r r r} & \multicolumn{3}{l}{$B_2$ / (cm$^3$ mol$^{-1}$)} & & & \\ $T$ / K & ZD & MSM & EPM2 & TraPPE & GCPCDO & Exp.
\\\hline $220.00$ & $-200.6(6)$ & $-204.7(6)$ & $-206.6(7)$ & $-222.0(7)$ & $-247.2(9)$ & $-247.50$\footnotemark[1] \\ & & & & & & $-247.52$\footnotemark[2]\\ $240.00$ & $-168.1(5)$ & $-171.5(6)$ & $-170.9(6)$ & $-183.5(6)$ & $-202.8(9)$ & $-202.83$\footnotemark[1] \\ & & & & & & $-202.13$\footnotemark[2] \\ $260.00$ & $-141.7(5)$ & $-145.0(5)$ & $-144.7(5)$ & $-153.7(6)$ & $-169.8(9)$ & $-168.92$\footnotemark[1] \\ & & & & & & $-168.27$\footnotemark[2] \\ $280.00$ & $-120.6(5)$ & $-123.3(5)$ & $-122.1(5)$ & $-129.3(5)$ & $-143.7(7)$ & $-142.70$\footnotemark[1] \\ & & & & & & $-142.11$\footnotemark[2] \\ $300.00$ & $-104.4(4)$ & $-106.2(4)$ & $-105.0(4)$ & $-110.3(5)$ & $-123.5(6)$ & $-121.70$\footnotemark[1] \\ & & & & & & $-121.35$\footnotemark[2] \\ $340.00$ & $-78.0(4)$ & $-79.5(4)$ & $-78.0(4)$ & $-81.9(4)$ & $-91.3(4)$ & $-90.57$\footnotemark[2] \\ $423.15$ & $-43.5(3)$ & $-44.8(3)$ & $-43.8(3)$ & $-45.6(4)$ & $-52.2(4)$ & $-51.25$\footnotemark[1] \\ $448.15$ & $-36.6(3)$ & $-37.5(3)$ & $-36.5(3)$ & $-37.7(3)$ & $-43.9(3)$ & $-43.51$\footnotemark[1] \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Ref. \onlinecite{holste87}} \footnotetext[2]{Ref. \onlinecite{dushek90}} \end{table} It does seem surprising that IPs that fare very well in reproducing VLE properties should fail so remarkably at reproducing the much ``simpler'' property $B_2(T)$. If $B_2(T)$ is overestimated, then a compensating underestimation of $B_3(T)$ seems very likely. Direct calculation of $B_3(T)$ for the computer models seems to confirm this. However, contrary to the case of $B_2(T)$, good-quality experimental data are very hard to find for $B_3(T)$. Different authors report widely different results over the same temperature range. In the narrow range around the critical temperature, however, both Holste {\em et al.} \cite{holste87} and Dushek {\em et al.} \cite{dushek90} report measurements that are at least in approximate mutual agreement.
In Table \ref{tab:b3}, these measurements are reported and compared with predictions from the IPs. As is evident, the pairwise additive IPs underestimate $B_3(T)$ across the whole temperature range. \begin{table} \caption{Third virial coefficients for the ZD, MSM, EPM2, TraPPE and GCPCDO IPs, as well as experimental results, at a select number of temperatures. For the computed results, bracketed numbers indicate the estimated standard error of the mean from the Mayer sampling \cite{singh04} Monte Carlo integration.} \label{tab:b3} \begin{ruledtabular} \begin{tabular}{l r r r r r r} & \multicolumn{3}{l}{$B_3$ / (cm$^6$ mol$^{-2}$)} & & & \\ $T$ / K & ZD & MSM & EPM2 & TraPPE & GCPCDO & Exp. \\ \hline $280.00$ & $2920(30)$ & $2940(30)$ & $2910(30)$ & $3080(40)$ & $5140(60)$ & $5636$\footnotemark[1] \\ & & & & & & $5165$\footnotemark[2] \\ $300.00$ & $2820(20)$ & $2860(20)$ & $2922(20)$ & $3060(30)$ & $4790(60)$ & $4927$\footnotemark[1] \\ & & & & & & $4753$\footnotemark[2] \\ $320.00$ & $2690(20)$ & $2760(20)$ & $2700(20)$ & $2870(20)$ & $4460(50)$ & $4423$\footnotemark[1] \\ & & & & & & $4360$\footnotemark[2]\\ $340.00$ & $2530(20)$ & $2570(20)$ & $2560(20)$ & $2680(20)$ & $4046(50)$ & $3996$\footnotemark[2] \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Ref. \onlinecite{holste87}} \footnotetext[2]{Ref. \onlinecite{dushek90}} \end{table} \subsection{Volumetric properties} To investigate the properties of the many-body potential with more than just three bodies, the density and the heat capacity at constant pressure, both readily extracted from the $NpT$ simulations, serve as indicators. These results are summarized in Table \ref{tab:rho} with experimental data. It is surprising that the agreement with experiment is considerably better at the higher densities. This increasing discrepancy between theory and experiment can be tentatively attributed to the fourth virial coefficient, $B_4(T)$. 
It is clear that, in order to explain these results, $B_4(T)$ (or possibly higher virial coefficients) must rise more quickly with temperature for the GCPCDO model than for its experimental counterpart. Unfortunately, quality experimental values of $B_4(T)$ are not known, but it is worth pointing out that if one applies the virial equation of state truncated after $B_3(T)$, the computed densities at 290 K and 310 K turn out to be $0.915 \pm 0.018$ g / cm$^3$ and $0.791 \pm 0.015$ g / cm$^3$, respectively, at 200 bar pressure. These values are indeed very close to the simulated values reported in Table \ref{tab:rho} and indicate that $B_4(T)$ is overestimated and close to zero. However, because at the higher pressure the truncated virial equation of state predicts densities {\em higher} than the simulated ones, it is clear that higher-order coefficients must be compensating for errors in $B_4(T)$. This is not surprising: at high densities, the steric effects become the dominant mode of interaction. Sadly, the precise causes of these deviations in the virial coefficients are unknown. Unlike for the other models investigated in this work, for GCPCDO we have come some way toward correcting $B_2(T)$ and $B_3(T)$ to their experimental values. However, we may safely say that the interactions of the CO$_2$ molecule are not as simple as one may at first suppose. Also shown in Table \ref{tab:rho} is the constant-pressure heat capacity, which was calculated from the fluctuation formula, \cite{hill56} \begin{equation} C_\mathrm p = \frac 1 {N k T^2} \left (\langle H^2 \rangle - \langle H \rangle^2 \right ) \end{equation} where $H = U + p V + 5 k T / 2$ is the enthalpy, the last term being the classical kinetic contribution of a linear rigid body; $k$ is the Boltzmann constant, $p$ the pressure, $U$ the potential energy, $V$ the volume, $N$ the number of molecules and $T$ the temperature.
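As a minimal sketch (not the analysis code used here), the fluctuation formula amounts to a variance estimate over sampled enthalpies; the check below uses reduced units with $k = 1$.

```python
def heat_capacity_p(h_samples, n_molecules, temperature, k=1.380649e-23):
    """Constant-pressure heat capacity per molecule from enthalpy fluctuations:
    C_p = (<H^2> - <H>^2) / (N k T^2)."""
    n = len(h_samples)
    mean_h = sum(h_samples) / n
    var_h = sum((h - mean_h) ** 2 for h in h_samples) / n
    return var_h / (n_molecules * k * temperature ** 2)
```

In practice the samples would be the block-averaged enthalpies from the $NpT$ chains described above.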
For a completely fair comparison with the experimental values, the internal vibrational degrees of freedom should also be included. Assuming harmonic behavior, for each normal mode of frequency $\nu$ this quantized harmonic contribution to the heat capacity is \begin{equation} C_\mathrm{p,h} = k \left (\frac {h \nu} {k T} \right )^2 \left (\exp \left ( \frac {h \nu} {k T} \right ) - 1 \right )^{-2} \exp \left (\frac {h \nu} {k T} \right ) \label{eq:hcapcor} \end{equation} where $h$ is the Planck constant. Taking into account the experimental frequencies \cite{martin32,ouazzany87} of the four harmonic normal modes of the CO$_2$ molecule, this term is added to $C_\mathrm{p}$ and reported as the corrected values in Table \ref{tab:rho}. Not surprisingly, this expression compares favorably with the experimental $C_\mathrm p$ extrapolated to vanishing density.\cite{webbookfluid} For instance, the experimentally ascertained intramolecular contribution is 5.74 J / (K mol) at 250 K, whereas from Eq.~(\ref{eq:hcapcor}) one has 5.78 J / (K mol). At 310 K, the experiments indicate 8.58 J / (K mol), and from Eq.~(\ref{eq:hcapcor}) we have 8.68 J / (K mol). It is computationally too demanding, at present, to include the quantized vibrations in the bulk simulation of the GCPCDO IP, and the approximation of separable internal and external degrees of freedom is expected to be fair. The general overestimation of the heat capacity, even before the correction for intramolecular degrees of freedom, is due, at least in part, to the assumption of classical translational and rotational degrees of freedom. Especially at high density, free rotation and translation are not possible, and the rotational and translational degrees of freedom are in effect partly quantized librational modes. Unlike the intramolecular degrees of freedom, these are highly coupled and there exists no viable computational approximation for their contribution to the heat capacity.
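Eq.~(\ref{eq:hcapcor}) is the Einstein (harmonic-oscillator) heat capacity, and a short Python sketch summed over the four CO$_2$ fundamentals reproduces the magnitudes quoted above. The wavenumbers below are approximate textbook values (bend doubly degenerate), not necessarily those of the cited references.

```python
import math

H_PLANCK = 6.62607015e-34  # J s
K_BOLTZ = 1.380649e-23     # J / K
R_GAS = 8.31446            # J / (K mol)
C_LIGHT = 2.99792458e10    # cm / s

def cp_harmonic(wavenumbers_cm, temperature):
    """Molar vibrational heat capacity from Eq. (hcapcor), summed over
    normal modes given as wavenumbers in cm^-1."""
    total = 0.0
    for wn in wavenumbers_cm:
        x = H_PLANCK * C_LIGHT * wn / (K_BOLTZ * temperature)
        total += R_GAS * x * x * math.exp(x) / math.expm1(x) ** 2
    return total

# Approximate CO2 fundamentals: bend (twice), symmetric and asymmetric stretch
CO2_MODES = [667.0, 667.0, 1333.0, 2349.0]
```

With these assumed frequencies the sum lands near the 5.7 and 8.6 J / (K mol) figures quoted in the text for 250 K and 310 K, respectively.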
The very large overestimation of the heat capacity at 310 K and 200 bar cannot, however, be attributed to this effect alone. Moreover, at this state point the discrepancy in density between real CO$_2$ and GCPCDO is so large that a more fair comparison (as relates to $C_\mathrm p$) is with the experimental $C_\mathrm p$ at $0.79$ g / cm$^3$ density, which is \cite{webbookfluid} $119$ J / (K mol), not too far off from the computed value. \begin{table} \caption{Calculated and experimental fluid properties at 200 and 800 bar pressure and temperatures of 250 to 310 K. Heat capacities assume classical contribution of $3k$ for the rotational and translational degrees of freedom. The corrected heat capacities include the heat capacity of the quantized internal degrees of freedom within the harmonic approximation. For the calculated results, numbers in parentheses indicate the estimated standard error of the mean in the last digit.} \label{tab:rho} \begin{ruledtabular} \begin{tabular}{ll lr lcr} $p$ / bar & $T$ / K & \multicolumn{2}{l}{$\rho$ / (g / cm$^3$)} & \multicolumn{3}{l}{$C_p$ / (J / [K mol])} \\ & & Calc. & Exp.\footnotemark[1] & Calc. & Corr. & Exp.\footnotemark[1] \\ \hline 200 & 250.0 & 1.100(4) & 1.105 & 96(4) & 102(4) & 83.3 \\ 200 & 270.0 & 1.010(5) & 1.032 & 87(3) & 95(3) & 86.0 \\ 200 & 290.0 & 0.902(7) & 0.951 & 91(3) & 99(3) & 90.4 \\ 200 & 310.0 & 0.79(1) & 0.856 & 109(5) & 118(4) & 97.7 \\ 800 & 250.0 & 1.216(2) & 1.211 & 76(3) & 82(2) & 74.6 \\ 800 & 270.0 & 1.159(2) & 1.165 & 71(2) & 78(2) & 73.7 \\ 800 & 290.0 & 1.107(2) & 1.118 & 72(2) & 80(2) & 73.0 \\ 800 & 310.0 & 1.057(2) & 1.073 & 68(2) & 77(2) & 72.4 \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Ref. \onlinecite{webbookfluid}} \end{table} No IP for the fluid can be deemed satisfactory if unable to reproduce the experimental fluid structure. 
Accordingly, at the density and temperature of the neutron-diffraction experiments of Cipriani {\em et al.},\cite{cipriani98} the atomic pair distribution functions (PDF), $g(r)$, have been computed, and these are shown in Figures \ref{fig:rdf} and \ref{fig:rdf2} for two different thermodynamic states. What is experimentally ascertained, however, is not the individual, atomically resolved $g(r)$, but the superimposed effect from all atomic scatterers. Taking account not only of the four times greater abundance of C-O and O-O vectors than of C-C vectors between molecules, but also of the different propensity toward neutron scattering, the following formula was used to calculate the superimposed PDF, \cite{vantricht84} \begin{eqnarray} G(r) & = & 0.403 g_\mathrm{OO}(r) + 0.464 g_\mathrm{CO}(r) + 0.133 g_\mathrm{CC}(r) \label{eq:scatter} \end{eqnarray} In terms of the position and number of peaks, these $G(r)$ are seen to be in reasonable-to-good agreement with the experimental results. \begin{figure} \includegraphics{rdf.eps} \caption{The individual atomic PDFs, $g(r)$, of Eq.~(\ref{eq:scatter}) for carbon-carbon (long-dashed line), carbon-oxygen (short-dashed line) and oxygen-oxygen (dotted line) at 312 K and 0.83 g / cm$^3$ for the GCPCDO IP as well as their superposition (full line), weighted by occurrence and scattering propensity, $G(r)$. Squares are experimental results from Ref. \onlinecite{cipriani98}. For clarity, no intramolecular contributions are shown for the computed results.} \label{fig:rdf} \end{figure} \begin{figure} \includegraphics{rdf2.eps} \caption{Same as Figure \ref{fig:rdf} but for 1.09 g / cm$^3$ and 240 K.} \label{fig:rdf2} \end{figure} At 240 K and 1.09 g / cm$^3$, the carbon-oxygen distribution function clearly shows more structure at short range, than at 312 K and 0.83 g / cm$^3$, where the lack of orientational correlation in the fluid is also apparent in how quickly the atomic carbon-oxygen and oxygen-oxygen PDFs decay to unity. 
Still, the carbon-carbon PDF exhibits a slight peak at around 7.5~\AA, indicating a weak second coordination shell, albeit of random order in molecular orientation. The PDFs indicate that the fluid is slightly overstructured at the higher density, where the first peak is overestimated. The better agreement for the computed PDF at the low density is most likely due to the overall lesser contributions from many-body effects at this density. For the pair potential used in Ref. \onlinecite{cipriani98}, the first peak is overestimated at both thermodynamic states. Last, it should be pointed out that in a fluid with flexible bonds, a general broadening of the peak structure is expected. This effect is at least partly responsible for the deviation seen at the very short distances, where the internal scattering vectors contribute. In the rigid model, they are $\delta$-functions and have been omitted for clarity. \subsection{Clusters} Having established that the weakness of the GCPCDO model lies primarily in its $B_4(T)$, {\em i. e.} the four-body interaction, we turn to properties of the IP for which at most three bodies contribute. Clusters like these offer excellent tests of the model, due to the availability of quantum-mechanical reference calculations. They also allow us to pinpoint more clearly the role of many-body effects in the interaction potential. \subsubsection{Dimer and trimers} Both experiment \cite{barton78, walsh87} and {\em ab initio} simulation \cite{steinebrunner98, bukowski99, bock00} agree that the equilibrium dimer structure is of $C_{2h}$ symmetry, with the {\em ab initio} simulations indicating that there is a saddle point of $C_{2v}$ symmetry. The GCPCDO IP reproduces these two dimer states very well, as shown in Table \ref{tab:dimer}.
As for the geometry of the dimer configurations (see Figure \ref{fig:dimer}), it is neither better nor worse than the simpler ZD IP,\cite{zhang05} but when it comes to the binding energy of the two states, it is markedly superior when judged against the {\em ab initio} IPs: the binding energy is underestimated by less than 4\% for the minimum and overestimated by less than 1\% for the saddle-point, whereas the ZD IP underestimates the binding energy in both cases by about 15--20\%. This gross underestimation of the dimer binding energy helps explain the overestimation of $B_2(T)$ for the ZD model ({\em vide supra}) but is nevertheless surprising because for non-polarizable potentials the general trend is an overestimation of dimer binding energies. \begin{figure} \includegraphics{co2-dimer-aip.eps} \caption{Schematic illustration of the dimer with definitions of angles and distances. Each molecule is represented by a stick. All atoms lie in the same plane.} \label{fig:dimer} \end{figure} \begin{table} \caption{Equilibrium geometry and well-depth energy of the CO$_2$ dimer for the GCPCDO model, the ZD model, and BUK, the angular {\em ab initio} potential energy surface of Bukowski {\em et al}.\cite{bukowski99} $U$ refers to the potential well-depth at the specific conformation. See Figure \ref{fig:dimer} for the definitions of the geometric quantities.} \label{tab:dimer} \begin{ruledtabular} \begin{tabular}{r l r r r r} & & GCPCDO & ZD & BUK\footnote{Ref. \onlinecite{bukowski99}} & Exp.\footnote{Ref. 
\onlinecite{walsh87}} \\ \multicolumn{2}{l}{Minimum ($C_{2h}$)} & & & & \\ & $U$ / K & $-675.4$ & $-548.7$ & $-696.8$ & \\ & $\theta_1, \theta_2$ / deg & $55.5$ & $56.5$ & $59.0$ & $57.96$ \\ & $R$ / \AA & $3.64$ & $3.64$ & $3.54$ & $3.60$ \\ \multicolumn{2}{l}{Saddle-point ($C_{2v}$)} & & & & \\ & $U$ / K & $-596.6$ & $-508.7$ & $-593.2$ & \\ & $\theta_1$ / deg & $90.0$ & $90.0$ & $90.0$ & \\ & $\theta_2$ / deg & $0.0$ & $0.0$ & $0.0$ & \\ & $R$ / \AA & $4.16$ & $4.17$ & $4.14$ & \\ \end{tabular} \end{ruledtabular} \end{table} Because the GCPCDO model is so successful at capturing the dimer binding energy, it is interesting to test it across a broader range of conformations. Hence, I show in Figure \ref{fig:dissoc} the potential energy function of this model compared to the accurate data of the angular fit of the symmetry-adapted perturbation theory\cite{jeziorski94} calculations of Bukowski {\em et al.},\cite{bukowski99} the BUK IP, for radial displacements. Both the energy-minimal pathway separating the two molecules and the path along the $C_{2v}$ transition state are shown. For the energy-minimal pathway, the optimized angles of the two molecules at each separation are shown in Table \ref{tab:dissoc}. The two molecules, for both the GCPCDO and BUK IPs, prefer a slipped-parallel conformation at close range, but eventually adopt the T-shaped geometry of the minimum at long range. The transition is noticeable as a slight trough in the dissociation curve at around 4 \AA\ and is more rapid for the BUK IP, with an earlier onset. Another interesting property of the model, one that cannot be probed with the BUK IP, is the total dipole of the $C_{2v}$ configuration. Because the electric field gradients are very inhomogeneous close to the molecule, it might be suspected that only allowing the center atom to polarize artificially deflates the induced dipole, thereby introducing errors in the short-range interaction. 
For GCPCDO, the dipole is predicted to be 0.19 D at the transition-state separation. Running calculations with Gaussian 03,\cite{gaussian03} the suspicion is confirmed: a calculation of the dipole moment in the identical configuration at the CISD/aug-cc-pVDZ level of theory predicts a dipole moment of 0.21 D. The dipole moment is hence underestimated by 10\% in the transition-state configuration. \begin{figure} \includegraphics{dissoc.eps} \caption{Dependence of energy of interaction, $U(R)$, on separation, $R$, for two different paths of approach. Squares are for the energy-minimal path of the GCPCDO IP and the full line is the analogous result for the BUK IP. Circles are for the $C_{2v}$ conformation of the GCPCDO dimer, and the dashed line is for the BUK result.} \label{fig:dissoc} \end{figure} \begin{table} \caption{The optimum values of the angles $\theta_1$ and $\theta_2$ as a function of the separation $R$ between two molecules as predicted by the GCPCDO IP and the BUK IP. The angles are defined geometrically in Figure \ref{fig:dimer}.} \label{tab:dissoc} \begin{ruledtabular} \begin{tabular}{l r r r r} & \multicolumn{2}{l}{GCPCDO} & \multicolumn{2}{l}{BUK} \\ $R$ / \AA & $\theta_1$ / deg & $\theta_2$ / deg & $\theta_1$ / deg & $\theta_2$ / deg \\ \hline 3.0 & $71.7$ & $71.7$ & $66.4$ & $66.4$ \\ 3.3 & $62.3$ & $62.3$ & $62.2$ & $62.2$ \\ 3.5 & $58.0$ & $58.0$ & $59.5$ & $59.5$ \\ 3.7 & $54.5$ & $54.5$ & $57.0$ & $57.0$ \\ 3.8 & $53.0$ & $53.0$ & $46.1$ & $65.2$ \\ 3.9 & $42.1$ & $61.3$ & $37.1$ & $71.1$ \\ 4.0 & $30.8$ & $70.2$ & $28.6$ & $75.9$ \\ 4.1 & $19.6$ & $77.9$ & $18.9$ & $80.9$ \\ 4.2 & $3.7$ & $88.0$ & $0.0$ & $90.0$ \\ 4.5 & $0.0$ & $90.0$ & $0.0$ & $90.0$ \\ \end{tabular} \end{ruledtabular} \end{table} Because of its many-body nature, it is interesting to test the GCPCDO IP on the simplest cluster for which many-body effects contribute, {\em i. e.} the trimer. Consequently, energy-minimized structures of the trimer have been located. 
The two most stable conformations are shown in Figures \ref{fig:tr2} and \ref{fig:tr1}; the specific data on each are summarized in Table \ref{tab:trimers}. Both of these trimer conformations have been observed spectroscopically, with the planar $C_{3h}$ conformation being slightly more abundant.\cite{weida95,weida96} In terms of relative energies, however, neither GCPCDO nor BUK predicts the right ordering, but there are two general remarks to be made. The first is that the inclusion of many-body effects clearly levels the difference between the two states, the difference in the GCPCDO prediction being less than $0.1$ K.\footnote{The precise value of this difference is uncertain because of the numerical minimization involved, but my program code indicates that the $C_2$ conformation lies $0.03$ K above the $C_{3h}$ one.} That many-body effects would alleviate the problem was hinted at already by Bukowski and coworkers \cite{bukowski99} in their discussion of this problem, and they argued, using single-point calculations from higher-order quantum chemical theory, that this was the case. Second, the possible role of the zero-point vibrational quanta has to be kept in mind. A first-order estimate of this effect is provided by the harmonic zero-point energy. Indeed, as indicated by Bukowski and coworkers,\cite{bukowski99} inclusion of this energy for the BUK IP decreases the difference between the states. Carrying out the same analysis for the GCPCDO IP, however, I find that theory is brought into qualitative agreement with observation. As discussed by Bukowski and collaborators,\cite{bukowski99} the harmonic approximation is very strained in the CO$_2$ trimer, and a correct evaluation of the zero-point vibrational energy necessitates a numerical solution of the Schr\"odinger equation. Future code development will remedy this situation. Even so, I believe that these results are indicative of the qualities of the GCPCDO IP. 
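The harmonic zero-point energies used in this comparison follow from the eigenvalues of the mass-weighted Hessian, $E_0 = \sum_i \hbar\omega_i/2$ over the vibrational modes. A minimal sketch of that procedure in Python (the function and its toy inputs are illustrative, not the actual GCPCDO code):

```python
import numpy as np

# Physical constants (SI): hbar and Boltzmann's constant, used to
# express the zero-point energy in kelvin, as in Table \ref{tab:trimers}.
HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K

def harmonic_zpe_kelvin(hessian, masses):
    """Harmonic zero-point energy, E0 = sum_i hbar*omega_i/2, in K.

    hessian : (3N, 3N) array of second derivatives of U, in J/m^2.
    masses  : length-N array of atomic masses, in kg.
    """
    # Mass-weight the Hessian: F_ij = H_ij / sqrt(m_i m_j), with each
    # atomic mass repeated for its x, y and z coordinates.
    m = np.repeat(masses, 3)
    f = hessian / np.sqrt(np.outer(m, m))
    w2 = np.linalg.eigvalsh(f)   # squared angular frequencies
    # Keep genuine vibrations: drop near-zero translational/rotational
    # modes and small negative eigenvalues from numerical noise.
    w = np.sqrt(w2[w2 > 1e-6 * w2.max()])
    return 0.5 * HBAR * np.sum(w) / KB
```

As noted above, the numerical Hessian carries some uncertainty, which is why the GCPCDO values of $E_0$ in Table \ref{tab:trimers} are rounded.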
\begin{figure} \includegraphics{co2-tr2-aip.eps} \caption{Schematic illustration of the $C_{3h}$ trimer. All atoms lie in the same plane. Molecules A, B and C are all identical by symmetry.} \label{fig:tr2} \end{figure} \begin{figure} \includegraphics{co2-tr1-aip.eps} \caption{Schematic illustration of the $C_2$ trimer. Molecules A and B are identical by symmetry and the distance between their centers is $R$. The perpendicular distance from the center of the line joining their centers to the center of molecule C is $P$.} \label{fig:tr1} \end{figure} \begin{table} \caption{Energy well depth, $U$, equilibrium geometry and harmonic zero-point vibrational energy, $E_0$, for the two most stable trimers predicted by the GCPCDO IP and the BUK IP. Because of uncertainties in the numerical Hessian, $E_0$ is rounded for the GCPCDO. The geometric variables are defined in Figure \ref{fig:tr1} for the $C_2$ minimum and in Figure \ref{fig:tr2} for the $C_{3h}$ minimum.} \label{tab:trimers} \begin{ruledtabular} \begin{tabular}{r l r r} & & GCPCDO & BUK\footnote{Ref. \onlinecite{bukowski99}} \\ \hline \multicolumn{2}{l}{$C_2$ minimum} & & \\ & $U$ / K & $-1849.4$ & $-1889.8$ \\ & $E_0$ / K & $359$ & $343.6$ \\ & $R$ / \AA & $3.73$ & $3.60$ \\ & $P$ / \AA & $2.96$ & $2.89$ \\ & $\alpha$ / deg & $12.1$ & $10.9$ \\ & $\beta$ / deg & $53.7$ & $53.4$ \\ & $\gamma$ / deg & $152.8$ & $157.6$ \\ \multicolumn{2}{l}{$C_{3h}$ minimum} & & \\ & $U$ / K & $-1849.4$ & $-1819.6$ \\ & $E_0$ / K & $318$ & $304.2$ \\ & $R$ / \AA & $4.07$ & $4.04$ \\ & $\beta$ / deg & $40.9$ & $39.3$ \\ \end{tabular} \end{ruledtabular} \end{table} \subsubsection{Water complexes} Because of its transparent physical form, the GCPCDO IP can, through the adoption of suitable ``combining rules'', be interfaced with other IPs. 
As a first test of the feasibility of this approach, I have calculated the binding energy and molecular geometry of the \ce{[H2O-CO2]} complex using the successful GCPM water \cite{paricaud05} for the water moiety. In addition to the combining rules of Eqs~(\ref{eqn:mix1}) and (\ref{eqn:mix2}), the $\gamma$-parameters were calculated as \begin{equation} \gamma_{xy} = \frac 1 2 (\gamma_x + \gamma_y)~. \end{equation} For comparison purposes, the complexation of ZD CO$_2$ and TIP3P water\cite{jorgensen83} serves as an indicator of the effect of neglected polarization. These results are summarized in Table \ref{tab:cmplx}. It is important to point out that the potential energy surface of this complex is very shallow near the minimum, so precise geometries are difficult to obtain, and two different {\em ab initio} minimum energy geometries have been reported in the literature. Danten and collaborators \cite{danten05} find that the energy minimum has $C_s$ symmetry from counterpoise-corrected MP2 calculations with the aug-cc-pVTZ basis set. However, this is contested both by experiment \cite{peterson84, tso85} and by higher-level {\em ab initio} counterpoise-corrected calculations at the CCSD(T) level of theory with the same basis set, which predict $C_{2v}$ symmetry of the complex.\cite{garden06} It is interesting to note that the MP2 level of theory often overestimates correlation effects, consistent with the fact that simple pair potentials, such as the ZD \cite{zhang05} and TIP3P \cite{jorgensen83} IPs under the Lorentz-Berthelot mixing rules, predict a $C_{2v}$ minimum for the complex. The intermixing of the GCPCDO IP with GCPM water \cite{paricaud05} produces two equivalent global minima of $C_s$ symmetry, but the difference in energy between them and the transition state of $C_{2v}$ symmetry connecting them is only about 7 K, {\em i. e.} less than 0.1\% of the total interaction energy, and should not be taken as a great drawback of the model. 
It is a greater error that the total binding energy is underestimated by about 25\%, a fault shared by the ZD/TIP3P combination as well. This is surprising because, in general, non-polarizable IPs parametrized against bulk properties tend to overestimate cluster binding energies. The explanation can partly be found in the rigid geometries of the GCPCDO/GCPM and ZD/TIP3P moieties. The results of Danten and coworkers \cite{danten05} indicate that the CO$_2$ molecule is bent slightly upon complexation, thus further inducing a dipole moment which increases the energy of interaction. However, the greater part of the explanation is to be found in the approximation that only the central atom polarizes. A QM calculation at the CISD/aug-cc-pVDZ level of theory for the $C_s$ minimum of the GCPM/GCPCDO complex indicates that while the total dipole moment of 2.19~D is in good agreement with the accurate value of 2.21 D, the individual components are poorly reproduced.\footnote{It must be remembered, however, that a large part of this dipole moment is due to the ground-state charge distribution of the water molecule, captured already by the partial charges in the GCPM water.} The QM calculation indicates that for the complex as a whole, the electric dipole moment along the $C_2$ axis of the water molecule is 2.16 D, and that the perpendicular component is 0.465 D in this particular configuration. The GCPM/GCPCDO pairing, however, gives 2.19 D and 0.14 D, respectively, for these components. Further discrepancy is expected at the closer range indicated as the equilibrium distance by the {\em ab initio} calculations. In summary, as far as geometry and energy are concerned, the predictions for the GCPCDO/GCPM complex can hardly be considered an improvement over the simple empirical IPs when judged against {\em ab initio} calculations. 
\begin{table} \caption{Minimum energy, $U$, and geometry of the \ce{[CO2-H2O]} complex as predicted by the mixing of GCPCDO and GCPM polarizable IPs, the ZD and TIP3P non-polarizable IPs, or {\em ab initio} calculation by Danten {\em et al.} at the MP2/aug-cc-pVTZ level of theory, or by Garden {\em et al.} at the CCSD(T)/aug-cc-pVTZ level of theory. The binding energy of the $C_{2v}$ transition state predicted by the GCPCDO model is reported for completeness. For the definition of the geometry, see Figure \ref{fig:cmplx}.} \label{tab:cmplx} \begin{ruledtabular} \begin{tabular}{l r r r r r} & \multicolumn{2}{l}{GCPCDO/GCPM} & ZD/TIP3P & Danten\footnote{Ref. \onlinecite{danten05}} & Garden\footnote{Ref. \onlinecite{garden06}} \\ & $C_s$ minimum & $C_{2v}$ tr. state & & & \\ \hline $U$ / K & $-1036.4$ & $-1029.5$ & $-1048.1$ & $-1308.4$ & $-1358.7$ \\ $R$ / \AA & $3.06$ & $3.06$ & $2.93$ & $2.77$ & $2.81$ \\ $\phi$ / deg & $17.0$ & $0.0$ & $0.0$ & $13.9$ & $0.0$ \\ $\alpha$ / deg & $87.4$ & $0.0$ & $0.0$ & $88.0$ & $0.0$ \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics{co2-h2o-aip.eps} \caption{Schematic illustration of the structure of the monohydrate complex. The $C_2$ axis of the water molecule is marked. All atoms lie in the same plane.} \label{fig:cmplx} \end{figure} \begin{table} \caption{Characteristics of the global minima of the \ce{[(CO2)2-H2O]} complex predicted by the GCPCDO/GCPM IPs, the ZD/TIP3P IPs or the {\em ab initio} results of Danten {\em et al.} at the MP2/aug-cc-pVTZ level of theory. The oxygen of the water molecule is placed at the origin of a Cartesian coordinate system in which the $y$-axis coincides with the $C_2$ axis of the water molecule with the positive direction pointing toward the hydrogen atoms, the $z$-axis is normal to the molecular plane and the $x$-axis is orthogonal to the $y$-- and $z$-axes. 
The centers of mass of the CO$_2$ molecules are given with respect to this coordinate system, and the orientation of each molecule is expressed in the Euler angles $\alpha$, $\beta$ and $\gamma$ which denote counterclockwise rotation around the $x$--, $y$-- and $z$-axes, respectively. $U$ is the energy of that conformation. Indices $1$ and $2$ denote the two CO$_2$ molecules. The work in Ref. \onlinecite{danten05} used flexible molecules. The coordinates reported here are rounded over these differences.} \label{tab:cmplx2} \begin{ruledtabular} \begin{tabular}{l r r r} & GCPCDO/GCPM & ZD/TIP3P & Danten\footnote{Ref. \onlinecite{danten05}} \\ \hline Point group & $C_s$ & $C_2$ & $C_s$ \\ $U$ / K & $-2738.5$ & $-5074.0$ & $-2918.7$ \\ $x_1$ / \AA & $0.00$ & $-1.43$ & $0.00$ \\ $y_1$ / \AA & $-2.61$ & $-2.11$ & $-2.52$ \\ $z_1$ / \AA & $1.58$ & $-0.85$ & $1.11$ \\ $x_2$ / \AA & $0.00$ & $1.43$ & $0.00$ \\ $y_2$ / \AA & $0.79$ & $-2.11$ & $0.42$ \\ $z_2$ / \AA & $3.80$ & $0.85$ & $3.80$ \\ $\alpha_1$ / deg & $53.7$ & $-62.8$ & $61.8$ \\ $\beta_1$ / deg & $90.0$ & $86.0$ & $90.0$ \\ $\gamma_1$ / deg & $90.0$ & $-82.3$ & $90.0$ \\ $\alpha_2$ / deg & $-69.1$ & $62.8$ & $-55.0$ \\ $\beta_2$ / deg & $90.0$ & $86.0$ & $90.0$ \\ $\gamma_2$ / deg & $90.0$ & $82.3$ & $90.0$ \\ \end{tabular} \end{ruledtabular} \end{table} Despite this small shortcoming of the predictions for the \ce{[CO2-H2O]} complex, it is still worthwhile to consider the \ce{[(CO2)2-H2O]} complex, also studied by Danten and coworkers,\cite{danten05} because here the true many-body interactions start to play a role. Moreover, contrary to the case of the CO$_2$ trimer, where neither of the moieties is dipolar, the water molecule carries a substantial dipole moment and the electronic induction effects are expected to be more pronounced. It must be pointed out that despite this being a three-body system, no Axilrod-Teller potential has been applied. 
The reason is that while GCPM water has a known polarizability, it has no Axilrod-Teller coefficient. Rather than impose one on the model, I have decided to judge it fairly according to its own merits, and these exclude a three-body dispersion interaction. The results are shown in Table \ref{tab:cmplx2}. The agreement is very good in terms of energy. Contrary to the \ce{[CO2-H2O]} complex, for the \ce{[(CO2)2-H2O]} complex there is no appreciable underestimation of the binding energy. This can probably be attributed to the larger fraction of CO$_2$-CO$_2$ interactions in this complex. Moreover, both Danten {\em et al.} \cite{danten05} and I find that the minimum is of $C_s$ symmetry, but for the ZD\cite{zhang05}/TIP3P\cite{jorgensen83} model, the global minimum is of $C_2$ symmetry in a conformation reminiscent of the $C_2$ trimer. A local minimum of $C_2$ symmetry is predicted by the GCPCDO/GCPM\cite{paricaud05} model at about $208$ K above the global minimum. Clearly, polarization changes the relative stability of these two conformations in favor of the $C_s$ one, and this is captured both by the MP2 calculations of Danten {\em et al.} \cite{danten05} and by the present work. The peculiarities of the global minima predicted by the IPs are given in Table \ref{tab:cmplx2}. It is very clear that the many-body effects are responsible for the altered symmetry of the equilibrium structure. Also, the binding energy is vastly overestimated by the non-polarizable IP pair. \section{Concluding remarks} \label{sec3} A new, polarizable IP for CO$_2$ has been introduced and shown to be in excellent agreement with dimer properties and in excellent-to-passable agreement with the bulk phase. Classical non-polarizable IPs that reproduce bulk phase properties well have been shown to be inadequate for the description of $B_2(T)$ and also $B_3(T)$. 
That many-body effects should not be ignored for the CO$_2$ molecule is evidenced by the qualitative experimental agreement that is achieved for the stability of the two trimer conformations, something which not even the {\em ab initio} potential energy surface of Bukowski {\em et al.} \cite{bukowski99} manages. Further corroboration of the model is provided by calculations on the water complexes, where especially the \ce{[(CO2)2-H2O]} complex is in good agreement with {\em ab initio} calculation at the MP2/aug-cc-pVTZ level of theory.\cite{danten05} Absence of polarization changes the symmetry of this complex. A tough test for all of the molecular CO$_2$ potentials investigated is the prediction of the virial coefficients. The GCPCDO model handles $B_2(T)$ and $B_3(T)$, but only by design, and fails at $B_4(T)$ and up. This can still be considered an improvement over the classical, non-polarizable models, for which most probably none of the virial coefficients beyond the ideal gas term are in agreement with experiment. Clearly, the interaction among CO$_2$ molecules is more complicated than a simple pairwise sum over atomic charges and Lennard-Jones terms. However, it is also more complicated than self-consistent solution of induced dipoles and triple-dipole dispersion interaction. Nevertheless, for systems of three bodies or fewer, it seems to be highly satisfactory, meaning that the remaining errors relate to many-body effects beyond the third. The precise causes of the remaining errors in the IP remain a matter of speculation, and future computer experiments may reveal the mechanisms at work. In this light it must be kept in mind that, even if the agreement of the GCPCDO model with experiment rests, with a few exceptions, more on qualitative concordance than on many digits of precision, this is the first many-body molecular IP for the CO$_2$ molecule to be developed, and many areas of inquiry remain to be explored. 
One obvious and immediate improvement to the model would be to distribute the induced dipole over all atoms for a better short-range description of the induced electrostatics. The Fortran 90 source code for the GCPCDO energy subroutines is available upon request. \begin{acknowledgments} The simulations were performed on the C3SE computing resources. The author also wishes to express his gratitude to Prof. Robert Bukowski for granting him the BUK subroutines and to Prof. Sture Nordholm for insightful comments on the manuscript. Serious errors were noted by the anonymous reviewers and the author is highly indebted to both. \end{acknowledgments}
\section{Introduction} Research activities towards the understanding of how to characterize and control matter away---especially very far away---from equilibrium have experienced rapid growth during the past few years. New advances on the experimental front call for the theoretical physics community to rise to a computational challenge, notably in the investigation of out-of-equilibrium cold atomic gases, where minute control and precision measurements are now possible within a time-scale where quantum coherence is maintained, see for example \cite{relax}. The importance of analytical tools for handling quantum evolution away from equilibrium cannot be overstated. They are intertwined with the first principles of quantum physics, which underlie diverse fundamental phenomena, such as thermalization of an expanding gas or quantum plasma, the appearance of new critical behaviour in exotic materials, and eventually the dynamics of the early universe. Even today, quantum evolution of the many-body system remains a hard computational problem that has no general solution or a systematic self-consistent approach. In the presence of interactions, only a limited number of analytical approaches exist that allow for exact solutions in certain regimes. AdS/CFT correspondence \cite{Maldacena:1997re} is one of them. It has provided several tractable examples and insights into quenched systems, see, for example, \cite{AdSquench} and references therein. Imposing certain restrictions on the parameters of the system may provide an additional instance in which non-equilibrium dynamics admits a self-consistent solution. The adiabatic process is such an example. In that case the external change that perturbs the system out of equilibrium, {\it e.g.,}\ a change in the external magnetic field coupled to a spin chain, is sufficiently slow compared to the characteristic relaxation time, and the full dynamical solution can be constructed perturbatively. 
In the opposite limit, the external environment is assumed to change abruptly at some point in time, but otherwise remains constant. The process can be approximated by Dirichlet boundary conditions on the field variables at the instant of the sudden change. Studies of this class of dynamics, termed the \emph{quantum quench}, were pioneered in the seminal paper \cite{Cardy1}. The state of the system right after the quench is highly excited, and its subsequent relaxation is of particular interest. Many results have been obtained so far, particularly for integrable models and in low dimensions, such as $1+1$ dimensions \cite{oned}. In many of them, equilibrium physics is found to be remarkably well described by the generalized Gibbs ensemble \cite{conjecture}. In \cite{Sotiriadis:2010si} Sotiriadis and Cardy paved the way for handling quantum quenches in higher-dimensional interacting field theories. They took advantage of the large-$N$ approximation \cite{Weinberg:1997rv} to investigate the quantum evolution of the scalar $O(N)$ vector model whose mass and $\phi^4$ coupling undergo a sudden change. To leading order in the large-$N$ expansion this model is effectively free, and the only remnant of interaction is encoded in the time-dependent effective mass of the field. In particular, the coupling between modes of different momenta is suppressed and thermalization does not occur. The key insight of \cite{Sotiriadis:2010si} rests on two assumptions: (i) the effective mass approaches a stationary value at sufficiently large times and, (ii) its evolution can be approximated by a jump. These provide a dramatic simplification of the quenched dynamics problem. In \cite{Hung:2012zr} the method was used to study the impact of quantum quenches on the effective potential of a $\phi^6$ field theory model. It was shown that the phase structure is highly sensitive to the details of the quantum quench. 
In particular, new phases, corresponding to extra minima of the effective potential, emerge, and the critical value of the coupling constant, beyond which the theory is unstable, is modified. Furthermore, symmetry breaking and subsequent symmetry restoration is a particularly interesting realm that can be explored in the context of quantum quenches. Sudden changes obviously break certain symmetries. However, if the system relaxes and thermalizes, it would be desirable to understand if and when symmetry restoration emerges in the process. Time reversal is perhaps the simplest example of a broken symmetry that exhibits restoration when the system asymptotically approaches a steady state. In this paper we investigate the consequences of quantum quenches on a supersymmetric system. Supersymmetry (SUSY), and especially its breaking, plays an essential role in phenomenology, mainly because of potential relevance to various fundamental aspects of high energy physics that span from the hierarchy problem to the cosmological constant and dark matter conundrums. If there is supersymmetry in Nature, it must be broken, and therefore various mechanisms of SUSY breaking have attracted the attention of theoretical physicists ever since the realm of supersymmetry was discovered. SUSY cannot be broken perturbatively, since quantum corrections preserve supersymmetry at all orders in the perturbative expansion. However, perturbative analysis breaks down when the system is subject to a quantum quench \cite{Sotiriadis:2010si}, and here we argue that a supersymmetric quench of a certain field theory model leads to SUSY breaking. We focus on the simplest supersymmetric extension of the three-dimensional $\phi^6$ vector model. In the presence \cite{Bardeen:1984dx,Moshe:2002ra} or absence \cite{Townsend:1975kh} of supersymmetry this model possesses a complex phase diagram. 
Therefore, it conveniently plays the role of a unique laboratory for the study of various phenomena, {\it e.g.,}\ big bang cosmology \cite{Craps:2009qc}, Vasiliev's higher spin gravity \cite{Aharony:2011jz}, etcetera. To avoid explicit breaking of SUSY we study supersymmetric quantum quenches, {\it i.e.,}\ the parameters of the system are fine-tuned to keep the Hamiltonian supersymmetric before and after the sudden change. We then follow the approach of \cite{Sotiriadis:2010si} and demonstrate that the quantum quench leads to the breaking of supersymmetry in this model. Importantly, the SUSY breaking cannot be attributed to thermal physics, as there is no effective temperature that can be assigned to the final state. Furthermore, we find quite generally that the structure of the effective potential is modified by the quench. In contrast to the non-supersymmetric case \cite{Hung:2012zr}, where competing vacua and phase transitions emerged, in the SUSY case there is only one stable vacuum in the final state. We support our results by numerical dynamical simulations based on the methods of \cite{Sotiriadis:2010si}. While there is a qualitative agreement between estimated and numerically calculated asymptotic effective masses, the specific numbers do not match as well as in the case of scalar field theory. However, it is perhaps not too surprising that numerics in the SUSY case do not match analytic computations to the same extent as in the scalar case studied in \cite{Sotiriadis:2010si}. For one thing, the fermionic field obeys the first-order Dirac equation, whereas the scalar field satisfies the second-order Klein-Gordon equation. Hence, the time derivative of the fermionic field is not continuous across the quench, while that of the scalar field is smooth; we elaborate on this in the concluding remarks section. 
We take advantage of free field theories to uncover the differences in the response of the scalar and fermionic fields to sharp quantum quenches of the mass parameter. Such theories are Gaussian, and therefore any correlator is given in terms of two-point functions that can be expressed as an infinite sum over contributions of instantaneously quenched harmonic oscillators\footnote{In this setup (simple harmonic oscillator) the limit of sudden quenches is a well-posed and exactly solvable problem that does not suffer from the divergences suggested by the holographic calculations in \cite{Buchel:2012gw}.}. We find that the Hamiltonian density of the fermionic field diverges at the instant of the quench, but remains finite otherwise. In odd dimensions this divergence is ultra-local, {\it i.e.,}\ proportional to the delta function and its derivatives supported at the instant of the quench, whereas in even dimensions it is a power law $\sim t^{1-d}$, where $d$ is the number of spatial dimensions. The appearance of singularities at the instant of a sharp quench supports the observations made in \cite{Buchel:2012gw}, and makes its treatment a more intricate problem than the pure scalar case. Fortunately, in three dimensions this problem can be easily circumvented by adopting dimensional regularization to remove the singularities associated with the delta function and its derivatives. The organization of the paper is as follows: in section \ref{major} we consider quantum quenches of the free Majorana field in three spacetime dimensions and find a general solution; in section \ref{sec:scalar} we investigate supersymmetric quantum quenches of the $\mathcal{N} = 1$ supersymmetric vector model; numerical time-evolution of the effective masses is explored in section \ref{sec_dynamics}; and concluding remarks are relegated to section \ref{sec:concl}. 
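For a single bosonic mode, the instantaneously quenched harmonic oscillator referred to above is solved by matching the mode function and its first derivative at $t=0$, which fixes the Bogoliubov coefficients in closed form. A small numerical sketch of this standard free-field result (not specific to the model of this paper; the function name is illustrative):

```python
import numpy as np

def scalar_quench_bogoliubov(w0, w):
    """Bogoliubov coefficients for a bosonic mode whose frequency
    jumps suddenly from w0 to w at t = 0.

    Matching the mode function f(t) = e^{-i w0 t}/sqrt(2 w0) onto
    (alpha e^{-i w t} + beta e^{+i w t})/sqrt(2 w), with f and
    df/dt both continuous at t = 0, gives:
    """
    alpha = (w + w0) / (2.0 * np.sqrt(w * w0))
    beta = (w - w0) / (2.0 * np.sqrt(w * w0))
    return alpha, beta

alpha, beta = scalar_quench_bogoliubov(1.0, 3.0)
# Bosonic normalization |alpha|^2 - |beta|^2 = 1 is preserved.
assert abs(alpha**2 - beta**2 - 1.0) < 1e-12
```

Here $|\beta|^2 = (\omega-\omega_0)^2/(4\omega\omega_0)$ counts the quanta produced in the mode, and mode-by-mode sums of such contributions build up the free-field two-point functions. Note the contrast with the fermionic case treated in the next section, where continuity can be imposed on the field alone.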
\section{Linearly coupled Majorana oscillators} \label{major} In this section we study a sharp quantum quench of a free Majorana field in 2+1 dimensions. The results obtained here will be used in the next section, where we explore a particular supersymmetric model that undergoes a (supersymmetric) quantum quench. Based on this simple example we argue that fermionic fields respond in a substantially different way to quantum quenches than their scalar counterparts \cite{Sotiriadis:2010si}. In particular, the expectation value of the fermionic mass term $\langle \bar\psi\psi\rangle$ exhibits singular behaviour at the instant of the quench and therefore requires careful consideration. This intricate characteristic of sharp quenches was observed numerically in the context of AdS/CFT correspondence in \cite{Buchel:2012gw}, see also \cite{new} for an analytic approach. Here we provide a simple field-theoretic example in favour of this pattern. Since the properties of Majorana spinors in three space-time dimensions may not be universally known, we briefly recall some of them and explain our notation. The Majorana field is defined by the condition \begin{equation} \psi=C\bar\psi^T~, \end{equation} where $\bar\psi=\psi^{\dag}\gamma^0$, $T$ stands for transpose, and $C$ is the charge conjugation matrix obeying \begin{equation} C\gamma^{\mu\,T} C^{-1}=-\gamma^{\mu}~. \end{equation} We use the mostly-minus signature and construct $\gamma$-matrices out of Pauli matrices, $\gamma^{\mu}=(\sigma_y,-i\sigma_z,i\sigma_x)$. This is the so-called Majorana representation. In this representation all $\gamma$-matrices are purely imaginary and the Majorana field is real. Indeed, since $\sigma_y\sigma^*\sigma_y=-\sigma$ with $\sigma$ being an arbitrary Pauli matrix, one can check that $C=-\sigma_y$. 
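These representation properties are easy to confirm numerically; a minimal NumPy check (illustrative only) that the $\gamma$-matrices are purely imaginary, satisfy the Clifford algebra with mostly-minus signature, and that $C=-\sigma_y$ obeys $C\gamma^{\mu\,T}C^{-1}=-\gamma^\mu$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Majorana representation: gamma^mu = (sigma_y, -i sigma_z, i sigma_x)
gamma = [sy, -1j * sz, 1j * sx]
eta = np.diag([1.0, -1.0, -1.0])   # mostly-minus metric
C = -sy                            # charge conjugation matrix

# All gamma-matrices are purely imaginary in this representation.
assert all(np.allclose(g.real, 0) for g in gamma)

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity.
for mu in range(3):
    for nu in range(3):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(2))

# Charge conjugation: C gamma^T C^{-1} = -gamma.
for g in gamma:
    assert np.allclose(C @ g.T @ np.linalg.inv(C), -g)
```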
As a result the Majorana condition reads \begin{equation} \psi=C\bar\psi^T=-\sigma_y(\psi^\dag\sigma_y)^T=\psi^*~. \end{equation} Therefore the Fourier expansion of the free Majorana field of mass $\mu_0$ takes the following form \begin{equation} \hat\psi(x)=\int {d^2 \vec p \over (2\pi)^2} \sqrt{\mu_0\over \omega_{0p}}\[ \hb_{0 p}\,u_{ 0 p}\, e^{-i\omega_{0p} t+i\vec p\cdot \vec x}+ \hb_{ 0 p}^\dag \,u^*_{ 0 p}\, e^{i\omega_{0p} t-i\vec p\cdot \vec x}\]~, \label{MajorFourier} \end{equation} where $\omega_{0p}=\sqrt{\vec p^{\,2}+\mu_0^2}$ and $u_{0p}$ is the on-shell Majorana spinor with positive frequency, {\it i.e.,}\ it satisfies the Dirac equation in the Majorana representation \begin{equation} (\displaystyle{\not} p-\mu_0)u_{ 0 p}=0\, , \quad p^\mu=(\omega_{0p},p_x,p_y)\, . \labell{diraceq} \end{equation} Solving this equation yields \begin{equation} u_{ 0 p}={1\over \sqrt{2\mu_0(p_y+\omega_{0p} )} } \left(\begin{array}{c}p_y+\omega_{0p} \\ p_x+i\mu_0\end{array}\right)~. \labell{spinor} \end{equation} Note that there is no summation over the spin indices in the Fourier expansion \reef{MajorFourier}. Indeed, complex conjugation in the Majorana representation transforms Dirac spinors with mass $\pm\mu_0$ into each other. In particular, $v_{0p}=u^*_{0p}$ represents a positive-frequency spinor with mass $-\mu_0$ or, equivalently, a negative-frequency spinor with mass $+\mu_0$. However, in 3D the spinor space is two-dimensional and therefore there are no other independent positive-frequency spinors which are eigenvectors of $\displaystyle{\not} p$ or, equivalently, solve the Dirac equation. The normalization of $u_{0p}$ is chosen such that the following relations hold \begin{eqnarray} \bar u_{0p} u_{0p}&=&1=-\bar v_{0p} v_{0p}~, \nonumber \\ u^{\dag}_{0p}u_{0p}&=&{\omega_{0p} \over \mu_0}=v^{\dag}_{0p}v_{0p}~,\nonumber \\ \bar v_{0p}u_{0p}&=&0= \bar u_{0p}v_{0p}~,\nonumber \\ v^{\dag}_{0p}u_{0-p}&=&0~. 
\labell{orthrel} \end{eqnarray} In order to satisfy the standard equal-time anticommutation relations \begin{equation} \{\hat\psi_\alpha(t,\vec x),\hat\psi_{\beta}(t,\vec y)\}=\delta^{(2)}(\vec x - \vec y) \delta_{\alpha\beta}~, \labell{commrel} \end{equation} the creation and annihilation operators must obey the following anticommutation rules \begin{equation} \{\hb_{0p},\hb^{\dag}_{0q}\}=(2\pi)^2 \delta^{(2)}( \vec p - \vec q)~, \labell{commrel2} \end{equation} with all other anticommutators equal to zero. The quantum quench that we are going to consider here consists of an instantaneous change of the mass from $\mu_0$ to $\mu$ everywhere in space at time $t=0$. We assume that before the quench the state of the system corresponds to the ground state of the initial Hamiltonian, $|\Psi_0\rangle$. In addition, the system is kept isolated from the environment immediately before and after the instant $t=0$. Since the theory is free, we deduce that immediately after the quench the Majorana field takes the following form \begin{equation} \hat\psi(x)=\int {d^2 \vec p \over (2\pi)^2} \sqrt{\mu\over \omega_{p}}\[ \hb_{p}\,u_{ p}\, e^{-i\omega_{p} t+i\vec p\cdot \vec x}+ \hb_{ p}^\dag \,u^*_{ p}\, e^{i\omega_{p} t-i\vec p\cdot \vec x}\]~, \end{equation} where $\omega_{p}=\sqrt{\vec p^{\,2}+\mu^2}$ and $u_p$, $\hb_{p}$, $\hb_{ p}^\dag$ satisfy eqs. \reef{diraceq}-\reef{commrel2} with $\mu_0$ replaced by $\mu$. Creation and annihilation operators before and after the quench are related by boundary conditions at $t=0$. To uncover these relations, let us examine the Heisenberg equation of motion \begin{equation} \dot{\hat\psi}=i[\hat H, \hat\psi]~. \end{equation} Integrating it in an infinitesimal neighbourhood $\delta$ of the instant $t=0$, and taking the limit $\delta\rightarrow 0$, we deduce that even though the Hamiltonian is only piecewise smooth, the field operator $\hat\psi(x)$ is continuous across the quench.
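The explicit solution \reef{spinor} and the normalization relations \reef{orthrel} lend themselves to a direct numerical cross-check. The following Python sketch (our own illustration, not part of the derivation; variable names are ours) builds the $\gamma$-matrices in the Majorana representation, $\gamma^\mu=(\sigma_y,-i\sigma_z,i\sigma_x)$, and verifies that the spinor of eq.~\reef{spinor} satisfies $\displaystyle{\not}p\, u_{0p}=\mu_0 u_{0p}$ together with $\bar u\,u=1$ and $u^\dag u=\omega/\mu$ for a sample momentum:

```python
import cmath, math

def gammas():
    # Majorana representation: gamma^mu = (sigma_y, -i sigma_z, i sigma_x)
    sy = [[0, -1j], [1j, 0]]
    g0 = sy
    g1 = [[-1j, 0], [0, 1j]]      # -i sigma_z
    g2 = [[0, 1j], [1j, 0]]       # +i sigma_x
    return g0, g1, g2

def u_spinor(px, py, mu):
    # positive-frequency Majorana spinor of eq. (spinor)
    w = math.sqrt(px * px + py * py + mu * mu)
    norm = 1.0 / cmath.sqrt(2 * mu * (py + w))
    return [norm * (py + w), norm * (px + 1j * mu)], w

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

px, py, mu = 0.3, 0.7, 1.3
u, w = u_spinor(px, py, mu)
g0, g1, g2 = gammas()

# slash(p) u = (gamma^0 w - gamma^1 px - gamma^2 py) u must equal mu * u
pslash_u = [w * a - px * b - py * c
            for a, b, c in zip(matvec(g0, u), matvec(g1, u), matvec(g2, u))]
assert all(abs(pslash_u[i] - mu * u[i]) < 1e-12 for i in range(2))

# normalizations: bar(u) u = u^dag gamma^0 u = 1 and u^dag u = w / mu
ubar_u = sum(u[i].conjugate() * matvec(g0, u)[i] for i in range(2))
udag_u = sum(abs(u[i]) ** 2 for i in range(2))
assert abs(ubar_u - 1) < 1e-12 and abs(udag_u - w / mu) < 1e-12
```

The same script with $\mu\to-\mu_0$ checks the negative-mass spinor $v_{0p}=u^*_{0p}$.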
Note, however, that $\dot{\hat\psi}$ exhibits an abrupt jump at $t=0$. Indeed, \begin{equation} \Delta\dot{\hat\psi}\equiv\dot{\hat\psi}\big|_{t=0^+}-\dot{\hat\psi}\big|_{t=0^-}=i[\Delta\hat H, \hat\psi|_{t=0}] ={i(\mu-\mu_0)\over 2}\int d^2\vec x\,[\hat{\overline\psi}\,\hat\psi, \hat\psi]\big|_{t=0}=i(\mu-\mu_0)\hat{\overline\psi}|_{t=0}~, \label{psijump} \end{equation} where in the last equality we used the equal-time anticommutation relations \reef{commrel}. From this perspective the behaviour of the Majorana fermion is different from its scalar counterpart, since in the scalar case both the field and its time derivative are continuous across the quench. The difference between the two cases emanates from the Heisenberg equations of motion: while in the scalar case they yield the second-order Klein-Gordon equation for $\hat\phi$, for the Majorana (or Dirac) field $\hat\psi$ they boil down to the standard first-order Dirac equation. Imposing continuity of the field $\hat\psi$ across the quench leads to the following relations between creation and annihilation operators before and after the quench \begin{equation} \sqrt{\mu\over \omega_{p}}\[ \hb_{p}\,u_{ p}\, + \hb_{ -p}^\dag \,u^*_{ -p}\,\]=\sqrt{\mu_0\over \omega_{0p}}\[ \hb_{0p}\,u_{ 0p}\, + \hb_{ 0-p}^\dag \,u^*_{ 0-p}\,\]~. \end{equation} Using the orthogonality relations \reef{orthrel} for $u_{0p}$, together with similar relations for $u_{p}$, yields the following Bogoliubov transformation between creation-annihilation operators \begin{equation} \left(\begin{array}{c} \hb_p \\ \hb_{-p}^{\dag}\end{array}\right)=\sqrt{\mu\,\mu_0\over\omega_p\,\omega_{0p}} \left(\begin{array}{cc} u_p^{\dag} u_{0p} & u_p^{\dag} u_{0-p}^* \\ v_{-p}^\dag v_{0p}^*& v_{-p}^\dag v_{0-p}^{}\end{array}\right) \left(\begin{array}{c}\hb_{0p} \\ \hb_{0-p}^{\dag}\end{array}\right)~. \end{equation} Now we have all the ingredients to construct the Feynman propagator for the free Majorana field after the quench.
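A useful sanity check of this transformation is that its coefficients $\alpha_p=\sqrt{\mu\mu_0/\omega_p\omega_{0p}}\; u_p^\dag u_{0p}$ and $\beta_p=\sqrt{\mu\mu_0/\omega_p\omega_{0p}}\; u_p^\dag u_{0-p}^*$ satisfy $|\alpha_p|^2+|\beta_p|^2=1$, as required for the post-quench operators to obey the canonical anticommutation relations. A short numerical verification (our own sketch, built from the explicit spinors of eq.~\reef{spinor}; the sample masses and momentum are arbitrary):

```python
import cmath, math

def u_spinor(px, py, m):
    # positive-frequency spinor of eq. (spinor) for mass m
    w = math.sqrt(px * px + py * py + m * m)
    n = 1.0 / cmath.sqrt(2 * m * (py + w))
    return [n * (py + w), n * (px + 1j * m)], w

def dot_dag(a, b):
    # a^dag b for two-component spinors
    return sum(x.conjugate() * y for x, y in zip(a, b))

mu0, mu, px, py = 1.0, 1.7, 0.4, -0.2
u0, w0 = u_spinor(px, py, mu0)          # pre-quench spinor at p
u0m, _ = u_spinor(-px, -py, mu0)        # pre-quench spinor at -p
u, w = u_spinor(px, py, mu)             # post-quench spinor at p

pref = math.sqrt(mu * mu0 / (w * w0))
alpha = pref * dot_dag(u, u0)
beta = pref * dot_dag(u, [c.conjugate() for c in u0m])

# anticommutation relations are preserved iff |alpha|^2 + |beta|^2 = 1
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-12
```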
In momentum space it takes the following form \begin{multline} \langle\Psi_0| \mathcal{T} \{\hat{\psi}_{p \alpha}(t_1) \, \hat{\overline\psi}_{q \beta}(t_2)\} |\Psi_0\rangle=(2\pi)^2\delta^{(2)}(p+q){\mu^2 \mu_0\over \omega_p^2\omega_{0p}} \\ \times \Bigg[\theta(t_1-t_2)\Big( e^{-i\omega_p(t_1-t_2)} \bar u_{p\beta}u_{p\alpha}(u_p^\dag u_{0p})(v_p^\dag v_{0p}) +e^{i\omega_p(t_1-t_2)} \bar v_{-p\beta} v_{-p\alpha}(u_{-p}^\dag v_{0p})(v_{-p}^\dag u_{0p}) \\ +e^{-i\omega_p(t_1+t_2)} \bar v_{-p\beta}u_{p\alpha}(u_p^\dag u_{0p})(u_{-p}^\dag v_{0p}) +e^{i\omega_p(t_1+t_2)} \bar u_{p\beta} v_{-p\alpha}(v_{-p}^\dag u_{0p})(v_{p}^\dag v_{0p}) \Big) \\ -\theta(t_2-t_1)\Big( e^{-i\omega_q(t_2-t_1)} \bar v_{q\beta}v_{q\alpha}(u_q^\dag u_{0q})(v_q^\dag v_{0q}) +e^{i\omega_q(t_2-t_1)} \bar u_{-q\beta} u_{-q\alpha}(u_{-q}^\dag v_{0q})(v_{-q}^\dag u_{0q}) \\ +e^{-i\omega_q(t_1+t_2)} \bar v_{q\beta}u_{-q\alpha}(u_q^\dag u_{0q})(u_{-q}^\dag v_{0q}) +e^{i\omega_q(t_1+t_2)} \bar u_{-q\beta} v_{q\alpha}(v_{-q}^\dag u_{0q})(v_{q}^\dag v_{0q}) \Big)\Bigg]~, \label{majorprop} \end{multline} where $\mathcal{T}$ denotes the time-ordering operator for fermions. Notice that the propagator contains terms that break time invariance. In Appendix \ref{major2}, we present an alternative derivation of the above result using the prescription suggested in \cite{Sotiriadis:2010si}. Various terms in the above expression can be written explicitly using eq. \reef{spinor}.
Thus, for instance, we find \begin{eqnarray} \bar u_{p\beta}u_{p\alpha}= -( \bar v_{p\beta}v_{p\alpha})^*&=&{1\over 2\mu}\left(\begin{array}{cc}ip_x+\mu & i(\omega_p-p_y) \\ -i(\omega_p+p_y)& -ip_x+\mu\end{array}\right)_{\beta\alpha}~, \nonumber \\ (u_p^\dag u_{0p})(v_p^\dag v_{0p}) &=& {(\omega_p+\omega_{0p})^2-(\mu-\mu_0)^2\over 4\mu\mu_0}~, \nonumber \\ (u_{-p}^\dag v_{0p})(v_{-p}^\dag u_{0p}) &=& {(\mu-\mu_0)^2-(\omega_p-\omega_{0p})^2\over 4\mu\mu_0} ~, \nonumber \\ (u_p^\dag u_{0p})(u_{-p}^\dag v_{0p}) &=& (v_{-p}^\dag u_{0p})^*(v_{p}^\dag v_{0p})^*={\mu-\mu_0\over 2\mu_0\mu\sqrt{\omega_p^2-p_y^2}}(\mu\,p_y-i\, \omega_p \, p_x)~. \end{eqnarray} However, in all relevant computations we only need to evaluate eq.\reef{majorprop} and its derivatives in the limit $t_1=t_2=t$. Using \begin{equation} \bar v_{-p} u_p=-{\mu\, p_y+i\, \omega_p \, p_x\over \mu \sqrt{\omega_p^2-p_y^2}}\, , \quad \quad \bar u_{p} v_{-p}=-{\mu\, p_y-i\, \omega_p \, p_x\over \mu \sqrt{\omega_p^2-p_y^2}}~, \end{equation} yields \begin{equation} \langle\Psi_0| \hat{\overline\psi} \, \psi |\Psi_0\rangle =-{\mu\over 2} \int{d^2p\over (2\pi)^2}{\omega_p^2+\omega_{0p}^2-(\mu-\mu_0)^2 \over \omega_p^2\omega_{0p}} +(\mu-\mu_0)\int{d^2p\over (2\pi)^2} {p^2\over \omega_p^2\omega_{0p}}\cos(2\omega_pt)~. \labell{ferloop} \end{equation} The second term in this expression breaks time reversal. Such a term is also present in the scalar case, and it was shown in \cite{Sotiriadis:2010si} that its contribution is finite and vanishes in the asymptotic future. However, in our case this term diverges.
To isolate the corresponding divergence, let us subtract and add $\cos(2\omega_pt)/\omega_p$ to the integrand of the second term; then eq.\reef{ferloop} takes the following form \begin{multline} \langle\Psi_0| \hat{\overline\psi} \, \psi |\Psi_0\rangle =-{\mu\over 2} \int{d^2p\over (2\pi)^2}{\omega_p^2+\omega_{0p}^2-(\mu-\mu_0)^2 \over \omega_p^2\omega_{0p}} +(\mu-\mu_0)\int{d^2p\over (2\pi)^2} {p^2-\omega_p\omega_{0p}\over \omega_p^2\omega_{0p}}\cos(2\omega_pt)\\-(\mu-\mu_0){\sin(2|\mu| t)\over 4\pi t}+{\mu-\mu_0\over 4}\,\delta(t)~, \labell{Floop} \end{multline} where we used the following identity \begin{equation} \int{d^2p\over (2\pi)^2} {\cos(2\omega_pt)\over \omega_p}=\int_{|\mu|}^\infty {d\omega_p\over 2\pi} \cos(2\omega_pt)= -{\sin(2|\mu| t)\over 4\pi t} +{1\over 4}\delta(t)~. \label{iden} \end{equation} The delta function is not the only singularity of eq.\reef{Floop}: its first term exhibits a linear divergence. However, in dimensional regularization these singularities disappear as $d\to2$. For instance, eq.\reef{iden} becomes \begin{multline} \int \frac{d^d p}{(2 \pi)^d} {\cos(2\omega_pt) \over \omega_p} ={1\over 2^{d-1}\pi^{d/2}\Gamma(d/2)}\int_{|\mu|}^{\infty} (\omega_p^2-\mu^2)^{d-2\over 2} \cos(2\omega_pt) d\omega_p \\ = -{1\over 2^{d}\pi^{d-1\over2}}\Big({t\over |\mu|}\Big)^{1-d\over 2} Y_{1-d\over 2}(2|\mu| t) \label{idnt}~. \end{multline} Therefore the integral is finite and decays as $t^{-d/2}$ in the asymptotic future, defined by $t\gg\mu^{-1}$. Remarkably, the above expression exhibits a significant difference between odd and even $d$ for $t\ll |\mu|^{-1}$ \begin{equation} \label{odd_even} \int \frac{d^d p}{(2 \pi)^d} {\cos(2\omega_pt) \over \omega_p} \underset{t\mu\to 0}{\longrightarrow} \left\{\begin{array}{c} {(-1)^{d-1\over 2}\Gamma\big({d-1\over 2}\big)\over 2^d\pi^{d+1\over 2} }~ t^{1-d} \quad \text{odd $d$\,,} \\ \\ {\Gamma\big({1-d\over 2}\big)\over 2^d\pi^{d+1\over 2} }~ |\mu|^{d-1} \quad\quad\quad~~ \text{even $d$\,.} \end{array}\right.
\end{equation} In particular, for odd $d$, {\it i.e.,}\ even-dimensional space-time, eq.\reef{idnt} diverges as $t|\mu|\to 0$. Of course, strictly speaking the above computation cannot be taken at face value for general $d$, since the space of Dirac spinors is not the same in different dimensions. However, it suggests that in even-dimensional space-time the limit of an infinitely sharp quench may require a refined analysis in the presence of fermions \cite{Buchel:2012gw}; we leave the investigation of this issue for future work. In our case $d$ is even, and therefore, either implicitly assuming an appropriate regularization scheme or by noting that $\delta(t)=0$ in the region of our main interest ($t|\mu|\to\infty$), we drop the last term in eq.\reef{Floop}. The second term in eq.\reef{Floop} is now finite. It breaks time reversal and decreases in time. More specifically, using the stationary phase method one can show that for large times it decays as $1/t$. Hence, for $t\gg\mu^{-1}$ only the first term contributes \begin{equation} \langle\Psi_0| \hat{\overline\psi} \, \psi |\Psi_0\rangle\big|_{t\gg\mu^{-1}} =-{\mu\over 2} \int{d^2p\over (2\pi)^2}{\omega_p^2+\omega_{0p}^2-(\mu-\mu_0)^2 \over \omega_p^2\omega_{0p}}~. \labell{infmajor2p} \end{equation} Now using the definition of $\mathcal{T}$, the anticommutation relations \reef{commrel} and the fact that $\hat\psi$ satisfies the Dirac equation, yields the standard Green's equation that the Feynman propagator must satisfy \begin{equation} (i\displaystyle{\not}\partial_{x_1}-\mu) \langle\Psi_0| \mathcal{T} \{\hat{\psi}_{\alpha}(x_1,t_1) \, \hat{\overline\psi}_{ \beta}(x_2,t_2)\} |\Psi_0\rangle=i\delta(t_1-t_2)\delta^{(2)}(\vec x_1 - \vec x_2) \delta_{\alpha\beta}~. \end{equation} We verified that our final expression in eq.\reef{majorprop} indeed satisfies this identity.
In particular, it follows that \begin{equation} \langle\Psi_0| \hat{\overline\psi} \,i \displaystyle{\not}\partial \,\psi |\Psi_0\rangle = \mu \langle\Psi_0| \hat{\overline\psi} \, \psi |\Psi_0\rangle~. \label{major2p} \end{equation} \section{Quantum quenches of $\mathcal{N}=1$ SUSY in 3D} \label{sec:scalar} We turn now to explore the impact of quantum quenches on supersymmetry. This problem is particularly interesting in light of our findings in the previous section, where we argued that the response of the fermionic field to sudden changes in the parameters of the theory is substantially different from that of the scalar field. Unfortunately, the study of quantum quenches in the presence of interactions is an intricate field-theory problem without a general self-consistent approach. Therefore we resort to studies of the simplest supersymmetric extension of the $O(N)$ $\phi^6$ model \cite{Bardeen:1984dx} using techniques developed by S.~Sotiriadis and J.~Cardy in\footnote{Sharp quenches of the $\phi^6$ model without supersymmetry were studied in \cite{Hung:2012zr}, see also \cite{Das:2012mt} for quenches of the $O(N)$ nonlinear sigma model.} \cite{Sotiriadis:2010si}. Their method is based on certain assumptions that make the problem of quantum quenches tractable in the large-$N$ limit. Of course, the final results are only reliable provided that these assumptions indeed hold; therefore we test our conclusions numerically in the next section. We find that the asymptotic state is stationary but not thermal, and demonstrate that supersymmetry in this state is broken. The model that we study is the $\mathcal{N}=1$ supersymmetric theory consisting of an $N$-component real scalar field $\phi$ and an $N$-component, two-component Majorana spinor $\psi$.
The corresponding action takes the following form\footnote{The superspace representation of the model can be found in, {\it e.g.,}\ \cite{Gates:1983nr,Moshe:2002ra}.} \begin{eqnarray} S(\phi,\psi)&=&{1\over 2} \int d^3x \big[{\partial}_\mu\phi{\partial}^\mu\phi-\mu_0^2\phi^2+\bar\psi(i\displaystyle{\not}\partial-\mu_0)\psi \nonumber \\ &-&2\,{g_0\mu_0 \over N}(\phi^2)^2-{g_0^2\over N^2}(\phi^2)^3-{g_0\over N}\phi^2(\bar\psi\cdot\psi) -2\,{g_0\over N}(\phi\cdot\bar\psi)(\phi\cdot\psi)\big]~. \labell{action} \end{eqnarray} Our spinor notation is explained in the previous section. At $t=0$ we instantaneously change the mass from $\mu_0$ to $\mu$ and the coupling constant from $g_0$ to $g$. As before, for simplicity we assume that initially $g_0=0$, {\it i.e.,}\ there is no interaction before the quench, and the system is prepared in the ground state of the corresponding free Hamiltonian, $|\Psi_0\rangle$. Since the parameters of the theory are changed abruptly rather than adiabatically, one needs to resort to the well-known Keldysh-Schwinger, or in-in, formalism for non-equilibrium quantum systems. In this approach one imposes the boundary conditions only at $t=t_i$, and in our case they are such that the initial state at $t_i=0$ is identical to $|\Psi_0\rangle$.
In particular, the expectation value of an arbitrary operator $\mathcal{\hat O}(t)$ is given by \begin{equation} \langle \Psi_0 | \mathcal{ \hat O}(t) |\Psi_0\rangle =\int_{CTP} D\eta \, \mathcal{ \hat O}(t) \, e^{i S(\eta)}~, \end{equation} where for brevity $\eta$ collectively denotes the scalar and Majorana fields, and the following notation is used to designate the closed-time-path (CTP) integral measure \begin{equation} \int_{CTP} D\eta=\int D\eta_i \, \Psi_0(\eta_i)\int D \tilde\eta_i \, \Psi_0^*(\tilde\eta_i) \int_{\eta_i}^{\tilde\eta_i}D\eta ~, \end{equation} where $\eta_i$ and $\tilde\eta_i$ denote the values of the fields at the end points of the time contour, whereas $\Psi_0(\eta_i)=\langle \eta_i|\Psi_0\rangle$ and similarly for the complex conjugate $\Psi_0^*(\tilde\eta_i)$. In what follows we are going to use the large-$N$ (or equivalently Hartree-Fock) approximation in order to explore the evolution of the effective mass associated with the real scalar and Majorana fields. We start by noting that the action is quadratic in $\psi$, and hence the Majorana field can be easily integrated out; the action then takes the following form \begin{multline} S(\phi)={1\over 2} \int d^3x \big[{\partial}_\mu\phi{\partial}^\mu\phi-\mu_0^2\phi^2-2\,{g_0\mu_0 \over N}(\phi^2)^2-{g_0^2\over N^2}(\phi^2)^3\big] \\ -i{N-1\over 2}\text{Tr}\log\big(i\displaystyle{\not}\partial-\mu-{g\over N}\phi^2\big)-{i\over 2}\text{Tr}\log\big(i\displaystyle{\not}\partial-\mu-3 {g\over N}\phi^2\big) ~. \end{multline} Using now the following identity\footnote{We keep the CTP label in the path integral over $\rho$ and $\lambda$ to emphasize that the delta-function is inserted at each point of the Keldysh-Schwinger contour. Obviously there are no boundary conditions associated with $\rho$ and $\lambda$.
Note also that the equalities hold up to an irrelevant multiplicative constant.} \begin{equation} \mathbb{I} = \int_{CTP} D\rho\,\delta(\phi^2-N\rho)=\int_{CTP} D\rho D\lambda ~e^{-{i\over 2}\int d^3x \,\lambda(\phi^2-N\rho)}~, \label{identity} \end{equation} we can rewrite the path integral over the scalar field $\phi$ as follows \begin{equation} \langle \Psi_0 | \mathcal{ \hat O}(t) |\Psi_0\rangle = \int_{CTP} D\phi \int D\rho D\lambda\, \mathcal{ \hat O}(t) \, e^{i S(\phi,\rho,\lambda)}~, \end{equation} where \begin{multline} S(\phi,\rho,\lambda)=N\int d^3x\[{\lambda\,\rho\over 2}-{g^2 \over 2}\rho^3-g\, \mu\, \rho^2\]-{1\over 2}\int d^3x \phi(\square+\mu^2+\lambda)\phi \\ -i{N-1\over 2}\text{Tr}\log\big(i\displaystyle{\not}\partial-\mu-g\rho\big)-{i\over 2}\text{Tr}\log\big(i\displaystyle{\not}\partial-\mu-3 g\rho\big) ~. \end{multline} Performing the Gaussian integral over $\phi$ yields \begin{equation} \langle \Psi_0 | \mathcal{ \hat O}(t) |\Psi_0\rangle =\int_{CTP} D\rho D\lambda \, \mathcal{ \hat O}(t) \, e^{i N S_{eff}(\rho,\lambda)}~, \label{pathint} \end{equation} with \begin{multline} S_{eff}(\rho,\lambda)=\int d^3x\[{\lambda\,\rho\over 2}-{g^2 \over 2}\rho^3-g\, \mu\, \rho^2\]+{i\over 2}\text{Tr}\log(\square+\mu^2+\lambda) \\ -{i\over 2}{N-1\over N}\text{Tr}\log(i\displaystyle{\not}\partial-\mu-g\rho)-{i\over 2N}\text{Tr}\log(i\displaystyle{\not}\partial-\mu-3 g\rho) ~. \label{susyact} \end{multline} Since we are interested in the large-$N$ limit, we drop the last term in the second line and replace $N-1$ by $N$. The boundary conditions are now encoded in the functional traces, which explicitly depend on the integration parameters $\lambda$ and $\rho$. The latter, of course, makes their evaluation extremely difficult. In what follows we are interested in studying the evolution of the effective mass of the fields for $t>0$. When $N\to\infty$ (with $g$ and $\mu$ fixed), the effective mass can be evaluated using the stationary phase approximation.
Indeed, in this limit the right hand side of \reef{pathint} is dominated by the field configuration that minimizes \reef{susyact}, {\it i.e.,}\ solves the corresponding classical equations of motion derived from $S_{eff}(\rho,\lambda)$ \begin{eqnarray} m_\phi^2&\equiv&\mu^2+\bar\lambda=\mu^2+4 g \mu \bar\rho+3 g^2\bar\rho^2-g \int {d^2 p \over (2\pi)^2} \, \text{tr}\,\tilde G_\psi(t,t;p) ~,\nonumber \\ \bar\rho&=&\int {d^2 p \over (2\pi)^2} \, \tilde G_\phi(t,t;p)~, \label{susygap} \end{eqnarray} where ``$\text{tr}$" denotes the trace over spinor indices, $m_\phi^2$ as defined above corresponds to the effective mass of the scalar field, while $\tilde G_\psi(t_1,t_2;p)$ and $\tilde G_\phi(t_1,t_2;p)$ represent the leading-$N$ momentum-space two-point correlation functions of the Majorana and scalar fields, respectively. Note that $\tilde G_\psi(t_1,t_2;p)$ depends on \begin{equation} m_\psi\equiv\mu+g\bar\rho~, \end{equation} which plays the role of the effective mass of the Majorana field. To explore supersymmetry breaking we consider the following order parameter \begin{equation} m_\phi^2-m_\psi^2=2 g m_\psi\bar\rho- g \int {d^2 p \over (2\pi)^2} \text{tr}\, \tilde G_\psi(t,t;p)~. \label{orderparam} \end{equation} The right hand side of the above equation must vanish in order to preserve supersymmetry. Unfortunately, it is difficult to analyze it in full generality, since an exact solution of the gap equations \reef{susygap} is out of reach. Hence, we resort to the approximation proposed in \cite{Sotiriadis:2010si}, {\it i.e.,}\ we assume that as $t\to\infty$\footnote{Here and in what follows $t\to\infty$ means that time is much larger than any other length scale in the problem.}, $m_\phi$ and $m_\psi$ approach stationary values $m_\phi^*$ and $m_\psi^*$ respectively, and that at late times, large compared to the duration of the transients, their evolution can be approximated by a jump.
Then the two-point correlation function $\tilde G_\phi(t_1,t_2;p)$ (or $\tilde G_\psi(t_1,t_2;p)$) is approximately the same as the propagator in the free massive scalar (or fermion) field theory in which the mass parameter is instantaneously changed from $\mu_0$ to $m_{\phi}^*$ (or $m_\psi^*$), {\it i.e.,}\ \begin{eqnarray} \label{free_corr} \tilde G_\phi(t_1,t_2;p)&\simeq& G_\phi(t_1,t_2;p;\mu_0, m^*_\phi)~, \nonumber \\ \tilde G_\psi(t_1,t_2;p)&\simeq& G_\psi(t_1,t_2;p;\mu_0, m_\psi^*)~, \labell{approx} \end{eqnarray} where, as shown in \cite{Sotiriadis:2010si}, \begin{equation} G_\phi(t_1,t_2;p;\mu_0, m^*_\phi)={(\tilde\omega_p^*-\tilde\omega_{0p})^2\over 4\tilde\omega_p^{*2}\tilde\omega_{0p}}\cos\tilde\omega_p^*(t_1-t_2) +{\tilde\omega_p^{*2}-\tilde\omega_{0p}^2\over 4\tilde\omega_p^{*2}\tilde\omega_{0p}}\cos\tilde\omega_p^*(t_1+t_2) +{1\over 2\tilde\omega_p^*} e^{-i\tilde\omega_p^*|t_1-t_2|}~, \label{phicorr} \end{equation} with $\tilde\omega_{p}^*=\sqrt{\vec p^{\,2}+m_\phi^{*2}}$ and $\tilde\omega_{0p}=\sqrt{\vec p^{\,2}+\mu_0^2}$, whereas for the Majorana field $G_\psi(t_1,t_2;p;\mu_0, m_\psi^*)$ is given by eq.\reef{majorprop} with $\mu$ there substituted by $m_\psi^*$. These relations are expected to be asymptotically exact. Their validity is scrutinized in the next section, where we present numerical studies of the full time evolution of the effective masses without implementing assumptions about asymptotic stationarity and fast relaxation. Note that the second term on the right hand side of eq.\reef{phicorr}, as well as the terms in the third and last lines of eq.\reef{majorprop}, break time invariance; however, as argued in the previous section, their contribution vanishes in the limit $t_1=t_2=t\gg m_\phi^{*-1},m_\psi^{*-1}$, and hence we drop them in what follows.
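At equal times the surviving part of eq.~\reef{phicorr} resums into the familiar Bogoliubov form: dropping the $\cos 2\tilde\omega_p^* t$ piece, $G_\phi(t,t;p)$ equals $(\tilde\omega_p^{*2}+\tilde\omega_{0p}^2)/(4\tilde\omega_p^{*2}\tilde\omega_{0p})=\frac{1}{2\tilde\omega_p^*}(1+2n_p)$, with occupation number $n_p=(\tilde\omega_p^*-\tilde\omega_{0p})^2/(4\tilde\omega_p^*\tilde\omega_{0p})$; this is the combination that enters the loop integrals used for the order parameter. A one-line numerical check of this algebra (our own sketch; the sample frequencies are arbitrary):

```python
import math, random

random.seed(1)
for _ in range(5):
    w = random.uniform(0.5, 3.0)    # tilde omega_p^*  (post-quench frequency)
    w0 = random.uniform(0.5, 3.0)   # tilde omega_{0p} (pre-quench frequency)
    # equal-time propagator with the cos(2 w t) term dropped
    stationary = (w - w0) ** 2 / (4 * w * w * w0) + 1 / (2 * w)
    n = (w - w0) ** 2 / (4 * w * w0)            # Bogoliubov occupation number
    assert abs(stationary - (w * w + w0 * w0) / (4 * w * w * w0)) < 1e-12
    assert abs(stationary - (1 + 2 * n) / (2 * w)) < 1e-12
```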
As a result, asymptotically eq.\reef{orderparam} becomes \begin{equation} m_\phi^{*\,2}-m_\psi^{*\,2}=2 g m_\psi^*\int {d^2 p \over (2\pi)^2} G_\phi(t,t;p;\mu_0,m_\phi^*)- g \int {d^2 p \over (2\pi)^2} \text{tr} \, G_\psi(t,t;p;\mu_0,m_\psi^*)~. \end{equation} Now it follows from eq.\reef{infmajor2p} that for $t\gg m_\psi^{*-1}$ the following relation holds \begin{multline} \text{tr} \, G_\psi(t,t;p;\mu_0,m_\psi^*)= m_\psi^* \, {\omega_p^{*2}+\omega_{0p}^2-(m_\psi^*-\mu_0)^2\over 2\omega_p^{*2}\omega_{0p}} \\ =2m_\psi^*G_\phi(t,t;p;\mu_0,m_\psi^*)-m_\psi^* {(m_\psi^*-\mu_0)^2\over 2\omega_p^{*2}\omega_{0p}}\,. \end{multline} Hence, \begin{eqnarray} m_\phi^{*\,2}-m_\psi^{*\,2}&=&2 g m_\psi^*\int {d^2 p \over (2\pi)^2} \bigg( G_\phi(t,t;p;\mu_0,m_\phi^*)- G_\phi(t,t;p;\mu_0,m_\psi^*) \bigg) \nonumber \\ &+&{g\, m_\psi^*\over 4\pi} {(m_\psi^*-\mu_0)^{2}\over (m_\psi^{*2}-\mu_0^2)^{1/2}}\arccos\bigg( {\mu_0\over |m_\psi^*|} \bigg)~. \labell{order} \end{eqnarray} The first thing to note about this expression is that each loop integral diverges linearly; however, the divergent contributions cancel between the scalar and fermionic loops, reflecting the supersymmetric nature of the underlying model. Let us now ask whether any supersymmetric solution of the gap equations \reef{susygap} exists within our approximation. From eq.\reef{order} such a solution (if it exists) must satisfy $m_\phi^{*}=m_\psi^{*}=\mu_0$. It corresponds to a supersymmetric state that emerges as $t\to\infty$ and is characterized by the same mass parameter $\mu_0$ as before the quench. Plugging this solution back into the definition of the Majorana mass $m_\psi=\mu+g\bar\rho$ results in the following constraint \begin{equation} \mu_0=\mu+g\int {d^d p \over (2\pi)^d} {1\over 2\sqrt{p^2+\mu_0^2}}\underset{d\rightarrow 2}{=}\mu-{g\over 4\pi}|\mu_0| ~.
\end{equation} This constraint represents a family of supersymmetric solutions, but it cannot be satisfied for all values of $\mu$, $\mu_0$ and $g$, and therefore generically SUSY is broken in the final state\footnote{Strictly speaking, this constraint should hold only approximately, since it was derived within a certain approximation scheme.}. For instance, one such fine-tuned supersymmetric solution is given by $m_\phi^{*}=m_\psi^{*}=\mu_0>0, \,g=-4\pi$ and $\mu=0$. However, as we demonstrate below, it represents an inflection point rather than a (local) minimum of the effective potential, and hence does not correspond to a (meta)stable phase of the theory. Numerical calculations of the next section support this conclusion. Implementing the approximation \reef{approx} at the level of the gap equations \reef{susygap}, we find that the stationary values $m_\phi^*$ and $m_\psi^*$ satisfy the following equations \begin{eqnarray} m_\psi^{*}&=&\mu+g\bar\rho~, \label{mpsi} \\ \bar\rho&=&\lim_{t\to\infty}\int {d^2 p \over (2\pi)^2} \, \tilde G_\phi(t,t;p)=-{1\over 4\pi}\bigg(\mu_0+{1\over 2}\sqrt{m_\phi^{*2}-\mu_0^2}\,\arccos(\mu_0/ |m_\phi^*|)\bigg)~, \labell{susygap2} \\ m_\phi^{*2}&=&\mu^2+4 g \mu \bar\rho+3 g^2\bar\rho^2 +{g \, m_\psi^*\over 2\pi}\bigg(\mu_0+{1\over 2}\sqrt{m_\psi^{*2}-\mu_0^2}\,\arccos (\mu_0/ |m_\psi^*|)\bigg) \nonumber \\ &&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~ +{g\, m_\psi^*\over 4\pi} {(m_\psi^*-\mu_0)^{2}\over (m_\psi^{*2}-\mu_0^2)^{1/2}}\arccos\bigg( {\mu_0\over |m_\psi^*|} \bigg) ~, \labell{susygap3} \end{eqnarray} where dimensional regularization has been used to regulate the divergent loop integral in the definition of $\bar\rho$. Solutions of these equations describe the stationary points of the effective potential and correspond to the various phases of the theory. Typical plots of $m_\psi^*$ as a function of $m_\phi^{*2}$ for $\mu_0=1$, $\mu=0$ and various values of the coupling constant $g$ are shown in figure \ref{fig:mf_ms}.
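As a numerical illustration (our own cross-check, not part of the derivation), the system \reef{mpsi}-\reef{susygap3} reduces to a single equation for $m_\phi^*$, since $m_\psi^*$ is fixed by $m_\phi^*$ through eqs.~\reef{mpsi} and \reef{susygap2}, and can be solved by bisection. For $|m^*|<\mu_0$ we use the analytic continuation $\sqrt{m^{*2}-\mu_0^2}\,\arccos(\mu_0/|m^*|)=-\sqrt{\mu_0^2-m^{*2}}\,\mathrm{arccosh}(\mu_0/|m^*|)$ (this branch choice is our assumption; it is the one that reproduces the values quoted in table \ref{tab}). For $\mu_0=1$, $\mu=0$, $g=5$ the sketch below returns $m_\phi^*\simeq0.199$, $m_\psi^*\simeq0.05$:

```python
import math

MU0 = 1.0

def s(m):
    # sqrt(m^2 - mu0^2) * arccos(mu0/|m|), continued to |m| < mu0
    a = abs(m)
    if a >= MU0:
        return math.sqrt(m * m - MU0 * MU0) * math.acos(MU0 / a)
    return -math.sqrt(MU0 * MU0 - m * m) * math.acosh(MU0 / a)

def rho_bar(m_phi):
    # eq. (susygap2)
    return -(MU0 + 0.5 * s(m_phi)) / (4 * math.pi)

def residual(m_phi, g, mu=0.0):
    # eq. (susygap3) rewritten as F(m_phi) = 0, with m_psi from eq. (mpsi)
    rho = rho_bar(m_phi)
    m_psi = mu + g * rho
    rhs = (mu * mu + 4 * g * mu * rho + 3 * g * g * rho * rho
           + g * m_psi / (2 * math.pi) * (MU0 + 0.5 * s(m_psi))
           + g * m_psi / (4 * math.pi) * (m_psi - MU0) ** 2
             * s(m_psi) / (m_psi * m_psi - MU0 * MU0))
    return m_phi * m_phi - rhs

def solve(g, lo=0.15, hi=0.30):
    # bisection; the bracket is chosen by inspection for g = 5
    f_lo = residual(lo, g)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid, g) * f_lo > 0:
            lo, f_lo = mid, residual(mid, g)
        else:
            hi = mid
    return 0.5 * (lo + hi)

m_phi = solve(5.0)
m_psi = 5.0 * rho_bar(m_phi)
assert abs(m_phi - 0.199) < 5e-3 and abs(m_psi - 0.05) < 5e-3
```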
In the next subsection we construct the effective potential and explore its behaviour as a function of the parameters of the theory. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.95]{mf_ms.pdf} \caption{The stationary fermion mass, $m_\psi^*$, as a function of the scalar mass $m_\phi^*$ for $\mu_0=1$, $\mu=0$ and a set of values of the coupling constant $g$. } \label{fig:mf_ms} \end{center} \end{figure} \subsection*{Effective potential} To analyze the phase structure of the model, let us consider the effective potential of the theory as $t\rightarrow\infty$. To leading order in the $1/N$ expansion it is given by eq.\reef{susyact} evaluated for constant $\bar\rho$ and $\bar\lambda$ (or $m_\phi^{*2}$). Up to an irrelevant constant it takes the following form \begin{eqnarray} V_{eff}(\bar\rho,m_\phi^{*2})&=&{\mu^2\over 2}\bar\rho+g\mu\bar\rho^2+{g^2 \over 2}\bar\rho^3-{m_\phi^{*2}\,\bar\rho\over 2} +{1\over 2} \int_0^{m_\phi^{*2}} dm^2 \int {d^2 p \over (2\pi)^2} \, G_\phi(t,t;p;\mu_0,m) \nonumber \\ &&-{1\over 2}\int_0^{\mu+g\bar\rho} dm \int {d^2 p \over (2\pi)^2} \text{tr} \, G_\psi(t,t;p;\mu_0,m)~. \labell{effpot} \end{eqnarray} Varying this effective potential with respect to $\bar\rho$ and $m_\phi^{*2}$ reproduces the saddle point equations \reef{susygap}. Now we can use eq.\reef{susygap2} to eliminate the auxiliary field $\bar\rho$ and express $V_{eff}$ in terms of the variational parameter $m_\phi^{*}$ only. The value of the physical mass $m_\phi^{*}$ corresponds to the minimum of the resulting $V_{eff}$. The role of the effective potential within the Hartree-Fock approximation resembles that of the free energy in thermodynamics, which must be minimized at equilibrium with respect to any unconstrained internal variable for a closed system at constant temperature and volume. In particular, the system is stable if and only if the free energy as a function of the unconstrained variables is bounded from below.
In our case, for $m_\phi^{*}\gg\mu_0,\mu$ we have \begin{equation} V_{eff}\simeq{1024-12 g^2+(g^2)^{3/2}\over 98304} m_\phi^{*3} >0~, \end{equation} and therefore, as expected in a supersymmetric theory, the potential is always bounded from below and the system is stable. Let us analyze the effective potential as a function of $g$ when the only dimensionful parameter $\mu$ is set to zero. In this case the theory right after the quench is conformal, since to leading order in $1/ N$ all anomalous dimensions vanish. In appendix \ref{emtrace} we explicitly verify that the expectation value of the trace of the supersymmetric energy-momentum tensor indeed vanishes in the quenched state. The characteristic shape of the effective potential \reef{effpot} for several choices of the coupling constant $g$ is shown in figure \ref{fig:phases}. The only supersymmetric solution of the gap eqs. \reef{mpsi}-\reef{susygap3} for this choice of $\mu$ is given by $m_\phi^{*}=m_\psi^{*}=\mu_0$ and $g=-4\pi$. However, as shown in the plot, this solution corresponds to an inflection point rather than to a minimum of $V_{eff}$, and therefore it does not represent a stable phase of the theory. Hence, we conclude that in this case supersymmetry is always broken in the final state. Table \ref{tab} presents the masses $m_\phi^*$ and $m_\psi^*$ for a select set of values of the coupling constant $g$.
\begin{table}[ht] \centering \begin{tabular}{c c c } \hline\hline & $m_\phi^*$ & $m_\psi^*$ \\ [0.5ex] \hline $g=5$& $0.199$ & $0.05$\\ [0.5ex] \hline $g=-10$& $0.234$& $-0.03$\\ [0.5ex] \hline $g=-4\pi$& $0.239$ & $-0.025$ \\[0.5ex] \hline \end{tabular} \caption{ Masses of the particles for various values of the coupling constant $g$, with $\mu_0=1$ and $\mu=0$.} \label{tab} \end{table} Remarkably, SUSY breaking in this case cannot be attributed to thermal physics since, as argued in \cite{Sotiriadis:2010si,Hung:2012zr}, the theory is integrable to leading order in $1/N$, and therefore the final state cannot be described by an emergent effective temperature. \begin{figure}[t] \begin{center} \includegraphics[scale=1]{Veff.pdf} \caption{Effective potential \reef{effpot} as a function of $m_\phi^*$ for several values of the coupling $g$, with $\mu_0=1$ and $\mu=0$. } \label{fig:phases} \end{center} \end{figure} \section{Dynamical evolution of the effective masses} \label{sec_dynamics} We have analysed the phase structure of the model, assuming that the effective masses relax to their asymptotic values at late times, after an instantaneous initial transient. The full time evolution requires solving the gap equations (\ref{susygap}) to the future of the quench, $t>0$. While difficult to achieve in general, the problem becomes tractable under the assumption of a short transient and subsequent evolution described by the free propagators (\ref{free_corr}) with time-dependent effective masses \cite{Sotiriadis:2010si}. In this section we test this hypothesis and solve the dynamical evolution problem by integrating the effective masses in time. To leading order in the large-$N$ expansion the model is effectively free.
In particular, it can be decomposed into a set of independent harmonic modes in momentum space that are coupled via the time-dependent effective masses of the scalar and Majorana fields, see eq.\reef{susygap} and the discussion thereafter, \begin{equation} \label{meff_t} m_{\phi}^2(t) = \mu^2+4g\mu\bar\rho+3 g^2 \bar\rho^2+ g \, \bar\rho_\psi, ~~~~~~~ m_\psi(t)=\mu + g \, \bar\rho \end{equation} where \begin{eqnarray} \label{rhos} \bar\rho&=&{1\over N}\langle \phi^2\rangle=\int {d^2 p \over (2\pi)^2} \, \tilde G_\phi(t,t;p), \nonumber\\ \bar\rho_\psi&=&{1\over N}\langle \bar\psi\psi\rangle=-\int {d^2 p \over (2\pi)^2} \text{tr}\, \tilde G_\psi(t,t;p). \end{eqnarray} Requiring that $\phi$ and $\psi$ are continuous across the instant of the quench, it follows that at $t=0^+$ the effective masses are given by \begin{eqnarray} \label{id} m_{\phi}^2(0^+) &=&\mu^2+4 g \mu \({-\mu_0\over 4\pi}\)+3 g^2 \({-\mu_0\over 4\pi}\)^2-2 g \mu_0 \({-\mu_0\over 4\pi}\) \, , \nonumber \\ m_\psi(0^+)&=&\mu+g \({-\mu_0\over 4\pi}\), \end{eqnarray} where the loop integrals were evaluated in the free field theory, and dimensional regularization was used to handle the divergences. The time evolution of the effective masses can be formulated as an initial value problem. Indeed, as argued in \cite{Sotiriadis:2010si}, the equations of motion governing the modes with spatial momentum $p$ can be solved by introducing the following ansatz \begin{equation} \label{y_Omega} \hat\Psi_p(t) \sim \frac{1}{\sqrt{2\, \Omega_{\Psi }(t)}} \, \exp\(-i\, \int^t_0 \Omega_{\Psi} (t') dt' \), \end{equation} where $\hat\Psi=(\hat\phi,\hat\psi)$ collectively denotes the scalar and Majorana field operators.
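This ansatz solves the mode equation $\ddot{\hat\Psi}_p+\omega_\Psi^2(t)\hat\Psi_p=0$ exactly, provided $\Omega_\Psi$ obeys the nonlinear equation \reef{om_eqs} below, which can be rearranged as $\ddot\Omega_\Psi=2\Omega_\Psi(\omega_\Psi^2-\Omega_\Psi^2)+\tfrac{3}{2}\dot\Omega_\Psi^2/\Omega_\Psi$. As a sanity check of this reduction (our own sketch; the smooth test profile $\omega^2(t)=1+\tfrac12\tanh t$ and the integration step are arbitrary choices), one can integrate both equations side by side and verify $|\Psi(t)|=1/\sqrt{2\,\Omega(t)}$:

```python
import math

def omega2(t):
    # smooth test profile for omega_Psi^2(t); any positive profile works
    return 1.0 + 0.5 * math.tanh(t)

def rk4(f, y, t, h):
    # one classical Runge-Kutta step for a first-order system y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, [a + h / 2 * b for a, b in zip(y, k1)])
    k3 = f(t + h / 2, [a + h / 2 * b for a, b in zip(y, k2)])
    k4 = f(t + h, [a + h * b for a, b in zip(y, k3)])
    return [a + h / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def mode_rhs(t, y):          # y = (Psi, dPsi);  Psi'' = -omega^2(t) Psi
    return [y[1], -omega2(t) * y[0]]

def freq_rhs(t, y):          # y = (Omega, dOmega); rearranged eq. (om_eqs)
    om, dom = y
    return [dom, 2 * om * (omega2(t) - om * om) + 1.5 * dom * dom / om]

h, T = 1e-3, 5.0
w0 = math.sqrt(omega2(0.0))
# initial conditions: Psi(0) = 1/sqrt(2 w0), dPsi(0) = -i w0 Psi(0)
psi = [1 / math.sqrt(2 * w0) + 0j, -1j * w0 / math.sqrt(2 * w0)]
freq = [w0, 0.0]             # Omega(0) = omega(0), dOmega(0) = 0
t = 0.0
for _ in range(int(T / h)):
    psi = rk4(mode_rhs, psi, t, h)
    freq = rk4(freq_rhs, freq, t, h)
    t += h

# the ansatz implies |Psi(t)| = 1/sqrt(2 Omega(t)) at all times
assert abs(abs(psi[0]) - 1 / math.sqrt(2 * freq[0])) < 1e-6
```

The same check applies mode by mode once $\omega_\Psi^2(t)$ is driven self-consistently by the effective masses.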
Substituting this ansatz into the Heisenberg equation of motion obeyed by each momentum mode separately, \begin{equation} \ddot{\hat\Psi}_p(t)+\omega_\Psi^2(t)\hat\Psi_p(t)=0, \end{equation} where $\omega_\Psi^2(t)=m_{\Psi}(t)^2+p^2$, yields the following nonlinear equation for $\Omega_\Psi$ \begin{eqnarray} \label{om_eqs} && \frac{\ddot{\Omega}_\Psi}{2 \, \Omega_\Psi}- \frac{3}{4} \(\frac{\dot{\Omega}_\Psi}{\Omega_\Psi}\)^2 +\Omega_\Psi^2 = \omega_\Psi^2(t), \nonumber \\ &&\dot \varphi_\Psi =\Omega_\Psi \, ,\\ && \Omega_\Psi(0)=\omega_\Psi(0), ~~~\dot{\Omega}_\Psi(0) =0, ~~~ \varphi_\Psi(0)=0, \nonumber \end{eqnarray} where a dot denotes a derivative with respect to time, and where for future convenience we have defined the phase $ \varphi_\Psi \equiv \int_0^t \Omega_\Psi(t')dt' $. Taking now into account the initial conditions for the field $\hat\Psi$ itself, we obtain \begin{equation} \hat\Psi_p(t) = \hat\Psi_p(0^+) \sqrt{\Omega_\Psi(0)\over \Omega_\Psi(t)} \cos\(\int_0^t \Omega_\Psi(t')dt'\) +\dot{\hat\Psi}_p(0^+) {1\over \sqrt{\Omega_\Psi(0)\Omega_\Psi(t)}} \sin\(\int_0^t \Omega_\Psi(t')dt'\)\, . \labell{fullsol} \end{equation} The field $\hat\Psi$ is continuous across the quench, {\it i.e.,}\ $\hat\Psi_p(0^+)=\hat\Psi_p(0^-)$. The same is true about the $\phi$-component of $\dot{\hat\Psi}_p$. However, as argued in section \ref{major}, the time derivative of the $\psi$-component exhibits an abrupt jump at $t=0$ \begin{equation} \dot{\hat\psi}_p(0^+)= \dot{\hat\psi}_p(0^-)+i\big(m_\psi(0^+)-\mu_0\big)\hat{\overline\psi}_p(0^-)~. \labell{jump} \end{equation} Using \reef{fullsol}, it was shown in \cite{Sotiriadis:2010si} that \begin{equation} \label{rho_t} \rho(t)=\int \frac{d^2 p}{(2 \pi)^2} \, \frac{1}{2\,\Omega_\phi (t)} \left( 1+\frac{\(\Omega_\phi(0)-\omega_{0p}\)^2}{2 \Omega_\phi(0)\omega_{0p}} + \frac{\Omega_\phi(0)^2-\omega_{0p}^2}{2 \Omega_\phi(0)\omega_{0p}} \, \cos(2 \varphi_\phi) \right).
\end{equation} Similarly, using \reef{fullsol}, \reef{jump} and \reef{anticomrel}, the Majorana loop takes the following form \begin{equation} \label{rhopsi_t} \rho_\psi(t)=- \int \frac{d^2 p}{(2 \pi)^2} \, { m_\psi(0^+)\big(p^2+m_\psi(0^+)\mu_0\big)- p^2 \big(m_\psi(0^+)-\mu_0\big)\cos(2\varphi_\psi)\over \Omega_\psi(t)\Omega_\psi(0)\omega_{0p}}~. \end{equation} In the limit of large momentum $\Omega_\Psi(t)$ approaches $\omega_\Psi(t)$, and therefore both $\rho(t)$ and $\rho_\psi(t)$ exhibit a linear divergence. Furthermore, as $p\to\infty$ the oscillatory term in the integrand of $\rho_\psi(t)$ behaves as $\cos(2 p t)/p$, which upon integration over $p$ boils down to a delta function supported at $t=0$. As argued in section \ref{major}, these singularities can be removed using, {\it e.g.,}\ a dimensional regularization scheme. Unfortunately, such a scheme is difficult to implement numerically. Therefore we resort to a different regularization procedure. First, following our discussion in section \ref{major}, we separate the singularity associated with the delta function \begin{equation} \rho_\psi(t)=\rho_\psi(t)- \big(m_\psi(0^+)-\mu_0\big)\int \frac{d^2 p}{(2 \pi)^2} {\cos(2pt) \over p}+{m_\psi(0^+)-\mu_0\over 2} \delta(t)~. \end{equation} Next we drop $\delta(t)$, since for $t>0$ it vanishes, whereas at $t=0$ it generates a divergence that can be renormalized away, {\it i.e.,}\ \begin{equation} \label{rhopsiprime_t} \rho_\psi(t)\to\rho'_\psi(t)=\rho_\psi(t) - \big(m_\psi(0^+)-\mu_0\big)\int \frac{d^2 p}{(2 \pi)^2} {\cos(2pt) \over p}~. \end{equation} Finally, we introduce a sharp cut-off $\Lambda$ in the integrals over momentum to regularize the divergent loops of $\rho'_\psi(t)$ and $\rho_\phi(t)$. The terms that depend on the cut-off scale $\Lambda$ can be absorbed in the redefinition of the mass parameters\footnote{Note that because of the delta function singularity, the mass counterterms at $t=0$ are different from the mass counterterms at $t>0$.
However, such peculiar behaviour of the counterterms can be attributed to the specific regularization scheme. As we already noticed, everything is completely smooth if dimensional regularization is adopted.}. To maintain conformal invariance across the quench instant we set the renormalized mass parameters to zero. As a result, we obtain \begin{eqnarray} m_{\phi}(t)^2& =& 3 g^2 \( \rho-\frac{\Lambda}{4 \pi}\)^2+ g \( \rho'_\psi+ m_\psi(0^+)\frac{\Lambda}{2 \pi}\), \label{mphi} \\ m_\psi(t)&=&g \( \rho-\frac{\Lambda}{4 \pi}\). \label{meff_reg} \end{eqnarray} Together with (\ref{om_eqs},\ref{rho_t}, \ref{rhopsi_t}) and (\ref{rhopsiprime_t}) these give the coupled system of equations that determines the time-evolution of the effective masses. We found that, for reasons of numerical stability and accuracy, it is advantageous to rewrite these equations as differential equations in time for the masses, \begin{eqnarray} \label{meff_dot} {d{m}_{\phi}^2\over dt}& =& 6 g^2 \( \rho_\phi-\frac{\Lambda}{4 \pi}\) \, \dot{ \rho}_\phi+g\, \dot{\rho}'_\psi, \nonumber \\ \dot{m}_\psi&=&g \,\dot{ \rho}_\phi, \end{eqnarray} where \begin{eqnarray} \label{rhodot} \dot{\rho}_\phi&=&\int^\Lambda_0 dp\, \frac{p}{2 \pi} \(-\frac{\dot{\Omega_\phi}}{2 \,\Omega_\phi^2}\) \( 1+\frac{\(\omega_\phi(0)-\omega_0\)^2}{2 \omega_\phi(0)\omega_0} + \frac{\omega_\phi(0)^2-\omega_0^2}{2 \omega_\phi(0)\omega_0} \, \cos(2 \varphi_\phi) \) \nonumber \\ &-& \int^\Lambda_0 dp\, \frac{p}{2 \pi} \frac{\omega_\phi(0)^2-\omega_0^2}{2 \omega_\phi(0)\omega_0} \, \sin(2 \varphi_\phi), \\ \dot\rho'_\psi&=& \int \frac{d^2 p}{(2 \pi)^2} \,\(\frac{\dot{\Omega_\psi}}{\Omega_\psi^2}\) \, { m_\psi(0^+)\big(p^2+m_\psi(0^+)\mu_0\big)- p^2 \big(m_\psi(0^+)-\mu_0\big)\cos(2\varphi_\psi)\over \Omega_\psi(0)\omega_{0p}} \nonumber \\ &-& 2\big(m_\psi(0^+)-\mu_0\big)\(\int \frac{d^2 p}{(2 \pi)^2} { p^2 \sin(2\varphi_\psi)\over \Omega_\psi(0)\omega_{0p}} -\int \frac{d^2 p}{(2 \pi)^2} \sin(2pt)\)~.
\end{eqnarray} We solve the above coupled equations numerically by applying a modified version of the algorithm proposed in \cite{Sotiriadis:2010si}. To this end the equations are discretized in momentum space, so that we consider only the lowest $N_p$ uniformly spaced modes with $p \leq \Lambda$. Starting from $t=0$, the set of equations (\ref{om_eqs}) for the scalar and fermion modes, together with the time-dependent equations for the masses (\ref{meff_dot}), are advanced in time. In practice, we use the 4th order Runge-Kutta time-stepping method for the time evolution, and the Simpson method for evaluating the momentum-space integrals in (\ref{rhodot}), see e.g. \cite{NumRec}. Typical values of the numerical parameters used to generate the results discussed in this paper are in the ranges\footnote{$\mu_0$ is the only dimensionful parameter that sets a scale in our numerical simulations, and we choose $\mu_0=1$.} $\Lambda=10-400,~ N_p=2000-50000$, and the discrete time-step is of order $h_t =0.001$. Self-consistency of the approach requires that the effective masses remain well below $\Lambda/4 \pi$, and we verified that this is indeed satisfied in our case. Other checks reveal that the results are independent of $N_p$ provided $\Lambda/N_p \lesssim 0.005$, and that the convergence of the method as a function of $h_t$ is fourth order, meaning that the discretization errors scale as $O(h_t^4)$ in the continuum limit $h_t \rightarrow 0$. \begin{figure}[t!] \hspace{-0.8cm}\includegraphics[width=1.1\textwidth,height=5cm]{ms-mf_new01.pdf} \caption{Time-evolution of the effective masses for several values of the coupling $g$. We find that for a generic value of the coupling the effective masses tend to distinct constants at late times, signalling the breakdown of supersymmetry. The relaxation time scale grows with $|g|$. } \label{fig:meff_gs} \end{figure} Figure \ref{fig:meff_gs} shows the temporal dynamics of the effective masses for several values of the coupling $g$.
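The numerical scheme just described can be sketched as follows (a minimal Python illustration under our own naming conventions, not the production code): a single mode of (\ref{om_eqs}) is rewritten as a first-order system for $(\Omega,\dot\Omega,\varphi)$ and advanced with a classical RK4 step, with the effective frequency $\omega_\Psi^2$ held fixed over the step, while the momentum-space integrals are evaluated with a composite Simpson rule on the uniform mode grid:

```python
import numpy as np

def omega_rhs(y, omega_sq):
    """First-order form of the nonlinear mode equation (om_eqs),
    Omega'' = 2*Omega*(omega^2 - Omega^2) + (3/2)*Omega'^2/Omega,
    with state y = (Omega, dOmega/dt, phase)."""
    O, dO, _ = y
    ddO = 2.0 * O * (omega_sq - O**2) + 1.5 * dO**2 / O
    return np.array([dO, ddO, O])  # phase' = Omega

def rk4_step(y, omega_sq, h):
    """Classical 4th-order Runge-Kutta step (omega_sq frozen over the step)."""
    k1 = omega_rhs(y, omega_sq)
    k2 = omega_rhs(y + 0.5*h*k1, omega_sq)
    k3 = omega_rhs(y + 0.5*h*k2, omega_sq)
    k4 = omega_rhs(y + h*k3, omega_sq)
    return y + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

def simpson(f, p):
    """Composite Simpson rule on a uniform grid p (len(p) must be odd)."""
    h = p[1] - p[0]
    return (h/3.0) * (f[0] + f[-1] + 4.0*f[1:-1:2].sum() + 2.0*f[2:-2:2].sum())
```

In the full solver one such state vector is evolved for each of the $N_p$ modes of the scalar and the fermion, and $\omega_\Psi^2(t)=m_\Psi^2(t)+p^2$ is updated self-consistently at every time step via (\ref{meff_dot}). Note that with the initial conditions $\Omega_\Psi(0)=\omega_\Psi(0)$, $\dot\Omega_\Psi(0)=0$ a constant $\omega_\Psi$ keeps $\Omega_\Psi$ fixed, as it should.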
We find that generically the masses approach constant values at late times, and the relaxation time grows with $|g|$. Whilst the details of the evolution are dictated by the coupling constant, in all cases the masses of the scalar and the fermion remain distinct, indicating that supersymmetry is broken by the quench. Typically, the asymptotic mass of the scalar is very small, being several orders of magnitude below the mass of the fermion, whose absolute value is of order $0.1$. As suggested by the analysis of the effective potential in the previous section, in the special case of $g=-4 \pi$ supersymmetry is preserved by the quench. In the dynamical setup the effective masses of the scalar and fermion are initialized as $m_\psi(0^+)=m_\phi(0^+)=\mu_0=1$. This implies that $\Omega_\psi=\Omega_\phi \equiv \Omega$. Equations (\ref{rho_t},\ref{rhopsi_t}) yield $\rho_\psi(t)=-2\,\mu_0\,\rho(t) = -2\, \mu_0 \int p\, dp /(2 \pi\, \Omega) $. It then follows from (\ref{mphi},\ref{meff_reg}) that $m_\psi(t)=m_\phi(t)=\mu_0=1$ is time-independent in the limit of infinite $\Lambda$. Figure \ref{fig:meff_g4pi} depicts the dynamics of the effective masses for a set of increasing $\Lambda$'s. It demonstrates that small numerical imprecisions generated by finite $\Lambda$ eventually drive the masses away from the initial attractor values. Our numerical method performs better for larger cut-offs, smaller time-steps etc., such that the masses remain at their initial values for longer periods of time. The dynamical evolution at $g=-4 \pi$ and near it (see the rightmost panel of Figure \ref{fig:meff_gs}) indicates that the attractor at $m_\psi=m_\phi=\mu_0$ is unstable to small perturbations. This is in tune with the stationary analysis of the last section, which showed that in this case the effective potential has an inflection point, rather than a minimum, see Figure \ref{fig:phases}. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.6\textwidth]{mf_ms_m4pi.pdf} \caption{The time-evolution of the effective masses for $g=-4 \pi$ demonstrates the instability of the attractor $m_\psi(t)=m_\phi(t)=\mu_0$ to small numerical errors.} \label{fig:meff_g4pi} \end{center} \end{figure} \section{Concluding remarks} \labell{sec:concl} In this paper we applied the methods proposed in \cite{Sotiriadis:2010si} to quantum quenches in the presence of fermions. Our findings show that the fermionic field responds differently to abrupt changes in the parameters of the theory than its scalar counterpart. For instance, the qualitative behaviour of the fermionic field turns out to be sensitive to the parity of the spacetime dimension $d+1$, see (\ref{odd_even}). This can be attributed to the fact that fermions obey the first order Dirac equation, and therefore the time derivative of the field experiences a jump at the instant of the quench, as opposed to scalars, which satisfy the second order Klein-Gordon equation, so that both the field and its time derivative are continuous across the quench. To gain a better understanding of this we investigate the expectation value of the fermionic mass term $\langle \bar\psi\psi\rangle$ in the case of a free Majorana field. This problem can be solved exactly. We find that for odd dimensional spacetime (even $d$) sharp quenches generate finite and ultra-local terms in $\langle \bar\psi\psi\rangle$, {\it i.e.,}\ regular functions superposed with a delta function or its derivatives supported at the instant of the quench, whereas for even dimensional spacetime (odd $d$), sharp quenches result in a singular contribution to $\langle \bar\psi\psi\rangle$ that behaves as $t^{1-d}$ in the vicinity of the quench at $t=0$, but is otherwise finite everywhere. In even $d$'s these singularities can be ignored on account of an appropriate regularization scheme accompanied by the standard renormalization of the field theory parameters.
Indeed, divergences associated with the delta function and its derivatives are scheme dependent and can therefore be removed by a proper choice of regularization scheme, {\it e.g.,}\ in dimensional regularization they are absent. In odd $d$, however, sharp quenches require a refined analysis, since the divergences in that case are independent of the choice of regularization scheme. This result is in agreement with \cite{Buchel:2012gw}, where using AdS/CFT it was argued that the limit of sharp quenches is not smooth. We studied the effect of supersymmetric quantum quenches on the state of the theory at times large compared to any other scale in the problem. No self-consistent computational framework to tackle this problem in general has been found yet. Therefore, we model the quench in the supersymmetric $O(N)$ vector model in the limit of large $N$ and adopt the simplifying assumptions proposed in \cite{Sotiriadis:2010si}. To avoid singularities that may emerge at the instant of a sharp quench in even dimensional spacetime and that cannot be removed by a proper choice of regulator, we consider quantum quenches of the simplest supersymmetric extension of the three dimensional ($d=2$) $\phi^6$ model \cite{Hung:2012zr}. Using the stationary phase approximation we find that a supersymmetric quantum quench breaks supersymmetry in the asymptotic state. From this perspective the effect of a quantum quench is reminiscent of finite temperature effects. In both cases supersymmetry is broken due to the different boundary conditions satisfied by the scalar and fermionic fields. However, SUSY breaking in our case cannot be attributed to thermalization, since to leading order in the large $N$ limit the system is integrable, and there is no effective temperature that can be assigned to the final state. There is one unfortunate caveat to the above approach.
Generic results obtained in this way should be trusted with caution, since the Sotiriadis-Cardy approach lacks an analytical argument supporting its assumptions. This is why the authors of \cite{Sotiriadis:2010si} resort to a numerical evaluation of exact expressions and eventually find a remarkable match with the analytical predictions obtained in the framework of the proposed approximation. Note, however, that the matching between the analytical and numerical results in \cite{Sotiriadis:2010si} was achieved in the case of scalar field theory only, and it is completely reasonable to expect that the behaviour of the fermionic field is very different. Therefore, to verify and test our stationary analysis, we derive exact equations of motion and integrate them numerically in time. We find that SUSY is broken in the dynamical setting as the system relaxes to a stationary state, see figure \ref{fig:meff_gs}. However, in spite of the qualitative agreement with the analytical predictions, the quantitative details are somewhat different. Perhaps the chief reason for the apparent discrepancy is the intrinsic assumption of the current approach that the masses tend to constant values at late times after an initial jump-like transient that leaves no imprint on the asymptotic dynamics. This approach proved to be robust in the case of $\phi^4$ field theory in various dimensions. However, as we argued here, fermionic fields exhibit a substantially different response to quantum quenches, and therefore it is not too surprising that some of the assumptions that worked well in the case of scalars are less successful in the case of fermions. In particular, the numerical time-evolution in the case of fermions shows that the transient is significant even for small $g$'s, and it keeps growing for larger values of the coupling constant. In this paper we considered only the $O(N)$-symmetric part of the full phase diagram of the model.
However, it is of particular interest to explore the impact of quantum quenches on the broken $O(N)$ symmetry, on which some numerical work has appeared in \cite{Gubser}. We leave investigation of this matter for future work. On the one hand, research in this direction will allow us to compare the full phase diagram of the model in the quenched case with its counterpart at finite temperature \cite{Moshe:2002ra}, while on the other hand, it will provide a better insight into the mechanism of symmetry breaking in general. \acknowledgments We would like to thank Alex Buchel and especially Robert C. Myers for useful discussions and correspondence. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation. ES is partly supported by National CITA Fellowship and by NSERC discovery grant.
\section{Introduction} The classical mechanics (CM) of Newton gives a deterministic description of objects (particles, bodies) supposed to have a reality in an inertial frame of the Galilei space-time centered on an inertial mathematical observer playing no dynamical role beyond defining Cartesian coordinates. This space-time is assumed to be a given background container of the real objects, whose world-lines are described in terms of an absolute notion of time. At each instant there is an absolute Euclidean 3-space where the objects are localized. The inertial frames are connected by the transformations of the Galilei group. This description can be extended to non-inertial frames centered on accelerated mathematical observers. \bigskip This realistic description of the world-lines of particles is preserved in special relativity (SR). However, now they are described in the inertial frames of Minkowski space-time centered on inertial mathematical relativistic observers, and the Poincar\'{e} group describes the transformations connecting the inertial frames. However, in SR there is no notion of absolute time and of absolute 3-space: only the whole Minkowski space-time is absolute, and only the conformal structure (i.e. the light-cone describing the locus of incoming and outgoing radiation in every point) has an intrinsic meaning. As a consequence, we must introduce a \textit{convention of clock synchronization} to define an instantaneous 3-space, whose definition is needed to formulate the Cauchy problem for wave equations like Maxwell's ones. \bigskip In this Introduction we give an outline of the main problems to be faced in the definition of relativistic classical mechanics (RCM) and relativistic quantum mechanics (RQM). This will clarify the context which gave origin to this paper and to its results and implications.
\bigskip Usually RCM is formulated in inertial frames, whose Euclidean 3-spaces are defined by Einstein's convention \footnote{The inertial observer A sends a ray of light at $x_{i}^{o}$ towards the (in general accelerated) observer B; the ray is reflected towards A at a point P of B's world-line and then reabsorbed by A at $x_{f}^{o}$; by convention P is synchronous with the mid-point between emission and absorption on A's world-line, i.e. $x_{P}^{o}=x_{i}^{o}+{\frac{1}{2}}\,(x_{f}^{o}-x_{i}^{o})={\frac{1}{2}}\,(x_{i}^{o}+x_{f}^{o})$. This convention selects the Euclidean instantaneous 3-spaces $x^{o}=ct=const.$ of the inertial frames centered on A. However, if the observer A is accelerated, the convention can break down due to the possible appearance of coordinate singularities.}. Only with this convention does the 1-way velocity of light between two observers (it depends on how their clocks are synchronized) coincide with the 2-way velocity of light $c$ of an inertial observer (it replaces the unit of length in relativistic metrology \cite{1}). \bigskip However, this description of RCM is still incomplete for interacting systems due to the following problems: 1) Unlike in Newtonian mechanics, there is not a unique notion of the relativistic center of mass of a system of particles; 2) There is the problem of the elimination of relative times in relativistic bound states (time-like excitations are not seen in spectroscopy); 3) It is highly non-trivial to find the explicit form of the Poincar\'e generators (especially the Lorentz boosts) for interacting particles in the instant form of dynamics; 4) There is no accepted global formulation of non-inertial frames without the pathologies of the rotating disk and of Fermi coordinates. \bigskip Recently a solution to all these problems has been given in Refs.\cite{2,3,4} (see also the review in Ref.\cite{5}).
As sketched in Section II, the main differences from non-relativistic CM are the \textit{non-local nature} of the relativistic collective variables proposed for the relativistic center of mass (implying their non-measurability with local measurements) and a \textit{spatial non-separability} of the particles, which must be described by means of suitable Wigner-covariant relative 3-variables. \bigskip This formulation of RCM allows one to get a consistent definition of RQM of particles with an associated notion of relativistic entanglement as an extension of non-relativistic quantum mechanics (NRQM) avoiding all the known relativistic pathologies. This was done in Ref.\cite{6}. This framework for RCM has been extended to classical field theory (CFT) in Refs.\cite{3,5} both for classical fields and fluids, but the extension of the approach to quantum field theory (QFT) has still to be done. \bigskip Unlike the transition from Galilei space-time to the Minkowski one, the transition from CM to NRQM can be done only in an operational way due to the big unsolved foundational problems of NRQM (see for instance Ref.\cite{7}) \footnote{Let us remark that for many physicists the absence of experimental facts in contrast with QM is an indication that the foundational problems are fake problems of philosophical type. See for instance Ref.\cite{8}. Our attitude is neither operational nor foundational: we try to understand some aspects of the transition from the quantum to the classical regime following Bohr's viewpoint.}.
The main problem is that the notion of {\it reality} of a classical particle and of its properties cannot be extended to NRQM, as shown A) by the EPR experiment (see Ref.\cite{9} for a review) and the violation of Bell's inequalities (no local realistic hidden-variable explanation; see Ref.\cite{10} for the status of experiments); B) by the Kochen-Specker theorem \cite{11} (no non-contextual explanation of the properties of a quantum system); C) by the probabilistic Born rule for the unique outcomes of measurements (in a random way a unique value is obtained for the observable describing a property of the state of the quantum object under investigation; but a repeated measurement on an identical state can give other values, i.e. we cannot speak of a property of the quantum system but only of a quantity which takes a value randomly depending on the context). \bigskip As a consequence it is not clear what is the meaning of the localization of a quantum particle, even having taken into account the Heisenberg uncertainty relations (see Ref.\cite{12} for the problem of simultaneous measurements of position and momentum). \medskip On one side we have the mathematical theory of NRQM with the unitary evolution of the wave function, but there is no consensus on whether this wave function describes the given quantum system or is only information about a statistical ensemble of such systems. Moreover, it could be that only the real-valued density matrix, i.e. the statistical operator determining the probabilities, makes sense, and not the complex-valued wave function (only a mathematical tool). There is no accepted interpretation for the theory of measurements going beyond the non-unitary collapse of the wave function in an instantaneous idealized Von Neumann measurement of a self-adjoint operator describing some mathematical observable property of the quantum system.
\medskip In experiments we have macroscopic semi-classical objects as sources and detectors of quantities named quantum particles (or atoms), and the results shown by the pointers of the detectors are the end point of a macroscopic (many-body) amplification of the interaction of the quantum object with some microscopic constituent of the detector (for instance an $\alpha $ particle interacting with a water molecule followed by the formation of a droplet as the amplification allowing detection of the particle trajectory in bubble chambers). Usually one invokes the theory of decoherence \cite{13,14}, with its uncontrollable coupled environment, for the emergence of robust classical aspects explaining the well defined position of the pointer in a measurement. \bigskip Also, recently Haag \cite{15} said that the deterministically propagating (due to the Schroedinger equation) pure state of a quantum particle has no objective significance and does not represent a real phenomenon. The only relevant notions are the \textit{events}, namely a set of mutually exclusive possibilities with an associated probability assignment for the outcomes of operationally defined measurements. To identify events in particle physics we need not only the beams of incoming particles but also an effective microscopic description of the interaction of the particle with some constituent of the macroscopic detector. This means that we need a localization region in space-time and a description of the detector (what it is supposed to measure via a microscopic interaction that is then suitably amplified). Then we need a principle of random realization to justify the unique outcome of each measurement, whose repetition gives results distributed according to the Born rule.
In this ensemble interpretation only the events, containing not only the quantum particle but also the measuring apparatus, have some kind of reality: due to decoherence things happen "as if" a semi-classical particle interacts with semi-classical constituents of the detector. \bigskip A first step to face all these problems (always in an ensemble interpretation) is presented in Ref.\cite{16}. It is an approach more general than decoherence but limited to spin systems (the only ones where many-body calculations can be explicitly made). One considers the interacting object as an \textit{open quantum subsystem} \cite{17} of a macroscopic many-body system (system plus detector plus environment) with the induced non-unitary stochastic behavior (even without going to the thermodynamic limit): now time scales for the various phases of the measurement and aspects of decoherence can be explicitly evaluated. \medskip The described state of affairs is in accord with Bohr's point of view, according to which we need a classical description of the experimental apparatus. It seems that all the realizable experiments must admit a quasi-classical description not only of the apparatus but also of the quantum particles: they are present in the experimental area as classical effective particles with a mean trajectory and a mean value of the 4-momentum (measured with time-of-flight methods). \medskip As shown in Ref.\cite{18}, if one assumes that the wave function describes the given quantum system (no ensemble interpretation), the statement of Bohr can be justified by noting that the wave functions used in the preparation of particle beams (semi-classical objects with a mean classical trajectory and a classical mean momentum determined with time-of-flight methods) are a special subset of the wave-function solutions of the Schroedinger equation for the given particles.
Their associated density matrix, pervading the whole 3-space, admits a \textit{multipolar expansion around a classical trajectory} having \textit{zero dipole}. This implies that in this case the equations of the Ehrenfest theorem give rise to the Newton equations for the Newton trajectory (the monopole; it is not a Bohmian trajectory) with a classical force augmented by forces of a quantum nature coming from the quadrupole and the higher multipoles (they are proportional to powers of the Planck constant). As shown in Ref.\cite{18}, the mean trajectories of the prepared beams of particles and of the particles revealed by the detectors are just these classically emerging Newton trajectories implied by the Ehrenfest theorem for wave functions with zero dipole. Also, all the intuitive descriptions of experiments in atomic physics are compatible with this emergence of classicality. In these descriptions an atom is represented as a classical particle delocalized in a small sphere, whose origin can be traced to the effect of the higher-multipole forces in the emerging Newton equations for the atom trajectory. Wave functions without zero dipole do not seem to be implementable in feasible experiments. No explanation is given of the probabilistic Born rule, but it is suggested that the random unique outcomes have a quasi-classical localization given by these Newton trajectories. \bigskip In this paper we want to focus on the problems connected with the localization of particles both at the relativistic and non-relativistic levels and both at the classical and quantum levels, QFT included. We identify the existing proposals for position measurements and we analyze the existing theoretical problems in this area (usually not well known to many researchers).
The comparison with the experimental status of particle and atom localization will help to clarify whether it is possible to measure the non-relativistic center of mass of a system or whether it is non-measurable like the relativistic collective variables. Then the results on localization will be used to clarify the connection between relativistic and non-relativistic entanglement and what can be seen in the experiments. We hope that collecting and gluing together results on these topics coming from communities that usually do not interact with each other will be helpful for researchers approaching the quickly developing areas of mesoscopic physics, atomic and molecular physics, atomic clocks and space physics, quantum information, teleportation, and so on. \medskip In Section III we review the problems in the localization of particles both at the relativistic (Subsection A) and at the non-relativistic (Subsection B) level; then in Subsection C we look at the notion of particle in QFT and at its problems. In Section IV we show the differences between non-relativistic and relativistic entanglement in a two-body case, induced by the relativistic spatial non-separability forbidding the identification of subsystems. In Section V we study the preparation and detection of particles in experiments and we propose a reconciliation of the non-relativistic and relativistic visions valid for all practical purposes. In the final Section there are concluding remarks and a list of open problems. \section{Review of Relativistic Classical and Quantum Mechanics} The new formulation of RCM \cite{2,3,5} and of a consistent RQM \cite{6} makes use of the 3+1 point of view to build a theory of global non-inertial frames centered on arbitrary time-like observers \cite{2,5}.
This is done by giving the world-line of the time-like observer and a nice foliation of Minkowski space-time with non-intersecting space-like Riemannian 3-spaces, all tending to the same space-like hyper-plane at spatial infinity. Moreover, one uses the radar 4-coordinates $(\tau;\sigma^{r})$, i.e. an arbitrary monotonically increasing function of the proper time of the atomic clock carried by the observer and curvilinear 3-coordinates $\sigma^{r}$ centered on the observer for the 3-spaces. CFT may be reformulated in this framework by using fields knowing the clock synchronization convention. \medskip Both the knowledge of the whole world-line of an arbitrary time-like observer and of a nice foliation with 3-spaces of Minkowski space-time are \textit{non-factual} notions. The observer is a purely mathematical entity carrying a clock, an idealization of a physical atomic clock carried by a dynamical observer. The foliation is the mathematical idealization of a physical protocol of clock synchronization. Actually the physical protocols (think of GPS) can establish a clock synchronization convention only inside the future light-cone of the physical observer, defining the local 3-spaces only inside it. However, to be able to formulate the Cauchy problem for field equations and to have predictability of the future, due to the theorem on the existence and unicity of solutions of partial differential equations we have to extend the convention outside the light-cone\footnote{As far as we know, the theorem on the existence and unicity of solutions has not yet been extended starting from data given only on the past light-cone.}. Even though this extension is an unphysical process, we can predict the future with every observer receiving the information only from his/her past light-cone (retarded information from inside it; electromagnetic signals on it).
\medskip For non-relativistic observers the situation is simpler, but the non-factual need of giving the Cauchy data on a whole initial absolute Euclidean 3-space is present also in this case for non-relativistic field equations like the Euler equation for fluids. \subsection{Relativistic Classical Mechanics} As shown in Ref.\cite{2}, in this framework the description of isolated systems can be done with an action principle (the \textit{parametrized Minkowski theories} for particles, fields, strings, fluids) implying that the transition among non-inertial frames is described by gauge transformations (so that only the appearances of phenomena change, not the physics) and allowing one to define the energy-momentum tensor and then the Poincar\'{e} generators of the system. \medskip Inertial frames are a special case of this theory, having Euclidean 3-spaces. For isolated systems there is a special family of inertial systems, the \textit{intrinsic rest frames}, in which the space-like 3-spaces are orthogonal to the conserved time-like 4-momentum of the isolated system. At the Hamiltonian level it turns out that every isolated system can be described by a decoupled canonical non-covariant relativistic center of mass (whose spatial part is the classical counterpart of the Newton-Wigner position operator). Such a system carries a pole-dipole structure, namely an internal 3-space with a well defined total invariant mass $M$ and a total rest spin $\vec{S}$, and a well defined realization of the Poincar\'{e} algebra (\textit{the external Poincar\'{e} group} for a free point particle, which we identify with the \textit{external center of mass}, whose mass and spin are its Casimir invariants describing the matter of the isolated system in a global way).
The internal rest 3-space, named Wigner 3-space, is defined in such a way that it is the same for all the inertial rest frames and its 3-vectors are Wigner spin-1 3-vectors \cite{19}, so that the covariance under Poincar\'{e} transformations is under control. The particles of the isolated system, all having the same time of the given 3-space, are identified by this type of Wigner-covariant 3-vectors (see Refs.\cite{3,5} for the description of fields). \medskip As shown in Refs.\cite{6,20}, the canonical non-covariant (a pseudo 4-vector) relativistic center of mass, the non-canonical covariant (a 4-vector) Fokker-Pryce center of inertia and the non-canonical non-covariant (a pseudo 4-vector) M\o ller center of energy are the \textit{only three relativistic collective variables} which can be built only in terms of the Poincar\'{e} generators of an isolated system, so that they depend only on the system and on nothing external to it. All of them collapse onto the Newton center of mass of the system in the non-relativistic limit\footnote{It is of interest that the three properties of the non-relativistic center of mass, namely i) being a position associated with the spatial mass distribution of the constituents, ii) transforming under rotations as a 3-vector and iii) forming, together with the total momentum, a pair of canonical variables, have their respective relativistic counterparts taken up by the M\o ller non-covariant, non-canonical center of energy $R^{\mu }(\tau )$, the covariant but non-canonical Fokker-Pryce center of inertia $Y^{\mu }(\tau )$ and the canonical but non-covariant center of mass $\tilde{x}^{\mu }(\tau )$.}.
\medskip As shown in Refs.\cite{2,4}, in the Wigner 3-space there is another realization of the Poincar\'e algebra (\textit{the internal Poincar\'e group}) built with the rest 3-coordinates and 3-momenta of the matter of the isolated system starting from its energy-momentum tensor: the internal energy is the invariant mass $M$ (the Hamiltonian inside the rest 3-space) and the internal angular momentum is the rest spin $\vec S$. Since we are in rest frames, the internal 3-momentum must vanish. Moreover, to avoid a double counting of the center of mass, the internal center of mass, conjugate to the vanishing 3-momentum, has to be eliminated: this is done by fixing the value of the internal Poincar\'e boost. If we put it equal to zero, this implies that the time-like observer has to be an inertial observer coinciding with the non-canonical 4-vector describing the Fokker-Pryce center of inertia of the isolated system. Therefore the internal realization of the Poincar\'e algebra is unfaithful and inside the Wigner rest 3-spaces the matter is described by \textit{relative} 3-positions and 3-momenta. \medskip The world-lines of the particles (and their 4-momenta) are derived notions, which can be rebuilt given the relative 3-coordinates, the time-like observer (for instance the Fokker-Pryce center of inertia) and the axes of the inertial rest frame \cite{4}. They are described by 4-vectors $x^{\mu}(\tau)$, which however are not canonical as in most of the approaches: there is a classical \textit{non-commutative structure} induced by the Lorentz signature of Minkowski space-time \cite{4,6}. \medskip As shown in Ref.
\cite{3}, these three variables can be expressed as known functions of the Lorentz scalar rest time $\tau $, of canonically conjugate Jacobi data (frozen, i.e.\ fixed at $\tau =0$, Cauchy data) $\vec{z}=Mc\,{\vec{x}}_{NW}(0)$, $\vec{h}=\vec{P}/Mc$ (${\vec{x}}_{NW}(\tau )={\vec{\tilde{x}}}(\tau )$ is the standard Newton-Wigner 3-position; $P^{\mu }$ is the external 4-momentum), and of the invariant mass $M$ and rest spin $\vec{S}$. The external Poincar\'{e} generators are then expressed in terms of these variables. \medskip As said in Ref.\cite{6}, since the three relativistic collective variables depend on the internal Poincar\'e generators $M$ and $\vec S$, which are conserved integrals of suitable components of the energy-momentum tensor of the isolated system over the whole rest 3-space, they are \textit{non-local} quantities which cannot be determined with local measurements. \subsection{Relativistic Quantum Mechanics} The use of $\vec{z}$ avoids taking into account the mass spectrum of the isolated system at the quantum kinematical level and allows one to avoid the Hegerfeldt theorem (the instantaneous spreading of wave packets with violation of relativistic causality) in RQM \cite{6}. \bigskip Besides these non-local features, in RQM there is an intrinsic \textit{spatial non-separability} forbidding the identification of subsystems at the physical level and generating a notion of relativistic entanglement very different from the non-relativistic one. \medskip In order to exhibit these two properties, let us consider a quantum two-body system.
In non-relativistic quantum mechanics (NRQM) its Hilbert space can be described in the three following unitarily equivalent ways \cite{6}: A) as the tensor product $H=H_{1}\otimes H_{2}$, where $H_{i}$ are the Hilbert spaces of the two particles (separability of the two subsystems as the zeroth postulate of NRQM); B) as the tensor product $H=H_{com}\otimes H_{rel}$, where $H_{com}$ is the Hilbert space of the decoupled free Newton center of mass and $H_{rel}$ the Hilbert space of the relative motion (in the interacting case only this presentation implies the separation of variables in the Schroedinger equation); C) as the tensor product $H=H_{HJcom}\otimes H_{rel}$, where $H_{HJcom}$ is the Hilbert space of the frozen Jacobi data of the Newton center of mass (use is made of the Hamilton-Jacobi transformation). \medskip Each of these three presentations gives rise to a different notion of entanglement due to the different notion of separable subsystems. As shown in Ref.\cite{19}, other presentations are possible in NRQM: in each presentation there is a different notion of separable or entangled pure state (the same is true in the mixed case). \medskip As shown in Ref.\cite{6}, at the relativistic level the elimination of the relative times of the particles (they are defined in a 3-space with a definite value of time) and the treatment of the relativistic collective variables allow \textit{only the presentation C)}, i.e. $H=H_{HJcom}\otimes H_{rel}$, with $H_{HJcom}$ being the Hilbert space associated with the quantization of the canonically conjugate frozen Jacobi data $\vec{z}$ and $\vec{h}$, and $H_{rel}$ the Hilbert space of the Wigner-covariant relative 3-coordinates and 3-momenta. Therefore only the frozen relativistic 3-center of mass and the set of all the relative variables are the admissible separable relativistic subsystems in RQM.
Already at the classical level the subsystems particle 1 and particle 2 (without relative times) are only defined in the \textit{un-physical} rest 3-space, i.e. the 3-space before adding the rest-frame conditions eliminating the internal 3-center of mass and its 3-momentum. In contrast, the physical space is the one with these constraints imposed. The rest-frame conditions, defining the physical variables, destroy the separability of the particles, leaving only relative variables. In this framework there are no problems with the treatment of relativistic bound states\footnote{If one considers the tensor product $H_{1}\otimes H_{2}$ of two massive Klein-Gordon particles, most of the states will have one particle allowed to be in the absolute future of the other due to the lack of restrictions on the relative times. Only in S-matrix theory is this irrelevant, since one takes the limit of infinite future and past times.}. \medskip Let us remark that instead of starting from the physical Hilbert space containing the frozen Jacobi data, one could first define an un-physical Hilbert space containing the Jacobi data and the 3-positions and 3-momenta of the particles (in it we have the same kind of separability as in the presentation A) of NRQM) and then define the physical Hilbert space by imposing the rest-frame conditions at the quantum level with the Gupta-Bleuler method. However there is the risk of obtaining an inequivalent quantum theory due to the complex form of the internal boosts. \subsection{Classical and Quantum Field Theory} Given a 3+1 splitting associated with a time-like observer using radar 4-coordinates $\sigma^A = (\tau, \sigma^r)$, we can rebuild the Cartesian coordinates of an inertial observer of Minkowski space-time with a coordinate transformation $\sigma^A\, \rightarrow\, x^{\mu} = z^{\mu}(\tau, \sigma^r)$, with $z^{\mu}(\tau, 0) = x^{\mu}(\tau)$ being the world-line of the time-like observer.
The functions $z^{\mu}(\tau, \sigma^r)$ describe the embedding of the instantaneous 3-spaces $\Sigma_{\tau}$ in Minkowski space-time. \medskip Given a classical field, for instance the Klein-Gordon field $\tilde \phi(x^{\mu})$, its reformulation as a field knowing the clock synchronization convention is $\phi (\tau, \sigma^r) = \tilde \phi(x^{\mu} = z^{\mu}(\tau, \sigma^r))$. These are the fields used in parametrized Minkowski theories \cite{2}. \medskip As shown in Ref.\cite{3} for the case of particles plus the electro-magnetic field, at the classical level one can define the relativistic external center of mass and the relative variables for these fields and find the rest-frame conditions eliminating the internal center of mass. In atomic physics this allows one to avoid pathologies like the Haag theorem (non-existence of the interaction picture in QFT) and to follow the evolution of atoms in the interaction region for finite times, taking into account the relativistic properties of non-separability and non-locality. The extension of these results to QFT is highly non-trivial, because at the classical level one uses variables of the action-angle type for which no consistent quantization exists. The alternative is to quantize the standard variables and to try to impose the quantum rest-frame conditions with the Gupta-Bleuler methods. In any case a consistent quantization along these lines would lead to a non-local QFT due to the relativistic properties of non-separability and non-locality. \section{Localization of Particles} Both NRQM and RQM are defined on a fixed space-time structure, the Galilei and Minkowski space-times respectively.
More exactly, they are defined in the \textit{inertial} frames of these space-times, because the extension to non-inertial frames is still an open problem\footnote{At the classical level, the framework described in Section II for the definition of parametrized Minkowski theories leads to the theory of global non-inertial frames (see Ref.\cite{2}). However till now only the quantization of particles in relativistic rotating frames and its non-relativistic limit have been studied (see Refs.\cite{22}).}. \medskip This spatio-temporal point of view is presupposed by the postulates of quantum mechanics (QM) and by each possible interpretation of it. It is only at the level of Einstein general relativity, where the metric structure of space-time and space-time itself become dynamical, that this scheme breaks down, opening the basic problem of getting a consistent theory of quantum gravity reconciling QM and gravitation (such a problem does not exist for Newtonian gravity, which is defined in Galilei space-time). \medskip If we start with this space-time oriented point of view, QM is defined in the Euclidean 3-spaces of the inertial frames of either Galilei or Minkowski space-time. This implies that \textit{the coordinate representation has a privileged kinematical and descriptive status} among all possible bases in the Hilbert space of quantum systems. Since all the experiments are localized in space-time, it is important always to consider the trajectories of the carriers of quantum properties (like spin or qubits or other quantum numbers) and not to treat the quantum systems independently from their localization in space-time. See the second part of the review paper \cite{23} for the relevance of Lorentz transformations for creating entanglement between spin and momentum degrees of freedom.
This privileged status of the coordinate representation coming from the spatio-temporal interpretation is different from (but is reinforced by) the existence of a natural selection of robust positional bases of pointer states for the apparatuses appearing in the \textit{de-coherence} approach to QM, with its dominant role of the environment in the description of entanglement (see Refs.\cite{13,14}). \bigskip Since the relativistic spatial non-separability forbidding the identification of the subsystems of a given quantum system is a consequence of defining relativistic collective position variables, one has to face the open problem of \textit{position measurements} in QM. While most of the mathematical properties of quantum systems are based on instantaneous precise measurements of self-adjoint bounded operators (the observables) with a discrete spectrum, whose treatment requires projection operators (or projection valued measures, PVM), position operators are usually described by self-adjoint unbounded operators without normalizable position eigenstates (usually one uses the improper Dirac kets $|\vec{x}\,>$, sharp eigenstates of the position operator, satisfying $<\vec{x}|\vec{y}>=\delta ^{3}(\vec{x}-\vec{y})$). However, as noted in Ref.\cite{13}, to be able to describe the standard (even if questionable) postulate of the non-unitary collapse of the wave function one needs position wave functions with a finite support (inside the apparatus) to avoid the necessity of using arbitrarily strong couplings and arbitrarily large amounts of energy to make an arbitrarily precise measurement of position. As a consequence, the notion of \textit{unsharp} positions with bad localization has emerged (see Refs.\cite{24} for the theory of positive operator valued measures, POVM). The results of measurements of a POVM give imprecise information of stochastic type on the localization of particles (see for instance Ref.\cite{25} for continuous quantum position weak measurements).
\medskip We will now describe some of the existing problems with the notions of position and localization both at the relativistic and non-relativistic levels, having in mind the following question: ``Is the center-of-mass position measurable?'' \subsection{The Relativistic Case} In the relativistic case there are two types of problems, one at the classical level, the other at the quantum level. \bigskip $\alpha $) \textit{M\o ller non-covariance world-tube} \cite{26}. As we have said, in each relativistic inertial frame one has the world-line of the Fokker-Pryce non-canonical, covariant center of inertia $Y^{\mu }(\tau )$ of the isolated system and different pseudo-world-lines for the non-covariant, canonical 4-center of mass ${\tilde{x}}^{\mu }(\tau )$ and for the non-covariant, non-canonical M\o ller center of energy $R^{\mu }(\tau )$. If in a given inertial frame we consider the positions of ${\tilde{x}}^{\mu }(\tau )$ and $R^{\mu }(\tau )$ corresponding to every possible inertial frame, we get a tube centered on $Y^{\mu }(\tau )$ (with ${\tilde{x}}^{\mu }(\tau )$ always lying between $Y^{\mu }(\tau )$ and $R^{\mu }(\tau )$). The invariant radius of the tube is determined by the two Casimir invariants, the invariant mass $M$ and the rest spin $\vec{S}$: $\rho =|\vec{S}|/Mc$. As said in Ref.\cite{3}, this classical intrinsic radius is a \textit{non-local effect of the Lorentz signature} of Minkowski space-time, absent in Euclidean spaces, and delimits the region of the non-covariant effects (the pseudo-world-lines) connected with the relativistic collective variables. These effects are not classically detectable because the M\o ller radius is of the order of the Compton wavelength of the isolated system: an attempt to test its interior would mean entering the quantum regime of pair production.
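The order-of-magnitude statement above can be made explicit (the numerical identification is ours, obtained only from the definitions already given): for a system with rest spin $|\vec S| = \hbar /2$ the M\o ller radius is

```latex
\beq
\rho \,=\, \frac{|\vec S|}{Mc}\,, \qquad |\vec S| = \frac{\hbar}{2}
\;\Longrightarrow\;
\rho \,=\, \frac{1}{2}\, \frac{\hbar}{Mc}\,,
\eeq
```

i.e. one half of the reduced Compton wavelength $\hbar /Mc$; probing distances below $\rho$ requires momentum transfers of order $Mc$, which is precisely the pair-production regime.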
The M\o ller radius $\rho $ is also a remnant of the energy-conditions of general relativity in the flat Minkowski space-time: if a body has its material radius less than its M\o ller radius, then there is some inertial frame in which the energy density of the body is not positive definite, even if the total energy is positive \cite{26}. \medskip Therefore the Compton wavelength is the best theoretical approximation for the localization of a classical massive particle. \bigskip $\beta $) \textit{Newton-Wigner operator}. As found in Ref.\cite{6}, at the quantum level the spatial component ${\vec{\tilde{x}}}$ of the canonical non-covariant center of mass ${\tilde{x}}^{\mu }=({\tilde{x}}^{o};{\vec{\tilde{x}}})$ becomes the Newton-Wigner position operator \cite{27}, whose eigenfunctions are wave functions with an infinite tail and a mean width around the eigenvalue of the order of the Compton wavelength. Therefore also at the quantum level there is bad localization. \medskip In Refs.\cite{28} it is said that we cannot consider the Newton-Wigner operator a self-adjoint operator (in the framework of quantum field theory it is neither a local nor a quasi-local operator) but at best a \textit{symmetric} operator\footnote{An operator $A$ in an infinite dimensional Hilbert space is said to be symmetric if $\langle Ay|x\rangle =\langle y|Ax\rangle $. Such operators are not diagonalizable and therefore describe real degrees of freedom which display a form of ``unsharpness'' or ``fuzziness''.}. See Refs.\cite{29} for an approach to \textit{fuzzy localization} based on the use of certain types of symmetric operators. \medskip Let us remark that, whichever point of view is chosen for the position operator, the generators of the Poincar\'{e} algebra (in particular the Lorentz ones) of isolated systems must be described by self-adjoint operators, as is usually assumed. This implies that the 4-momentum operators must be self-adjoint operators.
\medskip In the approach presented in Section II these problems appear only for the external non-covariant canonical center of mass described by the Jacobi data $\vec{z}$. While its conjugate variable $\vec{h}$ must be taken as a self-adjoint operator, it is a totally open problem how to quantize $\vec{z}$ and whether one has to introduce super-selection rules \cite{30} either forbidding its measurability or at least forbidding the possibility of making center-of-mass wave packets (only plane waves with a fixed eigenvalue of $\vec{h}$ allowed). \medskip However, particle physics experiments utilize beams of particles with a mean 4-momentum, localized around a classical trajectory pointing to the experimental area. Therefore the description of particle beams requires well-peaked wave packets in momentum space with also a good localization in the 3-space. \subsection{Non-Relativistic Case} In standard NRQM the positions of particles are usually described by self-adjoint unbounded operators, and usually one says that there are only experimental problems with the localization of particles and atoms. \medskip However, recently, in the framework of the theory of measurements based on POVM, the problem has been revisited by extending the Wigner-Araki-Yanase theorem \cite{31} from bounded (like angular momentum) to unbounded (position) operators. The theorem says that, given a conserved quantity (additive over the system plus apparatus), a discrete self-adjoint operator not commuting with the conserved quantity does not admit perfectly accurate and repeatable measurements. In Refs.\cite{32} it was shown that generically in an isolated two-body system with conserved momentum the conjugate center-of-mass operator (and also the absolute positions of the two particles) are \textit{unsharp}. Unsharp positions are different from the un-determination of symmetric operators, but the final result is the same.
On the other hand, there is no problem with the relative position variable. \medskip At the experimental level the previous statements have been confirmed also in Refs.\cite{33}, where it is shown that it is only possible to measure mutual relative positions of atoms. Regarding their absolute positions, the best localization of atoms which can be realized is at the level of hundreds of nanometers \cite{34,35}, much larger than the atom Compton wavelength. \bigskip In conclusion, at every level we have indications that the absolute position of massive particles can be determined only with a precision most probably much greater than the Compton wavelength of the particle, as happens with the radius of the macroscopic tracks of particles in bubble chambers. Therefore also in the non-relativistic case it seems that there are problems with the localization of the center of mass of isolated quantum systems: one can effectively only say that the atoms are inside the size of the apparatus. As a consequence, this state of affairs together with the results of Ref.\cite{33} points to the same picture as in RQM: a non-measurable center of mass plus non-separable relative motions. \bigskip Let us also remark that the standard treatment of non-relativistic particles, with its notion of separability of subsystems, ignores the fact that to take into account electro-magnetic interactions one has to use a $1/c$ approximation of QED below the threshold of pair production. Only in the limit $c \rightarrow\, \infty$ does one have an irreversible contraction of the Poincar\'e algebra to the Galilei one. Therefore atomic physics needs a relativistic formulation like the one of Ref.\cite{5} even when the particles have non-relativistic velocities: as just said, this picture is emerging already in NRQM. However the quantization of the electromagnetic field in the rest frame, with the rest-frame conditions implemented, has still to be done.
\bigskip Finally, notions like the Planck length, dimensionally relevant when one takes into account gravity, are completely outside the existing experimental level. \subsection{The Notion of Particle in Quantum Field Theory} Often it is said that the RQM of particles is an irrelevant theory, because relativistic particles have to be described by QFT, which solves the problem of negative energies with anti-particles and allows pair production. This is an ambiguous statement. Firstly, the description of relativistic bound states requires a transition from exact QFT equations (like the Bethe-Salpeter one) to effective RQM ones, valid below the threshold of pair production. Secondly, the existing notion of particle in QFT can be formulated only for free fields and is subject to criticism, as can be seen from the (philosophically oriented, but mathematically relevant) papers of Ref.\cite{36}\footnote{In the interacting case one loses control of the mass-shell condition of the interacting particles. Let us remark that in the formulation of RCM of Refs.\cite{2,3,4} the mass-shell condition is a derived property and depends upon the interactions.}. Finally, the standard definition of particles by means of the Fock space gives rise to completely delocalized objects (plane waves to be used in the in- and out-states of scattering theory). Moreover, the definition of positive- and negative-energy particles requires the existence of a time-like Killing vector of the space-time and of a suitable vacuum state (so that in curved space-times without Killing vectors one has to use algebraic QFT, where no sound definition of particle exists \cite{37}). \medskip Here we will consider only the massive uncharged Klein-Gordon quantum field in an inertial frame of Minkowski space-time, re-expressed in the radar 4-coordinates of an inertial 3+1 splitting with Euclidean 3-spaces.
It has the expression \bea \hat \phi(\tau, \vec \sigma) &=& \int {{d^3k}\over {(2 \pi)^3\, 2 \omega_k}}\, \Big[e^{- i\, (\omega_k\, \tau - \vec k \cdot \vec \sigma)}\, \hat a(\vec k) + e^{i\, (\omega_k\, \tau - \vec k \cdot \vec \sigma)}\, {\hat a}^{\dagger}(\vec k)\Big],\nonumber \\ {}&&\nonumber \\ &&[\hat a({\vec k}_1), \hat a({\vec k}_2)] = [{\hat a}^{\dagger}({\vec k}_1), {\hat a}^{\dagger}({\vec k}_2)] = 0,\qquad [\hat a({\vec k}_1), {\hat a}^{\dagger}({\vec k}_2)] = \delta^3({\vec k}_1 - {\vec k}_2), \label{a1} \eea \noindent with $\omega_k = \sqrt{{\vec k}^2 + m^2\, c^2}$. Instead of the plane waves $e^{\pm i\, (\omega_k\, \tau - \vec k \cdot \vec \sigma)}$ one can use any other basis of positive- and negative-energy solutions of the classical Klein-Gordon equation. By using the creation operators ${\hat a}^{\dagger}(\vec k)$ one can build the standard Fock space starting from the vacuum (defined by $\hat a(\vec k)\, | 0 > = 0$): it describes the particle (or better, quanta) states of the theory. \medskip Let us consider a 1-particle state ${\hat a}^{\dagger}(\vec k)\, | 0 >$ with an associated positive-energy solution $g(\tau, \vec \sigma) = < \vec \sigma | g(\tau) >$ of the classical Klein-Gordon equation. Therefore this wave function satisfies $i \hbar {{\partial}\over {\partial\, \tau}}\, g(\tau, \vec \sigma) = + \sqrt{m^2\, c^2 - \hbar^2\, {{\partial^2}\over {\partial\, {\vec \sigma}^2}}}\, g(\tau, \vec \sigma)$. Let $< g(\tau) | {\hat {\vec \sigma}} | g(\tau) >$ and $< g(\tau) | {\hat {\vec \pi}} | g(\tau) >$, with ${\hat {\vec \pi}} = i \hbar\, {{\partial}\over {\partial\, \vec \sigma}}$, denote the expectation values of the position and momentum operators in this state. \medskip As shown in Ref.\cite{18}\footnote{See the expanded version 1 of the arXiv paper.}, we can consider the multipolar expansion of the wave function $g(\tau, \vec \sigma)$ around a classical trajectory ${\vec \sigma}_{cl}(\tau)$.
For all the wave functions with vanishing dipole moment with respect to the classical trajectory we get $< g(\tau) | {\hat {\vec \sigma}} | g(\tau) > = {\vec \sigma}_{cl}(\tau)$, and the Ehrenfest theorem implies ${{d}\over {d\, \tau}}\, < g(\tau) | {\hat {\vec \sigma}} | g(\tau) > = {{d\, {\vec \sigma}_{cl}(\tau)}\over {d\, \tau}} = < g(\tau) | {{{\hat {\vec \pi}}}\over {\sqrt{m^2\, c^2 + {\hat {\vec \pi}}^2}}}| g(\tau) >$ and ${{d}\over {d\, \tau}}\, < g(\tau) | {\hat {\vec \pi}} | g(\tau) > = 0$, so that the classical trajectory is determined by the equation ${{d^2\, {\vec \sigma}_{cl}(\tau)}\over {d \tau^2}} = 0$. Therefore it is possible to associate an effective particle, following an effective mean trajectory, only with those 1-particle states whose wave function has a vanishing dipole. \bigskip Already in Minkowski space-time, without going to curved space-times and remaining in the area of condensed matter, the definition of an interacting theory governed by a unitary time evolution is a non-trivial problem: see Ref.\cite{38} for the difficulties in defining a self-adjoint Hamiltonian operator bounded from below in free and non-free QFT. Even when this can be done, as in some cases with Hamiltonians bilinear in the creation and annihilation operators, the time evolution implies a unitary (i.e. of the Hilbert-Schmidt type) Bogoliubov transformation leading to new creation and annihilation operators which are linear combinations of the old ones. At each time the instantaneous annihilation operator defines a new instantaneous vacuum, from which a new instantaneous Fock space with a different notion of particle can be created. The new 1-particle states are a superposition of all the states (with every possible particle number) of the initial Fock space.
The big open problem is what kind of quanta (either the initial or the final ones) materialize as effective particles detected by the measuring apparatus (which itself can be either inertial or accelerated). \medskip In the free case in Minkowski space-time one can consider uniformly accelerated observers (the Rindler ones used for obtaining the Unruh effect \cite{39}): they use a different time-like Killing vector for defining the notion of positive energy, and their description of the free Klein-Gordon quantum field is connected with the standard description given by an inertial observer by a Bogoliubov transformation leading to a representation of the free field unitarily inequivalent to the inertial one. Again, which one of the unitarily inequivalent quanta gives rise to an effective particle to be detected \cite{40}? The use of Rindler observers for studying the entanglement of modes of the electro-magnetic field in moving cavities in the framework of quantum optics is a quickly developing sector of relativistic quantum information, even if the basic interpretational problems are unsolved. \medskip Moreover, it has been shown \cite{41} that if one describes the free massive Klein-Gordon field in non-inertial frames (as is done in the Tomonaga-Schwinger formulation on arbitrary space-like hyper-surfaces of Minkowski space-time), then generically the time evolution is not unitarily implementable (the implied Bogoliubov transformation is not of the Hilbert-Schmidt type). \medskip In conclusion, the notion of particle in QFT is essentially valid for the in- and out-states of the S matrix in inertial frames, a framework relying upon a perturbation expansion with suitable ultra-violet and infra-red cutoffs.
\section{Relativistic Entanglement versus Non-Relativistic Entanglement} After the localization problem, let us now look at a simple two-body problem to display the \textit{changes in its separability and entanglement properties} in going from the non-relativistic case to the relativistic one. This example will also show the explicit construction of the relativistic collective variables in the two-body case. \bigskip As shown in Ref.\cite{42}, the electron-proton system (with masses $m_e$ and $m_p$ respectively) in the hydrogen atom, governed by the Hamiltonian $H = {\frac{{\vec p}^2_e}{2\, m_e}} + {\frac{{\vec p}^2_p}{2\, m_p}} - {\frac{e^2}{|{\vec x}_e - {\vec x}_p|}} = H_{com} + H_{rel}$ with $H_{com} = {\frac{{\vec p}^2}{2\, M}}$ and $H_{rel} = {\frac{{\vec p}^2_r}{2\, \mu}} - {\frac{e^2}{r}}$ ($M = m_e + m_p$, $\mu = {\frac{m_e\, m_p}{M}}$), can be presented in two ways. Either it is composed of the subsystems electron $m_e$ and proton $m_p$ (with coordinates and momenta ${\vec x}_e$, ${\vec p}_e$ and ${\vec x}_p$, ${\vec p}_p$, respectively) or of the subsystems center of mass $M$ (with coordinate and momentum $\vec x = {\frac{m_e\, {\vec x}_e + m_p\, {\vec x}_p}{M}}$, $\vec p = {\vec p}_e + {\vec p}_p$) and relative motion $\mu$ (with coordinate and momentum $\vec r$, ${\vec p}_r$). At the quantum level all the positions and momenta become self-adjoint operators. \medskip Since both in scattering and bound state theories one describes the dynamics in the \textit{preferred} momentum basis with given conserved total momentum $\vec{p}$, the center-of-mass position $\vec{x}$ is un-determined. In the theoretical description of these theories one never considers wave packets in $\vec{p}$ with definite localization properties of the center of mass, but only plane waves with the given value of $\vec{p}$, following the standard approach without considering the problems of unsharp states exposed in the previous Section.
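The splitting $H = H_{com} + H_{rel}$ can be checked directly (the explicit step is ours, added for completeness). Inverting the canonical transformation gives ${\vec p}_e = {\frac{m_e}{M}}\, \vec p + {\vec p}_r$ and ${\vec p}_p = {\frac{m_p}{M}}\, \vec p - {\vec p}_r$, so that

```latex
\beq
\frac{{\vec p}_e^{\,2}}{2\, m_e} + \frac{{\vec p}_p^{\,2}}{2\, m_p}
\,=\, \frac{{\vec p}^{\,2}}{2\, M}
+ \Big(\frac{1}{2\, m_e} + \frac{1}{2\, m_p}\Big)\, {\vec p}_r^{\,2}
\,=\, \frac{{\vec p}^{\,2}}{2\, M} + \frac{{\vec p}_r^{\,2}}{2\, \mu}\,,
\eeq
```

the cross terms $\pm\, \vec p \cdot {\vec p}_r/M$ cancelling, while the potential $- e^2/|{\vec x}_e - {\vec x}_p|$ depends only on the relative coordinate $\vec r$.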
\medskip Therefore a stationary solution of the Schroedinger equation for the hydrogen atom (it factorizes only in the center-of-mass and relative variables) in the coordinate representation is \beq \psi (\vec{x},\vec{r}) =\phi _{int}(\vec{r})\,e^{{\frac{i}{\hbar }}\,\vec{p}\cdot \vec{x}}= \phi _{int}({\vec{x}}_{e}-{\vec{x}}_{p})\,e^{{\frac{i}{\hbar }}\,\vec{p}\cdot {\frac{m_{e}\,{\vec{x}}_{e}+m_{p}\,{\vec{x}}_{p}}{M}}}{\buildrel {def}\over {=}} \Psi ({\vec{x}}_{e},{\vec{x}}_{p}), \label{1} \eeq \noindent where $\phi_{int}(\vec r)$ is one of the energy levels of the atom. Our presentation here is in the spirit of formal scattering theory, done with plane waves in virtually all textbooks. In actuality the effective particle beams must be described with Gaussian wave packets with a ``classical mean momentum'' obtained with time-of-flight methods. \medskip If we now trace out the center of mass, we get the reduced density matrix $\rho_{rel}(\vec r, {\vec r}^{\,\prime}) = \phi_{int}(\vec r)\, \phi^*_{int}({\vec r}^{\,\prime})$ with the associated entanglement properties of the subsystem \textit{relative motion}. If instead we trace out the proton, we get the reduced density matrix for the entanglement properties of the subsystem electron. In Ref.\cite{42} it is shown that it has the form \begin{eqnarray} \rho_{el}({\vec x}_e, {\vec x}_e^{\,\prime}) &=& \int d^3x_p\, \psi({\vec x}_e, {\vec x}_p)\, \psi^*({\vec x}_e^{\,\prime}, {\vec x}_p) = \nonumber \\ &=& e^{ {\frac{i}{\hbar}}\, {\frac{m_e}{M}}\, \vec p \cdot ({\vec x}_e - {\vec x}_e^{\,\prime}) }\, \rho_{int}({\vec x}_e - {\vec x}_e^{\,\prime}), \label{2} \end{eqnarray} \noindent and this implies that it is equally likely to observe the electron in any position, since the center of mass is un-determined.
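The last statement follows at once from Eq. (\ref{2}) (the explicit step is ours): on the diagonal ${\vec x}_e^{\,\prime} = {\vec x}_e$ the phase factor becomes unity, so

```latex
\beq
\rho_{el}({\vec x}_e, {\vec x}_e) \,=\, \rho_{int}(0) \,=\, const.,
\eeq
```

and the probability density for the electron position is uniform over the whole 3-space, as expected when the center of mass is described by a plane wave instead of a wave packet.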
\bigskip To avoid the complications of the full particle and field configurations discussed in Ref.\cite{3}, we will consider the simple two-body system studied in Ref.\cite{4}, which is described in the framework explained in the Introduction. If $\vec{z}$, $\vec{h}$, are the frozen Jacobi data of the relativistic center of mass, at the classical level the rest frame is defined by the embedding of the intrinsic rest 3-spaces of the 3+1 foliation into Minkowski space-time (see Refs.\cite{4,6}; $\tau $ and $\sigma ^{r}$ are radar coordinates and the $W$ index on the embedding refers to the role of the Wigner rest frame) \begin{eqnarray} z_W^{\mu}(\tau, \vec \sigma) &=& Y^{\mu}(\tau) + \epsilon^{\mu}_r(\vec h)\, \sigma^r,\qquad \epsilon^{\mu}_r(\vec h) = \Big( h_r; \delta^i_r + {\frac{h^i\, h_r}{1 + \sqrt{1 + {\vec h}^2}}}\Big), \nonumber \\ &&{} \nonumber \\ Y^{\mu}(\tau) &=& \Big(\sqrt{1 + {\vec h}^2}\, (\tau + {\frac{\vec h \cdot \vec z}{Mc}}); {\frac{\vec z}{Mc}} + (\tau + {\frac{\vec h \cdot \vec z}{Mc}})\, \vec h + {\frac{\vec S \times \vec h}{Mc\, (1 + \sqrt{1 + {\vec h}^2})}} \Big), \nonumber \\ {\tilde x}^{\mu}(\tau) &=& Y^{\mu}(\tau ) + \Big(0; {\frac{- \vec S \times \vec h}{Mc\, (1 + \sqrt{1 + {\vec h}^2})}}\Big), \label{3} \end{eqnarray} \noindent where $Y^{\mu}(\tau)$ is the Fokker-Pryce center of inertia, ${\tilde x}^{\mu}(\tau)$ the canonical center of mass, $M$ the invariant mass and $\vec S$ the rest spin of the two-body system. The external Poincar\'e group has the generators $P^{\mu} = M c\, h^{\mu} = M\, c\, \Big(\sqrt{1 + {\vec h}^2}; \vec h\Big)$, $J^{ij} = z^i\, h^j - z^j\, h^i + \epsilon^{ijk}\, S^k$, $K^i = J^{oi} = - \sqrt{1 + {\vec h}^2}\, z^i + {\frac{(\vec S \times \vec h)^i}{1 + \sqrt{1 + {\vec h}^2}}}$ (the last term in the boost is responsible for the Wigner covariance of the 3-vectors in the rest Wigner 3-space $\tau = const.$).
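The matrix $\epsilon^{\mu}_r(\vec h)$ in the embedding above, together with $h^{\mu} = \Big(\sqrt{1 + {\vec h}^2}; \vec h\Big)$, forms a pseudo-orthonormal tetrad of the Minkowski metric with signature $(+,-,-,-)$; this is easy to verify numerically for an arbitrary $\vec h$ (a small sketch, not taken from Refs.\cite{4,6}):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+,-,-,-)
rng = np.random.default_rng(0)
h = rng.normal(size=3)                    # arbitrary vec h
g = np.sqrt(1.0 + h @ h)                  # sqrt(1 + h^2)

h_mu = np.concatenate(([g], h))           # h^mu = (sqrt(1 + h^2); h)
eps = np.zeros((3, 4))                    # eps[r] = epsilon^mu_r(h)
for r in range(3):
    eps[r, 0] = h[r]
    eps[r, 1:] = np.eye(3)[r] + h * h[r] / (1.0 + g)

assert np.isclose(h_mu @ eta @ h_mu, 1.0)           # unit time-like vector
for r in range(3):
    assert np.isclose(h_mu @ eta @ eps[r], 0.0)     # orthogonal to h^mu
    for s in range(3):
        # space-like orthonormal triad: eta(eps_r, eps_s) = -delta_rs
        assert np.isclose(eps[r] @ eta @ eps[s], -float(r == s))
```

This orthonormality is what makes $z_W^{\mu}$ an embedding of flat space-like 3-spaces orthogonal to the 4-momentum direction $h^{\mu}$.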
\medskip Before adding the rest-frame conditions the world-lines and the 4-momenta of the two particles are ($V$ is an arbitrary action-at-a-distance potential \footnote{In the electromagnetic case of Ref.\cite{3} the Coulomb potential plus the Darwin one are outside of the square root.}) \begin{eqnarray} x^{\mu}_i(\tau) &=& z^{\mu}_W(\tau, {\vec \eta}_i(\tau)) = Y^{\mu}(\tau) + \epsilon^{\mu}_r(\vec h)\, \eta^r_i(\tau), \nonumber \\ p_i^{\mu}(\tau) &=& h^{\mu}\, \sqrt{m_i^2\, c^2 + {\vec \kappa}_i^2(\tau) + V(({\vec \eta}_1(\tau) - {\vec \eta}_2(\tau))^2)} - \epsilon_r^{\mu}(\vec h)\, \kappa_{ir}(\tau), \nonumber \\ && |p_i^2| = m_i^2\, c^2 + V(({\vec \eta}_1(\tau) - {\vec \eta}_2(\tau))^2). \label{4} \end{eqnarray} These equations imply that the un-physical Wigner-covariant 3-positions and 3-momenta inside the rest Wigner 3-space are ${\vec \eta}_i(\tau)$, ${\vec \kappa}_i(\tau)$, $i = 1,2$. The conserved internal Poincar\'e generators are ($M c$ is the Hamiltonian for the motion inside the Wigner 3-space) \begin{eqnarray} M\, c &=& \sum_{i=1}^2\, \sqrt{m_i^2\, c^2 + {\vec \kappa}^2_i(\tau) + V(({\vec \eta}_1(\tau) - {\vec \eta}_2(\tau))^2)}, \nonumber \\ {\vec {\mathcal{P}}} &=& \sum_{i=1}^2\, {\vec \kappa}_i(\tau) \approx 0, \nonumber \\ \vec S &=& \sum_{i=1}^2\, {\vec \eta}_i(\tau) \times {\vec \kappa}_i(\tau), \nonumber \\ {\vec {\mathcal{K}}} &=& - \sum_{i=1}^2\, {\vec \eta}_i(\tau)\, \sqrt{m_i^2\, c^2 + {\vec \kappa}_i^2(\tau) + V(({\vec \eta}_1(\tau) - {\vec \eta}_2(\tau))^2)} \approx 0. \label{5} \end{eqnarray} The rest-frame conditions ${\vec{\mathcal{P}}}\approx 0$, ${\vec{\mathcal{K}}}\approx 0$, imply that the physical canonical variables in the rest 3-space are $\vec{\rho}(\tau)={\vec{\eta}}_{1}(\tau) - {\vec{\eta}}_{2}(\tau)$ and $\vec{\pi}(\tau) = {\frac{m_{2}}{M}}\,{\vec{\kappa}}_{1}(\tau) - {\frac{m_{1}}{M}}\,{\vec{\kappa}}_{2}(\tau)$, ($M=m_{1}+m_{2}$).
Using these relative variables and imposing the rest-frame conditions gives the following expressions for the internal center of mass $\vec{\eta}(\tau)$ (conjugate to ${\vec{\mathcal{P}}}\approx 0$) and for the world-lines \begin{eqnarray} \vec{\eta}(\tau) &=&{\frac{m_{1}\,{\vec{\eta}}_{1}(\tau) + m_{2}\,{\vec{\eta}}_{2}(\tau)}{M}} \approx {\frac{m_{1}\,\sqrt{m_{2}^{2}\,c^{2}+H(\tau)} - m_{2}\,\sqrt{m_{1}^{2}\,c^{2}+H(\tau)}}{M\,(\sqrt{m_{1}^{2}\,c^{2}+H(\tau)}+\sqrt{m_{2}^{2}\,c^{2}+H(\tau)})}}\,\vec{\rho}(\tau), \nonumber \\ {}&&\qquad H(\tau)={\vec{\pi}}^{2}(\tau)+V({\vec{\rho}}^{2}(\tau)), \nonumber \\ {} && \nonumber \\ &\Downarrow & \nonumber \\ {} && \nonumber \\ M\,c &\approx &\sqrt{m_{1}^{2}\,c^{2}+H(\tau)}+\sqrt{m_{2}^{2}\,c^{2}+H(\tau)},\qquad \vec{S}\approx \vec{\rho}(\tau)\times \vec{\pi}(\tau), \nonumber \\ x_{1}^{\mu }(\tau) &\approx &Y^{\mu }(\tau)+\epsilon _{r}^{\mu }(\vec{h})\,{\frac{\sqrt{m_{2}^{2}\,c^{2}+H(\tau)}}{Mc}}\,\rho ^{r}(\tau), \nonumber \\ x_{2}^{\mu }(\tau) &\approx &Y^{\mu }(\tau)-\epsilon _{r}^{\mu }(\vec{h})\,{\frac{\sqrt{m_{1}^{2}\,c^{2}+H(\tau)}}{Mc}}\,\rho ^{r}(\tau). \label{6} \end{eqnarray} Let us remark that only in the global inertial frame defined by $\vec{h}=0$ (which we designate as the center-of-mass frame) does one have $x_{1}^{o}(\tau )=x_{2}^{o}(\tau )=\tau $. For any other value of $\vec{h}$ (for instance in the laboratory frame ${\vec{p}}_{2}(\tau )=0$, so that $\vec{\pi}$ becomes parallel to $\vec{h}$) the time variables of the world-lines do not coincide, so that we cannot make equal-time statements by using them (for more details see the relativistic kinetic theory of fluids developed in Ref.\cite{43}). \medskip The quantization of the model is done in the \textit{preferred} $\vec h$-basis. The variables $\vec h$, $\vec \rho$, $\vec \pi$, are replaced by self-adjoint operators.
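As a consistency check of the invariant mass $M\,c \approx \sqrt{m_1^2\,c^2 + H} + \sqrt{m_2^2\,c^2 + H}$ given above, one can verify its non-relativistic limit symbolically (a sketch; note that in this model $V$, and hence $H$, sits under the square roots and carries the dimension of a momentum squared, so the whole combination $H/2\mu$ plays the role of the non-relativistic relative energy):

```python
import sympy as sp

m1, m2, H, c = sp.symbols('m1 m2 H c', positive=True)

# M c^2 = c * ( sqrt(m1^2 c^2 + H) + sqrt(m2^2 c^2 + H) )
Mc2 = c * (sp.sqrt(m1**2 * c**2 + H) + sp.sqrt(m2**2 * c**2 + H))

# Subtract the rest energy and take the limit c -> oo
kinetic = sp.limit(Mc2 - (m1 + m2) * c**2, c, sp.oo)

mu = m1 * m2 / (m1 + m2)                           # reduced mass
# M c^2 -> (m1 + m2) c^2 + H / (2 mu)  in the non-relativistic limit
assert sp.simplify(kinetic - H / (2 * mu)) == 0
```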
The open problems are whether we replace $\vec z$ with either a self-adjoint or a symmetric operator (see the discussion on the Newton-Wigner operator in Section III) and whether we accept either only momentum plane waves or also wave packets with some localization of the center of mass (the particle beams with a mean classical trajectory). If $\vec z$ becomes a self-adjoint operator, then also the external Lorentz generators can be made self-adjoint after a suitable ordering. As a consequence, the operators corresponding to the world-lines of the particles become complicated objects needing non-trivial orderings, except in the case $\vec h = 0$, the only one in which $\vec \rho = {\vec \eta}_1 - {\vec \eta}_2 = {\vec x}_1 - {\vec x}_2$. \medskip If we work in the $\vec{h}$, $\vec{\rho}$, basis of the Hilbert space and we fix $\vec{h}=\vec{k}$, then the wave function is $\psi _{\vec{k}}(\vec{h},\vec{\rho},\tau )=\delta ^{3}(\vec{h}-\vec{k})\,\phi (\vec{\rho},\tau )$ with $\phi (\vec{\rho},\tau )$ satisfying the Schroedinger equation $i\,\hbar \,{\frac{\partial }{\partial \,\tau }}\,\phi (\vec{\rho},\tau ) = {\hat{M}}\,c\,\phi (\vec{\rho},\tau )$. By putting $\phi (\vec{\rho},\tau )=\exp (-{\frac{i}{\hbar }}\,{\frac{\epsilon }{c}}\,\tau )\,\phi (\vec{\rho})$, the stationary solutions $\phi _{nlm}(\vec{\rho})$ satisfy the equations \begin{eqnarray} &&\hat M\, c\, \phi_{nlm}(\vec \rho) = \epsilon_n\, \phi_{nlm}(\vec \rho), \nonumber \\ &&{\hat {\vec S}}^2\, \phi_{nlm}(\vec \rho) = l\, (l + 1)\, \phi_{nlm}(\vec \rho),\qquad {\hat S}_3\, \phi_{nlm}(\vec \rho) = m\, \phi_{nlm}(\vec \rho), \label{7} \end{eqnarray} \noindent and the external 4-momentum becomes $P^{\mu}_n = {\frac{1}{c}}\, \Big(\epsilon_n\, \sqrt{1 + {\vec k}^2}; \epsilon_n\, \vec k\Big)$. In the $\vec z$ basis of the Hilbert space we get a plane wave $e^{{\frac{i}{\hbar}}\, \vec k \cdot \vec z}$ for the delocalized center of mass.
\medskip Regarding entanglement we can trace out the center of mass and find the reduced density matrix of the subsystem "relative motion": it is of the type $\rho_{rel}(\vec \rho, {\vec \rho}^{^{\prime }}) = \phi(\vec \rho)\, \phi^*({\vec \rho}^{^{\prime }})$ like in the non-relativistic case. \medskip In the center-of-mass frame $\vec{h}=0$, where $\vec{\rho}={\vec{\eta}}_{1}-{\vec{\eta}}_{2}={\vec{x}}_{1}-{\vec{x}}_{2}$, for $\vec\rho = {\vec \rho}^{^{\prime }}$ we get $\rho _{rel}(\vec{\rho},\vec{\rho})=|\phi (\vec{\rho})|^{2}=\rho _{int}({\vec{x}}_{1}-{\vec{x}}_{2})$ to be compared with the non-relativistic equation (4.2) with $\vec{p}=0$. \medskip However we cannot study the subsystem 1 by tracing out the subsystem 2: this is the spatial non-separability of relativistic entanglement discussed in the Introduction and in Ref.\cite{6}. \medskip The same problems would appear in the study of entanglement in scattering processes: see Refs.\cite{44} for some of the existing results. \section{Reconciliation of the Relativistic and Non-Relativistic Worlds Taking into Account the Preparation and the Detection of Particles in Experiments} We have seen that the non-relativistic notion of separability, according to which the Hilbert space of a quantum system composed of subsystems is the tensor product of the Hilbert spaces of the subsystems (zeroth postulate of NRQM), is destroyed in special relativity, where clock synchronization is needed to define 3-space and to avoid relative times in bound states. This fact, together with the non-local nature of relativistic collective variables, induces a spatial non-separability implying that we can speak of subsystems only at an un-physical level, namely the one existing before adding the rest-frame conditions. After their imposition we describe the overall system by a non-measurable external canonical non-covariant decoupled center of mass and by an internal world of Wigner-covariant relative variables.
Only the frozen Jacobi data of the center of mass with the condition $\vec{h}=0$ and the relative variables can be quantized consistently at the physical level. \medskip This picture is strongly different from the standard non-relativistic framework, where there is unitary equivalence between the presentation with separable subsystems and the one with the Newton center of mass and relative variables. However, we have seen in Section III that there are problems also with the localization of the Newton center of mass (unsharp positions). \medskip Let us consider a non-relativistic (but the same is true at the relativistic level) experiment testing a quantum system, say a two-particle system. There are dynamical observers (replaced with mathematical ones like Alice and Bob) using some apparatus for the preparation of the beam of particles to be used to define the system, having the particles interact in a classically well-defined way, and using some detectors to extract information from the process. As said in the Introduction, all these steps, realized by the observers, are imagined and carried out by using a strongly classical intuition, in agreement with Bohr's point of view according to which every feasible experiment must admit a classical description. \medskip Coming back to the experiment, we see that the two incoming particles of our system are in special states (mean trajectory and mean momentum) prepared by the apparatus and that the outgoing particles can be detected only if they have a mean trajectory aligned with the detectors. \medskip This means that at the experimental level an isolated two-particle system is a mathematical idealization. As said in the Introduction, at best it is a (non-unitarily evolving) \textit{open quantum system} \cite{17}, always with some interaction first with the preparing apparatus and then with the detectors, both of which should be considered as quantum many-body systems.
\medskip Moreover, as noted in Ref.\cite{45}, the observers are not able to define a perfect classical mathematical reference frame, but only a \textit{bounded} one, determined by the level of precision of every instrument used. \medskip As a consequence, in a realistic description of an experiment in NRQM one should consider as an idealized isolated quantum system at least the union of our two-particle system plus an environment composed of a quantum many-body description of the preparing apparatus and of the detecting one (they admit an effective classical description in terms of emerging effective notions like a pointer). The presence of interactions forces us to work in the position basis of the overall Hilbert space by using the decoupled Newton center of mass of the whole system and relative variables. Even if the non-relativistic center-of-mass position operator were measurable (and this is an open problem due to unsharp positions), it is out of reach of the experiment (it pertains to the reference frame of the observer). What is measured of the particles are their relative variables with respect to the preparing apparatus and, after the interaction in the experimental area, with respect to the detectors (the effective mean trajectories of the incoming and outgoing particles), plus the relative variable between the two particles (it controls their mutual interaction). This is in accord with the experimental results of Ref.\cite{33}: only the relative positions of atoms are measurable. For all practical purposes (FAPP) this description is the same as the one we have at the relativistic level, now both for the isolated two-body problem and for the system particles plus experimental apparatus. \medskip Therefore at the experimental level there is not a drastic difference between the relativistic and the non-relativistic frameworks induced by the Lorentz signature of Minkowski space-time below the threshold of pair production.
\section{Concluding Remarks} We have emphasized the differences between relativistic and non-relativistic quantum mechanics and the associated notions of entanglement in inertial frames and at the same time revealed unexpected common features. \medskip Due to the Lorentz signature of Minkowski space-time, which creates the problems of clock synchronization to define 3-space, of the elimination of the relative times in bound states, and of the non-uniqueness of the relativistic collective variables, at the relativistic level we have a global spatial non-separability limiting the existence of subsystems to an un-physical level, before adding the rest-frame conditions to eliminate the internal collective variable in the Wigner 3-space. The external decoupled canonical non-covariant relativistic center of mass is a non-local, and therefore non-measurable, quantity already at the classical level. \medskip This non-separability and non-locality, present both at the classical and quantum levels, reduce the relevance of the still debated quantum non-locality of NRQM. As shown in Ref.\cite{3} these properties are present also in CFT and their extension to QFT is a difficult open problem, which adds to the existing problems with the notion of particle in QFT \cite{36} (instability under either unitary or non-unitary Bogoliubov transformations). The relativistic non-separability and non-locality point towards non-local QFT and raise the problem of the validity of \textit{micro-causality} (quantum operators at space-like distances commute) at the relativistic level. This issue has already been raised by Busch \cite{46} with the notion of \textit{unsharp observables} (a local operator not measurable with local actions in a given 3-region).
According to him, sharp spatial localization is an \textit{operationally meaningless idealization} (it requires an infinite amount of energy with unavoidable pair production; the quantum nature of the constituents of the detectors should be taken into account; and so on). \medskip We have reviewed the status of the particle localization problem both at the relativistic and non-relativistic levels, pointing out the existing theoretical problems and some of the experimental limitations. One would need the identification of some relevant (mathematically well-defined) position basis of wave functions with compact support (or like the over-complete coherent-state basis for the harmonic oscillator) for the position operator, to be used for every type of localization problem. \medskip By taking into account the quantum nature both of the preparing apparatus and of the detectors, we have shown that also at the non-relativistic level the real quasi-isolated system with a decoupled center of mass is the full set "apparatus + detectors + observed system" and that the prepared and detected particles, moving along mean classical Newton trajectories, are described by relative variables. \medskip Since, in any case, electromagnetism is always present, this implies that the relativistic picture is valid at every level in experiments. \medskip We have shown the \textit{non-factual} nature of the mathematical time-like observers and of their mathematical synchronization conventions for building reference frames (see also Ref.\cite{47} in the framework of quantum information).
It seems quite difficult to develop a theory in which Alice and Bob are dynamical observers \footnote{First steps in this direction are taken with {\it quantum metrology} (see Ref.\cite{48}, ch.7), in which a quantum system is described by means of the relative variables with respect to another quantum system (the observer), without using variables defined with respect to an external classical reference frame (this relational approach is consistent with relativistic non-separability). It is under investigation what the change in the information and in the entanglement is if one goes from the description with respect to one quantum observer to the description with respect to another quantum observer.} and exchange information by using a dynamical electromagnetic field! Yet Alice and Bob are present in every protocol of relativistic quantum information (see Ref.\cite{49} for an old review emphasizing the problems with special relativity), a quickly developing theory in which the study of the spatio-temporal trajectories of the investigated quantum systems is nearly always lacking. \medskip We have nothing to say about the probabilistic Born rule (randomness of the unique outcomes) and about the nature of quantum reality (objective, subjective, a mixture of classical reality and information theory,...). However, if the wave function of a quantum system describes its properties (no ensemble interpretation), then the experimentally observable wave functions have an associated emergent classical description in terms of Newton trajectories.
Moreover, the spatial non-separability, introduced by SR already at the classical level, gives rise to many non-ignorable problems: A) the world-lines of the observers and of the macroscopic apparatuses are intertwined with the investigated relativistic (classical or quantum) system and must be taken into account; B) as already said, this implies that the observers can no longer be treated as mathematical decoupled entities but must be macroscopic bodies localized in space-time; C) the non-separability, together with Busch's unsharpness, shows that causality problems can no longer be solved by saying that systems in disjoint regions at space-like distance are un-related; D) therefore foundational statements like the freedom to choose the measurement settings independently of the investigated quantum system (see for instance Refs.\cite{50}) are no longer meaningful. Finally, in this paper we have only considered inertial frames. The extension of our results to non-inertial frames is now under investigation, with an attempt to avoid uniformly accelerated Rindler observers (they disappear with the light-cone in the non-relativistic limit) but taking into account Unruh-DeWitt detectors \footnote{See Ref.\cite{51} for the relativistic description of the two-level atoms used in Unruh-DeWitt detectors.} (see the review paper \cite{22} and the bibliography of Ref.\cite{52}). The extension of these ideas to classical general relativity to include gravity can be done along the lines described in the review paper \cite{5}, but the totally open problem is to find a consistent theory of quantum gravity \cite{37}. \vfill\eject
\section{Supplementary Online Material for:\\Evolution of $c$-$f$ hybridization and a two component Hall effect in {$\beta$-YbAlB$_4$}} \section{Methods} Single crystals were grown using the Al flux method \cite{macaluso2007crystal}. We performed Hall effect measurements on several single crystals of {$\beta$-YbAlB$_4$}~with different residual resistivity ratios ($RRR$). The Hall voltage and longitudinal resistivity were measured simultaneously using a 5-wire method with current applied within the $ab$-plane and magnetic field parallel to the $c$-axis. $\rho_{xy}$ was obtained from the antisymmetric part of the transverse voltage. $R_H$ is defined as $R_H=\rho_{xy}/H$ and $R_H=d\rho_{xy}/dH$ for $T$-sweep and $H$-sweep measurements, respectively. At low fields, where $\rho_{xy}$ is linear in {$\beta$-YbAlB$_4$}, these definitions agree. \section{Coefficients of the Anomalous Hall effect} The Hall coefficient is described as the sum of normal and anomalous parts, according to \cite{nagaosa2010anomalous}: \begin{align} \label{eqn:S1} \rho_{xy}(T,H)&=R_{N}(T,H) H + R_{a}(T,H) M(T,H)\\ \label{eqn:S2} R_H&=R_N+R_a\chi\equiv R_N+R_A, \end{align} where $T$, $H$, $M$ and $\chi \equiv M/H$ are temperature, applied magnetic field, magnetization, and magnetic susceptibility, respectively. We note that the separation of the NHE coefficient $R_N$ and the AHE coefficient $R_A\equiv R_a\chi$ in Eqn.~\ref{eqn:S2} is frequently ambiguous, requiring careful consideration of the underlying mechanism of the AHE and of the electronic structure. In Fig. 1C of the main article we observe two regions where $R_H\propto\chi$, corresponding to two temperature ranges. The coefficients of Eqn.~\ref{eqn:S2} are given in Table~\ref{tab:AHE}. \begin{table}[H] \begin{center} \caption{Coefficients of Eqn.
\ref{eqn:S2} for temperature ranges above and below the minimum in $R_H$} \begin{tabular}{lll} \label{tab:AHE} Temperature range $(K)$ & $R_N~(10^{-10}m^{3}C^{-1})$ & $R_a~(10^{-10}m^{3}C^{-1}\rm{emu}^{-1})$ \\ \hline $10\leq{}T\leq40$ & 11.6 & -2150\\ $60\leq{}T\leq300$ & -17.3 & 550\\ \hline\end{tabular} \end{center} \end{table} We use these coefficients to subtract the anomalous Hall effect (AHE) part from the total Hall coefficient, as described in the main text. The dependence of both the total Hall effect and the AHE on the magnetic susceptibility is illustrated in Fig.~\ref{fig:S1}A, and the temperature dependence of both the AHE and the normal Hall effect (NHE) coefficients is illustrated in Fig.~\ref{fig:S1}B. \begin{figure}[!ht] \begin{center} \includegraphics[viewport=37 24 507 525,clip=true,width=0.6\columnwidth]{SOM-Fig1.pdf} \put(-30,310){$(a)$} \put(-30,140){$(b)$} \caption{$(a)$: Procedure used for subtracting the anomalous Hall effect (AHE) from the total Hall effect. $(b)$: Proposed temperature dependence of the normal and anomalous Hall coefficients, $R_N$ and $R_A$, in the temperature range where the subtraction was performed. } \label{fig:S1} \end{center} \end{figure} \section{Curie-Weiss fits for magnetic susceptibility} The analysis of the magnetic susceptibility of {$\beta$-YbAlB$_4$}~is described in detail in \cite{matsumoto2011anisotropic}; here we reproduce the results of that analysis. The Curie-Weiss relation for fields along the $c$-axis in an Ising system is \begin{equation} \label{eqn:CW} \chi_c=\frac{C}{T+\Theta_{\rm W}} \end{equation} where $\Theta_{\rm W}$~is the Weiss temperature and $C=N_AI_z^2/k_B$, with $N_A$ and $k_B$ the Avogadro and Boltzmann constants and $I_z$ the Ising moment. The constants for {$\beta$-YbAlB$_4$}, in temperature ranges above and below the minimum in $R_H$, are given in Table~\ref{tab:chi}. \begin{table}[H] \begin{center} \caption{Coefficients of Eqn.
\ref{eqn:CW}} \begin{tabular}{lll} \label{tab:chi} Temperature range (K) & $\Theta_{\rm W}$~$(K)$ & $I_z$ ($\mu_B$) \\ \hline $150\leq{}T\leq350$ & 108 & 2.24\\ $60\leq{}T\leq300$ & 25 & 1.3\\ \hline\end{tabular} \end{center} \end{table} \section{K\"ohler's relation for magnetoresistance} \begin{figure} \begin{center} \includegraphics[viewport=2 3 395 265,clip=true,width=0.7\columnwidth]{Fig5.pdf} \put(-300,220){$(a)$} \put(-120,220){$(b)$} \caption{$(a)$: K\"ohler magnetoresistance plot ($\Delta\rho_{xx}=\rho_{xx}(B)-\rho_{xx}(0)$) in the temperature range $10\leq{}T\leq110$ K. If the K\"ohler rule were valid, all the data would collapse on top of each other. $(b)$: Modified K\"ohler plot using the square of $\tan\theta_H=\rho_{xy}/\rho_{xx}$ for the horizontal axis. All the data reasonably overlap each other, indicating that the modified version holds for {$\beta$-YbAlB$_4$}. } \label{fig:SOM-Kohler} \end{center} \end{figure} The K\"ohler relation for the magnetoresistance is usually expressed as \begin{equation} \frac{\rho_{xx}(B)-\rho_{xx}(B=0)}{\rho_{xx}}=\frac{\Delta\rho_{xx}}{\rho_{xx}}=F\Big(\frac{B}{\rho_{xx}}\Big) \end{equation} where $F$ is a function that depends on the electronic structure. This is expected to hold in conventional metals, where a single scattering time determines the transport properties. Therefore, a so-called K\"ohler plot of $\Delta\rho_{xx}/\rho_{xx}$ vs. $B/\rho_{xx}$ on a logarithmic scale is expected to collapse the data from all fields onto a single curve. Fig.~\ref{fig:SOM-Kohler} shows the K\"ohler plot for {$\beta$-YbAlB$_4$}{}, indicating that the K\"ohler relation does not hold in this material. Several materials have been reported for which the conventional K\"ohler scaling does not hold. Broadly speaking, these belong to the class of strongly correlated electron materials, wherein strong spin or charge fluctuations influence the transport.
Prominent examples are the cuprate high temperature superconductors \cite{harris1995violation} and $f$-electron based heavy electron materials \cite{nakajima2007non}. An alternative, so-called modified K\"ohler scaling was proposed for the cuprates \cite{harris1995violation}: \begin{equation} \frac{\Delta\rho_{xx}}{\rho_{xx}}\propto\tan^2\theta_H, \end{equation} where $\theta_H$ is the Hall angle ($\tan\theta_H=\rho_{xy}/\rho_{xx}$). Figure~\ref{fig:SOM-Kohler} shows this scaling relation for {$\beta$-YbAlB$_4$} using raw data for $\rho_{xy}$ (\emph{i.e.} without subtracting the AHE), indicating that it holds reasonably accurately in the temperature range $10<T<110$ K. The modified K\"ohler rule was proposed to account for the presence of an additional lifetime $\tau_H$ that governs the transverse transport and originates from antiferromagnetic fluctuations. For the case of {$\beta$-YbAlB$_4$}, the failure of the K\"ohler rule and the success of the modified K\"ohler relation down to 10 K suggest that strong spin and charge fluctuations affect the transport despite the onset of coherence at $T_{\rm K} = 250$ K.
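The difference between the two scalings can be illustrated with synthetic data (a toy sketch, not the measured data: it assumes cuprate-like behaviour with $\cot\theta_H\propto T^2$, a zero-field resistivity linear in $T$, and $\Delta\rho_{xx}/\rho_{xx}=k\tan^2\theta_H$ by construction):

```python
import numpy as np

# Toy two-lifetime model: cot(theta_H) ~ T^2 while rho_xx(0) ~ T, so the data
# collapse under the modified Koehler scaling but not under the usual one.
k = 3.0                                   # illustrative proportionality constant
B = np.linspace(0.5, 9.0, 20)             # magnetic field (arb. units)

def sweep(T):
    rho0 = 1.0 + 0.05 * T                 # zero-field resistivity
    tan_th = B / (0.02 * T**2)            # Hall angle tan(theta_H)
    mr = k * tan_th**2                    # Delta rho_xx / rho_xx
    return rho0, tan_th, mr

rho0_a, tan_a, mr_a = sweep(10.0)
rho0_b, tan_b, mr_b = sweep(50.0)

# Modified Koehler plot: mr vs tan^2(theta_H) falls on one line of slope k
assert np.allclose(mr_a / tan_a**2, k) and np.allclose(mr_b / tan_b**2, k)

# Conventional Koehler plot: compare mr at matched values of B / rho_xx
x_a, x_b = B / rho0_a, B / rho0_b
mr_b_matched = np.interp(x_a, x_b, mr_b)  # mr of the second sweep at the same B/rho_xx
assert not np.allclose(mr_a, mr_b_matched, rtol=0.5)   # curves do not collapse
```

Because the transverse lifetime carries a temperature dependence different from that of $\rho_{xx}$, only the $\tan^2\theta_H$ variable collapses the two temperature sweeps.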
\section{Introduction} Many theorists now believe that there is a unified framework for all string theories, which also accommodates $d=11$ supergravity \cite{CJS}. Much of the evidence for this elusive theory, called ``M-Theory'' \cite{M}, is based on recent work on duality symmetries in string theory which suggests that all string theories are connected through a web of non-perturbative dualities \cite{duality}. Although it is unknown what M-theory really is, we can probably assert with some confidence $(i)$ that it will be a pregeometrical theory, in which space-time as we know it will emerge as a secondary concept (which also means that it makes little sense to claim that the theory ``lives'' in either ten or eleven dimensions), and $(ii)$ that it should possess a huge symmetry involving new and unexplored types of Lie algebras (such as hyperbolic Kac-Moody algebras), and perhaps other exotic structures such as quantum groups. In particular, the theory should be background independent and should be logically deducible from a vast generalization of the principles underlying general relativity. According to a widely acclaimed recent proposal \cite{BFSS}, M-Theory ``is'' the $N\rightarrow\infty$ limit of the maximally supersymmetric quantum mechanical $SU(N)$ matrix model \cite{SUSY} (see \cite{dW} for recent reviews, points of view and comprehensive lists of references). This model had already appeared in an earlier investigation of the $d=11$ supermembrane \cite{BST} in a flat background in the light-cone gauge \cite{dWHN}. Crucial steps in the developments leading up to this proposal were the discovery of Dirichlet $p$-branes and their role in the description of non-perturbative string states \cite{Pol}, and the realization that the dynamics of an ensemble of such objects is described by dimensionally reduced supersymmetric Yang-Mills theories \cite{Witten2}.
Although there are a host of unsolved problems in matrix theory, two central ones can perhaps be singled out: one is the question whether the matrix model admits massless normalizable states for any $N$ (see \cite{massless} for recent work in this direction); the other is related to the still unproven existence of the $N\rightarrow\infty$ limit. This would have to be a weak limit in the sense of quantum field theory, requiring the existence of a universal function $g=g(N)$ (the coupling constant of the $SU(N)$ matrix model) such that the limit $N\rightarrow\infty$ exists for all correlators. The existence of this limit would be equivalent to the renormalizability of the supermembrane \cite{dWHN}. However, even if these problems can be solved eventually, important questions remain with regard to the assertions made above: while matrix theory is pregeometrical in the sense that the target space coordinates are replaced by matrices, thus implying a kind of non-commutative geometry, the hidden exceptional symmetries of dimensionally reduced supergravities discovered long ago \cite{CJ,Julia1} are hard to come by (see \cite{EGKR} and references therein). In the first part of this contribution, I will report on work \cite{NM}, which was motivated by recent advances in string theory as well as the possible existence of an Ashtekar-type canonical formulation of $d=11$ supergravity. Although at first sight our results, which build on earlier work of \cite{dewnic1,nic1}, may seem to be of little import for the issues raised above, I will argue that they could actually be relevant, assuming (as we do) that the success of the search for M-Theory will crucially depend on the identification of its underlying symmetries, and that the hidden exceptional symmetries of maximal supergravity theories may provide important clues as to where we should be looking. 
Namely, as shown in \cite{dewnic1,nic1}, the local symmetries of the dimensionally reduced theories can be partially ``lifted'' to eleven dimensions, indicating that these symmetries may have a role to play also in a wider context than that of dimensionally reduced supergravity. The existence of alternative versions of $d=11$ supergravity, which, though equivalent on-shell to the original version of \cite{CJS}, differ from it off-shell, suggests the existence of a novel kind of ``exceptional geometry'' for $d=11$ supergravity and the bigger theory containing it. This new geometry would be intimately tied to the special properties of the exceptional groups, and would be characterized by relations such as (\ref{id1})--(\ref{id4}) below, which have no analog in ordinary Riemannian geometry. The hope is, of course, that one may in this way gain valuable insights into what the (surely exceptional) geometry of M-Theory might look like, and that our construction may provide a simplified model for it. After all, we do not even know what the basic physical concepts and mathematical ``objects'' (matrices, BRST string functionals, spin networks,...?) of such a theory should be, especially if it is to be a truly pregeometrical theory of quantum gravity. The second part of this paper discusses the infinite dimensional symmetries of $d=2$ supergravities \cite{Julia1,Julia2,BM,BMG,N,JN1,BJ} and an ansatz that would incorporate them into the construction of \cite{NM,dewnic1,nic1}. The point of view adopted here is that the fundamental object of M-Theory could well be a kind of ``Unendlichbein'' belonging to an infinite dimensional coset space (cf. (\ref{coset3}) below), which would generalize the space $GL(4,{\bf R})/{\rm SO}(1,3)$ of general relativity. This bein would be acted upon from the right side by a huge extension of the Lorentz group, containing not only space-time, but also internal symmetries, and perhaps even local supersymmetries. 
For the left action, one would have to appeal to some kind of generalized covariance principle. An intriguing, but also puzzling, feature of the alternative formulations of $d=11$ supergravity is the apparent loss of manifest general covariance; equally puzzling is the precise significance of the global $E_{11-d}$ symmetries of the dimensionally reduced theories. This could mean that in the final formulation, general covariance will have to be replaced by something else. The approach taken here is thus different from, and arguably even more speculative than, current ideas based on matrix theory, exploiting the observation that instead of dimensionally reducing the maximally extended {\em rigidly} supersymmetric theory to one dimension, one might equally well contemplate reducing the maximally extended {\em locally} supersymmetric theory to one (light-like $\equiv$ null) dimension. While matrix theory acquires an infinite number of degrees of freedom only in the $N\rightarrow\infty$ limit, the chirally reduced supergravity would have an infinite number from the outset, being one half of a field theory in two dimensions. The basic idea is then that upon quantization the latter might undergo a metamorphosis as far-reaching as that of the quantum mechanical matrix model, its physical states being transmuted into ``target space'' degrees of freedom as in string theory \cite{nic2}. This proposal would amount to a third quantization of maximal ($N=16$) supergravity in two dimensions, where by ``third quantization'' I mean that the quantum treatment should take into account the gravitational degrees of freedom on the worldsheet, i.e. its (super)moduli for arbitrary genus. The model can be viewed as a very special example of $d=2$ quantum cosmology; with the appropriate vertex operator insertions the resulting multiply connected $d=2$ ``universes'' can alternatively be interpreted as multistring scattering diagrams \cite{Mandel}.
One attractive feature of this proposal is that it might naturally bring in $E_{10}$ as a kind of non-perturbative spectrum generating (rigid) symmetry acting on the third quantized Hilbert space, which would mix the worldsheet moduli with the propagating degrees of freedom. A drawback is that these theories are even harder to quantize than the matrix model (see, however, \cite{KNS} and references therein). \section{${\rm SO}(1,2) \times {\rm SO}(16)$ invariant supergravity in eleven dimensions} In \cite{dewnic1,nic1}, new versions of $d=11$ supergravity \cite{CJS} with local ${\rm SO}(1,3) \times {\rm SU}(8)$ and ${\rm SO}(1,2) \times {\rm SO}(16)$ tangent space symmetries, respectively, have been constructed. The work \cite{NM} develops these results further (for the ${\rm SO}(1,2) \times {\rm SO}(16)$ invariant version of \cite{nic1}), and also discusses a Hamiltonian formulation in terms of the new variables. In both versions the supersymmetry variations acquire a polynomial form from which the corresponding formulas for the maximal supergravities in four and three dimensions can be read off directly and without the need for complicated duality redefinitions. This reformulation can thus be regarded as a step towards the complete fusion of the bosonic degrees of freedom of $d=11$ supergravity (i.e. the elfbein $E_M^{~A}$ and the antisymmetric tensor $A_{MNP}$) in a way which is in harmony with the hidden symmetries of the dimensionally reduced theories. For lack of space, and to exhibit the salient features as clearly as possible, I will restrict the discussion to the bosonic sector.
To derive the ${\rm SO}(1,2) \times {\rm SO}(16)$ invariant version of \cite{nic1,NM} from the original formulation of $d=11$ supergravity, one first breaks the original tangent space symmetry SO(1,10) to its subgroup ${\rm SO}(1,2) \times {\rm SO}(8)$ through a partial choice of gauge for the elfbein, and subsequently enlarges it again to ${\rm SO}(1,2) \times {\rm SO}(16)$ by introducing new gauge degrees of freedom. The symmetry enhancement of the transverse (helicity) group SO(9)$\, \subset \,$ SO(1,10) to ${\rm SO(16)}$ requires suitable redefinitions of the bosonic and fermionic fields, or, more succinctly, their combination into tensors w.r.t. the new tangent space symmetry. The construction thus requires a 3+8 split of the $d=11$ coordinates and indices, implying a similar split for all tensors of the theory. It is important, however, that the dependence on all eleven coordinates is retained throughout. The elfbein and the three-index photon are thus combined into new objects covariant w.r.t. the new tangent space symmetry. In the special Lorentz gauge preserving ${\rm SO}(1,2) \times {\rm SO}(8)$ the elfbein takes the form \begin{eqnarray} E_M^{~A} = \left(\begin{array}{cc} \Delta^{-1}e_\mu^{~a} & B_\mu^{~m} e_m^{~a}\\ 0& e_m^{~a} \end{array} \right) \label{11bein} \end{eqnarray} where curved $d=11$ indices are decomposed as $M=(\mu ,m)$ with $\mu =0,1,2$ and $m= 3,...,10$ (with a similar decomposition of the flat indices), and $\Delta := {\rm det} \, e_m^{~a}$. In this gauge, the elfbein contains the (Weyl rescaled) dreibein and the Kaluza Klein vector $B_\mu{}^{m}$, both of which will be kept in the new formulation. By contrast, the internal achtbein is replaced by a rectangular 248-bein $(e^{m}_{IJ},e^{m}_{A})$ containing the remaining ``matter-like'' degrees of freedom, where $([IJ],A)$ label the 248-dimensional adjoint representation of $E_8$ in the SO(16) decomposition.
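Before turning to the 248-bein, note a quick consistency check on the gauge (\ref{11bein}): since the elfbein is block triangular, its determinant factorizes as $\det E_M^{~A} = \Delta^{-3} \det e_\mu^{~a} \cdot \det e_m^{~a} = \Delta^{-2} \det e_\mu^{~a}$, so the Weyl rescaling bookkeeping can be verified numerically. The sketch below uses random sample data; the numerical vielbeine are illustrative placeholders, not solutions of any field equations:

```python
import numpy as np

rng = np.random.default_rng(0)

e3 = rng.normal(size=(3, 3))       # sample dreibein e_mu^a
e8 = rng.normal(size=(8, 8))       # sample internal achtbein e_m^a
B = rng.normal(size=(3, 8))        # sample Kaluza Klein vectors B_mu^m
Delta = np.linalg.det(e8)          # Delta := det e_m^a

# Elfbein in the SO(1,2) x SO(8) gauge (11bein): block upper triangular
E = np.zeros((11, 11))
E[:3, :3] = e3 / Delta             # Weyl rescaled dreibein
E[:3, 3:] = B @ e8                 # B_mu^m e_m^a
E[3:, 3:] = e8

# Block triangularity: det E = Delta**(-3) * det(e3) * Delta
assert np.isclose(np.linalg.det(E), np.linalg.det(e3) / Delta**2)
```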
This 248-bein, which in the reduction to three dimensions contains all the propagating bosonic matter degrees of freedom of $d=3,N=16$ supergravity, is defined in a special SO(16) gauge by \begin{eqnarray} (e^m_{IJ},e^m_A ) := \left\{ \begin{array}{ll} \Delta^{-1} e_a^{~m} \Gamma^a_{\alpha \dot \beta} & \mbox{if $[IJ]$ or $A = (\alpha \dot \beta)$}\\ 0 & \mbox{otherwise} \end{array} \right. \end{eqnarray} where the ${\rm SO(16)}$ indices $IJ$ or $A$ are decomposed w.r.t. the diagonal subgroup ${\rm SO}(8)\equiv ({\rm SO}(8)\times {\rm SO}(8))_{diag}$ of ${\rm SO(16)}$ (see \cite{nic1} for details). Being the inverse densitized internal achtbein contracted with an SO(8) $\Gamma$-matrix, this object is very much analogous to the inverse densitized triad in the framework of Ashtekar's reformulation of Einstein's theory \cite{A}. Note that, due to its rectangularity, there does not exist an inverse for the 248-bein (nor is one needed for the supersymmetry variations and the equations of motion!). In addition we need the composite fields $(Q_{\mu}^{IJ}, P_{\mu}^{A})$ and $(Q_{m}^{IJ}, P_{m}^{A})$, which together make up an $E_8$ connection in eleven dimensions and whose explicit expressions in terms of the $d=11$ coefficients of anholonomity and the four-index field strength $F_{MNPQ}$ can be found in \cite{nic1}. The new geometry is encoded into algebraic constraints between the vielbein components, which are without analog in ordinary Riemannian geometry because they rely in an essential way on special properties of the exceptional group $E_8$. 
We have \begin{eqnarray} e^m_A e^n_A - \bruch{1}{2}e^m_{IJ} e^n_{IJ} = 0 \label{id1} \end{eqnarray} and \begin{eqnarray} \Gamma^{IJ}_{AB} \Big( e^m_B e^n_{IJ} - e^n_B e^m_{IJ} \Big) = 0 \qquad \Gamma^{IJ}_{AB} e^m_A e^n_B + 4 e^m_{K[I} e^n_{J]K} = 0 \label{id2} \end{eqnarray} where $\Gamma^I_{A\dot A}$ are the standard SO(16) $\Gamma$-matrices and $\Gamma_{AB}^{IJ}\equiv (\Gamma^{[I} \Gamma^{J]})_{AB}$, etc.; the minus sign in (\ref{id1}) reflects the fact that we are dealing with the maximally non-compact form $E_{8(+8)}$. While the SO(16) covariance of these equations is manifest, it turns out, remarkably, that they are also covariant under $E_8$. Obviously, (\ref{id1}) and (\ref{id2}) correspond to the singlet and the adjoint representations of $E_8$. More complicated are the following relations transforming in the $\bf 3875$ representation of $E_8$ \begin{eqnarray} e^{(m}_{IK} e^{n)}_{JK} - \bruch{1}{16} \delta_{IJ} e^m_{KL} e^n_{KL} &=& 0 \nonumber \\ \Gamma^K_{\dot A B} e^{(m}_B e^{n)}_{IK} - \bruch{1}{14} \Gamma^{IKL}_{\dot A B} e^{(m}_B e^{n)}_{KL} &=& 0 \nonumber \\ e^{(m}_{[IJ} e^{n)}_{KL]} + \bruch{1}{24} e^m_A \Gamma^{IJKL}_{AB} e^n_B &=& 0 \label{id4} \end{eqnarray} Yet another set of relations involves the $\bf 27000$ representation of $E_8$ \cite{NM}. The 248-bein and the new connection fields are subject to a ``vielbein postulate" similar to the usual vielbein postulate stating the covariant constancy of the vielbein w.r.t. 
the generally covariant and Lorentz covariant derivative: \begin{eqnarray} (\partial_\mu - B_\mu^{~n}\partial_n) e_{IJ}^{m} + \partial_{n} B_{\mu}{}^{n} e^{m}_{IJ} + \partial_{n}B_{\mu}{}^{m} e^{n}_{IJ} + 2\, {{Q_\mu}^K}_{[I} e^m_{J]K} + P_\mu^A \Gamma^{IJ}_{AB} e^m_B &=& 0 \nonumber \\ (\partial_\mu - B_\mu^{~n} \partial_n) e_{A}^{m} + \partial_{n} B_{\mu}{}^{m} e^{n}_{A} + \partial_{n}B_{\mu}{}^{n} e^{m}_{A} + \bruch{1}{4} Q_\mu^{IJ} \Gamma^{IJ}_{AB} e^m_B - \bruch{1}{2} \Gamma^{IJ}_{AB} P_{\mu}^{B} e^{m}_{IJ} &=& 0 \nonumber \\ \partial_m e^{n}_{IJ} + 2\, {{Q_m}^K}_{[I} e^n_{J]K} +P_{m}^{A} \Gamma^{IJ}_{AB} e^n_B & = & 0 \nonumber \\ \partial_m e^{n}_{A} + \bruch{1}{4}Q_m^{IJ} \Gamma^{IJ}_{AB} e^n_B -\bruch{1}{2} \Gamma^{IJ}_{AB} P_{m}^{B} e^{n}_{IJ} & = & 0 \label{VVP4} \end{eqnarray} Like (\ref{id1})--(\ref{id4}), these relations are $E_8$ covariant. It must be stressed, however, that the full theory of course does not respect $E_8$ invariance. A puzzling feature of (\ref{VVP4}) is that the covariantization w.r.t. an affine connection is ``missing'' in these equations, even though the theory is still invariant under $d=11$ coordinate transformations. One can now show that the supersymmetry variations of $d=11$ supergravity can be entirely expressed in terms of these new variables (and their fermionic partners). The reduction of $d=11$ supergravity to three dimensions yields $d=3, N=16$ supergravity \cite{MS}, and is accomplished rather easily, since no duality redefinitions are needed any more, unlike in \cite{CJ}. The propagating bosonic degrees of freedom in three dimensions are all scalar, and combine into a matrix ${\mathcal{V}}(x)$, which is an element of a non-compact $E_{8(+8)}/{\rm SO(16)}$ coset space, and whose dynamics is governed by a non-linear $\sigma$-model coupled to $d=3$ gravity.
The identification of the 248-bein with the $\sigma$-model field ${\mathcal{V}}\in E_8$ is given by \begin{eqnarray} e^m_{IJ} = \bruch{1}{60}{\rm Tr} \, \big( Z^m {\mathcal{V}} X^{IJ} {\mathcal{V}}^{-1} \big) \qquad e^m_A = \bruch{1}{60}{\rm Tr} \, \big( Z^m {\mathcal{V}} Y^A {\mathcal{V}}^{-1} \big) \label{bein1} \end{eqnarray} where $X^{IJ}$ and $Y^A$ are the compact and non-compact generators of $E_8$, respectively, and where the $Z^m$ for $m=3,...,10$ are eight non-compact commuting generators obeying ${\rm Tr} (Z^m Z^n) = 0$ for all $m$ and $n$ (the existence of eight such generators is a consequence of the fact that the coset space $E_{8(+8)}/{\rm SO(16)}$ has real rank 8 and therefore admits an eight-dimensional maximal flat and totally geodesic submanifold \cite{H}). This reduction provides a ``model'' for the exceptional geometry, where the relations (\ref{id1})--(\ref{VVP4}) can be tested by means of completeness relations for the $E_8$ Lie algebra generators in the adjoint representation. Of course, this is not much of a test since all dependence on the internal coordinates is dropped in (\ref{bein1}), and the terms involving $B_\mu^{~m}$ disappear altogether. It would be desirable to find other ``models'' with non-trivial dependence on the internal coordinates. The only example of this type so far is provided by the $S^7$ truncation of the ${\rm SO}(1,3) \times {\rm SU}(8)$ invariant version of $d=11$ supergravity \cite{dewnic2}. \section{More Symmetries} The emergence of hidden symmetries of the exceptional type in extended supergravities \cite{CJ} was a remarkable and, at the time, quite unexpected discovery. It took some effort to show that the general pattern continues when one descends to $d=2$ and that the hidden symmetries become infinite dimensional \cite{Julia1,Julia2,BM,BMG,N,JN1,BJ}, generalizing the Geroch group of general relativity \cite{Geroch}.
As we will see, even the coset structure remains, although the mathematical objects one deals with become a lot more delicate. The fact that the construction described above works with a 4+7 and 3+8 split of the indices suggests that we should be able to go even further and to construct versions of $d=11$ supergravity with infinite dimensional tangent space symmetries, which would be based on a 2+9 or even a 1+10 split of the indices. This would also be desirable in view of the fact that the new versions are ``simple'' only in their internal sectors. The general strategy is thus to further enlarge the internal sector by absorbing more and more degrees of freedom into it, such that in the final step corresponding to a 1+10 split, only an einbein is left in the low dimensional sector. Although the actual elaboration of these ideas has to be left to future work, I will try to give at least a flavor of some anticipated key features. \subsection{Reduction to two dimensions} Let us first recall some facts about dimensional reduction of maximal supergravity to two dimensions. Following the empirical rules of dimensional reduction one is led to predict $E_9 = E_8^{(1)}$ as a symmetry for the dimensional reduction of $d=11$ supergravity to two dimensions \cite{Julia1}. This expectation is borne out by the existence of a linear system for maximal $N=16$ supergravity in two dimensions \cite{nic2,NW} (see \cite{BelZak,BM} for the bosonic theory). The linear system requires the introduction of an extra ``spectral'' parameter $t$, and the extension of the $\sigma$-model matrix ${\mathcal{V}} (x)$ to a matrix ${\widehat{\mathcal{V}}}(x;t)$ depending on this parameter, as is generally the case for integrable systems in two dimensions. An unusual feature is that, due to the presence of gravitational degrees of freedom, this parameter becomes coordinate dependent, i.e.
we have $t=t(x;w)$, where $w$ is an integration constant, sometimes referred to as the ``constant spectral parameter'', whereas $t$ itself is called the ``variable spectral parameter''. Here, we are mainly concerned with the symmetry aspects of this system, and with what they can teach us about the $d=11$ theory itself. The coset structure of the higher dimensional theories has a natural continuation in two dimensions, with the only difference that the symmetry groups are infinite dimensional. This property is manifest from the transformation properties of the linear system matrix ${\widehat{\mathcal{V}}}$, with a global affine symmetry acting from the left, and a local symmetry corresponding to some ``maximal compact'' subgroup acting from the right: \begin{eqnarray} {\widehat{\mathcal{V}}} (x;t) \longrightarrow g(w) {\widehat{\mathcal{V}}}(x;t) h(x;t) \end{eqnarray} Here $g(w)\in E_9$ with affine parameter $w$, and the subgroup to which $h(x;t)$ belongs is characterized as follows \cite{Julia2,BM}. Let $\tau$ be the involution characterizing the coset space $E_{8(+8)}/{\rm SO(16)}$: then the group ${\rm SO}(16)^\infty_{\varepsilon}$ containing $h(t)$ is defined to consist of all $\tau^\infty$ invariant elements of $E_9$, where the extended involution $\tau^\infty$ is defined by $\tau^\infty(h(t)):= \tau h(\varepsilon t^{-1})$, with $\varepsilon=+1$ (or $-1$) for a Lorentzian (Euclidean) worldsheet. For $\varepsilon=1$, which is the case we are mainly interested in, we will write ${\rm SO}(16)^\infty \equiv {\rm SO}(16)^\infty_{\varepsilon}$. We also note that ${\rm SO}(16)^\infty_{\varepsilon}$ is different from the affine extension of ${\rm SO(16)}$ for either choice of sign.
What has been achieved by the coset space description is the following: by representing the ``moduli space of solutions'' ${\mathcal{M}}$ (of the bosonic equations of motion of $d=11$ supergravity with nine commuting space-like Killing vectors) as \begin{eqnarray} {\mathcal{M}} = \frac{{\rm solutions \, of \, field \, equations}}% {{\rm diffeomorphisms}} = \frac{E_9}{{\rm SO}(16)^\infty} \label{coset1} \end{eqnarray} we have managed to endow this space, which a priori is very complicated, with a group theoretic structure that makes it much easier to handle. In particular, the integrability of the system is directly linked to the fact that ${\mathcal{M}}$ possesses an infinite dimensional ``isometry group'' $E_9$. The introduction of infinitely many gauge degrees of freedom embodied in the subgroup ${\rm SO}(16)^\infty$ linearizes and localizes the action of this isometry group on the space of solutions. Of course, in making such statements, one should keep in mind that a mathematically rigorous construction of such spaces is a thorny problem. This is likewise true for the infinite dimensional groups\footnote{For instance, the Geroch group can be defined rigorously to consist of all maps from the complex $w$ plane to $SL(2,{\bf R})$ with meromorphic entries. With this definition, one obtains all multisoliton solutions of Einstein's equations, and on this solution space the group acts transitively by construction. Whether this is the right choice or not is then a matter of physics, not mathematics.} and their associated Lie algebras; the latter being infinite dimensional vector spaces, there are myriad ways of equipping them with a topology. We here take the liberty of ignoring these subtleties, not least because these spaces ultimately will have to be ``quantized'' anyway. There is a second way of defining the Lie algebra of ${\rm SO}(16)^\infty_{\varepsilon}$ which relies on the Chevalley-Serre presentation.
Given a finite dimensional non-compact Lie group $G$ with maximal compact subgroup $H$, a necessary condition for this prescription to work is that ${\rm dim} \,H = \frac12 ({\rm dim}\, G - {\rm rank}\, G)$, and we will subsequently extend this prescription to the infinite dimensional case. Let us first recall that any (finite or infinite dimensional) Kac Moody algebra is recursively defined in terms of multiple commutators of the Chevalley generators subject to certain relations \cite{Kac}. More specifically, given a Cartan matrix $A_{ij}$ and the associated Dynkin diagram, one starts from a set of $sl(2,{\bf R})$ generators $\{e_i,f_i,h_i\}$, one for each node of the Dynkin diagram, which in addition to the standard $sl(2,{\bf R})$ commutation relations \begin{eqnarray} [h_i , h_j] = 0 \qquad [e_i , f_j] = \delta_{ij} h_j \nonumber \end{eqnarray} \begin{eqnarray} [h_i , e_j] = A_{ij} e_j \qquad [h_i , f_j ] = -A_{ij} f_j \label{Serre1} \end{eqnarray} are subject to the multilinear Serre relations \begin{eqnarray} [e_i,[e_i,...[e_i,e_j]...]] = 0 \qquad [f_i,[f_i,...[f_i,f_j]...]] = 0 \label{Serre2} \end{eqnarray} where the commutators are $(1-A_{ij})$-fold ones. The Lie algebra is then by definition the linear span of all multiple commutators which do not vanish by virtue of these relations. To define the subalgebra ${\rm SO}(16)^\infty_{\varepsilon}$, we first recall that the Chevalley involution $\theta$ is defined by \begin{eqnarray} \theta(e_i) = -f_i \qquad \theta(f_i) = -e_i \qquad \theta(h_i) = -h_i \end{eqnarray} This involution, like the ones to be introduced below, leaves invariant the defining relations (\ref{Serre1}) and (\ref{Serre2}) of the Kac Moody algebra, and extends to the whole Lie algebra via the formula $\theta ([x,y])=[\theta (x),\theta (y)]$.
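For the smallest non-trivial case these relations can be checked by direct computation: in $sl(3,{\bf R})$, with Cartan matrix $A = \left(\begin{smallmatrix} 2 & -1 \\ -1 & 2 \end{smallmatrix}\right)$, the Chevalley generators are realized by elementary matrices, and the Serre relations reduce to twofold commutators. A minimal numerical sketch (the matrix realization is the standard defining representation; the check is purely illustrative):

```python
import numpy as np

def Eij(i, j):
    # elementary 3x3 matrix with a single 1 at position (i, j)
    m = np.zeros((3, 3)); m[i, j] = 1.0; return m

def comm(x, y): return x @ y - y @ x

A = np.array([[2, -1], [-1, 2]])               # Cartan matrix of sl(3,R)
e = [Eij(0, 1), Eij(1, 2)]                     # e_1, e_2
f = [Eij(1, 0), Eij(2, 1)]                     # f_1, f_2
h = [Eij(0, 0) - Eij(1, 1), Eij(1, 1) - Eij(2, 2)]

# Chevalley relations (Serre1)
for i in range(2):
    for j in range(2):
        assert np.allclose(comm(h[i], h[j]), 0)
        assert np.allclose(comm(e[i], f[j]), h[i] if i == j else 0 * h[0])
        assert np.allclose(comm(h[i], e[j]), A[i, j] * e[j])
        assert np.allclose(comm(h[i], f[j]), -A[i, j] * f[j])

# Serre relations (Serre2): here (1 - A_ij) = 2-fold commutators vanish
assert np.allclose(comm(e[0], comm(e[0], e[1])), 0)
assert np.allclose(comm(f[0], comm(f[0], f[1])), 0)
```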
It is not difficult to see that, for $E_8$ (and also for $sl(n,{\bf R})$), we have $\tau = \theta$, and the maximal compact subalgebras defined above correspond to the subalgebras generated by the multiple commutators of the $\theta$ invariant elements $(e_i-f_i)$ in both cases. The trick is now to carry over this definition to the affine extension, whose associated Cartan matrix has a zero eigenvalue. To do this, however, we need a slight generalization of the above definition; for this purpose, we consider involutions $\omega$ that can be represented as products of the form \begin{eqnarray} \omega = \theta \cdot s \label{involution} \end{eqnarray} where the involution $s$ acts as \begin{eqnarray} s (e_i) = s_i e_i \qquad s (f_i) = s_i f_i \qquad s (h_i) = h_i \label{s} \end{eqnarray} with $s_i = \pm 1$. It should be noted that different choices of $s_i$ do not necessarily lead to inequivalent involutions (the general problem of classifying the involutive automorphisms of infinite dimensional Kac Moody algebras has so far not been completely solved, see e.g. \cite{inv}\footnote{I am very grateful to C.~Daboul for helpful discussions on this topic.}). In particular for $E_9$, which is obtained from $E_8$ by adjoining another set $\{e_0,f_0,h_0\}$ of Chevalley generators, we take $s_i =1$ for all $i\geq 1$, whereas $s_0 = \varepsilon$, with $\varepsilon$ as before, i.e. $\varepsilon=+1$ (or $-1$) for a Lorentzian (Euclidean) worldsheet. Thus, on the extended Chevalley generators, \begin{eqnarray} \omega (e_0) = - \varepsilon f_0 \qquad \omega (f_0) = - \varepsilon e_0 \qquad \omega (h_0) = -h_0 \label{involution1} \end{eqnarray} With this choice, the involution $\omega$ coincides with the involutions defined before for the respective choices of $\varepsilon$, i.e. $\omega = \tau^\infty$, and therefore the invariant subgroups are the same, too.
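For $sl(3,{\bf R})$ the statement $\tau = \theta$ can be made completely explicit: in the defining matrix representation the Chevalley involution acts as $\theta(x) = -x^T$, its invariant elements $(e_i - f_i)$ are antisymmetric matrices, i.e. they generate the maximal compact subalgebra $so(3)$, and they have negative norm w.r.t. the trace form. A minimal numerical sketch (illustrative only):

```python
import numpy as np

def Eij(i, j):
    # elementary 3x3 matrix with a single 1 at position (i, j)
    m = np.zeros((3, 3)); m[i, j] = 1.0; return m

theta = lambda x: -x.T                     # Chevalley involution of sl(n,R)
comm = lambda x, y: x @ y - y @ x

e = [Eij(0, 1), Eij(1, 2)]
f = [Eij(1, 0), Eij(2, 1)]
h = [Eij(0, 0) - Eij(1, 1), Eij(1, 1) - Eij(2, 2)]

# theta acts as required on the Chevalley generators ...
for i in range(2):
    assert np.allclose(theta(e[i]), -f[i])
    assert np.allclose(theta(f[i]), -e[i])
    assert np.allclose(theta(h[i]), -h[i])

# ... and is a Lie algebra automorphism: theta([x,y]) = [theta(x), theta(y)]
x, y = e[0] + 2 * f[1] + h[0], f[0] - h[1]
assert np.allclose(theta(comm(x, y)), comm(theta(x), theta(y)))

# The fixed points e_i - f_i are antisymmetric (so(3)) with negative norm
for i in range(2):
    k = e[i] - f[i]
    assert np.allclose(k, -k.T)
    assert np.trace(k @ k) < 0             # <k|k> = Tr(k^2) = -2
```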
For $\varepsilon = 1$, the involution $\omega$ defines an infinite dimensional ``maximal compact'' subalgebra consisting of all the negative norm elements w.r.t. the standard bilinear form \begin{eqnarray} \langle e_i| f_j\rangle = \delta_{ij} \qquad \langle h_i| h_j\rangle = A_{ij} \end{eqnarray} (the norm of any given multiple commutator can be determined recursively from the fundamental relation $\langle[x,y]|z\rangle = \langle x|[y,z]\rangle$). The notion of ``compactness'' here is thus algebraic, not topological: the subgroup ${\rm SO}(16)^\infty$ will not be compact in the topological sense (recall the well known example of the unit ball in an infinite dimensional Hilbert space, which is bounded but not compact in the norm topology). On the other hand, for $\varepsilon=-1$, the group ${\rm SO}(16)^\infty_{\varepsilon}$ is not even compact in the algebraic sense, as $e_0+f_0$ has positive norm. However, this is in accord with the expectation that ${\rm SO}(16)^\infty_{\varepsilon}$ should contain the (non-compact) group SO(1,8) rather than SO(9) if one of the compactified dimensions is time-like. \subsection{$2+9$ split} Let us now consider the extension of the results described in section 2 to the situation corresponding to a 2+9 split of the indices. Elevating the local symmetries of $N=16$ supergravity from two to eleven dimensions would require the existence of yet another extension of the theory, for which the Lorentz group SO(1,10) is replaced by ${\rm SO}(1,1) \times {\rm SO}(16)^\infty$; the subgroup ${\rm SO}(16)^\infty$ can be interpreted as an extension of the transverse group SO(9) in eleven dimensions. Taking the hint from (\ref{11bein}), we would now decompose the elfbein into a zweibein and nine Kaluza Klein vectors $B_\mu^{~m}$ (with $m=2,...,10$).
The remaining internal neunbein would have to be replaced by an ``Unendlichbein'' $\big(e^m_{IJ}(x;t),e^m_A(x;t)\big)$, depending on a spectral parameter $t$, necessary to parametrize the infinite dimensional extension of the symmetry group. However, in eleven dimensions, there is no analog of the dualization mechanism, which would ensure that despite the existence of infinitely many dual potentials, there are only finitely many physical degrees of freedom. This indicates that if the construction works it will take us beyond $d=11$ supergravity. Some constraints on the geometry can be deduced from the requirement that in the dimensional reduction to $d=2$, there should exist a formula analogous to (\ref{bein1}), but with ${\mathcal{V}}$ replaced by the linear system matrix ${\widehat{\mathcal{V}}}$, or possibly even the enlarged linear system of \cite{JN1}. Evidently, we would need a ninth nilpotent generator to complement the $Z^m$'s of (\ref{bein1}); an obvious candidate is the central charge generator $c$, since it obeys $\langle c|c \rangle = \langle c| Z^m \rangle = 0$ for all $m=3,...,10$. The parameter $t$, introduced somewhat ad hoc for the parametrization of the unendlichbein, must obviously coincide with the spectral parameter of the $d=2$ theory, and the generalized ``unendlichbein postulate'' should evidently reduce to the linear system of $d=2$ supergravity in this reduction. To write it down, we need to generalize the connection coefficients appearing in the linear system. The latter are given by \begin{eqnarray} {\mathcal{Q}}_\mu^{IJ} = Q_\mu^{IJ} + \dots \qquad {\mathcal{P}}_\mu^A = \frac{1+t^2}{1-t^2} P_\mu^A + \frac{2t}{1-t^2} \varepsilon_{\mu \nu} P^{\nu A} + \dots \label{conn} \end{eqnarray} with $Q_\mu^{IJ}$ and $P_\mu^A$ as before; the dots indicate $t$ dependent fermionic contributions which we omit.
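The $t$ dependence in (\ref{conn}) is precisely that of an ${\rm SO}(16)^\infty$ connection: invariance under $\tau^\infty$ (for $\varepsilon=1$) requires ${\mathcal{Q}}(t^{-1})={\mathcal{Q}}(t)$ and ${\mathcal{P}}(t^{-1})=-{\mathcal{P}}(t)$, since $\tau$ flips the sign of the non-compact directions, and the $t$ dependent coefficients in (\ref{conn}) indeed flip sign under $t\to t^{-1}$. A short symbolic check of the bosonic coefficient functions (illustrative only):

```python
import sympy as sp

t = sp.symbols('t')

# Bosonic coefficients multiplying P_mu^A and eps_{mu nu} P^{nu A} in (conn)
a = (1 + t**2) / (1 - t**2)
b = 2 * t / (1 - t**2)

# Under t -> 1/t both coefficients are odd, so P(1/t) = -P(t) ...
assert sp.simplify(a.subs(t, 1 / t) + a) == 0
assert sp.simplify(b.subs(t, 1 / t) + b) == 0

# ... while Q carries no t dependence, hence Q(1/t) = Q(t).  This is the
# invariance condition tau(h(1/t)) = h(t) characterizing SO(16)^infty.
```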
A very important difference from section 2, where the tangent space symmetry was still finite dimensional, is that the Lie algebra of ${\rm SO}(16)^\infty$ also involves the $P$'s, and not only the $Q$'s. More specifically, from the $t$ dependence of the dimensionally reduced connections in (\ref{conn}) we infer that the connections $({\mathcal{Q}}_M^{IJ}(x;t), {\mathcal{P}}_M^A(x;t))$ constitute an ${\rm SO}(16)^\infty$ (and not an $E_9$) gauge connection. This means that the covariantizations in the generalized vielbein postulate are now in precise correspondence with the local symmetries, in contrast with the relations (\ref{VVP4}) which look $E_8$ covariant, whereas the full theory is invariant only under local ${\rm SO(16)}$. To write down an ansatz, we put \begin{eqnarray} {\mathcal{D}}_\mu := \partial_\mu - B_\mu^{~n} \partial_n + \dots \end{eqnarray} where the dots stand for terms involving derivatives of the Kaluza Klein vector fields. Then the generalization of (\ref{VVP4}) should read \begin{eqnarray} {\mathcal{D}}_\mu e_{IJ}^{m}(t) + 2\, {{{\mathcal{Q}}_\mu}^K}_{[I} (t)e^m_{J]K}(t) + \, {\mathcal{P}}_\mu^A(t) \Gamma^{IJ}_{AB} e^m_B (t) &=& 0 \nonumber \\ {\mathcal{D}}_\mu e_{A}^{m}(t) + \bruch{1}{4} {\mathcal{Q}}_\mu^{IJ}(t) \Gamma^{IJ}_{AB} e^m_B(t) -\, \bruch{1}{2} \Gamma^{IJ}_{AB} {\mathcal{P}}_{\mu}^{B}(t) e^{m}_{IJ}(t) &=& 0 \nonumber \\ \partial_m e^{n}_{IJ}(t) + 2\, {{{\mathcal{Q}}_m}^K}_{[I}(t) e^n_{J]K}(t) +\, {\mathcal{P}}_{m}^{A}(t) \Gamma^{IJ}_{AB} e^n_B(t) & = & 0 \nonumber \\ \partial_m e^{n}_{A}(t) + \bruch{1}{4}{\mathcal{Q}}_m^{IJ}(t) \Gamma^{IJ}_{AB} e^n_B(t) -\bruch{1}{2} \, \Gamma^{IJ}_{AB} {\mathcal{P}}_{m}^{B}(t) e^{n}_{IJ}(t) & = & 0 \label{VVP5} \end{eqnarray} Of course, the challenge is now to find explicit expressions for the internal components ${\mathcal{Q}}_m^{IJ}(x;t)$ and ${\mathcal{P}}_m^A(x;t)$, such that (\ref{VVP5}) can be interpreted as a $d=11$ generalization of the linear system of dimensionally reduced supergravity.
Another obvious question concerns the fermionic partners of the unendlichbein: in two dimensions, the linear system matrix contains all degrees of freedom, including the fermionic ones, and the local $N=16$ supersymmetry can be bosonized into a local ${\rm SO}(16)^\infty$ gauge transformation \cite{NW}. Could this mean that there is a kind of bosonization in eleven dimensions or M-Theory? This idea may not be as outlandish as it sounds because a truly pregeometrical theory might be subject to a kind of ``pre-statistics'', such that the distinction between bosons and fermions arises only through a process of spontaneous symmetry breaking. \section{Yet more symmetries?} In 1982, B.~Julia conjectured that the dimensional reduction of maximal supergravity to one dimension should be invariant under a further extension of the $E$-series, namely (a non-compact form of) the hyperbolic Kac Moody algebra $E_{10}$ obtained by adjoining another set $\{e_{-1}, f_{-1}, h_{-1}\}$ of Chevalley generators to those of $E_9$ \cite{Julia3}\footnote{The existence of a maximal dimension for supergravity \cite{Nahm} would thus be correlated with the existence of a ``maximally extended'' hyperbolic Kac Moody algebra, which might thus explain the occurrence of maximum spin 2 for massless gauge particles in nature.}. As shown in \cite{nic5}, the last step of the reduction requires a null reduction if the affine symmetry of the $d=2$ theory is not to be lost. The reason is that the infinite dimensional affine symmetries of the $d=2$ theories always involve dualizations of the type \begin{eqnarray} \partial_\mu \varphi = \varepsilon_{\mu \nu} \partial^\nu \tilde \varphi \end{eqnarray} (in actual fact, there are more scalar fields, and the duality relation becomes non-linear, which is why one ends up with infinitely many dual potentials for each scalar degree of freedom). Dimensional reduction w.r.t. 
a Killing vector $\xi^\mu$ amounts to imposing the condition $\xi^\mu \partial_\mu \equiv 0$ on {\it all} fields, including dual potentials. Hence, \begin{eqnarray} \xi^\mu \partial_\mu \varphi = 0 \quad , \quad \xi^\mu \partial_\mu \tilde \varphi \equiv \eta^\mu \partial_\mu \varphi = 0 \end{eqnarray} where $\eta^\mu \equiv \varepsilon^{\mu \nu} \xi_\nu$. If $\xi^\mu$ and $\eta^\mu$ are linearly independent, this constraint would force all fields to be constant, which is clearly too strong a requirement. Hence we must demand that $\xi^\mu$ and $\eta^\mu$ are collinear, which implies \begin{eqnarray} \xi^\mu \xi_\mu =0 , \end{eqnarray} i.e. the Killing vector must be null. Starting from this observation, it was shown in \cite{nic5} that the Matzner Misner $sl(2,{\bf R})$ symmetry of pure gravity can be formally extended to an $sl(3,{\bf R})$ algebra in the reduction of the vierbein from four to one dimension. Combining this $sl(3,{\bf R})$ with the Ehlers $sl(2,{\bf R})$ of ordinary gravity, or with the $E_8$ symmetry of maximal supergravity in three dimensions, one is led to the hyperbolic algebra ${{\mathcal{F}}}_3$ \cite{FF} for ordinary gravity, and to $E_{10}$ for maximal supergravity. The transformations realizing the action of the Chevalley generators on the vierbein components can be worked out explicitly, and the Serre relations can be formally verified \cite{nic5} (for $E_{10}$, this was shown more recently in \cite{Mizo}). There is thus some evidence for the emergence of hyperbolic Kac Moody algebras in the reduction to one null dimension, but the difficult open question that remains is what the configuration space is on which this huge symmetry acts. This space is expected to be much bigger than the coset space (\ref{coset1}). Now, already for the $d=2$ reduction there are extra degrees of freedom that must be taken into account in addition to the propagating ones.
Namely, the full moduli space involving all bosonic degrees of freedom should also include the moduli of the zweibein, which are not contained in (\ref{coset1}). For each point on the worldsheet, the zweibein is an element of the coset space ${\rm GL}(2,{\bf R})/{\rm SO}(1,1)$; although it has no local degrees of freedom any more, it still contains the global information about the conformal structure of the world sheet $\Sigma$. Consequently, we should consider the Teichm\"uller space \begin{eqnarray} {\mathcal{T}} = \frac{ \{e_\mu^{~\alpha}(x) \, | \, x\in \Sigma \} }% {{\rm SO}(1,1)\times{\rm Weyl} (\Sigma)\times{\rm Diff}_0 (\Sigma)} \label{coset2} \end{eqnarray} as part of the configuration space of the theory (see \cite{Verlinde} for a detailed description of ${\mathcal{T}}$). In fact, we should even allow for arbitrary genus of the worldsheet, and replace ${\mathcal{T}}$ by the ``universal Teichm\"uller space'' ${\widetilde{\mathcal{T}}}$. This infinite dimensional space can be viewed as the configuration space of non-perturbative string theory \cite{FS}. For the models under consideration here, however, even ${\widetilde{\mathcal{T}}}$ is not big enough, as we must also take into account the dilaton $\rho$ and the non-propagating Kaluza Klein vector fields in two dimensions. For the former, a coset space description was proposed in \cite{JN1}. On the other hand, the Kaluza Klein vectors and the cosmological constant they could generate in two dimensions have been largely ignored in the literature. Even if one sets their field strengths equal to zero (there are arguments that the Geroch group, and hence infinite duality symmetries, are incompatible with a nonzero cosmological constant in two dimensions), there still remain topological degrees of freedom for higher genus world sheets. The existence of inequivalent conformal structures is evidently important for the null reductions, as the former are in one-to-one correspondence with the latter.
Put differently, the inequivalent null reductions are precisely parametrized by the space (\ref{coset2}). The extended symmetries should thus not only act on one special null reduction (set of plane wave solutions of Einstein's equations), but relate different reductions. Indeed, it was argued in \cite{Mizo} that, for a toroidal worldsheet, the new $sl(2,{\bf R})$ transformations associated with the over-extended Chevalley generators change the conformal structure, but only for non-vanishing holonomies of the Kaluza Klein vector fields on the worldsheet. This indicates that the non-trivial realization of the hyperbolic symmetry requires the consideration of non-trivial worldsheet topologies. The dimensionally reduced theory thereby retains a memory of its two-dimensional ancestor. It is therefore remarkable that, at least for isomonodromic solutions of Einstein's theory, the $d=2$ theory exhibits a factorization of the equations of motion akin to, but more subtle than the holomorphic factorization of conformal field theories \cite{KN}. In other words, there may be a way to think of the $d=2$ theory as being composed of two chiral halves just as for the closed string. Consequently, a truncation to one null dimension may not be necessary after all if the theory factorizes all by itself. In summary, what we are after here is a group theoretic unification of all these moduli spaces that would be analogous to (\ref{coset1}) above, and fuse the matter and the topological degrees of freedom. No such description seems to be available for (\ref{coset2}) (or ${\widetilde{\mathcal{T}}}$), and it is conceivable that only the total moduli space ${\widetilde{\mathcal{M}}}$ containing both ${\mathcal{M}}$ and ${\widetilde{\mathcal{T}}}$ as well as the dilaton and the Kaluza Klein, and perhaps even the fermionic, degrees of freedom is amenable to such an interpretation. 
Extrapolating the previous results, we are thus led to consider coset spaces $E_{10}/H$ with ${\rm SO}(16)^\infty\subset H \subset E_{10}$. As before, the introduction of the infinitely many spurious degrees of freedom associated with the gauge group $H$ would be necessary in order to ``linearize'' the action of $E_{10}$. What are the choices for $H$? One possibility would be to follow the procedure of the foregoing section, and to define $H = {\rm SO}(16)^{\infty \infty} \subset E_{10}$ in analogy with ${\rm SO}(16)^\infty\subset E_9$ by taking its associated Lie algebra to be the linear span of all $\omega$ invariant combinations of $E_{10}$ Lie algebra elements. To extend the affine involution to the full hyperbolic algebra, we would again invoke (\ref{involution}), setting $\varepsilon=+1$ in (\ref{involution1}) (since we now assume the worldsheet to be Lorentzian), which leaves us with the two choices $s_{-1}=\pm 1$. For $s_{-1}=+1$ we would get the ``maximal compact'' subalgebra of $E_{10}$, corresponding to the compactification of ten spacelike dimensions. A subtlety here is that a definition in terms of the standard bilinear form is no longer possible, unlike for affine and finite algebras, as this would now also include part of the Cartan subalgebra of $E_{10}$: due to the existence of a negative eigenvalue of the $E_{10}$ Cartan matrix, there exists a negative norm element $\sum _i n_i h_i$ of the Cartan subalgebra, which would have to be excluded from the definition of $H$ (cf. the footnote on p.~438 of \cite{JN1}). The alternative choice $s_{-1}=-1$ would correspond to reduction on a 9+1 torus. However, for the null reduction advocated here, physical reasoning motivates us to propose yet another choice for $H$. Namely, in this case, $H$ should contain the group ${\rm ISO}(9) \subset {\rm SO}(1,10)$ leaving invariant a null vector in eleven dimensions \cite{JN2}. 
To identify the relevant parabolic subgroup of $E_{10}$, which we denote by ${\rm ISO}(16)^\infty$, we recall \cite{nic5} that the over-extended Chevalley generators correspond to the matrices \begin{eqnarray} e_{-1} = \frac{1}{\sqrt{2}} \left( \begin{array}{clcr} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right) \qquad f_{-1} = \frac{1}{\sqrt{2}} \left( \begin{array}{clcr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{array}\right) \qquad h_{-1} = \frac{1}{2} \left( \begin{array}{clcr} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & -2 \end{array}\right) \end{eqnarray} in a notation where we only write out the components acting on the $0,1,2$ components of the elfbein, with all other entries vanishing. Evidently, we have $h_{-1}=d-c_-$ with \begin{eqnarray} d = \left( \begin{array}{clcr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{array}\right) \qquad c_- = - \frac{1}{2} \left( \begin{array}{clcr} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right) \end{eqnarray} where $d$ is the scaling operator on the dilaton $\rho$ , and $c_-$ is the central charge, alias the ``level counting operator'' of $E_{10}$, obeying $[c_-,e_{-1}]= -e_{-1}$ and $[c_-,f_{-1}]= +f_{-1}$ (and having vanishing commutators with all other Chevalley generators). Writing \begin{eqnarray} c_\pm := - \frac{1}{2} \left( \begin{array}{clcr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right) \pm \frac{1}{2} \left( \begin{array}{clcr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) \end{eqnarray} we see that the first matrix on the right scales the conformal factor, generating Weyl transformations (called ${\rm Weyl}(\Sigma)$ in (\ref{coset2})) on the zweibein, while the second generates the local SO(1,1) Lorentz transformations. In a lightcone basis, these symmetries factorize on the zweibein, which decomposes into two chiral einbeine. 
Consequently, Weyl transformations and local SO(1,1) can be combined into two groups ${\rm SO}(1,1)_\pm$ with respective generators $c_\pm$, and which act separately on the chiral einbeine. One of these, ${\rm SO}(1,1)_-$ (generated by $c_-$), becomes part of $E_{10}$. The other, ${\rm SO}(1,1)_+$, acts on the residual einbein and can be used to eliminate it by gauging it to one. Since $c_\pm$ acts in the same way on the conformal factor, we also recover the result of \cite{Julia2}. We wish to include both ${\rm ISO}(9)$ and ${\rm SO}(1,1)_-$ into the enlarged local symmetry $H={\rm ISO}(16)^\infty$, and thereby unify the longitudinal symmetries with the ``transversal'' group ${\rm SO}(16)^\infty$ discussed before. Accordingly, we define ${\rm ISO}(16)^\infty$ to be the algebra generated by the ${\rm SO}(16)^\infty$ Lie algebra together with $c_-$ and $e_{-1}$, as well as all their nonvanishing multiple commutators. The ``classical'' configuration space of M-Theory should then be identified with the coset space \begin{eqnarray} {\widetilde{\mathcal{M}}} = \frac{E_{10}}{{\rm ISO}(16)^\infty} \label{coset3} \end{eqnarray} Of course, we will have to worry about the fate of these symmetries in the quantum theory. Indeed, some quantum version of the symmetry groups appearing in (\ref{coset3}) must be realized on the Hilbert space of third quantized $N=16$ supergravity, such that $E_{10}$ becomes a kind of spectrum generating (rigid) symmetry on the physical states, while the gauge group ${\rm ISO}(16)^\infty$ gives rise to the constraints defining them. Because ``third quantization'' here is analogous to the transition from first quantized string theory to string field theory, the latter would have to be interpreted as multi-string states in some sense (cf. \cite{Witten3} for earlier suggestions in this direction; note also that the coset space (\ref{coset3}) is essentially generated by half of $E_{10}$, so there would be no ``anti-string states''). 
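The matrix identities quoted above for the over-extended Chevalley generators and the level counting operator are easy to check by direct multiplication. The following numpy sketch (an illustration added here, not part of the original argument) encodes the displayed $3\times 3$ blocks and verifies $h_{-1}=d-c_-$, the $sl(2,{\bf R})$ relations, and $[c_-,e_{-1}]=-e_{-1}$, $[c_-,f_{-1}]=+f_{-1}$:

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
# Over-extended Chevalley generators acting on the (0,1,2) block of the elfbein
e = s * np.array([[0., 0., 1.], [0., 0., 1.], [0., 0., 0.]])
f = s * np.array([[0., 0., 0.], [0., 0., 0.], [1., 1., 0.]])
h = 0.5 * np.array([[1., 1., 0.], [1., 1., 0.], [0., 0., -2.]])
# Dilaton scaling operator d and central charge ("level counting operator") c_-
d = np.diag([0., 0., -1.])
c = -0.5 * np.array([[1., 1., 0.], [1., 1., 0.], [0., 0., 0.]])

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(h, d - c)            # h_{-1} = d - c_-
assert np.allclose(comm(h, e), 2 * e)   # sl(2,R) relations
assert np.allclose(comm(h, f), -2 * f)
assert np.allclose(comm(e, f), h)
assert np.allclose(comm(c, e), -e)      # [c_-, e_{-1}] = -e_{-1}
assert np.allclose(comm(c, f), +f)      # [c_-, f_{-1}] = +f_{-1}
print("all commutation relations verified")
```

All other entries of the matrices vanish, so the truncation to this block suffices for the check.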
According to \cite{duality}, the continuous duality symmetries are broken to certain discrete subgroups over the integers in the quantum theory. Consequently, the quantum configuration space would be the left coset $$ {\widetilde {\mathcal{F}}} = {E_{10}({\bf Z})}\backslash {\widetilde{\mathcal{M}}} $$ and the relevant partition functions would have to be new kinds of modular forms defined on ${\widetilde {\mathcal{F}}}$. However, despite recent advances \cite{Bakas, Sen}, the precise significance of the (discrete) ``string Geroch group'' remains a mystery, and it is far from obvious how to extend the known results and conjectures for finite dimensional duality symmetries to the infinite dimensional case (these statements apply even more to possible discrete hyperbolic extensions; see, however, \cite{Mizo,GM}). Moreover, recent work \cite{KS} confirms the possible relevance of quantum groups in this context (in the form of ``Yangian doubles''). Returning to our opening theme, more should be said about the 1+10 split, which would lift up the ${\rm SO}(1,1)_+\times{\rm ISO}(16)^\infty$ symmetry, and the ``bein'' which would realize the exceptional geometry alluded to in the introduction, and on which ${\rm ISO}(16)^\infty$ would act as a generalized tangent space symmetry. However, as long as the 2+9 split has not been shown to work, and a manageable realization is not known for either $E_{10}$ or ${\rm ISO}(16)^\infty$, we must leave the elaboration of these ideas to the future. It could well prove worth the effort.\\ \noindent {\bf Acknowledgments:} The results described in section 2 are based on work done in collaboration with S.~Melosch. I would also like to thank C.~Daboul, R.W.~Gebert, H.~Samtleben and P.~Slodowy for stimulating discussions and comments.
\section*{I. INTRODUCTION} \end{center} Quantum teleportation deals with the transfer of information from one remote location to another without physical transport of the information or measurement on either side to confirm and verify the information content [1]. It rests on quantum correlations for which there are no classical analogues. Bennett and coworkers proposed a theoretical scheme in 1993 using the celebrated Bohm-Aharonov-Einstein-Podolsky-Rosen (EPR) pair and demonstrated the transfer of two bits of information without classical communication with a probability of 1/4, and with classical communication (at speeds less than that of light in vacuum) with probability 3/4 [2,3]. Transfer of the information content of a spin-1/2 particle was thus shown in theory {\em with certainty} and with {\em no intermediate observer monitoring/controlling} the process. Many experiments [4-10] have been performed subsequently which provide partial experimental support for this concept. A number of remarkable theoretical concepts and schemes have also been invented for single- and multi-particle teleportation [11-18]. The original EPR pair, which is maximally entangled and plays the role of the information carrier, has been supplemented with less-than-maximally-entangled (with respect to the EPR pair) three- and four-particle correlated states [19-22]. \par Some of the difficulties associated with multi-particle teleportation using correlated states are \begin{enumerate} \item Maximum entanglement in the same sense as in the case of the EPR pair cannot be achieved under experimental conditions at present. \item Three- or four-particle correlated states often require additional measurement(s) (Charlie) as an intermediate step, as opposed to a pair of particles.
\item The use of three-particle maximally entangled Greenberger-Horne-Zeilinger (GHZ) states as quantum carrier and projection basis for the teleportation of a single particle leads to failure of the process even in theory, since four out of the eight GHZ states have zero projections for the single particle, leading to null results. This means that it will be impossible for the receiver to reconstruct the unknown state sent by the sender [13]. \item Unitary transformations of the GHZ states which are robust with respect to tracing of one of the particles and which obviate the need for an intermediate observer have not been examined carefully. It must be mentioned that the Bell states are merely unitary transforms of other two-particle states, which, however, play a vital role in information processing. They differ from an unentangled or partially entangled state by a unitary transformation or two. Therefore it is necessary to identify proper, genuinely entangled states for multiparticle teleportation, which has not been done in a systematic manner until now. \item The extent of correlation between particles, which is a direct measure of entanglement, is not defined uniformly for multiparticle systems, as against a clear-cut definition for two-particle systems [23-25]. \end{enumerate} In this article we address all of the above problems and propose schemes both for optics experiments and for implementations using quantum gates. This article is organized as follows: \\ (a) In section II different complete set(s) of orthonormal entangled three-particle projection bases is (are) proposed by using a unitary transformation of the GHZ basis. It is shown that in this new basis the observer Charlie is not needed, and direct teleportation results instead of a controlled process. Correlation coefficients are proposed as a measurement criterion for the entanglement of multiple particles, using a standard Ursell-Mayer type expansion based on the principles of many-body statistical mechanics.
In addition to this, the three-particle GHZ basis has been used as a {\em quantum carrier} as well as a {\em projection basis} for single-particle teleportation. \\ (b) In section III two different sets of genuinely entangled four-particle states are proposed for the teleportation of an arbitrary two-particle state. The generalization of the same to an $N$-particle system is also suggested in detail. \\ (c) In section IV teleportation using quantum gates and three- and four-qubit computational bases is described with the appropriate quantum circuit. This is followed by the conclusion. \section*{II. THREE-PARTICLE ENTANGLED BASIS AND QUANTUM TELEPORTATION WITHOUT PARTICLE AVERAGING} We propose a three-particle basis (123) as follows: \begin{eqnarray} \left| \chi \right\rangle_{123}^{(1),(2)} = \frac{\left| \phi \rangle_{12}^+ \right. \otimes \left| 0 \rangle_3 \right. \pm \left| \phi \rangle_{12}^- \right. \otimes \left| 1 \rangle_3 \right. }{\sqrt{2}} & , & \left| \chi \right\rangle_{123}^{(3),(4)} = \frac{\left| \phi \rangle_{12}^+ \right. \otimes \left| 1 \rangle_3 \right. \pm \left| \phi \rangle_{12}^- \right. \otimes \left| 0 \rangle_3 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \chi \right\rangle_{123}^{(5),(6)} = \frac{\left| \psi \rangle_{12}^+ \right. \otimes \left| 0 \rangle_3 \right. \pm \left| \psi\rangle_{12}^- \right. \otimes \left| 1 \rangle_3 \right. }{\sqrt{2}} & {\rm and} & \left| \chi \right\rangle_{123}^{(7),(8)} = \frac{\left| \psi \rangle_{12}^+\right. \otimes \left| 1 \rangle_3 \right. \pm \left| \psi \rangle_{12}^- \right. \otimes \left| 0 \rangle_3 \right. }{\sqrt{2}} \end{eqnarray} where \begin{equation} \left| \psi\right\rangle^{\pm}_{12} = \frac{1}{\sqrt{2}} \left[ \, \left| 01 \right\rangle_{12} \pm \left| 10 \right\rangle_{12} \, \right] ~~~{\rm and}~~~ \left| \phi\right\rangle^{\pm}_{12} = \frac{1}{\sqrt{2}} \left[ \, \left| 00 \right\rangle_{12} \pm \left| 11 \right\rangle_{12} \, \right]\ .
\end{equation} These states are linear combinations of three-particle GHZ states. The entanglement properties of these states are similar to those of the three-particle GHZ states as far as the extent of correlation is concerned, and to those of the W states as far as robustness of entanglement with respect to tracing out the third particle is concerned [31,32]. The extent of correlation of entangled states is measured with the help of the correlation coefficients defined using the well-known statistical mechanical formula involving averages for many-body systems [26-30]. Correlation coefficients for two-particle and three-particle systems are defined as \begin{eqnarray} C^{ij}_{\alpha \beta } & = & \left\langle \sigma^i_{\alpha} \sigma^j_{\beta} \right\rangle - \left\langle \sigma ^i_{\alpha} \right\rangle \left\langle \sigma ^j_{\beta} \right\rangle \quad {\rm and} \\ C^{ijk}_{\alpha \beta \gamma } & = & \left\langle \sigma^i_{\alpha} \sigma^j_{\beta} \sigma^k_{\gamma} \right\rangle - \left\langle \sigma ^i_{\alpha} \right\rangle \left\langle \sigma ^j_{\beta} \sigma^k_{\gamma} \right\rangle - \left\langle \sigma ^j_{\beta} \right\rangle \left\langle \sigma ^i_{\alpha} \sigma^k_{\gamma} \right\rangle - \left\langle \sigma ^k_{\gamma} \right\rangle \left\langle \sigma ^i_{\alpha} \sigma^j_{\beta} \right\rangle + 2\left\langle \sigma ^i_{\alpha} \right\rangle \left\langle \sigma ^j_{\beta} \right\rangle \left\langle \sigma ^k_{\gamma} \right\rangle , \nonumber \\ & & \end{eqnarray} where the $\sigma$'s are the Pauli spin matrices for the indicated particles and $\alpha, \beta, \gamma \in \{ x, y, z \}$. Table I lists the non-zero correlation coefficients for the four Bell states of a pair of particles. The averages are calculated and compared, for the states \{$\left| \chi \right\rangle^{(1)} _{123} - \left| \chi \right\rangle^{(8)}_{123}$\} and the eight GHZ states, in Table II.
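The correlation coefficients of Eqs. 3 and 4 can be evaluated mechanically for any small state vector. As an illustration (this is our sketch, not the authors' code; the state and function names are ours), the following numpy snippet recovers the familiar nonzero two-particle coefficients $C_{xx}=+1$, $C_{yy}=-1$, $C_{zz}=+1$ for the Bell state $\left| \phi\right\rangle^{+}_{12}$, with all single-particle averages vanishing:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
sig = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
       'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
       'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def ev(state, ops):
    """Expectation value <state| op_1 (x) op_2 (x) ... |state>, real part."""
    M = ops[0]
    for op in ops[1:]:
        M = np.kron(M, op)
    return (state.conj() @ M @ state).real

def C2(state, a, b):
    """Two-particle correlation coefficient C^{12}_{ab} of Eq. 3."""
    return (ev(state, [sig[a], sig[b]])
            - ev(state, [sig[a], I2]) * ev(state, [I2, sig[b]]))

# Bell state |phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2.0)

for a, b in product('xyz', repeat=2):
    c = C2(phi_plus, a, b)
    if abs(c) > 1e-12:
        print(f"C_{a}{b} = {c:+.0f}")
```

The three-particle coefficient of Eq. 4 extends this in the obvious way, with three-fold Kronecker products and the corresponding subtraction of lower-order averages.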
The extent of correlation between three particles is the same in both sets, as can be seen from Table II; however, the advantage of the entangled basis proposed here over the GHZ states is that the states in Eq. 1 are robust with respect to tracing out the third particle, i.e., after the tracing the remaining pair is still entangled [31-33]. It is worth mentioning here that a linear combination of three-particle GHZ states such as \begin{equation} \left| \chi\right\rangle_{123} = \frac{1}{2}\left[\,\left|000\right\rangle_{123} + \left| 110\right\rangle_{123} + \left| 001\right\rangle_{123} + \left| 111\right\rangle_{123} \right] \end{equation} possesses no genuine three-particle quantum correlation. When calculated, all the correlation coefficients associated with the above state vanish, because the state can be expressed as the direct product state $ \left| \chi \right\rangle _{123}=\frac{1}{\sqrt{2}} [\left| 00 \right\rangle _{12} + \left| 11 \right\rangle _{12}] \otimes\frac{1}{\sqrt{2}}[\left| 0 \right\rangle _{3} + \left| 1 \right\rangle _{3}]. $ If we use the GHZ basis as a quantum channel to teleport unknown information encoded in a single particle, then the basis set given by Eq. 1 can be used as the projection basis; it thereby obviates the earlier difficulty of missing elements of the basis set (which arises when GHZ basis functions are used as a projection basis). \par In this scheme, the single-particle information is with Alice, given by $ \left| \phi \right\rangle_1 = a \left| 0 \right\rangle_1 + b \left| 1 \right\rangle_1 $ where $ \left\langle \phi _1 | \phi _1 \right\rangle = 1$ and $|a|^2 +|b|^2 = 1$.
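The factorization claim made above for the state of Eq. 5 can be confirmed in a few lines; a minimal numpy check (ours, purely for illustration):

```python
import numpy as np

ket = {'0': np.array([1., 0.]), '1': np.array([0., 1.])}

def basis(bits):
    """Computational-basis ket |bits> built from Kronecker products."""
    v = ket[bits[0]]
    for b in bits[1:]:
        v = np.kron(v, ket[b])
    return v

# The linear combination of GHZ states from Eq. 5 ...
chi = 0.5 * (basis('000') + basis('110') + basis('001') + basis('111'))
# ... and the claimed direct product (|00>+|11>)/sqrt(2) (x) (|0>+|1>)/sqrt(2)
bell_12 = (basis('00') + basis('11')) / np.sqrt(2.0)
plus_3 = (ket['0'] + ket['1']) / np.sqrt(2.0)

assert np.allclose(chi, np.kron(bell_12, plus_3))
print("Eq. 5 state is a direct product: no genuine three-particle correlation")
```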
The GHZ state (234), $ \left| \psi \right\rangle^{GHZ} _{234} = \frac{1}{\sqrt{2}} \left[ \,\left| 000 \right\rangle _{234} + \left| 111 \right\rangle _{234} \right] $, shared by Alice (23) and Bob (4) along with particle (1), gives rise to product representation for the four particle state as \begin{equation} \left| \psi \right\rangle _{1234} = \left| \phi \right\rangle _1 \otimes \left| \psi \right\rangle ^{GHZ}_{234} = \frac{a \left[ \, \left| 0000 \right\rangle _{1234} + \left| 0111 \right\rangle _{1234} \, \right] }{\sqrt{2}} + \frac{b \left[ \, \left| 1000 \right\rangle _{1234} + \left| 1111 \right\rangle _{1234} \, \right] }{\sqrt{2}}. \end{equation} Alice's measurement process using her basis states \{$\left| \chi \right\rangle^{(1)} _{123} - \left| \chi \right\rangle^{(8)}_{123}$\} is based on the following decomposition of her state, \begin{eqnarray} \left| \psi \right\rangle _{1234} & = & \frac{1}{2\sqrt{2}} \left\{ \left| \chi \right\rangle^{(1)} _{123} \left[ a\left| 0 \right\rangle _4 - b \left| 1 \right\rangle _4 \right] + \left| \chi \right\rangle^{(2)} _{123} \left[ a\left| 0 \right\rangle _4 + b \left| 1 \right\rangle _4 \right] + \left| \chi \right\rangle ^{(3)}_{123} \left[ a\left| 0 \right\rangle _4 + b \left| 1 \right\rangle _4 \right] \right. \nonumber \\ & & +\left| \chi \right\rangle^{(4)} _{123} \left[ -a\left| 0 \right\rangle _4 + b \left| 1 \right\rangle _4 \right] + \left| \chi \right\rangle^{(5)} _{123} \left[ a\left| 1 \right\rangle _4 + b \left| 0 \right\rangle _4 \right] + \left| \chi \right\rangle^{(6)} _{123} \left[ -a\left| 1 \right\rangle _4 + b \left| 0 \right\rangle _4 \right] \nonumber \\ & & +\left. \left| \chi \right\rangle^{(7)} _{123} \left[ a\left| 1 \right\rangle _4 - b \left| 0 \right\rangle _4 \right] + \left| \chi \right\rangle^{(8)}_{123} \left[ a\left| 1 \right\rangle _4 + b \left| 0 \right\rangle _4 \right] \right\}. 
\end{eqnarray} Four equally probable outcomes for Bob's particle (4) are possible, and there is no need for another observer to assist Alice in transferring the information to Bob. One of the four probable outcomes is a direct teleportation, as indicated by $I^{4}$ in Table III. The other three outcomes require one unitary transformation each, as given in Table III. It is important to mention here that our protocol is an example of {\em direct teleportation as against controlled teleportation}, where Alice needs Charlie to assist her in sending the unknown information to Bob, with Charlie carrying one of the three entangled particles. In the present case Alice has two particles on her side, so that she can perform a direct three-particle measurement. Our protocol also overcomes earlier difficulties, such as the null results obtained in four out of the eight projections performed by Alice, in which cases the receiver would never be able to recover the unknown state.\par Gorbachev and Trubilko [13] have shown that, by using a basis which consists of the direct product of a single-particle state with a two-particle entangled state, the teleportation of an arbitrary EPR pair can be realized. In their scheme, Alice combines her unknown EPR state with the GHZ state and projects her three particles onto the three-particle basis states (123) given by ${(\left|\pi\right\rangle^{\pm}_{1}\ \otimes\left|\psi\right\rangle^{\pm}_{23} , \left|\pi\right\rangle^{\pm}_{1}\ \otimes\left|\phi\right\rangle^{\pm}_{23})}$, where $\left| \psi\right\rangle^{\pm}_{23}$ and $\left| \phi\right\rangle^{\pm}_{23}$ are Bell states for the pair (23) (Eq. 2) and $\left| \pi \right\rangle^{\pm}_{1} = \frac{1}{\sqrt{2}} \left[ \,\left| 0 \right\rangle _{1} {\pm} \left| 1 \right\rangle _{1} \right]$.
This leads to Bob's EPR pair being in a teleported state, and the original arbitrary EPR state can be recovered by a simple unitary transformation for all the outcomes. We have demonstrated above that the projection basis in Eq. 1 can be used for satisfactory teleportation of a single particle. To contrast with the result of Gorbachev and Trubilko, we give below a scheme to teleport an arbitrary EPR pair through a different set of three-particle entangled projection basis states given by \begin{eqnarray} \left| \varphi \right\rangle ^{(1),(2)} _{123} = \frac{ \left| 0 \right\rangle _1 \otimes \left| \phi \right\rangle ^+_{23} \pm \left| 1 \right\rangle _1 \otimes \left| \phi \right\rangle ^-_{23}}{\sqrt{2}} & , & \left| \varphi \right\rangle ^{(3),(4)} _{123} = \frac{ \left| 1 \right\rangle _1 \otimes \left| \phi \right\rangle ^+_{23} \pm \left| 0 \right\rangle _1 \otimes \left| \phi \right\rangle ^-_{23}}{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle ^{(5),(6)} _{123} = \frac{ \left| 0 \right\rangle _1 \otimes \left| \psi \right\rangle ^+_{23} \pm \left| 1 \right\rangle _1 \otimes \left| \psi \right\rangle ^-_{23}}{\sqrt{2}} & {\rm and} & \left| \varphi \right\rangle ^{(7),(8)} _{123} = \frac{ \left| 1 \right\rangle _1 \otimes \left| \psi \right\rangle ^+_{23} \pm \left| 0 \right\rangle _1 \otimes \left| \psi \right\rangle ^-_{23}}{\sqrt{2}}. \end{eqnarray} The sets given by Eq. 1 and Eq. 8 differ in the ordering of the particles. In angular momentum algebraic parlance they refer to different coupling schemes, and are related to each other through a unitary transformation containing a 6j coefficient. Alice can use any one of the Bell pairs to transfer information to Bob, e.g. $ \left| \psi \right\rangle _{12} = a \left| 01 \right\rangle_{12} + b \left| 10 \right\rangle_{12}$.
The GHZ state $ \left| \psi \right\rangle^{GHZ} _{345} = \frac{1}{\sqrt{2}} \left[ \,\left| 000 \right\rangle _{345} + \left| 111 \right\rangle _{345} \right]$ is composed of Alice's particle 3 and Bob's particles 4 and 5. Thus the unknown two-particle state and the GHZ state give rise to a five-particle state as \begin{equation} \left| \psi \right\rangle _{12345}= \left| \psi \right\rangle _{12} \otimes \left| \psi \right\rangle ^{GHZ}_{345} = [ a \left| 01 \right\rangle _{12} + b \left| 10 \right\rangle _{12}] \otimes \left[\frac{\left| 000 \right\rangle _{345} + \left| 111 \right\rangle _{345}}{\sqrt{2}} \right]. \end{equation} Alice's measurements are projections on the three-particle (123) states given by Eq. 8, namely, \begin{eqnarray} \lefteqn{\left|\psi\right\rangle_{12345} =} & & \nonumber \\ & & \frac{1}{2\sqrt{2}} \left\{ \left| \varphi \right\rangle ^{(1)}_{123} \left[ a \left| 11 \right\rangle _{45} + b \left| 00 \right\rangle _{45} \right] + \left| \varphi \right\rangle ^{(2)}_{123} \left[ a \left| 11 \right\rangle _{45} - b \left| 00 \right\rangle _{45} \right] + \left| \varphi \right\rangle ^{(3)}_{123} \left[ -a \left| 11 \right\rangle _{45} + b \left| 00 \right\rangle _{45} \right] \right. \nonumber \\ & & + \left| \varphi \right\rangle ^{(4)}_{123} \left[ a \left| 11 \right\rangle _{45} + b \left| 00 \right\rangle _{45} \right] + \left| \varphi \right\rangle ^{(5)}_{123} \left[ a \left| 00 \right\rangle _{45} + b \left| 11 \right\rangle _{45} \right] + \left| \varphi \right\rangle ^{(6)}_{123} \left[ a \left| 00 \right\rangle _{45} - b \left| 11 \right\rangle _{45} \right] \nonumber \\ & & \left. + \left| \varphi \right\rangle ^{(7)}_{123} \left[ -a \left| 00 \right\rangle _{45} + b \left| 11 \right\rangle _{45} \right] + \left| \varphi \right\rangle ^{(8)}_{123} \left[ a \left| 00 \right\rangle _{45} + b \left| 11 \right\rangle _{45} \right] \right\}. 
\end{eqnarray} The two-particle (45) state is in one of four distinct outcomes, each of which can easily be transformed back to the original arbitrary EPR pair through a single-qubit unitary transformation by Bob. The required transformations are listed in Table IV. \par The basis sets given by Eq. 8 and Eq. 1 also work successfully as projection bases for the teleportation of a single particle and of an arbitrary EPR pair, respectively, provided two different two-qubit transformations are first performed on Alice's side. For example, if we consider teleportation of a particle through the GHZ state using the projection basis given by Eq. 8, the direct product state of the four particles is \begin{equation} \left| \psi \right\rangle _{1234} = \frac{a}{\sqrt{2}} \left[ \, \left| 0000 \right\rangle _{1234} + \left| 0111 \right\rangle _{1234} \right] + \frac{b}{\sqrt{2}} \left[ \, \left| 1000 \right\rangle _{1234} + \left| 1111 \right\rangle _{1234} \right]. \end{equation} A direct three-particle measurement using the basis set of Eq. 8 will not achieve the desired task; Alice therefore performs two different unitary transformations on her particles (23) and (12), given by the unitary matrices $U^{(1)}_{23}$ and $U^{(2)}_{12}$, where \begin{equation} U_{23}^{(1)} = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) _{23} ~~~~ {\rm and} ~~~~ U_{12}^{(2)} = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) _{12}.
\end{equation} The two transformations on $\left|\psi\right\rangle_{1234}$ are represented as \begin{equation} \left| \psi \right\rangle ^{(1)}_{1234} = I_1 \otimes U_{23}^{(1)} \otimes I_4 \left| \psi \right\rangle _{1234} = \frac{a}{\sqrt{2}} \left[ \, \left| 0000 \right\rangle _{1234} + \left| 0101 \right\rangle _{1234} \right] + \frac{b}{\sqrt{2}} \left[ \, \left| 1000 \right\rangle _{1234} + \left| 1101 \right\rangle _{1234} \right] \end{equation} and \begin{equation} \left| \psi \right\rangle ^{(2)}_{1234} = U_{12}^{(2)} \otimes I_3 \otimes I_4 \left| \psi \right\rangle ^{(1)} _{1234} = \frac{a}{\sqrt{2}} \left[ \, \left| 0000 \right\rangle _{1234} + \left| 0101 \right\rangle _{1234} \right] + \frac{b}{\sqrt{2}} \left[ \, \left| 1100 \right\rangle _{1234} + \left| 1001 \right\rangle _{1234} \right] \end{equation} where $I_{i}$ denotes the identity matrix for the $i^{\rm th}$ particle. $\left|\psi\right\rangle^{(2)}_{1234}$ is re-expressed using the basis set of Eq. 8 for Alice to project her particles onto any of these eight states, as \begin{eqnarray} \left| \psi \right\rangle ^{(2)}_{1234} & = & \frac{1}{2\sqrt{2}} \left\{ \left| \varphi \right\rangle ^{(1)}_{123} \left[ a \left| 0 \right\rangle _4 + b \left| 1 \right\rangle _4 \right] + \left| \varphi \right\rangle ^{(2)}_{123} \left[ a \left| 0 \right\rangle _4 - b \left| 1 \right\rangle _4 \right] + \left| \varphi \right\rangle ^{(3)}_{123} \left[ a \left| 0 \right\rangle _4 + b \left| 1 \right\rangle _4 \right] \right. \nonumber \\ & & + \left| \varphi \right\rangle ^{(4)}_{123} \left[ -a \left| 0 \right\rangle _4 + b \left| 1 \right\rangle _4 \right] + \left| \varphi \right\rangle ^{(5)}_{123} \left[ a \left| 1 \right\rangle _4 - b \left| 0 \right\rangle _4 \right] + \left| \varphi \right\rangle ^{(6)}_{123} \left[ a \left| 1 \right\rangle _4 + b \left| 0 \right\rangle _4 \right] \nonumber \\ & & \left. 
+ \left| \varphi \right\rangle ^{(7)}_{123} \left[ -a \left| 1 \right\rangle _4 + b \left| 0 \right\rangle _4 \right] + \left| \varphi \right\rangle ^{(8)}_{123} \left[ a \left| 1 \right\rangle _4 + b \left| 0 \right\rangle _4 \right] \right\}. \end{eqnarray} Depending on Alice's measurement, Bob may have to apply an appropriate unitary transformation to recover Alice's information. \par It is worth mentioning here that any of the projection basis states given by Eq. 8 can be used as a quantum {\em carrier} to teleport a single particle, using the GHZ basis as the {\em projection basis} for Alice's qubits. The basis set in Eq. 8 possesses the same correlation coefficients as the GHZ states and is more robust with respect to particle tracing, which makes it a suitable quantum carrier. The four-particle direct product state composed of Alice's information and the quantum carrier $\left| \varphi \right\rangle ^{(1)} _{234}$ is \begin{eqnarray} \left| \psi \right\rangle _{1234} & = & \left[ \,a \left| 0 \right\rangle _1 + b \left| 1 \right\rangle _1 \right] \otimes \left| \varphi \right\rangle ^{(1)} _{234} \nonumber \\ & = & \frac{a}{2} \left[ \, \left| 0000 \right\rangle _{1234} + \left| 0011 \right\rangle _{1234} + \left| 0100 \right\rangle _{1234} - \left| 0111 \right\rangle _{1234} \right] \nonumber \\ & & +\frac{b}{2} \left[ \, \left| 1000 \right\rangle _{1234} + \left| 1011 \right\rangle _{1234} + \left| 1100 \right\rangle _{1234} - \left| 1111 \right\rangle _{1234} \right]. 
\end{eqnarray} A simple decomposition based on Alice's measurement in the GHZ basis (123) leads to \begin{eqnarray} \lefteqn{\left| \psi \right\rangle _{1234} = } & & \nonumber \\ & & \frac{1}{2\sqrt{2}}\left\{ \left[ \, \left| 000 \right\rangle _{123} + \left| 111 \right\rangle _{123} \right] \left[ a \left| 0 \right\rangle _4 -b \left| 1 \right\rangle _4 \right] + \left[ \, \left| 000 \right\rangle _{123} - \left| 111 \right\rangle _{123} \right] \left[ a \left| 0 \right\rangle _4 +b \left| 1 \right\rangle _4 \right] \right. \nonumber \\ & & + \left[ \, \left| 001 \right\rangle _{123} + \left| 110 \right\rangle _{123} \right] \left[ a \left| 1 \right\rangle _4 +b \left| 0 \right\rangle _4 \right] + \left[ \, \left| 001 \right\rangle _{123} - \left| 110 \right\rangle _{123} \right] \left[ a \left| 1 \right\rangle _4 -b \left| 0 \right\rangle _4 \right] \nonumber \\ & & + \left[ \, \left| 010 \right\rangle _{123} + \left| 101 \right\rangle _{123} \right] \left[ a \left| 0 \right\rangle _4 +b \left| 1 \right\rangle _4 \right] + \left[ \, \left| 010 \right\rangle _{123} - \left| 101 \right\rangle _{123} \right] \left[ a \left| 0 \right\rangle _4 -b \left| 1 \right\rangle _4 \right] \nonumber \\ & & + \left. \left[ \, \left| 011 \right\rangle _{123} + \left| 100 \right\rangle _{123} \right] \left[ -a \left| 1 \right\rangle _4 +b \left| 0 \right\rangle _4 \right] + \left[ \, \left| 011 \right\rangle _{123} - \left| 100 \right\rangle _{123} \right] \left[ -a \left| 1 \right\rangle _4 -b \left| 0 \right\rangle _4 \right] \right\}. \nonumber \\ & & \end{eqnarray} The above expression shows direct teleportation of Alice's information in two of the cases with equal probabilities. In all other six cases appropriate unitary transformations are needed subject to Alice's measurement results. In addition to this we would like to mention another but similar set of three-particle states having the same degree of correlation and robustness as states given by Eq. 1 and Eq. 8. 
These states are used as the {\em projection basis} for the teleportation of a particle and an arbitrary EPR pair through a three-particle GHZ state, and can be used as a {\em quantum carrier} for single-particle teleportation using the three-particle GHZ basis as projections. They are given as \begin{eqnarray} \left| \chi \right\rangle_{123}^{(1)',(2)'} = \frac{\left| \phi \rangle_{13}^+ \right. \otimes \left| 0 \rangle_2 \right. \pm \left| \phi \rangle_{13}^- \right. \otimes \left| 1 \rangle_2 \right. }{\sqrt{2}} & , & \left| \chi \right\rangle_{123}^{(3)',(4)'} = \frac{\left| \phi \rangle_{13}^+ \right. \otimes \left| 1 \rangle_2 \right. \pm \left| \phi \rangle_{13}^- \right. \otimes \left| 0 \rangle_2 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \chi \right\rangle_{123}^{(5)',(6)'} = \frac{\left| \psi \rangle_{13}^+ \right. \otimes \left| 0 \rangle_2 \right. \pm \left| \psi\rangle_{13}^- \right. \otimes \left| 1 \rangle_2 \right. }{\sqrt{2}} & {\rm and} & \left| \chi \right\rangle_{123}^{(7)',(8)'} = \frac{\left| \psi \rangle_{13}^+\right. \otimes \left| 1 \rangle_2 \right. \pm \left| \psi \rangle_{13}^- \right. \otimes \left| 0 \rangle_2 \right. }{\sqrt{2}} . \nonumber \\ & & \end{eqnarray} Thus we demonstrate that for the teleportation of a particle, GHZ basis functions can be used as {\em projections} as well as {\em quantum carriers}. \section*{ III. MULTIPARTITE ENTANGLEMENT AND \\ TELEPORTATION} In this section the entanglement properties of four-particle entangled states, and their generalization to multi-particle systems, are given. In addition, we discuss the teleportation of an arbitrary two-particle state, with a generalization of the protocol to an $N$-particle system using genuine multipartite states as quantum channels. The extent of entanglement is assessed by a well-established statistical mechanical formula for correlation coefficients [28-30]. Correlation measures for more than three particles can be defined using Ursell-Mayer type cluster coefficients.
The use of quantum virial coefficients [29-30] as a criterion for determining correlations between spins (particles) resolves the ambiguity in defining the degree of entanglement among multiple particles, for which, unlike the two-particle case, no clear definition exists. We therefore use the following expression for the four-particle correlation coefficient, \begin{eqnarray} C^{1234}_{\alpha \beta \gamma \delta} & = & \left\langle \sigma^1_{\alpha} \sigma^2_{\beta} \sigma^3_{\gamma}\sigma^4_{\delta} \right\rangle - \left\langle \sigma ^1_{\alpha} \right\rangle \left\langle \sigma ^2_{\beta} \sigma^3_{\gamma} \sigma^4_{\delta}\right\rangle - \left\langle \sigma ^2_{\beta} \right\rangle \left\langle \sigma ^1_{\alpha} \sigma^3_{\gamma} \sigma^4_{\delta}\right\rangle - \left\langle \sigma ^3_{\gamma} \right\rangle \left\langle \sigma ^1_{\alpha} \sigma^2_{\beta} \sigma^4_{\delta}\right\rangle \nonumber \\ & - & \left\langle \sigma ^4_{\delta} \right\rangle \left\langle \sigma ^1_{\alpha} \sigma^2_{\beta} \sigma^3_{\gamma}\right\rangle +2 \left\langle \sigma ^1_{\alpha} \right\rangle \left\langle \sigma ^2_{\beta} \right\rangle \left\langle \sigma ^3_{\gamma} \sigma^4_{\delta} \right\rangle +2 \left\langle \sigma ^1_{\alpha} \right\rangle \left\langle \sigma ^3_{\gamma} \right\rangle \left\langle \sigma ^2_{\beta} \sigma^4_{\delta} \right\rangle +2 \left\langle \sigma ^1_{\alpha} \right\rangle \left\langle \sigma ^4_{\delta} \right\rangle \left\langle \sigma ^2_{\beta} \sigma^3_{\gamma} \right\rangle \nonumber \\ & + & 2 \left\langle \sigma ^2_{\beta} \right\rangle \left\langle \sigma ^3_{\gamma} \right\rangle \left\langle \sigma ^1_{\alpha} \sigma^4_{\delta} \right\rangle + 2 \left\langle \sigma ^2_{\beta} \right\rangle \left\langle \sigma ^4_{\delta} \right\rangle \left\langle \sigma ^1_{\alpha} \sigma^3_{\gamma} \right\rangle +2 \left\langle \sigma ^3_{\gamma} \right\rangle \left\langle \sigma ^4_{\delta} \right\rangle \left\langle \sigma ^1_{\alpha} \sigma^2_{\beta}
\right\rangle - \left \langle \sigma^1_{\alpha} \sigma^2_{\beta} \right \rangle \left \langle \sigma^3_{\gamma} \sigma^4_{\delta} \right \rangle \nonumber \\ & -& \left \langle \sigma^1_{\alpha} \sigma^3_{\gamma} \right \rangle \left \langle \sigma^2_{\beta} \sigma^4_{\delta} \right \rangle - \left \langle \sigma^1_{\alpha} \sigma^4_{\delta} \right \rangle \left \langle \sigma^2_{\beta} \sigma^3_{\gamma} \right \rangle - 6 \left \langle \sigma^1_{\alpha} \right \rangle \left \langle \sigma^2_{\beta} \right \rangle \left \langle \sigma^3_{\gamma} \right \rangle \left \langle \sigma^4_{\delta} \right \rangle . \end{eqnarray} The general expression for the $N$-particle correlation coefficient can be obtained by solving the equations for cluster functions derived formally from the $N$-th quantum virial coefficient (quantum trace) [28-29]. \par Correlation coefficients calculated by using the above expression have been used as the entanglement criterion for four-particle states in this paper. Rigolin proposed a generalized Bell basis set for the teleportation of a two-particle state, each member of which is a direct product of two two-particle Bell states. The four-particle correlation coefficients for all sixteen of these orthonormal generalized Bell basis states are zero. One can form another orthonormal basis set of four-particle states similar to Rigolin's generalized Bell basis, one member of which is \begin{equation} \left| \psi\right\rangle_{1234}^{(1)} = \frac{1}{2}\left[\,\left|0000\right\rangle_{1234}+ \left| 1001\right\rangle_{1234}+\left| 0110\right\rangle_{1234}+\left| 1111\right\rangle_{1234} \right] . \end{equation} This set also works properly for the teleportation of arbitrary two-particle states with only single-qubit unitary transformations on Bob's side. However, it also does not have genuine four-particle entanglement (all the four-particle correlation coefficients are zero). Rigolin's state(s) and the basis shown above (Eq.
20) are linear combinations of GHZ states and possess no genuine multi-particle correlation, much as with the three-particle states discussed in the previous section (Eq. 5). An example of a set of sixteen four-particle states which possess genuine four-particle correlation (Eq. 19) is given by Yeo and Chua [18]. The non-zero correlation coefficients associated with their set of states are indicated in Table V. \par Here we propose two different sets of states which also possess genuine four-particle entanglement and which can be used successfully for the teleportation of arbitrary two-particle states. \subsection{\label{sec:level2}First set} In the first scheme, we propose a set of orthonormal basis states which are linear combinations of GHZ states, namely \begin{eqnarray} \left| \phi \right\rangle_{1234}^{(1),(2)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^+ \right. \otimes \left| 0 \rangle_4 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^- \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}} , \nonumber \\ \left| \phi \right\rangle_{1234}^{(3),(4)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^+ \right. \otimes \left| 1 \rangle_4 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^- \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}} , \nonumber \\ \left| \phi \right\rangle_{1234}^{(5),(6)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^+ \right. \otimes \left| 0 \rangle_4 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^- \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}}, \nonumber \\ \left| \phi \right\rangle_{1234}^{(7),(8)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^+ \right. \otimes \left| 1 \rangle_4 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^- \right. \otimes \left| 0 \rangle_4 \right.
}{\sqrt{2}}, \nonumber \\ \left| \phi \right\rangle_{1234}^{(9),(10)}& = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^+ \right. \otimes \left| 0 \rangle_4 \right. \pm \left|0 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^- \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}}, \nonumber \\ \left| \phi \right\rangle_{1234}^{(11),(12)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^+ \right. \otimes \left| 1 \rangle_4 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{23}^- \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}} , \nonumber \\ \left| \phi \right\rangle_{1234}^{(13),(14)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^+ \right. \otimes \left| 0 \rangle_4 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^- \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}} {~~~\rm and} \nonumber \\ \left| \phi \right\rangle_{1234}^{(15),(16)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^+ \right. \otimes \left| 1 \rangle_4 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{23}^- \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}}. \end{eqnarray} The above basis has the advantage that it has genuine four-particle entanglement, as these states cannot be written as direct products of two-particle states. We have calculated the non-zero correlation coefficients for this set and listed them in Table VI. The maximum value $({\pm}1)$ of the correlation coefficients indicates that the entanglement among the four particles is maximal, which supports its use as a quantum carrier as well as a projection basis. These states are robust with respect to two-particle tracing (14, 23), in the sense that when traced over (14) or (23), the other two particles will be in a correlated state.
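The correlation coefficient of Eq. 19 is a fourth-order cumulant and can be evaluated numerically as a sanity check. The sketch below is our own illustrative code (qubit ordering, variable names, and the particular component checked are our bookkeeping choices, not taken from the tables): it verifies that every component vanishes for the product-of-Bell-pairs state of Eq. 20, while the component $C^{1234}_{xxyy}$ evaluates to $1$ for the first member of the basis in Eq. 21.

```python
import numpy as np
from itertools import product, combinations

I2 = np.eye(2, dtype=complex)
P = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
     "y": np.array([[0, -1j], [1j, 0]]),
     "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def basis(bits):
    v = np.zeros(2 ** len(bits)); v[int(bits, 2)] = 1.0
    return v

def expval(psi, ops):
    """<psi| O_1 x O_2 x O_3 x O_4 |psi>, identity on qubits absent from ops."""
    M = np.array([[1.0 + 0j]])
    for q in range(4):
        M = np.kron(M, ops.get(q, I2))
    return (psi.conj() @ M @ psi).real

def C4(psi, s):
    """Four-particle correlation coefficient of Eq. 19, e.g. s = 'xxyy'."""
    m = lambda qs: expval(psi, {q: P[s[q]] for q in qs})
    one = {q: m((q,)) for q in range(4)}
    two = {p: m(p) for p in combinations(range(4), 2)}
    c = m((0, 1, 2, 3)) - 6 * one[0] * one[1] * one[2] * one[3]
    for q in range(4):                    # - <single><triple> terms
        c -= one[q] * m(tuple(r for r in range(4) if r != q))
    for (i, j) in two:                    # + 2 <single><single><pair> terms
        k, l = (r for r in range(4) if r not in (i, j))
        c += 2 * one[i] * one[j] * two[(k, l)]
    c -= two[(0, 1)] * two[(2, 3)] + two[(0, 2)] * two[(1, 3)] \
         + two[(0, 3)] * two[(1, 2)]      # - <pair><pair> terms
    return c

# Eq. 20: a product of Bell pairs (14) and (23); every component vanishes
bell_product = 0.5 * (basis("0000") + basis("1001") + basis("0110") + basis("1111"))
assert all(abs(C4(bell_product, "".join(s))) < 1e-12
           for s in product("xyz", repeat=4))

# First member of Eq. 21: genuine four-particle correlation
phi1 = 0.5 * (basis("0000") + basis("0110") + basis("1001") - basis("1111"))
assert abs(C4(phi1, "xxyy") - 1) < 1e-12
```

The vanishing of all components for Eq. 20 is exactly the cumulant property: the fourth-order cluster coefficient is zero whenever the four particles split into two independent pairs.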
\par The teleportation protocol for the two-particle state $\left|\phi\right\rangle_{12} = a\left|00\right\rangle_{12}+b\left|01\right\rangle_{12} +c\left|10\right\rangle_{12}+d\left|11\right\rangle_{12}$ is given below. Alice and Bob can use any one of the entangled states in the given set as a quantum carrier, e.g., \begin{equation} \left| \phi\right\rangle_{3456}^{(1)} = \frac{1}{2}\left[\,\left|0000\right\rangle_{3456}+ \left| 1001\right\rangle_{3456}+\left| 0110\right\rangle_{3456}-\left| 1111\right\rangle_{3456} \right] \end{equation} where particles 3 and 4 are with Alice and 5 and 6 are with Bob. The initial six-particle direct product state is given by \begin{eqnarray} \left| \psi \right\rangle_{123456} & = & \frac{a}{2} \left[\left| 000000 \right\rangle _{123456} + \left| 001001 \right\rangle _{123456} + \left| 000110 \right\rangle _{123456} - \left| 001111 \right\rangle _{123456} \right] \nonumber \\ & + & \frac{b}{2}\left[ \left| 010000 \right\rangle _{123456} + \left| 011001 \right\rangle _{123456} + \left| 010110 \right\rangle _{123456} - \left| 011111 \right\rangle _{123456} \right] \nonumber \\ & + & \frac{c}{2} \left[\left| 100000 \right\rangle _{123456} + \left| 101001 \right\rangle _{123456} + \left| 100110 \right\rangle _{123456} - \left| 101111 \right\rangle _{123456} \right] \nonumber \\ & + & \frac{d}{2}\left[ \left| 110000 \right\rangle _{123456} + \left| 111001 \right\rangle _{123456} + \left| 110110 \right\rangle _{123456} - \left| 111111 \right\rangle _{123456} \right]. \end{eqnarray} Direct teleportation is the result, as is seen when Eq. 23 is re-expressed in the basis set of Eq. 21, i.e.
\begin{eqnarray} \left|\psi\right\rangle_{123456} & = & \frac{\left|\phi\right\rangle^{(1)}_{1234}}{4}\left[a\left|00\right\rangle_{56}+b\left|01\right\rangle_{56}+c\left|10\right\rangle_{56}+d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(2)}_{1234}}{4}\left[a\left|00\right\rangle_{56}+b\left|01\right\rangle_{56}-c\left|10\right\rangle_{56}-d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(3)}_{1234}}{4}\left[a\left|10\right\rangle_{56}-b\left|11\right\rangle_{56}+c\left|00\right\rangle_{56}-d\left|01\right\rangle_{56}\right] \nonumber \\ & + &\frac{\left|\phi\right\rangle^{(4)}_{1234}}{4}\left[a\left|10\right\rangle_{56}-b\left|11\right\rangle_{56}-c\left|00\right\rangle_{56}+d\left|01\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(5)}_{1234}}{4}\left[a\left|01\right\rangle_{56}+b\left|00\right\rangle_{56}-c\left|11\right\rangle_{56}-d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(6)}_{1234}}{4}\left[a\left|01\right\rangle_{56}+b\left|00\right\rangle_{56}+c\left|11\right\rangle_{56}+d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(7)}_{1234}}{4}\left[-a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}+c\left|01\right\rangle_{56}-d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(8)}_{1234}}{4}\left[-a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}-c\left|01\right\rangle_{56}+d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(9)}_{1234}}{4}\left[a\left|10\right\rangle_{56}+b\left|11\right\rangle_{56}+c\left|00\right\rangle_{56}+d\left|01\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(10)}_{1234}}{4}\left[-a\left|10\right\rangle_{56}-b\left|11\right\rangle_{56}+c\left|00\right\rangle_{56}+d\left|01\right\rangle_{56}\right] \nonumber \\ & + & 
\frac{\left|\phi\right\rangle^{(11)}_{1234}}{4}\left[a\left|00\right\rangle_{56}-b\left|01\right\rangle_{56}+c\left|10\right\rangle_{56}-d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(12)}_{1234}}{4}\left[-a\left|00\right\rangle_{56}+b\left|01\right\rangle_{56}+c\left|10\right\rangle_{56}-d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(13)}_{1234}}{4}\left[-a\left|11\right\rangle_{56}-b\left|10\right\rangle_{56}+c\left|01\right\rangle_{56}+d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(14)}_{1234}}{4}\left[a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}+c\left|01\right\rangle_{56}+d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(15)}_{1234}}{4}\left[a\left|01\right\rangle_{56}-b\left|00\right\rangle_{56}-c\left|11\right\rangle_{56}+d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\phi\right\rangle^{(16)}_{1234}}{4}\left[-a\left|01\right\rangle_{56}+b\left|00\right\rangle_{56}-c\left|11\right\rangle_{56}+d\left|10\right\rangle_{56}\right]. \end{eqnarray} Thus if Alice makes a joint measurement on her particles ${(1234)}$, Bob's two particles (56) will be projected onto one of the sixteen equally probable states. Bob recovers the information by applying the appropriate unitary transformations after Alice informs him of her classical outcome(s). The advantage here is that Bob needs only a direct product of two single-qubit unitary transformations, listed in Table VII, rather than a joint unitary transformation such as a C-NOT gate, to recover the unknown information. \subsection{\label{sec:level2}Second set} In this subsection we propose another basis whose members are linear combinations of direct products of the three-particle GHZ states and a single particle, namely \begin{eqnarray} \left| \varphi \right\rangle_{1234}^{(1),(2)} = \frac{\left| \chi \rangle_{123}^{(1)''} \right. \otimes \left| 0 \rangle_4 \right.
\pm \left| \chi \rangle_{123}^{(3)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}}, & & \left| \varphi \right\rangle_{1234}^{(3),(4)} = \frac{\left| \chi \rangle_{123}^{(2)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(4)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle_{1234}^{(5),(6)} = \frac{\left| \chi \rangle_{123}^{(1)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(3)''} \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}}, & & \left| \varphi \right\rangle_{1234}^{(7),(8)} = \frac{\left| \chi \rangle_{123}^{(2)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(4)''} \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle_{1234}^{(9),(10)} = \frac{\left| \chi \rangle_{123}^{(5)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(7)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}} , & & \left| \varphi \right\rangle_{1234}^{(11),(12)} = \frac{\left| \chi \rangle_{123}^{(6)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(8)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle_{1234}^{(13),(14)} = \frac{\left| \chi \rangle_{123}^{(5)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(7)''} \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}} & {~ ~ ~ \rm and} & \left| \varphi \right\rangle_{1234}^{(15),(16)} = \frac{\left| \chi \rangle_{123}^{(6)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(8)''} \right. \otimes \left| 0 \rangle_4 \right. 
}{\sqrt{2}} \nonumber \\ & & \end{eqnarray} where \begin{eqnarray} \left| \chi \right\rangle_{123}^{(1)'',(2)''} = \frac{1}{\sqrt{2}} \left[ \, \left| 000 \right\rangle_{123} \pm \left| 111 \right\rangle_{123} \, \right] & , & \left| \chi \right\rangle_{123}^{(3)'',(4)''} = \frac{1}{\sqrt{2}} \left[ \, \left| 010 \right\rangle_{123} \pm \left| 101 \right\rangle_{123} \, \right] , \ \nonumber \\ \left| \chi \right\rangle_{123}^{(5)'',(6)''} =\frac{1}{\sqrt{2}} \left[ \, \left| 011 \right\rangle_{123} \pm \left| 100 \right\rangle_{123} \, \right] & {\rm and} & \left| \chi \right\rangle_{123}^{(7)'',(8)''} = \frac{1}{\sqrt{2}} \left[ \, \left|001 \right\rangle_{123} \pm \left| 110 \right\rangle_{123} \, \right] \end{eqnarray} are the eight GHZ states corresponding to three particles (123). \par The following representation of the above basis set enables us to generate a $2N$-particle entangled basis as described further, \begin{eqnarray} \left| \chi \right\rangle_{1234}^{(1),(2)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^+ \right. \otimes \left| 0 \rangle_3 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^+ \right. \otimes \left| 1 \rangle_3 \right. }{\sqrt{2}} , \nonumber \\ \left| \chi \right\rangle_{1234}^{(3),(4)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^+ \right. \otimes \left| 1 \rangle_3 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^+ \right. \otimes \left| 0 \rangle_3 \right. }{\sqrt{2}} , \nonumber \\ \left| \chi \right\rangle_{1234}^{(5),(6)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^- \right. \otimes \left| 0 \rangle_3 \right. \pm \left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^- \right. \otimes \left| 1 \rangle_3 \right. }{\sqrt{2}} , \nonumber \\ \left| \chi \right\rangle_{1234}^{(7),(8)} & = & \frac{\left| 0 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^- \right. \otimes \left| 1 \rangle_3 \right. 
\pm \left| 1 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^- \right. \otimes \left| 0 \rangle_3 \right. }{\sqrt{2}} , \nonumber \\ \left| \chi \right\rangle_{1234}^{(9),(10)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^+ \right. \otimes \left| 0 \rangle_3 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^+ \right. \otimes \left| 1 \rangle_3 \right. }{\sqrt{2}}, \nonumber \\ \left| \chi \right\rangle_{1234}^{(11),(12)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^+ \right. \otimes \left| 1 \rangle_3 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^+ \right. \otimes \left| 0 \rangle_3 \right. }{\sqrt{2}}, \nonumber \\ \left| \chi \right\rangle_{1234}^{(13),(14)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^- \right. \otimes \left| 0 \rangle_3 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^- \right. \otimes \left| 1 \rangle_3 \right. }{\sqrt{2}} {~ ~ ~ \rm and} \nonumber \\ \left| \chi \right\rangle_{1234}^{(15),(16)} & = & \frac{\left| 1 \rangle_1 \right. \otimes \left| \phi \rangle_{24}^- \right. \otimes \left| 1 \rangle_3 \right. \pm \left| 0 \rangle_1 \right. \otimes \left| \psi \rangle_{24}^- \right. \otimes \left| 0 \rangle_3 \right. }{\sqrt{2}}. \end{eqnarray} This set of states (Eq. 25/Eq. 27) is also maximally entangled: the four-particle states cannot be written as direct products of states of fewer particles. The four-particle correlation coefficients listed in Table VIII for the above states are non-zero and are also maximal ${(\pm 1)}$. The set has an additional advantage in terms of robustness, i.e. when traced over the 2nd or 4th particle the remaining three particles (123, 134) are entangled. In addition, it is robust with respect to two-particle (13) tracing, which leaves the other two particles (24) in a correlated state.
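These robustness properties can be probed with partial traces. The sketch below is our own illustrative code (state labels and qubit ordering are bookkeeping choices): for the first member of this set, tracing over the pair (13) leaves qubits (24) in the equal mixture $\frac{1}{2}\left|\phi^+\right\rangle\!\left\langle\phi^+\right| + \frac{1}{2}\left|\psi^+\right\rangle\!\left\langle\psi^+\right|$, which is perfectly correlated in the $\sigma_x$ basis, while tracing over qubit 2 alone leaves particles (134) with a negative partial transpose, i.e. entangled across the $1|(34)$ cut.

```python
import numpy as np

def basis(bits):
    v = np.zeros(2 ** len(bits)); v[int(bits, 2)] = 1.0
    return v

# First member of the set, written out in qubit order 1234
chi = 0.5 * (basis("0000") + basis("0101") + basis("1011") + basis("1110"))
R = np.outer(chi, chi).reshape([2] * 8)  # indices (q1 q2 q3 q4, q1' q2' q3' q4')

# Trace out qubits (1,3): remainder on (2,4) is an equal Bell-state mixture
rho24 = np.einsum("aibjakbl->ijkl", R).reshape(4, 4)
phip = (basis("00") + basis("11")) / np.sqrt(2)
psip = (basis("01") + basis("10")) / np.sqrt(2)
assert np.allclose(rho24, 0.5 * np.outer(phip, phip) + 0.5 * np.outer(psip, psip))

# Perfect two-particle correlation survives: <sigma_x sigma_x> = 1
X = np.array([[0.0, 1.0], [1.0, 0.0]])
assert abs(np.trace(rho24 @ np.kron(X, X)) - 1) < 1e-12

# Trace out qubit 2 only: the remainder on (1,3,4) is entangled, witnessed
# by a negative partial transpose across the 1|(34) cut
rho134 = np.einsum("aibcdief->abcdef", R).reshape(8, 8)
pt = rho134.reshape(2, 4, 2, 4).transpose(2, 1, 0, 3).reshape(8, 8)
assert np.linalg.eigvalsh(pt).min() < -1e-9
```

Note that the two-qubit remainder, while perfectly correlated, is a separable mixture; it is the three-particle remainders that retain entanglement proper.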
This makes the above set suitable for the teleportation of an arbitrary two-particle state. Using one of the above states as the state shared by Alice and Bob, namely, \begin{equation} \left| \varphi\right\rangle_{3456}^{(1)} = \frac{1}{2}\left[\,\left|0000\right\rangle_{3456}+ \left| 1110\right\rangle_{3456}+\left| 0101\right\rangle_{3456}+\left| 1011\right\rangle_{3456} \right] \end{equation} where particles (34) are with Alice and particles (56) are with Bob, the joint state of the six particles, composed of Alice's particles (1234) and Bob's particles (56), is \begin{eqnarray} \left| \psi \right\rangle_{123456} & = & \frac{a}{2} \left[\left| 000000 \right\rangle _{123456} + \left| 000101 \right\rangle _{123456} + \left| 001110 \right\rangle _{123456} + \left| 001011 \right\rangle _{123456} \right] \nonumber \\ & + & \frac{b}{2}\left[ \left| 010000 \right\rangle _{123456} + \left| 010101 \right\rangle _{123456} + \left| 011110 \right\rangle _{123456} + \left| 011011 \right\rangle _{123456} \right] \nonumber \\ & + & \frac{c}{2} \left[\left| 100000 \right\rangle _{123456} + \left| 100101 \right\rangle _{123456} + \left| 101110 \right\rangle _{123456} + \left| 101011 \right\rangle _{123456} \right] \nonumber \\ & + & \frac{d}{2}\left[ \left| 110000 \right\rangle _{123456} + \left| 110101 \right\rangle _{123456} + \left| 111110 \right\rangle _{123456} + \left| 111011 \right\rangle _{123456} \right]. \end{eqnarray} Re-expressing Eq. 29 in terms of the basis set proposed (Eq.
25), we have \begin{eqnarray} \left|\psi\right\rangle_{123456} & = & \frac{\left|\varphi\right\rangle^{(1)}_{1234}}{4}\left[a\left|00\right\rangle_{56}+b\left|01\right\rangle_{56}+c\left|10\right\rangle_{56}+d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(2)}_{1234}}{4}\left[a\left|00\right\rangle_{56}-b\left|01\right\rangle_{56}-c\left|10\right\rangle_{56}+d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(3)}_{1234}}{4}\left[a\left|00\right\rangle_{56}+b\left|01\right\rangle_{56}-c\left|10\right\rangle_{56}-d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(4)}_{1234}}{4}\left[a\left|00\right\rangle_{56}-b\left|01\right\rangle_{56}+c\left|10\right\rangle_{56}-d\left|11\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(5)}_{1234}}{4}\left[a\left|01\right\rangle_{56}+b\left|00\right\rangle_{56}+c\left|11\right\rangle_{56}+d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(6)}_{1234}}{4}\left[a\left|01\right\rangle_{56}-b\left|00\right\rangle_{56}-c\left|11\right\rangle_{56}+d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(7)}_{1234}}{4}\left[a\left|01\right\rangle_{56}+b\left|00\right\rangle_{56}-c\left|11\right\rangle_{56}-d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(8)}_{1234}}{4}\left[a\left|01\right\rangle_{56}-b\left|00\right\rangle_{56}+c\left|11\right\rangle_{56}-d\left|10\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(9)}_{1234}}{4}\left[a\left|10\right\rangle_{56}+b\left|11\right\rangle_{56}+c\left|00\right\rangle_{56}+d\left|01\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(10)}_{1234}}{4}\left[-a\left|10\right\rangle_{56}+b\left|11\right\rangle_{56}+c\left|00\right\rangle_{56}-d\left|01\right\rangle_{56}\right] \nonumber \\ & + 
& \frac{\left|\varphi\right\rangle^{(11)}_{1234}}{4}\left[a\left|10\right\rangle_{56}+b\left|11\right\rangle_{56}-c\left|00\right\rangle_{56}-d\left|01\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(12)}_{1234}}{4}\left[-a\left|10\right\rangle_{56}+b\left|11\right\rangle_{56}-c\left|00\right\rangle_{56}+d\left|01\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(13)}_{1234}}{4}\left[a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}+c\left|01\right\rangle_{56}+d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(14)}_{1234}}{4}\left[-a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}+c\left|01\right\rangle_{56}-d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(15)}_{1234}}{4}\left[a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}-c\left|01\right\rangle_{56}-d\left|00\right\rangle_{56}\right] \nonumber \\ & + & \frac{\left|\varphi\right\rangle^{(16)}_{1234}}{4}\left[-a\left|11\right\rangle_{56}+b\left|10\right\rangle_{56}-c\left|01\right\rangle_{56}+d\left|00\right\rangle_{56}\right] . \end{eqnarray} It is evident that the unitary transformations which Bob needs to apply reduce, at most, to direct products of single-qubit unitary transformations. Table IX lists all the unitary transformations which might be needed to recover the original state. A different orthogonal set of states is given by \begin{eqnarray} \left| \varphi \right\rangle_{1234}^{(1)',(2)'} = \frac{\left| \chi \rangle_{123}^{(1)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(4)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}} & , & \left| \varphi \right\rangle_{1234}^{(3)',(4)'} = \frac{\left| \chi \rangle_{123}^{(3)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(2)''} \right. \otimes \left| 0 \rangle_4 \right.
}{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle_{1234}^{(5)',(6)'} = \frac{\left| \chi \rangle_{123}^{(1)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(4)''} \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}} & , & \left| \varphi \right\rangle_{1234}^{(7)',(8)'} = \frac{\left| \chi \rangle_{123}^{(3)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(2)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle_{1234}^{(9)',(10)'} = \frac{\left| \chi \rangle_{123}^{(5)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(8)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}} & , & \left| \varphi \right\rangle_{1234}^{(11)',(12)'} = \frac{\left| \chi \rangle_{123}^{(7)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(6)''} \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}}\ , \nonumber \\ \left| \varphi \right\rangle_{1234}^{(13)',(14)'} = \frac{\left| \chi \rangle_{123}^{(5)''} \right. \otimes \left| 1 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(8)''} \right. \otimes \left| 0 \rangle_4 \right. }{\sqrt{2}} & {\rm and} & \left| \varphi \right\rangle_{1234}^{(15)',(16)'} = \frac{\left| \chi \rangle_{123}^{(7)''} \right. \otimes \left| 0 \rangle_4 \right. \pm \left| \chi \rangle_{123}^{(6)''} \right. \otimes \left| 1 \rangle_4 \right. }{\sqrt{2}} . \nonumber \\ & & \end{eqnarray} This set has properties similar to those of the states given by Eq. 25/Eq. 27. The non-zero four-particle correlation coefficients associated with all sixteen basis states are listed in Table X. It is an easy exercise to verify that this set works successfully for the teleportation of a two-particle state with only single-qubit unitary transformations on Bob's side. \par It is possible to generalize the above protocol to the $N$-particle system using the basis set given below.
The sequential manner in which they are constructed ensures that entanglement properties of these states are preserved down to a pair of particles when they are systematically averaged. \par Consider the two-particle Bell states given by Eq. 2 with particles 1 and 2 replaced by 2 and 3. The sixteen four-particle states (1234) can then be given as \[ \frac{1}{\sqrt{2}} \left[ \left( \begin{array}{c} \left| 0 \right\rangle \\ \left| 1 \right\rangle \\ \end{array} \right) _1 \otimes \left( \begin{array}{c} \left| \phi ^+ \right\rangle \\ \left| \psi ^+ \right\rangle \\ \end{array} \right) _{23} \otimes \left( \begin{array}{c} \left| 0 \right\rangle \\ \left| 1 \right\rangle \\ \end{array} \right) _4 \pm \left( \begin{array}{c} \left| 1 \right\rangle \\ \left| 0 \right\rangle \\ \end{array} \right) _1 \otimes \left( \begin{array}{c} \left| \phi ^- \right\rangle \\ \left| \psi ^- \right\rangle \\ \end{array} \right) _{23} \otimes \left( \begin{array}{c} \left| 1 \right\rangle \\ \left| 0 \right\rangle \\ \end{array} \right) _4 \right] . 
\] Relabelling the sixteen states as \begin{equation} \left( \begin{array}{l} \left| \chi ^{(1),(2)} \right\rangle _{1234} \\ \left| \chi ^{(3),(4)} \right\rangle _{1234} \\ \left| \chi ^{(5),(6)} \right\rangle _{1234} \\ \left| \chi ^{(7),(8)} \right\rangle _{1234} \\ \left| \chi ^{(9),(10)} \right\rangle _{1234} \\ \left| \chi ^{(11),(12)} \right\rangle _{1234} \\ \left| \chi ^{(13),(14)} \right\rangle _{1234} \\ \left| \chi ^{(15),(16)} \right\rangle _{1234} \\ \end{array} \right) = \frac{1}{\sqrt{2}} \left [ \left( \begin{array}{c} \left| 0 \right\rangle \left| \phi^+ \right\rangle \left| 0 \right\rangle \\ \left| 0 \right\rangle \left| \phi^+ \right\rangle \left| 1 \right\rangle \\ \left| 0 \right\rangle \left| \psi^+ \right\rangle \left| 0 \right\rangle \\ \left| 0 \right\rangle \left| \psi^+ \right\rangle \left| 1 \right\rangle \\ \left| 1 \right\rangle \left| \phi^+ \right\rangle \left| 0 \right\rangle \\ \left| 1 \right\rangle \left| \phi^+ \right\rangle \left| 1 \right\rangle \\ \left| 1 \right\rangle \left| \psi^+ \right\rangle \left| 0 \right\rangle \\ \left| 1 \right\rangle \left| \psi^+ \right\rangle \left| 1 \right\rangle \\ \end{array} \right) \pm \left( \begin{array}{c} \left| 1 \right\rangle \left| \phi^- \right\rangle \left| 1 \right\rangle \\ \left| 1 \right\rangle \left| \phi^- \right\rangle \left| 0 \right\rangle \\ \left| 1 \right\rangle \left| \psi^- \right\rangle \left| 1 \right\rangle \\ \left| 1 \right\rangle \left| \psi^- \right\rangle \left| 0 \right\rangle \\ \left| 0 \right\rangle \left| \phi^- \right\rangle \left| 1 \right\rangle \\ \left| 0 \right\rangle \left| \phi^- \right\rangle \left| 0 \right\rangle \\ \left| 0 \right\rangle \left| \psi^- \right\rangle \left| 1 \right\rangle \\ \left| 0 \right\rangle \left| \psi^- \right\rangle \left| 0 \right\rangle \\ \end{array} \right) \right ] , \end{equation} the six-particle generalized entangled states are given by, \[ \frac{1}{\sqrt{2}} \left [\left( \begin{array}{c} 0 \\ 1 \\ 
\end{array} \right) _1 \otimes \left( \begin{array}{l} \left| \chi^{(1)} \right\rangle _{2345} \\ \left| \chi^{(3)} \right\rangle _{2345} \\ \left| \chi^{(5)} \right\rangle _{2345} \\ \left| \chi^{(7)} \right\rangle _{2345} \\ \left| \chi^{(9)} \right\rangle _{2345} \\ \left| \chi^{(11)} \right\rangle _{2345} \\ \left| \chi^{(13)} \right\rangle _{2345} \\ \left| \chi^{(15)} \right\rangle _{2345} \\ \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right) _6 \pm \left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right) _1 \otimes \left( \begin{array}{l} \left| \chi^{(2)} \right\rangle _{2345} \\ \left| \chi^{(4)} \right\rangle _{2345} \\ \left| \chi^{(6)} \right\rangle _{2345} \\ \left| \chi^{(8)} \right\rangle _{2345} \\ \left| \chi^{(10)} \right\rangle _{2345} \\ \left| \chi^{(12)} \right\rangle _{2345} \\ \left| \chi^{(14)} \right\rangle _{2345} \\ \left| \chi^{(16)} \right\rangle _{2345} \\ \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right) _6 \right] . \] The $2N$-particle generalization of the above, which contains a set of maximally entangled states, can be written down immediately as \begin{eqnarray}& & \frac{1}{\sqrt{2}} \left[ \left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right) _1 \otimes \left( \begin{array}{l} \left| \chi^{(1)} \right\rangle _{23 \ldots 2N-1} \\ \left| \chi^{(3)} \right\rangle _{23 \ldots 2N-1} \\ \vdots \\ \left| \chi^{(2^{2N-2}-3)} \right\rangle _{23 \ldots 2N-1} \\ \left| \chi^{(2^{2N-2}-1)} \right\rangle _{23 \ldots 2N-1} \\ \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right) _{2N} \right. \nonumber \\ & & \pm \left.
\left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right) _1 \otimes \left( \begin{array}{l} \left| \chi^{(2)} \right\rangle _{23 \ldots 2N-1} \\ \left| \chi^{(4)} \right\rangle _{23 \ldots 2N-1} \\ \vdots \\ \left| \chi^{(2^{2N-2}-2)} \right\rangle _{23 \ldots 2N-1} \\ \left| \chi^{(2^{2N-2})} \right\rangle _{23 \ldots 2N-1} \\ \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right) _{2N} \right] . \nonumber \end{eqnarray} The $2N$-particle generalized entangled state for the second set can also be obtained in a similar way. \section*{IV. TELEPORTATION USING QUANTUM GATES AND COMPUTATIONAL BASIS} In this section we analyze the above schemes using quantum gates and the three- and four-qubit computational bases. \subsection*{A. Teleportation of a single qubit through GHZ state} Three-qubit states can be prepared by using the appropriate quantum network given below, with the required gates [34-38] placed on the circuit for the creation of the GHZ state(s). If we give the three inputs $\left| 0 \right\rangle _1$, $\left| 0 \right\rangle _2$ and $\left| 0 \right\rangle _3$, then the GHZ state $\frac{1}{\sqrt{2}} \left[ \,\left| 000 \right\rangle_{123} + \left| 111 \right\rangle_{123} \right]$ is prepared as indicated in Fig. 1. The quantum circuit required for Alice's unknown qubit to be teleported is shown in Fig. 2. The input for the quantum circuit in Fig. 2 is \[ \left| \psi \right\rangle^{(0)}_{1234} = [a \left| 0 \right\rangle _1 + b \left| 1 \right\rangle _1] \otimes \frac{1}{\sqrt{2}} \left[ \left| 000 \right\rangle _{234} + \left| 111 \right\rangle _{234} \right] . \] Alice sends her qubits 1 and 3 through the C-NOT gate, keeping 1 as the control qubit and 3 as the target qubit, and obtains \begin{equation} \left| \psi \right\rangle^{(1)}_{1234} = \frac{a}{\sqrt{2}} \left[ \left| 0000 \right\rangle _{1234} + \left| 0111 \right\rangle _{1234} \right] + \frac{b}{\sqrt{2}} \left[ \left| 1010 \right\rangle _{1234} + \left| 1101 \right\rangle _{1234} \right] .
\end{equation} Then she sends her qubit 1 through a Hadamard gate with the result \begin{eqnarray} \left| \psi \right\rangle ^{(2)}_{1234} & = & \frac{a}{2} \left[ \left| 0000 \right\rangle _{1234} + \left| 1000 \right\rangle _{1234} + \left| 0111 \right\rangle _{1234} + \left| 1111 \right\rangle _{1234} \right] \nonumber \\ & + & \frac{b}{2} \left[ \left| 0010 \right\rangle _{1234} - \left| 1010 \right\rangle _{1234} + \left| 0101 \right\rangle _{1234} - \left| 1101 \right\rangle _{1234} \right]. \end{eqnarray} This she follows by sending qubit 2 through a Hadamard gate again which results in the state as \begin{eqnarray} \left| \psi \right\rangle ^{(3)}_{1234} & = & \frac{a}{2\sqrt{2}} \left[ \left| 0000 \right\rangle _{1234} + \left| 0100 \right\rangle _{1234} + \left| 1000 \right\rangle _{1234} + \left| 1100 \right\rangle _{1234} \right. \nonumber \\ & & + \left. \left| 0011 \right\rangle _{1234} - \left| 0111 \right\rangle _{1234} + \left| 1011 \right\rangle _{1234} - \left| 1111 \right\rangle _{1234} \right] \nonumber \\ & + & \frac{b}{2\sqrt{2}} \left[ \left| 0010 \right\rangle _{1234} + \left| 0110 \right\rangle _{1234} - \left| 1010 \right\rangle _{1234} - \left| 1110 \right\rangle _{1234} \right. \nonumber \\ & & + \left. \left| 0001 \right\rangle _{1234} - \left| 0101 \right\rangle _{1234} - \left| 1001 \right\rangle _{1234} + \left| 1101 \right\rangle _{1234} \right]. \end{eqnarray} A simple rearrangement will decompose the above state into four equally probable measurement outcomes with Bob's particle being projected in one of the four states as \begin{eqnarray} \left| \psi \right\rangle^{(3)}_{1234} & = & \frac{1}{2\sqrt{2}} \left\{ \left| 000 \right\rangle _{123} \left[ a \left| 0 \right\rangle _4 +b \left| 1 \right\rangle _4 \right] + \left| 001 \right\rangle _{123}\left[ a \left| 1 \right\rangle _4 +b \left| 0 \right\rangle _4 \right] \right. 
\nonumber \\ & + & \left| 010 \right\rangle _{123}\left[ a \left| 0 \right\rangle _4 -b \left| 1 \right\rangle _4 \right] + \left| 011 \right\rangle _{123}\left[ -a \left| 1 \right\rangle _4 +b \left| 0 \right\rangle _4 \right] \nonumber \\ & + & \left| 110 \right\rangle _{123}\left[ a \left| 0 \right\rangle _4 +b \left| 1 \right\rangle _4 \right] + \left| 101 \right\rangle _{123}\left[ a \left| 1 \right\rangle _4 -b \left| 0 \right\rangle _4 \right] \nonumber \\ & + & \left. \left| 100 \right\rangle _{123} \left[ a \left| 0 \right\rangle _4 -b \left| 1 \right\rangle _4 \right] + \left| 111 \right\rangle _{123}\left[ -a \left| 1 \right\rangle _4 -b \left| 0 \right\rangle _4 \right] \right\}. \end{eqnarray} Table XI lists the required gates on Bob's side, which will be activated based on Alice's communication through a classical channel.\par \subsection*{B. Teleportation of an arbitrary EPR pair through GHZ basis} The quantum circuit to accomplish this task is given in Fig. 3. In the figure, the $U$'s are single-qubit unitary transformations on qubits 4 and 5, respectively. Qubits 1, 2 and 3 are with Alice, and 4 and 5 are with Bob. The input to the quantum circuit is \begin{eqnarray} \left| \psi \right\rangle ^{(0)}_{12345} & = & [a \left| 01 \right\rangle _{12} + b \left| 10 \right\rangle _{12}] \otimes \frac{1}{\sqrt{2}} \left[ \left| 000 \right\rangle _{345} + \left| 111 \right\rangle _{345} \right] \nonumber \\ & = & \frac{a}{\sqrt{2}} \left[ \left| 01000 \right\rangle _{12345} + \left| 01111 \right\rangle _{12345} \right] + \frac{b}{\sqrt{2}} \left[ \left| 10000 \right\rangle _{12345} + \left| 10111 \right\rangle _{12345} \right] . \end{eqnarray} The sequence begins with Alice's transmission of her qubits 1 and 3 through the C-NOT gate, keeping qubit 1 as control and 3 as target. This she follows with the transmission of qubits 1 and 2 through Hadamard gates.
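The gate sequence just described can be checked with a small state-vector simulation. The sketch below is ours (the helper names `op_on` and `cnot` are not from the paper), and takes $a=0.6$, $b=0.8$ for concreteness.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def op_on(gate, qubit, n):
    """Embed a single-qubit gate on `qubit` (0-indexed, MSB first) of n qubits."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, gate if k == qubit else np.eye(2))
    return out

def cnot(control, target, n):
    """CNOT on an n-qubit register (0-indexed, MSB first)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[int("".join(map(str, bits)), 2), i] = 1.0
    return U

a, b = 0.6, 0.8                                  # unknown EPR pair a|01> + b|10>
epr = np.zeros(4); epr[0b01], epr[0b10] = a, b
ghz = np.zeros(8); ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
psi = np.kron(epr, ghz)                          # qubits 1..5 = array indices 0..4

psi = cnot(0, 2, 5) @ psi                        # C-NOT: control qubit 1, target qubit 3
psi = op_on(H, 0, 5) @ psi                       # Hadamard on qubit 1
psi = op_on(H, 1, 5) @ psi                       # Hadamard on qubit 2

# Conditioning Alice's qubits 123 on |000> leaves Bob's pair prop. to a|00> + b|11>
bob = psi.reshape(8, 4)[0b000]
print(np.round(bob / np.linalg.norm(bob), 6))    # ~ [0.6, 0, 0, 0.8]
```

Conditioning on the other seven outcomes of Alice's measurement reproduces the remaining rows of the decomposition in the same way, each differing from $a\left|00\right\rangle+b\left|11\right\rangle$ only by signs or a bit flip on Bob's pair.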
The processes are given in the same sequence by the wave functions $\left| \psi \right\rangle^{(1)}_{12345} $, $\left| \psi \right\rangle^{(2)}_{12345} $ and $\left| \psi \right\rangle^{(3)}_{12345} $, where \begin{eqnarray} \left| \psi \right\rangle^{(1)}_{12345} & = & \frac{a}{\sqrt{2}} \left[ \left| 01000 \right\rangle _{12345} + \left| 01111 \right\rangle _{12345} \right] +\frac{b}{\sqrt{2}} \left[ \left| 10100 \right\rangle _{12345} + \left| 10011 \right\rangle _{12345} \right], \\ \left| \psi \right\rangle ^{(2)}_{12345} & = & \frac{a}{2} \left[ \left| 01000 \right\rangle _{12345} + \left| 11000 \right\rangle _{12345} + \left| 01111 \right\rangle _{12345} + \left| 11111 \right\rangle _{12345} \right] \nonumber \\ & + & \frac{b}{2} \left[ \left| 00100 \right\rangle _{12345} - \left| 10100 \right\rangle _{12345} + \left| 00011 \right\rangle _{12345} - \left| 10011 \right\rangle _{12345} \right], \\ & {\rm and} & \nonumber \\ \left| \psi \right\rangle^{(3)}_{12345} & = & \frac{a}{2\sqrt{2}} \left[\left| 00000 \right\rangle _{12345} - \left| 01000 \right\rangle _{12345} + \left| 10000 \right\rangle _{12345} - \left| 11000 \right\rangle _{12345} \right. \nonumber \\ & & +\left. \left| 00111 \right\rangle _{12345} - \left| 01111 \right\rangle _{12345} + \left| 10111 \right\rangle _{12345} - \left| 11111 \right\rangle _{12345} \right] \nonumber \\ & + & \frac{b}{2\sqrt{2}} \left[\left| 00100 \right\rangle _{12345} + \left| 01100 \right\rangle _{12345} - \left| 10100 \right\rangle _{12345} - \left| 11100 \right\rangle _{12345} \right. \nonumber \\ & & +\left. \left| 00011 \right\rangle _{12345} + \left| 01011 \right\rangle _{12345} - \left| 10011 \right\rangle _{12345} - \left| 11011 \right\rangle _{12345} \right]. 
\end{eqnarray} Decomposing this in terms of the computational three qubit (123) basis set, we get, \begin{eqnarray} \left| \psi \right\rangle ^{(3)}_{12345} & = & \frac{1}{2\sqrt{2}} \left\{ \left| 000 \right\rangle _{123} \left[ a \left| 00 \right\rangle _{45} +b \left| 11 \right\rangle _{45} \right]+ \left| 010 \right\rangle _{123}\left[ -a \left| 00 \right\rangle _{45} +b \left| 11\right\rangle _{45} \right] \right. \nonumber \\ & + & \left| 100 \right\rangle _{123}\left[ a \left| 00 \right\rangle _{45} -b \left|11 \right\rangle _{45} \right] + \left| 110 \right\rangle _{123}\left[ -a \left| 00 \right\rangle _{45} -b \left| 11 \right\rangle _{45} \right] \nonumber \\ & + & \left| 001 \right\rangle _{123}\left[ a \left| 11 \right\rangle _{45} +b \left| 00 \right\rangle _{45} \right] + \left| 011 \right\rangle _{123}\left[ -a \left| 11 \right\rangle _{45} +b \left| 00 \right\rangle _{45} \right] \nonumber \\ & + & \left. \left| 101 \right\rangle _{123} \left[ a \left| 11 \right\rangle _{45} -b \left| 00 \right\rangle _{45} \right] + \left| 111 \right\rangle _{123}\left[ -a \left| 11 \right\rangle _{45} -b \left| 00 \right\rangle _{45} \right] \right\}. \end{eqnarray} It is clear from the above that Bob's measurements are all equally probable and require at the most one two-qubit gate leading to four equal outcomes. Table XII gives the two qubit gates required for measurements.\par \subsection*{C. Teleportation of a single qubit through entangled basis of three qubits} The three-qubit entangled basis (Eq. 6) can be prepared by applying the Hadamard gate on the first qubit of GHZ basis as \[ \frac{1}{\sqrt{2}} \left[ \, \left| 000 \right\rangle _{123} + \left| 111 \right\rangle _{123} \right] \stackrel{H^1}{\longrightarrow} \frac{1}{2} \left[ \, \left| 000 \right\rangle _{123} + \left| 100 \right\rangle _{123} +\left| 011 \right\rangle _{123} - \left| 111 \right\rangle _{123} \right] \] The quantum circuit to prepare the above state is given in Fig. 4. 
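The displayed action of $H^1$ on the GHZ state can be verified in a few lines of NumPy (our sketch; qubit 1 is taken as the most significant bit of the basis index):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# GHZ state (|000> + |111>)/sqrt(2); basis index = q1 q2 q3 in binary
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

out = np.kron(H, np.eye(4)) @ ghz  # Hadamard on qubit 1 only

# Expected: (|000> + |100> + |011> - |111>)/2
expected = np.zeros(8)
expected[0b000] = expected[0b100] = expected[0b011] = 0.5
expected[0b111] = -0.5
assert np.allclose(out, expected)
```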
The quantum circuit required to teleport the single qubit through the above three-qubit entangled state is given in Fig. 5. The input to the circuit is \begin{eqnarray} \left| \psi \right\rangle^{(0)}_{1234} & = & \frac{a}{2} \left[ \left| 0000 \right\rangle _{1234} + \left| 0011 \right\rangle _{1234} + \left| 0100 \right\rangle _{1234} - \left| 0111 \right\rangle _{1234} \right] \nonumber \\ & + & \frac{b}{2} \left[ \left| 1000 \right\rangle _{1234} + \left| 1011 \right\rangle _{1234} + \left| 1100 \right\rangle _{1234} - \left| 1111 \right\rangle _{1234} \right] . \end{eqnarray} The four-qubit direct product state can be decomposed into four equally probable results (similar to Eq. 16 and Eq. 17) on Bob's qubit as \begin{eqnarray} \left| \psi \right\rangle^{(3)}_{1234} & = & \frac{1}{2\sqrt{2}} \left\{ \left| 000 \right\rangle _{123} \left[ a \left| 0 \right\rangle _4 +b \left| 1 \right\rangle _4 \right] + \left| 001 \right\rangle _{123}\left[ a \left| 1 \right\rangle _4 +b \left| 0 \right\rangle _4 \right] \right. \nonumber \\ & + & \left| 010 \right\rangle _{123}\left[ a \left| 0 \right\rangle _4 -b \left| 1 \right\rangle _4 \right] + \left| 011 \right\rangle _{123}\left[ -a \left| 1 \right\rangle _4 +b \left| 0 \right\rangle _4 \right] \nonumber \\ & + & \left| 110 \right\rangle _{123}\left[ a \left| 0 \right\rangle _4 +b \left| 1 \right\rangle _4 \right] + \left| 101 \right\rangle _{123}\left[ a \left| 1 \right\rangle _4 -b \left| 0 \right\rangle _4 \right] \nonumber \\ & + & \left. \left| 100 \right\rangle _{123} \left[ a \left| 0 \right\rangle _4 -b \left| 1 \right\rangle _4 \right] + \left| 111 \right\rangle _{123}\left[ -a \left| 1 \right\rangle _4 -b \left| 0 \right\rangle _4 \right] \right\}. \end{eqnarray} The gates required for detection are summarized in Table XI. \subsection*{D.
Quantum circuits for two-qubit teleportation} Here we suggest quantum circuits for preparing the different four-qubit entangled states and the networks required to teleport arbitrary two-qubit states through these quantum carriers using the four-qubit computational bases. \par The quantum circuits required to prepare the sets of orthonormal states given by Eq. 21 and Eq. 25 are shown in Figs. 6 and 8, respectively. Depending on the input, all 16 orthonormal states of each of the two sets can be prepared. In addition, Figs. 7 and 9 provide the quantum networks to teleport an arbitrary two-qubit state using the four-particle entangled states (quantum carriers) prepared by the circuits of Figs. 6 and 8, respectively. The symbols have their usual meanings as discussed earlier, and the algebra related to the process is straightforward. The quantum network to prepare a six-qubit entangled state and to teleport an arbitrary three-qubit state through it can be developed on similar grounds. The unitary transformations required on Bob's side are all single-qubit unitary transformations and are quite simple to achieve. \section*{V. CONCLUSION} We have given in this paper schemes for single-particle teleportation through three-particle GHZ states using three-particle entangled basis set(s), and vice versa. The use of GHZ states and sets of three-particle basis set(s) as {\em quantum carriers} and as sets of {\em projection bases} has been explored. Our protocol obviates the earlier difficulties regarding missing basis elements of the projection basis, and Alice does not need any assistance (Charlie/Cliff) in the process of communication with Bob. In that sense ours is direct teleportation and not a controlled one, in contrast to schemes proposed earlier. We have discussed the entanglement of multipartite states using statistical correlation coefficients and have proposed multiparticle entangled states which possess genuine multiparticle entanglement.
We have demonstrated the teleportation of arbitrary two-particle states using the proposed states as quantum carriers, with the added advantage that only direct products of single-qubit unitary transformations are required on Bob's side instead of a joint unitary transformation involving two or more particles. We have taken a step forward in suggesting the generalization of the protocol to the teleportation of an $N$-particle state through a $2N$-particle genuinely entangled quantum channel, which can be formed by taking proper care to ensure maximal genuine entanglement. In addition, we have analyzed and verified all the protocols discussed here through the use of quantum gates with appropriate quantum circuits. \section*{VI. ACKNOWLEDGMENT} AK is grateful to IIT Madras for a graduate fellowship. MSK would like to express his gratitude to his mentor, Professor Bryan Sanctuary, McGill University, Montreal, Canada, for introducing him to the subject of quantum teleportation. This project is funded by the IIT Madras research funds. \newpage
\section*{Introduction} It has been known for a long time that the target space geometry of the sigma model is intimately related to the number of supersymmetries it possesses. In particular, Bruno Zumino showed that $(2,2)$ supersymmetry in $D=1$ requires the bosonic part of the Lagrangian to describe a K\"ahler manifold \cite{1}. Later on, Alvarez-Gaume and Freedman \cite{2} proved that $(4,4)$ supersymmetry further restricts the target space geometry to be hyper-K\"ahler (HK). Next, the analysis of the supersymmetric sigma-models with Wess-Zumino terms \cite{hkt} and heterotic $(4,0)$ supersymmetric sigma models \cite{{3},{4}} brought about hyper-K\"ahler geometries with torsion (HKT)\footnote{ The same type of bosonic target HKT geometry as in the $D=1$ case was present in the $N=8, D=1$ analytic bi-harmonic superspace, see e.g. \cite{bis} and references therein.}. Apart from their evident application to non-linear sigma-models, HK and HKT geometries arise also in the moduli spaces for a certain class of black holes \cite{4}, in the target space of a bound state of a D-string and D-five-branes, etc. Unfortunately, all these applications, though very interesting, are rather complicated. Moreover, the mathematical description of supersymmetric sigma models with HK and/or HKT target space geometries is quite involved. Therefore it seems to be a promising idea to simplify everything in such a way as to provide the simplest theory where the HK geometry arises as a consequence of supersymmetry, and where all the main properties of the theory can be understood. Clearly enough, supersymmetric mechanics should be a good choice in this respect. Supersymmetric mechanics with $N=4$ supersymmetry possesses a number of specific features which distinguish it not only from its higher-dimensional counterparts, but also from mechanics with a different number of supersymmetries. Firstly, $N=4, D=1$ supersymmetry is rather simple.
Moreover, just in the $N=4, D=1$ case the most general action may be easily written in terms of superfields as an integral over the whole superspace (in close analogy with $(2,2)$ supersymmetry in $d=2$). Secondly, all known $N=4$ supermultiplets in $D=1$ are off-shell, so the corresponding actions can be written in standard superspace (see e.g. \cite{ikl} and refs. therein). One should stress that just in $N=4, D=1$ superspace one may define a new class of nonlinear supermultiplets which contain a functional freedom in the defining relations \cite{{ks},{di},{bk41}}. Let us recall that we formulated the problem of how to describe $N = 4$ and $N = 8$, $D = 1$ sigma models with HK metrics in the target space in \cite{lectures}, advocating the use of nonlinear supermultiplets. Finally, in one dimension there is a nice duality between cyclic variables in Lagrangian and coupling constants. Indeed, if some one dimensional Lagrangian has a cyclic variable, say $\phi$, then the corresponding conserved momentum $p_\phi$ acquires a constant value $m$. Performing a Routh transformation over $\phi$ we will get a theory with a smaller number of bosonic fields but with a coupling constant $m$. Obviously, this procedure may be reversed to dualize the coupling constant $m$ into a new bosonic field $\phi$. Clearly enough, the resulting Lagrangian will possess an isometry with the Killing vector $\partial/\partial \phi$. In what follows we will heavily use just these features of supersymmetric mechanics. It is known that four dimensional bosonic hyper-K\"ahler manifolds with (at least) one isometry may be divided into two types that are in fact distinct from each other\footnote{Here we closely follow \cite{basf}.}. The first kind, which is sometimes called translational (or triholomorphic), corresponds to a Killing vector with self-dual covariant derivatives. In the supersymmetric case the translational isometry commutes with supersymmetry. 
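The Routh duality invoked above can be made concrete in two lines (our schematic notation, not tied to any specific model in this Letter):

```latex
% One cyclic variable \phi; its momentum is conserved:
L = g(x)\,\dot{x}^{2} + f(x)\,\dot{\phi}^{2},
\qquad
p_{\phi} = 2 f(x)\,\dot{\phi} = m = \mathrm{const}.
% Eliminating \dot\phi = m/2f(x) gives the Routh function
R = L - m\,\dot{\phi} = g(x)\,\dot{x}^{2} - \frac{m^{2}}{4 f(x)},
% i.e. one bosonic field less and a coupling constant m; reading these
% steps backwards dualizes the constant m into a new field \phi.
```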
For the four dimensional hyper-K\"ahler manifolds with a translational isometry there is a preferred coordinate system where the bosonic sigma model action reads \cite{hk1} \begin{equation}\label{hk1} S_1 = \int dt \left[ \frac{1}{g} \left( \dot\phi +\omega_i \dot{x}^i \right)^2 + g \left( \eta_{ij} {\dot x}^i \dot{x}^j \right)\right], \end{equation} where $\eta_{ij}$ is a flat three dimensional metric and $\omega_i(x^j)$ and $g(x^i)$ are constrained to satisfy the conditions \begin{equation}\label{hk1a} \partial_i g = \pm \epsilon_{ijk}\partial_j \omega_k. \end{equation} It immediately follows from \p{hk1a} that the metric $g(x^i)$ satisfies the three dimensional Laplace equation. Let us observe that one may always choose the ``gauge'' $\omega_3=0$ by a proper redefinition of the field $\phi$. The second type of four dimensional hyper-K\"ahler manifolds, which are called manifolds with rotational isometry, encompasses all other Killing vector fields. Once again, one may find a preferred coordinate system in which the sigma model action takes its simplest form \cite{hk2} \begin{equation}\label{hk2} S_2=\int dt\left[ \frac{1}{\Psi_u} \left( \dot\phi +i \Psi_z \dot{z} - i \Psi_{\bar z} \dot{{\bar z}}\right)^2+ {\Psi_u} \left( {\mbox e}^\Psi \dot{z} \dot{\bar z} + {\dot u}^2\right)\right]. \end{equation} Here, additionally, the function $\Psi=\Psi(z,{\bar z},u)$ satisfies the Toda equation \begin{equation}\label{hk2a} \Psi_{z {\bar z}} + \left( {\mbox e}^{\Psi}\right)_{uu} =0. \end{equation} The purpose of the present Letter is to construct $N=4$ supersymmetric extensions of both bosonic HK metrics \p{hk1} and \p{hk2}. The line of our construction looks as follows.
It is rather easy to realize that after removing the cyclic variable $\phi$ in both actions \p{hk1} and \p{hk2}, we will get the three dimensional mechanics with a conformally flat metric in the case of \p{hk1} and with a more complicated metric in the case of \p{hk2}. Therefore we start from the general $N=4, D=1$ three dimensional mechanics, properly constrain it to get the needed three dimensional bosonic manifolds, and then perform a dualization of a coupling constant, initially present in the action, to reproduce the full action. To close this Section let us mention that the $N=4$ supersymmetric mechanics for the action with translational isometry \p{hk1} has been already constructed: on the component level in \cite{{G},{ks},{bks11}} and in the harmonic superspace in \cite{di}. As for the action with rotational isometry \p{hk2}, to the best of our knowledge, no explicit supersymmetric action has been constructed. In the next Sections we are going to consider both cases on the same footing. Surprisingly, for the case \p{hk1} the system we found admits $N=8, D=1$ supersymmetry. In the following, we simplify our presentation by considering only the bosonic parts of the corresponding actions. The complete expressions, including all fermionic terms, may be easily restored, if needed. \section*{From three dimensional to hyper-K\"ahler sigma models} The most general three dimensional sigma model with $N=4, D=1$ supersymmetry can easily be constructed within standard $N=4, D=1$ superspace which may be parameterized by the following coordinates: $ t, \theta_i,{\bar\theta}{}^i; i=1,2$. 
We choose as our basic superfields the ordinary $N=4$ chiral superfield ${\cal Z}$ obeying the conditions \begin{equation}\label{sf1} D^i {\overline{\cal Z}}=0, \quad \overline D_i {\cal Z}=0, \end{equation} and the so-called ``old tensor'' supermultiplet \cite{leva} which may be described by a real superfield ${\cal U}$ subject to the following constraints: \begin{equation}\label{sf2} D^i D_i \;{\cal U} = \overline D^i\overline D_i \;{\cal U}=0, \end{equation} where the $N=4$ spinor covariant derivatives $D^i,\overline D_i$ obey the standard super-Poincar\'e algebra $$\left\{ D^i, \overline D_j \right\} =2i \delta^i_j \partial_t.$$ The chiral superfield ${\cal Z}$ describes two physical bosons $z,\bar z$, four fermions $\psi^i, {\bar\psi}_i$ and two auxiliary bosonic fields $A, \bar A$ which may be defined as\footnote{As usual, $|$ denotes the restriction to $\theta^i=\bar\theta_i=0$.} \begin{equation}\label{comp1} z={\cal Z}|,\; {\bar z}={\overline{\cal Z}}|,\; \psi^i =D^i{\cal Z}|, \; {\bar\psi}_i=\overline D_i {\overline{\cal Z}}|, \; A= D^iD_i {\cal Z}|,\; {\bar A}=\overline D_i \overline D^i {\overline{\cal Z}}|. \end{equation} Concerning the ``old tensor'' supermultiplet, it comprises one physical boson $u$, once again four fermions $\xi^i,{\bar\xi}_j$ and a triplet of auxiliary components $A^{(ij)}$ \begin{equation}\label{comp2} u={\cal U}|,\; \xi^i =D^i {\cal U}|, \; {\bar\xi}_i =\overline D_i {\cal U}|, \; A_{(ij)} =i \left[ D_{(i},\overline D_{j)}\right] {\cal U}|. \end{equation} What is extremely important for our construction is that among the components of the superfield ${\cal U}$ there is a constant $m$ \cite{leva}.
Indeed, from the basic constraints \p{sf2} it immediately follows that \begin{equation}\label{g} \frac{\partial}{\partial t} \left[ D^i, \overline D_i\right] {\cal U} =0 \; \Rightarrow \; \left[ D^i, \overline D_i\right] {\cal U}=4m=\mbox{ const}. \end{equation} The most general sigma model action may be easily written in the full $N=4, D=1$ superspace as \begin{equation}\label{a1} S= -\int dt d^4 \theta \; {\cal F}({\cal Z}, {\overline{\cal Z}}, {\cal U})\equiv -\frac{1}{4}\int dt D^2 \overline D{}^2 \; {\cal F}({\cal Z}, {\overline{\cal Z}}, {\cal U}), \end{equation} where ${\cal F}({\cal Z}, {\overline{\cal Z}}, {\cal U})$ is an arbitrary real function of ${\cal Z}, {\overline{\cal Z}}, {\cal U}$. After passing to the components \p{comp1},\p{comp2} and eliminating the auxiliary fields by their equations of motion, the bosonic part of the action \p{a1} takes the following form: \begin{equation}\label{a2} S_{bos}=\int dt \left[\left( F_{uu} {\dot u}^2 - 4F_{z{\bar z}}{\dot z}{\dot{\bar z}}\right) -F_{uu}m^2 +2im\left(F_{uz}{\dot z} - F_{u{\bar z}}\dot{\bar z} \right) \right]. \end{equation} Now we have at hand the most general three dimensional sigma model action. The next task is to put a proper restriction on the prepotential ${\cal F}({\cal Z}, {\overline{\cal Z}}, {\cal U})$ to reproduce the three dimensional parts of the metrics \p{hk1} and \p{hk2}, and to perform the dualization of the coupling constant $m$. \subsection*{HK sigma model with translational isometry} For the HK sigma model with translational isometry the bosonic three dimensional part of the action should be conformally flat, as in \p{hk1}. It is immediately clear from \p{a2} that conformal flatness is achieved if \begin{equation}\label{eq1} F_{z{\bar z}}=-F_{uu} \; \Rightarrow F_{uu}+F_{z{\bar z}}=0.
\end{equation} Thus, the necessary condition to reproduce \p{hk1} is to choose the prepotential ${\cal F}$ to be a three dimensional harmonic function. In this case, the three dimensional metric reads \begin{equation}\label{eq1a} g=F_{uu} \end{equation} which, as a consequence of \p{eq1}, obeys the three dimensional Laplace equation. Thus, we have partially reproduced the needed action. All that remains is to get the full four dimensional action \p{hk1}. Fortunately, the action \p{a2} already contains the coupling constant $m$, which may be dualized into a fourth bosonic field. Let us supply the action \p{a2} with an additional term \begin{equation}\label{a2a} {\tilde S}_{bos} = S_{bos} + \int dt\;m \dot\phi. \end{equation} Varying \p{a2a} over the new bosonic field $\phi$, we simply recover that $m=const$. But if we instead vary the action \p{a2a} over $m$, which is now an independent variable, we immediately get \begin{equation}\label{m1} m = \frac{1}{2F_{uu}} \left[\dot\phi+2i\left(F_{uz}{\dot z} - F_{u{\bar z}}\dot{\bar z} \right)\right]. \end{equation} Plugging \p{m1} back into the action \p{a2a}, we finally get \begin{equation}\label{finA} {\tilde S}_{bos} =\int dt\left\{ F_{uu} \left( {\dot u}^2 + 4 {\dot z}{\dot{\bar z}}\right) + \frac{1}{4F_{uu}} \left[\dot\phi+ 2i\left(F_{uz}{\dot z} - F_{u{\bar z}}\dot{\bar z} \right)\right]^2 \right\}. \end{equation} Comparing the action \p{finA} with \p{hk1}, one may find that they completely coincide (modulo unessential numerical factors) after passing in the action \p{hk1} to the complex coordinates $z=x^1+ix^2$, choosing the gauge $\omega_3=0$ and solving the conditions \p{hk1a} exactly. Thus, we have constructed an $N=4$ supersymmetric mechanics which possesses HK geometry with translational isometry in its bosonic target space.
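The elimination of $m$ is just the minimization of a quadratic, and the algebra behind \p{m1} and \p{finA} can be checked symbolically; the sympy sketch below uses our shorthand $A\sim F_{uu}$ and $B\sim\dot\phi+2i(F_{uz}\dot z-F_{u\bar z}\dot{\bar z})$.

```python
import sympy as sp

# m-dependent part of the action (a2a): -F_uu m^2 + m[phidot + 2i(...)],
# abbreviated as -A m^2 + B m (A, B are our shorthand symbols)
m, A, B = sp.symbols('m A B', nonzero=True)
L_m = -A * m**2 + B * m

m_star = sp.solve(sp.diff(L_m, m), m)[0]   # stationary point, cf. (m1)
assert sp.simplify(m_star - B / (2 * A)) == 0

# Substituting back produces the +B^2/(4A) term of the dualized action (finA)
assert sp.simplify(L_m.subs(m, m_star) - B**2 / (4 * A)) == 0
```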
Surprisingly, the condition \p{eq1} that the prepotential be a harmonic function is completely sufficient to realize an additional $N=4$ supersymmetry which commutes with the manifest one and still preserves the action \p{a1} \cite{bikl}. Therefore, we conclude that our action \p{finA}, with all fermionic terms restored, provides us with an $N=8,D=1$ supersymmetric mechanics with HK geometry in the bosonic sector. \subsection*{HK sigma model with rotational isometry} Now we turn to the second type of HK metrics \p{hk2}. Once more, comparing the kinetic term of the action \p{a2} with the corresponding part of the action \p{hk2}, one may conclude that, in order to reproduce the action \p{hk2}, we have to impose the following constraints on the prepotential $F$: \begin{equation}\label{eq2} F_{z{\bar z}}=-\mbox{e}^{\Psi} F_{uu}, \; F_{uu} = \partial_u \Psi . \end{equation} One may check that the integrability condition of the system \p{eq2} results in the equation \begin{equation}\label{eq2a} \frac{\partial}{\partial u} \left[\Psi_{ z {\bar z}} + \left( {\mbox e}^{\Psi}\right)_{uu}\right] =0, \end{equation} which is just a weak variant of the condition \p{hk2a}, necessary in order to get the HK metric. As the last step we perform the same dualization of the constant $m$ as in the previous section. As a result we end up with the following bosonic action: \begin{equation}\label{finB} {\hat S}_{bos} =\int dt\left\{ \Psi_{u} \left( {\dot u}^2 + 4 \mbox{e}^\Psi {\dot z}{\dot{\bar z}}\right) + \frac{1}{4\Psi_{u}} \left[\dot\phi+ 2i\left(\Psi_{z}{\dot z} - \Psi_{{\bar z}}\dot{\bar z} \right)\right]^2 \right\}, \end{equation} where we partially integrated \p{eq2} as $F_u =\Psi$. Clearly, we have the same action as in \p{hk2}.
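The integrability condition quoted above can be spelled out in two lines (our derivation sketch): since $F_{uu}=\Psi_u$ implies $F_u=\Psi+h(z,\bar z)$ with a $u$-independent function $h$, the two constraints give

```latex
\partial_u F_{z\bar z} = \partial_z \partial_{\bar z} F_u
                       = \Psi_{z\bar z} + h_{z\bar z},
\qquad
\partial_u F_{z\bar z} = -\,\partial_u\!\left(\mathrm{e}^{\Psi}\,\Psi_u\right)
                       = -\left(\mathrm{e}^{\Psi}\right)_{uu},
```

so that $\Psi_{z\bar z}+(\mathrm{e}^{\Psi})_{uu}=-h_{z\bar z}$ is independent of $u$, which is exactly the weak Toda condition.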
Hence, we conclude that the action \p{finB}, with all fermionic terms restored, correctly reproduces $N=4,D=1$ supersymmetric mechanics possessing HK geometry with rotational isometry in the bosonic sector. \section*{Discussions and Outlook} The study of supersymmetric quantum mechanics models endowed with $N = 4,8$ supersymmetry represents one of the most up-to-date and prolific directions of development. While building upon the research initiated in \cite{leva}, the new developments aim at a more complete understanding of the structure of the corresponding higher-dimensional supersymmetric field theories, as well as of the AdS(2)/CFT(1) correspondence. In this paper we constructed $N=4, D=1$ supersymmetric sigma models with HK geometry in the bosonic target space which possess (at least) one translational/rotational isometry. In the case of HK geometry with translational isometry we found that the action possesses an additional hidden $N=4$ supersymmetry and is therefore $N=8$ supersymmetric \cite{bikl}. We also explicitly demonstrated that the conditions which select these types of HK metrics follow from the invariance under $N=4$ supersymmetry. Being almost manifestly supersymmetric, our construction leaves one serious question unanswered. Indeed, we obtained the fourth bosonic physical field through the dualization of the coupling constant, which from the beginning is present in one of the supermultiplets we started from. Thus, the question is: how is $N=4$ supersymmetry realized on this new field $\phi$? The transformation properties of $\dot\phi$ under $N=4$ supersymmetry may be immediately found from its definition \p{m1}. We have explicitly checked that on-shell the time derivative can be removed, and the transformation law of $\phi$ is local under $N=4$ supersymmetry \begin{equation} \delta\phi=8i \left( \epsilon^i D_i F_u -\bar\epsilon_i \overline D{}^i F_u \right).
\end{equation} So, at least we deal with a local realization of $N=4$ supersymmetry, but it is still unclear whether it might be realized off-shell. Another interesting question concerns the explicit realization of the hidden $N=4$ supersymmetry in the case of translational isometry and the possible existence of a hidden $N=4$ supersymmetry in the case of the action with rotational isometry. Of course, in this respect a superfield off-shell formulation could be very useful. Unfortunately, at present we do not have such a formulation at hand. Finally, we would like to stress that the considered $D=1$ sigma-models are much simpler than their $D=4$ and even $D=2$ counterparts. Moreover, they are completely ready for quantization. In principle, one may hope to find the exact spectrum for some peculiar solutions of the three-dimensional Laplace and Toda equations. Let us observe that even the three dimensional action \p{a2} is interesting, owing to its very specific interaction terms. This is especially intriguing in view of the existence of two examples of integrable systems (with translational isometry) where the interactions are defined in just this way \cite{{gib},{ners}}. Hopefully, the extended supersymmetry will not spoil integrability. We hope to report the corresponding results elsewhere. \section*{Acknowledgements} The authors are grateful to F.~Delduc and E.~Ivanov for useful correspondence. S.K. would like to thank the INFN--Laboratori Nazionali di Frascati for the warm hospitality extended to him during the course of this work. This work was partially supported by the European Community's Marie Curie Research Training Network under contract MRTN-CT-2004-005104 Forces Universe, by INTAS under contract 05-7928 and by grants RFBR-06-02-16684, DFG~436 Rus~113/669/03.
\section{Introduction} Let $a_{1},\dots,a_{r},z_{1},\ldots,z_{r}\in\mathbb{C}$ be parameters with $\Re(a_{1}),\Re(a_{1}+a_2),\dots,\Re(a_1+\cdots+a_{r})>0$, $|z_{1}|,\ldots,|z_{r}|\le1$, and $z_1,\ldots,z_r\neq0$. For $s_{1},\dots,s_{r}\in\mathbb{C}$ with $\Re(s_1),\ldots,\Re(s_r)>1$, the Hurwitz-Lerch multiple zeta functions are defined by \begin{align*} \zeta(s_{1},\ldots,s_{r};a_{1},\ldots,a_{r};z_{1},\ldots,z_{r}) & :=\sum_{0\le m_{1},\ldots,m_{r}}\frac{z_{1}^{m_{1}}\cdots z_{r}^{m_{r}}}{(m_{1}+a_{1})^{s_{1}}\cdots(m_{1}+\cdots+m_{r}+a_{1}+\cdots+a_{r})^{s_{r}}}, \end{align*} where we set $-\pi<\arg(m_1+\cdots+m_j+a_1+\cdots+a_j)\le\pi$ for $1\le j\le r$. The function $\zeta(s;a;z)$ was studied by Lipschitz \cite{Lip57,Lip89} and Lerch \cite{Ler87}. It is known that, for fixed $s$ which is not a positive integer, $\zeta(s;a;z)$ can be continued to $\mathbb{C}\setminus[1,\infty)$ as a function of $z$, and that, for fixed $z\in\mathbb{C}\setminus[1,\infty)$, $\zeta(s;a;z)$ can be continued as a function of $s$ to the whole complex plane except for possible simple poles at $s=1,2,3,\ldots$ (for details, see \cite{EMOT1}). Special values at non-positive integers were given by Apostol \cite{Apo51}. When $|z|=1$, he showed \begin{align*} \zeta(-n;a;z) & =-\frac{B_{n+1}(a;z)}{n+1} \end{align*} for a non-negative integer $n$, where $B_{n+1}(a;z)$ is the Apostol-Bernoulli polynomial defined by the generating function \[ \frac{xe^{ax}}{ze^{x}-1}=\sum_{n\ge0}B_{n}(a;z)\frac{x^{n}}{n!}. \] Note that $B_n(a;1)=B_n(a)$ is the Bernoulli polynomial and $(-1)^nB_n(1;1)=B_n$ is the Bernoulli number. The Hurwitz-Lerch multiple zeta functions have been studied by many authors. In \cite{Kam06}, Kamano investigated the function with $z_1=\cdots=z_r=1$ and its values at non-positive integer points. In \cite{EM20,EM21}, Essouabri and Matsumoto studied functions in which the denominators are generalized.
In \cite{FKMT17}, Furusho, Komori, Matsumoto, and Tsumura showed some interesting analytic properties and defined a desingularized analogue. In \cite{Kom09}, Komori studied the function and gave an analytic continuation and limit values at non-positive integer points. Here, motivated by the second-named author's work \cite{Onozuka13}, we shall show the asymptotic behavior of the Hurwitz-Lerch multiple zeta function at non-positive integer points. For integers $i$ and $r$ with $1\le i\le r$, put $n(i,r):=n_{i}+\cdots+n_{r}$, $l(i,r):=l_{i}+\cdots+l_{r}$, and $\epsilon(i,r):=\epsilon_{i}+\cdots+\epsilon_{r}$. For $j=1,\dots,r-1$ and $d_{1},\dots,d_{r-1}\in\{0,1\}$, set \begin{align*} S_{j}^{(d_{j})} & =S_{j}^{(d_{j})}(l_{1},\dots,l_{r})\\ & :=\begin{cases} \{(n_{1},\dots,n_{r})\in\mathbb{Z}_{\ge0}^{r}\mid n(j+1,r)\le l(j+1,r)+(r-j),n(1,r)=l(1,r)+r\} & \text{if }d_{j}=0,\\ \{(n_{1},\dots,n_{r})\in\mathbb{Z}_{\ge0}^{r}\mid l(j,r)+(r-j)<n(j+1,r),n(1,r)=l(1,r)+r\} & \text{if }d_{j}=1 \end{cases} \end{align*} and \[ S^{(d_{1},\dots,d_{r-1})}=S^{(d_{1},\dots,d_{r-1})}(l_{1},\dots,l_{r}):=\bigcap_{j=1}^{r-1}S_{j}^{(d_{j})}. \] For non-negative integers $n_{1},\dots,n_{r}$ and $d_{1},\dots,d_{r-1}\in\{0,1\}$, let \begin{align*} h^{(d_{1},\dots,d_{r-1})}(n_{1},\dots,n_{r}) & =h^{(d_{1},\dots,d_{r-1})}(n_{1},\dots,n_{r};l_{1},\dots,l_{r};\epsilon_{1},\ldots,\epsilon_{r})\\ & :=(-1)^{l_{r}}l_{r}!\prod_{j=1}^{r-1}h_{j}^{(d_{j})}(n_{1},\dots,n_{r};l_{1},\dots,l_{r};\epsilon_{1},\ldots,\epsilon_{r}), \end{align*} where \[ h_{j}^{(d_{j})}(n_{1},\dots,n_{r};l_{1},\dots,l_{r};\epsilon_{1},\ldots,\epsilon_{r}):=\begin{cases} \displaystyle\frac{(-1)^{l_{j}}(-(n(j+1,r)-l(j,r)-(r-j)))!}{(-(n(j+1,r)-l(j+1,r)-(r-j)))!}& \text{if }d_{j}=0,\\ \displaystyle\frac{\epsilon(j+1,r)}{\epsilon(j,r)}\cdot\frac{(n(j+1,r)-l(j+1,r)-(r-j)-1)!}{(n(j+1,r)-l(j,r)-(r-j)-1)!}& \text{if }d_{j}=1.
\end{cases} \] In this paper, we present the asymptotic behavior of the Hurwitz-Lerch multiple zeta function at non-positive integer points. \begin{thm}\label{main} Let $r\ge2$, let $a_{1},\dots,a_{r},z_{1},\ldots,z_{r}$ be complex parameters with $\Re(a_{1}),\Re(a_{1}+a_2),\dots,\Re(a_1+\cdots+a_{r})>0$ and $z_{1},\ldots,z_{r}\notin(1,\infty)$, and let $\epsilon_{1},\ldots,\epsilon_{r}$ be complex numbers. Suppose that $\left|\epsilon_{1}\right|,\ldots,\left|\epsilon_{r}\right|$ are sufficiently small with $\epsilon_{j}\neq0,\epsilon(j,r)\neq0$, and $\left|\epsilon_{k}/\epsilon(j,r)\right|\ll1$ as $(\epsilon_{1},\ldots,\epsilon_{r})\rightarrow(0,\ldots,0)$ for $j=1,\ldots,r$ and $k=j,\ldots,r$. For non-negative integers $l_{1},\dots,l_{r}$, we have \begin{align*} &\zeta(-l_{1}+\epsilon_{1},\dots,-l_{r}+\epsilon_{r};a_{1},\ldots,a_{r};z_{1},\ldots,z_{r})\\ &=(-1)^{l(1,r)+r}\sum_{d_{1},\dots,d_{r-1}\in\{0,1\}}\sum_{(n_{1},\dots,n_{r})\in S^{(d_{1},\dots,d_{r-1})}}\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}h^{(d_{1},\dots,d_{r-1})}(n_{1},\dots,n_{r})\\ & \quad+\sum_{j=1}^{r}O(|\epsilon_{j}|). \end{align*} \end{thm} \begin{rem} When $a_1=\cdots=a_r=1$ and $z_1=\cdots=z_r=1$, we recover \cite[Theorem 2]{Onozuka13}. \end{rem} Some examples are given as follows. \begin{ex} When $r=2$ and $(l_1,l_2)=(0,0)$, by Theorem \ref{main}, we have \begin{align*} \zeta(\epsilon_1,\epsilon_2;a_1,a_2;z_1,z_2) &=B_{1}(a_1;z_1) B_{1}(a_{2};z_{2}) +\frac{1}{2} B_{2}(a_1;z_1) B_{0}(a_{2};z_{2})\\ &\quad +\frac{1}{2} B_{0}(a_1;z_1) B_{2}(a_{2};z_{2}) \frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}+\sum_{j=1}^{2}O(|\epsilon_{j}|). \end{align*} When $z_1,z_2\neq1$, since we can easily obtain $B_{0}(a;z)=0$ and $B_{1}(a;z)=(z-1)^{-1}$, we have \begin{align*} \zeta(\epsilon_1,\epsilon_2;a_1,a_2;z_1,z_2) &=(z_1-1)^{-1}(z_2-1)^{-1} +\sum_{j=1}^{2}O(|\epsilon_{j}|).
\end{align*} When $z_1=1,z_2\neq1$, since $B_{n}(a;1)$ is the Bernoulli polynomial and $B_{2}(a;z)=2a/(z-1)-2z/(z-1)^2$, we have \begin{align*} \zeta(\epsilon_1,\epsilon_2;a_1,a_2;1,z_2) &=\left(a_{1}-\frac{1}{2}\right) \frac{1}{z_{2}-1} +\left(\frac{1}{z_2-1}a_2-\frac{z_2}{(z_2-1)^2}\right)\frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}+\sum_{j=1}^{2}O(|\epsilon_{j}|). \end{align*} When $z_1\neq1,z_2=1$ and when $z_1=z_2=1$, we have \begin{align*} &\zeta(\epsilon_1,\epsilon_2;a_1,a_2;z_1,1) =\frac{1}{z_{1}-1} \left(a_1+a_{2}-\frac{3}{2}\right) -\frac{1}{(z_1-1)^2}+\sum_{j=1}^{2}O(|\epsilon_{j}|),\\ &\zeta(\epsilon_1,\epsilon_2;a_1,a_2;1,1) =\left(a_{1}-\frac{1}{2}\right) \left(a_{2}-\frac{1}{2}\right) +\frac{1}{2}\left(a_1^2-a_1+\frac{1}{6}\right)+\frac{1}{2}\left(a_2^2-a_2+\frac{1}{6}\right)\frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}+\sum_{j=1}^{2}O(|\epsilon_{j}|), \end{align*} respectively. In the last section, we give other examples. \end{ex} \begin{rem} Some of the special cases of the limit values were also studied by Akiyama and Tanigawa \cite{AT01}, Komori \cite{Kom09}, and Sasaki \cite{Sas09}. \end{rem} \section{Preliminaries} We give some results on the Apostol-Bernoulli polynomial. Let $\lambda(z)$ be a pole of the generating function $xe^{ax}/(ze^{x}-1)$ that is closest to $x=0$, excluding $x=0$ itself, and let $S(n,k)$ denote the Stirling number of the second kind. (For $z=0$, we define $\lambda(0)=\infty$.)
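As an illustrative check of the definitions above (not used in the proofs), the first few Apostol-Bernoulli polynomials can be extracted from the generating function and compared with the closed forms $B_0(a;z)=0$, $B_1(a;z)=1/(z-1)$, $B_2(a;z)=2a/(z-1)-2z/(z-1)^2$ appearing in the examples, and Apostol's value $\zeta(-1;a;z)=-B_2(a;z)/2$ can be tested against the (convergent for $|z|<1$) defining series. A sketch assuming SymPy is available:

```python
import sympy as sp

x, a, z, m = sp.symbols('x a z m')

# Generating function of the Apostol-Bernoulli polynomials:
#   x e^{a x} / (z e^x - 1) = sum_{n>=0} B_n(a;z) x^n / n!
gen = x * sp.exp(a * x) / (z * sp.exp(x) - 1)
ser = sp.expand(sp.series(gen, x, 0, 4).removeO())
B = [sp.simplify(sp.factorial(n) * ser.coeff(x, n)) for n in range(4)]

# Closed forms used in the examples (valid for z != 1):
assert B[0] == 0
assert sp.simplify(B[1] - 1/(z - 1)) == 0
assert sp.simplify(B[2] - (2*a/(z - 1) - 2*z/(z - 1)**2)) == 0

# Apostol's value zeta(-1;a;z) = -B_2(a;z)/2, checked at the
# (illustrative) point a = 3/10, z = 1/2, where the defining series
# sum_{m>=0} z^m (m + a) converges:
a0, z0 = sp.Rational(3, 10), sp.Rational(1, 2)
series_value = sp.Sum(z0**m * (m + a0), (m, 0, sp.oo)).doit()
assert sp.simplify(series_value + B[2].subs({a: a0, z: z0}) / 2) == 0
```

The chosen point $(a,z)=(3/10,1/2)$ is arbitrary; any $0<|z|<1$ would serve equally well.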
\begin{lem}\label{apos} For $|x|<|\lambda(z)|$, we have $$\frac{xe^{-ax}}{1-ze^{-x}}=\sum_{n\ge0}(-1)^{n}B_{n}(a;z)\frac{x^{n}}{n!}.$$ \end{lem} \begin{proof} By the definition of the Apostol-Bernoulli polynomial, we have \begin{align*} \frac{xe^{-ax}}{1-ze^{-x}}&=\frac{(-x)e^{a(-x)}}{ze^{-x}-1} =\sum_{n\ge0}(-1)^{n}B_{n}(a;z)\frac{x^{n}}{n!}.\qedhere \end{align*} \end{proof} \begin{thm}[Apostol \cite{Apo51}]\label{a51} For a complex number $z\neq1$, the Apostol-Bernoulli polynomial $B_n(a;z)$ can be written as $$ B_n(a;z)=\sum_{k=0}^n\binom{n}{k}\beta_k(z)a^{n-k}, $$ where $\beta_k(z)=B_k(0;z)$ is given by $$ \beta_k(z)=\frac{k}{(z-1)^k}\sum_{l=0}^{k-1}l!(-z)^{l}(z-1)^{k-1-l}S(k-1,l). $$ \end{thm} \begin{lem}\label{estimate2} Let $z\neq1$. For a positive integer $n$, we have \begin{align*} |B_{n}(a;z)| \ll\frac{n!}{(\log2)^n}e^{(\log2)|a|}\max\left\{|z|^{n-1}|z-1|^{-n},|z-1|^{-1}\right\}, \end{align*} where the implicit constant does not depend on $a$, $z$, and $n$. \end{lem} \begin{proof} Recall the well-known estimate for the ordered Bell numbers, $\sum_{l=0}^{k-1}l!S(k-1,l)\ll(k-1)!/(\log2)^k$. By Theorem \ref{a51}, we have \begin{align*} |\beta_k(z)|&\le\frac{k}{|z-1|^k}\max\left\{|z|^{k-1},|z-1|^{k-1}\right\}\sum_{l=0}^{k-1}l!S(k-1,l)\\ &\ll \frac{k!}{(\log 2)^k|z-1|^k}\max\left\{|z|^{k-1},|z-1|^{k-1}\right\}. \end{align*} Hence we have \begin{align*} |B_n(a;z)|&\ll\sum_{k=1}^n\binom{n}{k}\frac{k!}{(\log 2)^k|z-1|^k}\max\left\{|z|^{k-1},|z-1|^{k-1}\right\}|a|^{n-k}\\ &\ll n!\sum_{k=1}^n\frac{1}{(n-k)!}\frac{|a|^{n-k}}{(\log 2)^k|z-1|^k}\max\left\{|z|^{k-1},|z-1|^{k-1}\right\}\\ &\ll \frac{n!}{(\log2)^n}e^{(\log2)|a|}\max\left\{|z|^{n-1}|z-1|^{-n},|z-1|^{-1}\right\}.\qedhere \end{align*} \end{proof} \begin{lem}\label{estimate3} For a positive integer $n$, we have \begin{align*} |B_{n}(a;1)| \ll\frac{n!}{(2\pi)^{n}}e^{2\pi|a|}, \end{align*} where the implicit constant does not depend on $a$ and $n$.
\end{lem} \begin{proof} Since $|B_n|\ll n!/(2\pi)^n$, we have \begin{align*} |B_n(a;1)|&\le\sum_{k=0}^n\binom{n}{k}|B_{n-k}||a|^k\\ &\ll\sum_{k=0}^n\frac{n!}{k!}\frac{|a|^k}{(2\pi)^{n-k}}\\ &\ll\frac{n!}{(2\pi)^{n}}e^{2\pi|a|}.\qedhere \end{align*} \end{proof} \section{Meromorphic Continuation} By the definition of the gamma function, for $\Re(s_1),\ldots,\Re(s_r)>1$, $\Re(a_{1}),\Re(a_{1}+a_2),\dots,\Re(a_1+\cdots+a_{r})>0$, and $|z_1|,\ldots,|z_r|\le1$ with $z_1,\ldots,z_r\neq0$, we have \begin{align*} &\Gamma(s_1)\cdots\Gamma(s_r)\frac{z_{1}^{m_{1}}\cdots z_{r}^{m_{r}}}{(m_{1}+a_{1})^{s_{1}}\cdots(m_{1}+\cdots+m_{r}+a_{1}+\cdots+a_{r})^{s_{r}}}\\ &=\int_0^\infty \cdots \int_0^\infty e^{-(m_1+a_1)u_1-\cdots-(m_1+\cdots+m_r+a_1+\cdots+a_r)u_r}z_{1}^{m_{1}}\cdots z_{r}^{m_{r}}u_1^{s_1-1}\cdots u_r^{s_r-1}du_1\cdots du_r. \end{align*} Summing both sides over $m_{1},\ldots,m_{r}\ge0$, we have \begin{align*} & \Gamma(s_{1})\cdots\Gamma(s_{r})\zeta(s_{1},\ldots,s_{r};a_{1},\ldots,a_{r};z_{1},\ldots,z_{r})\\ & =\int_{0}^{\infty}\cdots\int_{0}^{\infty}\frac{e^{-a_{1}(u_{1}+\cdots+u_{r})}}{1-z_{1}e^{-(u_{1}+\cdots+u_{r})}}\cdot\frac{e^{-a_{2}(u_{2}+\cdots+u_{r})}}{1-z_{2}e^{-(u_{2}+\cdots+u_{r})}}\cdot\cdots\cdot\frac{e^{-a_{r}u_{r}}}{1-z_{r}e^{-u_{r}}}u_{1}^{s_{1}-1}\cdots u_{r}^{s_{r}-1}du_{1}\cdots du_{r}. \end{align*} Here, we use the change of variables $$x_1\cdots x_j=u_j+\cdots+u_r\iff u_j=x_1\cdots x_j(1-x_{j+1})$$ for $j=1,\ldots,r$, where $x_{r+1}=0$.
Since the Jacobian is $x_1^{r-1}x_2^{r-2}\cdots x_{r-1}$, we have \begin{align*} & \Gamma(s_{1})\cdots\Gamma(s_{r})\zeta(s_{1},\ldots,s_{r};a_{1},\ldots,a_{r};z_{1},\ldots,z_{r})\\ & =\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{\infty}\frac{e^{-a_{1}x_{1}}}{1-z_{1}e^{-x_{1}}}\cdot\frac{e^{-a_{2}x_{1}x_{2}}}{1-z_{2}e^{-x_{1}x_{2}}}\cdot\cdots\cdot\frac{e^{-a_{r}x_{1}\cdots x_{r}}}{1-z_{r}e^{-x_{1}\cdots x_{r}}}\\ & \quad\quad(x_{1}(1-x_{2}))^{s_{1}-1}(x_{1}x_{2}(1-x_{3}))^{s_{2}-1}\cdots(x_{1}\cdots x_{r-1}(1-x_{r}))^{s_{r-1}-1}(x_{1}\cdots x_{r})^{s_{r}-1}\\ & \quad\quad x_{1}^{r-1}x_{2}^{r-2}\cdots x_{r-1}dx_{1}\cdots dx_{r}\\ & =\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{\infty}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\\ & \quad\quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\frac{x_1e^{-a_{1}x_{1}}}{1-z_{1}e^{-x_{1}}}\cdot\frac{x_1x_2e^{-a_{2}x_{1}x_{2}}}{1-z_{2}e^{-x_{1}x_{2}}}\cdot\cdots\cdot\frac{x_1x_2\cdots x_re^{-a_{r}x_{1}\cdots x_{r}}}{1-z_{r}e^{-x_{1}\cdots x_{r}}}dx_{1}\cdots dx_{r}. \end{align*} Fix a small positive number $c$ with $0<c<\min_{1\le j\le r}\{|\lambda(z_j)|\}$.
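The Jacobian factor can be verified symbolically in a small case; the following sketch (assuming SymPy is available, and purely illustrative) checks $r=3$, where $u_j=x_1\cdots x_j(1-x_{j+1})$ and the factor is $x_1^{2}x_2$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)

# r = 3 case of u_j = x_1...x_j (1 - x_{j+1}), with x_4 = 0:
u1 = x1 * (1 - x2)
u2 = x1 * x2 * (1 - x3)
u3 = x1 * x2 * x3

# Jacobian matrix of (u1, u2, u3) with respect to (x1, x2, x3)
J = sp.Matrix([u1, u2, u3]).jacobian([x1, x2, x3])

# Its determinant equals x_1^{r-1} x_2^{r-2} ... x_{r-1} = x1^2 * x2
assert sp.simplify(J.det() - x1**2 * x2) == 0
```

The same computation goes through for any $r$, with the triangular structure of the substitution producing the product $x_1^{r-1}x_2^{r-2}\cdots x_{r-1}$.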
Put \begin{align*} X_{1} & :=\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\\ & \quad\quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\frac{x_1e^{-a_{1}x_{1}}}{1-z_{1}e^{-x_{1}}}\cdot\frac{x_1x_2e^{-a_{2}x_{1}x_{2}}}{1-z_{2}e^{-x_{1}x_{2}}}\cdot\cdots\cdot\frac{x_1x_2\cdots x_re^{-a_{r}x_{1}\cdots x_{r}}}{1-z_{r}e^{-x_{1}\cdots x_{r}}}dx_{1}\cdots dx_{r},\\ X_{2} & :=\int_{0}^{1}\cdots\int_{0}^{1}\int_{c}^{\infty}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\\ & \quad\quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\frac{x_1e^{-a_{1}x_{1}}}{1-z_{1}e^{-x_{1}}}\cdot\frac{x_1x_2e^{-a_{2}x_{1}x_{2}}}{1-z_{2}e^{-x_{1}x_{2}}}\cdot\cdots\cdot\frac{x_1x_2\cdots x_re^{-a_{r}x_{1}\cdots x_{r}}}{1-z_{r}e^{-x_{1}\cdots x_{r}}}dx_{1}\cdots dx_{r}. \end{align*} Then we have \begin{align*} & \Gamma(s_{1})\cdots\Gamma(s_{r})\zeta(s_{1},\ldots,s_{r};a_{1,}\ldots,a_{r};z_{1},\ldots,z_{r})\\ & =X_{1}+X_{2}. \end{align*} By Lemma \ref{apos}, we have \begin{align*} X_{1}& =\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\protect\\ & \quad\quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\protect\\ & \quad\quad\biggl(\sum_{n_{1}\ge0}(-1)^{n_1}B_{n_{1}}(a_1;z_1)\frac{x_{1}^{n_{1}}}{n_{1}!}\biggr)\cdot\biggl(\sum_{n_{2}\ge0}(-1)^{n_2}B_{n_{2}}(a_{2};z_{2})\frac{(x_{1}x_{2})^{n_{2}}}{n_{2}!}\biggr)\cdots\protect\\ & \quad\quad\biggl(\sum_{n_{r}\ge0}(-1)^{n_r}B_{n_{r}}(a_{r};z_{r})\frac{(x_{1}\cdots x_{r})^{n_{r}}}{n_{r}!}\biggr)dx_{1}\cdots dx_{r}. 
\end{align*} Thus we find \begin{align*} X_{1} & =\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\\ & \quad\quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\sum_{k\ge0}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}x_{1}^{k}x_{2}^{n_{2}+\cdots+n_{r}}\cdots x_{r}^{n_{r}}dx_{1}\cdots dx_{r}. \end{align*} For a non-negative integer $N$, put \begin{align*} Y_{1} & :=\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\times(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\sum_{0\le k\le N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}x_{1}^{k}x_{2}^{n_{2}+\cdots+n_{r}}\cdots x_{r}^{n_{r}}dx_{1}\cdots dx_{r},\\ Y_{2} & :=\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\times(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\sum_{k>N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}x_{1}^{k}x_{2}^{n_{2}+\cdots+n_{r}}\cdots x_{r}^{n_{r}}dx_{1}\cdots dx_{r}. \end{align*} Note that $X_{1}=Y_{1}+Y_{2}$.
By changing the order of sums and integrals of $Y_1$, we have \begin{align} Y_{1} & =\sum_{0\le k\le N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}\int_{0}^{c}x_{1}^{k+s_{1}+\cdots+s_{r}-r-1}dx_{1}\nonumber \\ & \quad\int_{0}^{1}x_{2}^{n_{2}+\cdots+n_{r}+s_{2}+\cdots+s_{r}-(r-1)-1}(1-x_{2}){}^{s_{1}-1}dx_{2}\nonumber \\ & \quad\int_{0}^{1}x_{3}^{n_{3}+\cdots+n_{r}+s_{3}+\cdots+s_{r}-(r-2)-1}(1-x_{3}){}^{s_{2}-1}dx_{3}\cdots\nonumber \\ & \quad\int_{0}^{1}x_{r}^{n_{r}+s_{r}-2}(1-x_{r}){}^{s_{r-1}-1}dx_{r}\nonumber \\ & =\sum_{0\le k\le N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}\label{Y1} \cdot\frac{c^{k+s_{1}+\cdots+s_{r}-r}}{k+s_{1}+\cdots+s_{r}-r} \\ & \quad\cdot\frac{\Gamma(n_{2}+\cdots+n_{r}+s_{2}+\cdots+s_{r}-(r-1))\Gamma(s_{1})}{\Gamma(n_{2}+\cdots+n_{r}+s_{1}+s_{2}+\cdots+s_{r}-(r-1))}\nonumber \\ & \quad\cdot\frac{\Gamma(n_{3}+\cdots+n_{r}+s_{3}+\cdots+s_{r}-(r-2))\Gamma(s_{2})}{\Gamma(n_{3}+\cdots+n_{r}+s_{2}+\cdots+s_{r}-(r-2))}\cdots\nonumber \\ & \quad\cdot\frac{\Gamma(n_{r}+s_{r}-1)\Gamma(s_{r-1})}{\Gamma(n_{r}+s_{r-1}+s_{r}-1)}.\nonumber \end{align} When $z_1,\ldots,z_r\neq1$, by Theorem \ref{a51}, $Y_{1}$, as a function of $(z_1,\ldots,z_r)$, can be continued to $$ \{(z_1,\ldots,z_r)\in\mathbb{C}^r\mid z_1\neq1,\ldots ,z_r\neq1\}. $$ Hence, $Y_{1}$ can be continued meromorphically to $\mathbb{C}^{2r}$ as a function of $(s_1,\ldots,s_r,z_1,\ldots,z_r)$. Generally, when $z_{p_1},\ldots,z_{p_{\xi}}\neq1$ and $z_{q_1}=\cdots=z_{q_{r-\xi}}=1$, $Y_{1}$ can be continued meromorphically to $\mathbb{C}^{r+\xi}$ as a function of $(s_1,\ldots,s_r,z_{p_1},\ldots,z_{p_{\xi}})$ since $B_n(a;1)$ is the Bernoulli polynomial. Now, we consider $Y_{2}$. 
By changing the exponent of $x_1$, we have \begin{align*} Y_{2} & =\int_{0}^{1}\cdots\int_{0}^{1}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r+N}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\cdot(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\quad\sum_{k>N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}x_{1}^{k-N-1}x_{2}^{n_{2}+\cdots+n_{r}}\cdots x_{r}^{n_{r}}dx_{1}\cdots dx_{r}. \end{align*} We consider the convergence of the sum of the last line. For $z_1,\ldots,z_r\neq1$, by Lemma \ref{estimate2}, we have\\ \begin{align} & \sum_{k>N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}\biggl|\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}\biggr|\cdot|x_{1}^{k-N-1}x_{2}^{n_{2}+\cdots+n_{r}}\cdots x_{r}^{n_{r}}|\label{2.3}\\ & \ll c^{-N-1}e^{(\log2)(|a_1|+\cdots+|a_r|)}\nonumber\\ &\quad\sum_{n_{1}\ge1}\left(\frac{\max\left\{|z_1|^{n_1-1}|z_1-1|^{-n_1},|z_1-1|^{-1}\right\}c^{n_1}}{(\log2)^{n_1}}\right)\cdots\sum_{n_{r}\ge1}\left(\frac{\max\left\{|z_r|^{n_r-1}|z_r-1|^{-n_r},|z_r-1|^{-1}\right\}c^{n_r}}{(\log2)^{n_r}}\right).\nonumber \end{align} Hence, by replacing $c>0$ with a smaller one if necessary, the above series converges in $[0,c]\times[0,1]^{r-1}$. Moreover, for a small $c>0$, the above series converges uniformly as a function of $(z_1,\ldots,z_r)$ in $$ T_c:=\{(z_1,\ldots,z_r)\in\mathbb{C}^r\mid |z_j-1|>2\sqrt{c},\ |z_j|<(\log2)/\sqrt{c}\quad(j=1,\ldots,r)\} $$ since $|z-1|>2\sqrt{c}$ and $|z|<(\log2)/\sqrt{c}$ yield $c|z|/(|z-1|\log2)<1/2$. Thus, series \eqref{2.3} can be continued to $T_c$. Similarly, when $z_{p_1},\ldots,z_{p_{\xi}}\neq1$ and $z_{q_1}=\cdots=z_{q_{r-\xi}}=1$, series \eqref{2.3} can be continued to some region contained in $(\mathbb{C}\setminus\{1\})^\xi$ as a function of $(z_{p_1},\ldots,z_{p_\xi})$ by Lemma \ref{estimate3}. 
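The region $T_c$ is chosen exactly so that the ratio $c|z|/(|z-1|\log 2)$, which controls the geometric-type tail of \eqref{2.3}, stays below $1/2$. A quick numerical probe of this inequality (purely illustrative; the value $c=0.01$ is an arbitrary choice):

```python
import math
import random

c = 0.01
R = math.log(2) / math.sqrt(c)   # |z| < (log 2)/sqrt(c)
d = 2 * math.sqrt(c)             # |z - 1| > 2*sqrt(c)

random.seed(0)
for _ in range(10_000):
    z = complex(random.uniform(-R, R), random.uniform(-R, R))
    if abs(z) < R and abs(z - 1) > d:
        # ratio governing the tail of the series (2.3)
        ratio = c * abs(z) / (abs(z - 1) * math.log(2))
        assert ratio < 0.5
```

The supremum of the ratio over $T_c$ is exactly $1/2$: $c|z|/(|z-1|\log 2)<c\cdot(\log 2/\sqrt{c})/(2\sqrt{c}\,\log 2)=1/2$, matching the computation in the text.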
Put \[ F(x_{1},\dots,x_{r}):=\sum_{k>N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}x_{1}^{k-N-1}x_{2}^{n_{2}+\cdots+n_{r}}\cdots x_{r}^{n_{r}}. \] From the above arguments, we see that $F(x_{1},\dots,x_{r})$ is a holomorphic function on some region containing $[0,c]\times[0,1]^{r-1}$. For $i_{2},\dots,i_{r}\in\{0,1/2\}$, put \begin{align*} \widetilde{Y}_{2}(i_{2},\dots,i_{r}) & :=\int_{0+i_{r}}^{1/2+i_{r}}\cdots\int_{0+i_{2}}^{1/2+i_{2}}\int_{0}^{c}x_{1}^{s_{1}+\cdots+s_{r}-r+N}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\\ & \quad\quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}F(x_{1},\dots,x_{r})dx_{1}\cdots dx_{r}. \end{align*} Note that \[ Y_{2}=\sum_{i_{2},\dots,i_{r}\in\{0,1/2\}}\widetilde{Y}_{2}(i_{2},\dots,i_{r}). \] If $i_{j}=0$, then by repeated integration by parts, we have \begin{align} & \int_{0}^{1/2}x_{j}^{s_{j}+\cdots+s_{r}-(r-j)-1}(1-x_{j}){}^{s_{j-1}-1}G(x_{1},\dots,x_{r})dx_{j}\nonumber \\ & =\sum_{l=0}^{n}(-1)^{l}\frac{(1/2)^{s_{j}+\cdots+s_{r}-(r-j)+l}}{(s_{j}+\cdots+s_{r}-(r-j))_{l+1}}\cdot\left[\frac{d^{l}}{dx_{j}^{l}}((1-x_{j}){}^{s_{j-1}-1}G(x_{1},\dots,x_{r}))\right]_{x_{j}=1/2}\label{integration by part1}\\ & \quad+(-1)^{n+1}\int_{0}^{1/2}\frac{x_{j}^{s_{j}+\cdots+s_{r}-(r-j)+n}}{(s_{j}+\cdots+s_{r}-(r-j))_{n+1}}\cdot\frac{d^{n+1}}{dx_{j}^{n+1}}((1-x_{j}){}^{s_{j-1}-1}G(x_{1},\dots,x_{r}))dx_{j},\nonumber \end{align} where $G(x_{1},\dots,x_{r})$ is a holomorphic function on some region containing $[0,c]\times[0,1]^{r-1}$. The last integral converges if $\Re(s_{j}+\cdots+s_{r}-(r-j)+n)>-1$. The right-hand side is meromorphic on the same region and the possible poles are simple poles located on $s_{j}+\cdots+s_{r}=r-j-p\quad(p=0,\dots,n)$.
Similarly, if $i_{j}=1/2$, we also have \begin{align} & \int_{1/2}^{1}x_{j}^{s_{j}+\cdots+s_{r}-(r-j)-1}(1-x_{j}){}^{s_{j-1}-1}G(x_{1},\dots,x_{r})dx_{j}\nonumber \\ & =\sum_{l=0}^{n}\frac{(1/2)^{s_{j-1}+l}}{(s_{j-1})_{l+1}}\cdot\left[\frac{d^{l}}{dx_{j}^{l}}(x_{j}^{s_{j}+\cdots+s_{r}-(r-j)-1}G(x_{1},\dots,x_{r}))\right]_{x_{j}=1/2}\label{integration by part2}\\ & \quad+\int_{1/2}^{1}\frac{(1-x_{j})^{s_{j-1}+n}}{(s_{j-1})_{n+1}}\cdot\frac{d^{n+1}}{dx_{j}^{n+1}}((1-x_{j}){}^{s_{j-1}-1}G(x_{1},\dots,x_{r}))dx_{j}.\nonumber \end{align} The last integral converges if $\Re(s_{j-1}+n)>-1$. The right-hand side is meromorphic on the same region and the possible poles are simple poles at $s_{j-1}=-p\quad(p=0,\dots,n)$. Thus, for $i_{2},\dots,i_{r}\in\{0,1/2\}$, we see that the function $\widetilde{Y}_{2}(i_{2},\dots,i_{r})$ can be continued meromorphically to the region \begin{align*} \{(s_{1},\dots,s_{r},z_{p_1},\ldots,z_{p_{\xi}})\mid&\Re(s_{j}+\cdots+s_{r}-(r-j)+n)>-1,\Re(s_{j-1}+n)>-1\quad(j=2,\dots,r),\\ &\Re(s_{1}+\cdots+s_{r}-r+N)>-1,\\ &|\lambda(z_{p_j})|>c,\ |z_{p_j}-1|>2\sqrt{c},\ |z_{p_j}|<(\log2)/\sqrt{c}\quad(j=1,\dots,\xi)\} \end{align*} when $z_{p_1},\ldots,z_{p_{\xi}}\neq1$ and $z_{q_1}=\cdots=z_{q_{r-\xi}}=1$. For $i_{2},\dots,i_{r}\in\{0,1/2\}$, let \[ U_{j}^{p}=U_{j}^{p}(i_{2},\dots,i_{r}):=\begin{cases} \{(s_{1},\dots,s_{r})\mid s_{j}+\cdots+s_{r}=r-j-p\} & \text{if }i_{j}=0,\\ \{(s_{1},\dots,s_{r})\mid s_{j-1}=-p\} & \text{if }i_{j}=1/2. \end{cases} \] The possible poles as a function of $(s_{1},\dots,s_{r})$ are simple poles located on the region \[ \bigcup_{p=0}^{n}\bigcup_{j=2}^{r}U_{j}^p. \] Now we consider $X_{2}$. When $z_{p_1},\ldots,z_{p_{\xi}}\neq1$ and $z_{q_1}=\cdots=z_{q_{r-\xi}}=1$, we can easily check that $X_2$ can be continued to $(\mathbb{C}\setminus[1,\infty))^\xi$ as a function of $(z_{p_1},\ldots,z_{p_\xi})$. 
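The one-step case $n=0$ of identity \eqref{integration by part1} reduces, writing $\alpha$ for $s_{j}+\cdots+s_{r}-(r-j)$ and $f$ for $(1-x_{j})^{s_{j-1}-1}G$, to $\int_0^{1/2}x^{\alpha-1}f(x)\,dx=(1/2)^{\alpha}f(1/2)/\alpha-(1/\alpha)\int_0^{1/2}x^{\alpha}f'(x)\,dx$. A numerical confirmation for an illustrative choice of $\alpha$ and $f$ (assuming mpmath is available):

```python
from mpmath import mp, mpf, quad

mp.dps = 30

alpha = mpf('1.5')            # stands in for s_j + ... + s_r - (r-j)
f  = lambda t: (1 - t)**2     # stands in for (1 - x_j)^{s_{j-1}-1} G
df = lambda t: -2 * (1 - t)   # derivative of f

half = mpf('0.5')
lhs = quad(lambda t: t**(alpha - 1) * f(t), [0, half])
rhs = (half**alpha * f(half) / alpha
       - quad(lambda t: t**alpha * df(t), [0, half]) / alpha)

assert abs(lhs - rhs) < mpf('1e-20')
```

Iterating the same step $n$ more times produces the full sum over $l$ in \eqref{integration by part1}, each step trading one power of decay at $x_j=0$ for one factor in the Pochhammer symbol $(\alpha)_{l+1}$.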
For $i_{2},\dots,i_{r}\in\{0,1/2\}$, put \begin{align*} \widetilde{X}_{2}(i_{2},\dots,i_{r}) & :=\int_{0+i_{r}}^{1/2+i_{r}}\cdots\int_{0+i_{2}}^{1/2+i_{2}}\int_{c}^{\infty}x_{1}^{s_{1}+\cdots+s_{r}-r-1}x_{2}^{s_{2}+\cdots+s_{r}-(r-1)-1}\cdots x_{r}^{s_{r}-2}\\ & \quad(1-x_{2}){}^{s_{1}-1}\cdots(1-x_{r}){}^{s_{r-1}-1}\\ & \quad\frac{x_1e^{-a_{1}x_{1}}}{1-z_{1}e^{-x_{1}}}\cdot\frac{x_1x_2e^{-a_{2}x_{1}x_{2}}}{1-z_{2}e^{-x_{1}x_{2}}}\cdot\cdots\cdot\frac{x_1x_2\cdots x_re^{-a_{r}x_{1}\cdots x_{r}}}{1-z_{r}e^{-x_{1}\cdots x_{r}}}dx_{1}\cdots dx_{r}. \end{align*} Note that \[ X_{2}=\sum_{i_{2},\dots,i_{r}\in\{0,1/2\}}\widetilde{X}_{2}(i_{2},\dots,i_{r}). \] Now we redefine \[ F(x_{1},\dots,x_{r}):=\frac{x_1e^{-a_{1}x_{1}}}{1-z_{1}e^{-x_{1}}}\cdot\frac{x_1x_2e^{-a_{2}x_{1}x_{2}}}{1-z_{2}e^{-x_{1}x_{2}}}\cdot\cdots\cdot\frac{x_1x_2\cdots x_re^{-a_{r}x_{1}\cdots x_{r}}}{1-z_{r}e^{-x_{1}\cdots x_{r}}}. \] Then for $z_1,\ldots,z_r\in\mathbb{C}\setminus(1,\infty)$, we see that the function $F(x_{1},\dots,x_{r})$ is a holomorphic function on some region containing $[c,\infty)\times[0,1]^{r-1}$. Since $\Re(a_1+\cdots+a_j)>0$ for all $j=1,\ldots,r$ and \begin{align*} &-a_{1}x_{1}-a_{2}x_{1}x_{2}-\cdots-a_{r}x_{1}\cdots x_{r}\\ &=-a_1x_1(1-x_2)-\cdots-(a_1+\cdots+a_{r-1})x_1\cdots x_{r-1}(1-x_r)-(a_1+\cdots+a_{r})x_1\cdots x_{r}, \end{align*} the functions \[ \left|\Bigl(\frac{d}{dx_{2}}\Bigr)^{n_{2}}\cdots\Bigl(\frac{d}{dx_{r}}\Bigr)^{n_{r}}F(x_{1},\dots,x_{r})\right| \] decrease rapidly on $[c,\infty)\times[0,1]^{r-1}$ as $x_{1}$ tends to $\infty$.
By arguments similar to those for \eqref{integration by part1} and \eqref{integration by part2}, we see that $\widetilde{X}_{2}(i_{2},\dots,i_{r})$ is meromorphic on \begin{align*} \{(s_{1},\dots,s_{r},z_{p_1},\ldots,z_{p_{\xi}})\mid&\Re(s_{j}+\cdots+s_{r}-(r-j)+n)>-1,\Re(s_{j-1}+n)>-1\quad(j=2,\dots,r),\\ &z_{p_j}\in\mathbb{C}\setminus[1,\infty)\quad(j=1,\dots,\xi)\} \end{align*} when $z_{p_1},\ldots,z_{p_{\xi}}\neq1$ and $z_{q_1}=\cdots=z_{q_{r-\xi}}=1$. The possible poles as a function of $(s_{1},\dots,s_{r})$ are simple poles located on the region \[ \bigcup_{p=0}^{n}\bigcup_{j=2}^{r}U_{j}^p. \] \section{Proof of Theorem \ref{main}} In the previous section, we decomposed the Hurwitz-Lerch multiple zeta function as follows: \begin{align*} & \zeta(s_{1},\ldots,s_{r};a_{1},\ldots,a_{r};z_{1},\ldots,z_{r})\\ & =\frac{1}{\Gamma(s_{1})\cdots\Gamma(s_{r})}(X_{1}+X_{2})\\ & =\frac{1}{\Gamma(s_{1})\cdots\Gamma(s_{r})}(Y_{1}+Y_{2}+X_{2})\\ & =\frac{1}{\Gamma(s_{1})\cdots\Gamma(s_{r})}\biggl(Y_{1}+\sum_{i_{2},\dots,i_{r}\in\{0,1/2\}}\widetilde{Y}_{2}(i_{2},\dots,i_{r})+\sum_{i_{2},\dots,i_{r}\in\{0,1/2\}}\widetilde{X}_{2}(i_{2},\dots,i_{r})\biggr). \end{align*} \subsection{Calculations for $Y_1$} We first consider $Y_{1}$. From \eqref{Y1}, we have \begin{align*} \frac{Y_{1}}{\Gamma(s_{1})\cdots\Gamma(s_{r})} & =\sum_{0\le k\le N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}\\ & \quad\cdot\frac{c^{k+s_{1}+\cdots+s_{r}-r}}{k+s_{1}+\cdots+s_{r}-r}\cdot\frac{1}{\Gamma(s_{r})}\\ & \quad\cdot\frac{\Gamma(n_{2}+\cdots+n_{r}+s_{2}+\cdots+s_{r}-(r-1))}{\Gamma(n_{2}+\cdots+n_{r}+s_{1}+s_{2}+\cdots+s_{r}-(r-1))}\\ & \quad\cdot\frac{\Gamma(n_{3}+\cdots+n_{r}+s_{3}+\cdots+s_{r}-(r-2))}{\Gamma(n_{3}+\cdots+n_{r}+s_{2}+\cdots+s_{r}-(r-2))}\cdots\\ & \quad\cdot\frac{\Gamma(n_{r}+s_{r}-1)}{\Gamma(n_{r}+s_{r-1}+s_{r}-1)}. \end{align*} For $i=1,\dots,r$, let $s_{i}=-l_{i}+\epsilon_{i}$.
Putting \begin{align*} H(l_{1},\dots,l_{r};n_{1},\dots,n_{r};\epsilon_{1},\dots,\epsilon_{r}) & :=\frac{c^{k-l(1,r)+\epsilon(1,r)-r}}{k-l(1,r)+\epsilon(1,r)-r}\cdot\frac{1}{\Gamma(-l_{r}+\epsilon_{r})}\\ & \quad\cdot\frac{\Gamma(n(2,r)-l(2,r)+\epsilon(2,r)-(r-1))}{\Gamma(n(2,r)-l(1,r)+\epsilon(1,r)-(r-1))}\\ & \quad\cdot\frac{\Gamma(n(3,r)-l(3,r)+\epsilon(3,r)-(r-2))}{\Gamma(n(3,r)-l(2,r)+\epsilon(2,r)-(r-2))}\\ & \quad\cdots\\ & \quad\cdot\frac{\Gamma(n_{r}-l_{r}+\epsilon_{r}-1)}{\Gamma(n_{r}-l_{r-1}-l_{r}+\epsilon_{r-1}+\epsilon_{r}-1)}, \end{align*} we have \begin{align} &\frac{Y_{1}}{\Gamma(s_{1})\cdots\Gamma(s_{r})} \label{eq:Y1}\\ &=\sum_{0\le k\le N}\sum_{n_{1}+\cdots+n_{r}=k,n_{1},\dots,n_{r}\ge0}(-1)^k\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}H(l_{1},\dots,l_{r};n_{1},\dots,n_{r};\epsilon_{1},\dots,\epsilon_{r}).\nonumber \end{align} Since \begin{align*} \frac{\Gamma(n(i+1,r)-l(i+1,r)+\epsilon(i+1,r)-(r-i))}{\Gamma(n(i+1,r)-l(i,r)+\epsilon(i,r)-(r-i))} & =\frac{(\epsilon(i+1,r))_{n(i+1,r)-l(i+1,r)-(r-i)}}{(\epsilon(i,r))_{n(i+1,r)-l(i,r)-(r-i)}}\cdot\frac{\Gamma(\epsilon(i+1,r))}{\Gamma(\epsilon(i,r))}, \end{align*} we have \begin{align*} H(l_{1},\dots,l_{r};n_{1},\dots,n_{r};\epsilon_{1},\dots,\epsilon_{r}) & =\frac{c^{k-l(1,r)+\epsilon(1,r)-r}}{k-l(1,r)+\epsilon(1,r)-r}\cdot\frac{1}{(\epsilon_{r})_{-l_{r}}}\cdot\frac{1}{\Gamma(\epsilon(1,r))}\\ & \quad\cdot\frac{(\epsilon(2,r))_{n(2,r)-l(2,r)-(r-1)}}{(\epsilon(1,r))_{n(2,r)-l(1,r)-(r-1)}}\\ & \quad\cdot\frac{(\epsilon(3,r))_{n(3,r)-l(3,r)-(r-2)}}{(\epsilon(2,r))_{n(3,r)-l(2,r)-(r-2)}}\\ & \quad\cdots\\ & \quad\cdot\frac{(\epsilon(r,r))_{n(r,r)-l(r,r)-1}}{(\epsilon(r-1,r))_{n(r,r)-l(r-1,r)-1}}, \end{align*} where $(\epsilon)_{n}:=\Gamma(n+\epsilon)/\Gamma(\epsilon)$. 
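The Pochhammer symbols just introduced have simple leading behavior as $\epsilon\to0$, which drives the whole computation: $(\epsilon)_{-n}$ tends to $(-1)^{n}/n!$, while $(\epsilon)_{n}$ vanishes to first order with coefficient $(n-1)!$. A small symbolic check for $n=3$ (assuming SymPy; illustrative only):

```python
import sympy as sp

eps = sp.symbols('epsilon')

# (eps)_{-n} = 1/((eps-1)(eps-2)...(eps-n)); take n = 3:
poch_neg = 1 / ((eps - 1) * (eps - 2) * (eps - 3))
# leading term as eps -> 0 is (-1)^n / n! = -1/6
assert sp.limit(poch_neg, eps, 0) == sp.Rational(-1, 6)

# (eps)_n = eps (eps+1) ... (eps+n-1); again n = 3:
poch_pos = eps * (eps + 1) * (eps + 2)
# expansion is eps*(n-1)! + O(eps^2) = 2*eps + O(eps^2)
assert sp.expand(poch_pos).coeff(eps, 0) == 0
assert sp.expand(poch_pos).coeff(eps, 1) == 2
```

These are precisely the two expansions invoked in the estimates that follow.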
Note that \begin{align*} (\epsilon)_{-n} & =\frac{1}{(\epsilon-1)\cdots(\epsilon-n)}=\frac{(-1)^{n}}{n!}+O(|\epsilon|)\quad(n\ge0),\\ (\epsilon)_{n} & =(\epsilon)\cdots(\epsilon+n-1)=\epsilon(n-1)!+O(|\epsilon|^{2})\quad(n>0). \end{align*} Let $N:=l(1,r)+r$. For $k<N$, we have \begin{align*} \frac{c^{k-l(1,r)+\epsilon(1,r)-r}}{k-l(1,r)+\epsilon(1,r)-r}\cdot\frac{1}{(\epsilon_{r})_{-l_{r}}}\cdot\frac{1}{\Gamma(\epsilon(1,r))} & =\frac{c^{k+\epsilon(1,r)-N}}{k+\epsilon(1,r)-N}\cdot\frac{1}{(\epsilon_{r})_{-l_{r}}}\cdot\frac{1}{\Gamma(\epsilon(1,r))}\\ & =O(|\epsilon(1,r)|) . \end{align*} If $k=N$, we have \begin{align*} \frac{c^{k-l(1,r)+\epsilon(1,r)-r}}{k-l(1,r)+\epsilon(1,r)-r}\cdot\frac{1}{(\epsilon_{r})_{-l_{r}}}\cdot\frac{1}{\Gamma(\epsilon(1,r))} & =\frac{c^{\epsilon(1,r)}}{(\epsilon_{r})_{-l_{r}}}\cdot\frac{1}{\Gamma(\epsilon(1,r)+1)}\\ & =\displaystyle(-1)^{l_{r}}l_{r}!+O(|\epsilon_{r}|)+O(|\epsilon(1,r)|). \end{align*} Note that $D:=n(j+1,r)-l(j,r)-(r-j)\le n(j+1,r)-l(j+1,r)-(r-j)=:U$ for $j=1,\dots,r-1$. If $D\le U\le0$, we have \begin{align*} \frac{(\epsilon(j+1,r))_{n(j+1,r)-l(j+1,r)-(r-j)}}{(\epsilon(j,r))_{n(j+1,r)-l(j,r)-(r-j)}} & =\frac{(-1)^{l_{j}}(-(n(j+1,r)-l(j,r)-(r-j)))!}{(-(n(j+1,r)-l(j+1,r)-(r-j)))!}+O(|\epsilon(j+1,r)|)+O(|\epsilon(j,r)|). \end{align*} If $D\le0<U$, we have \begin{align*} \frac{(\epsilon(j+1,r))_{n(j+1,r)-l(j+1,r)-(r-j)}}{(\epsilon(j,r))_{n(j+1,r)-l(j,r)-(r-j)}} & =O(|\epsilon(j+1,r)|). \end{align*} If $0<D\le U$, since $\left|\varepsilon_{k}/\varepsilon(j,r)\right|\ll1$ for $j=1,\ldots,r$ and $k=j,\ldots,r$, we have \begin{align*} &\frac{(\epsilon(j+1,r))_{n(j+1,r)-l(j+1,r)-(r-j)}}{(\epsilon(j,r))_{n(j+1,r)-l(j,r)-(r-j)}} \\ & =\frac{\epsilon(j+1,r)}{\epsilon(j,r)}\left(\frac{(n(j+1,r)-l(j+1,r)-(r-j)-1)!}{(n(j+1,r)-l(j,r)-(r-j)-1)!}+O(|\epsilon(j+1,r)|)+O(|\epsilon(j,r)|)\right)\\ & =\frac{\epsilon(j+1,r)}{\epsilon(j,r)}\cdot\frac{(n(j+1,r)-l(j+1,r)-(r-j)-1)!}{(n(j+1,r)-l(j,r)-(r-j)-1)!}+O(|\epsilon(j+1,r)|). 
\end{align*} Therefore we have \begin{align*} &\frac{Y_{1}}{\Gamma(s_{1})\cdots\Gamma(s_{r})} \\ & =(-1)^{l(1,r)+r}\sum_{d_{1},\dots,d_{r-1}\in\{0,1\}}\sum_{(n_{1},\dots,n_{r})\in S^{(d_{1},\dots,d_{r-1})}}\frac{B_{n_{1}}(a_1;z_1)\cdots B_{n_{r}}(a_{r};z_{r})}{n_{1}!\cdots n_{r}!}h^{(d_{1},\dots,d_{r-1})}(n_{1},\dots,n_{r})\\ &\quad+\sum_{j=1}^rO(|\epsilon(j,r)|). \end{align*} \subsection{Calculations for $\widetilde{Y_{2}}$ and $\widetilde{X_{2}}$ } Recall that all possible simple poles of $\widetilde{Y}_{2}(i_{2},\dots,i_{r})$ and $\widetilde{X}_{2}(i_{2},\dots,i_{r})$ are located on the region \[ \bigcup_{p=0}^{n}\bigcup_{j=2}^{r}U_{j}^p, \] where \[ U_{j}^p=\begin{cases} \{(s_{1},\dots,s_{r})\mid s_{j}+\cdots+s_{r}=r-j-p\} & \text{if }i_{j}=0,\\ \{(s_{1},\dots,s_{r})\mid s_{j-1}=-p\} & \text{if }i_{j}=1/2. \end{cases} \] For $(s_{1},\dots,s_{r})=(-l_1+\epsilon_1,\dots,-l_r+\epsilon_r)$, we have \[ \widetilde{Y}_{2}(i_{2},\dots,i_{r})\times(s(2)-q(2))\times\cdots\times(s(r)-q(r))=O(1) \] where \[ s(j)-q(j):=\begin{cases} s_{j}+\cdots+s_{r}+(l_{j}+\cdots+l_{r}) & \text{if }i_{j}=0,\\ s_{j-1}+l_{j-1} & \text{if }i_{j}=1/2. \end{cases} \] Thus we have \begin{align*} \frac{\widetilde{Y}_{2}(i_{2},\dots,i_{r})}{\Gamma(s_{1})\cdots\Gamma(s_{r})} & =O\Biggl(|\epsilon_1\cdots\epsilon_r|\cdot\prod_{\substack{2\le j\le r\\i_{j}=0}}\frac{1}{|\epsilon(j,r)|}\cdot\prod_{\substack{2\le j\le r\\i_{j}=1/2}}\frac{1}{|\epsilon_{j-1}|}\Biggr)\\ & =O\Biggl(|\epsilon_r|\cdot\prod_{\substack{2\le j\le r\\i_{j}=0}}\frac{|\epsilon_{j-1}|}{|\epsilon(j,r)|}\Biggr). \end{align*} Let $j_1,\ldots,j_t$ be all indices satisfying $i_{j_1},\ldots,i_{j_t}=0$ and $j_1<\cdots<j_t$. 
Then we have \begin{align*} \frac{\widetilde{Y}_{2}(i_{2},\dots,i_{r})}{\Gamma(s_{1})\cdots\Gamma(s_{r})} &=O\left(|\epsilon_r|\cdot\prod_{u=1}^t\frac{|\epsilon_{j_u-1}|}{|\epsilon(j_u,r)|}\right)\\ &=O\left(|\epsilon_{j_1-1}|\cdot\prod_{u=1}^{t-1}\frac{|\epsilon_{j_{u+1}-1}|}{|\epsilon(j_u,r)|}\cdot\frac{|\epsilon_{r}|}{|\epsilon(j_t,r)|}\right)\\ &=O\left(|\epsilon_{j_1-1}|\right) \end{align*} since $\left|\varepsilon_{k}/\varepsilon(j,r)\right|\ll1$ for $j=1,\ldots,r$ and $k=j,\ldots,r$. In a similar way, we can also estimate \begin{align*} \frac{\widetilde{X}_{2}(i_{2},\dots,i_{r})}{\Gamma(s_{1})\cdots\Gamma(s_{r})} &=O\left(|\epsilon_{j_1-1}|\right). \end{align*} \section{Appendix} Here, we shall give some examples. Put $\bm{a}=(a_1,\ldots,a_r)$, $\bm{z}=(z_1,\ldots,z_r)$, and \[ B_{(n_1,\ldots,n_r)}(\bm{a};\bm{z}):=\prod_{j=1}^rB_{n_j}(a_j;z_j) \] for simplicity. \begin{ex} When $r=2$, we have \begin{align*} \zeta(&-1+\epsilon_1,\epsilon_2;a_1,a_2;z_1,z_2) \\ &\qquad=\frac{1}{2} B_{(2,1)}(\bm{a};\bm{z}) +\frac{1}{3} B_{(3,0)}(\bm{a};\bm{z})-\frac{1}{6} B_{(0,3)}(\bm{a};\bm{z}) \frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}+\sum_{j=1}^{2}O(|\epsilon_{j}|), \\ \zeta(&\epsilon_1,-1+\epsilon_2;a_1,a_2;z_1,z_2) \\ &\qquad=\frac{1}{2} B_{(2,1)}(\bm{a};\bm{z}) +\frac{1}{2} B_{(1,2)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(3,0)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,3)}(\bm{a};\bm{z}) \frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}+\sum_{j=1}^{2}O(|\epsilon_{j}|), \\ \zeta(&-1+\epsilon_1,-1+\epsilon_2;a_1,a_2;z_1,z_2) \\ &\qquad=\frac{1}{4} B_{(2,2)}(\bm{a};\bm{z}) +\frac{1}{3} B_{(3,1)}(\bm{a};\bm{z}) +\frac{1}{8} B_{(4,0)}(\bm{a};\bm{z}) -\frac{1}{24} B_{(0,4)}(\bm{a};\bm{z}) \frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}+\sum_{j=1}^{2}O(|\epsilon_{j}|), \end{align*} where the Apostol-Bernoulli polynomials for $0\le n\le 4$ are as follows: \begin{align*} &B_{0}(a;z) =0, \qquad B_{1}(a;z) =\frac{1}{z-1}, \qquad B_{2}(a;z) =\frac{2}{z-1}a-\frac{2 z}{(z-1)^2},\\ &B_{3}(a;z) 
=\frac{3}{z-1}a^2-\frac{6 z}{(z-1)^2}a+\frac{3 z(z+1)}{(z-1)^3},\\ & B_{4}(a;z) =\frac{4}{z-1}a^3-\frac{12 z }{(z-1)^2}a^2+\frac{12 z(z+1)}{(z-1)^3}a-\frac{4 z(z^2+4z+ 1)}{(z-1)^4}. \end{align*} \end{ex} \begin{ex}When $r=3$, we have \begin{align*} &\zeta(\epsilon_1,\epsilon_2,\epsilon_3;a_1,a_2,a_3;z_1,z_2,z_3) \\ &=-B_{(1,1,1)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(2,0,1)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(2,1,0)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(1,2,0)}(\bm{a};\bm{z}) -\frac{1}{6} B_{(3,0,0)}(\bm{a};\bm{z})\\ &\quad -\frac{1}{2} B_{(1,0,2)}(\bm{a};\bm{z}) \frac{\epsilon_{3}}{\epsilon_{2}+\epsilon_{3}} -\biggl(\frac{1}{2} B_{(0,2,1)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,3,0)}(\bm{a};\bm{z}) \biggr)\frac{\epsilon_{2}+\epsilon_3}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}}\\ &\quad -\biggl(\frac{1}{2} B_{(0,1,2)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,0,3)}(\bm{a};\bm{z}) \biggr)\frac{\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}}+\sum_{j=1}^{3}O(|\epsilon_{j}|), \end{align*} \begin{align*} &\zeta(-1+\epsilon_1,\epsilon_2,\epsilon_3;a_1,a_2,a_3;z_1,z_2,z_3)\\ &=-\frac{1}{2} B_{(2,1,1)}(\bm{a};\bm{z}) -\frac{1}{4} B_{(2,2,0)}(\bm{a};\bm{z}) -\frac{1}{3} B_{(3,1,0)}(\bm{a};\bm{z}) -\frac{1}{3} B_{(3,0,1)}(\bm{a};\bm{z}) -\frac{1}{8} B_{(4,0,0)}(\bm{a};\bm{z}) \\ &\quad -\frac{1}{4} B_{(2,0,2)}(\bm{a};\bm{z}) \frac{\epsilon_{3}}{\epsilon_{2}+\epsilon_{3}} +\biggl(\frac{1}{6} B_{(0,3,1)}(\bm{a};\bm{z}) +\frac{1}{24} B_{(0,4,0)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{2}+\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}} \\ &\quad +\biggl(\frac{1}{4} B_{(0,2,2)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,1,3)}(\bm{a};\bm{z}) +\frac{1}{24} B_{(0,0,4)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}}+\sum_{j=1}^{3}O(|\epsilon_{j}|) , \end{align*} \begin{align*} &\zeta(\epsilon_1,-1+\epsilon_2,\epsilon_3;a_1,a_2,a_3;z_1,z_2,z_3) \\ &=-\frac{1}{2} B_{(2,1,1)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(2,2,0)}(\bm{a};\bm{z}) -\frac{1}{3} B_{(3,1,0)}(\bm{a};\bm{z}) 
-\frac{1}{6} B_{(3,0,1)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(1,2,1)}(\bm{a};\bm{z}) \\ &\quad -\frac{1}{3} B_{(1,3,0)}(\bm{a};\bm{z}) -\frac{1}{12} B_{(4,0,0)}(\bm{a};\bm{z}) \\ &\quad +\frac{1}{6} B_{(1,0,3)}(\bm{a};\bm{z}) \frac{\epsilon_{3}}{\epsilon_{2}+\epsilon_{3}} -\biggl(\frac{1}{6} B_{(0,3,1)}(\bm{a};\bm{z}) + \frac{1}{12} B_{(0,4,0)}(\bm{a};\bm{z}) \biggr)\frac{\epsilon_{2}+\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}} \\ &\quad +\biggl(\frac{1}{6} B_{(0,1,3)}(\bm{a};\bm{z}) +\frac{1}{12} B_{(0,0,4)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}}+\sum_{j=1}^{3}O(|\epsilon_{j}|), \end{align*} \begin{align*} &\zeta(\epsilon_1,\epsilon_2,-1+\epsilon_3;a_1,a_2,a_3;z_1,z_2,z_3) \\ &=-\frac{1}{2} B_{(2,1,1)}(\bm{a};\bm{z}) -\frac{1}{4} B_{(2,2,0)}(\bm{a};\bm{z}) -\frac{1}{4} B_{(2,0,2)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(1,2,1)}(\bm{a};\bm{z}) -\frac{1}{2} B_{(1,1,2)}(\bm{a};\bm{z}) \\ &\quad -\frac{1}{6} B_{(3,1,0)}(\bm{a};\bm{z}) -\frac{1}{6} B_{(3,0,1)}(\bm{a};\bm{z}) -\frac{1}{6} B_{(1,3,0)}(\bm{a};\bm{z}) -\frac{1}{24} B_{(4,0,0)}(\bm{a};\bm{z}) \\ &\quad -\frac{1}{6} B_{(1,0,3)}(\bm{a};\bm{z}) \frac{\epsilon_{3}}{\epsilon_{2}+\epsilon_{3}} \\ &\quad -\biggl(\frac{1}{4} B_{(0,2,2)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,3,1)}(\bm{a};\bm{z}) +\frac{1}{24} B_{(0,4,0)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{2}+\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}} \\ &\quad -\biggl(\frac{1}{6} B_{(0,1,3)}(\bm{a};\bm{z}) +\frac{1}{24} B_{(0,0,4)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{3}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}} +\sum_{j=1}^{3}O(|\epsilon_{j}|). 
\end{align*} \end{ex} \begin{ex}When $r=4$, we have \begin{align*} &\zeta(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4;a_1,a_2,a_3,a_4;z_1,z_2,z_3,z_4) \\ &=B_{(1,1,1,1)}(\bm{a};\bm{z}) +\frac{1}{2} B_{(2,1,1,0)}(\bm{a};\bm{z}) +\frac{1}{2} B_{(2,1,0,1)}(\bm{a};\bm{z}) +\frac{1}{2} B_{(2,0,1,1)}(\bm{a};\bm{z}) +\frac{1}{2} B_{(1,2,1,0)}(\bm{a};\bm{z}) \\ &\quad +\frac{1}{2} B_{(1,2,0,1)}(\bm{a};\bm{z}) +\frac{1}{4} B_{(2,2,0,0)}(\bm{a};\bm{z}) +\frac{1}{2} B_{(1,1,2,0)}(\bm{a};\bm{z}) +\frac{1}{4} B_{(2,0,2,0)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(3,1,0,0)}(\bm{a};\bm{z})\\ &\quad +\frac{1}{6} B_{(3,0,1,0)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(3,0,0,1)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(1,3,0,0)}(\bm{a};\bm{z}) +\frac{1}{24} B_{(4,0,0,0)}(\bm{a};\bm{z}) \\ &\quad +\biggl( \frac{1}{2} B_{(1,1,0,2)}(\bm{a};\bm{z}) +\frac{1}{4} B_{(2,0,0,2)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{4}}{\epsilon_{3}+\epsilon_{4}} \\ &\quad+\biggl( \frac{1}{2} B_{(1,0,2,1)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(1,0,3,0)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{3}+\epsilon_{4}}{\epsilon_{2}+\epsilon_{3}+\epsilon_{4}} \\ &\quad+\biggl( \frac{1}{2} B_{(1,0,1,2)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(1,0,0,3)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{4}}{\epsilon_{2}+\epsilon_{3}+\epsilon_{4}} \\ &\quad +\biggl( \frac{1}{2} B_{(0,2,1,1)}(\bm{a};\bm{z}) +\frac{1}{4} B_{(0,2,2,0)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,3,0,1)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,3,1,0)}(\bm{a};\bm{z}) \\ &\qquad\qquad +\frac{1}{24} B_{(0,4,0,0)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{2}+\epsilon_{3}+\epsilon_{4}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}+\epsilon_{4}} \\ &\quad +\biggl( \frac{1}{2} B_{(0,1,2,1)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,1,3,0)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,0,3,1)}(\bm{a};\bm{z}) +\frac{1}{24} B_{(0,0,4,0)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{3}+\epsilon_{4}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}+\epsilon_{4}} \\ &\quad +\biggl( \frac{1}{2} B_{(0,1,1,2)}(\bm{a};\bm{z}) +\frac{1}{4} B_{(0,0,2,2)}(\bm{a};\bm{z}) 
+\frac{1}{6} B_{(0,1,0,3)}(\bm{a};\bm{z}) +\frac{1}{6} B_{(0,0,1,3)}(\bm{a};\bm{z}) \\ &\qquad\qquad +\frac{1}{24} B_{(0,0,0,4)}(\bm{a};\bm{z}) \biggr) \frac{\epsilon_{4}}{\epsilon_{1}+\epsilon_{2}+\epsilon_{3}+\epsilon_{4}} \\ &\quad +\frac{1}{4} B_{(0,2,0,2)}(\bm{a};\bm{z}) \frac{ \epsilon_{4}(\epsilon_{2}+\epsilon_{3} +\epsilon_{4}) }{(\epsilon_{3}+\epsilon_{4}) (\epsilon_{1}+\epsilon_{2}+\epsilon_{3}+\epsilon_{4})}. \end{align*} \end{ex} \section*{Acknowledgements} This work was supported by JSPS KAKENHI Grant Number JP19K14511.
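The Apostol--Bernoulli polynomials used throughout the examples above can also be checked numerically. The sketch below is a minimal plain-Python verification, assuming the standard generating function $t e^{at}/(z e^{t}-1)=\sum_{n\ge 0}B_n(a;z)\,t^n/n!$ (an assumption consistent with $B_0=0$ and $B_1=1/(z-1)$ as listed in the first example); it expands the series with exact rational arithmetic and multiplies the coefficients by $n!$.

```python
from fractions import Fraction as F
from math import factorial

def series_div(num, den, N):
    """Coefficients of the power-series quotient num/den up to order N-1 (den[0] != 0)."""
    q = []
    for n in range(N):
        s = num[n] - sum(q[i] * den[n - i] for i in range(n))
        q.append(s / den[0])
    return q

def apostol_bernoulli(a, z, N):
    """B_n(a; z) for 0 <= n < N, from the series of t*exp(a*t)/(z*exp(t) - 1)."""
    num = [F(0)] + [F(a) ** n / factorial(n) for n in range(N - 1)]  # t * e^{a t}
    den = [F(z) / factorial(n) for n in range(N)]                    # z * e^{t}
    den[0] -= 1                                                      # z * e^{t} - 1
    q = series_div(num, den, N)
    return [q[n] * factorial(n) for n in range(N)]
```

At $a=2$, $z=3$, for instance, this yields $B_0=0$, $B_1=B_2=1/2$, $B_3=3/2$, $B_4=-1/2$, in agreement with the closed forms displayed in the first example.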
\section{Finite Clusters for the Exact Diagonalization Calculations} In the main text and the supplementary information we have presented results of exact diagonalization (ED) and finite temperature Lanczos method (FTLM) calculations. The finite clusters are shown in Fig.~\ref{fig:lattices}; they are frequently referred to as ``S-'' (square) or ``T-'' (triangular) followed by the number of sites. For example, S-10 is the 10-site square cluster, and T-15 is the 15-site triangular cluster. \begin{figure}[h] \includegraphics[width=0.35\linewidth]{./lattice_triangular_3_0_0_3.pdf} \includegraphics[width=0.22\linewidth]{./lattice_triangular_2_m2_2_4.pdf} \includegraphics[width=0.35\linewidth]{./lattice_triangular_5_1_0_3.pdf} \includegraphics[width=0.32\linewidth]{./lattice_square_2_m2_2_2.pdf} \includegraphics[width=0.32\linewidth]{./lattice_square_3_m1_1_3.pdf} \includegraphics[width=0.32\linewidth]{./lattice_square_4_0_0_4.pdf} \caption{\label{fig:lattices} Finite triangular and square clusters treated with ED or FTLM in the main text and supplementary materials.} \end{figure} \section{Curie-Weiss fits for the triangular lattice} As an example, we show fits to the inverse magnetic susceptibility for the T-12 cluster in Fig.~\ref{fig:CW_fits}. \begin{figure*} \includegraphics[width=\linewidth]{./susceptibility_triangular_12.pdf} \caption{Curie-Weiss fits to the inverse susceptibility for the T-12 cluster for various representative fillings.} \label{fig:CW_fits} \end{figure*} \section{Curie-Weiss temperature for the Square Lattice Hubbard Model} In the main text we discussed the Curie-Weiss (CW) temperature $\Theta$ for the triangular lattice Hubbard model with nearest-neighbor hoppings as a function of (hole) filling. Interestingly, this simple model admits a ferromagnetic (FM) CW temperature, consistent with findings of the Cornell experiment~\cite{tang2020n}. To provide a comparative check, we carried out numerical calculations for the square lattice case.
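A Curie-Weiss fit of this kind reduces to a straight-line fit of the inverse susceptibility, $1/\chi = (T-\Theta)/C$, over a fixed temperature window. The sketch below (plain NumPy, with made-up values of $C$ and $\Theta$; it is not the analysis code used for the figures) illustrates the procedure:

```python
import numpy as np

# Synthetic Curie-Weiss susceptibility; C_true and theta_true are
# illustrative values, not fit results from the paper.
C_true, theta_true = 0.5, -1.2
T = np.linspace(0.8, 5.5, 40)        # fit window similar to the text
chi = C_true / (T - theta_true)

# 1/chi = T/C - theta/C, i.e. linear in T
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept / slope       # Curie-Weiss temperature
```

Since the synthetic data are exactly of Curie-Weiss form, the fit recovers $\Theta$ and $C$; with real (noisy, window-dependent) data the extracted $\Theta$ depends on the chosen temperature range, as noted in the text.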
\begin{figure} \includegraphics[width=200pt]{./curieweiss_comparison_square.pdf}% \caption{ Curie-Weiss temperature vs (hole) density for the Hubbard model with $U/t=20$ on the square lattice, denoted by ``S-$n$'' where $n$ represents the number of sites, as compared to the Cornell experiment (Ref.~\cite{tang2020n}, denoted by ``Tang et al.''). } \label{fig:CWsquare} \end{figure} \para{} Figure~\ref{fig:CWsquare} shows results for the square lattice with 8, 10, and 16 sites, denoted as S-8, S-10 and S-16, respectively. The CW fits were performed in a temperature range $0.8 < T/t < 5.5$, similar to the range chosen in the Cornell experiment. (As mentioned in the main text, this range sensitively affects $\Theta$.) We find that $\Theta<0$ for all fillings, corresponding to effective antiferromagnetic (AFM) interactions. The exception is $8$ sites, where $\Theta>0$ for two fillings (related by particle-hole symmetry of the square lattice Hubbard model); for larger system sizes, this tendency goes away. Additionally, for our largest size ($16$ sites) the magnitude of the increase of the CW temperature on going from half filling towards lower fillings is smaller than that observed in the experiment. \section{Finite Temperature static spin structure factor for other cases} \para{} The momentum-dependent static spin structure factor (in the $zz$ channel) is defined as \begin{equation} S^{zz}({\bf q},T) \equiv \frac{1}{N} \sum_{i,j} e^{-i\bfq \cdot (\bfr_i - \bfr_j)} \langle S^{z}_i S^{z}_j \rangle_{\mathrm{th}} \end{equation} where $N$ is the number of lattice sites and where $\langle \cdots \rangle_{\mathrm{th}}$ represents the thermal average. In the main text we presented calculations for the spin structure factor, after subtracting out the high temperature ($T/t=5$) signal, for the nearest-neighbor Hubbard model on the triangular T-12 cluster with $U/t=20$ for various representative fillings.
Here we show the data without subtraction, but normalize our color plots such that the maxima and minima acquire the same color across all fillings and temperatures. For ${\bf q} = (0,0)$, the spin structure factor is $S^{zz}(0,0) = \frac{1}{N} \langle S_z^2\rangle_{\mathrm{th}}$; thus for a FM ground state the spin structure factor scales as $N$ (note that multiple total $S_z$ sectors are degenerate at zero temperature for a FM). More directly, the ED spectrum shows a ground state multiplet with $S \neq 0$. Strictly speaking, long-range FM can occur only at $T=0$ since the Hohenberg-Mermin-Wagner theorem rules out true long range order at finite temperature in a two (or lower) dimensional system with continuous symmetry \cite{mermin1966prl,hohenberg1967pr}. \para{} In Fig.~\ref{fig:tri_noninteracting_susc} and Fig.~\ref{fig:square_susc} we show some other results (without subtraction) for comparison: (1) the T-12 cluster in the non-interacting limit and (2) the S-10 cluster at $U/t=20$ at $T/t=0.2$ and $T/t = 1$. For the T-12 case, we see maxima at ${\bf K}$ points at $f=0.5$ which persist to lower fillings, but the weight migrates to ${\bf M}$ points for higher fillings. For the S-10 case, as expected, signs of ordering at half filling ($f=0.5$) at $\bfq=(\pi,\pi)$ are seen. On the introduction of additional holes on top of half filling, the peak intensity migrates away from $(\pi,\pi)$. Identical results are observed for the removal of holes from the half filled case (a consequence of particle-hole symmetry of the square lattice Hubbard model) and are hence not shown. In neither of the two additional cases discussed here do we see any evidence for an underlying FM ground state at any filling.
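The double sum defining $S^{zz}({\bf q},T)$ above can be evaluated directly from a table of spin-spin correlations. A minimal sketch (plain NumPy, illustrated on a hypothetical four-site chain with N\'eel-like correlations, not on the cluster geometries used in this work):

```python
import numpy as np

def structure_factor_zz(positions, corr, qs):
    """S^zz(q) = (1/N) sum_{i,j} e^{-i q.(r_i - r_j)} <S^z_i S^z_j>."""
    N = len(positions)
    out = []
    for q in qs:
        phase = np.exp(-1j * positions @ q)   # e^{-i q.r_i} for each site
        out.append((phase[:, None] * np.conj(phase)[None, :] * corr).sum().real / N)
    return np.array(out)

# Four-site chain with Neel-like correlations <S^z_i S^z_j> = 0.25*(-1)^(i-j):
# all weight at q = pi, none at q = 0.
pos = np.arange(4, dtype=float)[:, None]
i = np.arange(4)
corr = 0.25 * (-1.0) ** (i[:, None] - i[None, :])
S = structure_factor_zz(pos, corr, [np.array([0.0]), np.array([np.pi])])
```

For a FM pattern (all correlations equal to $0.25$) the same routine would instead put all weight at ${\bf q}=0$, with $S^{zz}(0)\propto N$, as discussed above.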
\begin{figure} \includegraphics[height=120pt,page=5]{{./canonical_select_static_structure_factor_2_m2_2_4_U_1em5_t_1.pdf}} \includegraphics[height=120pt,page=7]{{./canonical_select_static_structure_factor_2_m2_2_4_U_1em5_t_1.pdf}} \caption{Spin structure factor (normalized to have the same maximum and minimum across all fillings) for the T-12 cluster across various fillings for the weakly interacting Hubbard model ($U/t=10^{-5}$) at two temperatures $T=0.2t$ and $T=t$. The red hexagon in each panel marks the Brillouin zone boundary. } \label{fig:tri_noninteracting_susc} \end{figure} \begin{figure} \includegraphics[height=120pt,page=5]{{./canonical_static_structure_factor_3_m1_1_3_U_20_t_1.pdf}} \includegraphics[height=120pt,page=7]{{./canonical_static_structure_factor_3_m1_1_3_U_20_t_1.pdf}} \caption{Spin structure factor (normalized to have the same maximum and minimum across all fillings) for the S-10 cluster across various fillings for $U/t=20$ at two temperatures $T=0.2t$ and $T=t$. The red square in each panel marks the Brillouin zone boundary. } \label{fig:square_susc} \end{figure} \section{Ground state DMRG static spin structure factor} Generalizing the structure factor to other channels and taking the limit of zero temperature, we have \begin{equation} S^{\alpha \alpha}(\bfq) \equiv S^{\alpha \alpha}({\bf q},T \rightarrow 0) \equiv \frac{1}{N} \sum_{i,j} e^{-i\bfq \cdot (\bfr_i - \bfr_j)} \langle \psi_0 | S^{\alpha}_i S^{\alpha}_j |\psi_0 \rangle \end{equation} where $\alpha=x,y,z$ and $|\psi_0 \rangle$ is the ground state of the system. (In the case of degenerate states, one must sum over all distinct ground states.) For a rotationally symmetric (singlet) ground state, which is the case for the triangular Hubbard model for most (but not all) fillings, $S^{zz}(\bfq) = S^{xx}(\bfq) = S^{yy}({\bf q})$.
For degenerate ground states, as is the case for a FM, choosing a single state from the degenerate multiplet and then computing the expectation values with it does not satisfy this condition. \begin{figure} \includegraphics[width=0.265\linewidth]{./XC6_bragg_peak_nup_12_ndn_12.pdf} \includegraphics[width=0.265\linewidth]{./XC6_bragg_peak_nup_18_ndn_18.pdf} \includegraphics[width=0.447\linewidth]{./XC6_bragg_peak_nup_29_ndn_29.pdf} \caption{ Static spin structure factor from DMRG for the $S_z = 0$ ground state of the length 6 XC-6 cylinder (36 sites) corresponding to fillings (starting from left) $f = 1/3$, 12 up and 12 down electrons, $f = 1/2$, 18 up and 18 down electrons, and (right) $f \approx 0.806$, 29 up and 29 down electrons ($zz$ and $xx$ ($yy$) channels are shown separately). The yellow dashed hexagon in each panel marks the Brillouin zone boundary. } \label{fig:DMRG_sf} \end{figure} In the main text we presented results of DMRG calculations on XC-6 cylinders ($36$ and $72$ sites) using a bond dimension of $16000$, targeting the ground state in the $S_z = 0$ sector, and computed real-space spin-spin correlation functions with respect to a chosen reference site. We found strong AFM nearest-neighbor correlations for $f=1/2$ (18 up and 18 down electrons on length 6), relatively weaker and shorter-range AFM correlations for $f=1/3$ (12 up and 12 down electrons on length 6), and FM correlations at $f \approx 0.833$ (60 up and 60 down electrons on length 12). In Fig.~\ref{fig:DMRG_sf} we show the momentum-dependent static spin structure factor for representative cases on length 6 XC-6 cylinders. As expected, for $f=1/2$ the (Bragg) peaks are at the $\bfK$ point of the Brillouin zone, consistent with 120-degree spiral order (note that the $xx$, $yy$, and $zz$ channels are identical for the singlet ground state and are summed to yield $S^{tot}({\bf q})$). In comparison, the weight at the $\bfK$ points is clearly diminished for $f=1/3$.
For $f \approx 0.806$, the $xx(yy)$ and $zz$ channels are clearly different. For the $xx (yy)$ channel there is a peak at $\bfq = 0$, consistent with FM. In the $zz$ channel there is no intensity associated with the $\bfq = 0$ point; this is a consequence of the sum rule corresponding to total $S_z = 0$. We check for finite-size effects to build further confidence in our findings. For example, Fig.~\ref{fig:finiteSizeScale} shows our results for the case of the FM at $f \approx 0.833$ on length 6 and 12 XC-6 cylinders. The overall static structure factor is visually similar; however, on increasing the size, the weight at ${\bf q} = 0$ is found to increase. For length 6, $\langle S^2 \rangle$ associated with the ground state is found to be $\approx 20$ (corresponding to $S=4$) and for length 12 it is $\approx 56$ (corresponding to $S=7$). This is consistent with a Bragg peak, signalling long-range FM. \begin{figure} \includegraphics[width=0.495\linewidth]{./XC6_6x6_Bragg_peak_nup_30_ndn_30.pdf} \includegraphics[width=0.495\linewidth]{./XC6_12x6_Bragg_peak_nup_60_ndn_60.pdf} \caption{ Static spin structure factor from DMRG at filling $f \approx 0.833$ on the XC-6 cylinder of length 6 (i.e., 36 sites; left) and length 12 (i.e., 72 sites; right). } \label{fig:finiteSizeScale} \end{figure} \section{Ground state for $f=1/3$} For $f=1/3$ the T-12 and T-15 clusters have a FM ground state. Even the T-9 cluster shows a low-energy multiplet in close competition with singlets in the spectrum. Fig.~\ref{fig:f_1b_3_triangular} shows the gap of the FM to other states decreasing by a factor of $5$ on going from T-12 to T-15, revealing multiple competing states. This required further investigation on a larger cluster with DMRG, analysis of which suggested the ground state is not a FM, but one which displays short-range AFM correlations.
\begin{figure} \includegraphics[height=230pt,page=7]{{./eigenspectrum_model_hubbard_shape_triangular_3_0_0_3_t_1_U_20.pdf}} \includegraphics[height=230pt,page=9]{{./eigenspectrum_model_hubbard_shape_triangular_2_m2_2_4_t_1_U_20.pdf}} \includegraphics[height=230pt,page=11]{{./eigenspectrum_model_hubbard_shape_triangular_5_1_0_3_t_1_U_20.pdf}} \caption{From left to right: Exact diagonalization spectra for the T-9, T-12 and T-15 clusters for $f=1/3$. The lower panels highlight the multiplet structure of the ground state. Note the small scale of the energy gaps, which required further analysis on a bigger system with DMRG.} \label{fig:f_1b_3_triangular} \end{figure} \end{widetext} \end{document}
\section{Introduction} \label{sec:intro} Galaxy clusters play a significant role in the study of large-scale particle acceleration processes. These clusters are a mixture of dark matter, galaxies, and hot gas. The hot gas in the Intra-Cluster Medium (ICM) makes up to 15 - 17\% of the cluster's total mass (\citealt{vanweeren_2019SSRv..215...16V}), and it emits in the X-ray band through the thermal bremsstrahlung process. Thus X-ray observations provide crucial information about cluster mass and dynamical state \citep{Sarazin_2002}. The gravitational potential energy released during cluster mergers heats the ICM and is channeled into shocks and turbulence in the ICM. These shocks and turbulence accelerate relativistic particles and amplify magnetic fields. As a result, we observe large-scale non-thermal synchrotron emission from clusters at radio wavelengths. This diffuse radio emission comes from highly relativistic particles spiraling around cluster magnetic fields and generally shows a steep integrated spectral index ($\alpha<-1$, where $S_{\nu} \propto \nu^{\alpha}$) \citep{vanweeren_2019SSRv..215...16V}. The emissions are categorised as halo, minihalo, relic, and phoenix \citep{Feretti_2012A&ARv..20...54F,vanweeren_2019SSRv..215...16V}. Halos and minihalos are centrally located diffuse structures. Halos form on the megaparsec scale (size $\sim$ 0.5 - 2 Mpc) in massive merging clusters as a result of turbulent re-acceleration of relativistic electrons in the ICM \citep{Brunetti_2014IJMPD..2330007B}. In comparison, minihalos (size $\sim$ 100 - 500 kpc) are seen mainly in relaxed, cool-core clusters and form due to minor mergers, gas sloshing, or AGN feedback \citep{ZuHone_2013, Raja_2020ApJ...889..128R, Richard_Laferri_re_2020}. Cluster radio relics are found mainly at the cluster outskirts or peripheral regions \citep{Enblin_refId0}, generally have an elongated shape, and are polarised at high frequency.
Relics can have a largest linear size (LLS) extending up to 3 Mpc \citep{Hoang_2021arXiv210600679H}. The relics often coincide with cluster merger shocks. This spatial coincidence supports the idea that relics are shock-generated. The formation of relics, for some clusters, is supported by the theory of Diffusive Shock Acceleration (DSA) of particles, where the particles are accelerated to relativistic speed at the shock front from the thermal pool of the ICM \citep{Drury_1984AdSpR...4b.185D, Ensslin_1998A&A...332..395E, Hoeft_2007MNRAS.375...77H, Kang_2013ApJ...764...95K}. However, the DSA mechanism is challenged by the low particle acceleration efficiency of the weak shocks that are generally observed in clusters \citep{Hoeft_2007MNRAS.375...77H, Botteon_2016MNRAS.460L..84B, Gennaro_2018}. The reported relic luminosities and Mach numbers inferred from radio and X-ray observations point to another possibility, where the shock re-accelerates mildly relativistic old fossil plasma already present in the ICM. This theory is consistent with weaker shocks generating luminous radio relics \citep{Botteon_2016MNRAS.460L..84B,vanWeeren_2017NatAs...1E...5V, Stuardi_2019MNRAS.489.3905S,rajpurohit2020A&A...636A..30R}. In another scenario, the shock adiabatically compresses an old AGN bubble and re-energizes fossil plasma. This phenomenon is responsible for the generation of radio phoenixes at smaller cluster-centric distances (\citealt{Enblin_refId0,Slee_2001AJ....122.1172S,ensslin_bruggen2002_10.1046/j.1365-8711.2002.05261.x,Kempner:2003eh,Ferrari:2008jr}). These radio phoenixes have a rather roundish and filamentary morphology, mostly show ultra-steep spectra with $\alpha < -2$, and also hint at spectral curvature at high frequency \citep{Slee_2001AJ....122.1172S, Kale_2018MNRAS.480.5352K,Mandal_2019A&A...622A..22M}.
Despite the available multi-wavelength studies, the exact particle acceleration mechanisms behind the formation of diffuse radio emission are yet to be properly understood. More observational evidence is also needed to establish the correlations between cluster properties and diffuse emission.\\ \textbf{Abell 1351} (A1351) is a massive ($M = 1.4 - 4.2 \times 10^{15}\ \mathrm{M}_{\odot}$) merging cluster at redshift 0.325 \citep[][hereafter \citetalias{Barrena_2014MNRAS.442.2216B}]{Barrena_2014MNRAS.442.2216B}. It has a mass distribution extended along the north-northeast (N-NE) and south-southwest (S-SW) directions (\citealt{Bohringer_2000,Dahle_2002ApJS..139..313D}, \citetalias{Barrena_2014MNRAS.442.2216B}). The presence of diffuse radio emission in A1351 was first detected by \citet{Owen1999}. Using Karl G. Jansky Very Large Array (VLA) 1.4 GHz data, \citet{Giacintucci_2009ApJ...704L..54G} and \citet{giovannini_2009A&A...507.1257G} classified this Mpc-scale diffuse emission in the cluster as a giant radio halo. The halo was also detected by \citet{Botteon2022} using LOFAR Two-metre Sky Survey Data Release 2 (LoTSS-DR2). As reported previously, the cluster has an X-ray luminosity of $L_x(0.1-2.4\ \mathrm{keV}) = 8.31\times10^{44}\ \mathrm{h}_{50}^{-2}\ \mathrm{ergs}^{-1}$ \citep{Bohringer_2000} and a radio luminosity $P_{1.4\mathrm{\ GHz}} = 1.2-1.3\times10^{25}$ h$^{-2}_{70}$ W Hz$^{-1}$ \citep{Giacintucci_2009ApJ...704L..54G,giovannini_2009A&A...507.1257G}. The diffuse emission shows a bright edge at radio wavelengths, which was classified as a ``ridge'' by \citet{Giacintucci_2009ApJ...704L..54G} (hereafter \citetalias{Giacintucci_2009ApJ...704L..54G}). Bright radio edges blended with halo emission have previously been noticed in a few clusters (e.g. \citealt{Brown_2011,Macario_2011,Shimwell_2014MNRAS.440.2901S,Wang_2018}).
Both \citetalias{Giacintucci_2009ApJ...704L..54G} and \citetalias{Barrena_2014MNRAS.442.2216B} suggested that the complex diffuse structure of A1351 could be a halo-relic combination. Here we present the first-ever spectral index map of the diffuse emission in A1351 using Giant Metrewave Radio Telescope (GMRT) 610 MHz and VLA 1.4 GHz data. We also analyzed \textit{Chandra} X-ray data to look for a possible shock front at the location of the radio-bright edge.\\ \begin{table} \caption{A1351 Cluster properties} \begin{tabular}{cc} \hline \hline Parameter & Value\\ \hline RA$_{\mathrm{J2000}}$ & 11h 42m 30.8s \\ DEC$_{\mathrm{J2000}}$ & $+58d\ 32\arcmin\ 20\arcsec$\\ Mass (M$_{sys}$) & $1.4-4.2\times10^{15}\ \mathrm{M}_{\odot}$\\ Redshift ($z$) & 0.325\\ $L_x(0.1-2.4\ \mathrm{keV})$ & $8.31 \times 10^{44}\ \mathrm{h_{50}^{-2}\ erg\ s^{-1}}$\\ kT & $8.69^{+1.01}_{-0.54}$ keV\\ $\sigma_v$ & $1524^{+96}_{-74}\ \mathrm{km\ s^{-1}}$\\ \hline \end{tabular} \tablecomments{Cluster properties as reported by \citet{Barrena_2014MNRAS.442.2216B}} \end{table} The outline of the paper is as follows. We present the radio observations and data reduction procedure for GMRT and VLA in Section~\ref{obs}. In Section~\ref{result} we present the results from the radio observations. We discuss the X-ray observations and results in Sections~\ref{x-ray} and~\ref{xray_results}, respectively. We discuss the properties of the diffuse emission, the particle acceleration mechanisms, and the cluster magnetic field in Section~\ref{disc}, followed by a summary in Section~\ref{concl}. We have assumed a $\Lambda$CDM cosmology with $\Omega_m$ = 0.3, $\Omega_{\lambda}$ = 0.7 and $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$ throughout this paper. At the redshift of Abell 1351 ($z = 0.325$), 1\arcsec corresponds to 4.704\,kpc.
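The quoted scale of 4.704\,kpc per arcsecond follows from the angular diameter distance in this cosmology. A short numerical check (plain Python with Simpson integration; this is an illustration, not code from the analysis):

```python
import math

# Flat LCDM parameters adopted in the text
H0, Om, Ol, z = 70.0, 0.3, 0.7, 0.325
c = 299792.458  # speed of light, km/s

def invE(zp):
    """1/E(z) for a flat LCDM universe."""
    return 1.0 / math.sqrt(Om * (1.0 + zp) ** 3 + Ol)

# Comoving distance by composite Simpson's rule, then angular diameter distance
n = 1000
h = z / n
integral = h / 3 * sum(
    (1 if k in (0, n) else 4 if k % 2 else 2) * invE(k * h) for k in range(n + 1)
)
D_A = (c / H0) * integral / (1.0 + z)                    # Mpc
kpc_per_arcsec = D_A * 1000.0 * math.pi / (180.0 * 3600.0)
# kpc_per_arcsec comes out to approximately 4.704
```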
\section{Radio Observation} \label{obs} In this section, we discuss the observation and data reduction procedure of A1351 with GMRT\footnote{GMRT Data Archive: \url{https://naps.ncra.tifr.res.in/goa/data/search}} 610 MHz and VLA\footnote{VLA Data Archive: \url{https://archive.nrao.edu/archive/advquery.jsp}} 1.4 GHz. \begin{table*} \centering \caption{The archival observation summary for GMRT and VLA and the RMS reached using uniform weighting are listed below} \label{tab:Table1} \hspace{0.01in} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccc} \hline \hline Telescope & Observation date & Frequency & Bandwidth & Time on Source & Beam & P.A. & RMS \\ Configuration & & (MHz) & (MHz) & (min) & & & ($\mu$Jy/beam) \\ \hline GMRT & Feb 2010 & 610 & 30 & 865 & $3.87\arcsec \times 3.40\arcsec $ & $+31.50$\degree & 105 \\ VLA A & April 1994 & 1400 & 43.75 & 30 & $1.38\arcsec \times 1.04\arcsec $ & $-3.79$\degree & 47\\ VLA C & April 2000 & 1400 & 43.75 & 125 & $12.79\arcsec \times 9.76\arcsec $ & $+28.43$\degree & 42\\ VLA D & March 1995 & 1400 & 43.75 & 15 & $51.32\arcsec \times 33.55\arcsec $ & $+49.98$\degree & 244\\ \hline \end{tabular} } \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{final_figures/Figure1.pdf} \caption{Left: GMRT 610 MHz high-resolution image contours (blue) with restoring beam $3.87\arcsec \times 3.40\arcsec$, P.A. $31.50\degree$, overlaid on the SDSS-DR12 optical image. The contour levels increase by a factor of 3 starting from 3$\sigma_\mathrm{rms}$, where $\sigma_\mathrm{rms}=105\ \mu$Jy/beam. Right: GMRT 610 MHz full-resolution color image with restoring beam $5.17\arcsec \times 4.67\arcsec$, P.A. 46.06$\degree$, overlaid with red X-ray contours.
The X-ray contour levels increase by a factor of 2.} \label{fig: radio_full_res} \end{center} \end{figure*} \subsection{GMRT 610 MHz} The cluster was observed with GMRT during Feb 2010 (project code \textit{17\_019}) in dual-frequency (610 / 235 MHz) mode, where the 610 MHz and 235 MHz visibilities are observed on RR and LL polarisation, respectively, with 865 min on-source time. For the GMRT Hardware Backend correlator, the bandwidth was split into upper and lower sidebands (USB and LSB), each having 16 MHz. For the two sidebands, .lta and .ltb files were generated for both frequencies. The observational summary of the cluster is presented in Table~\ref{tab:Table1}. Source Peeling and Atmospheric Modelling (SPAM; \citealt{refId0, Intema_refId0}), a Python-based data reduction recipe, was used for the GMRT data reduction. This semi-automated pipeline uses Astronomical Image Processing System (AIPS) tasks for data reduction. At first, the raw data in FITS format was used for pre-calibration. In the pre-calibration part, the data from the best available scans of the primary calibrator (here 3C286) was used for calibration. The \citet{scaife_10.1111/j.1745-3933.2012.01251.x} model was used to set the flux density scale. For 610 MHz, data for the two sidebands were pre-calibrated separately and then joined. For 235 MHz, only part of the bandwidth (USB) was available for pre-calibration. The pre-calibrated visibility data set in UVFITS format was used for further processing. After RFI mitigation and bad data editing, several rounds of direction-independent self-calibration were performed. Finally, direction-dependent calibration was done for the 610 MHz data to correct ionospheric phase errors by peeling the bright sources within the field of view. Due to the poor data quality, the direction-dependent approach could not be applied for 235 MHz, and due to the poor image sensitivity, it was not further used in our analysis.
The final calibrated dataset, obtained in FITS format, was converted to Common Astronomy Software Application (CASA\footnote{\url{https://casa.nrao.edu/}}) measurement set (.ms) format using the CASA task \textit{importgmrt}. Further imaging was done in CASA. \subsubsection{Imaging Diffuse Emission at 610 MHz} In the full resolution ($5.17\arcsec\times4.67\arcsec$, P.A. 46.06$\degree$) image of the cluster (Fig.~\ref{fig: radio_full_res} right panel) made using Briggs \citep{Briggs_1995AAS...18711202B} robust 0, the presence of diffuse emission is visible along with the discrete radio galaxies. This bright extended emission corresponds to the halo edge or the ridge emission of A1351. The edge has a rough extension of 570 kpc at 610 MHz. To bring out the diffuse emission properly, we first made a high-resolution image using Briggs \citep{Briggs_1995AAS...18711202B} robust parameter -1 and selecting a \textit{uv}-range greater than 1.7 k$\lambda$ (which roughly corresponds to 570 kpc at the redshift of 0.325) to ignore any contribution from the extended emission. We masked the individual radio galaxies from this image. We obtained the model visibility of these masked galaxies using the CASA task \textit{tclean} on the full dataset. These model visibilities were subtracted from the data using the task \textit{uvsub} in CASA. The final image for the diffuse emission was then made using Briggs robust 0, selecting a \textit{uv}-range below 20 k$\lambda$ and using a 5 k$\lambda$ \textit{uv}-taper with a restoring beam $22.46\arcsec\times19.88\arcsec$, P.A. $14.03\degree$ (Figure~\ref{fig: diffuse} color map). This choice of tapering and \textit{uv}-range gave the best recovery of the diffuse structure. We changed the wprojplanes parameter in \textit{tclean} to -1 to compensate for GMRT's non-coplanar baselines, which resulted in the use of 370 w-projection planes.
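The correspondence quoted above between a \textit{uv}-distance cut and a physical scale (1.7\,k$\lambda$ $\sim$ 570\,kpc at $z=0.325$) follows from $\theta \approx 1/(uv)$ radians, converted with the 4.704\,kpc\,arcsec$^{-1}$ scale. A short check (plain Python, for illustration only):

```python
import math

uv_min = 1.7e3            # uv-distance cut in wavelengths
kpc_per_arcsec = 4.704    # scale at z = 0.325 quoted in the text

# Largest angular scale probed by baselines longer than uv_min, in arcsec
theta_arcsec = (1.0 / uv_min) * (180.0 / math.pi) * 3600.0
size_kpc = theta_arcsec * kpc_per_arcsec   # roughly 570 kpc
```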
\subsection{VLA 1.4 GHz} \begin{figure} \includegraphics[width=0.47\textwidth]{final_figures/Figure2.pdf} \caption{GMRT 610 MHz low-resolution color image with restoring beam $22.46\arcsec \times 19.88\arcsec$, P.A. $14.03\degree$, overlaid with VLA low-resolution image contours in red and GMRT low-resolution image contours in black. The GMRT contour levels are placed at $(-1,1,2,3,4,5,6)\times 3\sigma_\mathrm{rms}$ where $\sigma_\mathrm{rms}=260\ \mu$Jy/beam. Negative contours are dashed. The VLA contours are placed at levels $(1,2,3,4,5,6)\times 3\sigma_\mathrm{rms}$ where $\sigma_\mathrm{rms}=50\ \mu$Jy/beam. The white and yellow dashed lines show the shape of the ridge region at 610 MHz and 1.4 GHz, respectively. The yellow cross marks the location of the cluster center from \citetalias{Barrena_2014MNRAS.442.2216B}.} \label{fig: diffuse} \end{figure} A1351 was observed with the VLA A array in April 1994 (project code \textit{AB0699}) in dual polarisation mode with 30 min on-source time. It was also observed with the VLA C and D configurations in dual polarisation mode in April 2000 (project code \textit{AO0149}) for 125 min and in March 1995 (project code \textit{AM0469}) for 15 min, respectively. The observational summary is presented in Table~\ref{tab:Table1}. CASA was used for the data analysis. At first, the data for the different configurations were treated individually for RFI removal and calibration. 3C286 was used as the flux calibrator for all three configurations, and the flux density was set using the model described in \citet{scaife_10.1111/j.1745-3933.2012.01251.x}. After calibration was applied, the target data were split into new measurement sets for each of the data sets, and a few rounds of phase-only self-calibration were performed on the split measurement sets for all the configurations. The beam sizes and RMS reached using uniform weighting are given in Table~\ref{tab:Table1}.
The weights of the three data sets were equalized using the CASA task \textit{statwt}, and the self-calibrated data sets were then combined using the CASA task \textit{concat} for further imaging. \subsubsection{Imaging Diffuse Emission at 1.4 GHz} We made a high-resolution image with the combined data using the Briggs robust parameter -1. To extract the diffuse emission, the individual point sources were masked in the high-resolution image and then subtracted from the visibility data as explained previously. Finally, after subtraction of the flux contribution from individual point sources, the image of the diffuse emission was made using Briggs robust 0, selecting a \textit{uv}-range below 20 k$\lambda$ and using a 5 k$\lambda$ \textit{uv}-taper (Figure~\ref{fig: diffuse}, red contours). \section{Results from Radio Observation} \label{result} In this section, we discuss the results from the GMRT 610 MHz and VLA 1.4 GHz radio data. We also show the spectral index distribution of the diffuse emission. \subsection{GMRT 610 MHz} The optical study showed that the cluster has two main sub-clusters in the northern region surrounding the two brightest cluster galaxies, BCG1 and BCG2 (\citetalias{Barrena_2014MNRAS.442.2216B}). BCG1 also coincides with the X-ray peak. Figure~\ref{fig: radio_full_res} (left panel) is the high-resolution radio image of the cluster, made using uniform weighting, overlaid on the optical image from SDSS Data Release 12 \citep{sdss_2015ApJS..219...12A}. BCG1 and a few other galaxies are visible in the radio band, but BCG2 does not have visible radio emission. The source TG in Figure \ref{fig: radio_full_res} (left) is a tailed radio galaxy. Figure~\ref{fig: diffuse} presents the low-resolution image of the diffuse emission after subtraction of the individual galaxies, overlaid with VLA contours in red and GMRT contours in black.
The radio-bright edge, or the ridge (as mentioned in \citetalias{Giacintucci_2009ApJ...704L..54G} and \citetalias{Barrena_2014MNRAS.442.2216B}), has a Largest Linear Size (LLS) of $\sim$ 570 kpc, with an elongation in the north-west south-east direction and a width of 260 kpc at 610 MHz. The shape of the bright ridge at 610 MHz has been highlighted with a white dashed contour in Figure~\ref{fig: diffuse}. The brightest region of the edge is located $\sim$ 470 kpc away from BCG1 and $\sim$ 290 kpc away from source TG.\\ \subsubsection{Integrated Flux Density Estimation:} To estimate the integrated flux density of the total diffuse emission of the cluster, we selected the region within the $3\sigma_\mathrm{rms}$ contour of Figure~\ref{fig: diffuse}. The flux density within that contour region for GMRT 610 MHz was found to be $86.67 \pm 5.49$ mJy, where $\sigma_\mathrm{rms}=260\ \mu$Jy/beam. For calculating the uncertainty in the flux density estimation, we used the equation \begin{equation} \label{eq1} \Delta S=\sqrt{(\sigma_\mathrm{cal}\ S)^2+(\sigma_\mathrm{rms} \sqrt{N_\mathrm{beam}})^2} \end{equation} where an uncertainty of 6\% was assumed due to calibration error ($\sigma_\mathrm{cal}$) following \citet{chandra_2004ApJ...612..974C}, and ${N_\mathrm{beam}}$ is the number of beams within the $3\sigma_\mathrm{rms}$ contour of the GMRT image, where $\sigma_\mathrm{rms}$ is the image RMS noise. The flux density of the radio edge (within the 9$\sigma_\mathrm{rms}$ contour) was found to be 42.20 $\pm$ 2.68 mJy.\\ To cross-check the value of the flux density of the entire diffuse emission, we attempted another approach, as described in \citet{Raja_2020ApJ...889..128R}. In this approach, the flux densities of the galaxies were estimated from the high-resolution image (left panel, Figure~\ref{fig: radio_full_res}) using PyBDSF (Python Blob Detector and Source Finder) \citep{Mohan_2015ascl.soft02007M}.
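Equation \ref{eq1} can be evaluated directly in Python. In the sketch below, the beam count is an illustrative guess only (chosen so the numbers land near the quoted $86.67 \pm 5.49$ mJy), since $N_\mathrm{beam}$ within the $3\sigma_\mathrm{rms}$ contour is not quoted explicitly in the text.

```python
import math

def flux_uncertainty(s_mjy, sigma_cal, sigma_rms_mjy, n_beam):
    """Equation (1): quadrature sum of calibration and image-noise errors."""
    return math.sqrt((sigma_cal * s_mjy) ** 2 + sigma_rms_mjy ** 2 * n_beam)

# GMRT 610 MHz diffuse emission: S = 86.67 mJy, 6% calibration error,
# sigma_rms = 0.26 mJy/beam; N_beam = 46 is an illustrative guess, not a
# value quoted in the text.
print(flux_uncertainty(86.67, 0.06, 0.26, 46))  # ~5.49 mJy
```

With the quoted flux density and noise, the calibration term dominates the error budget unless the region covers many hundreds of beams.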
Furthermore, these flux density values were subtracted from the total emission within the $3\sigma_\mathrm{rms}$ contour to get the amount of flux density from the diffuse emission. We found the integrated flux density in this approach to be 87.18 $\pm$ 6.11 mJy. \subsection{VLA 1.4 GHz} \begin{figure*} \begin{center} \includegraphics[width=6.0in]{final_figures/Figure3_top.pdf} \\ \includegraphics[width=6.0in]{final_figures/Figure3_bottom.pdf} \\ \caption{Top (Left): Spectral index map between 1.4 GHz and 610 MHz with beam $22.46\arcsec \times 19.88\arcsec$, P.A. $14.03\degree$, overlaid with GMRT low-resolution image contours in black. The contour levels for GMRT are placed at $(1,2,3,4,5,6)\times 3\sigma_\mathrm{rms}$ where $\sigma_\mathrm{rms} = 260\ \mu$Jy/beam. The edge region has been highlighted with a white dashed contour. Bottom (Left): Spectral index map highlighting only the edge region. Top (Right): Spectral index error map between 1.4 GHz and 610 MHz. Bottom (Right): Spectral index error map for only the edge region.} \label{fig: spec_map} \end{center} \end{figure*} The red contours in Figure~\ref{fig: diffuse} represent the low-resolution image obtained from the VLA 1.4 GHz observations. The diffuse emission is more extended in the northern region surrounding BCG1 and BCG2 at 1.4 GHz. This emission, previously classified as a halo by \citetalias{Giacintucci_2009ApJ...704L..54G}, has a quite asymmetric and elongated structure. We estimated the flux density of the diffuse emission within the $3\sigma_\mathrm{rms}$ contour to be $24.10 \pm 2.44$ mJy, where $\sigma_\mathrm{rms}=50\ \mu$Jy/beam. We used the same equation (equation \ref{eq1}) for estimating the error in the flux density measurement, assuming a 10\% uncertainty due to calibration error.
The flux density of the radio-bright ridge (highlighted with a yellow dashed contour in Figure~\ref{fig: diffuse}) was estimated to be 10.85 $\pm$ 1.10 mJy, which gives a radio luminosity $P_{1.4 \mathrm{\ GHz}} = (4.46 \pm 0.61)\times10^{24}$ W Hz$^{-1}$. The luminosity was calculated using equation \ref{eq3}, \begin{equation} \label{eq3} P_{1.4 \mathrm{\ GHz}}= \frac{4\pi D_L^2(z)}{(1+z)^{(\alpha+1)}}\times S_{1.4\mathrm{\ GHz}} \end{equation} where $D_L(z)$ is the luminosity distance at redshift $z$. This estimate is consistent with the radio luminosity reported by \citetalias{Giacintucci_2009ApJ...704L..54G} for the radio ridge region. \begin{figure*} \begin{center} \begin{tabular}{lccr} \includegraphics[width=0.5\textwidth]{final_figures/Figure4_Left.pdf} & \includegraphics[width=0.4\textwidth]{final_figures/Figure4_Right.pdf} \\ \end{tabular} \caption{Left: \textit{Chandra} X-ray surface brightness map of A1351 overlaid with GMRT 610 MHz low-resolution image contours (black) and VLA 1.4 GHz low-resolution image contours (cyan) with restoring beam $22.46\arcsec \times 19.88\arcsec$, P.A. $14.03\degree$. The contour levels are placed at $(-1,1,2,3,4,5,6)\times3\sigma_\mathrm{rms}$, where $\sigma_\mathrm{rms} = 260\ \mu$Jy/beam for the 610 MHz image and $\sigma_\mathrm{rms} = 50\ \mu$Jy/beam for the 1.4 GHz image. The negative contours are dashed. The wedge region (magenta) was used to produce the surface brightness profile; the dashed regions (blue) T1 and T2 are used to estimate the temperature across the discontinuity. The arc between the T1 and T2 regions represents the position of the discontinuity. Right: The surface brightness profile over the wedge region (magenta in the left panel). The inset shows the simulated gas density model.
The temperatures measured in the T1 and T2 regions are $T_{1} = 3.57^{+2.38}_{-0.55}$ keV and $T_{2} = 7.26^{+5.74}_{-0.79}$ keV.} \label{fig: xray shock} \end{center} \end{figure*} \subsection{Spectral Index Estimation} To measure the spectral index, we made images with the same \textit{uv}-range and the same restoring beam at the two frequencies. Generally, diffuse emission has a wider spread in the low-frequency domain. However, the northern section of the halo is not noticeable in Figure~\ref{fig: diffuse} owing to the lower sensitivity of the low-resolution 610 MHz image. So, for the spectral index estimation, the common region with diffuse emission present at both frequencies was selected using CASA polygon drawing, and the flux densities in the region were noted. The spectral index was estimated using the equation $\alpha=\log(S_1/S_2)/\log(\nu_1/\nu_2)$, where $S_1$ and $S_2$ are the flux densities at frequencies $\nu_1$ and $\nu_2$, respectively. The integrated spectral index of the entire radio structure was found to be $\alpha_\mathrm{total} = -1.72 \pm 0.33$, where the error in the spectral index was calculated using equation \ref{eq2},\\ \begin{equation} \label{eq2} \Delta\alpha=\frac{1}{\rm{log}(\nu_1/\nu_2)}\times \sqrt{\frac{(\Delta S_1)^2}{S_1^2}+\frac{(\Delta S_2)^2}{S_2^2}} \end{equation} We found that the edge has a steep spectrum, with an integrated spectral index $\alpha = -1.63 \pm 0.33$. \subsubsection{Spectral Index Map:} Spectral index mapping is an essential and informative aspect of radio data analysis, as it helps to better understand the particle acceleration process in the cluster. The spectral index distribution can give us an idea of where the particles have recently been accelerated, thus indicating shock acceleration/re-acceleration scenarios. The spectral index of freshly accelerated particles is flatter, and it steepens as the particles lose energy after acceleration.
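The two-point spectral index of the edge, its uncertainty (equation \ref{eq2}), and the k-corrected luminosity (equation \ref{eq3}) can all be reproduced from the quoted flux densities. The sketch below assumes a flat $\Lambda$CDM cosmology ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$), which recovers the quoted values to within a few per cent.

```python
import math
import numpy as np

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index: alpha = log(S1/S2) / log(nu1/nu2)."""
    return math.log10(s1 / s2) / math.log10(nu1 / nu2)

def spectral_index_error(s1, ds1, s2, ds2, nu1, nu2):
    """Equation (2): propagated uncertainty on the two-point index."""
    return abs(1.0 / math.log10(nu1 / nu2)) * math.sqrt(
        (ds1 / s1) ** 2 + (ds2 / s2) ** 2)

def lum_dist_m(z, h0=70.0, om=0.3):
    """Luminosity distance in metres (flat LCDM; assumed H0, Omega_m)."""
    c, mpc_m = 299792.458, 3.0857e22
    zs = np.linspace(0.0, z, 10001)
    ez = np.sqrt(om * (1 + zs) ** 3 + (1 - om))
    return (c / h0) * np.trapz(1.0 / ez, zs) * (1 + z) * mpc_m

def radio_power(s_mjy, z, alpha):
    """Equation (3): k-corrected radio power in W/Hz (S in mJy)."""
    return 4 * np.pi * lum_dist_m(z) ** 2 * s_mjy * 1e-29 / (1 + z) ** (alpha + 1)

# Edge/ridge flux densities: 42.20 +/- 2.68 mJy (610 MHz), 10.85 +/- 1.10 mJy (1.4 GHz)
alpha = spectral_index(42.20, 10.85, 610.0, 1400.0)
dalpha = spectral_index_error(42.20, 2.68, 10.85, 1.10, 610.0, 1400.0)
p14 = radio_power(10.85, 0.325, alpha)
print(alpha, dalpha)  # ~ -1.63 +/- 0.33, the quoted edge spectral index
print(p14)            # ~ 4.5e24 W/Hz, close to the quoted 4.46e24 W/Hz
```

Note that the quoted $\alpha_\mathrm{total} = -1.72$ is not recovered from the total flux densities, since it was measured over the common emission region rather than the full $3\sigma_\mathrm{rms}$ areas.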
To make the spectral index map between 610 MHz and 1.4 GHz, we made images at each frequency with Briggs robust 0 \citep{Briggs_1995AAS...18711202B}, choosing the same \textit{uv}-range of 20 k$\lambda$ and applying a \textit{uv}-taper of 5 k$\lambda$ in both cases. This choice of \textit{uv}-range and tapering brought out the diffuse emission best at both frequencies. The final images were obtained with the same resolution of $22.46\arcsec \times 19.88\arcsec$, P.A. $14.03\degree$, with the same image size and cell size. The low-frequency image was regridded using the CASA task \textit{imregrid}. The emission has less spread in some regions at 610 MHz (Figure~\ref{fig: diffuse}). So the surface brightness below the $3\sigma_\mathrm{rms}$ level of the GMRT 610 MHz image was masked. This masked region was copied to select the same region in the VLA 1.4 GHz image. Finally, the spectral index for each pixel was calculated using the CASA tool \textit{immath}. The error in the spectral index was also calculated for each pixel (Figure~\ref{fig: spec_map}).\\ Furthermore, we made the spectral index map of the bright edge/ridge region in particular, following the same procedure and choosing a surface brightness mask below the $9\sigma_\mathrm{rms}$ level of the GMRT 610 MHz image. In Figure~\ref{fig: spec_map}, a gradient of the spectral index can be observed along the south to north direction of the edge. This gradient resembles the downstream steepening of the spectral index seen in radio relics: as a shock passes through the ICM, it accelerates or re-accelerates electrons to relativistic speeds. At the location of the shock front, as the particles have been recently accelerated, the diffuse emission shows relatively flatter spectra. In the post-shock region, the high-energy particles lose energy faster through synchrotron losses and the inverse Compton effect, so a steepening of the spectral index is noticed \citep{Gennaro_2018,rajpurohit2020A&A...636A..30R}.
\subsection{Radio Mach Number} In the DSA model of particle acceleration, particles from the thermal pool of the ICM travel back and forth between the pre- and post-shock regions and gain energy in this process \citep{Hoeft_2007MNRAS.375...77H, Brunetti_2014IJMPD..2330007B}. This model predicts that the shock Mach number $\mathcal{M}$ for a planar shock is related to the spectral index integrated over a region with different plasma ages as \begin{equation} \mathcal{M}_{\alpha} = \sqrt{\frac{(\alpha-1)}{(\alpha+1)}} \end{equation} For the bright ridge/edge in A1351, we found a shock Mach number of 2.05, considering the integrated spectral index $\alpha$ = -1.63 over the region that shows spectral steepening similar to radio relics (Fig.~\ref{fig: spec_map}, bottom left). \section{Chandra X-ray Observation} \label{x-ray} We analyzed archival \textit{Chandra}\footnote{\textit{Chandra} Data Archive: \url{https://cda.harvard.edu/chaser/}} X-ray observations (ObsId 15136) of A1351. The total 33 ks of observations were taken in VFAINT mode. For this study, we employed a systematic calibration and analysis pipeline, which uses the \textit{Chandra} Interactive Analysis of Observations (CIAO) and subsequent scripts in IDL and Python. The details of our data reduction pipeline, which initially consisted of several bash and IDL scripts, are described in \citet{Datta_2014ApJ...793...80D, Schenck_2014AJ....148...23S, Hallman_2018ApJ...859...44H, Raja_2020ApJ...889..128R, Rahaman_2021MNRAS.505..480R}. As A1351 is a comparatively low-redshift cluster ($z = 0.325$), the extended X-ray emission spills over all four ACIS-I chips. Therefore, we were not able to use a local background (as in, e.g., \citealt{Raja_2020ApJ...889..128R}). Hence, we subtracted the background contribution present in the observation by extracting background spectra from the ``blank-sky'' background files.
These ``blank-sky'' background files are available in the \textit{Chandra} calibration database (CALDB) and represent the particle background and the unresolved cosmic X-ray background. The ``blank-sky'' background was re-normalized in the 9.5 - 12 keV band. The \textit{Chandra} effective area is negligible in the 9.5 - 12 keV band, and all the 9.5 - 12 keV flux in the sky data is due to the particle background \citep{Hickox2006ApJ...645...95H}. \begin{table*} \centering \caption{Best-fit parameters of the broken power-law model (using PROFFIT) are listed below.} \label{tab:Table2} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccc} \hline \hline $\alpha1$ & $\alpha2$ & $r_\mathrm{sh}$(arcmin) & norm & Jump ($C$) & $\chi^2/D.o.f$ & $\mathcal{M}_\mathrm{SB}$\\ \hline 0.16 $\pm$ 0.14 & 2.01 $\pm$ 0.35 & 1.55 $\pm$ 0.01 & $(5.43 \pm 2.02)\times10^{-5}$ & 1.24 $\pm$ 0.25 & 15.80/18 (0.88) & $1.34^{+0.20}_{-0.19}$ \\ \hline \end{tabular} } \end{table*} Next, we removed point sources from the data by providing a SAOImage DS9 region file containing the point sources. These point sources were detected using the tool {\it wavdetect}, built into CIAO, in the 0.7 - 8.0 keV band with scales of 1, 2, 4, 8, and 16 pixels. These were further inspected visually for any false detections or for any real point sources that {\it wavdetect} failed to detect. Regions with point sources were removed from both the data and the blank-sky background files to avoid negative subtraction. After removing the point sources, we created light curves for the individual ObsId in the full-energy and 9.5 - 12 keV bands. Light curves were binned at 259 seconds per bin for the data as well as the blank-sky backgrounds, and count rates deviating by more than 3$\sigma$ (background flares) were removed using the \textit{deflare} tool. These steps produced calibrated and clean data, free from bad events as well as contaminating point sources. After cleaning the data, we used \textit{merge\_obs} with binning 4 to produce a surface brightness map.
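The ``blank-sky'' renormalisation described above reduces to scaling the background by the ratio of 9.5 - 12 keV count rates in the science and blank-sky data. A minimal sketch, with purely hypothetical count rates for illustration:

```python
def blanksky_scale(data_rate, bkg_rate):
    """Renormalisation factor for the blank-sky background, taken from the
    9.5-12 keV band where Chandra's effective area is negligible and all
    counts are particle background."""
    return data_rate / bkg_rate

# Hypothetical 9.5-12 keV count rates (cts/s) for science and blank-sky data:
scale = blanksky_scale(0.84, 0.90)
print(scale)  # the blank-sky spectrum normalisation would be scaled by ~0.93
```

The scaled blank-sky spectrum is then subtracted as the background in the spectral fits.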
The exposure-corrected, background- and point-source-subtracted 0.7 - 8.0 keV surface brightness image is shown in Figure \ref{fig: xray shock}. We performed spectral analysis using XSPEC version 12.9.1 \citep{Arnaud_1996ASPC..101...17A} in the 0.7 - 8.0 keV energy range. We used the \textit{CIAO} task \textit{dmextract} to create spectra from each region for each observation and \textit{specextract} to calculate the Auxiliary Response File (ARF) and Redistribution Matrix File (RMF). The APEC (Astrophysical Plasma Emission Code) and PHABS (PHotoelectric ABsorption) models were fitted to the spectra from each region using C-statistics \citep{Cash1979}. The metallicity was kept frozen at 0.3 $Z_\odot$ throughout the cluster, where $Z_\odot$ is the solar abundance \citep{Anders_1989GeCoA..53..197A}. The redshift (0.325; \citetalias{Barrena_2014MNRAS.442.2216B}) and $N_\mathrm{H}$ ($7.12 \times 10^{19}\mathrm{cm^{-2}}$; \citealt{HI4PI_2016A&A...594A.116H}) were also kept frozen. Only the APEC normalization and temperature were fitted for each region. \section{Results from X-ray Observation}\label{xray_results} Figure~\ref{fig: xray shock} shows that the X-ray emission from the cluster is slightly elongated in the N-NE S-SW direction, which implies a merging cluster (\citetalias{Barrena_2014MNRAS.442.2216B}). The cluster also hosts large-scale diffuse radio emission (a radio halo and a radio-bright edge/ridge). The contours of the radio edge are denser towards the southwest direction. These denser contours may have resulted from compression by a shock front. Therefore, to look for a possible hint of a cluster shock, we analyzed the cluster X-ray surface brightness profiles using PROFFIT v1.5 \citep{Eckert_2011A&A...526A..79E, Eckert_2016ascl.soft08011E}. \subsection{X-ray Surface Brightness Discontinuity:} To estimate any discontinuity in the surface brightness profile, we proceeded in the following way.
We created a number of concentric annuli over the wedge region with a $6\arcsec$ bin size. This bin size was chosen to get sufficient counts in each bin. We iteratively chose the wedge region to maximize the jump of the surface brightness discontinuity. The surface brightness profile over the wedge region was fitted with a broken power-law model (bknpow, built into PROFFIT v1.5). The broken power-law density model can be defined as \begin{equation} \begin{split} n(r) & = Cn_0\Big(\frac{r}{r_\mathrm{sh}}\Big)^{-\alpha1}, \quad r < r_\mathrm{sh}\\ & = n_0\Big(\frac{r}{r_\mathrm{sh}}\Big)^{-\alpha2}, \quad r > r_\mathrm{sh} \end{split} \end{equation} where $C$ is the compression factor, $n(r)$ is the electron number density, $n_0$ is the normalization constant, $r_\mathrm{sh}$ is the radial distance of the shock, $r$ is the distance from the center of the wedge region, and $\alpha1$ and $\alpha2$ are the power-law indices of the respective profiles. We found a discontinuity in the surface brightness profile over the wedge region (magenta) shown in Figure \ref{fig: xray shock}. The best-fit parameters and the profile are shown in Table~\ref{tab:Table2} and Figure~\ref{fig: xray shock} (right panel), respectively. The position of the discontinuity in surface brightness is shown with an arc between the T1 and T2 regions in the left panel of Figure \ref{fig: xray shock}. The broken power law was fitted with a reduced $\chi^2$ value of 0.88. \subsection{X-ray Temperature Jump:} To characterize the surface brightness discontinuity as a shock or a cold front, we estimated the temperature across the regions of the discontinuity. As the exposure of the \textit{Chandra} observation was short, we were not able to make a temperature profile; instead, we estimated temperatures from dedicated regions across the discontinuity (e.g., T1 and T2 in Figure \ref{fig: xray shock}). We iteratively chose the width of the regions to have threshold counts in the corresponding spectra.
We took regions of 41$\arcsec$ width, labeled T1 and T2 in the left panel of Figure \ref{fig: xray shock}. The temperature was estimated by fitting the thermal APEC and PHABS models as discussed in Section \ref{x-ray}. The estimated temperatures in the pre-shock or upstream region, $T_{1}$, and the post-shock or downstream region, $T_{2}$, are listed in Table \ref{tab:xray_temp}, where $T_{1}$ and $T_{2}$ are the temperatures from the regions T1 and T2, respectively, as labeled in Figure \ref{fig: xray shock} (left panel). We found that there is a significant jump in the temperature along a direction similar to that of the surface brightness discontinuity. Therefore, the presence of both discontinuities hints towards a shock front. \begin{center} \begin{table}[h!] \caption{\label{tab:xray_temp}Shock upstream (T$_{1}$) and downstream (T$_{2}$) temperatures and the Mach number from the temperature jump are listed below.} \begin{tabular}{ l c r } \hline \hline T$_{1}$(T$_{up}$) & T$_{2}$(T$_{down}$) & $\mathcal{M}_\mathrm{T}$ \\ \hline $3.57^{+2.38}_{-0.55}$ keV & $7.26^{+5.74}_{-0.79}$ keV & $1.96^{+1.36}_{-0.86}$\\ \hline \end{tabular} \end{table} \end{center} \subsection{X-ray Mach Numbers} Assuming the jumps correspond to a shock front, we calculated the Mach number using the Rankine-Hugoniot shock jump condition, given as \begin{equation} \frac{\rho_2}{\rho_1} = C = \frac{(1+\gamma)\times \mathcal{M}_\mathrm{SB}^2}{2+(\gamma - 1)\times \mathcal{M}_\mathrm{SB}^2} \end{equation} where $\rho_1$ and $\rho_2$ are the densities in the pre- and post-shock regions, respectively. For a monoatomic gas, taking $\gamma = 5/3$, we get \begin{equation} C = \frac{4\mathcal{M}_\mathrm{SB}^2}{3+\mathcal{M}_\mathrm{SB}^2} \end{equation} This gives a shock Mach number from the density jump across the shock edge of $\mathcal{M}_{SB} = 1.34^{+0.20}_{-0.19}$.
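The chain from fitted density profile to Mach number can be sketched compactly: the broken power-law model's jump at $r_\mathrm{sh}$ is the compression factor $C$, which the Rankine-Hugoniot conditions (the density jump above, and the temperature jump used in the next subsection) convert to a Mach number. A minimal sketch, using the best-fit values of Table \ref{tab:Table2} for illustration; note that for $\gamma = 5/3$ the density jump inverts analytically to $\mathcal{M}_\mathrm{SB} = \sqrt{3C/(4-C)}$, and the temperature jump reduces to a quadratic in $\mathcal{M}_\mathrm{T}^2$.

```python
import math

def bknpow_density(r, r_sh, n0, c, a1, a2):
    """Broken power-law density model: compressed (factor C) inside r_sh."""
    if r < r_sh:
        return c * n0 * (r / r_sh) ** (-a1)
    return n0 * (r / r_sh) ** (-a2)

def compression(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density jump C for a shock of Mach number M."""
    return (1 + gamma) * mach ** 2 / (2 + (gamma - 1) * mach ** 2)

def mach_from_compression(c):
    """Invert C = 4M^2/(3 + M^2) (Rankine-Hugoniot, gamma = 5/3)."""
    return math.sqrt(3.0 * c / (4.0 - c))

def mach_from_temperature_jump(t_ratio):
    """Solve (5M^2 - 1)(M^2 + 3)/(16 M^2) = T2/T1 for M (gamma = 5/3)."""
    b = 14.0 - 16.0 * t_ratio  # quadratic 5x^2 + b x - 3 = 0 in x = M^2
    return math.sqrt((-b + math.sqrt(b * b + 60.0)) / 10.0)

# Density jump across the discontinuity (alpha1 = 0.16, alpha2 = 2.01, C = 1.24;
# n0 and r_sh arbitrary here):
jump = (bknpow_density(0.999, 1.0, 1.0, 1.24, 0.16, 2.01)
        / bknpow_density(1.001, 1.0, 1.0, 1.24, 0.16, 2.01))
print(jump)                                      # ~1.24, the fitted jump
print(mach_from_compression(compression(1.34)))  # round trip recovers M = 1.34
print(mach_from_temperature_jump(7.26 / 3.57))   # ~1.96, the quoted M_T
```

The measured temperature ratio $T_2/T_1 = 7.26/3.57$ reproduces the quoted $\mathcal{M}_\mathrm{T} = 1.96$.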
\\ The Mach number from the temperature jump was calculated using the Rankine-Hugoniot temperature jump condition \begin{equation} \frac{T_2}{T_1} = \frac{(5\mathcal{M}_\mathrm{T}^2 - 1)(\mathcal{M}_\mathrm{T}^2 + 3)}{16\mathcal{M}_\mathrm{T}^2} \end{equation} where $T_1$ and $T_2$ are the pre-shock (upstream) and post-shock (downstream) temperatures, respectively (Table \ref{tab:xray_temp}). The Mach number from the temperature jump was found to be $\mathcal{M}_\mathrm{T} = 1.96^{+1.36}_{-0.86}$. \section{Discussion} \label{disc} \begin{figure} \includegraphics[width=0.49\textwidth]{final_figures/Figure5.pdf} \caption{Radio luminosity $P_{1.4\ \mathrm{GHz}}$ vs. Largest Linear Size (LLS) of relics adopted from the literature \citep{Nuza_2017MNRAS.470..240N}. The relic in A1351, marked with a red diamond, follows the observed trend and, given its size, is one of the most luminous relics. } \label{fig: lls_P} \end{figure} The diffuse radio emission in A1351 is quite asymmetric in nature, with a bright region in the southern part of the radio halo (Figure \ref{fig: diffuse}). Such radio-bright edges within radio halos have been noticed previously for a few clusters, but with smaller sizes (e.g. \citealt{Markevitch2005, Macario_2011, Shimwell_2014MNRAS.440.2901S,Wang_2018}). Here, the bright edge has an LLS of $\sim$570 kpc at 610 MHz and is situated at a distance of $\sim$470 kpc from the cluster center. This size and location indicate that the edge can be a relic. As mentioned in \citetalias{Barrena_2014MNRAS.442.2216B}, the cluster is undergoing a merger in the N-NE S-SW direction, where the passage of an axial shock can form the relic. The spectral index distribution in A1351 is in agreement with other reported cases where cluster radio shocks resulted in radio relics in the cluster peripheral region (see the review by \citealt{vanweeren_2019SSRv..215...16V}).
We compared the LLS and luminosity ($P_{1.4\ \mathrm{GHz}}$) of the relic in A1351 with other known relics from \citet{Nuza_2017MNRAS.470..240N} (Figure \ref{fig: lls_P}). We see that the relic in A1351 (marked with a red diamond) follows the observed trend in the $P_{1.4\ \mathrm{GHz}}$-LLS plot. Given its LLS, the relic is one of the most luminous relics, with a radio power $P_{1.4\ \mathrm{GHz}}$ = $4.46\times10^{24}$ W Hz$^{-1}$. In the case of A1351, the shock Mach number derived from the radio observation ($\mathcal{M}_{\alpha}$ = 2.05) is slightly higher than that obtained from the X-ray temperature jump ($\mathcal{M}_T$ = $1.96^{+1.36}_{-0.86}$) and density jump ($\mathcal{M}_{SB}$ = $1.34^{+0.20}_{-0.19}$). This discrepancy is expected, as the radio synchrotron emission depends on the amplitude of the magnetic field fluctuations. These fluctuations decrease more slowly than the density and temperature fluctuations, leading to the higher values of the radio-derived Mach number seen for most radio relics \citep{Domnguez_Fernndez_2020}. \citet{Wittor_2021MNRAS.506..396W} showed that the radio-derived Mach numbers are skewed towards the high end of the Mach number distribution, while the X-ray-derived Mach numbers reflect its average. Moreover, the discrepancy in the derived Mach numbers can also arise from the systematic errors that affect the different analyses. The radio estimates can suffer from errors in flux measurement due to shallow data or source contamination \citep{Hoang_2017MNRAS}. The measurement of the Mach number from the spectral index is also challenging. A Mach number measured using the injection index close to the shock front is theoretically more precise in determining the shock properties. However, it is more difficult to obtain, as it requires precise identification of the position of the shock. An observationally more robust approach is to use the volume-averaged spectral index.
This approach, however, is theoretically less precise, as it involves electrons of different ages across the relic \citep{Colafrancesco_2017MNRAS, Wittor_2021MNRAS.506..396W}. On the other hand, the influence of projection effects on X-ray analyses makes the radio findings a more robust technique \citep{Hong_2015ApJ,Akamatsu_2017A&A}. In the case of temperature estimation, there is the possibility of both underestimating the post-shock temperature and overestimating the pre-shock temperature, which can lead to an underestimation of the Mach number ($\mathcal{M}_T$). However, the temperature jump is still less subject to projection effects than the density jump, and thus the former gives a more reliable result \citep{Ogrean_2013MNRAS, Akamatsu_2017A&A}. In our case, the value of the shock Mach number obtained from the radio observation is quite consistent with the shock Mach number derived from the X-ray temperature jump. This supports the DSA mechanism of particle acceleration at the shock front. The spectral index gradient is also in accordance with that. There have been very few radio relics in the literature where the DSA mechanism of particle acceleration by weak shocks could be established \citep{Bourdin_2013ApJ...764...82B, shimwell_2015MNRAS.449.1486S, botteon2016elgordoMNRAS.463.1534B, Locatelli_2020MNRAS.496L..48L, Rajpurohit_2021arXiv210405690R}. The relic in A1351 is a viable candidate for supporting DSA. \subsection{Electron acceleration efficiency}\label{acc_eff} The LLS - P$_{1.4\ \mathrm{GHz}}$ plot (Figure \ref{fig: lls_P}) shows that the relic in A1351 is highly luminous, which would require a high acceleration efficiency for the thermal electrons. To check the consistency of the DSA model and the electron acceleration efficiency needed to produce the observed radio luminosity of A1351, we used the relationship described in \citet{Hoeft_2008MNRAS.391.1511H} and \citet{Locatelli_2020MNRAS.496L..48L}.
The acceleration efficiency $\zeta_e$ is related to the cluster magnetic field $B$, the X-ray properties, and the radio luminosity $P_\nu$ at frequency $\nu$ (with $P_\nu$ in erg s$^{-1}$ Hz$^{-1}$ and $B$ in $\mu$G) by the equation \begin{multline} \zeta_e = \frac{P_\nu}{6.4\times10^{34}} \frac{B^2+B_\mathrm{CMB}^2}{B^{1-\frac{\alpha}{2}}} \frac{\mathrm{Mpc^2}}{A}\Big(\frac{\nu}{1.4\ \mathrm{GHz}}\Big)^{\frac{-\alpha}{2}}\\ \Big(\frac{7\ \mathrm{keV}}{T_\mathrm{d}}\Big)^{\frac{3}{2}} \frac{10^{-4}}{n_\mathrm{ed}} \end{multline} \begin{figure} \includegraphics[width=0.47\textwidth]{final_figures/Figure6.pdf} \caption{The electron acceleration efficiency $\zeta_e$ plotted against the magnetic field at the shock location. The value of the magnetic field has been kept within a few $\mu$G.} \label{fig: xi} \end{figure} where $\alpha = -1.63$ is the spectral index, $T_\mathrm{d}$ is the downstream temperature $7.26^{+5.74}_{-0.79}$ keV, $n_\mathrm{ed}$ is the downstream electron density $0.26^{+0.12}_{-0.15} \times 10^{-3}$ cm$^{-3}$, and $B_{\rm CMB}$ is the equivalent magnetic field of the Cosmic Microwave Background at the cluster redshift. The surface area $A$ of the relic was taken as LLS $\times$ LLS, following \citet{Locatelli_2020MNRAS.496L..48L} and \citet{Rajpurohit_2020A&A...636A..30R}. $B_{\rm CMB}$ was evaluated at redshift 0.325 using equation \ref{eq9}, following \citet{Hoeft_2007MNRAS.375...77H}: \begin{equation} \label{eq9} B_\mathrm{CMB} = 3.24(1+z)^2\ \mu\mathrm{G} \end{equation} Figure \ref{fig: xi} shows that in the presence of a magnetic field of a few $\mu$G (0.5 - 5 $\mu$G), the efficiency $\zeta_e$ at the shock location varies from $\sim10\%$ to $\sim0.2\%$. This range of acceleration efficiency is comparable to other cases where the DSA model could explain the observed luminosity of the respective radio relics (e.g. A521, El Gordo; \citealt{Botteon2020A&A...634A..64B}).
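The efficiency calculation can be sketched numerically. The sketch below assumes a Hoeft-type normalisation of $6.4\times10^{34}$ erg s$^{-1}$ Hz$^{-1}$ with $B$ in $\mu$G (an assumption about units, since they are not fully spelled out in the formula); with the values quoted in the text it reproduces the $\sim10\%$ to $\sim0.2\%$ range of Figure \ref{fig: xi}.

```python
def b_cmb_microgauss(z):
    """Equation (9): equivalent CMB magnetic field, 3.24 (1+z)^2 uG."""
    return 3.24 * (1 + z) ** 2

def acc_efficiency(p_nu_erg, b_ug, z, alpha, t_d_kev, n_ed, area_mpc2,
                   nu_ghz=1.4):
    """Electron acceleration efficiency (Hoeft-type normalisation assumed;
    p_nu_erg in erg/s/Hz, b_ug in microgauss, n_ed in cm^-3)."""
    b_cmb = b_cmb_microgauss(z)
    return (p_nu_erg / 6.4e34
            * (b_ug ** 2 + b_cmb ** 2) / b_ug ** (1 - alpha / 2)
            * (1.0 / area_mpc2)
            * (nu_ghz / 1.4) ** (-alpha / 2)
            * (7.0 / t_d_kev) ** 1.5
            * 1e-4 / n_ed)

# A1351 values from the text: P = 4.46e31 erg/s/Hz (= 4.46e24 W/Hz),
# T_d = 7.26 keV, n_ed = 2.6e-4 cm^-3, A ~ (0.57 Mpc)^2.
for b in (0.5, 5.0):
    print(b, acc_efficiency(4.46e31, b, 0.325, -1.63, 7.26, 2.6e-4, 0.57 ** 2))
# efficiency runs from ~9-10% at 0.5 uG down to ~0.2% at 5 uG
```

At $z = 0.325$, $B_\mathrm{CMB} \approx 5.7\ \mu$G, so the inverse-Compton term dominates over synchrotron losses for all plausible relic fields here.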
The high luminosity and the weak shock with $\mathcal{M} \sim 2.05$ raise the possibility that DSA might not be the sole mechanism powering the relic and that shock re-acceleration might be involved as well. However, in the presence of a strong magnetic field, the acceleration of thermal electrons via DSA is still a possible option for the relic in A1351. Deeper observations are required to shed more light on the particle acceleration process. \subsection{Equipartition Magnetic Field}\label{B_eq} The cluster magnetic field can be approximated by assuming equipartition of energy between the cosmic-ray particles and the magnetic field (\citealt{Govoni_2004,Bonafede_2009,vanWeeren_2009,Parekh_2020MNRAS.499..404P,Pandge2022}). To quantify the magnetic field at the relic location in A1351, we assumed that the system satisfies the minimum energy density condition. The minimum energy density $u_\mathrm{min}$ is obtained using equation \ref{eq11}, following \citet{Govoni_2004}: \begin{equation} \label{eq11} u_\mathrm{min} = \xi(-\alpha,\nu_1, \nu_2)(1+k)^{\frac{4}{7}}(\nu_0)^{\frac{-4\alpha}{7}}(1+z)^{\frac{12-4\alpha}{7}} \Big(\frac{I_0}{d}\Big)^{\frac{4}{7}} \end{equation} where $\xi(-\alpha,\nu_1, \nu_2) = 1.57 \times 10^{-13}$ for a frequency range $\nu_1$ (10 MHz) - $\nu_2$ (100 GHz) and spectral index $\alpha$ = -1.63 \citep{Govoni_2004}, $\nu_{0}$ = 1400 MHz, $I_0$ was estimated by dividing the flux density (10.82 mJy) of the relic at 1.4 GHz by the solid angle of $85\arcsec \times 103\arcsec$, and the depth $d$ of the relic was obtained in kpc by taking the average of the largest (550 kpc at 1.4 GHz) and smallest (220 kpc at 1.4 GHz) linear sizes of the relic at 1.4 GHz. We assumed the ratio of the energy content of cosmic-ray protons and electrons to be $k = 1$ \citep{Parekh_2020MNRAS.499..404P}.
Thereafter, the equipartition magnetic field $B_\mathrm{eq}$ can be found using equation \ref{eq12}, \begin{equation} \label{eq12} B_\mathrm{eq} = \Big(\frac{24\pi}{7} u_\mathrm{min} \Big)^{1/2} \end{equation} The magnetic field found with this approach at the relic location in A1351 is $\sim$ 1.8 $\mu$G. A modified approach to computing the magnetic field strength of synchrotron radio sources ($B\arcmin_\mathrm{eq}$) is to consider upper and lower energy cut-offs for the cosmic-ray electrons rather than frequency cut-offs \citep{Brunetti_1997, Beck_2005AN....326..414B}. Assuming $\gamma_\mathrm{min} = 100\ (<< \gamma_\mathrm{max})$, the revised magnetic field is calculated using equation \ref{eq13}, \begin{equation}\label{eq13} B\arcmin_\mathrm{eq} \sim 1.1 \gamma_\mathrm{min}^{\frac{1+2\alpha}{3-\alpha}} B_\mathrm{eq}^\frac{7}{2(3-\alpha)} \end{equation} where $\gamma$ is the Lorentz factor and $B_\mathrm{eq}$ is the equipartition magnetic field obtained using equation \ref{eq12}. The revised magnetic field found for the A1351 relic is $\sim 5.3\ \mu$G. The equipartition approach relies on several assumptions. The particle energy distribution in the cluster is poorly known. Projection effects can influence the apparent extent of the relic, and the assumed depth of the relic can affect the derived value of the magnetic field. A better constraint on the magnetic field can be obtained through Faraday Rotation Measure (RM) studies. For a few radio relics, the magnetic fields derived from RM studies are in agreement with the equipartition estimates (e.g. the Coma cluster: \citealt{Bonafede2013}; MACSJ0717.5$+$3745: \citealt{Bonafede2009macs,Rajpurohit2022}). However, in a few cases the magnetic fields derived from these two methods differ (e.g. A2345: \citealt{Bonafede2009_A2345,Stuardi2021}; A3667: \citealt{Johnston-Hollitt2003PhD,deGasperin2021}). A polarisation study is needed to give a better estimate of the magnetic field.
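The revision of equation \ref{eq13} is sensitive to units: with $B_\mathrm{eq}$ expressed in Gauss, the quoted values are recovered. A minimal sketch of equations \ref{eq12} and \ref{eq13} (assuming CGS units, $\gamma_\mathrm{min} = 100$, $\alpha = -1.63$):

```python
import math

def b_eq_from_umin(u_min):
    """Equation (12): equipartition field (Gauss) from the minimum energy
    density u_min (erg cm^-3)."""
    return math.sqrt(24.0 * math.pi / 7.0 * u_min)

def revised_equipartition_b(b_eq_gauss, alpha, gamma_min=100.0):
    """Equation (13): revised field with a low-energy cutoff gamma_min.
    b_eq must be in Gauss for the numerical prefactor to apply (assumed)."""
    return (1.1 * gamma_min ** ((1 + 2 * alpha) / (3 - alpha))
            * b_eq_gauss ** (7.0 / (2 * (3 - alpha))))

b_eq = 1.8e-6                                # ~1.8 uG from equations (11)-(12)
u_min = 7.0 * b_eq ** 2 / (24.0 * math.pi)   # invert eq. (12) as a round-trip check
print(b_eq_from_umin(u_min) * 1e6)           # recovers 1.8 uG
print(revised_equipartition_b(b_eq, -1.63) * 1e6)  # ~5.3 uG, the quoted revised field
```

Starting from $B_\mathrm{eq} \approx 1.8\ \mu$G, the low-energy cutoff revision indeed yields $B\arcmin_\mathrm{eq} \approx 5.3\ \mu$G for this steep spectral index.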
\section{Summary} \label{concl} In this paper, we present the first analysis of GMRT 610 MHz and \textit{Chandra} X-ray data of A1351, together with a reanalysis of the VLA 1.4 GHz data, to understand the origin of the peculiar radio edge emission in the cluster. With the \textit{Chandra} data we searched for any possible hint of a shock at the location of the bright radio edge in A1351. Our findings are summarized below. \begin{itemize} \item From the radio observations, we measured total diffuse emission fluxes at 610 MHz and 1.4 GHz of $86.67 \pm 5.49$ mJy and $24.10 \pm 2.44$ mJy, respectively. The average spectral index of the cluster diffuse emission was found to be $\alpha_\mathrm{total} = -1.72 \pm 0.33$. The bright edge of the cluster has an LLS of $\sim$570 kpc at 610 MHz, a radio luminosity of $P_{1.4\ \mathrm{GHz}} = 4.46\times10^{24}$ W $\mathrm{Hz}^{-1}$, and an integrated spectral index $\alpha = -1.63 \pm 0.33$, giving a Mach number $\mathcal{M}_{\alpha}$ = 2.05. The spectral index map shows a spectral index gradient at the edge location. \item Using the \textit{Chandra} observation, we found the presence of a shock at the location of the edge. The shock Mach numbers derived from the X-ray temperature and density jumps are $1.96^{+1.36}_{-0.86}$ and $1.34^{+0.20}_{-0.19}$, respectively. \item Its large size, the gradient in the spectral index map, and the position of the X-ray shock support the interpretation of the edge as a radio relic. The relic follows the $P_{1.4\ \mathrm{GHz}}$--LLS trend of other observed relics from the NVSS survey and stands as one of the most powerful relics given its size. \item We found a magnetic field of $\sim 5.3\ \mu$G at the relic's location assuming equipartition. In the presence of this high magnetic field, it is very likely that the relic in A1351 originated from DSA of thermal electrons.
The formation of this highly luminous relic in the presence of a weak shock makes it one of the unique cases that can reveal interesting information about cluster-scale acceleration processes. \item Future deep X-ray observations can help us study the cluster's X-ray morphology in more detail. A future polarisation study can give a better understanding of the cluster magnetic field. Multi-frequency radio observations are needed to search for spectral curvature and to study this halo--relic system in more detail. \end{itemize} \section*{acknowledgements} We thank the anonymous reviewer for the comments and suggestions. We thank IIT Indore for providing the opportunity to carry out this research project. MR would like to thank DST for financial support through the INSPIRE fellowship program (IF160343). MR also acknowledges financial support from the Ministry of Science and Technology of Taiwan (MOST 109-2112-M-007-037-MY3). SC would like to thank Aishrila Mazumder and Gourab Giri for fruitful discussions. This research has made use of the data available in the GMRT and VLA archives. We thank the staff of the GMRT who have made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We also thank the NRAO staff for making the VLA observations possible. This research has used data obtained from the \textit{Chandra} Data Archive and the \textit{Chandra} Source Catalog and software provided by the \textit{Chandra} X-ray Center (CXC) in the application packages CIAO and Sherpa.
This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{Astropy_2013A&A...558A..33A,astropy_2018}, Matplotlib \citep{Matplotlib_Hunter:2007}, and APLpy, an open-source plotting package for Python \citep{APLpy_2012ascl.soft08017R}. \section*{Data Availability} The archival radio data used in our work are available in the GMRT Online archive (\url{https://naps.ncra.tifr.res.in/goa/data/search}, project code 17\_019), the VLA Data Archive (\url{https://archive.nrao.edu/archive/advquery.jsp}, project codes AB0699, AO0149 and AM0469) and the Chandra data archive (\url{https://cda.harvard.edu/chaser/}, ObsId 15136).
\section*{Acknowledgment} We would like to thank Rohun Kulkarni and Margaret Tung for helping with data collection. Ajay Mandlekar acknowledges the support of the Department of Defense (DoD) through the NDSEG program. We acknowledge the support of Toyota Research Institute (``TRI''); this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. } \renewcommand*{\bibfont}{\footnotesize} \printbibliography \end{document} \section{Introduction} \label{sec:intro} Imitation learning (IL) is a powerful paradigm for teaching robots to perform manipulation tasks by allowing them to learn from expert demonstrations~\cite{pomerleau1989alvinn}, but IL has mostly been limited to single-arm manipulation tasks~\cite{zhang2017deep, mandlekar2020learning}. By contrast, many real-world manipulation tasks require multiple robot arms to operate simultaneously, such as lifting a heavy object, passing an object from one arm to the other, or assembling a desk. However, only a limited number of works~\cite{zollner2004programming, gribovskaya2008combining, silverio2015learning} have tried to apply IL techniques to multi-arm manipulation tasks, mainly due to the difficulty of collecting single-operator demonstrations in this setting. Asking a human to control more than one robotic arm simultaneously can impose a significant cognitive burden~\cite{orun2019effect} and is typically feasible for at most two robotic arms. Furthermore, such systems can require sophisticated human-control interfaces~\cite{lipton2017baxter, laghi2018shared}, such as Virtual Reality devices that are not widely available, consequently limiting the set of users that can participate in data collection.
\begin{figure} \setlength{\fboxrule}{1pt} \setlength{\fboxsep}{0pt} \centering \begin{subfigure}{0.23\textwidth} \centering \fbox{\includegraphics[width=\columnwidth]{figures/uncoordinated-subtask-no-background.jpg}} \end{subfigure} \hfill \begin{subfigure}{0.23\textwidth} \centering \fbox{\includegraphics[width=\columnwidth]{figures/coordinated-subtask-no-background.jpg}} \end{subfigure} \caption{\textbf{Multi-Stage Multi-Arm Manipulation with Mixed Coordination.} Table assembly is a canonical example of a multi-stage mixed coordinated task, where each arm must complete an independent, parallelized column assembly subtask \textit{(left)}, after which each arm must coordinate to lift and align the tabletop component to complete the task \textit{(right)}. We build a system that allows for remote teleoperators to collaboratively collect task demonstrations on such multi-stage multi-arm manipulation tasks.} \label{fig:pullfig} \vspace{-15pt} \end{figure} To address these limitations, we present \textsc{Multi-Arm RoboTurk}\xspace (MART\xspace), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms and collect demonstrations for multi-arm tasks. MART\xspace addresses the limitations of prior multi-arm systems because it frees users from cognitive burden by having each user control only a single arm, allowing demonstration collection for multi-arm tasks while only requiring users to have access to a smartphone and web browser. Thus, MART\xspace lowers the barriers to entry for exploring the wider taxonomy of multi-arm tasks, and allowed us to collect demonstrations for five novel two-arm and three-arm tasks from users physically separated by thousands of kilometers. After collecting and analyzing human demonstration data from these tasks, we gained the following critical insight: most multi-arm tasks do not require global coordination throughout their full duration.
Consider a table assembly task (Fig~\ref{fig:pullfig}) in which each leg can be assembled independently but the tabletop requires coordinated execution during alignment. Is coordination explicitly necessary throughout? To explore this claim, we performed extensive experiments training state-of-the-art IL variants with different levels of centralized and distributed control, representing explicit coordination and fully decoupled execution, respectively. We \textit{a priori} expected that centralized versions would best be able to coordinate actions from multiple arms and outperform the other variants. However, we observed that centralized agents perform poorly across several tasks compared to distributed variants. We hypothesize this may be caused by the centralized agent ``hallucinating'' incorrect correlations between arms from the limited set of demonstrations, rendering the task harder than it really is. While distributed agents do not suffer from this limitation, we observed that they can struggle to learn sections of a task where more than one arm needs to synchronize to accomplish the goal. To address both of these issues, we propose a method for directly modeling both centralized and decoupled policies via a base-residual model trained in a two-step process. Our guiding intuition is that the choice of base policy architecture can dictate either fully coordinated or fully decoupled dominant behavior, while the residual policy can encourage the resulting composite policy to exhibit desired complementary traits. The composite policy mitigates overfitting in the centralized base policy case via a decentralized residual architecture, and improves coordination in the decentralized base policy case via a centralized residual architecture. Our experiments demonstrate that using this augmented policy structure outperforms baselines that are fully centralized or decentralized across all of our benchmark tasks that require mixed coordination.
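As a concrete illustration, the base-residual composition can be sketched as follows; the toy policies and the clipping threshold here are illustrative assumptions, not our trained networks, and the norm constraint mirrors the small-perturbation bound we place on the residual output.

```python
import numpy as np

def composite_action(base_policy, residual_policy, state, eps=0.05):
    """Compose a pretrained base policy with a small residual correction.
    The residual's L2 norm is clipped to eps so it can only perturb, not
    dominate, the base behavior. (eps and the policies are illustrative.)"""
    a_base = base_policy(state)
    delta = residual_policy(a_base, state)
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta *= eps / norm  # enforce ||delta||_2 <= eps
    return a_base + delta

# Toy stand-ins, e.g. a decentralized base with a centralized residual.
base = lambda s: np.tanh(s)
residual = lambda a, s: 0.5 * (s - a)
state = np.array([0.2, -1.0, 0.4])
action = composite_action(base, residual, state)
```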
In summary, our contributions are as follows: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item We present \textsc{Multi-Arm RoboTurk}\xspace (MART\xspace), a scalable multi-agent data collection system that allows us to gather demonstrations on diverse multi-arm tasks from humans remotely located via an easy and intuitive interface, lowering the barriers to entry for exploring the wider taxonomy of multi-arm tasks. \item We provide a set of novel realistic multi-arm benchmark tasks ranging from the fully decoupled to fully coordinated setting that allow us to analyze these emergent mixed coordination properties, including a three-arm task that, to our knowledge, is the first of its kind. \item We collect and evaluate human demonstrations on simulated versions of our tasks\footnote{Our system can be used ``as is'' for the collection with real-world robots, similar to Mandlekar et al.~\cite{mandlekar2019scaling}. However, due to the current COVID measures, we could not get access to our robots.} against multiple baselines, and show that fully centralized or decentralized policy models suffer during tasks requiring mixed coordination. \item We propose and evaluate a base-residual policy framework that allows policy models to better adapt to the mixed coordination setting, and show that policies augmented with this model are able to outperform all prior baselines across all of our tasks. \end{enumerate} \section{Related Work} \label{sec:related} \begin{figure}[t!] \centering \vspace{1mm} \includegraphics[width=0.9\linewidth]{figures/system_design.pdf} \caption{\textbf{Multi-Arm RoboTurk System Diagram.} Our system enables multiple remote users physically separated by thousands of kilometers to collaboratively teleoperate robot arms and collect multi-arm task demonstrations.
Each operator uses their smartphone to control one robot arm and receives a video stream, tailored to a specific robot arm viewpoint, in their web browser.} \label{fig:system-diagram} \vspace{-15pt} \end{figure} \textbf{Multi-Agent Reinforcement Learning:} Multi-Agent Reinforcement Learning~\cite{tan1993multi, busoniu2008comprehensive} in cooperative settings has been widely studied~\cite{lowe2017multi, foerster2016learning, mataric1997reinforcement, sukhbaatar2016learning, foerster2017counterfactual, jiang2018learning}, and applied to domains such as video games~\cite{peng2017multiagent} and visual question answering~\cite{das2017learning}. Exploration in such settings can be more burdensome than in the single-agent setting due to the larger action space and dependence between agent actions. \input{fig-model_architecture} \textbf{Multi-Agent Imitation Learning:} Most work in Multi-Agent Imitation Learning~\cite{song2018multi, le2017coordinated, vsovsic2016inverse, bogert2014multi} focuses on the paradigm of Inverse Reinforcement Learning~\cite{abbeel2004apprenticeship, abbeel2011inverse}, in which multi-agent demonstrations are used to infer a reward function, and the reward function is optimized via Reinforcement Learning (RL). However, this can require extensive agent interaction due to the RL process. Chernova et al.~\cite{chernova2007multiagent} have also explored multi-agent imitation learning in an interactive setting, where humans can provide corrective actions to the agent, but the method was demonstrated on simple 2D domains. Instead, we focus on Behavioral Cloning (BC)~\cite{pomerleau1989alvinn}, a common approach for imitation learning that trains a policy from a demonstration dataset in an offline manner. While centralized and decentralized structures for policies and reward functions have been studied extensively in the multi-agent IRL setting~\cite{song2018multi}, they have not been explored significantly in BC settings.
In general, learning from multi-arm demonstrations on manipulation tasks remains largely unexplored. \textbf{Bimanual Robot Manipulation:} Bimanual manipulation is a practical problem of great interest~\cite{smith2012dual}. Reinforcement Learning (RL) has been applied to bimanual manipulation tasks~\cite{kroemer2015towards, amadio2019exploiting, chitnis2020efficient, chitnis2020intrinsic}, but RL methods must deal with the increased burden of exploration due to the presence of two arms. Prior work has tried to address the exploration burden by assuming access to parametrized skills such as reaching and twisting~\cite{chitnis2020efficient}, by encouraging efficient exploration via intrinsic motivation~\cite{chitnis2020intrinsic}, and by leveraging movement primitives from human demonstrations~\cite{amadio2019exploiting}. RL in this setting has mainly been limited to short-horizon single-stage tasks such as twisting a bottle cap. By contrast, in our work, by collecting human demonstrations, we are able to circumvent the exploration burden and train performant policies on challenging, multi-stage, multi-arm manipulation tasks. Imitation Learning (IL) on bimanual tasks is less common. Some prior works~\cite{zollner2004programming, gribovskaya2008combining, silverio2015learning} have leveraged the paradigm of programming by demonstration (PbD), but these approaches often require extensive modeling assumptions, and may not generalize well to different environment configurations. Systems allowing for bimanual teleoperation are relatively uncommon. Laghi et al.~\cite{laghi2018shared} built a system that allows a user to simultaneously control two robot arms using special sensors that track the user's arms. Lipton et al.~\cite{lipton2017baxter} built a system that allows a remote teleoperator to control a bimanual Baxter robot using a Virtual Reality (VR) interface.
Unlike MART\xspace, neither of these systems is suitable for multi-arm settings with more than two arms, and both rely on special-purpose hardware that is not widely available, restricting the set of people that can use the system. Bimanual manipulation has also been studied in the context of assistive settings~\cite{edsinger2007two}. \section{Preliminaries} \label{sec:problem} We formalize the problem of solving a robot manipulation task as an infinite-horizon discrete-time Markov Decision Process (MDP), $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{T}, R, \gamma, \rho_0)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{T}(\cdot | s, a)$ is the state transition distribution, $R(s, a, s')$ is the reward function, $\gamma \in [0, 1)$ is the discount factor, and $\rho_0(\cdot)$ is the initial state distribution. At every step, an agent observes $s_t$, uses a policy $\pi$ to choose an action, $a_t = \pi(s_t)$, and observes the next state, $s_{t+1} \sim \mathcal{T}(\cdot | s_t, a_t)$, and reward, $r_t = R(s_t, a_t, s_{t+1})$. The goal is to learn a policy $\pi$ that maximizes the expected return: $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})]$. We tackle the problem of multi-robot manipulation; we assume this corresponds to a factorization of the states and actions for each robot $s = (s^1, s^2, \dots, s^n)$, $a = (a^1, a^2, \dots, a^n)$. In this setting, we define a \textit{centralized} agent as an agent that uses the entire state, $s$, to generate an action, $a$, for all robots, and a \textit{decentralized} agent as an agent that generates each robot-specific action, $a^i$, by only using the corresponding robot observation, $s^i$. Consequently, a centralized agent uses the observation from all robot arms to jointly determine each robot's action, while a decentralized agent independently generates each robot action without considering observations from the other robot arms.
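To make the centralized/decentralized distinction concrete, the sketch below contrasts the two factorizations using toy linear policies; the dimensions and weight matrices are illustrative assumptions, not our actual models.

```python
import numpy as np

n_arms, obs_dim, act_dim = 2, 4, 3
rng = np.random.default_rng(0)
s = [rng.standard_normal(obs_dim) for _ in range(n_arms)]  # s = (s^1, ..., s^n)

# Centralized: one policy maps the full joint state to all arm actions.
W_cen = rng.standard_normal((n_arms * act_dim, n_arms * obs_dim))
a_cen = W_cen @ np.concatenate(s)

# Decentralized: arm i's action a^i depends only on its own observation s^i.
W_dec = [rng.standard_normal((act_dim, obs_dim)) for _ in range(n_arms)]
a_dec = np.concatenate([W_dec[i] @ s[i] for i in range(n_arms)])
```

Perturbing $s^1$ changes every entry of the centralized action but only the first arm's block of the decentralized one, which is exactly the independence the decentralized agent imposes.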
As our goal is to leverage demonstrations gathered from our novel system, we now briefly review offline imitation learning methods that can be used to learn from human demonstrations. Behavioral Cloning (BC)~\cite{pomerleau1989alvinn} is a common and simple method for learning from a set of demonstrations $\mathcal{D}$. It trains a policy $\pi_{\theta}(s)$ to learn the actions in the demonstrations with the objective: $\arg\min_{\theta} \mathbb{E}_{(s, a) \sim \mathcal{D}} ||\pi_{\theta}(s) - a||^2$. Hierarchical Behavioral Cloning (HBC) seeks to learn hierarchical policies that encourage temporal abstraction and can be a better way to learn from offline human demonstrations~\cite{mandlekar2020iris, mandlekar2020learning}. HBC consists of a low-level policy that is conditioned on future observations $s_g \in \mathcal{S}$ (termed \textit{subgoals}) and learns sequences of actions that can be used to achieve them, and a high-level policy that predicts future subgoals given a current observation. The low-level policy is a subgoal-conditioned recurrent neural network (RNN) $\pi_L(s , s_g)$ that is trained on $T$-length temporal state-action sequences to produce an action sequence $a_t, \dots, a_{t + T - 1}$, conditioned on the state sequence $s_t, \dots, s_{t + T - 1}$, and the subgoal $s_{t + T}$. The high-level policy $\pi_H(s)$ is trained to predict subgoal observations $s_{t + T}$ that are $T$ timesteps in the future from the current observation $s_t$, and is often a conditional Variational Autoencoder (cVAE)~\cite{kingma2013auto} that learns a conditional distribution $\pi_H(s_{t + T} | s_t)$~\cite{mandlekar2020iris, mandlekar2020learning}. \section{MART: Multi-Arm RoboTurk} \label{sec:system} In this section, we first review the RoboTurk platform, and then show how we extended it to develop MART\xspace (Fig.~\ref{fig:system-diagram}). 
\subsection{RoboTurk Overview} RoboTurk~\cite{mandlekar2018roboturk, mandlekar2019scaling} is a platform that allows remote users to collect real or simulated task demonstrations through low-latency teleoperation. Users log in to a website with a real-time video stream of the robot workspace from their robot's unique vantage point, and control their robot's end effector using their smartphone as a 6-DoF motion controller. \input{fig-tasks} To facilitate low-latency video streaming to each user's web browser, the platform leverages Web Real-Time Communication (WebRTC) to establish low-latency communication links between a user's web browser, smartphone, and the remote teleoperation server which interfaces with the robot environment. We summarize the main platform components: \textit{Teleoperation Server:} A process dedicated to a single user that interfaces with the user endpoint and the robot. It maintains its own robot simulator instance and two WebRTC connections -- one to the user's phone, and another to the user's web browser. It uses the first connection to receive phone commands and control the robot arm and the second connection to send rendered frames of the robot workspace to the user's web browser. \textit{User Endpoint:} The user views a video stream of the workspace in their web browser and controls the robot arm by moving their smartphone in free space. The phone pose is mapped to an end effector command. \subsection{Extending RoboTurk for Collaborative Teleoperation} Extending RoboTurk to incorporate multiple robotic manipulators and enable real-time user collaboration required important system design considerations (Fig.~\ref{fig:system-diagram}). \textit{Collaborative Teleoperation:} To enable multiple users to control robot arms in the same workspace, we extended the teleoperation server to maintain multiple communication channels -- two per user, one to each user's phone and the other to each user's web browser.
The server receives phone commands from each user and uses synchronization logic to determine when to send commands to the simulated robot arms (described below). It also renders user-specific viewpoints from cameras in the workspace (see Fig.~\ref{fig:benchmark_tasks}) and sends each to the corresponding user's web browser. \textit{Robot Command Synchronization:} To facilitate teleoperation that feels natural, we would like users to perceive that simulation is real-time (e.g., 1 second of simulation time takes 1 second). However, robot simulation is discrete-time, and requires control inputs for all robot arms before it can proceed. Unfortunately, controlling multiple arms in a single simulation from multiple phones creates a synchronization issue because of variable latency in each user's network connection. Phone commands from the different users can be received by the teleoperation server at different rates and different times. To address this issue, we wait for new phone messages to be received on all phone connections before actuating all robot arms and proceeding to the next timestep. We found this synchronization to be extremely helpful in ensuring that each user perceives simulation to run in real-time. \section{Learning Mixed Coordination} \label{sec:algorithm} After collecting and analyzing demonstrations gathered with MART\xspace, we observed that most multi-arm tasks do not require global coordination throughout their full duration, and instead only require coordination during specific subtask segments. Centralized policies that directly model the full joint state-action mapping are liable to overfit in sections that do not require coordination.
To better address the problem of learning from these mixed-coordination demonstrations, we develop several variants of HBC (Fig~\ref{fig:algos}a) that combine centralized and decentralized components, as described below. \textbf{Full Decentralization (d-HBC):} We consider per-arm policy models and partition our collected demonstrations into arm-specific observations and actions (Fig~\ref{fig:algos}b). This means training high-level policies $\pi^1_H(s^1), \dots, \pi^n_H(s^n)$ and low-level policies $\pi^1_L(s^1, s_g^1), \dots, \pi^n_L(s^n, s_g^n)$ -- one per robot arm. This architecture is fully decentralized as each set of policies generates an arm action purely from that arm's observation, disregarding other arms completely. \textbf{Partial Decentralization (d[h/l]-HBC):} We outline a simple modification to HBC that allows for \textit{partial} decentralization. We establish two variants by factorizing either (1) the high-level policy or (2) the low-level policy to be decentralized. Notice that this is a compromise between centralized HBC, where nothing is factorized, and decentralized HBC (d-HBC), where both are factorized. In dh-HBC (Fig~\ref{fig:algos}c), the high-level policy is decentralized -- $n$ high-level policies produce subgoals $s_g = (s_g^1, \dots, s_g^n)$ which are fed to a centralized low-level policy $\pi_L(s, s_g)$. In dl-HBC (Fig~\ref{fig:algos}d), the high-level policy is centralized and the low-level policy is decentralized -- $n$ low-level policies produce arm actions $(a^1, \dots, a^n)$. \textbf{Mixed Coordination with Residual Learning (r[d]-HBC):} A more nuanced approach is to endow a pretrained policy with desired properties through a separate residual network that perturbs its action.
In this way, we can choose complementary architectures that help mitigate the underlying pitfalls of the base policy architecture -- thus, if the base policy is centralized, then we provide agent-specific residual networks to reduce overfitting and encourage greater generalization. Conversely, we can provide a centralized residual network for a decentralized base policy to facilitate coordination in sections of the task that may need it. Concretely, given an action from a pretrained policy $\bar{a} = \pi(s)$, our residual network $\rho(\bar{a}, s)$ takes this action and the state as input, and outputs a small correction to the action: \begin{align} \label{eq:res} a = \bar{a} + \delta, \quad \delta = \rho(\bar{a}, s), \quad ||\delta||_2 < \epsilon, \: \text{$\epsilon$ small} \end{align} where we constrain the L2 norm of the perturbation to be smaller than $\epsilon$ to prevent the residual network from dominating the overall policy behavior. This results in two variants -- r-HBC (Fig~\ref{fig:algos}e), where we train a decentralized HBC base policy and then learn a centralized residual network, and rd-HBC (Fig~\ref{fig:algos}f), where we train a centralized HBC base policy and then learn a decentralized residual network. \section{Experimental Setup} \label{sec:exp} In this section, we describe our benchmark multi-arm tasks and our data collection setup. \textbf{Tasks:} All tasks were designed using MuJoCo~\cite{todorov2012mujoco} and the robosuite framework~\cite{robosuite2020} (see Fig.~\ref{fig:benchmark_tasks}). All robot arms are controlled using Operational Space Controllers~\cite{khatib1987unified}. Observations contain per-robot end-effector pose and task-specific object information. For decentralized setups, we partitioned the state space based on information relevant to each agent. {\textit{Two Arm Multi-Cube Lifting:}} Two robot arms must lift two blocks placed on a table.
This pedagogical task is fully \textit{decoupled}, since each arm can lift a block independently. {\textit{Two Arm Drink Tray Lifting:}} Two robot arms must lift and hold a tray for 1.5 seconds without tipping the drinks on the tray over. This pedagogical task represents the fully \textit{coordinated} case, where each arm must consider the other's actions in order to carefully lift and stabilize the tray. {\textit{Two Arm Assembly:}} Two robot arms must assemble a hospital bed composed of a base, two columns, and a tabletop. The arms need to place the columns in the base and then coordinate to lift and align the tabletop over the columns. This task is challenging for several reasons -- it is multi-stage and requires fine-grained manipulation for assembling the columns and table, with varying levels of coordination over the task. The columns can be assembled independently by each arm, but the tabletop assembly requires coordination. {\textit{Two Arm Pick-Place Handover:}} Two robot arms must work together to transfer a hammer from a closed container on a shelf to a target bin on another shelf. One robot arm must retrieve the hammer from the closed container, while the other arm must simultaneously clear the target bin by moving a cube (trash) to a nearby receptacle. Finally, one arm hands the hammer over to the other arm to place in the target bin. This task is challenging because it is multi-stage and contains subtasks that require different levels of coordination. {\textit{Three Arm Lift Wiping:}} A dirty tabletop must be cleaned, but has a tray of drinks on top of it. Two arms must lift and move the tray without spilling the drinks while a third arm wipes the patch of dirt on the table underneath. Solving this task requires asymmetrical coordination -- two arms must coordinate to move the tray out of the way without spilling the drinks while the third arm can operate in parallel, wiping the tabletop when the tray is cleared.
\textbf{Data Collection:} We collect a set of experienced-user demonstrations on all five novel tasks, as well as additional demonstrations on our three mixed-coordination tasks from multiple user groups with varying levels of experience as part of a user study. Our user study consists of three unique user pairs for the two-arm tasks, and two unique groups of three for the three-arm task, with each dataset consisting of roughly 50-100 successful demonstrations. \input{table-user_study_demonstrations} \section{Results} \label{sec:results} In this section, we analyze our novel contributions and show that (a) users can effectively coordinate using MART, and (b) our residual framework is able to outperform all other baseline models across all of our multi-arm tasks. \subsection{System Analysis: Do operators have a hard time with coordination?} \input{table-algo_benchmarks} \input{table-algo_user_study} Since the coordinated subtasks require implicit communication between operators and are more subject to system issues such as latency, we expect coordination to be the major bottleneck of collecting successful demonstrations. To quantify whether coordination was an issue, we examine the difficulty of our tasks by evaluating the marginal degradation that each type of subtask contributes to the operator task completion rate. The Assembly and Pick-Place Handover tasks both begin with an uncoordinated subtask followed by a coordinated subtask. We therefore measure the marginal degradation of the uncoordinated subtask as the difference between its best possible success rate (100\%) and the uncoordinated subtask success rate. The marginal degradation of the coordinated subtask is the difference between its best possible success rate (i.e., the uncoordinated subtask success rate) and the coordinated subtask success rate.
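The marginal degradation computation just described can be sketched as follows; the success rates in the example are illustrative placeholders, not the measured values from Table~\ref{table:coordination}:

```python
def marginal_degradation(cumulative_rates):
    """Given cumulative success rates for an ordered list of subtasks
    (each entry is the fraction of attempts completing that subtask and
    all earlier ones), return each subtask's marginal degradation: the
    drop from the best rate still achievable when that subtask begins."""
    degradations = []
    best_possible = 1.0  # the first subtask could in principle always succeed
    for rate in cumulative_rates:
        degradations.append(best_possible - rate)
        best_possible = rate  # later subtasks can do no better than this
    return degradations

# e.g. 90% of attempts finish the uncoordinated stage and 60% also
# finish the coordinated stage -> degradations of 10% and 30%.
degradations = marginal_degradation([0.90, 0.60])
```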
For the Lift Wiping task, since the order of the subtasks is reversed, with the coordinated subtask followed by the uncoordinated one, we reverse the order of the calculations. Table~\ref{table:coordination} demonstrates that for the two-arm tasks, the marginal degradation of uncoordinated subtasks was higher than that of coordinated subtasks by roughly $20\%$, meaning that operators failed more frequently on the uncoordinated subtask sections. For the three-arm task we see that the degradation rate for the coordinated subtask is slightly higher ($9\%$). Taken together, these results show that coordination does not pose a significant barrier to operators for completing a task demonstration successfully, highlighting that MART\xspace is suitable for collecting collaborative task demonstrations despite operators being physically separated by large distances. \subsection{Data Analysis} We evaluate all models on experienced-user demonstrations collected for all tasks, as seen in Table~\ref{task_benchmark_algo_results}. We also evaluate a subset of models on demonstrations collected during our user study, presented in Table~\ref{task_user_study_algo_results}. We record the best checkpoint rollout success rate over the course of training, and report the mean and standard deviation across five random seeds. \textbf{Are centralized and decentralized variants of standard IL methods sufficient for learning from multi-arm task demonstrations?} We first discuss our two single-stage tasks. d-HBC outperforms HBC by a wide margin ($84.5\%$ vs. $38.5\%$) on the Multi-Cube Lifting task. This is expected, since human operators lifted their own cubes independently. Interestingly, d-HBC and HBC perform comparably on the Drink-Tray Lifting task.
We hypothesize that this is because the task is short-horizon and the demonstrators grasped each handle at roughly the same time, allowing each independent agent in d-HBC to just focus on grasping its handle and lifting, independent of the other agent. Indeed, on the longer-horizon Three Arm Lifting Wiping task, where the arms must coordinate to lift and move the tray for longer periods of time, we see HBC outperform d-HBC ($83.7\%$ vs. $50.0\%$). On the Handover task, d-HBC slightly outperforms HBC ($24.4\%$ vs. $16.0\%$). This might be because significant portions of the Handover task do not require the arms to be aware of each other's actions. On the Assembly task, both perform poorly ($\sim 5\%$). Based on these results, we conclude that for our more challenging multi-stage tasks, neither d-HBC nor HBC consistently outperforms the other. We also note that the BC-RNN baseline performs poorly across all tasks compared to HBC and the other variants, highlighting the substantial benefits of hierarchy in the multi-arm setting. \textbf{Can partially decentralized hierarchical models sufficiently capture mixed coordination properties to better succeed at multi-arm tasks?} Our naive variations dh-HBC and dl-HBC perform at best marginally better than the lowest-performing centralized or decentralized HBC baseline, and sometimes perform worse than both baselines, as in the Drink-Tray Lifting ($<70\%$) and Pick-Place Handover ($<16\%$) tasks. These results highlight how mixed-coordination settings cannot easily be solved with naive approaches. \textbf{Can our proposed residual framework better capture mixed coordination properties to improve policy performance on multi-arm tasks?} In contrast to the partially decentralized baselines, our residual models r-HBC and rd-HBC consistently outperform all baselines across all of our tasks.
We hypothesize that because our residual model allows for small action perturbations, our framework can produce a policy that endows the base policy with complementary behavior in states that incur high action error, without compromising base policy behavior in well-fit states. The consistent performance improvements exhibited by our residual-augmented policies highlight the potential of our framework to be applied to a wide range of multi-arm tasks with varying levels of mixed coordination, from the highly coordinated instance (Three Arm Lifting Wiping) to the weakly coordinated case (Two Arm Pick-Place Handover). We also observed that rd-HBC performed best in short-horizon tasks such as Drink-Tray Lifting ($86.7\%$ vs. $75.3\%$), whereas r-HBC outperformed it in the more complex, multi-stage tasks such as Lifting Wiping ($94.0\%$ vs. $58.6\%$), highlighting how inductive bias still plays a major role in choosing a suitable base policy that may lead to the best success rates. \textbf{How robust is our proposed residual framework to varying demonstration quality?} We expect model performance to degrade as demonstration quality decreases with less-experienced operators, and find that our r-HBC model still performs as well as or better than our other baselines in that condition ($17.3\%$ vs. $9.3\%$ for Pick-Place Handover, $86.7\%$ vs. $71.3\%$ for Lifting Wiping). This shows that our proposed model is robust enough to improve performance despite noisy training signals, and can learn from a diverse distribution of demonstrations. \textbf{What are the limitations of the proposed residual framework?} While our residual framework has shown promising results in improving current IL methods for multi-arm tasks, we observe room for improvement, especially in the more challenging tasks such as Assembly and Pick-Place Handover.
While we defer this to future work, we highlight MART\xspace as a means of conveniently gathering the data necessary to explore the novel emergent properties underlying such multi-arm tasks. \section{Conclusion} \label{sec:conclusion} We introduced MART\xspace, a scalable teleoperation system for gathering real-time multi-arm manipulation task demonstrations, and showed that IL methods can leverage this data to train performant policies over a wide range of realistic and novel multi-arm tasks requiring varying degrees of collaboration. We also explored potential methods for better modeling mixed-coordination policies, and showed that a residual-augmented framework is able to outperform all of our other baselines on our tasks. Imitation learning for multi-arm manipulation has been limited by the difficulty of collecting demonstrations, but we are excited by the prospect of MART\xspace lowering this barrier and enabling further research in this setting.
\section{Introduction} Predicting the behavior of discrete dynamical systems is, in general, both the ``most wanted'' and the hardest task. Moreover, the difficulty does not decrease when considering finite phase spaces. Indeed, when the system is not solvable, numerical simulation is the only way to compute future states of the system. In this paper we consider the well-known discrete dynamical system of sandpiles (SPM). Roughly speaking, its dynamics is as follows. Consider the toppling of grains of sand on a (clean) flat surface, one by one. After a while, a sandpile has formed. At this point, the simple addition of even a single grain may cause avalanches of grains to fall down along the sides of the sandpile. Then, the growth process of the sandpile starts again. Remark that this process can be naturally extended to arbitrary dimensions, although for $d>3$ the physical meaning is not clear. The first complexity results about SPM appeared in~\cite{gm1,gm2}, where the authors proved the computation universality of SPM. For that, they modelled wires and logic gates with sandpile configurations. Inspired by these constructions, C.~Moore and M.~Nilsson considered the \emph{prediction problem} (PRED) for SPM, \emph{i.e.}\@\xspace the problem of computing the stable configuration (fixed point) starting from a given initial configuration of the sandpile. C.~Moore and M.~Nilsson proved that PRED is in $\textsf{NC}\xspace^3$ for dimension $1$ and that it is \textsf{P}\xspace-complete for $d\geq 3$, leaving $d=2$ as an open problem \cite{moore99}. (Recall that \textsf{P}\xspace-completeness plays for parallel computation a role comparable to \textsf{NP}\xspace-comp\-lete\-ness for non-deterministic computation. It corresponds to problems which cannot be solved efficiently in parallel (see~\cite{ghr95}) or, equivalently, which are \textit{inherently sequential}.)
Later, P.B.~Miltersen improved the bound for $d=1$ by showing that PRED is in \textsf{LOGDCFL}\xspace ($\subseteq\textsf{AC}\xspace^1$) and that it is not in $\textsf{AC}\xspace^{1-\epsilon}$ for any $\epsilon>0$ \cite{miltersen07}. Therefore, in any case, one-dimensional sandpiles are capable of (very) elementary computations such as computing the max of $n$ bits. Both C.~Moore and P.B.~Miltersen underline that \begin{quote}\textit{``having a better upper-bound than \textsf{P}\xspace for PRED for two-dimensional sandpiles would be most interesting.''} \end{quote} In this paper, we address a slightly different problem: the avalanche problem (AP). Here, we start with a monotone configuration of the sandpile. We add a grain of sand to the initial pile. This eventually causes an avalanche, and we address the question of the complexity of deciding whether a certain given position --initially with no grain of sand-- will receive some grains in the future. Like the PRED problem, AP can be formulated in higher dimensions. In order to get acquainted with AP, we introduce its one-dimensional version first. \smallskip One-dimensional sandpiles can be conveniently represented by a finite sequence of integers $x_1, x_2, \ldots, x_k, \ldots, x_n$. A sandpile is represented as a sequence of \emph{columns}, and each $x_i$ represents the number of grains contained in column $i$. In the classical SPM, a grain falls from column $i$ to $i+1$ if and only if the height difference $x_i-x_{i+1}\geq 2$. Kadanoff's sandpile model (KSPM) generalizes SPM~\cite{kadanoff89,goles08} by adding a parameter $p$. The setting is the same except for the local rule: one grain falls to each of the $p-1$ adjacent columns if the difference between columns $i$ and $i+1$ is at least $p$. Assume $x_k=0$ for a value of $k$ ``far away'' from the sandpile.
The avalanche problem asks whether adding a grain at column $x_1$ will cause an avalanche such that, at some point in the future, $x_k\geq 1$, that is to say, an avalanche is triggered and reaches the ``flat'' surface at the bottom. This problem can be generalized to two-dimensional sandpiles and is related to the question addressed by C.~Moore and P.B.~Miltersen. In this paper we prove that in the two-dimensional case, AP is \textsf{P}\xspace-complete. The proof is obtained by reduction from the Circuit Value Problem where the circuit only contains monotone gates --- that is, ANDs and ORs (see section~\ref{sec:2d} for details). \smallskip We stress that our proof for the two-dimensional case needs some further hypotheses/constraints concerning monotonicity and determinism (see section~\ref{sec:2d}). While both properties are technical requirements for the proof's sake, monotonicity also has a physical justification. Indeed, if KSPM is used for modelling real physical sandpiles, then the image of a monotone non-increasing configuration has to be monotone non-increasing, since gravity is the only force considered here. We have chosen to design the Kadanoff automaton for $d=2$ by considering a certain definition of the three-dimensional sandpile which does not correspond to the one of Bak \emph{et\ al.}\@\xspace in~\cite{bak88}. This hypothesis is not restrictive. It is just used for constructing the transition rules. Bak's construction was done similarly. Nevertheless, our result depends on the way the three-dimensional sandpile is modelled. In our case, we have decided to formalise the sandpile as a monotone decreasing pile in three dimensions where $x_{i,j}\geq\max\{x_{i+1,j},x_{i,j+1}\}$ (here $x_{i,j}$ denotes the initial distribution of sand grains), together with Kadanoff's avalanche dynamics ruled by the parameter $p$. The pile $(i,j)$ can give one grain to each of the piles $(i+1,j),\ldots,(i+p-1,j)$ or to each of the piles $(i,j+1),\ldots,(i,j+p-1)$, provided the monotonicity is not violated.
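A minimal computational sketch of this rule may help fix ideas (the grids and grain counts below are made up, and the firing precondition on height differences is left aside; only the monotonicity check guarding an application is shown):

```python
import copy

def is_monotone(x):
    """A grid x (list of lists, zero outside) is monotone when every
    entry is non-negative and x[i][j] >= max(x[i+1][j], x[i][j+1])."""
    n, m = len(x), len(x[0])
    for i in range(n):
        for j in range(m):
            down = x[i + 1][j] if i + 1 < n else 0
            right = x[i][j + 1] if j + 1 < m else 0
            if x[i][j] < 0 or x[i][j] < max(down, right):
                return False
    return True

def try_topple(x, i, j, p, axis):
    """One application of the two-dimensional Kadanoff rule at (i, j):
    the site loses p - 1 grains and each of its p - 1 neighbors along
    the chosen axis ('i' or 'j') gains one.  The move is accepted only
    if the result is still monotone; otherwise None is returned.
    (Assumes the p - 1 neighbors lie inside the grid.)"""
    y = copy.deepcopy(x)
    y[i][j] -= p - 1
    for k in range(1, p):
        if axis == 'i':
            y[i + k][j] += 1
        else:
            y[i][j + k] += 1
    return y if is_monotone(y) else None

# Accepted: the result stays monotone.
ok = try_topple([[5, 2, 0], [2, 1, 0], [0, 0, 0]], 0, 0, 3, 'i')
# Rejected: toppling along j would break monotonicity.
blocked = try_topple([[4, 3, 0], [1, 0, 0], [0, 0, 0]], 0, 0, 3, 'j')
```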
With such a rule, and if we use the height difference for defining the monotonicity, we can define the transition rules of the automaton for every value of the parameter $p$. In the case where the parameter $p$ equals $2$, our definition of monotonicity yields something similar to Bak's SPM in two dimensions. Actually, both models are different because the definitions of the three-dimensional piles differ. That is the reason why we succeed in proving the \textsf{P}\xspace-completeness result, which remains an open problem with Bak's definition. \smallskip The paper is organized as follows. Section~\ref{sec:defs} introduces the definitions of the Kadanoff sandpile model in one dimension and presents the avalanche problem. Section~\ref{sec:2d} generalizes the Kadanoff sandpile model to two dimensions and presents the avalanche problem in two dimensions, which is proved \textsf{P}\xspace-complete for any value of the Kadanoff parameter $p$. Finally, section~\ref{sec:ccl} concludes the paper and proposes further research directions. \section{Sandpiles and Kadanoff model in one dimension}\label{sec:defs} A sandpile \emph{configuration} is a distribution of sand grains over a lattice (here \ensuremath{\mathbb{Z}}\xspace). Each site of the lattice is associated with an integer which represents its sand content. A configuration is \emph{finite} if only a finite number of sites have non-zero sand content. Therefore, in the sequel, a finite configuration on \ensuremath{\mathbb{Z}}\xspace will be identified with an ordered sequence of integers $x_1,x_2, \ldots, x_n$ in which $x_1$ (resp. $x_n$) is the first (resp. the last) site with non-zero sand content. A configuration $x$ is \emph{monotone} if $\forall i\in\ensuremath{\mathbb{Z}}\xspace$, $x_i\geq x_{i+1}$.
A configuration $x$ is \emph{stable} if $\forall i\in\ensuremath{\mathbb{Z}}\xspace$, $x_i-x_{i+1}< p$, \emph{i.e.}\@\xspace if the difference between any two adjacent sites is less than Kadanoff's parameter $p$. Let SM$(n)$ denote the set of stable monotone configurations of the form $^\omega x_1,x_2,\ldots, x_{n-1}, x_n^\omega$ and of length $n$, for $x_i\in\ensuremath{\mathbb{N}}\xspace$. Given a configuration $x$, $a\in\ensuremath{\mathbb{N}}\xspace$ and $j\in\ensuremath{\mathbb{Z}}\xspace$, we use the notation $^\omega ax_j$ (resp. $x_ja^\omega$) to say that $\forall i\in\ensuremath{\mathbb{Z}}\xspace$, $i<j\rightarrow x_i=a$ (resp. $\forall i\in\ensuremath{\mathbb{Z}}\xspace$, $i>j\rightarrow x_i=a$). Finally, remark that any configuration $^\omega x_1,x_2,\ldots, x_{n-1}, x_n0^\omega$ can be identified with its \emph{height difference} sequence $^\omega 0,(x_1-x_2), \ldots, (x_{n-1}-x_n), x_n, 0^\omega$ \enspace. \smallskip Consider a stable monotone configuration $^\omega x_1,x_2, \dots, x_n^\omega$. Adding one more sand grain, say at site $i$, may cause site $i$ to topple some grains onto its adjacent sites. In turn, the adjacent sites receive a new grain of sand and may also topple, and so on. This phenomenon is called an \emph{avalanche}. The avalanche ends when the system evolves to a new stable configuration. \begin{figure} \centering \includegraphics[scale=.6]{avalanche.eps} \caption{Avalanches for $p=3$ with 9 columns. Here, $x_i+1$ (resp. $x_i+2$) indicates that column $i$ has received some grains once (resp. twice), $x_i-1$ that column $i$ has given some grains according to the dynamics; a dark shaded site indicates the toppling site, a light shaded site indicates a site that could topple in the future. Time goes top-down.} \label{fig:avalanche} \end{figure} In this paper, topplings are controlled by the \emph{Kadanoff parameter} $p\in\ensuremath{\mathbb{N}}\xspace$, which completely determines the model and its dynamics.
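A minimal sequential sketch of the one-dimensional dynamics, under the assumption that we always fire the leftmost firing site (one admissible choice among several update policies), could read:

```python
def topple(x, i, p):
    """One KSPM(p) toppling at column i of a finite configuration x
    (list of column heights, implicitly followed by zeros): column i
    loses p - 1 grains and each of its p - 1 right neighbors gains one."""
    y = x + [0] * p  # room for grains spilling past the last column
    y[i] -= p - 1
    for k in range(1, p):
        y[i + k] += 1
    while y and y[-1] == 0:  # drop trailing zero columns
        y.pop()
    return y

def stabilize(x, p):
    """Fire the leftmost firing site (x_i - x_{i+1} >= p) until the
    configuration is stable, i.e. every height difference is < p."""
    x = list(x)
    while True:
        padded = x + [0]
        firing = [i for i in range(len(x)) if padded[i] - padded[i + 1] >= p]
        if not firing:
            return x
        x = topple(x, firing[0], p)

# A single column of 4 grains with p = 3 relaxes to the stable pile 2,1,1.
final = stabilize([4], 3)
```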
In KSPM$(p)$, $p-1$ grains will fall from site $i$ if $x_i-x_{i+1}\geq p$ and the new configuration becomes \[ ^\omega x_1\cdots (x_{i-1})(x_i-p+1)(x_{i+1}+1)\cdots(x_{i+p-2}+1)(x_{i+p-1}+1)(x_{i+p})\cdots x_n0^\omega \enspace. \] In other words, site $i$ distributes one grain to each of its $(p-1)$ right adjacent sites. Equivalently, if we measure the height differences after applying the dynamics, we get $(h_{i-1}+p-1)(h_i-p)(h_{i+1})(h_{i+2})\cdots (h_{i+p-2})(h_{i+p-1}+1)\enspace,$ where $h_{i-1}=(x_{i-1}-x_i)$ and all remaining heights do not change. In other words, the toppling increases the height difference $h_{i-1}$ by $(p-1)$, decreases $h_i$ by $p$, and increases $h_{i+p-1}$ by one. We consider the problem of deciding whether some column to the right of column $x_{n}$ (more precisely, column $x_k$ for $n<k\leq n+p-1$) will receive some grains according to the Kadanoff dynamics. Since the initial configuration is stable, it is not difficult to prove that avalanches will reach at most column $n+p-1$ (see figure~\ref{fig:avalanche} for example). Remark that, given a configuration, several sites could topple at the same time. Therefore, at each time step, one might have to decide which site or sites are allowed to topple. According to the update policy chosen, there might be different images of the same configuration. However, it is known~\cite{goles02} that for any given initial number of sand grains $n$, the orbit graph is a lattice and hence, for our purposes, we need only consider one decision problem to formalize AP: \myproblem{AP}{A configuration $x\in$ SM$(n)$ and $k\in\ensuremath{\mathbb{N}}\xspace$ s.t. $n\!<k\!\leq n+p-1$}{Does there exist an avalanche such that $x_k\geq 1$?} Let us consider some examples. Let $p=3$ and consider a stable bi-infinite configuration whose height difference sequence is as follows: $^\omega00\underline{2}2022120000^\omega$.
We add just one grain at $x_1$ (the underlined site in the configuration). Then, the next step is $^\omega02\underline{0}21222120000^\omega$. And so in one step we see that no avalanche can be triggered; hence the answer to AP is negative. As a second example, consider the following sequence of height differences (always with $p=3$): $^\omega0\underline{3}122122221201200^\omega$. There are several possibilities for avalanches from the left to the right, but none of them reaches the 0's region. So the answer to the decision problem is still negative. To get an idea of what happens for a positive instance of the problem, consider the following initial configuration: $^\omega0 \underline{3}12222100^\omega$ with parameter $p=3$. \smallskip The full proof of Theorem~\ref{th:1d} is a bit technical and will be given in the journal version of the paper. \begin{theorem}\label{th:1d} AP is in $\textsf{NC}\xspace^1$ for KSPM in dimension $1$ and $p>1$. \end{theorem} \begin{proof}[Sketch of the proof.] The first step is to prove that, in this situation, the Kadanoff rule can be applied only once at each site for any initial monotone stable configuration. Using this result, one can see that a site $k$ such that $x_k=0$ in the initial configuration and $x_k\geq 1$ in the final one must have received grains from site $k-p$. This site, in turn, must have received grains from $k-2p$, and so on, until a ``firing'' site $i$ with $i\in[\![1,p-1]\!]$. The height difference at all of these sites must be $p-1$. The existence of this sequence and the values of the height differences can be checked by a parallel iterative algorithm on a PRAM in time \O{\log n}. \end{proof} \section{Sandpiles and Kadanoff model in two dimensions} \label{sec:2d} There are several possibilities to define extensions of the Kadanoff dynamics to two-dimensional sandpiles. Let us first extend the basic definitions introduced in section~\ref{sec:defs}.
\smallskip A two-dimensional sandpile \emph{configuration} is a distribution of grains of sand over the $\mathbb{N}\times\mathbb{N}$ lattice. As in the one-dimensional case, a configuration is \emph{finite} if only a finite number of sites have non-zero sand content. Therefore, in the sequel, a finite configuration on $\mathbb{N}\times\mathbb{N}$ will be identified with a mapping from $\mathbb{N}\times\mathbb{N}$ into $\mathbb{N}$, giving a number of grains of sand to every position in the lattice. Thus, a configuration will be denoted by its values $x_{i,j}$, as a map $(i,j)\mapsto x_{i,j}\in\mathbb{N}$. A configuration $x$ is \emph{monotone} if $\forall (i,j)\in\mathbb{N}\times\mathbb{N}$, $x_{i,j}$ is such that $x_{i,j}\geq 0$ and $x_{i,j}\geq\max\{x_{i+1,j},x_{i,j+1}\}$. So we have a monotone sandpile, in the same sense as in~\cite{duchi06}. A configuration $x$ is \emph{horizontally stable} (resp. \emph{vertically}) if $\forall (i,j)\in\mathbb{N}\times\mathbb{N},\quad x_{i,j}-x_{i+1,j}<p$ (resp. $\forall (i,j)\in\mathbb{N}\times\mathbb{N},\quad x_{i,j}-x_{i,j+1}<p$) and is \emph{stable} if it is both horizontally and vertically stable. In other words, this is a generalisation of the Kadanoff model in one dimension, that is, the configuration is stable if the difference between any two adjacent sites is less than the Kadanoff parameter $p$. To this configuration, we apply the Kadanoff dynamics for a given integer $p\geq 1$. The dynamics can be applied if and only if the new configuration remains monotone. Example~\ref{ex:kspm2d} illustrates a case which violates the monotonicity condition of the Kadanoff dynamics.
\begin{example}\label{ex:kspm2d} Consider the initial configuration given in the bottom left matrix of the following figure {\tiny\[\begin{array}{ccc} \begin{matrix} 0&1&0&0\\ \boxed{2}&3&0&0\\ 8&4&2&2\\ 8&4&3&2 \end{matrix}&&\\ &&\\ \uparrow v&&\\ &&\\ \begin{matrix} 0&0&0&0\\ 2&2&0&0\\ 8&\boxed{6}&2&2\\ 8&4&3&2 \end{matrix} &\stackrel{h}{\rightarrow}& \begin{matrix} 0&0&0&0\\ 2&2&0&0\\ 8&4&3&\boxed{3}\\ 8&4&3&2 \end{matrix}\\ \end{array}\]} Values give the number of grains at each site. We see that we cannot apply the Kadanoff dynamics with parameter $p=3$ from the boxed site. Indeed, the resulting configurations do not remain monotone, whether the dynamics is applied horizontally or vertically (resp. $\stackrel{h}{\rightarrow}$ and $\uparrow\! v$). A site which violates the condition has been boxed in the resulting configurations (it might not be unique).\qed \end{example} Recall that the Kadanoff operator applied to site $(i,j)$ for a given $p$ consists in giving one grain of sand to each site in the horizontal or vertical segment, \emph{i.e.}\@\xspace $\{(i,j+1),\ldots,(i,j+p-1)\}$ or $\{(i+1,j),\ldots,(i+p-1,j)\}$. Similarly to the one-dimensional case, we associate with the previous avalanches their height differences. Any configuration can be identified with the mapping of its \emph{horizontal height difference} (resp. vertical): $h_{\rightarrow}:(i,j)\mapsto x_{i,j}-x_{i+1,j}$ (resp. $h_{\uparrow}:(i,j)\mapsto x_{i,j}-x_{i,j+1}$). The height difference allows us to define the notions of monotonicity and stability in a straightforward way. However, notice that when considering the dynamics defined over height differences, we work with a different lattice, though isomorphic to the initial one. The relationship between them is depicted on figure~\ref{fig:chenilles}. For a better understanding of the dynamics, recall that in one dimension an avalanche at site $i$ changes the heights of sites $i-1$ and $i+p-1$.
In two dimensions, there are height changes on the line but also on both sides of it. The dynamics is simpler to depict than to write down formally; it will be presented through examples and figures in the sequel. An example of the Kadanoff dynamics applied horizontally (resp. vertically) is given in figure~\ref{fig:chenillesHVp=4}. More precisely, the Kadanoff dynamics for parameter value $p=4$ is depicted in figure~\ref{fig:Chenilles-p=4}. Observe that we do not need to take into account the number of grains of sand in the columns. It suffices to take the graph of the edges adjacent to each site (depicted by thick lines) and to store the height differences. So, from now on, we will restrict ourselves to the lattice and to the dynamics defined on the height differences. In figure~\ref{fig:Chenilles-p=4}, we only keep the information required for applying the dynamics in the simplified view. In fact, the local function is depicted by figure~\ref{fig:chenilles}, which we will call \emph{chenilles} (horizontal and vertical, respectively). Figure~\ref{fig:chenilles} explains how the dark site with coordinates $(i,j)$ with a height difference of $p$ gives grains either horizontally (figure~\ref{fig:chenilles} left) or vertically (figure~\ref{fig:chenilles} right). \begin{example}[Obtaining Bak's model] In the case $p=2$ and if we assume the real sandpile is defined as in~\cite{duchi06} (i.e. $x_{i,j}\geq\max\{x_{i+1,j},x_{i,j+1}\}$), we get the templates from figure~\ref{fig:chenilleBak}.\qed \end{example} Before being applied, the automaton's dynamics requires testing whether the local application yields a non-negative configuration. \subsection{\textsf{P}\xspace-completeness} Changing from dimension $1$ to $2$ (or greater), the statement of AP has to be adapted. Consider a finite configuration $x$ which is non-zero for sites $(i,j)$ with $i,j\geq0$, stable and monotone, and let $Q$ be the sum of the height differences.
Let us denote by $n$ the maximum index of non-zero height differences along both axes. Then, SM$(n)$ denotes the set of monotone stable configurations given by a lower-triangular matrix of size $n\times n$. To generalise the avalanche problem to two dimensions, we have to find a generic position which is far enough from the initial sandpile but close enough to be attained. To get rough bounds, we proceed as follows. For the upper bound, the worst case occurs when all the grains are arranged on a single site (with a height difference of $Q$) at an end of one of the axes, from which they fall down. For the lower bound, the reasoning is the same, except that the pile containing the grains is at the origin. Thus, we may restate our decision problem as follows: \myproblem{AP (dimension 2)}{A configuration $x\in$ SM$(n)$, $(k,\ell)\in\ensuremath{\mathbb{N}}\xspace\times\ensuremath{\mathbb{N}}\xspace$ such that $x_{k,\ell}=0$ and $\frac{\sqrt{2}}{2}n\leq\|(k,\ell)\|\leq n+Q$ (where $Q$ is the sum of the height differences).}{Does there exist an avalanche (obtained by using the vertical and horizontal chenilles) such that $x_{k,\ell}\geq 1$?} where $\|.\|$ denotes the standard Euclidean norm. \smallskip To prove the \textsf{P}\xspace-completeness of AP, we will proceed by reduction from the monotone circuit value problem (MCVP): given a circuit with $n$ inputs $\{\alpha_1,\ldots,\alpha_n\}$ and logic gates AND and OR, we want to decide whether the output value is one or zero (refer to~\cite{ghr95} for a detailed statement of the problem). NOT gates are not allowed, but the problem remains \textsf{P}\xspace-complete for the following reason: using De Morgan's laws $\overline{a\wedge b}=\overline{a}\vee\overline{b}$ and $\overline{a\vee b}=\overline{a}\wedge\overline{b}$, one can shift negations back through the gates until they only affect the inputs themselves.
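This standard rewriting step can be sketched as follows (the tuple encoding of circuits is ours, chosen only for illustration):

```python
def push_nots(expr, negate=False):
    """Push NOT gates down to the inputs using De Morgan's laws, so
    that the remaining circuit contains only AND/OR gates over
    (possibly negated) inputs.  Expressions are nested tuples:
    ('var', name), ('not', e), ('and', e1, e2), ('or', e1, e2)."""
    op = expr[0]
    if op == 'var':
        return ('not', expr) if negate else expr
    if op == 'not':
        return push_nots(expr[1], not negate)
    dual = {'and': 'or', 'or': 'and'}
    new_op = dual[op] if negate else op
    return (new_op, push_nots(expr[1], negate), push_nots(expr[2], negate))

# not(a and b) becomes (not a) or (not b); double negations cancel.
pushed = push_nots(('not', ('and', ('var', 'a'), ('var', 'b'))))
```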
For the reduction, we have to construct, by using sandpile configurations, wires (figure~\ref{fig:wire}), logic AND gates (figure~\ref{fig:andgate}), logic OR gates (figure~\ref{fig:orgate}), cross-overs (figure~\ref{fig:crossover}) and signal multipliers for starting the process (figure~\ref{fig:sigmul}). We also need to define a way to deterministically update the network; to do this, we can apply the chenille templates in any spatially periodic order, for instance from left to right and from top to bottom. Our main result is thus: \begin{theorem} AP is \textsf{P}\xspace-complete for KSPM in dimension two and any $p\geq 2$. \end{theorem} \begin{proof} The fact that our problem is in \textsf{P}\xspace has been known since C.~Moore and M.~Nilsson's paper~\cite{moore99}. There, the proof consists in showing that the total number of avalanches required to relax a sandpile is polynomial in the system size. The remaining open problem in their study was the case $d=2$, for which they wrote ``\emph{The reader may [...] find a clever embedding of non-planar Boolean circuits}'', which is precisely what will be done hereafter. For the reduction, one has to take an arbitrary instance of MCVP and build an initial configuration of a sandpile for the Kadanoff dynamics with $p=2$ (or greater). Remark that, in the case $p=2$, KSPM corresponds to Bak's model~\cite{bak88} in two dimensions with a sandpile such that $x_{i,j}\geq\max\{x_{i+1,j},x_{i,j+1}\}$. To complete the proof, we have to design: \begin{itemize} \item a wire (figure~\ref{fig:wire}); \item the crossing of information (figure~\ref{fig:crossover}); \item an AND gate (figure~\ref{fig:andgate}); \item an OR gate (figure~\ref{fig:orgate}); \item a signal multiplier (figure~\ref{fig:sigmul}). \end{itemize} The construction is shown graphically for $p=2$ but can be done for greater values. For $p=2$, the horizontal and vertical chenilles are given in figure~\ref{fig:chenilleBak}.
According to~\cite{goldschlager77}, the reduction is in \textsf{NC} since MCVP is logspace complete for \textsf{P}\xspace. Recall that the decision problem only adds a sand grain to one site, say $(0,0)$. To construct the input vector of an arbitrary circuit, we build, from the starting site, a wire for every variable with $\alpha_i=1$; if $\alpha_i=0$, nothing is done and no wire leaves the initial site for that variable. \end{proof} \begin{remark} For $p\geq 3$, the construction of the AND gate is easier than for $p=2$. The dynamics is obtained from figure~\ref{fig:chenilles} and the construction of an AND gate is depicted in figure~\ref{fig:and-p=3}. \end{remark} \section{Conclusion and future work}\label{sec:ccl} We have proved that the avalanche problem for the KSPM model in two dimensions is \textsf{P}\xspace-complete with a sandpile defined as in~\cite{duchi06} and for every value of the parameter $p$. Let us also point out that in the case where $p=2$, this model corresponds to the two dimensional Bak's model with a pile such that $x_{i,j}\geq 0$ and $x_{i,j}\geq\max\{x_{i+1,j},x_{i,j+1}\}$. In this context, we also proved that this physical version (with a two dimensional sandpile interpretation) is \textsf{P}\xspace-complete. It is important to notice that, by directly taking the two dimensional Bak's token game (given a graph, a vertex with a number of tokens greater than or equal to its degree gives one token to each of its neighbors), its computational universality was proved in~\cite{gm2} by designing logical gates in non-planar graphs. Furthermore, by using the previous construction, C.~Moore \emph{et\ al.}\@\xspace proved the \textsf{P}\xspace-completeness of this problem for lattices of dimension $d$ with $d\geq 3$. But the problem remained open for two dimensional lattices.
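The two dimensional token dynamics recalled above is easy to simulate. The sketch below (an illustration under our own encoding, not one of the gate constructions of the proof) relaxes a finite grid on which a site topples when it holds at least four grains, one for each von Neumann neighbour; grains leaving the grid are lost. By the Abelian property of such chip-firing models, the stable configuration does not depend on the toppling order.

```python
def relax(grid):
    """Relax a 2D Bak-style sandpile on a finite grid: a site holding at
    least 4 grains topples, sending one grain to each of its four
    neighbours; grains pushed off the grid are lost.  Returns the stable
    grid and the total number of topplings."""
    n, m = len(grid), len(grid[0])
    grid = [row[:] for row in grid]          # work on a copy
    topplings = 0
    unstable = [(i, j) for i in range(n) for j in range(m) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:                   # may have stabilised meanwhile
            continue
        grid[i][j] -= 4
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                grid[a][b] += 1
                if grid[a][b] >= 4:
                    unstable.append((a, b))
        if grid[i][j] >= 4:
            unstable.append((i, j))
    return grid, topplings
```

For instance, `relax([[0,0,0],[0,5,0],[0,0,0]])` topples the centre once, leaving one grain on the centre and on each of its four neighbours.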
Furthermore, it was proved in~\cite{goles06} that, in the above situation, it is not possible to build circuits because information cannot cross. The two dimensional Bak's operator corresponds, in our framework, to the application of the four rotations of the template (see figure~\ref{fig:BakRotate}). But this model is no longer a representation of a two dimensional sandpile as presented in~\cite{duchi06}, that is with $x_{i,j}\geq 0$ and $x_{i,j}\geq\max\{x_{i+1,j},x_{i,j+1}\}$. To define a reasonable two dimensional model, consider a monotone sandpile decreasing for $i\geq0$ and $j\geq0$. Over this pile we define the extended Kadanoff's model as a local avalanche in the growing direction of the $i$ and $j$ axes such that monotonicity is preserved. Certainly, one may define other local applications of Kadanoff's rule which also match the physical sense of monotonicity, for instance by considering the set $\{(i+1,j),(i+1,j+1),(i,j+1)\}$ as the sites able to receive grains from site $(i,j)$. In this sense it is interesting to remark that the two dimensional sandpile defined by Bak (i.e., for nearest neighbors, also called the von Neumann neighborhood: a site gives a token to each of its four neighbors if and only if it has enough tokens) can be seen as the application of the Kadanoff rule for $p=2$ by applying to a site, if there are at least four tokens, the horizontal $(\rightarrow)$ and the vertical $(\downarrow)$ chenilles simultaneously (see figure~\ref{fig:BakRotate}). Similarly, for an arbitrary $p$, one may simultaneously apply other combinations of chenilles, which, in general, allows us to obtain \textsf{P}\xspace-complete problems. For instance, when there are enough tokens, the application of the four chenilles (i.e. $\leftarrow$, $\rightarrow$, $\uparrow$ and $\downarrow$) gives rise to a new family of local templates called \emph{butterflies} (because of their four wings).
It is not so difficult to construct wires and circuits for butterflies. Hence, for this model of sandpiles, the decision problem will remain \textsf{P}\xspace-complete. One direction to analyze, from an algebraic and complexity point of view, is the classification of every local rule derived from the chenille application. Further, one may define a more general sandpile dynamics which contains both Bak's and Kadanoff's ones: i.e., given an integer $p\geq 2$, we allow the application of every Kadanoff update with parameter $q\leq p$. We are studying this dynamics and, as a first result, we already observe that in one dimension there are several fixed points and also that, given a monotone circuit of depth $m$ with $n$ gates, we may simulate it on a line with this generalized rule for a given $p\geq m+n$. For the one-dimensional avalanche problem as defined in section~\ref{sec:defs}, it can be proved that it belongs to the class \textsf{NC}\xspace for $p=2$ and that it remains in the same class when the first $p$ columns contain more than one grain (\emph{i.e.}\@\xspace when there is no hole in the pile). We are currently working on a proof for the general case. \section*{Acknowledgements} We thank Prof.~Enrico Formenti for helpful discussions and comments while Prof.~Eric Goles was visiting Nice and writing this paper. \bibliographystyle{plain}
\section{Introduction and main result} Adaptive time-stepping finite element methods (AFEM) for evolutionary PDE usually lead to a sequence of timesteps and meshes, which yield a partition of the time interval $0=t_0 < t_1 < \dots < t_N = T$ and one triangulation $\mathcal{T}_i$ for each time interval $[t_{i-1},t_i)$. The complexity of the discrete solution is thus related to the total number of degrees of freedom needed to represent it on the whole interval, which in turn is equivalent to $\sum_{i=1}^N \#\mathcal{T}_i$. In this article we study spaces of functions which can be approximated using such time-space partitions with an error of order $\left( \sum_{i=1}^N \#\mathcal{T}_i\right)^{-s}$ for different $s > 0$. The results that we obtain are similar in spirit to those of~\cite{BDDP02,GM14}, where the spaces corresponding to stationary PDE are considered. Our goal is not to prove the optimality of AFEM but rather to understand which convergence rates are to be expected for the solutions of evolutionary PDE given their regularity. In this paper we aim to establish the first results in this direction; thus, at some points we sacrifice generality in order to present the basic ideas more clearly and set the foundation for further research in this area. In order to roughly state our main result, we need to introduce some notation, which will be explained in detail later. Given a polyhedral space domain $\Omega\subset \mathbb{R}^n$, $n \ge 1$, we let $\mathbb{T}$ denote the set of all triangulations that are obtained through bisection from an initial triangulation $\mathcal{T}_0$ of $\Omega$.
For each $\mathcal{T}\in \mathbb{T}$ we denote by $\#\mathcal{T}$ the number of elements of the partition. For $\mathcal{T} \in \mathbb{T}$, we let $\mathbb{V}_{\mathcal{T}}^r$ denote the finite element space of continuous piecewise polynomial functions of fixed order $r$, i.e., \[ \mathbb{V}_{\mathcal{T}}^r :=\{g\in C(\overline{\Omega}): \ g\big|_{T}\in \Pi^r\ \text{for all } T\in \mathcal{T}\}, \] where $\Pi^r$ denotes the set of polynomials of total degree (strictly) less than $r$. Let $r_1,r_2\in\mathbb{N}$ denote the polynomial orders in time and space, respectively. Let $\{ 0=t_0<t_1<\ldots<t_N=T \}$ be a partition of the time interval and $\mathcal{T}_1,\ldots, \mathcal{T}_N \in \mathbb{T}$ be partitions of the space domain $\Omega$, where $\mathcal{T}_i$ corresponds to the subinterval $[t_{i-1},t_i)$, $i=1,\ldots,N$. The time-space partition as illustrated in Figure \ref{fig:1} is then given by \[ \mathcal{P}=\left(\{0=t_0<t_1<\ldots<t_N=T\}, \{\mathcal{T}_1,\ldots, \mathcal{T}_N\}\right) \quad \text{with}\quad \# \mathcal{P}=\sum_{i=1}^N \#\mathcal{T}_i \] and $\mathbb{P}$ is the set of all such time-space partitions. This is precisely the kind of time-space partition produced by time-stepping adaptive methods.
\begin{figure} \includegraphics[width=9cm]{partition_domain.pdf} \caption{ Time-space partition $\mathcal{P}$ } \label{fig:1} \end{figure} The finite element space $\overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ subject to such a partition $\mathcal{P}$ is defined as \begin{align*} \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2} &:=\{g: [0,T)\times\Omega\rightarrow \mathbb{R}: g\big|_{[t_{i-1},t_i)\times\Omega} \in \Pi^{r_1} \otimes \mathbb{V}_{\mathcal{T}_i}^{r_2},\text{ for all }i=1,2,\dots,N \}, \end{align*} i.e., $g \in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ if and only if $g(t,\cdot)\in \mathbb{V}_{\mathcal{T}_i}^{r_2}$ for all $t\in [t_{i-1},t_i) $ and $g(\cdot,x)\big|_{[t_{i-1},t_i)}\in \Pi^{r_1}$ for all $x\in \Omega$, and all $i=1,2,\dots,N$. Discrete solutions of adaptive time-stepping methods, e.g. those which use Discontinuous Galerkin (DG) in time, belong to spaces of this type. We define the best $m$-term approximation error by \[ \overline{\sigma}_m(f)=\inf_{\# \mathcal{P}\leq m}\inf_{g\in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}}\|f-g\|_{L_2([0,T)\times\Omega)}. \] In this article we measure the error in $L_2([0,T)\times\Omega)$ and leave the general case of $L_p([0,T), L_q(\Omega))$ and other generalizations as future work. For $s>0$ we define the approximation class $\overline{\mathbb{A}}_s$ as the set of those functions whose best $m$-term approximation error is of order $m^{-s}$, i.e., \[ \overline{\mathbb{A}}_s:=\{f\in L_2([0,T)\times\Omega): \ \exists c>0 \text{ such that } \overline\sigma_m(f)\leq c\, m^{-s}, \ \forall m\in \mathbb{N}\}. \] Equivalently, we can define $\overline{\mathbb{A}}_s$ through a semi-norm as follows: \[ \overline{\mathbb{A}}_s:=\{f\in L_2([0,T)\times\Omega): \ |f|_{\overline{\mathbb{A}}_s}<\infty\}\quad \text{with}\quad |f|_{\overline{\mathbb{A}}_s}:=\sup_{m\in \mathbb{N}}m^s\,\overline\sigma_m(f).
\] Alternatively, this definition is equivalent to saying that $f\in \overline{\mathbb{A}}_s$ if there is a constant $c$ such that for all $\varepsilon>0$, there exists a time-space partition $\mathcal{P}$ that satisfies \begin{equation}\label{appr-class-st} \inf_{g\in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}} \|f-g\|_{L_2([0,T)\times\Omega)} \leq c \varepsilon \quad \text{and}\quad \#\mathcal{P} \leq \varepsilon^{-1/s}, \end{equation} and $|f|_{\overline{\mathbb{A}}_s}$ is equivalent to the infimum of all constants $c$ that satisfy \eqref{appr-class-st}. Our main result is stated in terms of Besov spaces, which will be defined in the next section, and reads as follows. \begin{main} Let $0 < s_i < r_i$, $i=1,2$, $0 < q_1 \le \infty$, $1\leq q_2\leq \infty $ with $s_1 > \big(\frac1{q_1}-\frac12\big)_+$ and $s_2 > n \big(\frac1{q_2}-\frac12\big)_+$. Then \[ B^{s_1}_{q_1,q_1}([0,T),L_2(\Omega)) \cap L_2([0,T),B_{q_2,q_2}^{s_2}(\Omega)) \subset \overline{\mathbb{A}}_s \quad\text{for}\quad s = \frac{1}{\frac1{s_1}+\frac{n}{s_2}}. \] \end{main} This result is a consequence of Theorem~\ref{space-time-poly}, where, given $f \in B^{s_1}_{q_1,q_1}([0,T),L_2(\Omega)) \cap L_2([0,T),B_{q_2,q_2}^{s_2}(\Omega))$ and $\varepsilon > 0$, we construct a time-space partition $\mathcal{P}$ that satisfies \[ \# \mathcal{P}\le c_1 \varepsilon^{-\big(\frac1{s_1}+\frac n{s_2}\big)} \] and a function $F \in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ such that \[ \| f - F \|_{L_2([0,T)\times\Omega)} \le c_2 \,\varepsilon \, \left[ | f |_{B^{s_1}_{q_1,q_1}([0,T),L_2(\Omega))} + \| f \|_{L_2([0,T),B_{q_2,q_2}^{s_2}(\Omega))} \right]. \] Here $B^s_{p,q}(I,X)$ denote Besov spaces of $X$-valued functions with respective seminorms $|\cdot|_{B^s_{p,q}(I,X)}$, cf.\ Section \ref{subsec-22}. {It is worth noting that in order to determine the largest spaces, integrability powers $0<p<1$ must be considered. This makes some proofs more complicated than if we were to consider only $p\ge 1$.
} Our construction is performed in two steps. The first one uses a greedy algorithm to obtain the partition of the time domain, resorting to a Whitney-type estimate for vector-valued functions. That is, we interpret functions in $L_2([0,T)\times\Omega)$ as functions from $[0,T)$ into $L_2(\Omega)$, as is customary in the study of evolutionary PDE, and develop a nonlinear approximation theory for this situation by revisiting and extending some results from Storozhenko and Oswald~\cite{Sto77,OS78}. This is presented in Section~\ref{sec:generalizedWhitney}, after defining Besov spaces of vector-valued functions in Section~\ref{sec:Besov}. In Section~\ref{sec:onevariable} we revisit the known results for the stationary case and perform the aforementioned first step by applying the greedy algorithm to vector-valued functions. In Section~\ref{sec:timespace} we combine those two results and prove our main result. We end this article by presenting some discussion and comparison of the approximation classes for space-time discretizations. We finally mention that we will use $A \lesssim B$ inside some statements, proofs and reasonings in order to denote $A \le c B$ with a constant $c$ that depends on the parameters indicated in the corresponding statement. As usual, $A\simeq B$ means $A\lesssim B$ and $B \lesssim A$. \section{Besov spaces of vector-valued functions}% \label{sec:Besov} The goal of this section is to define and understand some properties of Besov spaces of functions from a real interval $I$ into a Banach space. From now on, we let $X$ be a separable Banach space with norm $\| \cdot \|_X$. We first introduce the moduli of smoothness and state and prove some of their properties, which are analogous to those corresponding to the case of real-valued functions. Afterwards we define the corresponding Besov spaces and state and prove some embeddings.
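To fix ideas, the greedy first step can be caricatured as follows. This is a generic bisection of the time interval driven by an abstract local error indicator; it is only a sketch of the strategy, not the precise algorithm analyzed later, and `local_error` is a hypothetical user-supplied functional (in the setting above it would estimate the local Besov-type approximation error on a subinterval).

```python
def greedy_partition(local_error, a, b, eps, max_depth=30):
    """Greedily bisect [a, b) until the local error indicator of every
    subinterval is at most eps.  Returns the sorted breakpoints of the
    resulting time partition 0 = t_0 < t_1 < ... < t_N = T."""
    intervals = [(a, b, 0)]          # (left, right, depth)
    breakpoints = {a, b}
    while intervals:
        l, r, d = intervals.pop()
        if local_error(l, r) > eps and d < max_depth:
            m = (l + r) / 2          # bisect the offending interval
            breakpoints.add(m)
            intervals.append((l, m, d + 1))
            intervals.append((m, r, d + 1))
    return sorted(breakpoints)
```

For instance, with the toy indicator `local_error = lambda l, r: r - l` and `eps = 0.25` on $[0,1)$, the algorithm returns the uniform breakpoints `[0.0, 0.25, 0.5, 0.75, 1.0]`; a genuinely local indicator would instead concentrate timesteps where the function is rough in time.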
\subsection{Moduli of smoothness of vector-valued functions on an interval} We start this section by providing new definitions of moduli of smoothness for vector-valued functions, which are analogous to the ones already known for real-valued functions, and stating and proving some of their basic properties. It is worth mentioning that there is a forerunner regarding moduli of smoothness and Whitney-type estimates for vector-valued functions, cf. \cite{DF90}. However, our definition (which is an immediate generalization of the classical moduli of real-valued functions) differs from the one given in \cite{DF90} (which is more elaborate). In particular, in \cite{DF90} a duality approach between the given Banach space and its dual is used in order to reduce the definitions and results for abstract functions to real-valued ones. But there is a price to pay: the results are restricted to the set of bounded functions. Therefore, even classical Banach spaces like $L_p(I,X)$ cannot be covered entirely. Given $0<p\le\infty$, a real interval $I=[a,b)$ with $|I|=b-a$, and a function $f : I \to X$, we say that $f \in L_p(I,X)$ if $f$ is measurable and $\| f \|_{L_p(I,X)} := \Big(\int_I \| f(t) \|_X^p {\text d} t\Big)^{1/p} < \infty$ if $p<\infty$, and $\| f \|_{L_\infty(I,X)} := \esssup_{t \in I} \| f(t) \|_X < \infty$. For such a function $f$, $r\in\mathbb{N}$ and $0<|h|<\frac{|I|}{r}$, the $r$-th order difference $\Delta_h^rf : I_{rh} \to X$ is defined as \[ \Delta_h^r f(t) = \sum_{i=0}^r {r \choose i} (-1)^{r-i} f(t+ih),\qquad t \in I_{rh} := \{ t \in I : t+rh \in I\}, \] which clearly satisfies $\Delta_h^r f = \Delta_h \Delta_h^{r-1} f$ and $\| \Delta_h^r f \|_{L_p(I_{rh},X)}^{\min\{1,p\}} \le 2 \| \Delta_h^{r-1} f \|_{L_p(I_{(r-1)h},X)}^{\min\{1,p\}}$, understanding that $\Delta_h f = \Delta_h^1 f$ and $\Delta_h^0 f = f$.
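As a quick numerical sanity check of the definition above (our own illustration, for scalar-valued $f$), the $r$-th order difference can be computed directly; note that it annihilates polynomials of degree less than $r$, the property underlying the Whitney-type estimates of this paper.

```python
from math import comb

def delta(f, t, h, r):
    """r-th order forward difference:
    Delta_h^r f(t) = sum_{i=0}^r C(r, i) * (-1)^(r - i) * f(t + i*h)."""
    return sum(comb(r, i) * (-1) ** (r - i) * f(t + i * h) for i in range(r + 1))
```

For example, `delta(lambda t: t**2, 0.3, 0.1, 2)` returns $2h^2 = 0.02$ (up to rounding), while the second difference of any affine function vanishes.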
The modulus of smoothness is defined as \begin{equation}\label{omega1} \omega_r(f,I,u)_p := \sup_{0<|h| \leq u}\|\Delta_h^r f \|_{L_p(I_{rh},X)} = \sup_{0<h \leq u}\|\Delta_h^r f \|_{L_p(I_{rh},X)} ,\qquad u > 0, \end{equation} which is clearly increasing as a function of $u$, and the \emph{averaged} modulus of smoothness is defined, for $u>0$, as \begin{equation}\label{w1} w_r(f,I,u)_p:=\left(\frac{1}{2u}\int_{-u}^u\|\Delta_h^r f\|^p_{L_p(I_{rh},X)} \, {\text d} h\right)^\frac 1p =\left(\frac{1}{u}\int_{0}^u\|\Delta_h^r f\|^p_{L_p(I_{rh},X)} \, {\text d} h\right)^\frac 1p. \end{equation} The well-known definitions for $f : \Omega \to \mathbb{R}$, with $\Omega$ a domain of $\mathbb{R}^n$, $n \ge 1$, are as follows. For $h \in \mathbb{R}^n$, the domain of $\Delta_h^r f$ is the set $\Omega_{rh} := \{ x \in \Omega : x, x+h, \dots,x+rh \in \Omega\}$, and the moduli of smoothness $\omega_r(f,\Omega,u)_p$, $w_r(f,\Omega,u)_p$ are defined for $u>0$ via \begin{align} \omega_r(f,\Omega,u)_p &:= \sup_{0< |h| \leq u}\|\Delta_h^r f \|_{L_p(\Omega_{rh})} , \label{mod-smooth-B} \\ w_r(f,\Omega,u)_p&:=\left(\frac{1}{(2u)^n}\int_{[-u,u]^n} \|\Delta_h^r f\|^p_{L_p(\Omega_{rh})} \, {\text d} h\right)^\frac 1p. \notag \end{align} As a consequence of the fact that $ \Delta_{mh}^1 f(x) = \sum_{i=0}^{m-1} \Delta_h^1 f(x+ih)$, for $m\in\mathbb{N}$, we can prove by induction that $\| \Delta_{mh}^r f \|_{L_p(A_{rmh})} \le m^r \| \Delta_h^r f \|_{L_p(A_{rh})}$, for $A = I$ or $A = \Omega$ (for details see~\cite[Sect.~3.1]{PP87}). As an immediate consequence of this, \begin{equation}\label{homogeneity} \omega_r(f,A,mu)_p^{\min\{1,p\}} \le m^r \omega_r(f,A,u)_p^{\min\{1,p\}}, \quad u > 0. \end{equation} From the properties stated above, we have \begin{equation}\label{inductivebound} w_{r+1}(f,A,u)^{\min\{1,p\}}_p \le 2 w_r(f,A,u)^{\min\{1,p\}}_p.
\end{equation} Finally, we notice that if $f : [a,b) \to X$ and $\hat f : [0,1) \to X$ with $\hat f(t) = f(a+t(b-a))$, then, for $u>0$ \begin{equation}\label{scaling} \begin{split} \omega_r(f,[a,b),u)_p &= (b-a)^{1/p} \, \omega_r(\hat f, [0,1), (b-a)^{-1} u)_p, \\ w_r(f,[a,b),u)_p &= (b-a)^{1/p} \, w_r(\hat f, [0,1), (b-a)^{-1}u)_p. \end{split} \end{equation} Now we prove that the two moduli of smoothness $w_r$ and $\omega_r$ as defined above in~\eqref{omega1} and~\eqref{w1} are equivalent. This result is well-known and proved for real-valued functions in~\cite[Lem. 6.5.1]{DL93}. The proof for vector-valued functions is analogous and we sketch it here for completeness. \begin{lem}\label{lem-equiv} Given $0<p<\infty$ and $r \in \mathbb{N}$ the two definitions of moduli of smoothness $w_r(\cdot,\cdot,\cdot)_p$ and $\omega_r(\cdot,\cdot,\cdot)_p$ are equivalent, more precisely \begin{align*} w_r(f,I,u)_p \le \omega_r(f,I,u)_p \le c w_r(f,I,u)_p, \end{align*} for all $f \in L_p(I,X)$, $I = [a,b)$ and $0 < u < |I|/r$, where the constant $c$ depends only on $r$ and $p$, but is otherwise independent of $f$, $I$, and $u$. \end{lem} \begin{proof} The fact that $w_r(f,I,u)_p\leq \omega_r(f,I,u)_p$ is obvious. Therefore, it remains to prove the converse inequality. We prove the result for the reference situation of $I = [0,1)$, the general case follows by scaling using~\eqref{scaling}. We use the reproducing formula \begin{equation} \label{repr-form-1} \Delta_h^rf(t)=\sum_{l=1}^r(-1)^l{r\choose l}\left[\Delta_{ls}^r f(t+lh)-\Delta_{h+ls}^r f(t)\right], \end{equation} which holds if $t\in [0,1-rh]$ and \[ t+lh+rls\leq 1 \quad \text{and}\quad t+rh+rls\leq 1. \] This together yields the range $t\in [0,1-rh-r^2s]$. Formula \eqref{repr-form-1} is proved by induction, starting with the observation that \begin{align*} \Delta^1_hf(t) &=f(t+h)-f(t)\\ &=f(t+h)-f(t+h+ls)+f(t+h+ls)-f(t)\\ &= - \big[\Delta^1_{ls}f(t+h)-\Delta^1_{h+ls}f(t) \big]. 
\end{align*} We now consider $0<h \le u\leq \frac{1}{4r}$ and $0\leq t\leq \frac 12$. This gives us the upper bound $s<\frac{1}{4r^2}$. Integrating formula \eqref{repr-form-1} yields \[ \int_0^{1/2}\|\Delta^r_h f(t)\|_X^p\mathrm{d} t\lesssim \sum_{l=1}^r \int_0^{1/2}\|\Delta_{ls}^r f(t+lh)\|_X^p\mathrm{d} t +\int_0^{1/2}\|\Delta^r_{h+ls}f(t)\|_X^p\mathrm{d} t. \] Thus, setting $I_{-}:=[0,1/2]$ and averaging over $s\in \left[0,u\right]$ gives \begin{align} \|\Delta^r_h f&\|_{L_p(I_{-},X)}^p \\ &\lesssim \sum_{l=1}^r\frac{1}{u}\left[ \int_0^u \int_{I_{-}}\|\Delta_{ls}^r f(t+lh)\|_X^p\mathrm{d} t\mathrm{d} s + \int_0^u \int_{I_{-}}\|\Delta_{h+ls}^r f(t)\|_X^p\mathrm{d} t\mathrm{d} s \right]\notag\\ &= \sum_{l=1}^r\frac{1}{lu}\Bigg[ \int_0^{lu} \int_{I_{-}}\|\Delta_{h'}^r f(t+lh)\|_X^p\mathrm{d} t\mathrm{d} h' + \int_h^{h+lu} \int_{I_{-}}\|\Delta_{h'}^r f(t)\|_X^p\mathrm{d} t\mathrm{d} h'\Bigg]\notag\\ &\lesssim \sum_{l=1}^r\frac{1}{(r+1)u} \int_0^{(r+1)u} \|\Delta_{h'}^r f\|_{L_p(I,X)}^p \mathrm{d} h'\notag \\ &\leq w_r(f,I,(r+1)u)_p^p, \label{est-07} \end{align} where in the second step we used the substitution $h':=ls$ in the first and $h':=h+ls$ in the second integral. By symmetry, we also have that $\|\Delta^r_{-h} f\|_{L_p(I_{+},X)}^p \le w_r(f,I,(r+1)u)_p^p$ with $I_+ = [1/2,1]$. Taking the supremum w.r.t. $0<h\leq u$ on both sides we arrive at \begin{align*} \omega_r(f,I,u)_p &\lesssim w_r(f,I, (r+1)u)_p \end{align*} Using~\eqref{homogeneity} we obtain \begin{align*} \omega_r(f,I,(r+1)u)_p \lesssim \omega_r(f,I,u)_p \lesssim w_r(f,I,(r+1)u)_p, \end{align*} which completes the proof. 
\end{proof} \subsection{Besov spaces and embeddings} \label{subsec-22} Using the generalized modulus of smoothness defined in the previous subsection, we introduce the Besov spaces $B^s_{p,q}(I,X)$, $s > 0$, $0<p,q\le \infty$, which contain all functions $f\in L_p(I,X)$ such that for $r:=\lfloor s\rfloor+1$ the quasi-seminorm \begin{equation} \label{seminorm-B} \begin{split} |f|_{B^{s}_{p,q}(I,X)}&:= \displaystyle\left(\int_0^{|I|/r}\left[u^{-s}\omega_r(f,I,u)_p\right]^q\frac{\mathrm{d} u}{u}\right)^{1/q}<\infty ,\qquad 0<q<\infty, \\ |f|_{B^{s}_{p,\infty}(I,X)} &:= \sup_{0<u<|I|/r}u^{-s}\omega_r(f,I,u)_p < \infty. \end{split} \end{equation} Moreover, a quasi-norm for $B^s_{p,q}(I,X)$ is given by \begin{equation}\label{norm-B} \|f\|_{B^s_{p,q}(I,X)}:=\|f\|_{L_p(I,X)} + |f|_{B^s_{p,q}(I,X)} , \end{equation} which is a norm whenever $1\leq p,q\leq \infty$. \begin{rem}\label{rem-equiv} One can replace the integral $\int_0^{|I|/r}$ by $\int_0^1$ if $|I|<\infty$ and still get an equivalent norm. {More precisely, \[ \int_0^{|I|/r} [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u} \simeq \int_0^1 [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u} \] with equivalence constants that depend only on $s$, $r$, $p$, $q$, but are otherwise independent of $f$ and $|I|$ as $|I| \to 0$. } { We prove this claim for $0<q<\infty$, the case $q=\infty$ is analogous. If $|I|/r<1$ then, on the one hand, $\int_0^{|I|/r} [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u} \le \int_0^1 [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u}$. On the other hand, $\omega_r(f,I,u)_p=\omega_r(f,I,|I|/r)_p$, when $u\geq |I|/r$. 
Therefore, using~\eqref{homogeneity} and the monotonicity of $\omega_r(f,I,\cdot)_p$, \begin{align*} \int_{|I|/r}^1 [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u} &= \omega_r(f,I,|I|/r)_p^q \int_{|I|/r}^1 u^{-sq-1} \mathrm{d} u \\ &\lesssim \omega_r(f,I,|I|/(2r))_p^q \, (|I|/r)^{-sq} \\ &\lesssim \omega_r(f,I,|I|/(2r))_p^q \int_{|I|/(2r)}^{|I|/r} u^{-sq-1} \mathrm{d} u \\ &\le \int_{|I|/(2r)}^{|I|/r}[u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u}, \end{align*} which yields the second inequality for the case $|I|/r<1$. } { If $|I|/r > 1$, trivially $\int_0^1 [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u} \le \int_0^{|I|/r} [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u}$. Besides, using again~\eqref{homogeneity} and the monotonicity of $\omega_r(f,I,\cdot)_p$, } \[ \omega_r\left(f,I,\frac 12\right)_p\leq \omega_r(f,I,u)_p\leq \omega_r\left(f,I,\frac{|I|}r\right)_p\lesssim {|I|^r} \omega_r\left(f,I,\frac 12\right)_p, \quad \frac 12\leq u\leq |I|/r. \] Hence, \[ \int_1^{|I|/r}[u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u} \lesssim {|I|^{rq}}\omega_r\left(f,I,\frac 12\right)_p^q \lesssim {|I|^{rq}}\int_{\frac 12}^1 [u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u}, \] which proves the claim for the case $|I|/r > 1$. \end{rem} \begin{rem} Our definition for the Besov spaces above is in good agreement with the standard case: When $f:\Omega\rightarrow \mathbb{R}$, with $\Omega$ a domain of $\mathbb{R}^n$, the usual Besov spaces $B^s_{p,q}(\Omega)$ are defined as those subspaces containing all functions $f\in L_p(\Omega)$ for which \begin{equation} |f|_{B^{s}_{p,q}(\Omega)}:= \displaystyle\left(\int_0^{\diam(\Omega)}\left[u^{-s}\omega_r(f,\Omega,u)_p\right]^q\frac{\mathrm{d} u}{u}\right)^{1/q}<\infty \end{equation} (with the usual modification if $q=\infty$) and $r = \lfloor s \rfloor + 1$. Here the modulus of smoothness involved is the usual one given in \eqref{mod-smooth-B}. 
The space $B^s_{p,q}(\Omega)$ is then quasi-normed via $\|f\|_{B^s_{p,q}(\Omega)}:=\|f\|_{L_p(\Omega)}+|f|_{B^s_{p,q}(\Omega)}$. For more information on these spaces we refer to \cite{DL93, T83}. \end{rem} Later on it will be useful for us to discretize the quasi-seminorm \eqref{seminorm-B} as follows. \begin{lem}\label{discrete-seminorm} The quasi-seminorm \eqref{seminorm-B} for $B^s_{p,q}(I,X)$ is equivalent to \begin{equation} \label{seminorm-B-2} \begin{split} |f|_{B^{s}_{p,q}(I,X)}^* &:= \left(\sum_{k=0}^{\infty}\left[2^{ks}\omega_r(f,I,2^{-k})_p\right]^q\right)^{1/q}, \qquad 0<q<\infty, \\ |f|_{B^{s}_{p,\infty}(I,X)}^* &:= \sup_{k\geq 0}2^{ks}\omega_r(f,I,2^{-k})_p, \end{split} \end{equation} with constants of equivalence independent of $f$ and $I$ as $|I| \to 0$. \end{lem} \begin{proof} The proof follows along the lines of the standard case, which may be found in \cite[p.~56]{DL93}. Using \eqref{homogeneity} with $m=2$ and the monotonicity of $\omega_r(f,I,\cdot)_p$ we see that for $u\in [2^{-k-1}, 2^{-k}]$ we have \begin{align*} 2^{-r}\left(2^{ks}\omega_r(f,I,2^{-k})_p\right)^{\min\{1,p\}} &\leq \left(u^{-s}\omega_r(f,I,u)_p\right)^{\min\{1,p\}}\\ &\leq \left(2^{(k+1)s}\omega_r(f,I,2^{-k})_p\right)^{\min\{1,p\}}. \end{align*} Raising all terms of the inequality to the power ${\frac{1}{\min\{1,p\}}}$ we obtain \[ u^{-s}\omega_r(f,I,u)_p \simeq 2^{ks}\omega_r(f,I,2^{-k})_p \qquad \text{for} \qquad u\in [2^{-k-1}, 2^{-k}]. \] Hence, since $\int_{2^{-k-1}}^{2^{-k}}\frac{\mathrm{d} u}{u}=\ln 2\simeq 1$ we get \[ \left(\int_{2^{-k-1}}^{2^{-k}}[u^{-s}\omega_r(f,I,u)_p]^q\frac{\mathrm{d} u}{u}\right)^{1/q}\simeq 2^{ks}\omega_r(f,I,2^{-k})_p. \] Summing over $k\ge 0$ and taking into account Remark \ref{rem-equiv}, this completes the proof for $0<q<\infty$. The case $q=\infty$ is analogous.
\end{proof} \subsubsection{Embedding results} Before we provide some embeddings for the scale $B^s_{p,q}(I,X)$ needed later on, let us briefly recall what is known concerning the Besov spaces $B^s_{p,q}(\Omega)$. \begin{prop} \label{prop-emb-B} Let $s>0$ and $0<p,q\leq\infty$. \begin{itemize} \item[(i)] Let $0<\varepsilon<s$, $0< \nu\leq\infty$, and $q\leq \theta\leq\infty$, then \[ B^{s}_{p,q}(\Omega) \hookrightarrow B^{s-\varepsilon}_{p,\nu}(\Omega)\qquad\text{and}\qquad B^s_{p,q}(\Omega) \hookrightarrow B^s_{p,\theta}(\Omega). \] \item[(ii)] (Sobolev-type embedding) \ Let $0<\sigma< s$ and $p< \tau $ be such that \begin{equation} s-\frac{n}{p} \geq \sigma-\frac{n}{\tau}, \label{delta-B} \end{equation} then \begin{equation}\label{sob-emb-B} B^s_{p,q}(\Omega) \hookrightarrow B^\sigma_{\tau,\theta}(\Omega), \end{equation} where $0<\theta\leq \infty$ and, additionally, $q\leq \theta$ if an equality holds in \eqref{delta-B}. Moreover, in the limiting case when $\sigma=0$ and $\theta$ is such that \begin{equation} s-\frac{n}{p}\geq -\frac{n}{\theta}, \label{delta-2-B} \end{equation} we have \begin{equation}\label{Lim-emb-B} B^s_{p,q}(\Omega) \hookrightarrow L_{\theta}(\Omega), \end{equation} where again $q\leq \theta$ if an equality holds in \eqref{delta-2-B}. \item[(iii)] If the domain $\Omega\subset \mathbb{R}^n$ is bounded, then for $\tau\leq p$ we have the embedding \begin{equation} B^s_{p,q}(\Omega)\hookrightarrow B^s_{\tau,q}(\Omega). \end{equation} \end{itemize} \end{prop} \noindent \begin{minipage}{0.44\textwidth} \begin{rem} \begin{itemize} \item[(i)] The above results can be found in \cite[\S~2.10, 12.8]{DL93}, \cite[Thm.~1.15]{HS09}, and \cite{BS88}. \\ In the interpolation diagram aside we have illustrated the area of possible embeddings of a fixed original space $B^s_{p,q}(\Omega)$ into spaces $B^{\sigma_1}_{\tau_1,\nu_1}(\Omega)$ and $B^{\sigma_2}_{\tau_2,\nu_2}(\Omega)$. 
The lighter shaded area corresponds to the additional embeddings we have if the underlying domain $\Omega$ is bounded. \end{itemize} \end{rem} \end{minipage}\hfill \begin{minipage}{0.55\textwidth} \includegraphics[width=\textwidth]{embed-bd-dom.pdf} \captionof{figure}{Embeddings for $B^s_{p,q}(\Omega)$} \end{minipage}\\ \begin{itemize} \item[(ii)] In the non-limiting case (corresponding to the strict inequality in \eqref{delta-B} and \eqref{delta-2-B}) the embeddings in Proposition \ref{prop-emb-B} are known to be compact. In particular, for $\alpha>0$ and $p<\tau$, the embeddings $B^{s+\alpha}_{p,p}(\Omega)\hookrightarrow B^{s}_{\tau, \tau}(\Omega)$ ($s>0$) and $B^{\alpha}_{p,p}(\Omega)\hookrightarrow L_{\tau}(\Omega)$ ($s=0$) are compact if, and only if, \[\alpha-\frac{n}{p}>-\frac {n}{\tau}. \] \end{itemize} For the scale $B^s_{p,q}(I,X)$ there are counterparts of the embeddings from Proposition~\ref{prop-emb-B}. \begin{prop}\label{prop-emb-BX} Assume $s>0$ and $0<p,q\leq\infty$. \begin{itemize} \item[(i)] Let $0<\varepsilon<s$, $0< \nu\leq\infty$, and $q\leq \theta\leq\infty$, then \[ B^{s}_{p,q}(I,X) \hookrightarrow B^{s-\varepsilon}_{p,\nu}(I,X)\qquad\text{and}\qquad B^s_{p,q}(I,X) \hookrightarrow B^s_{p,\theta}(I,X). \] \item[(ii)] If the time interval $I$ is bounded, then for $\tau\leq p$ we have the embedding \begin{equation} B^s_{p,q}(I,X)\hookrightarrow B^s_{\tau,q}(I,X). \end{equation} \end{itemize} \end{prop} \begin{proof} The embeddings in (i) and (ii) can be proven as in the standard case, using the discrete version of the seminorm for Besov spaces, i.e., \[ |f|_{B^s_{p,q}(I,X)} \simeq |f|_{B^s_{p,q}(I,X)}^* = \left( \sum_{k=0}^{\infty} [2^{ks}\omega_r(f,I,2^{-k})_p]^q\right)^{\frac 1q}, \quad 0 < q < \infty, \] with the analogous one for $q = \infty$. Indeed, the second embedding in (i) is just a consequence of the monotonicity of the $\ell_q$ sequence spaces, i.e., $\ell_q\hookrightarrow \ell_{\theta}$ for $q\leq \theta$.
The first embedding for $q\leq \nu$ is also clear since $2^{k(s-\varepsilon)}\leq 2^{ks}$. If $\nu<q$ one uses H\"older's inequality with $\frac q\nu>1$, which gives the desired result. \\ Moreover, (ii) follows immediately since for $\tau\leq p$ and $|I|<\infty$ we have $ L_p(I,X)\hookrightarrow L_{\tau}(I,X). $ \begin{comment} Concerning (ii), we use the embedding \pedro{$B_{p,p}^s(I) \hookrightarrow L_2(I)$ from Proposition~\ref{prop-emb-B}~(ii) (in the third line that follows)} and estimate \begin{align}\label{est-a} \|f\|_{L_2(I\times \Omega)} ={}& \left(\int_I\int_{\Omega}|f(t,x)|^2\mathrm{d} x\mathrm{d} t\right)^{1/2}\notag = \left(\int_{\Omega}\int_I|f(t,x)|^2\mathrm{d} t\mathrm{d} x\right)^{1/2} \notag\\ ={}& \left(\int_{\Omega}\|f(\cdot, x)\|_{L_2(I)}^2\mathrm{d} x\right)^{1/2} \notag\\ \lesssim{}& \left(\int_{\Omega}\|f(\cdot, x)\|_{B^s_{p,p}(I)}^2\mathrm{d} x\right)^{1/2} \notag\\ ={}& \left( \int_\Omega \| f(\cdot,x) \|_{L_p(I)}^2 \mathrm{d} x\right)^{1/2} + \left(\int_{\Omega} |f(\cdot,x)|_{B^s_{p,p}(I)}^2 \mathrm{d} x\right)^{1/2} \notag\\ ={}& \left( \int_\Omega \left( \int_I |f(t,x)|^p \mathrm{d} t\right)^{2/p} \mathrm{d} x\right)^{1/2} \notag\\ &+ \left(\int_{\Omega}\left(\int_0^1 t^{-sp}\omega_r(f(\cdot, x),I,t)_p^p\frac{\mathrm{d} t}{t}\right)^{2/p}\mathrm{d} x\right)^{1/2} \notag\\ ={}& \left\| \int_I |f(t,x)|^p \mathrm{d} t \right\|_{ L_{2/p}(\Omega) }^{1/p} \notag + \left\|\int_0^1 t^{-sp}\omega_r(f(\cdot, x),I,t)_p^p\frac{\mathrm{d} t}{t} \right\|_{ L_{2/p}(\Omega)}^{1/p}\notag \notag\\ \le{}& \left( \int_I \left\| |f(t,\cdot)|^p \right\|_{L_{2/p}(\Omega)}\mathrm{d} t \right)^{1/p} \notag\\ &+ \left(\int_0^1 t^{-sp}\left\|\omega_r(f(\cdot, x),I,t)_p^p\right\|_{L_{2/p}(\Omega)} \frac{\mathrm{d} t}{t}\right)^{1/p}, \end{align} where in the last step we used the generalized triangle inequality in both terms because $2/p\ge 1$. 
Clearly, \[ \int_I \left\| |f(t,\cdot)|^p \right\|_{L_{2/p}(\Omega)} \mathrm{d} t = \int_I \left| \int_\Omega |f(t,x)|^2 \mathrm{d} x \right|^{p/2} \mathrm{d} t = \| f \|_{L_p(I,L_2(\Omega))}^p. \] Using the fact that the moduli of smoothness are equivalent, i.e., $\omega_r\simeq w_r$, we see that \begin{align}\label{est-b} \left\|\omega_r(f(\cdot, x),I,u)_p^p \right\|_{L_{2/p}(\Omega)} &\simeq \left\| \frac 1u \int_0^u \left\|\Delta_{(h_1,0)}^r f(\cdot, x)|L_p(I_{rh_1})\right\|^p\mathrm{d} h_1\right\|_{L_{2/p}(\Omega)}\notag\\ &= \left\| \frac 1u \int_0^u \int_{I_{rh_1}}|\Delta_{(h_1,0)}^r f(y, x)|^p \mathrm{d} y\mathrm{d} h_1 \right\|_{L_{2/p}(\Omega)}\notag\\ &\leq \frac 1u \int_0^u \int_{I_{rh_1}} \left(\int_{\Omega}|\Delta_{(h_1,0)}^r f(y, x)|^2 \mathrm{d} x\right)^{p/2} \mathrm{d} y\mathrm{d} h_1\notag\\ &= \frac 1u\int_0^u\int_{I_{rh_1}} \left\|\Delta^r_{(h_1,0)}f(y,\cdot) \right\|_{L_2(\Omega)}^p\mathrm{d} y \mathrm{d} h_1\notag\\ &=\frac 1u\int_0^u \left\|\Delta^r_{(h_1,0)}f \right\|_{ L_p(I_{rh_1},L_2(\Omega))}^p\mathrm{d} h_1\notag\\ &\simeq \omega_r(f,I,u)_p^p, \end{align} where the third step is again a consequence of the generalized triangle inequality, which we can use since $p<2$. Inserting \eqref{est-b} into \eqref{est-a} yields \begin{align*} \|f\|_{L_2(I\times \Omega)} &\lesssim \| f \|_{L_p(I,L_2(\Omega))} + \left(\int_0^1 u^{-sp}\omega_r(f,I,u)_p^p \frac{\mathrm{d} u}{u}\right)^{1/p} \\ &= \| f \|_{L_p(I,L_2(\Omega))} + |f|_{B^s_{p,p}(I,L_2(\Omega))} \end{align*} which completes the proof. \end{comment} \end{proof} \begin{comment} \begin{cor}\label{cor-emb-BX} If $I$ is a bounded interval, we have that \[ B_{p,p}^s(I,L_2(\Omega)) \hookrightarrow L_2(I\times\Omega), \] whenever $s > 0$ and $s-1/p \pedro{\ge} -1/2$. \end{cor} \begin{proof} The result for $p < 2$ is contained in Proposition~\ref{prop-emb-BX}~(ii), using the first embedding of (i). 
If $p \ge 2$ and $s-1/p > -1/2$ we let $\epsilon > 0$ be such that $s-\epsilon-1/p > -1/2$ and use Proposition~\ref{prop-emb-BX}~(i) and~(iii) to see that \[ B_{p,p}^s(I,X) \hookrightarrow B_{p,2}^{s-\epsilon}(I,X) \hookrightarrow B_{2,2}^{s-\epsilon}(I,X). \] The last space is clearly contained in $L_2(I,X) = L_2(I\times\Omega)$. \end{proof} \end{comment} \begin{rem} The counterpart of the limiting embedding \eqref{Lim-emb-B} in Prop. \ref{prop-emb-B}(ii) is derived in Corollary \ref{cor-gen-whitney} as an application of our generalized Whitney's estimate presented in Proposition~\ref{prop-gen-whitney}. Moreover, the Sobolev-type embeddings as stated in Prop. \ref{prop-emb-B}(ii), formula \eqref{sob-emb-B}, should also hold. The proof in the standard case, cf. \cite[\S~12.8]{DL93}, involves spline representations for Besov spaces, which we have not provided for our generalized setting so far. This is beyond the scope of the present paper. \end{rem} \section{Jackson- and Whitney-type theorems for vector-valued functions} \label{sec:generalizedWhitney} In this section we prove Jackson- and Whitney-type theorems for functions defined on an interval and taking values in a Banach space. Some of the proofs are rather technical and analogous to the ones presented for scalar-valued functions in \cite{Sto77, OS78}. Let us mention that, regarding Jackson's theorem, there is a proof for $1\le p \le \infty$ which is based on the $K$-functional method of interpolation~\cite[\S3.5]{PP87} and seems extendable to vector-valued functions. There is an alternative proof in~\cite[Thm.~7.1]{PP87}, which holds for $0<p\le \infty$ and avoids all the technicalities from~\cite{Sto77, OS78}. However, it is based on a contradiction argument and does not work in the vector-valued case, or at least we could not generalize it to the infinite-dimensional setting. The proof of Whitney's theorem that we present below in Section~\ref{S:Whitney} follows the steps from~\cite[Sect.~6.1]{DeV98}.
In order to do so, we need an equivalence of $L_p$-norms for vector-valued polynomials, which is contained in Lemma~\ref{lem-scaling-aux} and Corollary~\ref{cor-scaling-lpq}. After proving Whitney's estimate in $B^s_{q,q}(I,X)\cap L_p(I,X)$ in Proposition~\ref{prop-gen-whitney} we obtain the embedding $B^s_{q,q}(I,X) \subset L_p(I,X)$, and arrive at Whitney's estimate in $B^s_{q,q}(I,X)$. \subsection{Jackson's estimate} The goal of this section is to prove a Jackson-type estimate, which is stated below in Theorem~\ref{thm-jackson} and requires some definitions. Given a separable Banach space $X$, $r \in \mathbb{N}$, and an interval $I = [a,b)$, we denote by $\mathbb{V}^r_{I,X}$ the space of $X$-valued polynomials of order $r$ w.r.t.\ time, which we define as follows: \begin{equation} \label{Vr} \mathbb{V}^r_{I,X}:=\bigg\{ P:I \to X: \ P(t)=\sum_{j=1}^{r}\ell_j^r(t) P_j, \ P_j\in X, \ t\in I\bigg\}, \end{equation} with $\ell_j^r$ the usual (scalar-valued) Lagrange basis functions \begin{equation} \label{Lagrange} \ell_j^r(t)=\prod_{i\neq j}\frac{t-t_i}{t_j-t_i}\qquad \text{for } t_j = a + (j-1) \frac{b-a}{r-1}, \qquad j=1,2,\dots,r. \end{equation} Notice that any basis for the space $\Pi^r$ of scalar-valued polynomials in $\mathbb{R}$, such as $1,t,t^2,\dots,t^{r-1}$, leads to the same space $\mathbb{V}^r_{I,X}$. The main result of this section is the following. \begin{thm}[Jackson's Theorem]\label{thm-jackson} Let $0<p\le\infty$ and $r \in \mathbb{N}$. Then there exists a constant $c>0$ such that for any interval $I$ and every $f \in L_p(I,X)$, there exists a vector-valued polynomial $P_r \in \mathbb{V}_{I,X}^r$ which satisfies \begin{equation}\label{Jackson0} \|f-P_r\|^p_{L_p(I,X)}\leq c \, w_r(f,I,h)_p^p\qquad \text{with}\qquad h=\frac{|I|}{2r}. \end{equation} In other words, there exist $a_0$, $a_1$, \dots, $a_{r-1}\in X$ such that, if $P_r(t)=a_0+a_1 t+\ldots + a_{r-1}t^{r-1}$, then \eqref{Jackson0} holds.
\end{thm} Due to the homogeneity~\eqref{homogeneity} and the equivalence of Lemma~\ref{lem-equiv}, Jackson's estimate can also be stated as: \begin{equation}\label{Jackson} E_r(f,I)_p := \inf_{P_r \in \mathbb{V}_{I,X}^r} \|f-P_r\|^p_{L_p(I,X)} \leq c \, w_r(f,I,|I|)_p^p, \qquad \forall f \in L_p(I,X). \end{equation} In order to prove this estimate, we need several auxiliary lemmas, which are rather technical and analogous to the ones proved for scalar-valued functions in \cite{Sto77, OS78}. We generalize them to our setting. The basic idea is to first study periodic functions and their higher order differences, and then relate them to differences of the functions we are actually interested in. Let $f:[a,b)\rightarrow X$ be an $X$-valued function and let $f^{\ast}$ denote its periodic continuation with period $d:=b-a$, i.e., \[ f^{\ast}(t)= f(t-\ell d), \quad \text{where $\ell\in \mathbb{Z}$ is such that $t-\ell d\in [a,b)$}. \] Moreover, for $0<p<\infty$ and $k\in \mathbb{N}$ consider the integrals \begin{align} I^{\ast}_{p,k}(h)&:=\int_a^b \|\Delta^k_hf^{\ast}(t)\|_X^p\mathrm{d} t =\int_0^d \|\Delta^k_hf^{\ast}(t)\|_X^p\mathrm{d} t,\label{I_p_1}\\ I_{p,k}(h)&:=\int_a^{b-kh} \|\Delta^k_hf(t)\|_X^p\mathrm{d} t.\label{I_p_2} \end{align} Note that the notation does not reflect the fact that the expressions $I^{\ast}_{p,k}(h)$ and $I_{p,k}(h)$ also depend on the functions $f^{\ast}$ and $f$, respectively; it will always be clear from the context which function we are dealing with. We start with the following result showing how the best approximation of some function $f\in L_p(I,X)$ by a constant $a_0\in X$ can be bounded using first differences of its periodic continuation $f^{\ast}$. \begin{lem}\label{lem-aux-1} Let $0<p<\infty$ and $f\in L_p(I,X)$. There exists $a_0\in X$ such that \[ \|f-a_0\|^p_{L_p(I,X)} \leq \frac 1d \int_0^d I^{\ast}_{p,1}(y)\mathrm{d} y. \] \end{lem} \begin{proof} We show how to construct $a_0\in X$ satisfying the desired inequality.
Let $f^{\ast}$ denote the $d$-periodic continuation of $f$. We start from the following easy observation: \begin{align*} \inf_{a_0\in X}\|f-a_0\|_{L_p(I,X)}^p &=\inf_{a_0\in X}\int_0^d \|f^{\ast}(t)-a_0\|_{X}^p\mathrm{d} t\\ &= \inf_{a_0\in X}\int_0^d \|f^{\ast}(t+y)-a_0\|_{X}^p\mathrm{d} t\\ &\leq \int_0^d \|f^{\ast}(t+y)-f^{\ast}(y)\|_{X}^p\mathrm{d} t, \quad \text{for any $y \in [0,d)$}. \end{align*} Now using the fact that $f^{\ast}$ is $d$-periodic and the left-hand side does not depend on $y$, integration from $0$ to $d$ w.r.t. $y$ yields \begin{align*} \inf_{a_0\in X}\|f-a_0\|_{L_p(I,X)}^p & \leq \frac 1d\int_0^{d}\int_0^{d}\|f^{\ast}(t+y)-f^{\ast}(y)\|_X^p\mathrm{d} t\mathrm{d} y\\ & = \frac 1d\int_a^b\int_a^b\|f(t)-f(y)\|_X^p\mathrm{d} t\mathrm{d} y = \frac 1d \int_a^b g(y)\mathrm{d} y, \end{align*} where in the last line we put $g(y):=\int_a^b\|f(t)-f(y)\|_X^p\mathrm{d} t$. Note that the set $S$ defined as \[ S:=\Big\{ z\in [a,b): \ g(z)\leq \frac 1d \int_a^b g(y)\mathrm{d} y\Big\}, \] is non-empty: otherwise we would have $g(z)>\frac 1d \int_a^b g(y)\mathrm{d} y$ for every $z\in [a,b)$, and integrating this strict inequality over $[a,b)$ would yield a contradiction. Therefore, taking $z\in S$ and putting $a_0:=f(z)$ we obtain \begin{align*} \|f-a_0\|^p_{L_p(I,X)} &= \int_a^b \|f(t)-f(z)\|_{X}^p \mathrm{d} t =g(z)\\ & \leq \frac 1d \int_a^b\int_a^b \|f(t)-f(y)\|_X^p\mathrm{d} t\mathrm{d} y \\ & = \frac 1d\int_0^{d}\int_0^{d}\|f^{\ast}(t+y)-f^{\ast}(y)\|_X^p\mathrm{d} t\mathrm{d} y\\ & = \frac 1d\int_0^{d}\int_0^{d}\|f^{\ast}(t+y)-f^{\ast}(y)\|_X^p\mathrm{d} y\mathrm{d} t\\ &= \frac 1d \int_0^d I^{\ast}_{p,1}(t)\mathrm{d} t, \end{align*} which shows that $a_0:=f(z)$ with $z\in S$ yields the assertion. \end{proof} The following lemma shows that we can bound integrals of lower order differences of periodic functions by integrals involving higher order differences. \begin{lem}\label{lem-aux-2} Let $0<p<\infty$ and $k\in \mathbb{N}$.
Then we have the following relation \[ \int_0^d I^{\ast}_{p,k}(y)\mathrm{d} y\le c\, \int_0^d I^{\ast}_{p,k+1}(y)\mathrm{d} y, \] with the constant $c>0$ only depending on $k$ and $p$, but otherwise independent of the function $f$ and the interval $[a,b)$. \end{lem} \begin{proof} We make use of the following identity \begin{equation}\label{est-09a} \Delta^k_{2y}f^{\ast}(t)-2^k\Delta^k_y f^{\ast}(t)=\sum_{i=1}^k{k\choose i}\sum_{m=0}^{i-1}\Delta_y^{k+1}f^{\ast}(t+my), \end{equation} which can be found in \cite[Sect. 3.3.2]{Tim63}. Let $0<p<1$. In this case we know that $|\cdot |^p$ is subadditive. This, combined with integration from $0$ to $d$ w.r.t. $t$ in \eqref{est-09a}, leads to \begin{multline*} 2^{kp}\int_0^d \|\Delta_y^k f^{\ast}(t)\|_X^p \mathrm{d} t-\int_0^d \|\Delta^k_{2y}f^{\ast}(t)\|_X^p\mathrm{d} t \\ \leq \sum_{i=1}^k{k\choose i}\sum_{m=0}^{i-1}\int_0^d \|\Delta_y^{k+1}f^{\ast}(t+my)\|_X^p\mathrm{d} t. \end{multline*} Now integrating once more from $0$ to $d$ w.r.t. $y$ and using the definition of $I^{\ast}_{p,k}$ gives \begin{equation} \label{est-08b} 2^{kp}\int_0^d I^{\ast}_{p,k}(y)\mathrm{d} y-\int_0^d I_{p,k}^*(2y)\mathrm{d} y \leq \sum_{i=1}^k{k\choose i}\, i \int_0^d I_{p,k+1}^{\ast}(y)\mathrm{d} y. \end{equation} Since $I^{\ast}_{p,k}(y)$ is $d$-periodic, we have the identity $$\int_0^d I^{\ast}_{p,k}(2y)\mathrm{d} y=\frac 12\int_0^{2d} I^{\ast}_{p,k}(y')\mathrm{d} y'=\int_0^d I^{\ast}_{p,k}(y)\mathrm{d} y. $$ Inserting this in \eqref{est-08b} we obtain \begin{equation} \label{est-08a} \left(2^{kp}-1\right)\int_0^d I^{\ast}_{p,k}(y)\mathrm{d} y \leq c_{k,p}\int_0^d I_{p,k+1}^{\ast}(y)\mathrm{d} y, \end{equation} which gives the desired estimate in the case $0<p<1$. When $1\leq p<\infty$ we proceed with \eqref{est-09a} as follows: We add $2^k\Delta^k_y f^{\ast}(t)$ to both sides of \eqref{est-09a}, take the $L_p$-norm w.r.t. $t$ on $(0,d)$ and apply the triangle inequality, and afterwards integrate from $0$ to $d$ w.r.t. $y$.
This gives \eqref{est-08b} but with the integrals to the power $\frac 1p$. We proceed as before and end up with \eqref{est-08a} to the power $\frac 1p$, which proves the asserted estimate. \end{proof} The following lemma shows how to bound integrals of higher order differences of the periodic extension of a function by integrals of higher order differences of the original function plus first order differences. \begin{lem} \label{lem-aux-3} Let $f\in L_p(I,X)$, where $0<p<\infty$. Then for any $k\in \mathbb{N}$ we have \[ \int_0^{\frac dk}I^{\ast}_{p,k}(y)\mathrm{d} y\leq 2\int_0^{\frac dk}I_{p,k}(y)\mathrm{d} y + c \, d \, I_{p,1}\left(\frac dk\right), \] with the constant $c>0$ only depending on $k$ and $p$, but otherwise independent of the function $f$ and the interval $[a,b)$. \end{lem} \begin{proof} Using the definition $\Delta^k_y f^{\ast}(t)=\sum_{i=0}^k(-1)^{k-i}{k\choose i}f^{\ast}(t+iy)$ together with the fact that $f^{\ast}$ is the $d$-periodic continuation of $f$, i.e., $f=f^{\ast}$ on $[a,b)$ and $f^{\ast}(t)=f(t-d)$ for $t\in [b,b+d)$, we express $I_{p,k}^{\ast}$ in terms of the values of $f$ as follows: \begin{align} I^{\ast}_{p,k}(y) &=\int_a^b \|\Delta_y^k f^{\ast}(t)\|_X^p\mathrm{d} t \notag\\ &= \int_a^{b-ky} \|\Delta_y^k f(t)\|_X^p\mathrm{d} t+\sum_{j=1}^k \int_{b-jy}^{b-(j-1)y}\|S_j\|_X^p\mathrm{d} t, \qquad 0 \le y\leq \frac dk, \label{est-09} \end{align} where \[ S_j(t)=\sum_{i=0}^{j-1}(-1)^{k-i}{k\choose i}f(t+iy)+\sum_{i=j}^k(-1)^{k-i}{k\choose i}f(t+iy-d). \] \begin{minipage}{\textwidth} \includegraphics[width=12cm]{ext_f_periodic.pdf} \captionof{figure}{Express $f^{\ast}$ via $f$ with $j=3$} \end{minipage}\\[0.2cm] Now we transform $S_j$ as follows: we add and subtract the terms $f\left(t+i\left(y-\frac dk\right)\right)$ in order to obtain the value of the $k$-th difference of $f$ at the point $t$ with step $y-\frac dk$.
This yields \begin{align*} S_j&= \sum_{i=0}^k (-1)^{k-i}{k\choose i} \textcolor{blue}{f\left(t+i\Big(y-\frac dk\Big)\right) } \quad {\textcolor{Magenta}{\nearrow i=0 \text{ first sum \quad }}\atop \textcolor{Cyan}{\searrow i=k \text{ second sum}}} & (=:T_1)\\ & \quad + \textcolor{Magenta}{\sum_{i=1}^{j-1}} (-1)^{k-i}{k\choose i} \left[f\left(t+iy\right)\textcolor{blue}{-f\left(t+i\Big(y-\frac dk\Big)\right)}\right]& (=:T_2(j))\\ & \quad + \textcolor{Cyan}{\sum_{i=j}^{k-1}} (-1)^{k-i}{k\choose i} \left[f\left(t+iy-d\right)\textcolor{blue}{-f\left(t+i\Big(y-\frac dk\Big)\right)}\right] & (=:T_3(j))\\ &=: T_1+T_2(j)+T_3(j). \end{align*} Since $T_1$ does not depend on $j$, \begin{align} \sum_{j=1}^k \int_{b-jy}^{b-(j-1)y}\|T_1\|_X^p \,\mathrm{d} t &=\int_{b-ky}^{b}\|T_1\|_X^p \mathrm{d} t\notag =\int_{b-ky}^{b}\|\Delta^k_{y-\frac dk}f(t)\|_X^p \,\mathrm{d} t\notag \\ &=\int_{a}^{a+ky}\|\Delta^k_{\frac dk-y}f(t)\|_X^p \mathrm{d} t =I_{p,k}\left(\frac dk-y\right), \label{est-10} \end{align} where in the third step we changed the step $y-\frac dk$ involving the $k$-th difference of $f$ into $\frac dk-y$ in order to obtain a nonnegative step. 
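For completeness, let us record the elementary identity behind this change of step: $\Delta^k_{-h}f(t)=(-1)^k\Delta^k_{h}f(t-kh)$ whenever all terms are defined, applied here with $h=\frac dk-y\ge 0$. The factor $(-1)^k$ disappears inside $\|\cdot\|_X^p$, and the shift $t\mapsto t-k\left(\frac dk-y\right)=t-(d-ky)$ maps the interval $[b-ky,b]$ onto $[a,a+ky]$.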
We now estimate the sum \begin{align} \sum_{j=1}^k &\int_{b-jy}^{b-(j-1)y}\|T_2(j)\|_X^p+\|T_3(j)\|_X^p \mathrm{d} t\notag\\ &\lesssim \sum_{j=1}^k\int_{b-jy}^{b-(j-1)y} \left\{ \sum_{i=1}^{j-1}{k\choose i}^p \Big\|f(t+iy)-f\left(t+i\Big(y-\frac dk\Big)\right)\Big\|_X^p\right.\notag\\ & \qquad \qquad \qquad\qquad + \left.\sum_{i=j}^{k-1}{k\choose i}^p \Big\|f(t+iy-d)-f\left(t+i\Big(y-\frac dk\Big)\right)\Big\|_X^p \right\}\mathrm{d} t\notag\\ \intertext{\small \qquad \qquad (change summation $\sum_{j=1}^k \sum_{i=1}^{j-1}=\sum_{i=1}^{k-1} \sum_{j=i+1}^{k}$ and $ \sum_{j=1}^k \sum_{i=j}^{k-1}=\sum_{i=1}^{k-1} \sum_{j=1}^{i}$)} &= \sum_{i=1}^{k-1}{k\choose i}^p \int_{b-ky}^{b-iy}\left\|f(t+iy)-f\left(t+i\Big(y-\frac dk\Big)\right)\right\|_X^p\mathrm{d} t\notag\\ & \qquad \qquad \qquad\qquad \qquad + \sum_{i=1}^{k-1}{k\choose i}^p \int_{b-iy}^b\left\|f(t+iy-d)-f\left(t+i\Big(y-\frac dk\Big)\right)\right\|_X^p \mathrm{d} t \notag\\ \intertext{\small \qquad \qquad \qquad (1st integral: Substitution $t'=t+i\left(y-\frac dk\right)$ ; reverse sum $i\mapsto k-i$)} \intertext{\small \qquad \qquad \qquad (2nd integral: Substitution $t''=t+iy-d$)} &= \sum_{i=1}^{k-1}{k\choose i}^p \int_{\frac{k-i}{k}a+\frac ikb-iy}^{\frac{k-i}{k}a+\frac ik b}\left\|f\left(t'+(k-i)\frac dk\right)-f\left(t'\right)\right\|_X^p\mathrm{d} t'\notag\\ & \qquad \qquad \qquad\qquad \qquad + \sum_{i=1}^{k-1}{k\choose i}^p \int_{a}^{a+iy}\left\|f(t'')-f\left(t''+(k-i)\frac{d}{k}\right)\right\|_X^p \mathrm{d} t''. \label{est-08} \end{align} Now \eqref{est-09}, \eqref{est-10}, and \eqref{est-08} yield \begin{align*} I^{\ast}_{p,k}(y) \leq{}& I_{p,k}(y)+I_{p,k}\left(\frac dk-y\right) \\ & +\sum_{i=1}^{k-1}{k\choose i}^p \int_{\frac{k-i}{k}a+\frac ikb-iy}^{\frac{k-i}{k}a+\frac ik b}\left\|f\left(t'+(k-i)\frac dk\right)-f\left(t'\right)\right\|_X^p\mathrm{d} t'\notag\\ & + \sum_{i=1}^{k-1}{k\choose i}^p \int_{a}^{a+iy}\left\|f(t'')-f\left(t''+(k-i)\frac{d}{k}\right)\right\|_X^p \mathrm{d} t''. 
\end{align*} Integrating from $0$ to $\frac dk$ w.r.t. $y$ gives \begin{align*} \int_0^{\frac dk}I^{\ast}_{p,k}(y)\mathrm{d} y \le{}& \int_0^{\frac dk}I_{p,k}(y)\mathrm{d} y+ \int_0^{\frac dk}I_{p,k}\left(\frac dk-y\right)\mathrm{d} y \\ & +c \left\{ \sum_{i=1}^{k-1}\int_0^{\frac dk} \int_{\frac{k-i}{k}a+\frac ikb-iy}^{\frac{k-i}{k}a+\frac ik b}\left\|f\left(t'+(k-i)\frac dk\right)-f\left(t'\right)\right\|_X^p\mathrm{d} t' \mathrm{d} y\right.\notag\\ & \quad + \left.\sum_{i=1}^{k-1}\int_0^{\frac dk} \int_{a}^{a+iy}\left\|f(t'')-f\left(t''+(k-i)\frac{d}{k}\right)\right\|_X^p \mathrm{d} t''\mathrm{d} y\right\}. \end{align*} \begin{minipage}{0.6\textwidth} We change the order of integration in the double integrals. For the second integral this yields \[ \int_0^{\frac dk}\int_a^{a+iy}(\ldots)\mathrm{d} t''\mathrm{d} y \longrightarrow \int_a^{a+i\frac dk}\int_{\frac{t''-a}{i}}^{\frac dk}(\ldots)\mathrm{d} y \mathrm{d} t''. \] Similarly for the first one. Moreover, observing that the integrand in both cases does not depend on $y$ we obtain \end{minipage}\hfill \begin{minipage}{0.35\textwidth} \includegraphics[width=6cm]{pic_change_integration.pdf} \end{minipage}\\ \begin{align} \int_0^{\frac dk}&I^{\ast}_{p,k}(y)\mathrm{d} y\notag\\ &\le 2\int_0^{\frac dk}I_{p,k}(y)\mathrm{d} y + c\bigg\{ \sum_{i=1}^{k-1} \int_a^{a+i\frac dk}\left(\frac{t'}{i}-\frac ai\right)\left\|f\left(t'+(k-i)\frac dk\right)-f\left(t'\right)\right\|_X^p\mathrm{d} t' \notag\\ & \qquad + \sum_{i=1}^{k-1}\int_a^{a+i\frac dk}\left(\frac dk-\frac{t''-a}{i}\right)\left\|f(t'')-f\left(t''+(k-i)\frac{d}{k}\right)\right\|_X^p \mathrm{d} t'' \bigg\} \notag\\ &= 2\int_0^{\frac dk}I_{p,k}(y)\mathrm{d} y + c \frac dk\sum_{i=1}^{k-1} \int_a^{a+i\frac dk}\left\|f(t)-f\left(t+(k-i)\frac{d}{k}\right)\right\|_X^p \mathrm{d} t.\label{est-11} \end{align} Using a telescopic sum we see that \begin{align*} \Big\|f(t)-f\left(t+(k-i)\frac dk\right)\Big\|^p_X \lesssim \sum_{j=1}^{k-i}\Big\|f\left(t+(j-1)\frac 
dk\right)-f\left(t+j \frac dk\right)\Big\|^p_X \end{align*} and for $i=1,2,\ldots, k-1$, \begin{align} \sum_{j=1}^{k-i}&\int_a^{a+i\frac dk}\left\|f\left(t+(j-1)\frac dk\right)-f\left(t+j\frac dk\right)\right\|_X^p\mathrm{d} t\notag\\ &=\sum_{j=1}^{k-i}\int_{a+\frac{j-1}{k}d}^{a+\frac {i+j-1}{k}d}\left\|f\left(t'\right)-f\left(t'+\frac dk\right)\right\|_X^p\mathrm{d} t'\notag\\ &\leq (k-i) \int_{a}^{a+\frac{k-1}{k}d}\left\|f\left(t'\right)-f\left(t'+\frac dk\right)\right\|_X^p\mathrm{d} t', \label{est-12} \end{align} where in the second step we used a change of variables $t':=t+(j-1)\frac{d}{k}$. Inserting \eqref{est-12} into \eqref{est-11} finally gives \begin{align*} \int_0^{\frac dk}I^{\ast}_{p,k}(y)\mathrm{d} y &\le 2\int_0^{\frac dk}I_{p,k}(y)\mathrm{d} y + d \, c_{k,p} \int_{a}^{b-\frac dk}\left\|f\left(t'\right)-f\left(t'+\frac dk\right)\right\|_X^p\mathrm{d} t'\\ &=2\int_0^{\frac dk} I_{p,k}(y)\mathrm{d} y + c_{k,p}\, d \, I_{p,1}\left(\frac dk\right), \end{align*} which completes the proof. \end{proof} The previous lemmas give the following result, which shows that we can bound the best approximation of a function $f\in L_p(I,X)$ by a constant $a_0\in X$ with the help of integrals of higher order differences and first order differences of $f$. \begin{lem}\label{aux-lem-OS78} Let $I=[a,b)$, $0<p<\infty$, and $m\in \mathbb{N}$. 
There exists a constant $c=c_{m,p}$ such that for every $f\in L_p(I,X)$ there exists $a_0\in X$ satisfying, for $h=\frac{b-a}{m}$, \begin{align} \|f-a_0\|^p_{L_p(I,X)} &\leq c \left[\frac 1h \int_0^h \int_a^{b-ms}\|\Delta_s^mf(t)\|_X^p\mathrm{d} t\mathrm{d} s+ \int_a^{b-h}\|\Delta_h f(t)\|_X^p\mathrm{d} t\right]\label{aux-jackson}\\ &= c \left[w_m(f,I,h)_p^p+ \|\Delta_h f\|^p_{L_p(I_h,X)}\right].\notag \end{align} \end{lem} \begin{rem} Note that the second term with the first order differences in \eqref{aux-jackson} is crucial: if $f$ is a polynomial of degree $m-1$ the first integral on the right-hand side vanishes but the left-hand side might not. \end{rem} \begin{proof} We first notice that, by induction, we can easily check that \[ \Delta_{my}^m f^*(t) = \sum_{i_m=0}^{m-1} \dots \sum_{i_1=0}^{m-1} \Delta_y^m f^*(t+i_1y+\dots+i_my), \] so that $I_{p,m}^*(my) \lesssim I_{p,m}^*(y)$. Taking $a_0\in X$ as constructed in Lemma \ref{lem-aux-1} and using Lemmas \ref{lem-aux-2} and \ref{lem-aux-3} with $k=m$, setting $h = \frac dm$, we obtain \begin{align*} \|f-a_0\|_{L_p(I,X)}^p &\ \lesssim \frac 1d \int_0^d I^{\ast}_{p,1}(y)\mathrm{d} y \lesssim \frac 1d \int_0^d I^{\ast}_{p,m}(y)\mathrm{d} y \\ &\ \lesssim \frac1d \int_0^d I_{p,m}^*\left(\frac ym\right) \mathrm{d} y =\frac md \int_0^{\frac dm}I^{\ast}_{p,m}(y')\mathrm{d} y'\\ &\ \leq \frac md\left[2\int_0^{\frac dm}I_{p,m}(y)\mathrm{d} y+c_{p,m}dI_{p,1}\left(\frac dm\right)\right]\\ &\ =c'_{p,m} \left[\frac 1h \int_0^{h}\int_a^{b-my}\|\Delta_y^m f(t)\|_X^p\mathrm{d} t\mathrm{d} y+\int_a^{b-h}\|\Delta_h f(t)\|_X^p\mathrm{d} t\right], \end{align*} which is the desired result. \end{proof} Finally, a repeated application of Lemma \ref{aux-lem-OS78} now allows us to establish Jackson's inequality. \begin{proof}[Proof of Theorem~\ref{thm-jackson}] We assume $I = [0,1)$. The general case follows by scaling, using~\eqref{scaling}.
Let $h=\frac{1}{2r}$, $f\in L_p(I,X)$, and denote the approximant $a_0\in X$ from Lemma \ref{aux-lem-OS78} by $M(f,I):=a_0$. Now define the coefficients $a_0, \ldots, a_{r-1}$ recursively as follows: \begin{align*} a_{r-1}&=M\left(\Delta^{r-1}_hf, [0,1-(r-1)h]\right)\frac{1}{h^{r-1}}\frac{1}{(r-1)!},\\ f_1(t)&=f(t)-a_{r-1}t^{r-1}, \\ a_{r-2}&=M(\Delta^{r-2}_hf_1, [0,1-(r-2)h])\frac{1}{h^{r-2}}\frac{1}{(r-2)!},\\ f_2(t)&=f_1(t)-a_{r-2}t^{r-2}=f(t)-(a_{r-1}t^{r-1}+a_{r-2}t^{r-2}), \\ \vdots & \\ a_{2}&=M(\Delta^{2}_hf_{r-3}, [0,1-2h])\frac{1}{h^{2}}\frac{1}{2!},\\ f_{r-2}(t)&=f_{r-3}(t)-a_{2}t^{2}=f(t)-(a_{r-1}t^{r-1}+a_{r-2}t^{r-2}+\ldots + a_2 t^2), \\ a_{1}&=M(\Delta^{1}_hf_{r-2}, [0,1-h])\frac{1}{h},\\ f_{r-1}(t)&=f_{r-2}(t)-a_{1}t=f(t)-(a_{r-1}t^{r-1}+a_{r-2}t^{r-2}+\ldots + a_1 t), \\ a_0&=M(f_{r-1},[0,1]). \end{align*} With \[ P_r(t)=\sum_{k=0}^{r-1}a_kt^k=a_0+a_1t+\ldots +a_{r-1}t^{r-1} \] we compute \begin{align*} \|f-P_r\|^p_{L_{p}(I,X)}&=\|f_{r-1}-a_0\|^p_{L_p(I,X)} =\|f_{r-1}-M(f_{r-1},[0,1])\|^p_{L_p(I,X)}\\ &\lesssim w_{2r}\left(f_{r-1},I,\frac{1}{2r}\right)_p^p+\|\Delta_h f_{r-1}\|_{L_p(I_h,X)}^p \intertext{\hfill (\text{which follows from applying Lem. \ref{aux-lem-OS78} with } $m=2r$)} &=w_{2r}\left(f,I,h\right)_p^p+\|\Delta_h f_{r-2}-a_1h\|_{L_p(I_h,X)}^p \\ &=w_{2r}\left(f,I,h\right)_p^p+\|\Delta_h f_{r-2}-M(\Delta_h f_{r-2},[0,1-h])\|_{L_p(I_h,X)}^p \\ & \lesssim w_{2r}\left(f,I,h\right)_p^p +w_{2r-1}\left(\Delta_h f_{r-2},[0,1-h],h\right)_p^p +\|\Delta^2_h f_{r-2}\|^p_{L_p(I_{2h},X)} \intertext{\hfill (which follows from applying Lem. \ref{aux-lem-OS78} with $m=2r-1$)} & \lesssim w_{2r-1}\left(f,I,h\right)_p^p +\|\Delta^2_h f_{r-3}-a_2 2! h^2\|^p_{L_p(I_{2h},X)} \intertext{\hfill (we used~\eqref{inductivebound})} & = w_{2r-1}\left(f,I,h\right)_p^p +\|\Delta^2_h f_{r-3}-M(\Delta^2_hf_{r-3}, [0,1-2h])\|^p_{L_p(I_{2h},X)}\\ & \lesssim w_{2r-2}\left(f,I,h\right)_p^p +\|\Delta^3_h f_{r-4}-a_3 3!
h^3\|^p_{L_p(I_{3h},X)} \intertext{\hfill (\text{which follows from applying Lem. \ref{aux-lem-OS78} with } $m=2r-2$)} & \qquad \vdots \\ & \lesssim w_{r+2}\left(f,I,h\right)_p^p +\|\Delta^{r-1}_h f-a_{r-1} (r-1)! h^{r-1}\|^p_{L_p(I_{(r-1)h},X)}\\ & \lesssim w_{r+1}\left(f,I,h\right)_p^p +\|\Delta^{r}_h f\|^p_{L_p(I_{rh},X)} \intertext{ \hfill (\text{which follows from applying Lem. \ref{aux-lem-OS78} with } $m=r+1$)} & \leq w_{r+1}\left(f,I,h\right)_p^p+w_r(f,I,h)_p^p\\ & \lesssim w_{r}\left(f,I,h\right)_p^p, \end{align*} which proves the theorem. \end{proof} \begin{rem} Note that Theorem \ref{thm-jackson} also holds for $p=\infty$: if we extend \eqref{I_p_1} by \[ I^{\ast}_{\infty,k}(h):=\sup_{t\in [a,b)}\|\Delta^k_hf^{\ast}(t)\|_X=\sup_{t\in [0,d) }\|\Delta^k_hf^{\ast}(t)\|_X, \] and similarly \eqref{I_p_2}, then Lemmas \ref{lem-aux-1}--\ref{aux-lem-OS78} can be extended to the case when $p=\infty$ by obvious modifications in the proofs, i.e., mostly replacing the integrals by suprema. In this case the factors `$\frac 1d$' and `$d$' in Lemmas \ref{lem-aux-1} and \ref{lem-aux-3}, respectively, disappear. \end{rem} \subsection{Whitney's estimate}\label{S:Whitney} Having established Jackson's estimate \eqref{Jackson} in Theorem~\ref{thm-jackson} we now proceed to prove Whitney's estimate. \begin{thm}[Generalized Whitney's theorem]\label{thm-gen-whitney} Let $0 < p , q \le \infty$, $r\in \mathbb{N}$, and $s>0$. If $\left( 1/q-1/p\right)_+ \le s < r$ then there exists a constant $c >0$ which depends only on $p$, $q$, $r$ such that \begin{equation}\label{whitney} E_r(f,I)_p=\inf_{P\in \mathbb{V}^r_{I,X}}\|f-P\|_{L_p(I, X)}\le c |I|^{s+\frac 1p-\frac 1q}|f|_{B^s_{q,q}(I,X)}, \end{equation} for all $f \in B^s_{q,q}(I,X)$ and for any finite interval $I$. \end{thm} Since this involves the $L_p$-norm on the left-hand side and an $L_q$-norm on the right-hand side, we first deal with the problem of how to switch from $p$-norms to $q$-norms for vector-valued polynomials. 
Using this together with the Jackson estimate and the fact that, according to Lemma \ref{discrete-seminorm}, the quasi-norm of the Besov space $B^s_{p,q}(I,X)$ can be expressed as a discrete summation instead of an integral, we will derive Whitney's estimate. \begin{lem}\label{lem-scaling-aux} Let $0<p<\infty$ and $I=[0,1]$. On $\mathbb{V}^r_{I,X}$ the quasi-norm \[ \|P\|_p:=\left(\int_0^1 \|P(t)\|_X^p\mathrm{d} t\right)^{1/p} \] is equivalent to the norm \[ \|P\|_{\ast}:=\max_{j=1,\ldots, r}\|P_j\|_X, \qquad P_j = P(t_j), \quad t_j = \frac{j-1}{r-1}, \quad j=1,\dots,r. \] The constants involved in the equivalence depend on $r$ and $p$, but are otherwise independent of $P \in \mathbb{V}_{I,X}^r$. \end{lem} \begin{rem} At first sight, it may seem that this lemma is obvious, because it looks like an equivalence of quasi-norms in a finite-dimensional space. But this is not the case, since the space $\mathbb{V}^r_{I,X}$ is not finite-dimensional when $X$ is an infinite-dimensional Banach space. \\ With slight modifications in the proof, Lemma \ref{lem-scaling-aux} also holds for $p=\infty$ and the quasi-norm $ \|P\|_{\infty}=\sup_{t\in [0,1]}\|P(t)\|_X. $ \end{rem} \begin{proof} Let $\{ \ell_j \}_{j=1}^r$ denote the Lagrange basis of $\Pi^r$ corresponding to the equally spaced nodes $t_j = \frac{j-1}{r-1}$, $j=1,\dots,r$ on $[0,1]$, i.e., \[ \ell_j(t)=\prod_{i\neq j}\frac{t-t_i}{t_j-t_i} , \quad \text{so that}\quad \ell_j(t_i) = \delta_{ij} \quad \text{and}\quad P = \sum_{j=1}^r P_j \ell_j, \text{ if $P \in \mathbb{V}_{I,X}^r$}. \] Obviously, for $P \in \mathbb{V}_{I,X}^r$, \begin{align*} \|P\|_p&=\left(\int_0^1 \|P(t)\|_X^p\mathrm{d} t\right)^{1/p}=\left(\int_0^1 \Big\|\sum_{j=1}^{r} \ell_j(t)P_j\Big\|_X^p\mathrm{d} t\right)^{1/p}\\ &\leq c_{p,r} \sum_{j=1}^{r}\left(\int_0^1 |\ell_j(t)|^p \|P_j\|_X^p\mathrm{d} t\right)^{1/p} \leq c_{p,r} \max_{j=1,\ldots, r}\|P_j\|_X = c_{p,r} \|P\|_{\ast}.
\end{align*} Let now $P = \sum_{j=1}^r P_j \ell_j \in \mathbb{V}_{I,X}^r$ and let $i$ be such that $\|P_i\|_X=\max_j \|P_j\|_X=\|P\|_{\ast}$. Then, for each $t \in I$, we have \begin{align} \|P(t)\|_X &= \Big\|\sum_{j=1}^{r}\ell_j(t)P_j\Big\|_X \notag \geq |\ell_i(t)|\|P_i\|_X-\sum_{j\neq i}|\ell_j(t)|\|P_j\|_X \\ &\geq \|P\|_{\ast}\bigg(|\ell_i(t)|-\sum_{j\neq i}|\ell_j(t)|\bigg). \label{est-lj} \end{align} Since at the point $t=t_i$ we have $\ell_i(t_i)=1$ and $\ell_j(t_i)=0$ for all $j\neq i$, there exists $\delta>0$ such that \[ |t-t_i|<\delta \quad \Longrightarrow \quad |\ell_i(t)|>\frac 34>\frac 14>\sum_{j\neq i}|\ell_j(t)|; \] notice that $\delta > 0$ can be chosen independent of $i$, but will depend on $r$. Hence, for $|t-t_i|<\delta$, \[ |\ell_i(t)|-\sum_{j\neq i}|\ell_j(t)|>\frac 12, \] and \eqref{est-lj} gives us \[ \|P(t)\|_X\geq \frac 12 \|P\|_{\ast}\qquad \text{for}\qquad |t-t_i|<\delta. \] Raising to the power $p$ and averaging over the interval $(t_i-\delta, t_i+\delta)\cap I$ yields \[ \|P\|_{\ast}\leq \left(\frac{2^{p}}{\delta}\int_{(t_i-\delta, t_i+\delta)\cap I}\|P(t)\|^p_X \mathrm{d} t\right)^{1/p}\le \bar c_{p,r} \left(\int_{I}\|P(t)\|^p_X \mathrm{d} t\right)^{1/p}= \bar c_{p,r}\|P\|_{p}, \] and the assertion follows. \end{proof} By a scaling argument we obtain from the previous lemma the following comparison of $L_p(I,X)$-norms in $\mathbb{V}_{I,X}^r$ on an arbitrary interval $I$. The proof is very simple and is thus omitted. \begin{cor}\label{cor-scaling-lpq} Let $0<p,q\leq\infty$ and $r \in \mathbb{N}$. Then there exists a constant $c >0$ which depends only on $p$, $q$, $r$ such that on any finite interval $I$, \begin{equation}\label{scaling_lpq} \|P\|_{L_p(I,X)} \leq c |I|^{1/p-1/q}\|P\|_{L_q(I,X)}, \qquad \forall P\in \mathbb{V}^r_{I,X}. \end{equation} \end{cor} \begin{comment} \begin{proof} By Lemma \ref{lem-scaling-aux}, all norms $\|\cdot \|_{L_p([0,1))}$, $0<p<\infty$, are equivalent to $\|\cdot\|_{\ast}$ in $\mathbb{V}_{[0,1),X}$.
On an arbitrary interval $I=[a,b]$ using the substitution $y:=|I|^{-1}t$ we have that, for $P \in \mathbb{V}_{I,X}$, \begin{align*} \|P\|_{L_p(I,X)}&= \left(\int_{{I}} \|P(t)\|_X^p\mathrm{d} t\right)^{1/p}\\ &= |{I}|^{1/p}\left(\int_{[0,1]} \|P(y)\|_X^p\mathrm{d} y\right)^{1/p} \simeq |I|^{1/p}\left(\int_{[0,1]} \|P(y)\|_X^q\mathrm{d} y\right)^{1/q}\\ &=|{I}|^{1/p-1/q}\left(\int_{{I}} \|P(t)\|_X^q\mathrm{d} t\right)^{1/q}=|{I}|^{1/p-1/q} \|P\|_{L_q(I,X)},\\ \end{align*} which gives the desired result. \end{proof} \end{comment} Following the steps from~\cite[Sec.~6.1]{DeV98} we can now prove Whitney's estimate in $B^s_{q,q}(I,X)\cap L_p(I,X)$. \begin{prop}\label{prop-gen-whitney} Let $0 < p , q \le \infty$, $r\in \mathbb{N}$, and $s>0$. If $\left( 1/q-1/p\right)_+ \le s < r$ then there exists a constant $c >0$ which depends only on $p$, $q$, $r$ such that \begin{equation}\label{whitney0} E_r(f,I)_p:=\inf_{P\in \mathbb{V}^r_{I,X}}\|f-P\|_{L_p(I, X)}\le c |I|^{s+\frac 1p-\frac 1q}|f|_{B^s_{q,q}(I,X)}, \end{equation} for all $f \in B^s_{q,q}(I,X)\cap L_p(I,X)$ and for any finite interval $I$. \end{prop} \begin{proof} Since $E_{r+1}(f,I)_p \le E_r(f,I)_p$, it is sufficient to prove the result in the case $r = \lfloor s \rfloor + 1$, and by scaling it is sufficient to consider $I = [0,1)$. Also, since $E_r(f,I)_p \le E_r(f,I)_q$ when $p < q$, it is sufficient to consider the case $q\le p$. Let $D_k$ for $k=0,1,2,\ldots$ denote the following dyadic partitions of $I$: \[ D_k:=\{ I_k^j:=2^{-k}[j-1,j), \ j=1,\ldots, 2^{k} \}. \] We let $S_k$ denote a piecewise polynomial function of order $r$ on the partition $D_k$ satisfying the Jackson estimate \eqref{Jackson} with $p$ replaced by $q$, in each sub-interval, i.e., \[ \|f-S_k\|_{L_q(I^j_k,X)}\lesssim w_r(f,I_k^j,2^{-k})_q, \qquad j=1,2,\dots,2^k, \quad k=0,1,\dots, \] whence $S_0 \in \mathbb{V}_{I,X}^r$. 
Then, on the one hand, we have \begin{align*} \|f-S_k\|^q_{L_q(I,X)} &=\sum_{j=1}^{2^k}\|f-S_k\|^q_{L_q(I^j_k,X)}\lesssim \sum_{j=1}^{2^k}w_r(f,I_k^j,2^{-k})_q^q. \end{align*} Denoting $\tilde{I}^j_k = \big( I_k^j \big)_{rh}$ we obtain \begin{align} \|f-S_k\|^q_{L_q(I,X)} &\lesssim \frac{1}{2^{-k}}\int_0^{2^{-k}}\sum_{j=1}^{2^k} \|\Delta_h^r f\|^q_{L_q(\tilde{I}^j_k,X)}\mathrm{d} h\notag\\ &=\frac{1}{2^{-k}}\int_0^{2^{-k}} \sum_{j=1}^{2^k}\int_{\tilde{I}^j_k} \|\Delta_h^r f(t)\|_X^q \mathrm{d} t\mathrm{d} h\notag\\ & \leq \frac{1}{2^{-k}}\int_0^{2^{-k}} \int_{[0,1-rh]}\|\Delta_h^r f(t)\|_X^q \mathrm{d} t\mathrm{d} h\notag\\ &= \frac{1}{2^{-k}}\int_0^{2^{-k}} \|\Delta_h^r f\|^q_{L_q([0,1-rh],X)}\mathrm{d} h =w_r(f,I, 2^{-k})_q^q. \label{est-03} \end{align} On the other hand, using \eqref{scaling_lpq} in each subinterval $I_{k+1}^j$, we have \begin{align} \|S_k-S_{k+1}\|^p_{L_p(I,X)} &=\sum_{j=1}^{2^{k+1}} \|S_k-S_{k+1}\|^p_{L_p(I_{k+1}^j,X)}\notag\\ &\lesssim 2^{-k\left(1-\frac pq\right)}\sum_{j=1}^{2^{k+1}}\|S_k-S_{k+1}\|^p_{L_q(I_{k+1}^j,X)}\notag\\ &\lesssim 2^{-k\left(1-\frac pq\right)}\left(\sum_{j=1}^{2^{k+1}}\|S_k-S_{k+1}\|^q_{L_q(I_{k+1}^j,X)}\right)^{p/q}\notag\\ &=2^{-k\left(1-\frac pq\right)}\|S_k-S_{k+1}\|^p_{L_q(I,X)},\label{est-05} \end{align} where in the second to last line we used the fact that $\ell_{q/p}\hookrightarrow \ell_1$ for $q\le p$. This yields for $\overline{p}=\min\{1,p\}$, \begin{equation}\label{est_02} \|S_k-S_{k+1}\|^{\overline{p}}_{L_p(I,X)}\lesssim 2^{-k\left(\frac 1p-\frac 1q\right)\overline{p}}\|S_k-S_{k+1}\|^{\overline{p}}_{L_q(I,X)}. 
\end{equation} But then using \eqref{est-03}, \eqref{est_02}, and the assumption that $f \in L_p(I,X)$, we obtain \begin{align} E_r(f,I)_p^{\overline{p}} &\le \|f-S_0\|^{\overline{p}}_{L_p(I,X)} \leq \sum_{k=0}^{\infty} \|S_k-S_{k+1}\|^{\overline{p}}_{L_p(I,X)}\notag\\ &\lesssim \sum_{k=0}^{\infty}2^{-k\left(\frac 1p-\frac 1q\right)\overline{p}}\|S_k-S_{k+1}\|^{\overline{p}}_{L_q(I,X)}\notag\\ &\leq \sum_{k=0}^{\infty}2^{-k\left(\frac 1p-\frac 1q\right)\overline{p}}\left(\|S_k-f\|^{\overline{p}}_{L_q(I,X)}+\|f-S_{k+1}\|^{\overline{p}}_{L_q(I,X)}\right)\notag\\ &\lesssim \sum_{k=0}^{\infty}2^{-k\left(\frac 1p-\frac 1q\right)\overline{p}}\|f-S_k\|^{\overline{p}}_{L_q(I,X)}\notag\\ &\lesssim \sum_{k=0}^{\infty}2^{-k\left(\frac 1p-\frac 1q\right)\overline{p}}w_r(f,I,2^{-k})_q^{\overline{p}}\notag\\ &= \sum_{k=0}^{\infty}2^{-k\left(\left(\frac 1p-\frac 1q\right)+s\right)\overline{p}}2^{{ks\overline{p}} }w_r(f,I,2^{-k})_q^{\overline{p}}\notag\\ &= \sum_{k=0}^{\infty}2^{-k\delta\overline{p}}2^{{ks\overline{p}} }w_r(f,I,2^{-k})_q^{\overline{p}}, \label{est-04} \end{align} where $\delta:=\left(\frac 1p-\frac 1q\right)+s\geq 0$ due to our assumption $s\geq \left(1/q-1/p\right)_+$. In \eqref{est-04} we proceed as follows: if $q<\overline{p}$ we make use of the embedding $\ell_q\hookrightarrow \ell_{\overline{p}}$ together with the fact that $2^{-k\delta \overline{p}}\leq 1$, for $q>\overline{p}$ we apply H\"older's inequality with exponent $\frac{q}{\overline{p}}>1$, and the case $q=\overline{p}$ is immediate. This finally gives \begin{equation} \label{est-final} \|f-S_0\|_{L_p(I,X)} \lesssim \left(\sum_{k=0}^{\infty}2^{{ksq} }w_r(f,I,2^{-k})_q^{q}\right)^{1/{q}} \simeq |f|_{B^s_{q,q}(I,X)}. \end{equation} The assertion thus follows by recalling that $S_0 \in \mathbb{V}_{I,X}^r$. \end{proof} As a consequence of the previous proposition we have that, under the same assumptions, $B^s_{q,q}(I,X)$ is embedded into $L_p(I,X)$. \begin{cor}\label{cor-gen-whitney} Let $0<p,q \le \infty$, $r\in \mathbb{N}$, and $s>0$.
If $\left( 1/q-1/p\right)_+ \le s$, then $B^s_{q,q}(I,X)$ is embedded into $L_p(I,X)$ and there exists a constant $c >0$ which depends only on $p$, $q$, $r$, and $s$ such that \begin{equation*} \|f\|_{L_p(I, X)}\le c \|f\|_{B^s_{q,q}(I,X)}, \end{equation*} for all $f \in B^s_{q,q}(I,X)$ and for any finite interval $I$. \end{cor} \begin{proof} Let $f \in B^s_{q,q}(I,X) \cap L_p(I,X)$, $r \in \mathbb{N}$, $r > s$, and let $S_0$ be as in the proof of Proposition~\ref{prop-gen-whitney}. Then, \[ \| f \|_{L_p(I,X)} \lesssim \| f - S_0 \|_{L_p(I,X)} + \| S_0 \|_{L_p(I,X)} \lesssim | f |_{B^s_{q,q}(I,X)} + \| S_0 \|_{L_q(I,X)}. \] Since $S_0 \in \mathbb{V}_{I,X}^r$ was chosen satisfying the Jackson estimate~\eqref{Jackson} with $p$ replaced by $q$, \[ \|S_0\|_{L_q(I,X)} \lesssim \|f-S_0\|_{L_q(I,X)} + \|f\|_{L_q(I,X)} \lesssim w_r(f,I,1)_q + \|f\|_{L_q(I,X)} \lesssim \|f\|_{L_q(I,X)}. \] Therefore, for all $f \in B^s_{q,q}(I,X) \cap L_p(I,X)$, \[ \| f \|_{L_p(I,X)} \lesssim | f |_{B^s_{q,q}(I,X)} + \|f\|_{L_q(I,X)} \lesssim \| f \|_{B^s_{q,q}(I,X)}. \] Finally, since $B^s_{q,q}(I,X) \cap L_p(I,X)$ is dense in $B^s_{q,q}(I,X)$, the assertion follows. \end{proof} The generalized Whitney theorem, Theorem~\ref{thm-gen-whitney}, is now a consequence of Proposition~\ref{prop-gen-whitney} and Corollary~\ref{cor-gen-whitney}. \section{Adaptive approximation in one variable}\label{sec:onevariable} \subsection{The stationary case} Given a polyhedral space domain $\Omega\subset \mathbb{R}^n$, $n \in \mathbb{N}$, we let $\mathbb{T}(\mathcal{T}_0)$ denote the set of all triangulations $\mathcal{T}$ (partitions into simplices) that are obtained by successive application of the bisection routine of \cite{Ste08} from a properly labeled initial triangulation $\mathcal{T}_0$ of $\Omega$. If $n=1$, $\mathbb{T}(\{0<T\})$ denotes the set of all partitions of $\Omega = [0,T)$ into sub-intervals that may be obtained by successive bisection of $\mathcal{T}_0 = \{[0,T)\}$.
For simplicity, the one-dimensional partition $\{ [0=t_0,t_1), [t_1,t_2),\dots,[t_{N-1},t_N = T) \}$ will usually be denoted by $\{ 0=t_0 < t_1 < \dots < t_N = T \}$. Whenever we write $\mathcal{T}_* = \textsc{Refine($\mathcal{T},\mathcal{M}$)}$, we understand that $\mathcal{M} \subset \mathcal{T}$ and $\mathcal{T}_*$ is the refinement of $\mathcal{T}$ obtained by the bisection routine of \cite{Ste08}. In the one-dimensional case, we understand that $\mathcal{T}_*$ is obtained by replacing in $\mathcal{T}$ each element $T = [a,b) \in \mathcal{M}$ by its two children $[a,\frac{a+b}2)$ and $[\frac{a+b}2,b)$. In either case, the following complexity bound holds: \begin{quote} Let $\mathcal{T}_0$, $\mathcal{T}_1$, $\mathcal{T}_2$, \dots, be a sequence of partitions in $\mathbb{T}(\mathcal{T}_0)$ obtained by successive calls of $\mathcal{T}_{k+1} = \textsc{Refine($\mathcal{T}_k,\mathcal{M}_k$)}$, with $\mathcal{M}_k \subset \mathcal{T}_k$ the set of \emph{marked} elements. Then, there exists a constant $C$ that depends on the initial triangulation $\mathcal{T}_0$ such that \begin{equation} \label{eq:complexity} \#\mathcal{T}_k - \#\mathcal{T}_0 \le C \sum_{j=0}^{k-1} \#\mathcal{M}_j, \qquad k=1,2,\dots. \end{equation} \end{quote} For $\mathcal{T} \in \mathbb{T}(\mathcal{T}_0)$, recall that $\mathbb{V}_{\mathcal{T}}^r$ is the finite element space of continuous piecewise polynomials of order $r$, i.e., \[ \mathbb{V}_{\mathcal{T}}^r :=\{g\in C(\overline{\Omega}): \ g\big|_{T}\in \Pi^r\ \text{for all } T\in \mathcal{T}\}, \] where $\Pi^r$ denotes the set of polynomials of total degree (strictly) less than $r$.
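For illustration, the one-dimensional \textsc{Refine} routine and the complexity bound \eqref{eq:complexity} can be sketched as follows. This is a minimal sketch only: intervals are stored as pairs, and the uniform marking used here is a toy choice, not the error-driven marking employed later by the Greedy algorithm.

```python
# Minimal sketch of the one-dimensional REFINE routine: each marked
# interval [a, b) is replaced by its two children [a, (a+b)/2) and
# [(a+b)/2, b).  Intervals are stored as (a, b) pairs.

def refine(partition, marked):
    """Return the refinement of `partition` obtained by bisecting
    every interval in `marked` (a subset of `partition`)."""
    result = []
    for (a, b) in partition:
        if (a, b) in marked:
            m = 0.5 * (a + b)
            result.extend([(a, m), (m, b)])
        else:
            result.append((a, b))
    return result

# Successive calls starting from T0 = {[0, T)}:
T = 1.0
partition = [(0.0, T)]
total_marked = 0
for _ in range(3):                  # three refinement steps
    marked = set(partition)         # mark everything (toy uniform marking)
    total_marked += len(marked)
    partition = refine(partition, marked)

# Complexity bound: #T_k - #T_0 <= C * sum_j #M_j.  In one dimension
# each bisection adds exactly one interval, so the bound holds with C = 1.
assert len(partition) - 1 == total_marked   # 8 - 1 == 1 + 2 + 4
```

In one dimension the bound holds with $C=1$ because each bisection adds exactly one element; for $n\ge 2$ the constant accounts for the completion step of the bisection routine of \cite{Ste08}.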
The underlying domain $\Omega$ and its dimension are implicitly indicated by the partition $\mathcal{T}$, which will sometimes correspond to a time interval $[0,T)$ and sometimes to an $n$-dimensional space domain.\\ \paragraph{\bf Approximation Classes} Let $X$ be a quasi-Banach space on the polyhedral bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$ with quasi-norm $\|\cdot\|_X$. Let $\mathcal{T}_0$ be a triangulation of $\Omega$, properly labeled so that~\eqref{eq:complexity} holds, and assume further that $\mathbb{V}_\mathcal{T}^r \subset X$ for $\mathcal{T} \in \mathbb{T}(\mathcal{T}_0)$. In this context, for $f\in X$, the best $N$-term approximation error is given by \[ \sigma_N(f)=\inf_{\#\mathcal{T}\leq N}\inf_{g\in \mathbb{V}_{\mathcal{T}}^r}\|f-g\|_X. \] For $s>0$ we define the approximation class $\mathbb{A}_s(X)$ as the set of those functions in $X$ whose best $N$-term approximation error is of order $N^{-s}$, i.e., \[ \mathbb{A}_s(X):=\{f\in X: \ \exists c>0 \text{ such that } \sigma_N(f)\leq cN^{-s}, \ \forall N\in \mathbb{N}\}. \] Equivalently, we can define $\mathbb{A}_s(X)$ through a semi-quasi-norm as follows: \[ \mathbb{A}_s(X):=\{f\in X: \ |f|_{\mathbb{A}_s(X)}<\infty\}\quad \text{with}\quad |f|_{\mathbb{A}_s(X)}:=\sup_{N\in \mathbb{N}}N^s\sigma_N(f). \] Alternatively, this definition is equivalent to saying that $f\in {\mathbb{A}_s(X)}$ if there is a constant $c$ such that for all $\varepsilon>0$, there exists a mesh $\mathcal{T}$ that satisfies \begin{equation}\label{appr-class-1} \inf_{g\in \mathbb{V}^{r}_{\mathcal{T}}}\|f-g\|_X\leq c \varepsilon \quad \text{and}\quad \#\mathcal{T}\leq \varepsilon^{-1/s}, \end{equation} and $|f|_{\mathbb{A}_s(X)}$ is equivalent to the infimum of all constants $c$ that satisfy \eqref{appr-class-1}. We use the following result from \cite[Thm. 2.2, Cor. 2.3]{GM14}, which is the high-order analog to the one presented in~\cite{BDDP02} for linear finite elements ($r=2$).
\begin{thm}\label{space-poly} Let $X=B^{\alpha}_{p,p}(\Omega)$, $0<p<\infty$, $0<\alpha<\min\{r,1+\frac 1p\}$ or $X=L_p(\Omega)$ if $\alpha=0$. If $f\in B^{s+\alpha}_{\tau,\tau}(\Omega)$ with $s>0$, $0<\frac{1}{\tau}<\frac sn+\frac 1p$, and $s+\alpha < r$, then \begin{align} B^{\alpha+s}_{\tau,\tau}(\Omega)&\subset \mathbb{A}_{s/n}(B^{\alpha}_{p,p}(\Omega))\qquad (\alpha>0), \label{approx-class-2}\\ B^{s}_{\tau,\tau}(\Omega)&\subset \mathbb{A}_{s/n}(L_{p}(\Omega))\qquad \ \; (\alpha=0). \label{approx-class-3} \end{align} \end{thm} In particular, if $p=2$ and $\alpha = 0$ we have the following result. \begin{cor}\label{cor:spaceapprox} Let $X = L_2(\Omega)$, $r \in \mathbb{N}$, $0<s < r$, and $0<\frac 1\tau < \frac sn + \frac 12$. Then there exists a constant $C = C(r,s,\tau,\Omega,\mathbb{T})$ such that for every $\varepsilon > 0$ there exist $\mathcal{T} \in \mathbb{T}(\mathcal{T}_0)$ and $g \in \mathbb{V}^{r}_\mathcal{T}$ such that \[ \| f - g \|_X \le C\varepsilon\, |f|_{B^s_{\tau,\tau}(\Omega)} \qquad\text{and}\qquad \#\mathcal{T} \le C\, \varepsilon^{-n/s}. \] \end{cor} \subsection{Greedy algorithm} Theorem~\ref{space-poly}, or equivalently Corollary~\ref{cor:spaceapprox}, is proved with the help of a so-called \emph{Greedy algorithm}. In order to make this article self-contained, we present it here and use it to build a quasi-optimal partition of $[0,T)$ to approximate a vector-valued function in $L_{p}([0,T),X)$. This, in turn, is an intermediate tool for constructing the optimal time-space partition. In the rest of this section we consider the following framework. We let $X$ denote a Banach space, $r\in\mathbb{N}$ denotes the polynomial \emph{order} with respect to time, and for an interval $I$, recall the definition of $\mathbb{V}_{I,X}^r$ from~\eqref{Vr}: \[ \mathbb{V}^r_{I,X}:=\Big\{ P(t)=\sum_{j=0}^{r-1}a_j t^j, \ a_j\in X, \ t\in I\Big\} \subset L_p(I,X), \] i.e., the tensor product space $\Pi^r \otimes X$ on the time slice $I\times \Omega$.
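Before describing the algorithm, it may help to see Whitney's estimate \eqref{whitney0} numerically in the simplest scalar setting $X=\mathbb{R}$, $p=q=2$: for a smooth function $f$ one has $E_r(f,I)_2\lesssim |I|^{r}\|f^{(r)}\|_{L_2(I)}\sim |I|^{r+1/2}$ as $|I|\to 0$. The following sketch exhibits this rate; the fine grid is only a discrete surrogate for the $L_2(I)$ projection, and the test function $\exp$ is an arbitrary choice.

```python
import numpy as np

def best_poly_error_L2(f, a, b, r, n=4001):
    # Discrete surrogate for E_r(f, I)_2 on I = [a, b): least-squares
    # fit by polynomials of order r (degree < r) on a fine grid, then
    # a Riemann sum for the L2(I) norm of the residual.
    t = np.linspace(a, b, n)
    coeffs = np.polyfit(t, f(t), r - 1)
    resid = f(t) - np.polyval(coeffs, t)
    return np.sqrt(np.mean(resid ** 2) * (b - a))

r = 2  # affine approximation (order 2, degree < 2)
errs = [best_poly_error_L2(np.exp, 0.0, h, r) for h in (0.2, 0.1, 0.05)]
rates = [np.log2(errs[i] / errs[i + 1]) for i in (0, 1)]
# For smooth f the observed rate tends to r + 1/2 (here 2.5) as |I| -> 0,
# matching E_r(f, I)_2 ~ |I|^r * ||f^(r)||_{L2(I)} ~ |I|^{r + 1/2}.
```

The extra $\tfrac12$ in the observed rate comes from the $L_2$ norm of $f^{(r)}$ over the shrinking interval, consistent with the scaling in \eqref{whitney0}.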
For a partition $\mathcal{T} = \{ 0=t_0 < t_1 < \dots < t_N = T \}$ of the time interval $[0,T)$, we consider the following corresponding (abstract) finite element space: \[ \mathbb{V}_{\mathcal{T},X}^r = \{ P \in L_p([0,T),X) :\ P_{|I} \in \mathbb{V}_{I,X}^r,\ I \in \mathcal{T} \}. \] Recall the definition of the best approximation error $E_r(f,I)_p$ associated with an interval $I\subset [0,T)$, i.e., \begin{equation} E_r(f,I)_p = \inf_{P_I\in \mathbb{V}_{I,X}^r}\|f-P_I\|_{L_p(I, X)}, \end{equation} so that \[ \inf_{g \in \mathbb{V}_{\mathcal{T},X}^r} \| f - g \|_{L_p([0,T),X)} = \left(\sum_{I \in \mathcal{T}} E_r(f,I)_p^p \right)^{1/p}. \] An algorithm approximating $f$ up to a prescribed tolerance $\delta >0$ reads as follows: \\ \begin{algorithm}[h] \caption{\emph{Greedy} algorithm}\label{a:greedy} \begin{algorithmic}[1] \Function{Greedy}{$f$,$\delta$} \State{Let $\mathcal{T}_0 = \{0<T\} = \{ [0,T) \}$.} \State{$k = 0$} \While{$\mathcal{M}_k := \{ I \in \mathcal{T}_k : E_r(f,I)_p > \delta \} \ne \emptyset$} \State{Let $\mathcal{T}_{k+1} = \textsc{Refine}(\mathcal{T}_k,\mathcal{M}_k)$} \State{$k \leftarrow k+1$} \EndWhile \EndFunction \end{algorithmic} \end{algorithm} \subsection{Semi-discretization in time} Concerning the error when approximating a vector-valued function with piecewise polynomials with respect to time, we have the following result. \begin{thm}[{\bf Time discretization}]\label{time-poly} Let $X$ be a separable Banach space, let $s>0$, $0<p,q\leq \infty$, and $\left(\frac 1q-\frac 1p\right)_+ \le s < r$, with $r \in \mathbb{N}$. Then, if $f\in B^s_{q,q}([0,T),X)$ and $\varepsilon > 0$, there exists $\delta > 0$ such that \textsc{Greedy($f$,$\delta$)} terminates in finitely many steps and the generated partition $\mathcal{T}$ satisfies \begin{equation}\label{boundonT} \# \mathcal{T}\le c_1 \, \varepsilon^{-1/s}, \end{equation} where the constant $c_1>0$ depends on $p$, $q$, and $s$ but not on $f$.
Moreover, there exists $P \in \mathbb{V}_{\mathcal{T},X}^r$ satisfying \begin{equation}\label{boundonerror} \|f-P\|_{L_p([0,T), X)} \le c_2 \, \varepsilon \, |f|_{B^s_{q,q}([0,T),X)} \le c_3 \, {(\# \mathcal{T})^{-s}}|f|_{B^s_{q,q}([0,T),X)}, \end{equation} with $c_2,c_3>0$ depending on $p$, $q$, and $s$ but not on $f$. \end{thm} \begin{proof} Let $\varepsilon>0$ be given and let $\delta = \varepsilon^{\frac{s+1/p}s} |f|_{B^s_{q,q}([0,T),X)}$. Using Whitney's estimate \eqref{whitney} we see that the error $ E_r(f,I)_p$ associated with an interval $I$ satisfies \begin{equation}\label{error_est} E_r(f,I)_p = \inf_{P_I \in \mathbb{V}_{I,X}^r} \| f - P_I \|_{L_p(I,X)} \lesssim |I|^{s+\frac 1p-\frac 1q}|f|_{B^s_{q,q}(I,X)}. \end{equation} Since $s+\frac {1}{p}-\frac 1q>0$, the right-hand side goes to zero as $|I|$ goes to zero, which shows that the Greedy algorithm terminates in a finite number of steps $K$. We now bound the number of elements of $\mathcal{T}:= \mathcal{T}_K$ as follows. Initially, $\mathcal{T}_0=\{[0,T)\}$, therefore $\# \mathcal{T}_0=1$. In each iteration of the {\bf while}-loop, $\#\mathcal{M}_k$ elements are \emph{marked} for refinement. If $\overline{\mathcal{M}}=\bigcup_{k=0}^{K-1}\mathcal{M}_k$ is the set of all elements marked at some step of the algorithm, then, due to~\eqref{eq:complexity}, the resulting final partition $\mathcal{T}$ satisfies $\# \mathcal{T}\lesssim 1 + \# \overline{\mathcal{M}} \lesssim \# \overline{\mathcal{M}}$. We see that estimating $\# \mathcal{T}$ is comparable to estimating $\# \overline{\mathcal{M}}$. In order to count the number of elements in $\overline{\mathcal{M}}$ observe that $\#\overline{\mathcal{M}}=\sum_{k=0}^{\infty}\#\mathcal{M}_k$, with \[ \mathcal{M}_k=\left\{I\in \overline{\mathcal{M}}: |I| = T/2^k\right\} \ \text{ if }\ 0 \le k \le K-1 \qquad \text{and}\qquad \mathcal{M}_k = \emptyset \ \text{ if }\ k \ge K.
\] This is consistent with the notation of the Greedy algorithm, because every element marked at step $k$ has length $T/2^k$. On the one hand, since our time interval $[0,T)$ is finite, we obtain the upper bound \[ \#\mathcal{M}_k \le 2^k, \qquad k\in \mathbb{N}_0. \] On the other hand, if $I\in \mathcal{M}_k$, from step 4 of the Greedy algorithm and formula \eqref{error_est} we have \[ \delta< E_r(f,I)_p \lesssim \left(\frac{1}{2^k}\right)^{s+\frac 1p-\frac 1q}|f|_{B^s_{q,q}(I,X)}, \quad\text{so that}\quad \delta^q\lesssim \left(\frac{1}{2^k}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}(I,X)}. \] This implies \[ \delta^q \# \mathcal{M}_k =\sum_{I\in \mathcal{M}_k}\delta^q \lesssim \left(\frac{1}{2^k}\right)^{sq+\frac qp-1}\sum_{I\in \mathcal{M}_k}|f|^q_{B^s_{q,q}(I,X)} \le \left(\frac{1}{2^k}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}([0,T),X)}, \] i.e., \[ \# \mathcal{M}_k \lesssim \min \left\{2^k, \frac{1}{\delta^q}\left(\frac{1}{2^k}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}([0,T),X)}\right\}. \] The first term corresponds to an increasing geometric series, the second to a decreasing one. Setting $k_0:=\min \left\{k\in \mathbb{N}_0: \ \frac{1}{\delta^q} \left(\frac{1}{2^k}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}([0,T),X)}<2^k\right\}$ we obtain \begin{align} \# \overline{\mathcal{M}}=\sum_{k=0}^\infty \#\mathcal{M}_k &\leq \sum_{k=0}^{k_0-1}2^k+\sum_{k=k_0}^{\infty}\frac{1}{\delta^q} \left(\frac{1}{2^k}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}([0,T),X)} \notag \\ & \lesssim 2^{k_0}+\frac{1}{\delta^q} \left(\frac{1}{2^{k_0}}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}([0,T),X)} \lesssim 2^{k_0}. \label{one} \end{align} In order to estimate $2^{k_0}$ we observe that, by the minimality of $k_0$ (we may assume $k_0\ge 1$, since otherwise $\#\overline{\mathcal{M}}\lesssim 1$), \[ 2^{k_0-1}\le \frac{1}{\delta^q} \left(\frac{1}{2^{k_0-1}}\right)^{sq+\frac qp-1}|f|^q_{B^s_{q,q}([0,T),X)}, \qquad\text{i.e.,}\qquad 2^{(k_0-1)(s+\frac 1p)}\le \frac{1}{\delta}|f|_{B^s_{q,q}([0,T),X)}.
\] We see that \begin{equation}\label{two} 2^{k_0}\lesssim \left(\frac{1}{\delta}\right)^{\frac1{s+1/p}}|f|^{\frac1{s+1/p}}_{B^s_{q,q}([0,T),X)}, \end{equation} therefore, from~\eqref{one} and~\eqref{two} we get $$ \# \mathcal{T} \lesssim \# \overline{\mathcal{M}}\lesssim \left(\frac{1}{\delta}\right)^{\frac1{s+1/p}}|f|^{\frac1{s+1/p}}_{B^s_{q,q}([0,T),X)}, \quad \text{i.e.}, \quad \delta \lesssim (\# \mathcal{T})^{-(s+\frac1p)}|f|_{B^s_{q,q}([0,T),X)}, $$ and~\eqref{boundonT} follows after recalling that $\delta = \varepsilon^{\frac{s+1/p}s} |f|_{B^s_{q,q}([0,T),X)}$. Finally, for each $I \in \mathcal{T}$ we let $P_I \in \mathbb{V}_{I,X}^r$ satisfy $\| f - P_I\|_{L_p(I,X)} \le {2 E_r(f,I)_p}$ and let $P(t) = \sum_{I \in \mathcal{T}} \chi_I(t) P_I(t)$, $t \in [0,T)$. Hence, \begin{align*} P \in \mathbb{V}_{\mathcal{T},X}^r \quad\text{and}\quad \| f - P \|_{L_p([0,T),X)}^p &\lesssim \delta^p \#\mathcal{T} \lesssim (\#\mathcal{T})^{-ps} |f|_{B^s_{q,q}([0,T),X)}^p, \end{align*} and~\eqref{boundonerror} follows. \end{proof} \section{Discretization in time and space}\label{sec:timespace} We now consider the error when approximating a function with piecewise polynomials with respect to time and space. In this article, we deal with the approximation in $L_2([0,T)\times\Omega) = L_2([0,T),X)$, where hereafter we let $X = L_2(\Omega)$. We restrict ourselves to this Hilbertian case in order to avoid additional technical difficulties and leave the study of more general quasi-norms, e.g.\ $p\neq 2$ and $X \neq L_2(\Omega)$, to a forthcoming article. \subsection{Time marching fully discrete adaptivity} Recall that the type of discretizations that we consider are those consisting of a partition $\{ 0=t_0<t_1<\dots<t_N=T \}$ of the time interval and a sequence of partitions $\mathcal{T}_1,\dots, \mathcal{T}_N \in \mathbb{T}(\mathcal{T}_0)$ of the space domain $\Omega$, where $\mathcal{T}_i$ corresponds to the subinterval $[t_{i-1},t_i)$, $i=1,\dots,N$.
The time-space partition is then given by \[ \mathcal{P}=\left(\{0=t_0<t_1<\ldots<t_N=T\}, \{\mathcal{T}_1,\ldots, \mathcal{T}_N\}\right), \quad \text{with }\ \# \mathcal{P}=\sum_{i=1}^N \#\mathcal{T}_i. \] Given $r_1,r_2 \in\mathbb{N}$, the finite element space $\overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ subject to such a partition $\mathcal{P}$ is defined as \begin{align*} \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2} &:=\{G: [0,T)\times\Omega\rightarrow \mathbb{R}: G_{\big|[t_{i-1},t_i)\times\Omega} \in \Pi^{r_1} \otimes \mathbb{V}_{\mathcal{T}_i}^{r_2},\text{ for all }i=1,2,\dots,N \}, \end{align*} i.e., $G \in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ if and only if $G(t,\cdot)\in \mathbb{V}_{\mathcal{T}_i}^{r_2}$ for all $t\in [t_{i-1},t_i) $ and $G(\cdot,x)\big|_{[t_{i-1},t_i)}\in \Pi^{r_1}$ for all $x\in \Omega$ and all $i=1,2,\dots,N$. In order to construct an optimal approximate solution with tolerance $\varepsilon>0$ we use the one-dimensional Greedy algorithm as described on page \pageref{a:greedy} for the (adaptive) discretization in time and an $n$-dimensional Greedy algorithm for (adaptive) discretizations in space. This allows us to use the results from Theorems \ref{time-poly} and \ref{space-poly}, respectively. In particular, we obtain the following result. \begin{thm}[{\bf Approximation with fully discrete functions}]\label{space-time-poly} Let $0 < s_i < r_i$, $ i=1,2$, $0 < q_1 \le \infty$, $1\le q_2 \le \infty$ with $s_1 > \big(\frac1{q_1}-\frac12\big)_+$ and $s_2 > n \big(\frac1{q_2}-\frac12\big)_+$. Let $f\in B^{s_1}_{q_1,q_1}([0,T),X) \cap L_2([0,T),B_{q_2,q_2}^{s_2}(\Omega))$, with $X = L_2(\Omega)$.
Then, for each $\varepsilon > 0$ there exists a time-space partition $\mathcal{P}$ that satisfies \[ \# \mathcal{P}\le c_1 \varepsilon^{-\big(\frac1{s_1}+\frac {n}{s_2}\big)} \] and a function $F \in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ such that \begin{align*} \| f - F \|_{ L_2([0,T),X)} \le c_2 \varepsilon |\!|\!| f |\!|\!| \le c_3 (\#\mathcal{P})^{-\frac{1}{\frac1{s_1}+\frac {n}{s_2}}} |\!|\!| f |\!|\!|, \end{align*} where $|\!|\!| f |\!|\!| = | f |_{B^{s_1}_{q_1,q_1}([0,T),X)} + \| f \|_{L_2([0,T),B_{q_2,q_2}^{s_2}(\Omega))}$ and the positive constants $c_1,c_2,c_3$ depend on $q_1,q_2$, and $s_1,s_2$ but not on $f$. \end{thm} \begin{rem} Here (with a slight abuse of notation) we use \[ L_2(I,B_{q,q}^s(\Omega)) = \left\{ f : I \to B_{q,q}^s(\Omega) : \| f \|_{ L_2(I,B_{q,q}^s(\Omega))} < \infty \right\} \] with $ \| f \|_{ L_2(I,B_{q,q}^s(\Omega))} := \left(\int_I \| f(t) \|_{B_{q,q}^s(\Omega)}^2 \mathrm{d} t\right)^{1/2} $. The restriction $q_2\geq 1$ in Theorem \ref{space-time-poly} can probably be removed and replaced by $q_2>0$. It appears here because the proof below requires a uniform bound on the approximants over a subinterval $I=[t_{i-1},t_i)$, which is established in Lemma \ref{L:uniformbound}. Our current proof of Lemma \ref{L:uniformbound} uses Minkowski's inequality, which only works if $q_2\geq 1$; so far we have not been able to find an appropriate modification for $q_2<1$. \end{rem} \begin{proof}[Proof of Theorem \ref{space-time-poly}] Given $f \in B^{s_1}_{q_1,q_1}([0,T),X) \cap L_2([0,T),B_{q_2,q_2}^{s_2}(\Omega))$ and $\varepsilon > 0$, the approximant $F\in \overline{\mathbb{V}}_{\mathcal{P}}^{r_1,r_2}$ is constructed in two steps as follows. We first use a one-dimensional Greedy algorithm and apply the results from Theorem~\ref{time-poly}.
This gives a partition of the time interval $0=t_0<t_1<\ldots < t_N=T$ and an approximant $G=\sum_{i=1}^N{\chi}_{[t_{i-1},t_i)}G_i \in \mathbb{V}_{\{0<t_1<\dots<T\},X}^{r_1}$, with $G_i$ the $L_2([t_{i-1},t_i),X)$ projection of $f_{|[t_{i-1},t_i)}$ onto $\mathbb{V}_{[t_{i-1},t_i),X}^{r_1}$. This partition and approximant satisfy \[ N \lesssim \varepsilon^{-1/s_1} \quad\text{and}\quad \|f-G\|_{L_2([0,T), X)} \lesssim \varepsilon | f |_{ B^{s_1}_{q_1,q_1}([0,T),X)}. \] Also, if $\{\onb{i}{j}\}_{j=1}^{r_1}$ is an orthonormal basis of $\mathbb{V}^{r_1}_{[t_{i-1},t_i),\mathbb{R}}$ then \[ G_i(t) = \sum_{j=1}^{r_1} G_i^j \, \onb{i}{j}(t), \quad\text{with}\quad G_i^j = \int_{t_{i-1}}^{t_i} f(t) \onb{i}{j}(t) \, \mathrm{d} t, \] noting that the last integral is a Bochner integral in $X = L_2(\Omega)$. We now observe that due to Lemma~\ref{L:uniformbound} below we have $G_i^j \in B^{s_2}_{q_2,q_2}(\Omega)$ and \begin{equation}\label{uniformbound} \| G_i \|_{ L_2([t_{i-1},t_i),B^{s_2}_{q_2,q_2}(\Omega) )} \lesssim \| f \|_{ L_2([t_{i-1},t_i),B^{s_2}_{q_2,q_2}(\Omega) )}. \end{equation} The second step consists in approximating each function $G_i^j \in B^{s_2}_{q_2,q_2}(\Omega)$ using the space-adaptive Greedy algorithm. Resorting to Corollary~\ref{cor:spaceapprox} we find a mesh $\mathcal{T}_i^j \in \mathbb{T}(\mathcal{T}_0)$ and a finite element function $F_i^j \in \mathbb{V}_{\mathcal{T}_i^j}^{r_2}$ with \[ \#\mathcal{T}_i^j \lesssim \varepsilon^{-\frac{n}{s_2}} \quad\text{and}\quad \| G_i^j - F_i^j \|_X \lesssim \varepsilon | G_i^j |_{B^{s_2}_{q_2,q_2}(\Omega)}.
\] Therefore, after defining $\mathcal{T}_i = \oplus_{j=1}^{r_1} \mathcal{T}_i^j$ (the overlay of the meshes~\cite{CKNS08}), we have that $F_i(t) := \sum_{j=1}^{r_1} \onb{i}{j}(t) F_i^j \in \mathbb{V}_{[t_{i-1},t_i),\mathbb{V}_{\mathcal{T}_i}^{r_2}}^{r_1}$ satisfies \begin{align*} \#\mathcal{T}_i \le \sum_{j=1}^{r_1} \#\mathcal{T}_i^j &\lesssim \varepsilon^{-\frac{n}{s_2}} \quad\text{by~\cite[Lem.~3.7]{CKNS08}, and} \\ \| F_i - G_i \|_{ L_2([t_{i-1},t_i),X )} & \lesssim \varepsilon \| G_i \|_{ L_2([t_{i-1},t_i),B^{s_2}_{q_2,q_2}(\Omega)) } \lesssim \varepsilon \| f \|_{ L_2([t_{i-1},t_i),B^{s_2}_{q_2,q_2}(\Omega)) }, \end{align*} due to~\eqref{uniformbound}. Finally, we let $\mathcal{P} = \left( \{ 0=t_0 < t_1 < \dots < t_N = T\}, \{ \mathcal{T}_1, \mathcal{T}_2, \dots , \mathcal{T}_N \} \right)$ and define $F = \sum_{i=1}^N \chi_{[t_{i-1},t_i)} F_i \in \overline{\mathbb{V}}_\mathcal{P}^{r_1,r_2}$, whence by the triangle inequality \begin{align*} \|f-F\|_{L_2([0,T),X)} & \leq \|f-G\|_{L_2([0,T),X)} + \|G-F\|_{L_2([0,T),X)} \\ & \lesssim \varepsilon | f |_{ B^{s_1}_{q_1,q_1}([0,T),X)} + \bigg( \sum_{i=1}^N \| G_i - F_i \|_{ L_2([t_{i-1},t_i),X)}^2 \bigg)^{1/2} \\ &\lesssim \varepsilon \Big( | f |_{ B^{s_1}_{q_1,q_1}([0,T),X)} + \| f \|_{ L_2([0,T),B^{s_2}_{q_2,q_2}(\Omega))} \Big) \end{align*} and \begin{align*} \# \mathcal{P} =\sum_{i=1}^N \# \mathcal{T}_i \lesssim N \, \varepsilon^{-\frac n{s_2}} \lesssim \varepsilon^{-\frac{1}{s_1}} \, \varepsilon^{-\frac n{s_2}} = \varepsilon^{-(\frac 1{s_1}+\frac n{s_2})} . \end{align*} The assertion of the theorem thus follows. \end{proof} In Theorem \ref{space-time-poly}, formula \eqref{uniformbound}, we required a uniform bound on the approximants $G_i$ over a subinterval $I=[t_{i-1},t_i)$, which is provided by the following lemma.
\begin{lem}\label{L:uniformbound} Given a finite interval $I$, let $r=r_1$, $s=s_2$, and $q=q_2$ satisfy the assumptions from Theorem~\ref{space-time-poly} and assume $f \in L_2(I,B_{q,q}^s(\Omega))$. If $G \in \mathbb{V}_{I,X}^r$ is the $L_2(I,X)$ projection of $f$, then \[ \| G \|_{ L_2(I,B^{s}_{q,q}(\Omega) )} \lesssim \| f \|_{ L_2(I,B^{s}_{q,q}(\Omega) )}. \] \end{lem} \begin{proof} If $\{\onb{}{j}\}_{j=1}^{r}$ is an orthonormal basis of $\mathbb{V}_{I,\mathbb{R}}^r$ then \[ G(t) = \sum_{j=1}^{r} G^j \onb{}{j}(t) \quad\text{with}\quad G^j = \int_I f(t) \onb{}{j}(t) \mathrm{d} t, \] i.e., \[ G(t)(x) = \sum_{j=1}^{r} G^j(x) \onb{}{j}(t) \quad\text{with}\quad G^j(x) = \int_I f(t,x) \onb{}{j}(t) \mathrm{d} t, \] for almost every $x \in \Omega$. Notice first that \begin{align*} \| G \|_{L_2(I,B^s_{q,q}(\Omega))}^2 &= \int_I \| G(t) \|_{L_q(\Omega)}^2 + | G(t) |_{B_{q,q}^s(\Omega)}^2 \mathrm{d} t \nonumber\\ &= \int_I \Big\| \sum_{j=1}^r G^j \onb{}{j}(t) \Big\|_{L_q(\Omega)}^2 + \Big| \sum_{j=1}^r G^j \onb{}{j}(t) \Big|_{B_{q,q}^s(\Omega)}^2 \mathrm{d} t \nonumber\\ &\lesssim \sum_{j=1}^r \| G^j \|_{L_q(\Omega)}^2+ |G^j|_{B_{q,q}^s(\Omega)}^2 , \end{align*} so that \begin{equation}\label{eq:G_bound2} \| G \|_{L_2(I,B^s_{q,q}(\Omega))} \lesssim \sum_{j=1}^r \| G^j \|_{L_q(\Omega)}+ |G^j|_{B_{q,q}^s(\Omega)}. \end{equation} We now bound $\| G^j \|_{L_q(\Omega)}$ and $ |G^j|_{B_{q,q}^s(\Omega)} $ and focus on the case $1\le q < \infty$, noting that the case $q=\infty$ is analogous.
Since $q \ge 1$, by Minkowski's inequality, for any $j=1,2,\dots, r$ we have \begin{align*} \| G^j\|_{ L_q(\Omega)} & = \left(\int_\Omega \left|\int_I f(x,t)\onb{}{j}(t)\mathrm{d} t \right|^q \mathrm{d} x\right)^{1/q}\nonumber\\ &\le \int_I \left(\int_\Omega \left|f(x,t)\onb{}{j}(t) \right|^q \mathrm{d} x \right)^{1/q}\mathrm{d} t \nonumber\\ &= \int_I \left|\onb{}{j}(t)\right| \left\|f(\cdot,t)\right\|_{L_q(\Omega)} \mathrm{d} t \nonumber\\ &\le \left\| \onb{}{j} \right\|_{L_2(I)} \left\| \left\|f(\cdot,t)\right\|_{L_q(\Omega)} \right\|_{L_2(I)} \end{align*} so that \begin{equation}\label{eq:G_bound0} \| G^j\|_{ L_q(\Omega)} \lesssim \|f\|_{L_2(I,L_q(\Omega))}, \quad j=1,2,\dots, r. \end{equation} We now deal with $\left|G^j\right|_{B^s_{q,q}(\Omega)}$.
Observe that for any $j$ we have \begin{align} \left|G^j\right|_{B^s_{q,q}(\Omega)} &\lesssim \left(\int_0^1 \left[u^{-s}w_r(G^j,\Omega,u)_q\right]^q \frac{\mathrm{d} u}{u}\right)^{1/q} = \left(\int_0^1 u^{-sq}w_r(G^j,\Omega,u)_q^q \frac{\mathrm{d} u}{u}\right)^{1/q} \nonumber \\ &= \left(\int_0^1 u^{-sq} \frac{1}{(2u)^{n}} \int_{|h|\le u} \left\|\Delta_h^rG^j\right\|^q_{L_q(\Omega_{ rh })} \mathrm{d} h \frac{\mathrm{d} u}{u}\right)^{1/q}\nonumber \\ &= \left(\int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \int_{\Omega_{ rh }}\left|\int_I \Delta_h^rf(t,x) \onb{}{j}(t) \mathrm{d} t\right|^q\mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u}\right)^{1/q}\nonumber \\ &= \left(\int_0^1 \int_{|h|\le u} \int_{\Omega_{ rh }} u^{-sq}\frac{1}{(2u)^{n}} \left|\int_I \Delta_h^rf(t,x) \onb{}{j}(t) \mathrm{d} t\right|^q\mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u}\right)^{1/q}. \nonumber \end{align} Again, by Minkowski's inequality \begin{align*} \left|G^j\right|_{B^s_{q,q}(\Omega)} &\le \int_I \left(\int_0^1 \int_{|h|\le u} \int_{\Omega_{ rh }} u^{-sq}\frac{1}{(2u)^{n}} \left| \Delta_h^rf(t,x) \onb{}{j}(t)\right|^q\mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u}\right)^{1/q} \mathrm{d} t \nonumber \\ &= \int_I \left| \onb{}{j}(t) \right| \left(\int_0^1 u^{-sq} \frac{1}{(2u)^{n}} \int_{|h|\le u} \int_{\Omega_{rh }} \left|\Delta_h^rf(t,x)\right|^q \mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u}\right)^{1/q} \mathrm{d} t\nonumber \\ &= \int_I \left|\onb{}{j}(t)\right| \left|f(t)\right|_{B^s_{q,q}(\Omega)} \mathrm{d} t \leq \left\|\onb{}{j}\right\|_{L_2(I)}\left\|f\right\|_{L_2(I,B^s_{q,q}(\Omega))} \end{align*} whence \begin{equation} \label{eq:G_bound1} \left|G^j\right|_{B^s_{q,q}(\Omega)} \lesssim \left\|f\right\|_{L_2(I,B^s_{q,q}(\Omega))}.
\end{equation} \begin{comment} \pedro{ \noindent$\mathbf{-}$ \ \textbf{If $0<q < 1$}, we first resort to Lemma~\ref{L:auxiliar} to bound \begin{align*} \left|\int_I \Delta_h^rf(t,x) \onb{}{j}(t) \mathrm{d} t\right| &\lesssim |I|^{1-\frac1q} \|\Delta_h^rf(\cdot,x)\|_{L_q(I)} + |I|^{\frac q{1+q}} \|\Delta_h^rf(\cdot,x)\|_{L_{1+q}(I)} . \end{align*} We now use again that $\| \onb{}{j} \|_{L_\infty(I)} \simeq |I|^{-1/2}$ to arrive at \begin{align*} |G^j|_{B_{q,q}^s(\Omega)}^q \lesssim{}& \int_0^1 \int_{|h|\le u} \int_{\Omega_h} u^{-sq}\frac{1}{(2u)^{n}} \left|\int_I \Delta_h^rf(t,x) \onb{}{j}(t) \mathrm{d} t\right|^q\mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u} \\ \lesssim{}& |I|^{-\frac q2} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \int_{\Omega_h} |I|^{q-1} \|\Delta_h^rf(\cdot,x)\|_{L_q(I)}^q \mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u} \\ &+ |I|^{-\frac q2} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \int_{\Omega_h} |I|^{\frac {q^2}{1+q}} \|\Delta_h^rf(\cdot,x)\|_{L_{1+q}(I)}^q \mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u} \\ =:{}& A + B. 
\end{align*} } \pedro{Proceeding with Hölder's inequality as before, we get \[ A = |I|^{\frac{q}2-1} \int_I |f(\cdot,t)|_{B_{q,q}^s(\Omega)}^q \mathrm{d} t \le \left(\int_I |f(\cdot,t)|_{B_{q,q}^s(\Omega)}^2 \mathrm{d} t\right)^{\frac q2} \] } \pedro{ We now use Hölder's inequality over $\Omega_h$ with exponents $p = \frac{q+1}q$ and $p'= q+1$ to obtain \begin{align*} B &= |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \int_{\Omega_h} \|\Delta_h^rf(\cdot,x)\|_{L_{1+q}(I)}^q \mathrm{d} x \mathrm{d} h \frac{\mathrm{d} u}{u} \\ &\lesssim |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \left( \int_{\Omega_h} \|\Delta_h^rf(\cdot,x)\|_{L_{1+q}(I)}^{q+1} \mathrm{d} x \right)^{\frac q{q+1}} \mathrm{d} h \frac{\mathrm{d} u}{u} \\ &= |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \left( \int_{\Omega_h} \int_I |\Delta_h^rf(t,x)|^{q+1} \mathrm{d} t \mathrm{d} x \right)^{\frac q{q+1}} \mathrm{d} h \frac{\mathrm{d} u}{u} \\ &= |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \left( \int_I \int_{\Omega_h} |\Delta_h^rf(t,x)|^{q+1} \mathrm{d} x \mathrm{d} t \right)^{\frac q{q+1}} \mathrm{d} h \frac{\mathrm{d} u}{u} \\ &= |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \int_{|h|\le u} \left( \int_I \|\Delta_h^rf(t,\cdot)\|_{L_{1+q}(\Omega_h)}^{q+1} \mathrm{d} t \right)^{\frac q{q+1}} \mathrm{d} h \frac{\mathrm{d} u}{u}. 
\end{align*} } \pedro{% We now employ Hölder's inequality with the same exponents over $\{ h \in \mathbb{R}^n : |h| \le u\}$, which leads to \begin{align*} B &\le |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} \left( \int_{|h|\le u} \int_I \|\Delta_h^rf(t,\cdot)\|_{L_{1+q}(\Omega_h)}^{q+1} \mathrm{d} t \mathrm{d} h \right)^{\frac q{q+1}} \left(\int_{|h|\le u} 1 \right)^{\frac{1}{q+1}} \frac{\mathrm{d} u}{u} \\ &\le |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-sq}\frac{1}{(2u)^{n}} (2u)^{\frac{n}{q+1}} \left( \int_I \int_{|h|\le u} \|\Delta_h^rf(t,\cdot)\|_{L_{1+q}(\Omega_h)}^{q+1} \mathrm{d} h \mathrm{d} t \right)^{\frac q{q+1}} \frac{\mathrm{d} u}{u} \\ &\simeq |I|^{\frac q2 \frac{q-1}{1+q}} \int_0^1 u^{-(s+\epsilon) q} u^{\frac{n}{q+1}+\epsilon q} \left( \int_I \int_{|h|\le u} \|\Delta_h^rf(t,\cdot)\|_{L_{1+q}(\Omega_h)}^{q+1} \mathrm{d} h \mathrm{d} t \right)^{\frac q{q+1}} \frac{\mathrm{d} u}{u^{n+1}}. \end{align*} } \pedro{% We now apply Hölder's inequality one last time, over $[0,1]$ with the same exponents to bound \begin{align*} B \lesssim |I|^{\frac q2 \frac{q-1}{1+q}} &\left( \int_0^1 u^{-(s+\epsilon) (q+1)} \int_I \int_{|h|\le u} \|\Delta_h^rf(t,\cdot)\|_{L_{1+q}(\Omega_h)}^{q+1} \mathrm{d} h \mathrm{d} t \frac{\mathrm{d} u}{u^{n+1}}\right)^{\frac q{q+1}} \\ & \times \left( \int_0^1 u^{(\frac{n}{q+1}+\epsilon q)(q+1)} \frac{\mathrm{d} u}{u^{n+1}}\right)^{\frac 1{q+1}}. \end{align*} Since $(\frac{n}{q+1}+\epsilon q)(q+1)-(n+1) = -1+\epsilon q (q+1) > -1$ the last integral is finite and \begin{align*} B &\lesssim |I|^{\frac q2 \frac{q-1}{1+q}} \left( \int_I |f(t,\cdot)|_{B_{1+q,1+q}^{s+\epsilon}(\Omega)}^{q+1} \mathrm{d} t \right)^{\frac q{q+1}} \\ &\le |I|^{\frac q2 \frac{q-1}{1+q}} \left( \int_I |f(t,\cdot)|_{B_{1+q,1+q}^{s+\epsilon} \mathrm{d} t(\Omega)}^2 \right)^{\frac q2} \left( \int_I 1 \mathrm{d} t \right)^{\frac q{1+q} \frac{1-q}2 } \le \| f \|_{L_2(I,B_{1+q,1+q}^{s+\epsilon}(\Omega))}^q. 
\end{align*}} \end{comment} Therefore from \eqref{eq:G_bound0} \eqref{eq:G_bound1} and \eqref{eq:G_bound2} we get \[ \| G \|_{ L_2(I,B^{s}_{q,q}(\Omega) )} \lesssim \| f \|_{ L_2(I,B^{s}_{q,q}(\Omega) )} \qquad\text{if \ $1 \le q < \infty$}, \] and analogously for $q = \infty$. \end{proof} \begin{comment} \pedro{ \begin{lem}\label{L:auxiliar} Let $0 < q < 1$ and let $f \in L_{1+q}(I)$ for some interval $I$ of positive length $|I|$. Then \[ \| f \|_{L_1(I)} \le q^2{|I|^{1-\frac1q}} \| f\|_{L_q(I)} + (1-q^2) |I|^{\frac{q}{1+q}} \| f \|_{L_{1+q}(\Omega)} \] \end{lem} } \begin{proof} \pedro{ We start applying Hölder's inequality with exponents $p = \frac1q$ and $p'=\frac1{1-q}$ to bound \begin{align*} \int_I |f| &= \int_I |f|^{q^2} |f|^{1-q^2} \\ &\le \left(\int_I|f|^{\frac{q^2}q} \right)^{q} \left(\int_I |f|^{\frac{1-q^2}{1-q}}\right)^{1-q} \\ &= \left(\int_I|f|^q\right)^{q} \left(\int_I |f|^{1+q}\right)^{1-q} = \| f \|_{L_q(I)}^{q^2} \| f \|_{L_{1+q}(I)}^{1-q^2}. \end{align*} } \pedro{ We now apply Young's inequality $ab \le \frac{a^p}{p} + \frac{b^{p'}}{p'}$ with $p = \frac1{q^2}$ and $p'= \frac1{1-q^2}$ and $a = |I|^{q^2-q}\| f \|_{L_q(I)}^{q^2}$ and $b = |I|^{q-q^2} \| f \|_{L_{1+q}(I)}^{1-q^2}$ to obtain \[ \int_I |f| \le q^2\left(|I|^{q^2-q}\| f \|_{L_q(I)}^{q^2}\right)^{\frac{1}{q^2}} + (1-q^2) \left( |I|^{q-q^2} \| f \|_{L_{1+q}(I)}^{1-q^2} \right)^{\frac{1}{1-q^2}}, \] which is the desired assertion. } \end{proof} \end{comment} If we use the same polynomial degree in space and time in Theorem \ref{space-time-poly} the result reads as follows. \begin{cor}[{\bf Fully discrete with same polynomial degree}]\label{space-time-poly-2} Let $1\le q \le \infty$ and $n\Big(\frac1q-\frac12\Big)_+ < s < r \in \mathbb{N}$. 
If $f\in B^{s}_{q,q}([0,T),X) \cap L_2([0,T),B_{q,q}^{s}(\Omega))$ with $X = L_2(\Omega)$, then for each $\varepsilon > 0$ there exists a time-space partition $\mathcal{P}$ that satisfies \[ \# \mathcal{P}\le c_1 \varepsilon^{-\frac{n+1}{s}} \] and a function $F \in \overline{\mathbb{V}}_{\mathcal{P}}^{r,r}$ such that \begin{align*} \| f - F \|_{ L_2([0,T),X)} \le c_2 \varepsilon |\!|\!| f |\!|\!| \le c_3 (\#\mathcal{P})^{-\frac{s}{n+1}} |\!|\!| f |\!|\!|, \end{align*} where $|\!|\!| f |\!|\!| = | f |_{B^{s}_{q,q}([0,T),X)} + \| f \|_{L_2([0,T),B_{q,q}^{s}(\Omega))}$ and the positive constants $c_1,c_2,c_3$ depend on $q$ and $s$ but not on $f$. \end{cor} \subsection{Comparison with space-time finite elements} If we were to use space-time finite elements of order $r$ in $\mathbb{R}^{n+1}$, in order to obtain the same rate $(\#\mathcal{P})^{-\frac{s}{n+1}}$ as that indicated in Corollary~\ref{space-time-poly-2}, Corollary~\ref{cor:spaceapprox} tells us that the function $f$ should belong to $B^s_{q,q}([0,T)\times\Omega)$ with $0<s < r$ and $0<\frac1q < \frac{s}{n+1}+\frac12$. This raises the following question: \begin{quote} What is the relation between the spaces \[ B^s_{q,q}([0,T)\times\Omega)\quad\text{and}\quad B^{s}_{q_1,q_1}([0,T),L_2(\Omega)) \cap L_2([0,T),B_{q_2,q_2}^{s}(\Omega)) \] for the respective ranges of the parameters $q_1,q_2,$ and $q$? \end{quote} The following proposition provides a first attempt to give an answer to this question. \begin{prop} Let $0<s<r$ and $0<q_1,q_2,q < \infty$, where we additionally require that \begin{equation} \label{range_q} \frac1q < \frac{s}{n+1}+\frac12, \qquad \frac{1}{q_1} < {s}+\frac12, \qquad \text{and}\qquad \frac{1}{q_2} < \frac{s}{n}+\frac12. 
\end{equation} Then we have \begin{equation} \bigcup_{q_1,q_2}B^{s}_{q_1,q_1}([0,T),L_2(\Omega))\cap L_2([0,T),B_{q_2,q_2}^{s}(\Omega))\not\subset \bigcup_{q}B^s_{q,q}([0,T)\times \Omega), \end{equation} where the union is taken over all $q,q_1,q_2$ according to \eqref{range_q}. \end{prop} \begin{proof} \begin{comment}{\em Step 1.} Let $f\in B^s_{q,q}([0,T)\times \Omega)$ for some $\frac1q < \frac{s}{n+1}+\frac12$. This immediately implies \pedro{the one} $f\in L_2([0,T)\times \Omega)$, thus, by Fubini's theorem $f(\cdot, x)\in L_2([0,T))$ and $f(t,\cdot)\in L_2(\Omega)$. Moreover, we will also show that this implies that \[ f(\cdot, x)\in B^s_{q,q}([0,T))\qquad \text{and}\qquad f(t,\cdot)\in B^s_{q,q}(\Omega). \] This can be seen as follows: Put $Q:=[0,T)\times \Omega$. Using the equivalence between the moduli of smoothness $w_r$ and $\omega_r$ we have \begin{align*} \omega_r(f,t,Q)^q_q &= \sup_{|\tilde{h}|\leq t}\|\Delta_{\tilde{h}}^r f|{L_q(Q)} \|^q\\ &\geq \sup_{0<h<t}\|\Delta^r_{(h,0)}f|{L_q(Q)}\|^q\\ &\simeq \frac 1t\int_0^t \|\Delta^r_{(h,0)}f|{L_q(Q)}\|^q\mathrm{d} h \\ &= \frac 1t\int_0^t \int_{\Omega}\int_0^{1-rh} |\Delta^r_{(h,0)}f(y,x)|^q\mathrm{d} y\mathrm{d} x\mathrm{d} h \\ &= \int_{\Omega}\frac 1t\int_0^t \|\Delta^r_{(h,0)}f(\cdot,x)|L_q(0,1-rh)\|^q\mathrm{d} h \mathrm{d} x\\ &= \int_{\Omega}w_r(f(\cdot,x),t,[0,1])^q_q \mathrm{d} x. \end{align*} Multiplying both sides with $t^{-sq}$, integration w.r.t. $\frac{\mathrm{d} t}{t}$ from $0$ to $T$, and interchanging the double integral on the right-hand side yields \[ \int_{\Omega}\int_0^T t^{-sq}w_r(f(\cdot,x),t,[0,1])^q_q \frac{\mathrm{d} t}{t}\mathrm{d} x\leq \int_0^T t^{-sq}\omega_r(f,t,Q)_q^q\frac{\mathrm{d} t}{t}. \] But this is equivalent to \[ \int_{\Omega}|f(\cdot, x)|_{B^s_{q,q}([0,T))}^q\mathrm{d} x\lesssim |f|_{B^s_{q,q}([0,T)\times \Omega)}<\infty, \] which shows that for almost any $x\in \Omega$ we have $f(\cdot, x)\in B^s_{q,q}([0,T))$. 
The fact that $f(t,\cdot)\in B^s_{q,q}(\Omega)$ is proven in the same way by interchanging the roles of $[0,T)$ and $\Omega$ in the above calculations. \\ {\em Step 2.} We need to check when for $f(\cdot ,x)$ we have the embedding \[ B^s_{q,q}([0,T))\hookrightarrow B^s_{q_1,q_1}([0,T)). \] According to \cite[Cor.~3.7]{HS13} for our range of parameters (i.e., $s_1=s_2=s$ in Cor. 3.7) this holds if, and only if \[ s-s=\left(\frac{1}{q}-\frac{1}{q_1}\right)_+\qquad \text{and}\qquad q\leq q_1. \] But this implies we have to choose $q_1=q$. Moreover, for $f(t, \cdot)$ we have the embedding \[ B^s_{q,q}(\Omega)\hookrightarrow B^s_{q_2,q_2}(\Omega). \] According to \cite[Cor.~3.7]{HS13} this holds if, and only if \[ s-s=\left(\frac{1}{q}-\frac{1}{q_2}\right)_+\qquad \text{and}\qquad q\leq q_2, \] thus, we choose $q_2=q$. In conclusion, the desired embedding holds if we can choose $q_1=q_2=q$, where $$\frac 1q<\min\left\{s, \frac sn, \frac{s}{n+1}\right\}+\frac 12=\frac{s}{n+1}+\frac 12.$$ This is always possible by our assumptions. \\ {\em Step 3.} \end{comment} We show that we can find functions belonging to $ \bigcup_{q_1,q_2}B^{s}_{q_1,q_1}([0,T),L_2(\Omega))\cap L_2([0,T),B_{q_2,q_2}^{s}(\Omega))$ which are not in $\bigcup_{q}B^s_{q,q}([0,T)\times \Omega)$. For this let us choose $q_1$ such that $$ \frac 1q< \frac{s}{n+1}+\frac 12<\frac{1}{q_1}< s + \frac 12 $$ and consider a function $f$ which is constant with respect to the space variable $x$ and belongs to $B^s_{q_1,q_1}([0,T))$. Clearly, by our assumptions this function is also in $L_2([0,T))$. Moreover, by our choice of $q_1$ we see from \cite[Cor.~3.7]{HS13} that \[ B^s_{q_1,q_1}([0,T),L_2(\Omega))\not\hookrightarrow B^{s}_{q,q}([0,T)\times\Omega), \] since $q_1<q$, which proves the claim.
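As an illustrative aside (not part of the argument), the choice of $q_1$ above can be checked numerically: the window for $1/q_1$ is always nonempty, and any $1/q_1$ inside it exceeds every admissible $1/q$, so $q_1<q$. The $(s,n)$ pairs in the short sketch below are arbitrary example values.

```python
# Numerical sanity check of the exponent window used above: for s > 0 and
# n >= 1 the interval ( s/(n+1) + 1/2 , s + 1/2 ) for 1/q_1 is nonempty,
# and any 1/q_1 inside it exceeds every admissible 1/q < s/(n+1) + 1/2,
# so q_1 < q. The (s, n) pairs below are arbitrary example values.

def q1_window(s, n):
    """Open interval of admissible values of 1/q_1 in the construction above."""
    return s / (n + 1) + 0.5, s + 0.5

for s, n in [(0.5, 1), (1.0, 2), (2.5, 3)]:
    lo, up = q1_window(s, n)
    assert lo < up                 # the window is nonempty
    inv_q1 = 0.5 * (lo + up)       # pick 1/q_1 in the middle of the window
    q1 = 1.0 / inv_q1
    q = 1.0 / (0.9 * lo)           # an example admissible q (i.e. 1/q < lo)
    assert q1 < q                  # so f lies outside B^s_{q,q}([0,T) x Omega)
    print(f"s={s}, n={n}: 1/q_1 in ({lo:.3f}, {up:.3f})")
```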
Alternatively, we could choose $q_2$ such that $$ \frac 1q< \frac{s}{n+1}+\frac 12<\frac{1}{q_2}< \frac sn + \frac 12 $$ and consider a function $f$ which is constant with respect to the time variable $t$ and belongs to $B^s_{q_2,q_2}(\Omega)$. Then, clearly this function also belongs to $L_2(\Omega)$. By our choice of $q_2$ it again follows from \cite[Cor.~3.7]{HS13} that \[ B^s_{q_2,q_2}(\Omega)\not\hookrightarrow B^{s}_{q,q}(\Omega), \] since $q_2<q$. This completes the proof. \end{proof} \begin{rem} We believe that for the above spaces under consideration we actually have the following inclusion \[ \bigcup_{q}B^s_{q,q}([0,T)\times \Omega)\subsetneq \bigcup_{q_1,q_2}B^{s}_{q_1,q_1}([0,T),L_2(\Omega))\cap L_2([0,T),B_{q_2,q_2}^{s}(\Omega)), \] where $q, q_1$, and $q_2$ are chosen according to \eqref{range_q}. This can be interpreted in the sense that the respective solution spaces for time-stepping algorithms yielding the approximation class $\mathbb{A}_{\frac{s}{n+1}}(L_2([0,T)\times \Omega))$ are in fact larger than the corresponding solution spaces for space-time finite elements. \\ In this context the fact that $B^s_{q,q}([0,T)\times \Omega)\subset L_2([0,T), B^s_{q_2,q_2}(\Omega))$ should be easier to handle. However, these matters are quite technical and, therefore, this interesting question will be tackled in a future paper. \end{rem} \begin{comment} Answer: neither is included in the other. We have a counterexample - at least for Sobolev spaces. \begin{rem} It turns out that \[ W^1_1(L_2)\cap L_2(W^1_q)\quad \text{and}\quad W^1_{\tilde{q}}((0,T)\times \Omega) \] are not related in terms of embeddings of one space into the other. \begin{proof} {\em Step 1:} We first show that $$W^1_1(L_2)\cap L_2(W^1_q)\not\subset W^1_{\tilde{q}}((0,T)\times \Omega).$$ Take a function $f$ which is constant w.r.t. the spacial variable $x$. Then the problem reduces to show that \[ W_1^1\cap L_2\not\subset W^1_{\tilde{q}}. 
\] Since $\tilde{q}\geq q\geq 1$ and the time inverval is bounded, we have embeddings \[ W^1_{\tilde{q}}\subset W^1_q\subset W^1_1\subset L_2, \] where the last embedding follows since we have $1-\frac{d}{1}\geq -\frac d2$ when $d=1$. Therefore, the problem reduces to find a function $f$ such that \[ f\in W^1_1 \quad \text{and}\quad f\notin W^{1}_{\tilde{q}}, \] but this is possible. \\ {\em Step 2:} We want to show that $$W^1_{\tilde{q}}((0,T)\times \Omega)\not\subset W^1_1(L_2)\cap L_2(W^1_q).$$ In order to do so we construct a counterexample. Let $$f(t,x)=(t-|x|)^{\alpha}, \qquad \alpha\in \mathbb{R}, \qquad (t,x)\in (0,T)\times \Omega, \qquad \textcolor{red}{t\geq |x|}.$$ Then for the derivatives of $f$ using $\partial_{x_i}(|x|)=\frac{x_i}{|x|}$ we see that \begin{eqnarray*} \partial_t f(t,x)&=&\alpha(t-|x|)^{\alpha-1}, \\ \partial_{x_i}f(t,x)&=& -\alpha(t-|x|)^{\alpha-1}\frac{x_i}{|x|}. \end{eqnarray*} This yields \begin{eqnarray*} |\partial_{x_i}f(t,x)|&\leq & |\alpha(t-|x|)^{\alpha-1}| \frac{|x_i|}{|x|}\leq |\alpha(t-|x|)^{\alpha-1}|=|\partial_t f(t,x)|, \\ |f(t,x)|&=&\left|\partial_t f(t,x)\frac{t-|x|}{\alpha}\right|\leq c |\partial_t f(t,x)|, \end{eqnarray*} therefore, $|\partial_t f|$ majorizes $|f|$ and $|\partial_{x_i}f|$. We need to choose $\alpha$ such that $f$ to satisfies $f\in L_{\tilde{q}}(L_{\tilde{q}})$, $\partial_t f\in L_{\tilde{q}}(L_{\tilde{q}})$, $\partial_{x_i}f\in L_{\tilde{q}}(L_{\tilde{q}})$ (which by the above considerations is satisfied if $\partial_t f\in L_{\tilde{q}}(L_{\tilde{q}})$) but $\partial_t f\notin L_1(L_2)$. We proceed by giving a detailed calculation to characterize all $\alpha$ such that $\partial_t f\in L_p(L_q)$ for general $p,q$. W.l.o.g. 
we restrict ourselves to the case when $I\times \Omega=(0,1)\times B_1(0)$ and estimate \begin{align*} & \|\partial_t f\|_{L_p(L_q)}\\ &= \left(\int_0^1 \left(\int_{B_1}|\partial_t f|^q \mathrm{d} x\right)^{p/q}\mathrm{d} t\right)^{1/p}\\ &= \left(\int_0^1 \left(\int_{B_t}|\partial_t f|^q \mathrm{d} x+\int_{B_1\setminus B_t}|\partial_t f|^q \mathrm{d} x\right)^{p/q}\mathrm{d} t\right)^{1/p}\\ &\simeq \left(\int_0^1 \left(\int_{B_t}|\partial_t f|^q \mathrm{d} x\right)^{p/q}\mathrm{d} t\right)^{1/p}+ \left(\int_0^1 \left(\int_{B_1\setminus B_t}|\partial_t f|^q \mathrm{d} x\right)^{p/q}\mathrm{d} t\right)^{1/p}=:I+II.\\ \end{align*} We restrict ourselves to the first integral $I$ since the behaviour of the second one $II$ is the same. Thus, \begin{align} I &=\left(\int_0^1 \left(\int_{B_t}|\alpha|^q |t-|x||^{(\alpha-1)q}\mathrm{d} x\right)^{p/q}\mathrm{d} t\right)^{1/p}\notag \\ &= c_{\alpha}\left(\int_0^1 \left(\int_{B_t} (t-|x|)^{(\alpha-1)q}\mathrm{d} x\right)^{p/q}\mathrm{d} t\right)^{1/p}\notag\\ &= c_{\alpha}\left(\int_0^1 \left(\int_{0}^t (t-r)^{(\alpha-1)q}r^{d-1}\mathrm{d} r\right)^{p/q}\mathrm{d} t\right)^{1/p}, \label{est-01} \end{align} where in the second line we used $|x|\leq t$ and in the last line we applied polar coordinates. Concerning the inner integral in \eqref{est-01} we substitute $u:=t-r$ and obtain \begin{align*} &\int_{0}^t (t-r)^{(\alpha-1)q}r^{d-1}\mathrm{d} r\\ &= -\int_{0}^t u^{(\alpha-1)q}(t-u)^{d-1}\mathrm{d} u\\ &=-\int_{0}^t u^{(\alpha-1)q}\sum_{k=0}^{d-1}{d-1 \choose k}t^{d-1-k}(-u)^k\mathrm{d} u\\ &=\sum_{k=0}^{d-1} c_{d,k}\cdot t^{d-1-k}\underbrace{\int_{0}^t u^{(\alpha-1)q+k}\mathrm{d} u}_{\simeq u^{(\alpha-1)q+k+1}\big|_0^t}<\infty \end{align*} if, and only, if \begin{equation}\label{cond-int-1} (\alpha-1)q+k>-1. 
\end{equation} For the outer integral in \eqref{est-01} we then obtain \begin{align*} I&\simeq \left(\sum_{k=0}^{d-1}c_{k,d,\alpha}\int_0^1 \left(t^{d+(\alpha-1)q}\right)^{p/q}\mathrm{d} t\right)^{1/p}<\infty, \end{align*} if, and only, if \begin{equation} \label{cond-int-2} \frac{dp}{q}+(\alpha-1)p>-1. \end{equation} From the above considerations if follows that condition \eqref{cond-int-1} with $k=0$ (worst case) together with \eqref{cond-int-2} yields \begin{equation} \label{char_alpha} \partial_tf\in L_p(L_q)\quad \iff\quad \Big( (\alpha-1)q>-1\Big) \ \wedge \ \left(\frac{dp}{q}+(\alpha-1)p>-1\right). \end{equation} For $p=q=\tilde{q}$ we obtain from this \[ \Big( (\alpha-1)\tilde{q}>-1\Big) \ \wedge \ \Big(d+(\alpha-1)\tilde{q}>-1\Big), \] hence, $\partial_tf\in L_{\tilde{q}}$ if, and only, if $\alpha>1-\frac{1}{\tilde{q}}$. On the other hand for $p=1$, $q=2$ we get \[ \Big(\alpha>\frac 12\Big) \ \wedge \ \left(\alpha>-\frac d2\right), \] hence, $\partial_t f\in L_1(L_2)$ if, and only, if $\alpha>\frac 12$. We get the desired counterexample by choosing $$ 1-\frac{1}{\tilde{q}}<\alpha<\frac 12, $$ which is possible for $\tilde{q}<2$. \end{proof} \end{rem} \end{comment} \begin{comment} \subsection{Piecewise constants and Sobolev regularity} \begin{cor}[{\bf Approximation with piecewise constant functions}]\label{space-time-poly-pwconst} Let $u\in W^1_{q_1,q_1}([0,T), L_2(\Omega)) \cap L_2([0,T),W^1_{q_2,q_2}(\Omega))$, where $q_1$ and $q_2$ satisfy the assumptions of Theorem \ref{space-time-poly}. 
Then the time-stepping Greedy algorithm terminates in finitely many steps and the generated partition $\mathcal{P}$ satisfies \[\# \mathcal{P} \lesssim \frac{1}{\varepsilon^{n+1}}.\] Moreover, \[ \inf_{v\in \overline{\mathbb{V}}_{\mathcal{P}}^{1,1}}\|u-v|L_{\textcolor{red}{2}}([0,T), L_{\textcolor{red}{2}}(\Omega))\|\lesssim {(\# \mathcal{P})^{-\frac{1}{1+n}}}|u|_{W^1_{q_1,q_1}([0,T),W^1_{q_2,q_2}(\Omega))}, \] where the constants depend on $q_1$ and $q_2$ but not on $u$. \textcolor{red}{In particular, (...is there something missing?)} \end{cor} \begin{proof} \textcolor{red}{This follows from the results above when applying \eqref{gen_whitney_2} instead of \eqref{gen_whitney}. } \end{proof} \end{comment} \bibliographystyle{alpha} \def$'${$'$}
\section{Introduction} Nonclassical states are important resources for quantum information processing and probing fundamental properties of quantum mechanics~\cite{Dodonov02,Braunstein05}. A wide range of states have been studied in the literature, most notably in optical-based systems, including Fock states~\cite{Waks06}, displaced Fock states~\cite{Satya85,Ziesel13} and different types of squeezed states~\cite{Lvovsky15}. In particular, photon-added coherent (PAC) states~\cite{Agarwal91,Agarwal92}, where a photon is added to the same mode as a coherent state, have received much attention~\cite{Sivakumar99,Sivakumar00}. They have applications in quantum sensing~\cite{Braun14,Gard16,Schnabel17} and helping to develop security protocols in quantum key distribution~\cite{Loepp06,Assche06,Barnett06,Barnett18}. Furthermore, the process of adding (and subtracting) a photon in an optical field has interesting physical consequences~\cite{Kim08}, enabling quantum state engineering~\cite{Fiurasek09} and the probing of quantum features, such as bosonic commutation relations~\cite{Parigi07} and quantum thermodynamics~\cite{Vidrighin16}. In recent years, theoretical studies have investigated the generation of PAC states in cavity and ion-trap systems~\cite{RamosPrieto14}, and their creation using photon-subtracted states~\cite{Mojaveri14}, amplification methods~\cite{Shringapure19} and nonlinear optics~\cite{Kalamidas08,Li07}. Studies have also investigated their entanglement properties~\cite{Dominguez16,Ren19}, robustness to noise and dissipation~\cite{Dominguez16,Hu09}, in addition to their statistics~\cite{Barnett18}, practical characterisation~\cite{Filippov13} and generalisation to more complex structured states~\cite{HongChun10,Sivakumar13}. 
On the experimental side, studies have investigated the generation of PAC states by parametric down-conversion~\cite{Zavatta04}, as well as the characterisation of properties, such as photon statistics and the Wigner function~\cite{Zavatta05}, and degree of non-Gaussianity~\cite{Barbieri10}. Although considerable progress has been made in the development of PAC states, an important issue is that the theoretical works carried out so far use a single-mode description, while in experiments pulsed light is used, which naturally requires a continuous (temporal) mode description~\cite{Blow90,Loudon00}. The results observed in pulsed experiments are roughly in line with the single-mode theory; however, the impact of temporal and spectral wavepacket imperfections on the properties of PAC states cannot be predicted from a single-mode picture. For example, it is not possible to predict how a mismatch in the pulse duration of the single photon and coherent state wavepackets, when added together, affects the sub-Poissonian behaviour and quadrature squeezing. This may have an adverse effect on the performance of the generated PAC states in quantum sensing and other applications. In this work, we use a continuous-mode formalism to show how the properties of PAC states are affected by timing and bandwidth imperfections, as well as loss from propagation in waveguides. We study the photon-number distribution, second-order correlations, quadrature squeezing and fidelity of pulsed PAC states. We find that PAC states are reasonably robust to temporal and spectral mismatch, as well as propagation loss. The results of the work may help in the further development of experimental schemes for PAC state generation and their use in quantum information applications. In Section II we introduce the model for the work and some preliminary details, including some mathematical relations that will be used throughout the study.
In Section III we investigate the photon statistics of continuous-mode PAC states, including photon-number distribution and second-order correlations. In Section IV we study quadratures and squeezing, and in Section V we derive an expression for the fidelity. Finally, in Section VI we summarize our findings. \begin{figure*}[t] \centering \includegraphics[width=17.6cm]{fig1.jpg} \caption{Generation and propagation of continuous-mode photon-added coherent (PAC) state pulses. (a) A scheme for generating a PAC state using an optical parametric amplifier with a Beta Barium Borate (BBO) crystal. A coherent state is sent into the signal mode and a down-converted photon is emitted into the same spatial mode via stimulated emission. A second photon is emitted into the idler mode and used to herald the successful photon addition. Ideally, the stimulated photon is in phase with the coherent state and has the same temporal shape. However, temporal properties of the pump pulse may affect the emission time, duration and shape of the added photon wavepacket. (b) Propagation of the continuous-mode PAC state pulses in three example waveguides: (i) surface plasmon polariton (SPP) nanowire, (ii) SPP stripe and (iii) dielectric fibre. In all waveguides a perfect coupling from free space is assumed. Imperfect coupling can be included in an overall loss factor~\cite{Tame08}. (c) Loss incurred, $\eta$, for a given propagation distance $L$ in the waveguides considered in (b). Here, the loss factor $\eta=e^{-k_{\rm i} L}$, with $k_{\rm i}$ set as $1/1.2$, $1/15$ and $1/10^{10}$ as examples for light at optical wavelengths in the respective waveguides.} \label{fig1} \end{figure*} \section{Model and Preliminaries} \subsection{Photon-added coherent state} A salient feature of the continuous mode formalism is the use of frequency dependent photon annihilation and creation operators $\hat{a}(\omega)$ and $\hat{a}^\dagger(\omega)$, respectively. 
The Fourier transforms of these operators give the instantaneous operators \begin{equation} \hat{a}(t) = \frac{1}{\sqrt{2\pi}} \int d\omega \hat{a}(\omega)\mathrm{e}^{-{\rm i} \omega t} \end{equation} and its Hermitian conjugate $\hat{a}^\dagger(t)$. These operators obey the commutation relation \begin{equation} [\hat{a}(t),\hat{a}^\dagger(t')] = \delta(t' - t). \label{eqn:atcomm} \end{equation} Using $\hat{a}^\dagger(t)$ a photon wavepacket creation operator may be defined as \begin{equation} \hat{a}_\xi^\dagger = \int dt \xi(t) \atd, \label{eqn:axid} \end{equation} where $\xi(t)$ is the wavepacket amplitude. A continuous-mode PAC state in a single spatial mode is constructed by the action of the photon wavepacket creation operator, $\hat{a}_\xi^\dagger$, with pulse profile $\xi(t + \tau)$, on a spatial mode containing a pulsed coherent state, $|\{\alpha\}\rangle$, with pulse profile $\alpha(t)$~\cite{Blow90,Loudon00}. The parameter $\tau$ enables the single photon and coherent state pulses to be offset in time. The resulting state, \begin{equation} |\{\alpha\},1_\xi\rangle = |N|^{1/2}~\hat{a}_\xi^\dagger~|\{\alpha\}\rangle, \label{eqn:pacs} \end{equation} is renormalised by the factor $|N|^{1/2}$. The normalisation constant is obtained from the relation $\braket{1_{\xi},\{\alpha\}|\{\alpha\},1_\xi}=1$, by substituting in Eq.~\eqref{eqn:pacs} and writing the operators in normal-order using the commutation relation $[{\hat{a}}_{\xi},\hat{a}_\xi^\dagger] = 1$. The following equation is also used, \begin{equation} {\hat{a}}_{\xi}|\{\alpha\}\rangle = \sigma(\tau)|\{\alpha\}\rangle, \label{eqn:axieig} \end{equation} which is obtained from the relations ${\hat{a}}_{\xi} = \int dt \xi^*(t+\tau) \at$ and $\at \ket{\{\alpha\}} = \alpha(t)\ket{\{\alpha\}}$ (see Appendix \ref{appendix: CMF}). 
The quantity $\sigma(\tau)$, given by \begin{equation} \sigma(\tau) = \int dt~\xi^*(t + \tau)\alpha(t), \label{eqn:sigma} \end{equation} is the cross-correlation of the single-photon and coherent state pulse profiles, and serves as a measure of the overlap between the pulses. The above steps lead to the normalization \begin{equation} |N(\tau)| = \Big(1 + |\sigma(\tau)|^2\Big)^{-1}. \end{equation} In the limit of perfect overlap, {\it i.e.} $\tau=0$ and $\xi(t)=\alpha(t)/\sqrt{n_{\alpha}}$, where $n_{\alpha}$ is the mean photon number of the coherent state, we have $|\sigma(\tau)|^2=n_{\alpha}$ and a normalization $N=(1+n_{\alpha})^{-1}$, which recovers the well-known single-mode result~\cite{Agarwal91}. To the best of our knowledge, this continuous-mode version of a PAC state has not been considered before. While similar to the single-mode case on which it is based, its construction is not obvious as it involves an overlap between a single photon and a coherent state wavepacket. This naturally leads to the possibility of temporal and bandwidth mismatch in the state, which may occur depending on the physical scenario. As an example, the continuous-mode PAC state introduced above can be produced using the process shown in Fig.~\ref{fig1}~(a), where a coherent state is fed into the signal spatial mode of an optical parametric amplifier (OPA) and a single down-converted photon is emitted into the same mode via stimulated emission, with another photon emitted into the idler mode and used to herald the successful photon addition~\cite{Zavatta04,Zavatta05}. Under ideal conditions the stimulated photon in the signal mode produced by the ensemble of atoms in the OPA will be in phase with the coherent state and have the same temporal shape~\cite{Vahala93}. However, the arrival time, duration and shape of the pump pulse entering the OPA (BBO crystal) will affect the emission time, duration and shape of the added photon wavepacket. 
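To make the role of the overlap concrete, the short sketch below evaluates $\sigma(\tau)$ and the normalization $N(\tau)$ numerically for Gaussian pulse profiles. The Gaussian parameterisation, bandwidths and mean photon number used here are illustrative assumptions, not values taken from an experiment.

```python
import numpy as np

# Sketch: evaluate sigma(tau) = \int xi*(t+tau) alpha(t) dt and
# N(tau) = 1/(1 + |sigma|^2) for Gaussian wavepackets, normalized so that
# \int |xi|^2 dt = 1 and \int |alpha|^2 dt = n_alpha. Bandwidths (omega1,
# omega) and n_alpha below are example values.

def gaussian_profile(t, bandwidth):
    """Unit-normalized Gaussian amplitude: integral of |profile|^2 equals 1."""
    return (bandwidth**2 / (2 * np.pi))**0.25 * np.exp(-(bandwidth * t)**2 / 4)

def sigma(tau, n_alpha, omega1, omega):
    t = np.linspace(-60.0, 60.0, 24001)
    dt = t[1] - t[0]
    xi = gaussian_profile(t + tau, omega1)                 # single-photon profile
    alpha = np.sqrt(n_alpha) * gaussian_profile(t, omega)  # coherent-state profile
    return np.sum(np.conj(xi) * alpha) * dt                # Riemann sum of the overlap

n_alpha = 4.0
s0 = sigma(0.0, n_alpha, omega1=1.0, omega=1.0)  # matched pulses, no delay
print(abs(s0)**2)        # ~ n_alpha, recovering the single-mode N = 1/(1 + n_alpha)
s1 = sigma(2.0, n_alpha, omega1=1.0, omega=1.0)  # delayed single photon
print(abs(s1)**2, 1 / (1 + abs(s1)**2))          # reduced overlap, larger N
```

For equal-bandwidth Gaussians the delay suppresses the overlap as $\sigma(\tau)=\sqrt{n_\alpha}\,e^{-\Omega^2\tau^2/8}$, which the sum above reproduces.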
This may cause a mismatch in the overlap between it and the coherent state. For instance, the pump pulse could have a time duration narrower than the coherent state, and potentially even an offset in its arrival time at the crystal. Another example is in the stimulated emission from a single excited atom by a coherent state~\cite{Sivakumar99}, under the assumption of a weak intensity, well-defined spatial mode and long enough pulse profile~\cite{Vahala93,Fischer18}. Due to the finite lifetime of the atom's excited state, a pump pulse must be used to place the atom into this state close in time to the arrival of the coherent state wavepacket. If the timing of the pump is not exact, it will affect the time at which the atom can produce a stimulated photon in the same mode as the coherent state. For instance, the atom might only be put into the excited state by the pump after part of the coherent state pulse has already passed, causing a mismatch in the overlap between it and the coherent state. Other scenarios for generating PAC states can be considered with different timing and bandwidth imperfections~\cite{Sivakumar00}. For instance, when adding a spontaneously emitted photon in phase with a coherent state~\cite{Resch02,Rarity05}. To keep the model general we consider the case where the coherent state and the single-photon wavepackets have arbitrary time profiles with time durations and pulse centre times varied as independent parameters. As mentioned earlier, the single photon pulse in our model has a complex wavepacket amplitude, or profile, $\xi(t+\tau)$. This profile has a bandwidth $\Omega_1$, phase $\theta_1(t + \tau) = \omega_0(t + \tau)$ and pulse centre shifted in time by $\tau$ with respect to that of the coherent state. The coherent state, with profile $\alpha(t)$, has a mean photon number of $n_\alpha$, a bandwidth of $\Omega$, and a phase $\theta(t) = \omega_0 t$. 
The peak of the coherent state pulse passes the coordinate origin $z=0$ of the single spatial mode that it occupies at time $t_0=0$. For simplicity, we limit our study to pulses of the same central frequency $\omega_0$ and assume that the coherent state frequency can be tuned to closely match that of the single photon~\cite{Zavatta04}. All the results shown are for the case of Gaussian pulses, however the theory is applicable to more general pulse profiles. In the above model, we have described PAC states using the continuous-mode formalism of Refs.~\cite{Blow90,Loudon00}, as it is most appropriate for narrowband wavepackets. Brief descriptions of number state and coherent state pulses are provided in Appendix~\ref{appendix: CMF} for the interested reader, with minor details required for some of the derivations given in the remainder of the work. \subsection{Propagation} Finally, we describe the model used for the propagation of continuous-mode PAC states in various types of waveguides, such as those shown in Fig.~\ref{fig1}~(b). As example waveguides, we consider a surface plasmon polariton (SPP) nanowire~\cite{Akimov07}, SPP stripe~\cite{DiMartino12} and dielectric fibre~\cite{OBrien05}, all with varying amounts of loss incurred for a given propagation distance, as described by the function $\eta$ shown in Fig.~\ref{fig1}~(c). An input state propagating in a waveguide with a complex dispersion relation $k(\omega) = k_r(\omega) + {\rm i} k_i(\omega)$ will undergo a phase-shift and attenuation due to the real and imaginary parts of $k(\omega)$, respectively. The travelling-wave attenuation model of Ref.~\cite{Jeffers93} enables a description of such propagation using an effective beamsplitter system. 
The input-output relations are \begin{equation} \hat{a}_{L}(t) = \eta^{\frac{1}{2}}(L)~\at[r] + {\rm i}(1-\eta(L))^{\frac{1}{2}}~\hat{v}(t) \label{eqn:aL} \end{equation} and \begin{equation} \hat{v}_{L}(t) = \eta^{\frac{1}{2}}(L)~\hat{v}(t) + {\rm i}(1-\eta(L))^{\frac{1}{2}}~\hat{a}(t_r). \label{eqn:vL} \end{equation} The operators $\hat{a}(t)$ and $\hat{v}(t)$ represent the initial single guided and environment (bath) modes respectively, while $\hat{a}_L(t)$ and $\hat{v}_L(t)$ are correspondingly the modes after the state propagates a distance $L$. A retarded time, $t_r$, takes into account the time for a state to propagate the distance $L$ and the loss factor, $\eta(L)$, represents the amount of loss experienced during the propagation. In the next sections, loss is introduced into observables by substituting $\hat{a}(t) \rightarrow \hat{a}_L(t)$, $\hat{v}(t) \rightarrow \hat{v}_L(t)$, and similarly for their Hermitian conjugates, with the environment taken to be in the vacuum state initially and traced out finally. The loss factor, $\eta(L) = \mathrm{e}^{2{\rm i} k_r(\omega)L}\mathrm{e}^{-k_i(\omega)L}$, is approximately \begin{equation} \eta(L) \simeq \mathrm{e}^{2{\rm i} [k_r(\omega_0)L - \omega_0 L/v_g(\omega_0)]}~\mathrm{e}^{-k_i(\omega_0)L} \label{eqn:etaL} \end{equation} for a narrowband input state. The amplitude and phase of $\eta(L)$ are identified as $|\eta(L)| = \exp(-k_i(\omega_0)L)$ and $\varphi_\eta = 2[k_r(\omega_0)L - \omega_0 L/v_g(\omega_0)]$, respectively. The retarded time is \begin{equation} t_r = t - \frac{L}{v_g(\omega_0)}, \label{eqn:tr} \end{equation} where $v_g(\omega_0) = d\omega/dk|_{\omega_0}$ is the wavepacket group velocity. \section{Photon Statistics} We start our investigation of continuous-mode PAC states by studying their photon statistics. In particular, the photon number distribution and second-order correlation function are considered, and we study the impact of temporal mismatch and loss on these statistical properties.
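For the numerical examples that follow, it is convenient to relate $|\eta(L)|$ to propagation distance. The sketch below is a rough companion, not the paper's code: it assumes the narrowband amplitude $|\eta(L)| = \exp(-k_i(\omega_0)L)$ given above, and back-fits an illustrative attenuation constant for the plasmonic nanowire from one of the $(\eta, L)$ pairs quoted in the caption of Fig.~\ref{fig2}.

```python
import math

def loss_factor(L, k_i):
    """Narrowband loss amplitude |eta(L)| = exp(-k_i * L)."""
    return math.exp(-k_i * L)

def distance_for_loss(eta_target, k_i):
    """Invert the loss model: distance L at which |eta(L)| = eta_target."""
    return -math.log(eta_target) / k_i

# Illustrative attenuation constant (per micrometre), back-fitted from
# |eta| = 0.5 at L = 0.83 um for the plasmonic nanowire quoted in Fig. 2.
k_i_nanowire = math.log(2.0) / 0.83
```

With this single calibration, the other caption values follow, e.g. $|\eta|=0.25$ at $L\simeq1.66~\mu$m and $|\eta|\simeq0.75$ at $L=0.35~\mu$m.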
Both are experimentally relevant for the characterization of PAC states~\cite{Zavatta04,Zavatta05,Barbieri10}. \subsection{Photon number distribution} The first property we investigate is the photon number distribution $P_n$, its mean $\braket{n}$, and variance $(\Delta n)^2$. We begin by deriving the expression for $P_n$ and then describe its modification when propagation loss is included. These expressions are then evaluated for PAC states with arbitrary temporal profiles for the combined single-photon and coherent state. Due to the composite nature (ambiguity in the pulse profile) of PAC states, we first calculate the probability density $P_n\big(\{t_i\}_{i=1}^{n}\big)$ by projecting out the transient number state defined as \begin{equation} \ket{\{1_{t_i}\}_{i=1}^{n}} = \frac{1}{\sqrt{\cal N}} \prod_{i=1}^{n} \atd[i]\ket{0}, \label{eqn:nt} \end{equation} (see Appendix~\ref{appendix: CMF} for further details) giving, \begin{equation} P_n\big(\{t_i\}_{i=1}^{n}\big) = |\braket{\{1_{t_i}\}_{i=1}^{n}|\psi}|^2. \label{eqn:Pdensity} \end{equation} The projection selects out the probability amplitude of the $n$-photon eigenstate in the state $\ket{\psi}$, which may be in a superposition of different photon numbers. This approach allows us to avoid specifying the profile of the state $\ket{\psi}$. Integrating the probability density then gives the probability distribution, \begin{equation} P_n = \int dt_1\cdots dt_n~P_n\big(\{t_i\}_{i=1}^{n}\big). \label{eqn:Pdist} \end{equation} Eq.~\eqref{eqn:Pdist} is valid for photon numbers $n \geq 1$, as Eq.~\eqref{eqn:Pdensity} yields the probability $P_0$ (and not the density) when the vacuum state $\ket{0}$ is projected out. For consistency, we restrict Eq.~\eqref{eqn:Pdensity} to $n\geq1$, and treat the vacuum state separately using \begin{equation} P_0 = 1 - \sum_{n=1}^{\infty} P_n. 
\label{eqn:P0} \end{equation} Introducing loss during propagation requires that we project out the guided mode state $\ket{\{1_{t_i}\}_{i=1}^n}_g$, as well as trace out the environment states $\ket{\{1_{t_i}\}_{i=n+1}^{m}}_e \forall~m\geq n$. As an example, we consider a guided mode state having $m \geq n$ photons present, $m-n$ of which can be lost to the environment. In the Heisenberg picture the projectors evolve, while the state remains the same. We denote the initial state as $\rho=\ketbra{\Psi}{\Psi}$, where $\ket{\Psi}=\ket{\psi}_g\ket{0}_e$, and the projectors as $\hat{P}_{g,L}=\ketbra{\{ 1_{t_i}\}_{i=1}^{n}}{\{ 1_{t_i}\}_{i=1}^{n}}$ and $\hat{P}_{e,L}=\ketbra{\{ 1_{t_i}\}_{i=n+1}^{m}}{\{ 1_{t_i}\}_{i=n+1}^{m}}$, which are constructed using the definition of the transient number state in Appendix~\ref{appendix: CMF} and the replacement $\hat{a}(t)\to\hat{a}_L(t)$ and $\hat{v}(t)\to\hat{v}_L(t)$ to account for the evolution (propagation). The joint probability density of having $n$ photons in the guided mode and $m-n$ in the environment after a propagation distance $L$ is given by \begin{equation} P_{n,m-n}\big(\{ t_i\}_{i=1}^{m},L\big)={\rm Tr}\big[ \hat{P}_{g,L} (\{ t_i\}_{i=1}^{n}) \otimes \hat{P}_{e,L} (\{ t_i\}_{i=n+1}^{m})\rho \big]. \end{equation} We then have the expression \begin{eqnarray} && P_{n,m-n}\big(\{t_i\}_{i=1}^m,L\big) = \frac{1}{n!(m-n)!} \times \nonumber \\ && \qquad \qquad \qquad \big|\bra{0}_e\bra{0}_g\prod_{j=n+1}^{m}\hat{v}_L(t_j)\prod_{i=1}^{n}\hat{a}_L(t_i)\ket{\psi}_g\ket{0}_e\big|^2, \nonumber \label{eqn:PnmtL} \end{eqnarray} in which the factorials arise from the normalisation of the transient number states (see Appendix~\ref{appendix: CMF}). Next, the expressions for $\hat{a}_L(t)$ and $\hat{v}_L(t)$ from Eqs. (\ref{eqn:aL}) and (\ref{eqn:vL}) are used, taking into account that the environment is initially in a vacuum so that the expectation value of all terms with $\hat{v}$ vanishes. 
The photon number probability density is then \begin{equation} P_{n,m-n}\big(\{t_i\}_{i=1}^m,L\big) = {|\eta(L)|}^n{|1-\eta(L)|}^{m-n}~\binom{m}{n}~ P_m\big(\{t_{r_i}\}_{i=1}^m\big). \label{eqn:PnmdensityL} \end{equation} Integrating with respect to all times gives the probability distribution \begin{equation} P_{n,m-n}\big(L\big) = {|\eta(L)|}^n{|1-\eta(L)|}^{m-n}~\binom{m}{n}~P_m. \label{eqn:PnmdistL} \end{equation} In general, the initial guided state may be a superposition of number states, in which case the environment will evolve into a superposition state. This necessitates tracing out all the environment states. Thus, the probability of exactly $n$ photons remaining in the guided mode after propagating a distance $L$ is then simply Eq.~\eqref{eqn:PnmdistL} summed for all $m \geq n$; \begin{equation} P_n(L) = {|\eta(L)|}^n~\sum_{m=n}^{\infty}{|1-\eta(L)|}^{m-n}~\binom{m}{n}~P_m. \label{eqn:PnL} \end{equation} Here, $P_m\big(\{t_{r_i}\}_{i=1}^m\big)$ and $P_m$ are the lossless probability density (retarded) and probability distribution defined in Eqs.~\eqref{eqn:Pdensity} and \eqref{eqn:Pdist}, respectively. The above formula in Eq.~(\ref{eqn:PnL}) is a continuous-mode generalisation of the single-mode result that applies a Bernoulli transformation to a state's probability distribution in order to account for loss~\cite{Gardiner00}. We are now in a position to apply the continuous-mode probability density and its associated probability distribution to the PAC states we consider. We allow $n=0,1,2,\cdots$ in Eq.~\eqref{eqn:PnL}, but make the restriction that $0<|\eta(L)|<1$. The boundaries of $|\eta(L)|$ are excluded due to the occurrence of $0^0$ when $n = 0$ and $n = m$. When $|\eta(L)|=0$ we have $P_n(L) = \delta_{0,n}$ for any initial state, since all photons will have been lost. When $|\eta(L)| = 1$, the lossless distribution derived for a specific initial state from Eq.~\eqref{eqn:Pdensity} is used.
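Eq.~\eqref{eqn:PnL} is straightforward to evaluate numerically. The sketch below is our own illustration (the infinite sum is truncated at the largest photon number carried by the input distribution, and the loss branch weight is taken as $1-|\eta(L)|$):

```python
import math

def lossy_distribution(P, eta_abs):
    """Apply the Bernoulli transformation of Eq. (PnL):
    P_n(L) = |eta|^n * sum_{m >= n} (1 - |eta|)^(m-n) * C(m, n) * P_m,
    where P = [P_0, ..., P_mmax] is the lossless distribution and
    eta_abs is |eta(L)| with 0 < eta_abs < 1."""
    m_max = len(P) - 1
    return [
        eta_abs**n * sum(
            (1.0 - eta_abs)**(m - n) * math.comb(m, n) * P[m]
            for m in range(n, m_max + 1)
        )
        for n in range(m_max + 1)
    ]
```

As a consistency check, a four-photon Fock state input reduces to the expected binomial distribution $\binom{4}{n}|\eta|^n(1-|\eta|)^{4-n}$.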
\begin{figure*}[t] \centering \includegraphics[width=17.8cm]{fig2.pdf} \caption{Photon number probability distributions for a continuous-mode PAC state as it propagates in a waveguide and undergoes loss. Top row: a PAC state with $n_\alpha=3$, middle row: a coherent state with $n_\alpha=4$, and bottom row: a Fock state with $n=4$. The loss increases from left to right and is shown above each column. The corresponding propagation distances, $L$, for a plasmonic nanowire (n), stripe (s), and dielectric fibre (d), as shown in Fig.~\ref{fig1}~(c), are $L_{\rm n}=L_{\rm s}=L_{\rm d}=0$ ($\eta=1$); $L_{\rm n}=0.35\mu$m, $L_{\rm s}=4.32\mu$m and $L_{\rm d}=2.88$km ($\eta=0.75$); $L_{\rm n}=0.83\mu$m, $L_{\rm s}=10.40\mu$m and $L_{\rm d}=6.93$km ($\eta=0.5$); $L_{\rm n}=1.66\mu$m, $L_{\rm s}=20.79\mu$m and $L_{\rm d}=13.86$km ($\eta=0.25$); $L_{\rm n}=L_{\rm s}=L_{\rm d}=\infty$ ($\eta=0$). The mean photon number and variance are also shown for each distribution as insets.} \label{fig2} \end{figure*} We start by calculating the photon number probability distribution, mean, and variance for PAC states. As outlined earlier, we begin with the probability density, \begin{equation} P_n\big(\{t_i\}_{i=1}^{n}\big)=\frac{|N(\tau)|}{n!}|\braket{0|\prod_{i=1}^{n}\hat{a}(t_i)\hat{a}_\xi^\dagger|\{\alpha\}}|^2, \end{equation} which has been expanded from Eq.~\eqref{eqn:Pdensity} using the state definitions in Eqs.~\eqref{eqn:nt} and \eqref{eqn:pacs}. 
Writing the operator product in normal order using Eq.~\eqref{eqn:atnaxi} and then carrying out the operations we obtain, \begin{equation} \begin{split} P_n\big(\{t_i\}_{i=1}^{n}\big) =& \frac{|N(\tau)|}{n!}\Big|\sum_{k=1}^{n}\Big[\xi(t_k+\tau)\prod_{\stackrel{i=1}{i\neq k}}^{n}\alpha(t_i)\Big]\Big|^2\mathrm{e}^{-n_\alpha}\\ = &\frac{|N(\tau)|}{n!}\Bigg(\sum_{k=1}^{n}{|\xi(t_k+\tau)|}^2~\prod_{\substack{i=1\\i\neq k}}^{n}{|\alpha(t_i)|}^2 ~+\\& ~\sum_{k=1}^{n}\sum_{\substack{k'=1\\k'\neq k}}^{n}~\xi(t_k+\tau)~\alpha^*(t_k)~\xi^*(t_{k'}+\tau)~\alpha(t_{k'})\\& \times\prod_{\substack{i=1\\i\neq k,k'}}^{n}~{|\alpha(t_i)|}^2\Bigg)\mathrm{e}^{-n_\alpha}. \end{split} \label{eqn:Pdensity_pacs} \end{equation} The modulus-squared factor has been separated into squared and mixed terms. In this form, the integral of Eq.~\eqref{eqn:Pdensity_pacs} with respect to all times simplifies easily. Performing the integration gives the photon number probability distribution, \begin{equation} \begin{split} P_n =&~\frac{|N(\tau)|}{n!}\Bigg(\sum_{k=1}^{n}\int dt_k~{|\xi(t_k+\tau)|}^2~\prod_{\substack{i=1\\i\neq k}}^{n}\int dt_i{|\alpha(t_i)|}^2 ~+\\& ~\sum_{k=1}^{n}\sum_{\substack{k'=1\\k'\neq k}}^{n}~\int dt_k~\xi(t_k+\tau)~\alpha^*(t_k)\int dt_{k'}~\xi^*(t_{k'}+\tau)~\alpha(t_{k'})\\& \times\prod_{\substack{i=1\\i\neq k,k'}}^{n}~\int dt_i~{|\alpha(t_i)|}^2\Bigg)\mathrm{e}^{-n_\alpha}. \end{split} \end{equation} Substituting Eqs.~\eqref{eqn:modsqdxi}, \eqref{eqn:modsqdalpha}, \eqref{eqn:sigma} and its complex conjugate for the integrals, and simplifying we obtain \begin{equation} P_n = |N(\tau)|~\frac{n_\alpha^{n-1}}{(n-1)!}~\mathrm{e}^{-n_\alpha}~\Bigg(1 + (n - 1)~\frac{{|\sigma(\tau)|}^2}{n_\alpha}\Bigg). \label{eqn:Pn_pacs} \end{equation} In the limit of perfect temporal overlap, {\it i.e.} $\tau=0$ and $\Omega_1=\Omega$, we recover the single-mode result given in Ref.~\cite{Zavatta05}. 
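Eq.~\eqref{eqn:Pn_pacs} can be checked numerically for normalisation and moments. The sketch below is an illustration under our assumption that the normalisation is $|N(\tau)| = 1/(1+|\sigma(\tau)|^2)$, consistent with the perfect-overlap value $|N||\sigma|^2 = n_\alpha/(1+n_\alpha)$ quoted later in the text:

```python
import math

def pac_distribution(n_alpha, sigma_sq, n_max=60):
    """Lossless PAC photon number distribution of Eq. (Pn_pacs), for n >= 1.
    sigma_sq is |sigma(tau)|^2; |N| = 1/(1 + sigma_sq) is an assumption.
    P_0 = 0: the added photon guarantees at least one photon is present."""
    N_abs = 1.0 / (1.0 + sigma_sq)
    P = [0.0]
    for n in range(1, n_max + 1):
        P.append(
            N_abs * n_alpha**(n - 1) / math.factorial(n - 1)
            * math.exp(-n_alpha)
            * (1.0 + (n - 1) * sigma_sq / n_alpha)
        )
    return P
```

With this normalisation the distribution sums to one, and for $n_\alpha = |\sigma|^2 = 3$ its mean reproduces the lossless value $\langle n\rangle = 1 + n_\alpha + |N||\sigma|^2 = 4.75$.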
To obtain an expression for the distribution after propagation, we substitute Eq.~\eqref{eqn:Pn_pacs} into Eq.~\eqref{eqn:PnL} and find that the series converges, yielding \begin{equation} \begin{split} P_n(L) = &~{|\eta(L)|}^n~|1 - \eta(L)|~|N(\tau)|~\mathrm{e}^{-|\eta(L)|n_\alpha}\frac{{n_\alpha}^n}{n!}~\\ &\times\Big(1 + \frac{2~n~|\sigma(\tau)|^2}{n_\alpha} + |1 - \eta(L)|~|\sigma(\tau)|^2\Big)\\ &+{|\eta(L)|}^n~|N(\tau)|~\mathrm{e}^{-|\eta(L)|n_\alpha}\frac{{n_\alpha}^{n-1}}{(n-1)!} \\ &\times \Big(1 + (n-1)\frac{|\sigma(\tau)|^2}{n_\alpha} \Big). \end{split} \label{eqn:PnL_pacs} \end{equation} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{fig3.pdf} \caption{Sub-Poissonian behaviour of continuous-mode PAC states using the ratio of the variance to mean of the number state distribution, $(\Delta n)^2/\langle n\rangle$. The added single-photon pulse has a temporal offset $\tau$ in units of $1/\Omega$ and bandwidth $\Omega_1$ in units of $\Omega$, where $\Omega$ is the bandwidth of the coherent state pulse. The central frequency $\omega_0$ has no impact on any expressions plotted; however, its value should be at least $10\Omega$ in order to satisfy the narrowband approximation. (a) A density plot showing the variance to mean ratio as $\tau$ and $\Omega_1$ are varied. In (b) and (c), cross sections of the plot in (a) are shown for the PAC state (solid, blue). Also included in these cross sections are the ratios for the number state (solid, orange) and coherent state (dashed, green). The number state and coherent state ratios are independent of $\tau$, $\Omega_1$, and $n_\alpha$, but are shown as a reference. The parameters not being varied in these plots are $n_\alpha=3$ and $|\eta|=1$ (no loss).} \label{fig3} \end{figure} In Fig.~\ref{fig2} we show histograms of the photon number distribution for a PAC state (first row) with $\tau=0$, $\Omega_1 = \Omega$ and $n_\alpha=3$.
The value of $n_\alpha$ is chosen as an example as it gives the greatest amount of quadrature squeezing in the single-mode treatment~\cite{Zavatta04}. The histograms for various values of $|\eta(L)|$ clearly show that, with increasing loss, the distribution shifts and skews toward lower photon numbers. A coherent state with $n_\alpha=4$ and a four-photon Fock state are also shown for comparison in the middle and bottom rows, respectively. The analytical forms of the probability distributions for these continuous-mode states are given in Appendix \ref{appendix:P}. The histograms highlight that the PAC state has an increased mean and a narrower distribution (variance) initially compared to the coherent state with the same mean photon number and is therefore sub-Poissonian~\cite{Loudon00}. One can clearly see how the mean and variance of the PAC state are affected by loss compared to the other states. The figure illustrates the basic number statistics of the continuous-mode PAC state when there is perfect temporal and bandwidth overlap. We now study the effect of imperfect temporal overlap, focussing on the sub-Poissonian nature of PAC states, which is not clearly demonstrated by photon number distribution histograms. Thus, we extract the mean of the distribution in Eq.~\eqref{eqn:PnL_pacs} using $\sum_{n=0}^{\infty}nP_n(L)$, which converges to the result \begin{equation} \langle n \rangle = |\eta(L)|\Big(1 + n_\alpha + {|\sigma(\tau)|}^2~|N(\tau)|\Big). \label{eqn:Pn_mean} \end{equation} Similarly, using $\sum_{n=0}^{\infty}(n - \braket{n})^2P_n(L)$, the variance is given by \begin{equation} \begin{split} {(\Delta n)}^2 = &~\langle n \rangle - \Big(1 - {|\sigma(\tau)|}^4~{|N(\tau)|}^2\Big){|\eta(L)|}^2.
\end{split} \label{eqn:Pn_variance} \end{equation} Again, in the limit of perfect temporal overlap and no loss, it is straightforward to verify that the above expressions give the single-mode results for the mean and variance of the photon number distribution~\cite{Agarwal91}. With perfect overlap and in the limit of large $n_\alpha$, ${|\sigma(\tau)|}^4~{|N(\tau)|}^2 \rightarrow 1$. Thus, for a PAC state whose coherent component has a large mean photon number, the variance is negligibly close to the mean. The second term, which reduces the variance, is quadratic in $|\eta(L)|$, while the mean is linear. Therefore, as loss increases ($|\eta(L)|$ decreases), the reduction of the variance decays faster than the mean, and the variance to mean ratio increases linearly. In Fig.~\ref{fig3}~(a), the ratio of the variance to the mean is plotted as the temporal offset $\tau$ and bandwidth $\Omega_1$ of the added single photon in the PAC state are varied (with $n_\alpha=3$). The loss is set to zero for the moment. In the cross sections in Figs.~\ref{fig3}~(b) and (c) we have included the results for the coherent state with arbitrary mean photon number, which is Poissonian (variance equal to the mean) and serves as an upper bound for sub-Poissonian states (variance less than the mean). Also included is an $n$-photon state which, having a variance of $(\Delta n)^2 = 0$, is an ideal sub-Poissonian state. As can be seen in Fig.~\ref{fig3}~(a), the ratio $(\Delta n)^2/\langle n \rangle < 1$ for all parameter ranges, thus demonstrating the sub-Poissonian nature of PAC states. In Fig.~\ref{fig3}~(b), we see that the PAC state is always sub-Poissonian regardless of the temporal overlap. However, the variance (and ratio) decreases with better overlap, with the lowest variance and ratio at $\tau = 0$. The fact that the ratio is always less than one regardless of the temporal offset of the single photon is due to the populations being derived from integrated values.
For example, adding a photon completely out of time (or even incoherently) with the coherent state leads to the $P_n$ distribution shifting up by 1 ($n \to n+1$), which increases the mean, but leaves the variance unchanged. This results in a variance to mean ratio of $n_\alpha/(n_\alpha+1)$, which is always less than 1 and gives the asymptotic value of the ratio. In Fig.~\ref{fig3}~(b) this is given by $3/4=0.75$ for large $\tau$. In Fig.~\ref{fig3}~(c), the PAC state is again always sub-Poissonian regardless of the bandwidth mismatch, with a minimum value at $\Omega_1=\Omega$, corresponding to perfect spectral overlap. It is therefore clear that sub-Poissonian behaviour alone is not enough to completely determine the quantum, or nonclassical, character of a continuous-mode PAC state, and further analysis of its properties is needed. \subsection{Second-order correlation function} The second property of continuous-mode PAC states we investigate is the second-order correlation function, $g^{(2)}$, which allows us to go beyond basic photon number statistics and study temporal correlations in the statistics of a state, in particular the correlation between the intensity of the field at time $t_1$ and at time $t_2$ at a fixed position. A value of less than unity at zero delay ($t_1=t_2$) is only possible for a nonclassical state and therefore we can use it to determine nonclassicality~\cite{Loudon00}. The second-order correlation function is given by~\cite{Loudon00,Gardiner00} \begin{figure*}[t] \centering \includegraphics[width=17.5cm]{fig5.pdf} \caption{Second-order correlation function $g^{(2)}(t_1,t_2)$ of the continuous-mode PAC state for various cases of temporal overlap between the added photon and coherent state. (a) Perfect temporal overlap between the photon and coherent state. (b) The bandwidth of the single photon is the same as the coherent state ($\Omega_1=\Omega$), but the single-photon pulse is shifted in time by $\tau=3/\Omega$.
(c) The single photon and coherent state have zero delay, but the bandwidth of the single photon is larger than that of the coherent state (pulse duration shorter) by a factor of three. (d) The bandwidth of the single photon is three times larger than that of the coherent state and its pulse is shifted in time by $\tau=3/\Omega$. In all cases the second-order correlation function has a value at zero time delay ($t_1=t_2$) of less than 1, confirming nonclassicality of the state.} \label{fig5} \end{figure*} \begin{equation} g^{(2)}(t_1,t_2) = \frac{\braket{\hat{a}^\dagger(t_1)\hat{a}^\dagger(t_2)\hat{a}(t_2)\hat{a}(t_1)}}{\braket{\hat{a}^\dagger(t_1)\hat{a}(t_1)}\braket{\hat{a}^\dagger(t_2)\hat{a}(t_2)}}. \label{eqn:g2t} \end{equation} In this form, $g^{(2)}(t_1,t_2)$ quantifies the second-order correlation of the field between the pair of times $\{ t_1,t_2\}$. However, when pulses are involved, as in an experiment, it is also useful to consider a `measured' second-order correlation function at zero time delay, given by~\cite{Loudon00} \begin{equation} g^{(2)}[0] = \frac{\int dt_1 dt_2\braket{\hat{a}^\dagger(t_1)\hat{a}^\dagger(t_2)\hat{a}(t_2)\hat{a}(t_1)}}{\int dt_1\braket{\hat{a}^\dagger(t_1)\hat{a}(t_1)}\int dt_2\braket{\hat{a}^\dagger(t_2)\hat{a}(t_2)}}. \label{eqn:g20} \end{equation} The measured version can be obtained from the detection events in an experiment, where detectors monitor photocounts over a fixed period of time, $T$, which encompasses the pulse duration. We will consider both $g^{(2)}(t_1,t_2)$ and $g^{(2)}[0]$. Further details about these two types of second-order correlation and how they are related can be found in Ref.~\cite{Fischer16}.
We start by identifying the constituent quantities of Eqs.~\eqref{eqn:g2t} and \eqref{eqn:g20}, for the case of no loss, as the photon flux, \begin{equation} f_1(t) = \braket{\hat{a}^\dagger(t)\hat{a}(t)}, \end{equation} and two-time coincidence rate, \begin{equation} f_2(t_1, t_2) = \braket{\hat{a}^\dagger(t_1)\hat{a}^\dagger(t_2)\hat{a}(t_2)\hat{a}(t_1)}. \end{equation} Following the same procedure as was done for the photon number distribution, we arrive at the PAC state photon flux, \begin{equation} \begin{split} f_1(t) =&~ |N(\tau)|\braket{\{\alpha\}|\hat{a}_\xi\hat{a}^\dagger(t)\hat{a}(t)\hat{a}_\xi^\dagger|\{\alpha\}}\\ =&~~|\alpha(t)|^2 + 2|N(\tau)||\sigma(\tau)||\xi(t+\tau)||\alpha(t)| \\ & \qquad + |N(\tau)||\xi(t+\tau)|^2, \end{split} \label{eqn:f1t_pacs} \end{equation} and coincidence rate, \begin{equation} \begin{split} f_2(t_1,t_2) &=~ |N(\tau)|\Big[|\xi(t_1+\tau)|^2|\alpha(t_2)|^2 + |\xi(t_2+\tau)|^2|\alpha(t_1)|^2\\ &+~ |\alpha(t_1)|^2|\alpha(t_2)|^2/|N(\tau)|\\ &+~ 2|\xi(t_1 + \tau)||\xi(t_2 + \tau)||\alpha(t_1)||\alpha(t_2)|\\ &+~ 2|\sigma(\tau)||\xi(t_1 + \tau)||\alpha(t_1)||\alpha(t_2)|^2\\ &+~ 2|\sigma(\tau)||\xi(t_2 + \tau)||\alpha(t_2)||\alpha(t_1)|^2\Big]. \end{split} \label{eqn:f2t_pacs} \end{equation} The second-order correlation function, $g^{(2)}(t_1,t_2)$, is then \begin{equation} g^{(2)}(t_1, t_2) = \frac{f_2(t_1, t_2)}{f_1(t_1) f_1(t_2)}. \label{eqn:g2t_pacs} \end{equation} To include the effects of propagation and loss the operators are transformed, as carried out in the previous section, using $\hat{a}(t)\to\hat{a}_L(t)$. We then have that $f_1(t_i) \to |\eta (L)| f_1(t_{r,i})$ and $f_2(t_1,t_2) \to |\eta (L)|^2 f_2(t_{r,1},t_{r,2})$, where $t_{r,i}=t_i - L/v_g$ is a retarded time. Thus, $g^{(2)}(t_1,t_2)$ is unaffected by loss as the loss factors cancel in the numerator and denominator, and the resulting propagation only shifts the time arguments. 
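The flux and coincidence rate above are simple to evaluate for concrete pulse shapes. The sketch below is our own illustration of Eqs.~\eqref{eqn:f1t_pacs}--\eqref{eqn:g2t_pacs}: Gaussian envelopes, a numerically computed overlap, and the assumed normalisation $|N| = 1/(1+|\sigma|^2)$ are choices made here, not prescriptions from the text.

```python
import numpy as np

def pac_g2(t1, t2, n_alpha=3.0, omega=1.0, omega1=1.0, tau=0.0):
    """g^(2)(t1, t2) of a PAC state, Eqs. (f1t_pacs)-(g2t_pacs),
    for Gaussian envelopes (an assumed parametrisation)."""
    def env(t, bw):  # normalised Gaussian envelope, integral of |env|^2 = 1
        return (bw**2 / (2.0 * np.pi))**0.25 * np.exp(-bw**2 * t**2 / 4.0)

    t = np.linspace(-50.0, 50.0, 200001)
    dt = t[1] - t[0]
    sig = np.sum(env(t + tau, omega1) * np.sqrt(n_alpha) * env(t, omega)) * dt
    N = 1.0 / (1.0 + sig**2)  # assumed normalisation |N(tau)|

    xi = lambda s: env(s + tau, omega1)                # |xi(s + tau)|
    al = lambda s: np.sqrt(n_alpha) * env(s, omega)    # |alpha(s)|

    def f1(s):  # photon flux, Eq. (f1t_pacs)
        return al(s)**2 + 2.0 * N * sig * xi(s) * al(s) + N * xi(s)**2

    def f2(s1, s2):  # two-time coincidence rate, Eq. (f2t_pacs)
        return N * (xi(s1)**2 * al(s2)**2 + xi(s2)**2 * al(s1)**2
                    + al(s1)**2 * al(s2)**2 / N
                    + 2.0 * xi(s1) * xi(s2) * al(s1) * al(s2)
                    + 2.0 * sig * xi(s1) * al(s1) * al(s2)**2
                    + 2.0 * sig * xi(s2) * al(s2) * al(s1)**2)

    return f2(t1, t2) / (f1(t1) * f1(t2))
```

For perfect overlap with $n_\alpha = 3$ these expressions give $g^{(2)}(0,0) = 21/4.75^2 \approx 0.93 < 1$, consistent with the sub-unity diagonal seen in Fig.~\ref{fig5}.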
In Fig.~\ref{fig5} we show $g^{(2)}(t_1,t_2)$ for various cases of temporal overlap between the added photon and coherent state, with $n_\alpha=3$. Fig.~\ref{fig5}~(a) shows the case of perfect temporal overlap between the photon and coherent state. The correlation function is rotationally symmetric in time due to the wavepackets of the single-photon and coherent state being rotationally symmetric and having perfect overlap. Fig.~\ref{fig5}~(b) shows the case where the bandwidth of the single photon is the same as the coherent state ($\Omega_1=\Omega$), but the single-photon pulse is shifted in time by $\tau=3/\Omega$. The function is no longer rotationally symmetric due to a time offset in the wavepackets. Fig.~\ref{fig5}~(c) shows the case where the single photon and coherent state have zero time delay, but the bandwidth of the single photon is three times larger than that of the coherent state (shorter duration). Finally, Fig.~\ref{fig5}~(d) shows the case where the bandwidth of the single photon is three times larger than that of the coherent state and its pulse is shifted in time by $\tau=3/\Omega$. In all cases, the second-order correlation function has a value at zero time delay ($t_1=t_2$, {\it i.e.} along the diagonal) of less than 1, confirming nonclassicality of the state. This is in contrast to a coherent state pulse, which gives $g^{(2)}(t_1,t_2)=1~\forall~t_1,t_2$, and a single-photon pulse, which gives $g^{(2)}(t_1,t_2)=0 ~\forall ~t_1,t_2$~\cite{Loudon00}. The fact that $g^{(2)}(t_1,t_2)$ is constant over all time for these states, even though they are represented by Gaussian pulses, is because $g^{(2)}(t_1,t_2)$ is a ratio of a coincidence rate to a photon flux for wavepacket states that formally extend over all time; the pulse amplitudes in the coincidence rate and flux cancel, leaving a constant value at all times.
On the other hand, $g^{(2)}(t_1,t_2)$ for the PAC state is not uniform like the coherent and single-photon states, but has localised hotspots, corresponding to pairs of times where there is more likely to be a coincidence. These additional coincidences are those between the added single-photon and photons of the coherent state pulse. Thus, the location and shape of the hotspots are determined by the temporal offset and width of the single photon with respect to the coherent state. The value of $g^{(2)}(t_1,t_2)$ remains approximately zero outside these hotspots due to the relative amplitudes of the coincidence rate and flux. To calculate the measured version of the second-order correlation, $g^{(2)}[0]$, we must perform the integrations. For no loss, integrating Eq.~\eqref{eqn:f1t_pacs} gives the mean photon number \begin{equation} f_1 = 1 + n_\alpha + |N(\tau)||\sigma(\tau)|^2, \label{eqn:f1_pacs} \end{equation} which matches Eq.~(\ref{eqn:Pn_mean}) for $|\eta(L)|=1$. Integrating Eq.~\eqref{eqn:f2t_pacs} over both times gives the average number of coincidences \begin{equation} \begin{split} f_2 = n_\alpha^2 + 2n_\alpha + 2|N(\tau)||\sigma(\tau)|^2\big(n_\alpha + 1\big). \end{split} \label{eqn:f2_pacs} \end{equation} Taking the ratio $f_2/f_1^2$ then gives \begin{equation} \begin{split} g^{(2)}[0] = & ~\frac{n_\alpha^2 + 2n_\alpha + 2|N(\tau)||\sigma(\tau)|^2\big(n_\alpha + 1\big)}{\Big(1 + n_\alpha + |N(\tau)||\sigma(\tau)|^2\Big)^2}\\ = & ~1 - \frac{1 + |N(\tau)|^2|\sigma(\tau)|^4}{\Big(1 + n_\alpha + |N(\tau)||\sigma(\tau)|^2\Big)^2}. \end{split} \label{eqn:g20_pacs} \end{equation} When loss is included, the loss factors cancel for $f_1^2$ and $f_2$, as in the case of $g^{(2)}(t_1,t_2)$. Note that in this case there is no retarded time as all times have been integrated. From Eq.~\eqref{eqn:g20_pacs}, it is clear that $g^{(2)}[0] < 1$ for all parameters as $|N(\tau)| > 0$ and $|\sigma(\tau)| \geq 0$ always. 
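The closed form in Eq.~\eqref{eqn:g20_pacs} is easy to explore numerically. The sketch below is an illustration, again under our assumed normalisation $|N| = 1/(1+|\sigma|^2)$:

```python
def g2_zero(n_alpha, sigma_sq):
    """Measured second-order correlation g^(2)[0] of Eq. (g20_pacs).
    sigma_sq is |sigma(tau)|^2; |N| = 1/(1 + sigma_sq) is an assumption."""
    N = 1.0 / (1.0 + sigma_sq)
    numerator = n_alpha**2 + 2.0 * n_alpha + 2.0 * N * sigma_sq * (n_alpha + 1.0)
    denominator = (1.0 + n_alpha + N * sigma_sq)**2
    return numerator / denominator
```

Both algebraic forms of Eq.~\eqref{eqn:g20_pacs} agree, the value stays below 1, and it tends to 1 from below for large $n_\alpha$.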
Furthermore, the second term is always less than $1$ for non-zero $n_\alpha$. In the limit of large $n_\alpha$, the second term approaches zero, resulting in a $g^{(2)}[0]$ value approaching $1$, as would be expected with a strong coherent state. In Fig.~\ref{fig6}~(a) we show $g^{(2)}[0]$ as the temporal offset, $\tau$, and bandwidth, $\Omega_1$, are varied. The mean photon number of the coherent state within the PAC state is $n_\alpha=3$ as an example. In Figs.~\ref{fig6}~(b) and (c) we show cross sections corresponding to $\Omega_1=\Omega$ and $\tau=0$, respectively. Also shown are values of $g^{(2)}[0]$ for a coherent state (arbitrary mean photon number). One can see that the value of $g^{(2)}[0]$ is less than 1 for the PAC state for all values of $\tau$ and $\Omega_1$, confirming its nonclassicality. The study of the second-order correlation function complements that of the photon number statistics for determining the nonclassical character of a continuous-mode PAC state. However, both the photon number statistics (sub-Poissonian behaviour) and the second-order correlation do not completely characterize a given PAC state, even though they highlight its nonclassicality well. \begin{figure}[t] \centering \includegraphics[width=8.6cm]{fig6.pdf} \caption{Measured second-order correlation function, $g^{(2)}[0]$, of continuous-mode PAC states. The added single-photon pulse has a temporal offset $\tau$ in units of $1/\Omega$ and bandwidth $\Omega_1$ in units of $\Omega$, where $\Omega$ is the bandwidth of the coherent state pulse. (a) Density plot showing $g^{(2)}[0]$ as $\tau$ and $\Omega_1$ are varied. In (b) and (c), cross sections of the plot in (a) are shown for the PAC state (solid, blue). Also included in these cross sections are $g^{(2)}[0]$ values for the coherent state (dashed, green). The values are independent of $\tau$, $\Omega_1$, and $n_\alpha$, but are shown as a reference.
The value of the mean photon number for the coherent state in the PAC state is $n_\alpha=3$.} \label{fig6} \end{figure} \section{Quadratures} We now move on from studying the basic statistics of the field and derive the mean and variance of the quadrature operator. The aim is to determine if, and under what conditions, the continuous-mode PAC state $\ket{\{\alpha\},1_\xi}$ may be quadrature squeezed. Such a state would have one quadrature with a variance less than that of a coherent state quadrature, which is another signature of nonclassicality~\cite{Lvovsky15}. The PAC state is formally described as a non-Gaussian state~\cite{Tan19}, as its Wigner function is not a Gaussian distribution~\cite{Agarwal91}. As such, the mean and variance of the quadrature operator do not completely characterize it, unlike for Gaussian states. However, they represent additional physical quantities beyond the basic sub-Poissonian statistics and second-order correlations that can be used to characterize a state's nonclassicality. In general, Gaussian and non-Gaussian states are important in a wide range of applications in quantum information processing~\cite{Tan19}. This is particularly true in quantum sensing, where the squeezed nature of a state gives intra-mode correlations that can be exploited to improve precision measurements of phase~\cite{Knott16}. This has been studied for non-Gaussian states~\cite{Braun14}, where it was found that non-Gaussian states, generated from Gaussian states (e.g. a coherent state) by the subtraction and addition of photons, provide enhanced sensitivity in phase sensing. Thus, the squeezed nature of non-Gaussian states such as PAC states can provide advantages under certain conditions and it is therefore an important aspect to study. The derivations of the mean and variance of the continuous-mode coherent state and number state quadratures are given in Appendix \ref{appendix:Q}. Here, we focus on the PAC state.
We begin with the instantaneous quadrature operator~\cite{Loudon00}, \begin{equation} \hat{X}_\varphi(t) = \frac{1}{2} \Big(~\at \mathrm{e}^{-{\rm i} \varphi(t)} + \atd \mathrm{e}^{{\rm i} \varphi(t)}~\Big), \label{eqn:Xphit} \end{equation} where $\varphi(t)$ is the quadrature phase. Similar to the measured second-order correlation function, this definition is integrated with respect to time, making it suitable in an experimental context where measurements usually span a period of time, $T$. Taking the expectation value of Eq.~\eqref{eqn:Xphit} and then integrating, we obtain the total mean quadrature, \begin{equation} \braket{\hat{X}_\varphi(t,T)} = \int_{t}^{t + T} dt'~\mathrm{Re}\Big\{\braket{\hat{a}(t')} \mathrm{e}^{-{\rm i}\varphi(t')}\Big\}. \label{eqn:Xphi_mean} \end{equation} The conjugate quadrature $\hat{X}_{\varphi+\pi/2}(t, T)$ is obtained by substituting $\varphi(t) \rightarrow \varphi(t) + \pi/2$ in Eq.~\eqref{eqn:Xphit}. The variance of the quadrature operator is given by $\Big(\Delta X_\varphi(t,T)\Big)^2 = \braket{{\hat{X}_\varphi}^2(t,T)} - \braket{\hat{X}_\varphi(t,T)}^2$. This expression becomes simpler to use in the normal-ordered form~\cite{Blow90}: \begin{equation} \Big(\Delta X_\varphi(t,T)\Big)^2 = \frac{T}{4} + \braket{:{\hat{X}_\varphi}^2(t,T):} - \braket{\hat{X}_\varphi(t,T)}^2, \label{eqn:Xphi_var} \end{equation} where \begin{equation} \begin{split} \braket{:{\hat{X}_\varphi}^2(t,T):} = &\frac{1}{2} \int_{t}^{t + T} dt'dt''~\mathrm{Re}\Big\{\braket{\hat{a}(t')\hat{a}(t'')}\mathrm{e}^{-{\rm i} \{\varphi(t') + \varphi(t'')\}} \\ &\qquad+ \braket{\hat{a}^{\dagger}(t'')\hat{a}(t')}\mathrm{e}^{-{\rm i} \{\varphi(t') - \varphi(t'')\}} \Big\}. 
\end{split} \label{eqn:Xphi2moment} \end{equation} The expression in Eq.~(\ref{eqn:Xphi_var}) can be derived by squaring Eq.~\eqref{eqn:Xphit}, then normal-ordering the result using the commutation relation in Eq.~\eqref{eqn:atcomm} and taking the expectation value to obtain $\braket{{\hat{X}_\varphi}^2(t,T)} = T/4 + \braket{:{\hat{X}_\varphi}^2(t,T):}$. We now study the quadrature mean and variance with no propagation loss for PAC states using the above formulas. The mean can be calculated by evaluating Eq.~\eqref{eqn:Xphi_mean} with respect to the state $\ket{\{\alpha\},1_\xi}$, which is expanded using Eq.~\eqref{eqn:pacs}, giving \begin{equation} \braket{\hat{X}_\varphi(t,T)} = |N(\tau)|\int_{t}^{t + T}dt'\mathrm{Re}\Big\{\braket{\{\alpha\}|\hat{a}_\xi\hat{a}(t')\hat{a}_\xi^\dagger|\{\alpha\}} \mathrm{e}^{{\rm i} \varphi(t')}\Big\}. \label{eqn:Xphimean_pacs0} \end{equation} The operator product can be re-ordered as, \begin{equation} \hat{a}_\xi\hat{a}(t')\hat{a}_\xi^\dagger = \xi(t' + \tau)\hat{a}_\xi + \hat{a}(t') + \hat{a}_\xi^\dagger\hat{a}_\xi\hat{a}(t'), \end{equation} using Eqs.~\eqref{eqn:axicom} and \eqref{eqn:atnaxi}. Then carrying out these operations on $\ket{\{\alpha\}}$ using Eqs. \eqref{eqn:eigen} and \eqref{eqn:axieig}, we obtain \begin{equation} \begin{split} \bra{\{\alpha \}}\hat{a}_\xi\hat{a}(t')\hat{a}_\xi^\dagger\ket{\{\alpha\}} =\sigma(\tau)\xi(t' + \tau) + \alpha(t')(1 + |\sigma(\tau)|^2)\\ = \Big(|\sigma(\tau)||\xi(t' + \tau)|+ |\alpha(t')|\big(1 + |\sigma(\tau)|^2\big)\Big)\mathrm{e}^{-{\rm i} \theta(t')}. \end{split} \end{equation} Substituting this into Eq.~\eqref{eqn:Xphimean_pacs0} and taking the real part, gives \begin{equation} \begin{split} \braket{\hat{X}_\varphi(t, T)} =&~|N(\tau)||\sigma(\tau)|\int_{t}^{t + T}dt'~|\xi(t' + \tau)|\cos\big[\theta(t') - \varphi(t')\big]\\ & + \int_{t}^{t + T}dt'~|\alpha(t')|\cos\big[\theta(t') - \varphi(t')\big]. 
\end{split} \label{eqn:Xphimeanpacs} \end{equation} In the same manner, $\braket{:\hat{X}_\varphi^2(t, T):}$ can be derived; \begin{equation} \begin{split} \braket{:\hat{X}_\varphi^2(t, T):} &=~ \braket{\hat{X}_\varphi(t,T)}^2 -{|N(\tau)|}^2{|\sigma(\tau)|}^2 \\ & \times\Bigg[\int_{t}^{t + T}dt'~|\xi(t' + \tau)|\cos\big[\theta(t') - \varphi(t')\big]\Bigg]^2\\ & +~\frac{1}{2}|N(\tau)|\int_{t}^{t+T}dt'dt''|\xi(t' + \tau)||\xi(t'' + \tau)|\\ & \times\cos\big[\theta(t') - \varphi(t') - \theta(t'') + \varphi(t'')\big]. \end{split} \label{eqn:Xphi2moment_pacs} \end{equation} By inserting Eqs.~\eqref{eqn:Xphimeanpacs} and \eqref{eqn:Xphi2moment_pacs} into Eq.~\eqref{eqn:Xphi_var}, we obtain \begin{equation} \begin{split} &\Big(\Delta X_\varphi(t, T)\Big)^2 = ~\frac{T}{4} + |N(\tau)|\Big(\frac{1}{2} - |N(\tau)||\sigma(\tau)|^2\Big)\\ &\qquad \qquad\times\Bigg[\int_{t}^{t + T}dt'~|\xi(t'+\tau)|\cos\big[\theta(t') - \varphi(t')\big]\Bigg]^2\\ &\qquad \qquad+\frac{1}{2}|N(\tau)|\Bigg[\int_{t}^{t + T}dt'~|\xi(t'+\tau)|\sin\big[\theta(t') - \varphi(t')\big]\Bigg]^2.\\ \end{split} \label{eqn:Xphivar_pacs} \end{equation} When Eq.~\eqref{eqn:Xphivar_pacs} is less than the quadrature variance of the coherent state, $T/4$ (see Appendix~\ref{appendix:qCS}), the PAC state can be said to be squeezed. To determine the quadrature most likely to be squeezed, we look for the quadrature phase $\varphi(t)$ such that the variance $(\Delta X_\varphi(t,T))^2$ is minimal. Since only the second term in Eq.~\eqref{eqn:Xphivar_pacs} may be negative, we choose $\varphi(t) = \theta(t)$ to maximise the integral factor while reducing the third term to zero. 
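To make the squeezing condition in Eq.~\eqref{eqn:Xphiminvar_pacs} concrete, the following sketch evaluates the shift of the minimal variance from the coherent-state value $T/4$. The Gaussian profiles, the overlap convention $\sigma(\tau)=\int dt\,\alpha^{*}(t)\xi(t+\tau)$ with $|N(\tau)|=1/(1+|\sigma(\tau)|^2)$, and an integration window covering the whole pulse are assumptions made purely for illustration, as are the function names.

```python
import numpy as np

def gaussian(t, omega, centre=0.0):
    # Normalised Gaussian profile: integral of |xi|^2 dt equals 1.
    return (omega**2 / np.pi) ** 0.25 * np.exp(-0.5 * omega**2 * (t - centre) ** 2)

def variance_shift(n_alpha, tau, omega=1.0, omega1=None):
    # (Delta X_theta)^2 - T/4 from Eq. (Xphiminvar_pacs); negative => squeezing.
    omega1 = omega if omega1 is None else omega1
    t = np.linspace(-40.0, 40.0, 80001)
    dt = t[1] - t[0]
    alpha = np.sqrt(n_alpha) * gaussian(t, omega)  # coherent amplitude alpha(t)
    xi = gaussian(t, omega1, centre=-tau)          # offset photon profile xi(t + tau)
    sigma2 = (np.sum(alpha * xi) * dt) ** 2        # |sigma(tau)|^2 (real profiles)
    N = 1.0 / (1.0 + sigma2)                       # assumed |N(tau)| normalisation
    return N * (0.5 - N * sigma2) * (np.sum(np.abs(xi)) * dt) ** 2
```

Under these assumptions, $n_\alpha=3$ with perfect overlap gives a shift of $-\sqrt{\pi}/8\,\Omega^{-1}\approx -0.22/\Omega$, i.e.\ squeezing, while a large temporal offset drives the shift positive, consistent with the anti-squeezing seen in Fig.~\ref{fig10}.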
This gives the average \begin{equation} \braket{\hat{X}_\theta(t, T)} = \int_{t}^{t + T} dt'~\Big(|N(\tau)||\sigma(\tau)||\xi(t' + \tau)| + |\alpha(t')|\Big), \label{eqn:Xphiminmean_pacs} \end{equation} with the minimal variance \begin{equation} \begin{split} \Big(\Delta X_\theta(t, T)\Big)^2 = \frac{T}{4} + |N(\tau)|\Big(\frac{1}{2} - |N(\tau)|{|\sigma(\tau)|}^2\Big)\\ \times~\Bigg[\int_{t}^{t + T} dt' |\xi(t'+\tau)|\Bigg]^2. \end{split} \label{eqn:Xphiminvar_pacs} \end{equation} A reduction in the variance is determined by the value of $|N(\tau)||\sigma(\tau)|^2$, with larger values providing higher reduction. Values greater than $1/2$ yield a variance $(\Delta X_\theta(t,T))^2 < T/4$, indicating that the state exhibits quadrature squeezing. In the case of perfect overlap $|N(\tau)||\sigma(\tau)|^2 = n_\alpha/(1 + n_\alpha)$, which is greater than $1/2$ for $n_\alpha > 1$, with the corresponding variance expression resembling the single-mode result~\cite{Agarwal91}. Propagation loss is incorporated into the mean by replacing $\hat{a}(t')$ in Eq.~\eqref{eqn:Xphi_mean} with $\hat{a}_L(t')$ . Then, using Eq.~\eqref{eqn:aL} we obtain \begin{equation} \begin{split} \braket{\hat{X}_\varphi(t,T,L)} &= \int_{t}^{t + T} dt'~\mathrm{Re}\Big\{\Big\langle\eta^{\frac{1}{2}}(L)\hat{a}(t_r')\\ &~~~~~~~~~~~~~~~~~~~~~+ {\rm i}(1-\eta(L))^{\frac{1}{2}}\hat{v}(t')\Big\rangle\mathrm{e}^{-{\rm i}\varphi(t')}\Big\}, \end{split} \end{equation} where $t'_r = t' - L/v_g(\omega_0)$. Since the environment is initially in a vacuum state, $\braket{\hat{v}(t')} = 0$, the above expression reduces to \begin{equation} \begin{split} \braket{\hat{X}_\varphi(t,T,L)} &= |\eta(L)|^{1/2} \int_{t + L/v_g}^{t + L/v_g + T} dt'~\mathrm{Re}\Big\{\braket{\hat{a}(t_r')} \mathrm{e}^{-{\rm i}\varphi_r(t_r')}\Big\}\\ &=~|\eta(L)|^{1/2} \braket{\hat{X}_{\varphi_r}(t,T)}. 
\end{split} \label{eqn:XphiL_mean} \end{equation} Note that we have shifted the integral limits by the propagation time $L/v_g$ to remain centred on the pulse. This allows the mean quadrature with loss to equal the attenuated and phase-shifted lossless quadrature. The phase shift results in a retarded phase of $\varphi_r(t'_r)=\varphi(t'_r+L/v_g(\omega_0))-\varphi_\eta/2$. Applying the same procedure to Eq.~\eqref{eqn:Xphi2moment} as was performed for the mean gives \begin{equation} \braket{:{\hat{X}_\varphi}^2(t,T,L):} = ~|\eta(L)|\braket{:{\hat{X}_{\varphi_r}}^2(t,T):}. \label{eqn:Xphi2Lmoment} \end{equation} Combining this result with the mean in Eq.~\eqref{eqn:XphiL_mean} gives the variance with loss, which can be expressed in terms of the phase-shifted lossless variance as~\cite{Blow90,Loudon00} \begin{equation} \Big(\Delta X_\varphi(t, T, L)\Big)^2 = \frac{T}{4}\big(1 - |\eta(L)|\big) + |\eta(L)|\Big(\Delta X_{\varphi_r}(t, T)\Big)^2. \label{eqn:XphiL_var} \end{equation} Minimising the lossy variance in Eq.~\eqref{eqn:XphiL_var} requires that we minimise the phase-shifted lossless variance $(\Delta X_{\varphi_r}(t,T))^2$. Thus, as before, we simply set $\varphi_r(t) = \theta(t)$ in Eqs.~\eqref{eqn:XphiL_mean} and \eqref{eqn:XphiL_var}. This means that the quadrature phase, and that of the local oscillator in a homodyne detection scheme, is $\varphi(t) = \theta(t_r) + \varphi_\eta/2$. The lossy quadrature mean is then \begin{equation} \braket{\hat{X}_\varphi(t,T,L)} = |\eta(L)|^{1/2}\braket{\hat{X}_\theta(t,T)}, \label{eqn:XphiLmean_pacs} \end{equation} with the minimal variance \begin{equation} \Big(\Delta X_\varphi(t, T, L)\Big)^2 = \frac{T}{4} + |\eta(L)|\Big[\Big(\Delta X_{\theta}(t, T)\Big)^2-\frac{T}{4}\Big]. \label{eqn:XphiLvar_pacs} \end{equation} In Eq.~\eqref{eqn:XphiLvar_pacs} we see that with increasing loss ($|\eta(L)|\rightarrow0^+$) the variance linearly approaches $T/4$ as the initial state goes to the vacuum.
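Eq.~\eqref{eqn:XphiL_var} states that propagation loss simply mixes the lossless variance with the vacuum value $T/4$, weighted by the transmission $|\eta(L)|$. A minimal sketch of this convex combination (the function name is ours):

```python
def lossy_variance(eta, lossless_var, T):
    # Eq. (XphiL_var): convex combination of the vacuum value T/4 and the
    # (phase-shifted) lossless variance, weighted by the transmission |eta(L)|.
    return 0.25 * T * (1.0 - eta) + eta * lossless_var
```

A squeezed input ($<T/4$) therefore remains squeezed for any nonzero transmission, with the variance rising linearly towards $T/4$ as $|\eta(L)|\to 0$.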
Thus, as loss increases, the quadrature variance of a squeezed PAC state linearly increases as the second term in Eq.~\eqref{eqn:XphiLvar_pacs} becomes less negative. \begin{figure}[t] \centering \includegraphics[width=8.6cm]{fig10.pdf} \caption{Variance of the quadrature operator for continuous-mode PAC states. (a) The difference in the value of the quadrature variance ($\varphi=\theta$) for continuous-mode coherent states and PAC states as the temporal offset $\tau$ and bandwidth $\Omega_1$ of the added single photon in the PAC state are varied (with $n_\alpha=3$). Negative regions in the plot indicate a parameter regime in which quadrature squeezing occurs. (b) The single-photon offset $\tau$ is varied. (c) The single-photon bandwidth is varied. In (b) and (c) the results for a single-photon state and the out-of-phase quadrature for the PAC state ($\varphi=\theta+\pi/2$) are included as solid orange and dashed blue lines, respectively.} \label{fig10} \end{figure} In Fig.~\ref{fig10}~(a) we show the difference in the value of the quadrature variance for continuous-mode coherent states and PAC states given by Eq.~\eqref{eqn:Xphiminvar_pacs} as the temporal offset $\tau$ and bandwidth $\Omega_1$ of the added single photon in the PAC state are varied (with $n_\alpha=3$). The loss is set to zero for the moment. In the cross-section cuts in Figs.~\ref{fig10}~(b) and (c) we have included the results for a single-photon state (see Appendix \ref{appendix:qNS}) and the out-of-phase quadrature for the PAC state ($\theta \to \theta+\pi/2$). Negative regions in the plot indicate a parameter regime in which quadrature squeezing occurs. Thus, from Fig.~\ref{fig10}~(b), one can see that the variance of $\hat{X}_\theta$ is less than that of the coherent state, provided that the added single photon is offset by less than a pulse-width.
As the offset increases beyond one pulse-width, the PAC state becomes anti-squeezed, with its variance approaching the number-state result. In general, maintaining the indistinguishability of the added single photon from the coherent-state photons is required to ensure optimal squeezing. Similarly, in Fig.~\ref{fig10}~(c), broadening the single-photon bandwidth reduces its pulse-width in time, which induces some distinguishability. The PAC state quadrature variance then approaches that of the number-state result. In Fig.~\ref{fig11} we consider perfect temporal overlap and show the quadrature variance for continuous-mode PAC states given by Eq.~\eqref{eqn:Xphiminvar_pacs} as the mean photon number of the coherent state, $n_\alpha$, is varied. Also shown are the results for a single-photon state and the out-of-phase quadrature for the PAC state. In Fig.~\ref{fig11}, one can see that as the intensity of the coherent state increases, the PAC state becomes less squeezed as it begins to resemble a coherent state, a result well known from the single-mode case~\cite{Zavatta04}. Conversely, if the coherent-state pulse within the PAC state has fewer than one photon on average, the single-photon pulse dominates, resulting in anti-squeezing. The optimal coherent-state pulse has $n_\alpha = 3$. By modifying the parameters $\tau$ and $\Omega_1$, behaviour similar to that seen in the plots of Fig.~\ref{fig10}~(b) and (c) is obtained for arbitrary $n_\alpha$, but with overall lower values of the quadrature variance for moderate $n_\alpha$ ($\gtrsim 3$) and higher values both for large $n_\alpha$ and for $n_\alpha < 3$. When loss is included during propagation, with an initial imperfect pulse overlap, the variance of the quadrature is given by Eq.~\eqref{eqn:XphiLvar_pacs}. Here, changing the pulse parameters simply changes the starting variance at the boundary $|\eta(L)|=1$.
As with the case of perfect overlap, the variance linearly increases from its starting value as the second term becomes less negative for increasing loss. \begin{figure}[t] \centering \includegraphics[width=6.8cm]{fig11.pdf} \caption{Variance of the quadrature operator ($\varphi=\theta$) for continuous-mode PAC states as the mean photon number of the coherent state, $n_\alpha$, is varied. The dotted vertical line corresponds to $n_\alpha=3$, giving the value obtained in Fig.~\ref{fig10}~(a) at $\tau=0$ and $\Omega_1=\Omega$. Also shown are the single-photon state (solid, orange) and the out-of-phase quadrature for the PAC state (dashed, blue) with $\varphi=\theta+\pi/2$.} \label{fig11} \end{figure} \section{Fidelity} In the previous sections it was found that the sub-Poissonian behaviour, second-order correlation function and quadrature squeezing all highlight the nonclassical nature of continuous-mode PAC states. In the case of sub-Poissonian behaviour and the second-order correlation function, it is interesting to note that for all values of $\tau$ and $\Omega_1$ a PAC state can be said to be nonclassical to a varying degree. For such quantities the nonclassicality of continuous-mode PAC states is in some sense robust to timing and bandwidth imperfections, and they therefore do not tell us how good or bad a given state is compared to the ideal case. We now introduce a final quantity, the fidelity, with the aim of characterising the `quality' of the PAC state. In this case, the quality is represented by the overlap squared of the PAC state with the ideal case. The fidelity is a measure of the closeness between two states $\ket{\psi}$ and $\ket{\phi}$. It is defined as $F=|\langle \psi | \phi \rangle|^2$~\cite{Nielsen10}, which is equal to 1 for perfect overlap and zero for no overlap.
In the case of PAC states we have $\ket{\psi}=(1+n_\alpha)^{-1/2}\hat{a}_{\xi_0}^\dag \ket{\{ \alpha \}}$ as the ideal PAC state with a single-photon pulse profile $\xi_0(t)=\frac{1}{\sqrt{n_\alpha}}\alpha (t)$ having perfect timing and bandwidth, and $\ket{\phi}=\ket{1_\xi,\{ \alpha \}}$ as the PAC state with a single-photon profile $\xi$ having arbitrary timing and bandwidth. This leads to \begin{equation} F=|\langle \psi | \phi \rangle|^2=\frac{|N(\tau)|}{1+n_\alpha} |\langle \{ \alpha\}|\hat{a}_{\xi_0}\hat{a}_{\xi}^\dag \ket{\{ \alpha \}}|^2. \end{equation} Using the techniques in Section III, it is straightforward to show that (see Appendix~\ref{appendix:fid}) \begin{equation} F=\frac{|\sigma(\tau)|^2(1+n_\alpha)}{n_\alpha(1+|\sigma(\tau)|^2)}. \label{eqn:fidelity} \end{equation} For perfect overlap of the single photon, $|\sigma(\tau)|^2=n_\alpha$, which gives $F=1$. As the overlap decreases, $|\sigma(\tau)|^2 \to 0$, and we have that $F \to 0$. In general, for a given photon number of the coherent state, $n_\alpha$, the fidelity varies with a $|\sigma(\tau)|^2/(1+|\sigma(\tau)|^2)$ dependence. The corresponding expression for the fidelity with loss present is given in Appendix~\ref{appendix:fid}. In an experiment, the fidelity could be obtained by carrying out time-dependent state tomography, such as in a reconstruction of the Wigner function for the state~\cite{Lvovsky09,MacRae12,Morin13,Ogawa16}. As an example from Section III, for $n_\alpha=3$ and no loss, an imperfect PAC state with $\tau=5/\Omega$ and $\Omega_1=5 \Omega$ displays sub-Poissonian behaviour with $(\Delta n)^2/\langle n \rangle \simeq 0.75$ and has a second-order correlation $g^{(2)}[0]\simeq 0.94$, and is therefore nonclassical. However, $|\sigma(\tau)|^2\sim0$ and thus the state's fidelity with the ideal PAC state is effectively zero. Choosing $\tau=1/\Omega$ and $\Omega_1=\Omega$, the sub-Poissonian behaviour and second-order correlation do not change appreciably, while $F\simeq 0.70$.
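Since Eq.~\eqref{eqn:fidelity} depends on the pulse parameters only through $|\sigma(\tau)|^2$, its limiting behaviour is easy to verify numerically; the helper below (our naming) takes $|\sigma(\tau)|^2$ and $n_\alpha$ directly.

```python
def pac_fidelity(sigma2, n_alpha):
    # Eq. (fidelity): overlap-squared of the PAC state with the ideal one,
    # as a function of |sigma(tau)|^2 and the coherent mean photon number.
    return sigma2 * (1.0 + n_alpha) / (n_alpha * (1.0 + sigma2))
```

Perfect overlap, $|\sigma(\tau)|^2=n_\alpha$, returns $F=1$; the fidelity falls monotonically to zero as the overlap vanishes.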
The fidelity appears to have little correlation with these two nonclassical quantities; however, they are weakly correlated in that as $F$ increases, the sub-Poissonian behaviour and second-order correlation improve slightly, reaching their minimum ideal values, as seen in Figs.~\ref{fig3} and \ref{fig5}. For the quadrature variance we observe a larger variation. As seen in Fig.~\ref{fig10}~(b), we have a minimum of $0.15$ below the coherent state value at $\tau=0$ and $\Omega_1=\Omega$ (where $F=1$), demonstrating squeezing, which increases to the coherent state value at $\tau=1/\Omega$ and $\Omega_1=\Omega$ (where $F=0.70$), corresponding to no squeezing. Thus, squeezing and fidelity appear to be more correlated with each other, which can be seen more clearly by comparing the functional forms of Eqs. (\ref{eqn:Xphiminvar_pacs}) and (\ref{eqn:fidelity}). Further work on connecting continuous-mode fidelity to quadrature squeezing and nonclassicality in general is an interesting direction for future studies. \section{Conclusion} In this work we used a continuous-mode formalism for PAC states to describe how various properties are affected by timing and bandwidth imperfections. We also included loss during propagation. The properties studied were the photon-number distribution, the second-order correlations, quadrature squeezing and fidelity. For the photon-number distribution we calculated its mean and variance, and used these to quantify the degree of nonclassicality in terms of how much the variance-to-mean ratio went below unity, where the state becomes sub-Poissonian. We found the ratio to be robust to temporal and spectral mismatch, and that for increasing loss the ratio had a linear dependence. For the second-order correlations, we found that they are also robust to temporal and spectral mismatch, with values consistently in the quantum (nonclassical) regime and unaffected by loss.
For the quadratures, we found that the variance is again robust to temporal and spectral mismatch, although not as much as the photon-number distribution and correlations. Squeezing was found for a range of temporal and spectral mismatch, further highlighting the nonclassical nature of the continuous-mode PAC state. For the fidelity, we derived the functional dependence describing the closeness of a PAC state to the ideal case and found a small correlation between the fidelity and the sub-Poissonian behaviour, as well as the second-order correlation. On the other hand, the fidelity and quadrature variance showed a stronger correlation. The combined results of this study may aid the further development of robust schemes for PAC state generation and their use in quantum information applications. Future work could study multi-photon-added coherent states and quantify the performance of continuous-mode PAC states in quantum sensing. {\it Acknowledgements.---} This research was supported by the South African National Research Foundation, the National Laser Centre and the South African Research Chair Initiative of the Department of Science and Innovation and National Research Foundation.
\section{Introduction} \label{I} \setcounter{equation}{0} Academia is undergoing profound change. All over the world, positions of leadership and senior management within universities are becoming professionalised. Whereas in times gone by, such positions were typically occupied by senior academics, having proved themselves through long and successful careers in research and education, nowadays professional managers -- who lack such experience -- are increasingly empowered. In the never-ending drive for increased efficiency, policy makers and university managers seek to commoditise all aspects of academic activity including research. In this context, a plethora of indicators has emerged seeking to measure the previously unmeasurable. As a result, the academic sector is becoming ever more business-like and league tables encourage competition over cooperation. One of the simplest indicators is the size of a research department or group. Funding bodies, governments and universities frequently claim that these must be above a certain minimum to be viable. The term ``critical mass'' has been borrowed from nuclear physics to capture this notion. An implication of the term, as it is still all too widely understood, is that a group or department which is subcritical does not have the ability to sustain quality research activity. It suggests (erroneously as we shall see) the existence of a quantum jump in quality once the critical threshold is passed -- a first-order phase transition, to use the parlance of statistical physics. Extending the notion, it is often claimed that bigger is always better in research and that funding should therefore be focused into small numbers of large, elite institutions with smaller universities relegated to teaching-only roles. Here we show that, despite their widespread use, these notions are manifestly incorrect. They are based upon unfounded and conflated analogies and not on scientific rigour.
A proper, measurable definition of critical mass in the research context has only emerged in the last five years~\cite{KeBe10,KeBe11a,KeBe12a}. It does {\emph{not}} entail a quantum leap or first-order transition from low to high quality. Instead, the critical-mass phase transition is higher order -- barely noticeable, in fact. There is also a second, measurable phase transition as group size grows. This is triggered by a {\emph{Ringelmann-type effect}} and marked by a {\emph{Dunbar number.}} The former is due to a slow-down in the rate of increase of quality with quantity due to coordination losses. The latter expresses the reasons for these losses, namely cognitive limits to the numbers of interactions individuals can sustain. In Section~2 we explain these terms in more detail and review their connection to critical mass. A second indicator which has recently emerged is the $h$-index. Again this is very attractive to policy makers and managers as a simple, zero-dimensional metric which is supposed to measure quality on an individual level. The notion has recently been extended to research groups in Ref.\cite{Bi14}. Competing measures include the normalised citation index (NCI)~\cite{Evidenceweb,Ev10,Evidence2011}. While the $h$-index itself was developed by a physicist, the NCI is an invention of a private company. In the second part of this paper, we compare both of these against expert peer review measures of the quality of research groups. We show that the $h$-index is the better indicator of group quality but that its role is strongly dependent on group size. We also show that neither of these metrics is a reliable substitute for expert peer review estimates of quality. Academics fear the increased use of metrics to measure research activity. 
The fear is that the increasingly professionalised management class lacks detailed knowledge of, and appreciation for, academic subjects and seeks to base judgment and decisions on automated metrics rather than expertise and academic experience. This, accompanied by a top-down management approach, bolstered by metrics, and in pursuit of league-table rankings, may impinge upon academic freedom as researchers are forced to chase arbitrary measures rather than follow where their curiosity leads. Here we show that these fears are well founded; despite the rising tide of metrics, they are a poor measure of quality. \section{Evaluation of Quality and Critical Mass} \label{II} \setcounter{equation}{0} The notion of critical mass in research has been around for a very long time. The basic idea is that sub-critical research groups tend to produce research of poor quality. Once the critical threshold is passed, research quality becomes of an acceptable level. This idea has been extended to, and far beyond, its logical conclusion to the idea that ``bigger is always better'' so that benefit always accrues through increasing group size. However, multiple analyses based on citation counts have produced no evidence for such a concept of critical mass. Despite that, policies based upon this belief have been developed and implemented and these have serious consequences. Mark Harrison has studied how government officials allocated funding amongst research projects in the former USSR. In Ref.\cite{Ha09}, he describes how policies swung between phases of competition and phases based on critical mass. Harrison sees remarkable parallels in the UK's funding system today. He predicts that the current focus on concentrating research funding in pursuit of ``critical mass'' (which itself is misunderstood, as we shall see) will eventually give way to competition, but only after the follies of the policy have been acknowledged.
By then, he fears, emerging research groups in promising universities will have lost their funding; small but excellent research centres will have closed and individual careers will have been curtailed. Amazingly, the policies in which these dangers are inherent are based upon concepts of critical mass which have no foundation. Five years ago, we developed a theory for critical mass and the dependency of research quality on group size~\cite{KeBe10}. Our model was successfully tested against empirical data coming from peer-review based national research assessment exercises in the UK and France~\cite{KeBe11a}. To date, this model remains the only theory in existence for the relationship between research quality and group size. A significant amount of literature has been developed since Ref.\cite{KeBe10} and all quantitative evidence supports the theory. There is no quantitative evidence invalidating the model we are about to describe. Here we recall the model developed in Ref.\cite{KeBe10} and how it was statistically tested against empirical evidence. It is important to realise that what we are about to develop here is a theory of averages (a ``mean-field theory'' in the language of statistical physics). There will always be deviations from such averages; the brilliant lone researcher, beavering away on a profound topic for years, still exists, especially in an area like pure mathematics~\cite{Nature}. (We will show, in fact, that critical mass for that subject is very small -- only one or two, and perhaps it is unmeasurable.) Also, although the model predicts higher research success for large groups than small, there are exceptions: some smaller groups are as good as, or better than, some large ones. We could overlay our model of averages with a random distribution to simulate deviations that always exist, but we feel such complications would only serve to obfuscate the trends we are trying to capture. With this in mind, then, we proceed to develop our theory of averages.
A naive assumption (but one that was/is widespread, especially amongst policy makers, funding bodies and research managers) would be that the strength $S$ of a research group or department is a simple sum of the strengths of the individuals comprising it. The idea here is that excellent researchers will be attracted to larger research groups, and the best therefore become bigger. With this causation arrow, the driving mechanism is quality determining size: better becomes bigger. If this were the only relation between quality and size, one would expect to see an unbounded {\emph{Matthew effect.}} The Matthew effect, or cumulative advantage, is a phenomenon whereby ``the rich get richer and the poor get poorer''. (The name comes from the Gospel according to St Matthew, which states: ``For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.'') However, as we shall see, the empirical data does not support this scenario as the dominant driving mechanism. Of course, individual strength is itself a function of many factors, including innate calibre, teaching and administrative loads, management support, the extent of interdisciplinarity, the availability of suitable equipment, whether the work is mainly experimental, theoretical or computational, the methodologies and traditions of the field, library facilities, journal access, external collaboration, grants held, confidence supplied by previous successes, prestige of the institute and many other factors. We denote the mean strength of the individuals in a group, resulting from all these factors and more, by $a$. If the group has $N$ members, then, the strength of the group would simply be $S=aN$. However, research groups are complex systems and we have to take interactions between individuals into account. 
We suppose that direct two-way interaction between individuals produces an added effect and we denote the mean strength of that effect by $b$. (We consider three-way interactions, and so on, as collections of two-way interactions, so we need not take these into account separately.) Since there are $N(N-1)/2$ possible two-way communication links in the group, the strength becomes $S= aN + bN(N-1)/2$. Note that certain institutions, publications and league tables refer to ``research power'' rather than ``strength''~\cite{REF2014def}. We define research {\emph{quality}} $s$ as the strength or power per head, so that $s = S/N = a + b(N-1)/2$, which we write as $s = a_1 + b_1 N$ with $a_1 = a - b/2$ and $b_1 = b/2$. In fact this is not the full story for there has to be a limit to the number of colleagues with whom a given researcher can effectively communicate. In evolutionary psychology and anthropology, this limit is known as the {\emph{Dunbar number}}~\cite{Du92}. This was proposed in the 1990s by Robin Dunbar, who noticed a correlation between the average sizes of the neocortex in species of primates and the average sizes of their social groups. Dunbar suggested that it represents a cognitive limit to the number of individuals with whom one can maintain stable relationships. Extrapolating that correlation to human brain sizes, Dunbar predicts that we can comfortably maintain social relationships, on average, with about 150 people. Again, this is an average only; the number can, of course, vary, and it has been proposed to lie typically between 100 and 250. There is a large body of evidence in support of Dunbar's theory: the estimated average size of a Neolithic farming village is 150, as is the fragmentation point of Hutterite settlements. It is also the size of basic units in professional armies in Roman antiquity as well as in modern times since the 16th century. It is the average village size in the Domesday Book. In our original publications on the topic, we referred to this limit as the upper critical mass and denoted it $N_c$.
But it is a Dunbar number for academic researchers and it is discipline dependent. Once the group size exceeds $N_c$, the group tends to fragment into subgroups. We suppose that the average number of such subgroups is $N/(\alpha N_c)$, so that they have average size $\alpha N_c$. The strength of the group is then ascribed to the accumulation of individual strengths together with the strength of intra-subgroup interactions, $S= aN + bN(\alpha N_c-1)/2$. Of course, each of the $N/(\alpha N_c)$ subgroups can itself interact with another subgroup. If the strength of such interactions is $c$, say, the total strength of an average large ($N > N_c$) group is expected to be $S = aN + bN(\alpha N_c-1)/2 + c N/(\alpha N_c)[N/(\alpha N_c) - 1]/2$. Gathering terms of the same order in $N$, we thus arrive at \begin{equation} s = \left\{ \begin{array}{ll} a_1 + b_1 N & {\mbox{if $N \le N_c$}} \\ a_2 + b_2 N & {\mbox{if $N \ge N_c$}}, \end{array} \right. \label{Nc} \end{equation} where $a_i$ and $b_i$ are functions of the strength parameters $a,b,c$ as well as $\alpha N_c$. In particular $b_2 \propto 1/N_c^2$, so that the slope of the $s$ versus $N$ curve to the right of the Dunbar point should be small for disciplines with large $N_c$ values. To test the above theory, we used empirical data from the UK's last Research Assessment Exercise (RAE). The RAE is a peer-review based assessment of the quality of university research which the Government then uses to decide how funds should be allocated between different universities, institutions and departments. Not surprisingly, it is taken very seriously by university managers as well as by researchers themselves. The exercise took place about every five years up to 2008. In 2014 it was replaced by a somewhat different exercise wherein, besides quality of research, the impact which that research had beyond academia played an important role. The 2014 event was called the Research Excellence Framework or REF.
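The bookkeeping leading to Eq.~(\ref{Nc}) can be sketched directly as a piecewise function of group size; the function name and the parameter values used in the check are illustrative only.

```python
def group_quality(N, a, b, c, Nc, alpha=1.0):
    # Mean quality s = S/N. Below the Dunbar number: fully connected group.
    # Above it: subgroups of mean size alpha*Nc with pairwise subgroup links.
    if N <= Nc:
        return a + 0.5 * b * (N - 1.0)                  # s = a1 + b1*N
    m = alpha * Nc                                      # mean subgroup size
    return a + 0.5 * b * (m - 1.0) + 0.5 * c * (N / m**2 - 1.0 / m)
```

For $\alpha=1$ the two branches join continuously at $N=N_c$, and the slope above the breakpoint is $b_2=c/(2\alpha^2N_c^2)$, recovering the $1/N_c^2$ suppression noted above.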
While for the RAE research was categorised into 67 academic disciplines, in the 2014 REF there were only 36 units of assessment (UoA). Applied Mathematics, for example, which includes some theoretical physics, was a stand-alone UoA at RAE2008 but it was merged with Pure Mathematics, Statistics and Operational Research to form a Mathematical Sciences UoA for REF2014. The 2008 exercise was therefore a more ``fine-grained'' one and for that reason the data we report on here comes from the RAE. In the UK system, research was scrutinised by experts from each discipline to determine the proportions carried out at each of five levels: 4* (world-leading research); 3* (internationally excellent research); 2* (research that is internationally recognised); 1* (research recognised at a national level) and unclassified research. Following RAE, a formula is used to determine how funding is distributed to higher education institutes for the subsequent years. The original formula used by the Higher Education Funding Council for England (HEFCE) valued 4* and 3* research seven and three times more highly than 2* research and allocated no funding to lower quality research. We can use that formula for the relative values of the various categories of research as a proxy for quality. If $p_{n*}$ represents the percentage of a team's research which was rated $n$*, then the team's {\emph{quality}} is \begin{equation} s = p_{4*} + \frac{3}{7}p_{3*} + \frac{1}{7}p_{2*} \,. \label{seven} \end{equation} \begin{figure*}[t] \begin{center} \includegraphics[width=0.47\columnwidth,angle=0]{fig1a.eps} \includegraphics[width=0.47\columnwidth,angle=0]{fig1b.eps} \label{figJ6} \end{center} \vspace{0.3cm} \caption{The relationship between quality and quantity of research groups submitted to the UK's last Research Assessment Exercise in Physics and Applied Mathematics.
For Physics (which is dominated by experimental physics), the Dunbar number marking the breakpoint in the fitted red curve is $N_c = 25 \pm 5$. For Applied Mathematics (which includes some theoretical physics) it is $13\pm 2$. The critical mass is half the Dunbar number. The blue dashed lines in the plots are error boundaries. The purple dotted lines are extensions of the sub-breakpoint fits and capture some of the top performers.} \end{figure*} With Eq.(\ref{seven}) to hand, it is a straightforward exercise to translate the RAE results to quality scores and plot them against group size.\footnote{While generally throughout this paper, the word `group' means a collection of researchers at a given university active in a common discipline, in the UK context it means those who were submitted to RAE2008 in a given unit of assessment. Usually these are permanent staff but they may include postdocs. A group is not, therefore, synonymous with a department because not all department members may be research active and, therefore, not all may have been submitted to RAE. Alternatively, a submission may draw from researchers interacting across different departments. The word `group' in this sense is also not synonymous with research centre, as such entities may involve more than one department. Indeed, at RAE, research centres may have been involved in submissions across more than one subject area. While ``group'' in this context captures how research is conducted at universities in countries such as Austria, France, Ireland, Ukraine and the UK, it misses the more hierarchical structure of, say, German universities, where research groups are more focused around individual senior professors.} Two such plots are given in Fig.~1: one for Physics and one for Applied Mathematics. Almost all experimental physics submitted to the RAE was assigned to the ``Physics'' UoA.
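The weighting in Eq.~(\ref{seven}) amounts to a simple linear score; a minimal sketch (the function name is ours):

```python
def rae_quality(p4, p3, p2):
    # Eq. (seven): HEFCE-weighted quality proxy from the percentages of
    # 4*, 3* and 2* research (weights 7:3:1, normalised to the 4* weight).
    return p4 + (3.0 / 7.0) * p3 + (1.0 / 7.0) * p2
```

A group rated entirely 4* scores $s=100$, one rated entirely 3* scores $300/7\approx 43$, and 1* or unclassified work contributes nothing.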
Some theoretical physics also came under the ``Physics'' banner but the remainder was entered into ``Applied Mathematics''. For our purposes then, we may consider ``Physics'' as meaning experimental physics and ``Applied Mathematics'' as being both applied mathematics and theoretical physics. Fitting to the quality scores derived from formula (\ref{seven}) delivered the estimates $N_c = 13 \pm 2$ for Applied Mathematics and $N_c = 25 \pm 5$ for Physics. Note how the slopes of the $s$ versus $N$ curves reduce to the right of the Dunbar number, as expected. Note also that the slope beyond the breakpoint is smaller for the discipline with the higher $N_c$-value. \begin{table}[b!] \caption{The values of the Dunbar numbers or upper critical masses for a variety of academic disciplines.} \begin{center} \begin{tabular}{|l|r|} \hline \hline & \\ Research discipline & $N_c\quad\quad$ \\ & \\ \hline Applied mathematics &$13 \pm 2~\,$ \\ Statistics \& operational research &$17 \pm 6~\,$ \\ Physics &$25 \pm 5~\,$ \\ Geography, environment \& Earth studies &$30 \pm 3~\,$ \\ Biology &$21 \pm 4~\,$ \\ Chemistry &$36 \pm 13$ \\ Agriculture, veterinary \& food sciences &$10 \pm 3~\,$ \\ Law &$31 \pm 4~\,$ \\ Architecture, the built environment, town \& country planning &$14 \pm 3~\,$ \\ French, German, Dutch \& Scandinavian languages &$ 6 \pm 1~\,$ \\ English language \& literature &$32 \pm 3~\,$ \\ Pure mathematics &$ \le 4 ~\,~\,~\,~\,$ \\ Medical sciences &$41 \pm 8~\, $ \\ Nursing, midwifery, allied health professions \& studies &$18 \pm 5~\, $ \\ Computer sciences 1 &$11 \pm 5~\,$ \\ Computer sciences 2 &$33 \pm 9~\,$ \\ Computer sciences 3 &$49 \pm 10$ \\ Archaeology 1 &$17 \pm 3~\,$ \\ Archaeology 2 &$25 \pm 4~\,$ \\ Economics and econometrics &$11 \pm 3~\,$ \\ Business and management studies &$48 \pm 8~\,$ \\ Politics and international studies &$25 \pm 5~\,$ \\ Sociology &$14 \pm 4~\,$ \\ Education &$29 \pm 5~\,$ \\ History &$25 \pm 5~\,$ \\ Philosophy and theology &$19 \pm 3~\,$ \\ Art \& design &$25 \pm 8~\,$ \\ 
History of art, performing arts, communication and music &$ 9 \pm 2~\,$ \\ & \\ \hline \hline \end{tabular} \end{center} \end{table} We have produced similar plots to those in Fig.1 for a variety of academic disciplines based on data from RAE2008. The reader is referred to the original literature for details~\cite{KeBe11a}. We reproduce the $N_c$ values in Table~1 for convenience. Three different candidate Dunbar numbers were detected for Computer Science. We speculate that this discipline comprises a number of very different approaches, from the formal and theoretical to the empirical. Similarly, in Archaeology, two candidates for $N_c$ were determined. In Pure Mathematics no breakpoint was found and, since the smallest group submitted had $N=4$, we believe the $N_c$ value for that discipline is that number or less. We have stated that $N_c$ is the Dunbar number or upper critical mass. However, this is \emph{not} analogous to the old (unfounded) notions of critical mass as a threshold below which high-quality research is impossible. (Clearly there is no such threshold.) Nonetheless, we can come closer to the old concept of critical mass~\cite{Ha09} with the following considerations. Suppose funding for a new academic were to become available. We ask whether it is more beneficial for society as a whole to allocate the new researcher to a group with $N>N_c$ or to one with $N<N_c$. The decision is made by considering the rate of change of {\emph{strength}} $S=sN$ with $N$. We find that $dS/dN$ is bigger for $N<N_c$ groups provided that $N>N_c/2$. In other words, to maximally benefit society as a whole, additional researchers should be given to the smaller groups provided they are not too small. The cutoff is at \begin{equation} N_k = \frac{N_c}{2}. \label{Nk} \end{equation} We refer to this number as the {\emph{critical mass}}. (In our earlier papers we called it {\emph{lower critical mass}}.) 
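As a concrete numerical illustration, Eq.(\ref{seven}) and the critical-mass relation of Eq.(\ref{Nk}) can be sketched in a few lines of Python; the function names and the example profile below are ours and purely hypothetical:

```python
def quality(p4, p3, p2):
    """Quality score s from a star-rating profile (percentages),
    with HEFCE-style weights 1 : 3/7 : 1/7 for 4*, 3* and 2* research."""
    return p4 + (3.0 / 7.0) * p3 + (1.0 / 7.0) * p2

def critical_mass(n_c):
    """Critical mass N_k, defined as half the Dunbar number N_c."""
    return n_c / 2.0

# Hypothetical profile: 20% rated 4*, 40% rated 3*, 30% rated 2*
s = quality(20.0, 40.0, 30.0)  # 20 + 120/7 + 30/7, about 41.4
n_k = critical_mass(25)        # 12.5, using the Physics Dunbar number
```

A profile consisting purely of 4* research gives $s=100$, the maximum of the scale.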
In other words, we have established a scaling relation between the critical mass and the Dunbar number. We can now divide research groups into three different categories: small ($N < N_k$); medium ($N_k < N < N_c$); and large ($N>N_c$). These categories are discipline-dependent. For example, while a team of five pure mathematicians may be considered as ``large'', a similar number of experimental physicists is ``small''. This accords with experience. The overall shapes of the curves in Fig.1 may be ascribed to a Ringelmann-type effect. The original Ringelmann effect, in psychology and in sports science, is the tendency for individual members of a group to become less productive as the group size increases~\cite{Ri13}. The effect is due to an increase in inefficiency as the numbers grow. In psychology, this is often ascribed to motivational losses (also known as social loafing). But coordination losses may also account for the phenomenon. Here, as in Dunbar's theory, we believe that the phenomenon is due to coordination losses. However, rather than a decrease in the average individual performance (which would be the actual Ringelmann effect), our model manifests a reduction in the {\emph{rate of change of quality}} per unit staff member. In other words, it is the quality-versus-quantity gradient, rather than the quality itself, that reduces as $N$ increases. The challenge to managers, therefore, is to counteract or minimise this effect. Can we point the way towards a solution? We believe we can, as follows. The dotted line in Fig.1 is an extrapolation of the fit for small and medium groups into the supercritical region where $N$ exceeds $N_c$. We refer to such extrapolated lines as ``overshoots''. 
In the absence of a transition point $N_c$, if the mechanism which governs research quality for small and medium groups applied also to the large ones, then the research quality to the right of $N_c$ would also be expected to follow this line. In other words, the naive model outlined earlier, and the one upon which many critical mass policies are currently based, would predict an unbounded Matthew effect along this line. In this case, a policy of continued concentration of resources would indeed drive up research quality. However, the figure clearly indicates that this is not the case. Similar figures for other research disciplines may be found in Ref.\cite{KeBe12a}. Instead, as the figure clearly demonstrates, large research teams tend to have a different interaction pattern than small and medium ones, as predicted in Eq.(\ref{Nc}). With a large value of $N_c \approx 25$ for Physics, research quality is saturated to the right of $N_c$ as Dunbar's theory predicts. This suggests that the concentration of more staff into these teams only leads to a linear increase in research volume and \emph{not} to an increase in research quality. However, it is interesting to note that some of the best performing research teams, which appear as outliers to the overall fit, are bunched slightly above $N_c$ but are well described by the overshoot. We suggest that this overshoot may be caused by a greater than usual degree of cohesiveness in these highly successful research teams. Somehow, two-way communication links are sustained despite their group sizes exceeding the Dunbar numbers for their disciplines. These are the research groups which are best managed: those in which communication is key. We have shown that it is most beneficial to society if new researchers are added to medium-sized groups. This is because the new researcher brings new links to existing academics in such groups, helping to drive them to the Dunbar number. 
However, joining a medium-sized group is {\emph{not}} the best course of action from the new individual's point of view. For them, it is better to join a large group. The reason is that a medium group (unless it is on the border of becoming large) only provides the new researcher with a number of links which is below their cognitive limit. If they join a large group, on the other hand, they can enjoy maximal connectivity. (Of course, establishing such connections takes time as the intra-group cooperative network becomes reconfigured.) This returns us to the causation mechanism mentioned at the start of Section~2. Of course, high-quality individual researchers are attracted to high-quality, large groups. However, that is not how quality grows. Adding more and more researchers to large groups only increases the volume of high-quality outputs. It cannot increase average quality significantly because that is already at saturation level. The fastest way to drive up research quality is to help medium-sized groups attain the nirvana that is the Dunbar number. \section{Quality of Evaluation: Quantitative Indicators} \label{III} \setcounter{equation}{0} So far, we have addressed the evaluation of research quality and the roles played by Dunbar numbers and critical masses. Now we turn to the second main theme announced in the title of our paper: the quality of research evaluation through metrics. Nowadays, many different measures exist that claim to assess academic research excellence or impact, and these are subject to ongoing discussion and debate within the academic, scientometric, university-management and policy-making communities internationally. A topic of prime importance is the extent to which citation-based indicators compare with peer-review-based evaluation. Although flawed in many obvious respects, peer review remains the only approach that is broadly accepted by the scientific community. 
But peer-review exercises such as the RAE and REF are expensive, both in terms of the labour involved and the time lost by academics who could otherwise have been engaged in research activity. For example, it has been estimated that UK universities spent about $\pounds$4000 for each researcher submitted to REF2014~\cite{cost}. The total cost to the UK of running the national evaluation exercise in 2014 is estimated to have been $\pounds$246 million~\cite{cost}. This is a significant fraction of the $\pounds$1.6 billion distributed by HEFCE for quality-related research funding in 2015-2016. (That figure is based on submission of 52\,077 individuals, so the funding allocated to universities is over $\pounds$30,000 per submitted researcher in that year.) For this reason, it is desirable for governments, funders and policy makers to introduce, if it exists, a simple, cheap and reliable way to measure scientific excellence. In the absence of expert subject knowledge, professional research managers are also keen on such an approach to serve as a basis for regularly measuring the performances of research groups and individuals. However, while citation-based indicators and metrics are readily accessible, they are far from being universally accepted as a way to inform evaluation processes. They are even less accepted by academics as a way to replace evaluations based on peer review such as the RAE and REF. In Refs.\cite{MrKe13a,MrKe13b}, we considered a citation-based measurement of research at an amalgamated, institutional or group level, from the natural to social sciences and humanities. In particular, we examined how that measure correlates with the results of RAE2008. We found that the citation-based indicator is very highly correlated with peer-evaluated measures of group strength $S$ for some disciplines. But it is poorly correlated with group quality $s = S/N$ for all disciplines. 
This means that, almost paradoxically, indicators could possibly form a basis for deciding on how to fund research institutions, especially for the so-called hard sciences, but they should not be used as a basis for ranking or comparison of research groups. Moreover, the correlation between peer-evaluated and citation-based scores is weaker for soft sciences. The citation-based measure which we compared against peer review in Refs.\cite{MrKe13a,MrKe13b} was one provided by {\emph{Thomson Reuters Research Analytics}}. This company has developed the so-called {\emph{normalised citation impact}} (NCI) $i$ as a measure of a department's citation performance in a given discipline. The NCI is calculated using data from the {\emph{Web of Knowledge}} databases and by comparing outcomes to a mean or expected citation rate. The measure is determined for an entire group or department and then normalised by group size. It is therefore a {\emph{specific}} (per-head) measure. A useful feature of the NCI is that it attempts to take account of different citation rates in different disciplines. To achieve this, the total citation count for each paper is ``rebased'' to an average number of citations per paper for the year of publication and either the field or journal in which the paper was published. Here we denote the NCI specific measure by $i$. Scaled up to the size of a group or department, the corresponding {\emph{absolute}} measure is denoted by $I$, where $I = iN$. The relationship between $i$ and $s$ for Physics research groups submitted to RAE2008 is given in Fig.2 (left panel). Clearly the correlation between the two measures is very weak. In fact the Pearson correlation coefficient is only $0.48$. In Refs.\cite{MrKe13a,MrKe13b} we also analysed the correspondence between $i$ and $s$ for biology; chemistry; mechanical, aeronautical and manufacturing engineering; geography and environmental studies; sociology; and history. 
The maximum value for Pearson's coefficient was 0.60 (for biology and chemistry). These weak correlations indicate that the NCI cannot be used to estimate research quality -- it cannot be used as a proxy for peer review. We also ranked the various universities according to their $i$ and $s$ values to see if the NCI could, at least, serve as a guide to league tables. However, Spearman's coefficient was of similar weakness to Pearson's, so this is also ruled out. \begin{figure*}[t] \begin{center} \includegraphics[width=0.47\columnwidth,angle=0]{Fig2a.eps} \includegraphics[width=0.47\columnwidth,angle=0]{Fig2b.eps} \label{fig2} \end{center} \vspace{0.3cm} \caption{Left panel: The correlation between the normalised citation impact $i$ and peer-review measures of group research quality $s$ for Physics. Right panel: The correlation between the absolute impact $I=iN$ and research strength/power $S=sN$. The black and red symbols represent large and medium/small groups respectively.} \end{figure*} Figure~2 (right panel) depicts the relationship between the absolute measure $I = iN$ and strength (or power) $S = sN$. The correlations are very good. In fact, for Physics, Pearson's correlation coefficient for these measures is a remarkable 0.96. It is also in the nineties for the other scientific disciplines (biology and chemistry). However, it is lower for sociology and history (0.88 in each case). The multiplication by $N$ when replacing specific measures by their absolute counterparts stretches the data sets, and this rescaling leads to improved correlations. Moreover, when one partitions the various teams into small, medium and large (as explained in the previous section), the correlation tends to be best for large groups. Nonetheless the conclusion is clear: citation-based measures may perhaps inform, or even serve as a proxy for, peer-review measures of the strengths of research groups. But they cannot be used for measures of quality. 
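The inflation of correlations by the common factor $N$ can be demonstrated with synthetic data. The following toy example (entirely invented; not the RAE data) draws per-head scores that are independent of each other and shows that the corresponding absolute measures are nonetheless strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups = 200

N = rng.uniform(2, 100, n_groups)    # group sizes
i = rng.uniform(0.5, 1.5, n_groups)  # specific (per-head) citation impact
s = rng.uniform(0.5, 1.5, n_groups)  # specific quality, independent of i

# Pearson correlations of specific vs. absolute measures
r_specific = np.corrcoef(i, s)[0, 1]          # near zero by construction
r_absolute = np.corrcoef(i * N, s * N)[0, 1]  # large: driven by the shared N
```

Because $iN$ and $sN$ share the factor $N$, their covariance is dominated by the variance of the group sizes; this is why $I$ and $S$ can correlate strongly even when $i$ and $s$ barely do.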
In the UK system, quality-related governmental funding for research is strength based; the amount of money delivered to universities for a particular research discipline is a function of their $S$-scores at RAE or REF. The above results indicate that the use of citation-based alternatives may offer a far cheaper and less intrusive alternative to the current system. The scale of the savings that could be made in the UK alone is a quarter of a billion pounds per exercise. Such a proxy would work best for the hard sciences but would be less reliable for the social sciences and humanities. Also, it is crucial to realise that such citation-based indicators should not be used to compare or rank the qualities of research groups. However, we see a grave danger in introducing any form of citation-based metric to evaluation exercises. The {\emph{Hawthorne effect}} (also known as the {\emph{observer effect}}) is the name given to the reactions of individuals when an aspect of their behaviour is observed or measured; they naturally seek to modify that behaviour. This would be disastrous in the academic context; university managers would force researchers to chase citations instead of curiosity. This would shift the entire landscape of academic research as everyone chases the same ``hot topics'' in a bid to increase their citation counts. It is well known that many profound and important scientific breakthroughs have come about through mixtures of curiosity, serendipity, accident and chance. Academic freedom is the cornerstone on which these features can thrive. Combined with the professionalisation of research management and the ``businessification'' (indeed, moves towards privatisation) of many third-level institutions, citation-based league tables would pose a serious threat to academic freedom. In 2014, Dorothy Bishop arrived at a similar conclusion to ours. She used a different metric -- the so-called {\emph{departmental $h$-index}} instead of the NCI. 
In Ref.\cite{MrKe15a} we showed that Bishop's metric has an even better correlation with the RAE-measured strength index $S$ than has the scaled-up NCI. The meaning of a departmental $h$-index of, say, $n$ is that $n$ papers authored by researchers from a given department in a given discipline were cited at least $n$ times over a given time period (we used the periods of assessment for RAE2008 and REF2014). This index takes into account, in principle, {\emph{all}} researchers (not only those submitted to RAE or REF). However, in practice it can be dominated by a single, extremely strong individual. Also, unlike for REF, where authors' addresses at the census date determine their affiliations for assessment purposes, the author's address at the time of publication determines to which university a given output is allocated for the departmental $h$-index. Nonetheless, the departmental $h$-index is indeed better correlated with research quality; the Pearson and Spearman correlation coefficients between $h$ and $s$ for Physics at RAE2008 were 0.55 and 0.58, respectively. These correlations are still, however, too weak to replace or inform exercises like the RAE or REF. To demonstrate that, we decided to use the departmental $h$-index to predict outcomes, in terms of rankings, of REF2014 in advance of their being announced by HEFCE. If simple citation-based metrics can be used as some sort of proxy for peer review, one would expect them to be able to predict at least some such aspects of the outcomes of such exercises. Even limited success might suggest that a citation metric could serve at least as a ``navigator'' -- to help guide research institutes as they prepare for the expert exercises. We placed our predictions for the rankings in Biology, Chemistry, Physics and Sociology on the arXiv in November 2014 (they were subsequently published as Ref.\cite{MrKe15a}). The REF results were announced in December 2014. 
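For completeness, the $h$-index computation just described can be sketched as follows; this is a generic implementation over a pooled list of citation counts, not the exact pipeline used for the departmental figures:

```python
def h_index(citations):
    """Largest h such that at least h of the papers have h or more citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts give h = 4
example = h_index([10, 8, 5, 4, 3])
```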
We revisited our study in January 2015 and found that our predictions failed to anticipate REF outcomes. For example, the Pearson and Spearman correlation coefficients between the REF-measured $s$ values and the departmental $h$-index in Physics were 0.55 and 0.50, respectively. One submission which was ranked 27th in a certain subject area according to the departmental $h$-index actually came in seventh place in the REF. We also sought to predict whether institutions would move up or down in the rankings between RAE2008 and REF2014. For Physics, the correlation between our predictions and the actual results was 0.26. For Chemistry it was 0.05 and for Biology it was -0.15. As was later commented, managers would find better estimates of movement in the league tables by tossing dice. Our results are published as Ref.\cite{MrKe15b}. \section{Conclusions} \label{IV} \setcounter{equation}{0} In the first part of this review, we reported upon the development of a precise definition of a minimum {\emph{critical mass}} for academic research. (In our earlier publications on the topic, we referred to this quantity as ``lower critical mass''.) Contrary to previous intuition-based notions, this is not a threshold value marking a step change between low- and high-quality research. In fact it is not directly perceptible or measurable. Instead it is connected, via a scaling relation, to the Dunbar number for academic research. (We previously referred to this as the ``upper critical mass''.) This marks an average cognitive limit to the number of intra-group research relationships one can sustain; in a large department, one cannot cooperate with everybody all of the time. The Dunbar number is discipline dependent; e.g., it is less than or equal to four for Pure Mathematics and approximately twenty-five for Experimental Physics on average (with a standard deviation of five). 
We showed that concentrating more staffing resources into already-large research groups does not lift the quality of those groups significantly, especially if the discipline has a high Dunbar number. This is because everyone already has sufficient links to maximise cognitive activity. Adding more resources to such a group only serves to increase the volume of high-quality research. If one wishes to increase quality more significantly, more research staff should be allocated to medium-sized research groups. The standard is raised by introducing new links, as relationships between existing staff and the new arrival are established over time. We noticed in Fig.1, and in similar figures for other disciplines, that the best performing groups are not always the biggest. Frequently, groups with size just in excess of the Dunbar number perform exceptionally well -- they are large groups behaving as if they were medium. For example, in RAE2008, Lancaster University, with 26 Physics staff submitted, was ranked highest in terms of quality, but Cambridge, with 141.25 staff, was largest. We suggested that these are the groups within which communication flows easiest and that these should form a guide for managers at other institutions to seek to emulate. We then compared measures of research quality coming from peer review to citation-based measures. The poor correlations mean that such citation indicators {\emph{cannot}} be used as proxies for peer review. Nor should such indicators be used to rank university departments or groups into league tables. However, the NCI and $h$-index compare well to the outcomes of peer review when one is interested in measuring strength or power, defined as volume of quality. Since this is the basis on which quality-related funding is distributed, one could argue that a suitable metric may be developed to replace invasive exercises or reduce their cost. 
Dorothy Bishop goes a step further; she suggests abolishing the REF altogether and reverting to the block-grant system that existed before the RAE. She argues that, besides not being cost-effective, the REF has numerous adverse effects on academia. On the basis of evidence and rigorous analyses produced over the past five years, we are inclined to agree, but only for well-established universities with large research groups of solid and long-standing repute. Research evaluation exercises have, however, been useful for smaller and medium groups as they seek to establish themselves in newer universities which have less developed research cultures. Some such universities have learned -- through the RAE and REF -- that curiosity-driven, pure academic research has value. We suggest that peer-review (not metric-based!) exercises for such groups can and should continue. But they should be extended to include measurements of the extent to which such institutes promote academic values such as academic freedom and bottom-up research. In other words, peer review should be more focused on institutions which can actually benefit from it. Those which are established, and which enjoy long-standing high reputations amongst their peers internationally, should be allowed to get on with research. Fig.1(a) shows that very little separates such groups in terms of quality; in ranking such groups, one is essentially ranking noise. At the risk of stating the obvious, citation-based metrics measure citation counts, not quality. These are very different things. We suggest that on no account should one use metrics to decide policy or inform management. In our opinion, these pose a serious and current threat to curiosity-driven research, which is the very {\emph{raison d'\^{e}tre}} for universities. 
Finally, we suggest that it is incumbent upon academics themselves, who have the training and expert knowledge, to turn their methodologies on the scientific process itself in order to produce evidence to influence the powers that be. The UK recently commissioned an independent review of the role of metrics in research assessment and management. The remit was ``to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research.'' The report, ``The Metric Tide'', was published in July 2015~\cite{Tide1}. It concluded that peer review continues to command widespread support as the main basis for evaluating research and that there is legitimate concern that indicators, including citation counts and rankings, can be misused or ``gamed.'' The report also concluded that no current metric can provide a replacement for peer review. A legacy of the report is the establishment of a forum for ongoing discussion of issues related to metrics. This aims to celebrate good practices but also to name and shame bad ones. Inspired by the ``Bad Sex in Fiction'' award, a ``Bad Metric'' prize will be awarded to the most egregious example of inappropriate use of quantitative indicators in research management~\cite{ResponsibleMetrics}. This demonstrates the power of turning the evaluation process on itself. \section*{References}
\section{Introduction} \label{sec:introduction} The \emph{Columbia plot}, of which we show in \figurename~\ref{fig:scenariosCP} two possible versions based on current findings, encapsulates our still very limited knowledge about the order of the thermal phase transition in QCD as a function of the two light (assumed degenerate) quark masses $m_{u,d}$ and the strange quark mass $m_{s}$. Continuum extrapolated results are so far only available at the physical point. Elsewhere, using different unimproved~\cite{Karsch:2001nf,deForcrand:2003vyj,deForcrand:2007rq,Bonati:2014kpa,Philipsen:2016hkv,deForcrand:2017cgb} and improved~\cite{Burger:2011zc,Brandt:2016daq,Jin:2017jjp,Bazavov:2017xul} fermion discretizations, seemingly contradictory results have been obtained, in particular concerning the case of $N_\text{f}=2,3$ degenerate light flavors in the limit of small masses, corresponding to the top and bottom left corners of the Columbia plot, respectively. \begin{figure}[tp] \centering \subfigure[First order scenario in the $m_{s}-m_{u,d}$ plane.]% {\label{fig:firstOrderScenarioCP}\includegraphics[width=0.45\textwidth,clip]{ColumbiaPlot1st}}\hfill % \subfigure[Second order scenario in the $m_{s}-m_{u,d}$ plane.]% {\label{fig:secondOrderScenarioCP}\includegraphics[width=0.45\textwidth,clip]{ColumbiaPlot2nd}} \caption{Two possible scenarios for the order of the QCD thermal phase transition as a function of the quark masses. Indicated in Fig.~\protect\ref{fig:secondOrderScenarioCP} are also plausible universality classes for the second order line at $m_{u,d}=0$.} \label{fig:scenariosCP} \end{figure} This motivated us to push forward with studies aiming at elucidating, in particular, the picture for $N_\text{f}=2$ degenerate light flavors, by exploiting the dependence of the chiral transition on the number of light degenerate flavors $N_\text{f}$ as a means to perform controlled chiral extrapolations. 
To this end, we treated $N_\text{f}$ as a continuous real parameter of a statistical system which, at any integer $N_\text{f}$ value, behaves as QCD at zero density with $N_\text{f}$ mass-degenerate fermion species~\cite{Cuteri:2017gci} \begin{equation} Z_{N_\text{f}}(m) = \int \mathcal{D}U \left[\det M(U,m)\right]^{N_\text{f}} e^{-\Action_{\text{G}}}\;. \end{equation} Within this framework, the two considered scenarios for the Columbia plot can be put in one-to-one correspondence with the two sketches for the order of the thermal phase transition in the $(m,N_\text{f})$-plane displayed in Figure~\ref{fig:scenariosMvsNf}. \begin{figure}[tp] \centering \subfigure[First order scenario in the $m-N_\text{f}$ plane.]% {\label{fig:firstOrderScenarioMvsNf}\includegraphics[width=0.45\textwidth,clip]{1stMVsNf}}\hfill % \subfigure[Second order scenario in the $m-N_\text{f}$ plane.]% {\label{fig:secondOrderScenariosMvsNf}\includegraphics[width=0.45\textwidth,clip]{2ndMVsNf}} \caption{The two considered possible scenarios for the order of the QCD thermal phase transition as a function of the quark mass and the number of degenerate fermion flavors.} \label{fig:scenariosMvsNf} \end{figure} Our original strategy was to find out for which (tricritical) value $\Nf^\text{tric}$ the phase transition displayed by this system changes from first order to second order, by mapping out the $Z_2$ phase boundary. The extrapolation to the chiral limit with known tricritical exponents can then decide between the two scenarios, depending on whether $\Nf^\text{tric}$ is larger or smaller than 2. While the tricritical scaling region was found to be very narrow already on coarse lattices, results at larger $m$ and $N_\text{f}$ were found to feature, over a much wider region, a remarkable linear behavior, which was not expected on universality grounds. 
\begin{wrapfigure}{r}{0.5\textwidth} \centering {\label{fig:rescaled}\includegraphics[width=0.5\textwidth,clip]{MVsNfDetailWithLinExtr}} \caption{Sketch showing how, via a linear extrapolation to the chiral limit, $\Nf^\text{lin}$ as an upper bound for $\Nf^\text{tric}$ can be extracted, with the first order scenario being realized for as long as $\Nf^\text{lin} < 2$.} \end{wrapfigure} What our findings suggest is that, if it is reasonable to expect both linearity within some range in $N_\text{f}$ and tricritical scaling closer to the chiral limit, then one would be able to make use of a linear extrapolation to $m=0$, to at least get $\Nf^\text{lin}$ as an upper bound for $\Nf^\text{tric}$, from much more affordable simulations and possibly without even simulating at noninteger numbers of flavors. For as long as the upper bound from the linear extrapolation remains at $N_\text{f} < 2$, while one simulates at larger and larger $N_\tau$ values towards the continuum limit, one can infer that the transition in the $N_\text{f}=2$ chiral limit is of first order. However, should our linear extrapolation give $\Nf^\text{lin} \gtrsim 2$, then knowledge of the size of the scaling region is necessary to draw conclusions. \section{Numerical strategy} \label{sec:numericalStrategy} We employ unimproved staggered fermions and use the RHMC algorithm~\cite{Kennedy:1998cu} to simulate any number $N_\text{f}$ of degenerate flavors, with $\frac{N_\text{f}}{4}$ being the power to which the fermion determinant is raised in $Z_{N_\text{f}}(m)$. All numerical simulations are performed using the publicly available OpenCL-based code CL\kern-.25em\textsuperscript{2}QCD~\cite{Philipsen:2014mra} of which a version 1.0 has been recently released~\cite{clqcd}. We consider temporal extents $N_\tau = 4,6$ to check for the cutoff dependence of $\Nf^\text{lin}$. 
The ranges in mass $m$ and gauge coupling constant $\beta$ of the investigated parameter space are dictated by our purpose of locating the chiral phase transition for values of the mass $m$ around the critical $\LatMassStaggered_{\ZTwoUniversality}$ value, with the temperature related to the coupling according to $T = 1/(a(\beta)N_\tau)$. To locate and identify the order of the chiral phase transition we rely on a finite size scaling analysis of the third and fourth standardized moments of the distribution of the (approximate) order parameter. The $\text{n}^{\text{th}}$ standardized moment for a generic observable $\mathcal{O}$ is expressed as \begin{equation} B_n(\beta,m,N_\sigma) = \frac{\left\langle\left(\mathcal{O} - \left\langle\mathcal{O}\right\rangle\right)^n\right\rangle}{\left\langle\left(\mathcal{O} - \left\langle\mathcal{O}\right\rangle\right)^2\right\rangle^{n/2}} \; . \end{equation} Being interested in the order of the thermal phase transition in the chiral limit, we consider the kurtosis $B_4(\beta,m)$~\cite{Binder:1981sa} of the sampled $\langle \psibar \psi \rangle$ distribution, evaluated at the coupling $\beta_c$ for which $B_3(\beta=\beta_c,m,N_\sigma)=0$, i.e.~on the phase boundary. \begin{figure}[tp] \centering\includegraphics[width=1\textwidth,clip]{distributionsAndMoments} \caption{The chiral condensate distribution according to a model $\mathcal{P}(x)$ based on our numerical findings at various $\beta$ values and the corresponding moments as function of $\beta$. Details on the model are discussed in~\cite{Cuteri:2017gci}.} \label{fig:distributions} \end{figure} In the thermodynamic limit $N_\sigma \rightarrow \infty$, the kurtosis $B_4(\beta_c,m)$ takes the values of 1 for a first order transition and 3 for an analytic crossover, respectively, with a discontinuity when passing from a first order region to a crossover region via a second order point. 
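The limiting values of the kurtosis quoted above can be checked directly on sampled data. The following sketch (our own illustration, independent of the production code) estimates $B_n$ from a sample and reproduces $B_4\to1$ for a two-peak distribution and $B_4\to3$ for Gaussian fluctuations:

```python
import numpy as np

def standardized_moment(samples, n):
    """B_n = <(O - <O>)^n> / <(O - <O>)^2>^(n/2) for a sample of O."""
    d = np.asarray(samples, dtype=float)
    d = d - d.mean()
    return (d**n).mean() / (d**2).mean() ** (n / 2.0)

# Two sharply separated peaks (first-order-like distribution): B4 -> 1
two_peak = np.array([1.0, -1.0] * 5000)
b4_first_order = standardized_moment(two_peak, 4)   # exactly 1 here

# Gaussian fluctuations (crossover-like distribution): B4 -> 3
rng = np.random.default_rng(0)
b4_crossover = standardized_moment(rng.normal(size=100_000), 4)
```

For the symmetric two-peak sample, $B_3$ vanishes as well, consistent with evaluating the moments on the phase boundary where the distribution is symmetric.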
For the $3D$ Ising universality class, which is the relevant one for our case, the kurtosis takes the value $1.604$~\cite{Pelissetto:2000ek}. The discontinuous step function is smeared out to a smooth function as soon as a finite volume is considered. In the lattice box, the distribution of the approximate order parameter and its higher moments behave, depending on $\beta$, as illustrated in Figure~\ref{fig:distributions}. Moreover, in the vicinity of a critical point, the kurtosis $B_4(\beta_c,m,N_\sigma)$ can be expanded in powers of the scaling variable $x\equiv(m - \LatMassStaggered_{\ZTwoUniversality}) N_\sigma^{1/\nu}$, and, for large enough volumes, the expansion can be truncated after the linear term, \begin{equation}\label{eq:BinderScaling} B_4(\beta_c,m, N_\sigma) \simeq B_4(\beta_c,\LatMassStaggered_{\ZTwoUniversality}, \infty) + \textcolor{blue}{c} \, (m - \textcolor{blue}{\LatMassStaggered_{\ZTwoUniversality}}) N_\sigma^{1/\nu}. \end{equation} As already mentioned, in our case, the critical value for the mass $\LatMassStaggered_{\ZTwoUniversality}$ is known to correspond to a second order phase transition in the 3D Ising universality class, so we fix \mbox{$B_4(\beta_c,\LatMassStaggered_{\ZTwoUniversality}, \infty) = 1.604$} and $\nu = 0.6301$ to better constrain the fit. Our simulated values for $B_4(\beta_c,m, N_\sigma)$ are then fitted to Eq.~\eqref{eq:BinderScaling} and the fit parameters $\textcolor{blue}{c}$ and $\textcolor{blue}{\LatMassStaggered_{\ZTwoUniversality}}$ are extracted. The whole study has been repeated for $N_\text{f}\in\lbrace3, 4, 5\rbrace$ at $N_\tau=4$ and \mbox{$N_\text{f}\in\lbrace3.6, 4.0, 4.4\rbrace$ at $N_\tau=6$}. \section{Results and conclusions} \label{sec:resultsAndConclusions} \begin{figure}[tp] \centering\includegraphics[width=0.9\textwidth,clip]{MassOverTNf} \caption{The $Z_2$ boundary in the $m/T-N_\text{f}$ plane for $N_\tau=4, 6$. 
The dark blue line represents the tricritical extrapolation to the chiral limit as in~\cite{Cuteri:2017gci}. The orange line represents a linear extrapolation based on $\LatMassStaggered_{\ZTwoUniversality}$ in the $N_\text{f}$ range 2.4-5 using also newly simulated points. The violet line represents a linear extrapolation on the basis of $\LatMassStaggered_{\ZTwoUniversality}$ in the $N_\text{f}$ range 3.6-4.4. The magenta point at $N_\tau=6$ and $N_\text{f}=3$ is borrowed from~\cite{deForcrand:2007rq}.} \label{fig:results} \end{figure} Our results are reported in Figure~\ref{fig:results}. The first important thing to observe is that, while the tricritical extrapolation for $N_\tau=4$ resulted in $\Nf^\text{tric} < 2$, providing a confirmation for the first order scenario being realized on coarse lattices, a linear extrapolation to the chiral limit using those data which exhibit linear scaling within the range $N_\text{f} \in [2.4,5.0]$ results in $\Nf^\text{lin}=2$ within errors. Strictly speaking, by just considering $N_\tau=4$ results, one would conclude that the linear extrapolation alone cannot give conclusive answers on the order of the $N_\text{f}=2$ transition in the chiral limit. However, results on finer lattices were produced as well. On $N_\tau=6$, what we observe is that data within the range $N_\text{f} \in [3.6,4.4]$ certainly do not fall in the tricritical scaling region, but they do exhibit linear scaling. Moreover, if we consider the result for $N_\text{f}=3$ for the same discretization from the literature~\cite{deForcrand:2007rq}, we can see it is fully consistent with our linear extrapolation. Finally, the most important aspect of this result is that, linearly extrapolating at $N_\tau=6$, we get $\Nf^\text{lin} \lesssim 3$, namely quite far to the right of $N_\text{f}=2$.
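The analysis chain used here, a fit of Eq.~\eqref{eq:BinderScaling} for the critical mass at each $N_\text{f}$ followed by a linear extrapolation of the critical masses to zero, can be sketched as follows on synthetic data; all numbers below are purely illustrative and not our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

B4_CRIT, NU = 1.604, 0.6301               # 3D Ising values, held fixed

def binder(X, c, m_c):
    """Linearized finite-size scaling of the kurtosis, Eq. (BinderScaling)."""
    m, Ns = X
    return B4_CRIT + c * (m - m_c) * Ns**(1.0 / NU)

def fit_mc(m, Ns, B4):
    (c, m_c), _ = curve_fit(binder, (m, Ns), B4, p0=(0.01, float(m.mean())))
    return m_c

rng = np.random.default_rng(1)
Ns = np.tile([8.0, 12.0, 16.0], 5)        # three volumes, five masses each

# Step 1: extract m_c at each N_f from synthetic kurtosis data
# (fake critical masses chosen to lie on a line crossing zero at N_f = 3)
Nf_vals = np.array([3.6, 4.0, 4.4])
mc_true = 0.02 * (Nf_vals - 3.0)
mc_fit = []
for mc in mc_true:
    m = np.repeat(mc + np.linspace(-0.01, 0.01, 5), 3)
    B4 = binder((m, Ns), 0.02, mc) + rng.normal(0, 0.003, m.size)
    mc_fit.append(fit_mc(m, Ns, B4))

# Step 2: linear extrapolation of m_c(N_f) to the chiral limit
a, b = np.polyfit(Nf_vals, mc_fit, 1)     # m_c = a*N_f + b
Nf_lin = -b / a                           # intercept m_c = 0
print(Nf_lin)                             # ~ 3 for these made-up inputs
```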
To conclude, we have proposed and tested an approach to clarify the order of the thermal transition in the chiral limit of QCD at zero chemical potential with two dynamical flavors of quarks. Specifically, a controlled chiral extrapolation in the $m-N_\text{f}$ plane, with $N_\text{f}$ promoted to a continuous parameter in the path integral formulation of the theory, is possible: if the transition for $m\rightarrow0$ changes from first order (triple) to second order as $N_\text{f}$ is reduced, there has to exist a tricritical point at some $\Nf^\text{tric}$. Moreover, the linearity featured by the $Z_2$ boundary over a wide $N_\text{f}$ region suggests that a linear extrapolation to $m=0$ can also provide an upper bound $\Nf^\text{lin}$ for $\Nf^\text{tric}$, which may become useful to discriminate between the first and second order scenarios and help resolve the ``$N_\text{f}=2$ puzzle''. Based on our numerical findings, the shift in the $Z_2$ critical boundary from $N_\tau=4$ to $N_\tau=6$ points towards a behavior consistent with that from improved actions on sufficiently fine lattices. \section{Acknowledgements} The authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) through the grant CRC-TR 211 ``Strong-interaction matter under extreme conditions'' and by the Helmholtz International Center for FAIR within the LOEWE program of the State of Hesse. The project received initial support by the German BMBF under contract no. 05P1RFCA1/05P2015 (BMBF-FSP 202). We also thank the staff of LOEWE-CSC\ and L-CSC\ for their support. \bibliographystyle{JHEP}
\subsection{Canonical suppression}\label{sec:strangeness:canonical} The abundances of strange particles in heavy-ion collisions are compatible with those of a hadron gas in thermal and chemical equilibrium and can be described using a grand canonical statistical model~\cite{Cleymans:2006xj,Andronic:2008gu}. Extensions of the statistical description, like the ones employed in the strangeness canonical suppression~\cite{Redlich:2001kb} and the core-corona superposition~\cite{Becattini:2008yn,Aichelin:2008mi} models, can effectively produce a suppression of strangeness production in small systems. The authors in~\cite{Vislavicius:2016rwi} have studied the multiplicity dependence of light flavour hadron production at LHC energies in the strangeness canonical suppression picture using THERMUS~\cite{Wheaton:2004vg}. The details can be found therein. As can be observed from Figure~\ref{fig:strangeness:canonical:suppression}, the study shows that the model provides a good description of the experimental data, except for the $\phi$ meson. As a matter of fact, the strangeness canonical suppression model is only sensitive to the strangeness quantum number of the hadron. The observation of an enhancement (suppression) of $\phi$ meson production with increasing (decreasing) multiplicity clearly signals that the particle production rate is not consistent with the one expected from a hadron with zero strangeness quantum number. This can also be observed in Figure~\ref{fig:strangeness:canonical:phi}, which shows the ratio of $\phi$/K and $\Xi$/$\phi$ as a function of \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace. It is evident that $\phi$ meson (S = 0) production increases faster than K meson (S = 1) production. When compared to the $\Xi$, the $\Xi$/$\phi$ ratio is constant within uncertainties for \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace $>$ 10.
This is an indication that the $\phi$ meson behaviour is closer to the behaviour expected from a hadron formed by two strange quarks. The increase of the relative $\phi$ meson production with \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace constitutes a notable crack in the canonical suppression interpretation of the observed strangeness enhancement in high-multiplicity pp and p--Pb\xspace collisions. \section{Collective phenomena}\label{sec:collective} The observation of the ridge in pp~\cite{Khachatryan:2010gv} and subsequently in p--Pb\xspace collisions~\cite{CMS:2012qk} promptly triggered further investigations. Studies of two-particle correlations in p--Pb\xspace collisions reported by the ALICE Collaboration have shown that, when taking the difference between the yields per trigger particle in high-multiplicity and low-multiplicity events, two nearly identical long-range ridge-like excess structures on the near-side and away-side, the so-called ``double ridge'', arise~\cite{Abelev:2012ola,Aad:2012gla}. Results on two-particle correlations of identified hadrons~\cite{Khachatryan:2016txc,ABELEV:2013wsa,Khachatryan:2014jra,Abelev:2014pua} and of Bose-Einstein (HBT) correlations~\cite{ATLAS:2017jts,Adamova:2017opl} further strengthened the initial observations, showing striking similarities between pp, p--Pb\xspace and Pb--Pb\xspace collisions, consistent with the hydrodynamic picture of a particle-production source expanding more explosively along the collision event plane.
\input{multiparticle} \input{spectra} \section{Conclusions}\label{sec:conclusions} Several effects, like near-side long-range correlations and mass-dependent hardening of \ensuremath{p_{\rm T}}\xspace distributions, which in nuclear collisions are typically attributed to the formation of a strongly-interacting collectively-expanding quark-gluon medium, have been observed in high-multiplicity pp and p--Pb\xspace collisions at the LHC~\cite{Khachatryan:2010gv,CMS:2012qk,Abelev:2012ola,Aad:2012gla,Aad:2013fja,Chatrchyan:2013nka,Abelev:2013haa,ABELEV:2013wsa,Adam:2015vsf,Khachatryan:2016yru,Khachatryan:2016txc}. An enhanced production of strange particles as a function of the charged-particle multiplicity density (\ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace), originally considered to be another signature of QGP formation in nuclear collisions~\cite{Rafelski:1982pu,Koch:1982ij,Koch:1986ud}, has also been recently observed in pp collisions~\cite{ALICE:2017jyt}. The study of small collision systems at high multiplicity is undoubtedly of considerable interest. Further studies are essential to assess whether the combination of these observations can be interpreted as a signal of the progressive onset of a QGP medium, which starts developing already in small systems. \subsection{Strangeness enhancement}\label{sec:strangeness:enhancement} An enhanced production of strange hadrons was one of the earliest proposed indicators for the formation of a Quark-Gluon Plasma (QGP) state~\cite{Rafelski:1982pu,Koch:1982ij,Koch:1986ud}, as higher rates for strange quark production are expected in a highly-excited state of QCD matter. This strangeness enhancement is expected to be more pronounced for multi-strange baryons, and was indeed observed in collisions of heavy nuclei at the SPS, RHIC and LHC~\cite{Andersen:1998vu,Andersen:1999ym,Antinori:2004ee,Afanasiev:2002he,Anticic:2003ux,Adams:2003fy,Adams:2006ke,Abelev:2007xp,Antinori:2010jm,ABELEV:2013zaa}.
The ALICE Collaboration has recently reported on the multiplicity dependence of the production of primary strange and multi-strange hadrons in pp collisions~\cite{ALICE:2017jyt}. Similar measurements have been previously performed by ALICE in p--Pb\xspace collisions~\cite{Abelev:2013haa,Adam:2015vsf} and in Pb--Pb\xspace collisions~\cite{Abelev:2013xaa,ABELEV:2013zaa}. Figure~\ref{fig:strangeness:enhancement:ratios} shows the measured ratios of the \ensuremath{p_{\rm T}}\xspace-integrated yields of \ensuremath{{\rm K}^{0}_{S}}\xspace, \ensuremath{\Lambda}\xspace, \ensuremath{\Xi}\xspace and \ensuremath{\Omega}\xspace to the pion yield as a function of \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace in pp collisions~\cite{ALICE:2017jyt} compared to p--Pb\xspace and Pb--Pb\xspace results at the LHC~\cite{Abelev:2013haa,Adam:2015vsf,ABELEV:2013zaa}. A significant enhancement of strange to non-strange hadron production is observed with increasing charged-particle multiplicity in pp collisions. The behaviour observed in pp collisions resembles that of p--Pb\xspace collisions at a slightly lower centre-of-mass energy~\cite{Adam:2015vsf}, both in the values of the ratios and in their evolution with \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace, suggesting that strangeness production is driven by the event activity rather than by the initial-state collision system or energy. Figure~\ref{fig:strangeness:enhancement:bmratios} shows that the \ensuremath{p_{\rm T}}\xspace-integrated yield ratios \ensuremath{\Lambda}\xspace/\ensuremath{{\rm K}^{0}_{S}}\xspace and \ensuremath{\rm p}\xspace/\ensuremath{{\pi}}\xspace do not change significantly with multiplicity, demonstrating that the observed enhanced production rates of strange hadrons with respect to pions are not due to the difference in the hadron masses.
The results in Figures~\ref{fig:strangeness:enhancement:ratios} and~\ref{fig:strangeness:enhancement:bmratios} are compared to calculations from Monte Carlo models commonly used for pp collisions at the LHC: PYTHIA8~\cite{Sjostrand:2007gs}, EPOS LHC~\cite{Pierog:2013ria} and DIPSY~\cite{Flensburg:2011kk,Bierlich:2014xba,Bierlich:2015rha}. The observation of a multiplicity-dependent enhancement of the production of strange hadrons along with the constant production of protons relative to pions cannot be simultaneously reproduced by any of the Monte Carlo event generators commonly used at the LHC. \section{Introduction}\label{sec:introduction} There was a time when small collision systems, like proton-proton and proton-nucleus, were almost exclusively employed in heavy-ion physics as a reference to compute the nuclear modification factors of particle production and as a proxy to disentangle initial state, cold nuclear matter effects (i.e. due to the nuclear PDF of the target/projectile) from genuine final state, hot nuclear matter effects. As a matter of fact, results based on the analysis of particle production in small systems provided the evidence that the characteristic phenomena observed in heavy-ion collisions, most notably the high-\ensuremath{p_{\rm T}}\xspace and jet suppression~\cite{Adler:2003ii,Adams:2003im}, were actually due to the formation of a hot state of QCD matter, the Quark-Gluon Plasma. With the advent of the LHC, pioneering studies of pp collisions in high-multiplicity events revealed the presence of unexpected phenomena in small systems. The discovery of the ``ridge'' in pp collisions by the CMS Collaboration was the first of a long list of such results~\cite{Khachatryan:2010gv}.
Long-range, near-side angular correlations in particle production emerged in pp and subsequently in p--Pb\xspace collisions~\cite{CMS:2012qk} and paved the way for a systematic investigation of the existence of collective phenomena, long known in heavy-ion collisions~\cite{Abelev:2009af}, in the much smaller pp and p--Pb\xspace collision systems. A wealth of new, unexpected phenomena have been observed so far with striking similarities to heavy-ion phenomenology. In this report, only a few of them can be discussed because of the lack of space. The focus will be given to a selection of recent LHC results aiming at studying collective phenomena and the chemistry of hadron production in small systems. \subsection{Multi-particle correlations}\label{sec:collective:multiparticle} A fundamental question is whether the two-particle azimuthal correlation structures observed at large relative pseudorapidity in pp and p--Pb\xspace collisions result from correlations exclusively between particle pairs, or from a genuine, collective multi-particle effect. A strong hint of collective multi-particle correlations in pp and p--Pb\xspace collisions, originally presented as evidence by the CMS Collaboration, has already been reported~\cite{Khachatryan:2015waa,Khachatryan:2016txc}. Figure~\ref{fig:collective:multiparticle:cms} shows the second-order azimuthal anisotropy Fourier harmonics $v_{2}$ measured in pp, p--Pb\xspace and Pb--Pb\xspace collisions over a wide pseudorapidity range based on correlations among up to eight particles. The $v_{2}$ values obtained with correlations among more than four particles are consistent with four-particle results and with comparable magnitude to those from two-particle correlations. These observations support the interpretation of a collective origin for the observed long-range correlations in high-multiplicity pp and p--Pb\xspace collisions. There is, however, a caveat that must be taken seriously into account.
While multi-particle correlation measurements have the advantage of suppressing short-range two-particle correlations such as jets and resonance decays, they are not totally insensitive to such or other so-called ``non-flow'' effects. This has been stressed in a publication by the ATLAS Collaboration~\cite{Aaboud:2017acw}, where it is shown how different multiplicity selection methods may yield different four-particle cumulants ($c_{2}\{4\}$) results. The measurements of $c_{2}\{4\}$ are shown in Figure~\ref{fig:collective:multiparticle:atlas} for pp and p--Pb\xspace collisions. The results confirm the evidence for collective phenomena in p--Pb\xspace and low-multiplicity Pb--Pb\xspace collisions (not reported in Figure~\ref{fig:collective:multiparticle:atlas}). On the other hand, the pp results for $c_{2}\{4\}$ do not demonstrate collective behaviour, indicating that they may be biased by contributions from non-flow correlations. Reliably suppressing non-flow correlations in pp collisions is a central issue that needs to be solved in order to unveil the actual underlying collectivity. Fortunately, several new methods have been recently developed which aim at tackling this issue. As an example, the ATLAS Collaboration has reported on the measurement of multi-particle correlations in pp and p--Pb\xspace collisions with the so-called ``two-subevent'' and the ``three-subevent'' methods~\cite{ATLAS:2017tqk}. Figure~\ref{fig:collective:multiparticle:atlas2} shows $c_{2}\{4\}$ measured in pp and p--Pb\xspace collisions with such methods. $c_{2}\{4\}$ from the standard method is sensitive to the choice of particles used to form the event classes. The sensitivity is greatly reduced in the two-subevent method and is almost fully removed in the three-subevent method, suggesting that the three-subevent method is much more robust against non-flow effects. The three-subevent method shows significant flow in pp collisions in a broad $N_{\rm ch}$ range.
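For concreteness, the standard (single-event-class) construction of $c_{2}\{4\}$ from $Q$-vectors can be sketched on toy events containing pure flow and no non-flow, in which case $v_{2}\{4\} = (-c_{2}\{4\})^{1/4}$ should recover the input $v_{2}$. The formulas follow the usual direct-cumulant prescription; the event sample is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
V2, M, N_EV = 0.15, 100, 3000     # input flow, multiplicity, events (toy values)

def flow_event(v2, m):
    """Azimuthal angles drawn from dN/dphi ~ 1 + 2 v2 cos(2(phi - Psi))."""
    psi = rng.uniform(0.0, 2.0 * np.pi)
    phis = np.empty(0)
    while phis.size < m:              # rejection sampling against a flat envelope
        cand = rng.uniform(0.0, 2.0 * np.pi, 2 * m)
        keep = rng.uniform(0.0, 1.0 + 2.0 * v2, 2 * m) \
               < 1.0 + 2.0 * v2 * np.cos(2.0 * (cand - psi))
        phis = np.concatenate([phis, cand[keep]])
    return phis[:m]

num2 = num4 = den2 = den4 = 0.0
for _ in range(N_EV):
    phi = flow_event(V2, M)
    Q2, Q4 = np.exp(2j * phi).sum(), np.exp(4j * phi).sum()
    num2 += abs(Q2)**2 - M                          # <2> numerator, weight M(M-1)
    den2 += M * (M - 1)
    num4 += (abs(Q2)**4 + abs(Q4)**2 - 2.0 * (Q4 * np.conj(Q2)**2).real
             - 4.0 * (M - 2) * abs(Q2)**2 + 2.0 * M * (M - 3))
    den4 += M * (M - 1) * (M - 2) * (M - 3)

avg2, avg4 = num2 / den2, num4 / den4
c24 = avg4 - 2.0 * avg2**2        # four-particle cumulant
v2_4 = (-c24) ** 0.25             # equals the input v2 for pure flow
print(c24, v2_4)
```

With short-range correlations added to the toy events, $c_{2}\{4\}$ would acquire a non-flow bias, which is precisely what the subevent methods are designed to suppress.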
\subsection{Identified hadron spectra}\label{sec:collective:spectra} The \ensuremath{p_{\rm T}}\xspace distributions of identified hadrons are another tool to probe the collective behaviour of particle production. If collective radial flow develops, this would result in a characteristic dependence of the shape of the transverse momentum distribution on the particle mass. The \ensuremath{p_{\rm T}}\xspace distributions in pp and p--Pb\xspace collisions show a clear evolution, becoming harder as the multiplicity increases, with a change which is most pronounced for protons and lambdas. The stronger multiplicity dependence of the spectral shapes of heavier particles is evident when looking at the \ensuremath{p_{\rm T}}\xspace-dependent ratios, such as the \ensuremath{\Lambda}\xspace/\ensuremath{{\rm K}^{0}_{S}}\xspace ratio measured by the CMS Collaboration~\cite{Khachatryan:2016yru}, shown in Figure~\ref{fig:collective:spectra:cms}. The ratios in pp and p--Pb\xspace collisions show a significant enhancement at intermediate \ensuremath{p_{\rm T}}\xspace ($\sim$3~GeV/$c$\xspace), quantitatively similar to that measured in Pb--Pb\xspace collisions at similar multiplicity. Results from the ALICE Collaboration show that the modification of the \ensuremath{p_{\rm T}}\xspace-dependent ratios of \ensuremath{\rm p}\xspace/\ensuremath{{\pi}}\xspace and of \ensuremath{\Lambda}\xspace/\ensuremath{{\rm K}^{0}_{S}}\xspace follows a rather smooth evolution at low and mid \ensuremath{p_{\rm T}}\xspace across different systems, pointing towards a universal soft mechanism driven by final state \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace~\cite{Abelev:2013haa}. The high-\ensuremath{p_{\rm T}}\xspace part of the ratio, which is dominated by hard fragmentation, is unchanged. The Blast-Wave model~\cite{Schnedermann:1993ws} is a useful tool to characterize the spectral shapes of identified hadrons and test data against the radial flow picture.
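A minimal numerical sketch of such a characterization, assuming the usual Boltzmann-Gibbs blast-wave spectral shape with a linear transverse velocity profile (the freeze-out parameters below are illustrative, not fitted values), shows the mass ordering induced by a common radial flow:

```python
import numpy as np
from scipy.special import i0, k1
from scipy.integrate import quad

def blastwave(pT, mass, T_kin, beta_s, n=1.0, R=1.0):
    """Blast-wave shape dN/(pT dpT), arbitrary normalization:
    mT * int r dr I0(pT sinh(rho)/T) K1(mT cosh(rho)/T),
    with flow rapidity rho = atanh(beta_s (r/R)^n)."""
    mT = float(np.hypot(pT, mass))
    def integrand(r):
        rho = np.arctanh(beta_s * (r / R)**n)
        return r * mT * i0(pT * np.sinh(rho) / T_kin) * k1(mT * np.cosh(rho) / T_kin)
    return quad(integrand, 0.0, R)[0]

def mean_pT(mass, T_kin, beta_s):
    """Mean pT of the spectrum, truncated at 3 GeV/c."""
    pT = np.linspace(0.01, 3.0, 150)
    dNdpT = np.array([blastwave(p, mass, T_kin, beta_s) for p in pT]) * pT
    return float((pT * dNdpT).sum() / dNdpT.sum())

# Masses in GeV (pi, K, p); illustrative T_kin and surface velocity
means = {m: mean_pT(m, T_kin=0.12, beta_s=0.8) for m in (0.140, 0.494, 0.938)}
print(means)   # mean pT grows with mass: the radial-flow mass ordering
```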
A simultaneous fit to pions, kaons and protons is performed by the ALICE Collaboration following the approach discussed in~\cite{Abelev:2013haa}. The best fit describes the data with good accuracy in the low-\ensuremath{p_{\rm T}}\xspace domain, supporting the picture of particle production from a thermal source expanding with a common transverse velocity. The results of the fit to pp, p--Pb\xspace and Pb--Pb\xspace data are shown in Figure~\ref{fig:collective:spectra:bw} as a function of \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace. In the context of the Blast-Wave model, when comparing the parameters of different systems at similar \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace, \ensuremath{T_{\rm kin}}\xspace values are similar, whereas \ensuremath{\beta_{\rm T}}\xspace is larger for small systems. This might be an indication of a larger radial flow in small systems as a consequence of stronger pressure gradients. The ALICE results also show that both \ensuremath{T_{\rm kin}}\xspace and \ensuremath{\beta_{\rm T}}\xspace parameters obtained from pp and p--Pb\xspace fits are consistent for similar \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace. One has to acknowledge that a different conclusion emerges from similar fits reported by CMS~\cite{Khachatryan:2016yru}, where pp collisions appear more explosive than p--Pb\xspace collisions, with a larger \ensuremath{\beta_{\rm T}}\xspace for similar \ensuremath{{\rm d}N_{\rm ch}/{\rm d}\eta}\xspace. \section{Strangeness production}\label{sec:strangeness} The study of the production of strange hadrons in high-energy hadronic interactions provides an important means to investigate the properties of QCD. Unlike up ($u$) and down ($d$) quarks, which form ordinary matter, strange ($s$) quarks are not present as valence quarks in the initial state. Yet they are sufficiently light to be abundantly created by non-perturbative (soft) processes in the course of the collisions.
String fragmentation, as an example of non-perturbative QCD process, dominates the production of strange hadrons at low \ensuremath{p_{\rm T}}\xspace. As a matter of fact, given that the mass of the strange quark is larger than that of the up and down quarks, production of strange hadrons in fragmentation is suppressed relative to hadrons containing only light quarks. \input{enhancement} \input{canonical}
\section{Shear cell design} The shear cell is composed of two parallel microscope slides, which are sand-blasted (rms roughness $1\um$) to prevent slipping, except for a small window of diameter $\approx 2~\mathrm{mm}$ to optically probe the sample. For light scattering experiments, the spacing between the two slides is controlled by three stainless steel, high-precision ball bearings. The ball bearings are embedded in a custom-made rectangular frame of polydimethylsiloxane (PDMS), which contains the sample and avoids solvent evaporation. The PDMS frame is prepared by mixing two fluid components (a base and a cross-linker) with a mass-ratio 50:1, yielding a material with an elastic modulus of about 10 kPa. For microscopy observations, the two microscope slides are separated by a $250~\mu\mathrm{m}$ thick, $16 \times 16 \mathrm{mm}^2$ double-adhesive gene frame (Thermo Scientific), which acts as a spacer and avoids evaporation. A motor is used to displace one of the plates along the $x$ direction by an amount $\delta$, thereby imposing a strain $\gamma = \delta / e$, with $e$ the sample thickness. The motor speed during the displacement is $0.05~\mathrm{mm}~\mathrm{s}^{-1}$. For both light scattering and microscopy, we determine the thickness of the sample chamber by measuring $e\mathrm{_{obj}}$, the vertical displacement of the microscope objective when focusing the upper and lower plates, respectively, using a $20\times$ air objective. The sample thickness is obtained as $e = n e\mathrm{_{obj}}$, where $n = 1.38$ is the sample refractive index. We measure $e\mathrm{_{obj}}$ at several locations separated by 1 cm, finding no difference to within the measurement uncertainty.
This implies that the maximum deviation from parallelism over 1 cm is smaller than the measurement uncertainty ($\le 8~\mu\mathrm{m}$), corresponding to less than $5\times 10^{-4}e$ (respectively, $1.3\times 10^{-3}e$) over the region sampled by light scattering (respectively, microscopy). \section{Light scattering measurements on a purely elastic sample cyclically sheared} We test our light scattering set-up by measuring the correlation function $g_2 - 1$ for a sample whose mechanical response is purely elastic, a transparent PDMS elastomer, seeded with copper particles of diameter $3~\mu\mathrm{m}$. Here, the scattering signal is dominated by the contribution of the particles, whose microscopic configuration is essentially frozen due to the stiffness of the elastomer, whose elastic modulus is $G' \approx 500~\mathrm{kPa}$. We impose a cyclic shear deformation of amplitude $\gamma = 4.6$ \%. The inset of fig.~SM1 shows $\beta^{-1}(g_2-1)$ at short time lags, $\tau$: when $\tau$ corresponds to an odd number of half-cycles, the correlation drops to zero, due to the relative motion of the scatterers associated with the affine displacement field induced by the applied strain. For a delay time equal to an integer number of cycles, a high correlation level is recovered, an echo effect similar to that reported previously in concentrated emulsions and colloidal glasses~\cite{Hebraud1997,PetekidisPRE2002}. In the inset of Fig.~SM1 the height of the echoes is unity, indicating that the scatterers have recovered exactly the initial microscopic configuration, as expected for tracer particles embedded in a purely elastic matrix, whose deformation is fully reversible. In the main figure, only data for integer values of $\tau$ are plotted, but for a very extended range of delay $\tau$. No significant loss of correlation is observed over $2000$ cycles, thus demonstrating that the setup is stable enough for the dynamics to be reliably probed over thousands of cycles.
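The echo mechanism itself is easy to reproduce in a toy calculation with purely affine, reversible displacements (hypothetical scatterer configuration and wavevector, not our actual optics):

```python
import numpy as np

rng = np.random.default_rng(4)

# Frozen scatterers in an affinely sheared slab: a strain gamma displaces a
# scatterer at depth z by gamma*z along x, shifting its phase by q*gamma*z.
N, q, gamma0, e = 2000, 20.0, 0.046, 250.0   # q in um^-1 (illustrative), e in um
z = rng.uniform(0.0, e, N)                   # scatterer depths

def g2_minus_1(n_half_cycles):
    """beta-normalized g2 - 1 between gamma = 0 and the strain reached after
    n half-cycles of a cyclic shear of amplitude gamma0."""
    gamma_t = gamma0 * (n_half_cycles % 2)   # 0 at full cycles, gamma0 at half
    dphi = q * gamma_t * z                   # affine phase shifts
    return abs(np.exp(1j * dphi).mean()) ** 2

print(g2_minus_1(1))   # ~ 0: decorrelated at odd half-cycles
print(g2_minus_1(2))   # 1: full echo at integer cycles (reversible motion)
```

Any irreversible rearrangement would reduce the echo height below unity, which is exactly the signal exploited in the main text.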
\begin{figure} \includegraphics[width=10cm]{figSM1} \begin{center} \noindent \textbf{Figure SM1:} (Color online) Intensity correlation functions for an elastic PDMS elastomer. Main figure: data for integer delays $\tau$. Inset: zoom on the behavior of $\beta^{-1}(g_2-1)$ at small $\tau$. Data for both half-integer and integer delays are shown. \label{fig:SM1} \end{center} \end{figure} \section{Strain-dependence of the viscoelasticity of a colloidal polycrystal} \begin{figure}[h!] \includegraphics[width=10cm]{figSM2} \begin{center} \noindent \textbf{Figure SM2:} (Color online) Strain dependence of the storage modulus, $G'$ (green triangles), and the loss modulus, $G''$ (red circles), as measured by standard oscillatory rheology, at a fixed frequency, $f=0.5$ Hz. Same sample as for the light scattering experiments (micellar polycrystal doped with silica nanoparticles of diameter 30 nm, at a volume fraction $\varphi =1\%$). The vertical lines indicate the five strain amplitudes used in the light scattering experiments discussed in the main text. \label{fig:SM2} \end{center} \end{figure} Figure SM2 shows the strain dependence of the elastic and loss moduli measured by oscillatory rheology, for the colloidal polycrystal investigated by light scattering in the main text. The vertical lines indicate the strain amplitudes in the light scattering experiments: they correspond to an intermediate regime beyond the linear regime (which ends at $\gamma \approx 0.3\%$), but before fluidization occurs, for $\gamma \gtrsim 6 \%$. \\ \section{Probability distribution function of the scatterers' velocity} We show here that in the asymptotic, stationary regime the grain boundaries undergo ballistic motion and that the probability distribution function (PDF) of their velocity is a Levy stable law~\cite{bouchaud90}.
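This result, derived below, can be previewed numerically: the inverse Fourier transform of a compressed exponential yields a flat-topped distribution with a $|V_x|^{-p-1}$ power-law tail. The sketch uses the asymptotic-regime fit values quoted in Table SM4 for $\gamma = 4.6\%$; the integration settings are arbitrary numerical choices.

```python
import numpy as np
from scipy.integrate import trapezoid

def levy_pdf(v, p, v0, u_max=20.0, n_u=40001):
    """P_V(V_x) as the inverse Fourier transform of exp[-(u v0)^p];
    the PDF is symmetric, so only the cosine part contributes."""
    u = np.linspace(0.0, u_max / v0, n_u)      # u = q*tau, conjugate to V_x
    f = np.exp(-(u * v0) ** p)
    return trapezoid(f * np.cos(np.outer(v, u)), u, axis=1) / np.pi

p, v0 = 1.54, 5.64e-4          # Table SM4 values for gamma = 4.6% (um/cycle)
v = np.array([0.0, v0, 10 * v0, 30 * v0])
P = levy_pdf(v, p, v0)
print(P[1] / P[0])                         # order 1: flat core below v0
print(P[3] / P[2], 3.0 ** (-p - 1))        # tail ratio ~ (30/10)^(-p-1)
```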
In our dynamic light scattering (DLS) experiments we measure the intensity correlation function $g_2(q,\tau)-1$ (see Eq.~(1) of the main text), which is related to the intermediate scattering function $f(q,\tau)$ (also known as the dynamic structure factor) by $$g_2(\mathbf{q},\tau)-1 = \beta f(\mathbf{q},\tau)^2 \,,$$ with \begin{equation} \label{eqn:fq} f(\mathbf{q},\tau) = \frac{1}{N}\left < \sum_{j,k=1}^N \exp \left \{i\mathbf{q}\cdot[\mathbf{r}_j(0)-\mathbf{r}_k(\tau)]\right\} \right > \,, \end{equation} where $N$ is the number of scatterers, $\mathbf{r}_j$ the time-dependent position of the $j$-th scatterer, the brackets denote an ensemble average, and the factor $\beta$ is an instrumental constant $\lesssim 1$~\cite{Berne1976}. As discussed in the main text, we find that in the stationary regime correlation functions measured at different $q$ all collapse on a master curve when plotted against time scaled by $q \equiv |\mathbf{q}_x|$, for $q \ge q_\mathrm{c}$. This implies that $f(q,\tau)$ does not depend on $q$ and $\tau$ separately, but rather on the product $u = q \tau$. Recalling the compressed exponential shape of $g_2-1$, one has $f(q,\tau) = f(u) = \exp[-(uV_{0,x})^p]$, where $V_{0,x}$ represents the modulus of a characteristic velocity related to the relaxation time $\tau_R$ introduced in the main text by \begin{equation} \label{eq:V0} V_{0,x} = \frac{\frac{2^{-\frac{1}{p}}}{p}\Gamma\left (\frac{1}{p} \right )}{q\tau_R}\,, \end{equation} with $\Gamma$ the gamma function. \begin{figure}[h!] \includegraphics[width=10cm]{figSM3} \begin{center} \noindent \textbf{Figure SM3:} (Color online) Probability distribution function of the $x$ component of the velocity of the grain boundaries in the asymptotic, stationary regime, for $\gamma = 4.6\%$. The PDF is obtained by Fourier transforming the compressed exponential fit to the data shown in the inset of Fig. 2b of the main text, see Eq.~(\ref{eq:pdf}) above.
The distribution function is essentially flat up to the characteristic velocity $V_{0,x}$, while it decays as a power law for large $|V_x|$, with an exponent $-p-1$ directly related to the compressing exponent $p = 1.54$ of the fit to $g_2-1$. $V_{0,x}= 5.54\times 10^{-4}~\mu\mathrm{m}~\mathrm{cycle}^{-1}$ is related to the $q$-dependent decay time of $g_2-1$, $\tau_R$, by Eq.~(\ref{eq:V0}). \label{fig:SM3} \end{center} \end{figure} Under these conditions, by following~\cite{Berne1976,LucaFaraday2003} one finds that the ensemble average in Eq.~(\ref{eqn:fq}) can be recast as an average over the probability distribution function of the $x$ component of the scatterers' velocity, $P_V(V_x)$, yielding: \begin{equation} \label{eq:fu} f(u) = \int \mathbf{d}V_x P_V(V_x)\exp(-iuV_x) \,. \end{equation} By taking the inverse Fourier transform of Eq.~(\ref{eq:fu}), one finds \begin{equation} \label{eq:pdf} P_V(V_x) \propto \int \mathbf{d}u f(u)\exp(iuV_x)= \int \mathbf{d}u \exp[-(uV_{0,x})^p] \exp(iuV_x) \,. \end{equation} The last term on the r.h.s. of Eq.~(\ref{eq:pdf}) is the integral representation of the Levy stable law $L_{p,0}$~\cite{LucaFaraday2003,bouchaud90}. The Levy PDF is characterized by a flat distribution for $|V_x| \lesssim V_{0,x}$, followed by a power-law tail, $P_V(|V_x|) \sim |V_x|^{-p-1}$~\cite{bouchaud90}. We show in Fig. SM3 the PDF obtained by numerical integration of Eq.~(\ref{eq:pdf}), for the dynamics measured in the asymptotic regime shown in Fig. 2b of the main text ($\gamma = 4.6\%$, $p = 1.54$, $V_{0,x} = 5.54\times 10^{-4}~\mu\mathrm{m}~\mathrm{cycle}^{-1}$). \begin{table} [h!]
\begin{center} \begin{tabular}{l|l|l} \multicolumn{1}{c|}{$\gamma$ (\%)} & \multicolumn{1}{c|}{$p$} & \multicolumn{1}{c}{$V_{0,x}~\mathrm{(}\mu\mathrm{m}~\mathrm{cycle}^{-1}\mathrm{)}$} \\ \hline 1.5 & 1.66 & $3.52 \times 10^{-4}$\\ 2.5 & 1.76 & $2.82 \times 10^{-4}$\\ 3.5 & 1.65 & $2.55 \times 10^{-4}$\\ 4.6 & 1.54 & $5.64 \times 10^{-4}$\\ 5.2 & 1.53 & $6.54 \times 10^{-4}$\\ \end{tabular} \end{center} \vspace{0.1 cm} {\noindent \textbf {Table SM4:} Compressing exponent $p$ governing the power-law tail of the velocity distribution of the grain boundaries and characteristic velocity obtained from the fits to the intensity correlation functions $g_2-1$ in the asymptotic, stationary regime, for the five values of the applied strain amplitude.} \end{table} Table SM4 summarizes the parameters of the Levy distributions of $V_x$ for all our experiments. We find that the $p$ exponent governing the slope of the tail of $P_V$ varies only slightly with the applied strain, $\gamma$. The variation of the characteristic velocity is more pronounced: the general trend is for $V_{0,x}$ to increase with $\gamma$, albeit with some scatter in the data. The order of magnitude of $V_{0,x}$ is $5 \times 10^{-4}\mu\mathrm{m}~\mathrm{cycle}^{-1}$. This is close to $V_{0,x} \sim 10^{-3}\mu\mathrm{m}~\mathrm{cycle}^{-1}$, as obtained by analyzing in real space the grain boundary displacement for a sample with a slightly larger grain size, as discussed in Sec.~\ref{sec:cm_vs_dls} below. \section{Comparison between light scattering and microscopy measurements} \label{sec:cm_vs_dls} \begin{figure} \includegraphics[width=12.5cm]{figSM5} \begin{center} \noindent \textbf{Figure SM5:} (Color online) a) - d): zoom into representative GB trajectories, from Fig. 1c of the main text. The trajectories are obtained by measuring the GB position at $t = 1, 112, 2617, 3130$, and 3711 shear cycles.
e): GB displacement with respect to the position at $t=1$, for the trajectories shown in a)-d), as indicated by the labels. The angle $\alpha$ between consecutive segments of a trajectory is also shown for two segments of trajectory c). f): probability distribution function of $\alpha$. The PDF is strongly peaked around $\alpha = 0$, implying that the GB trajectories are close to straight lines, a behavior suggestive of ballistic motion and incompatible with diffusion, for which $\alpha$ would be evenly distributed (dotted line). The labels indicate the average and the standard deviation of $\alpha$. \label{fig:SM5} \end{center} \end{figure} To provide additional support for the analysis of the grain boundary motion performed on the light scattering data, we also measure the GB displacement in microscopy experiments. Figures SM5a-d zoom into some of the trajectories shown in Fig. 1c of the main text. The trajectories are obtained by measuring the position of representative GBs at times $t = 1, 112, 2617, 3130$, and 3711 cycles. The trajectories are overlaid on the images of the polycrystal taken at $t=1$ (red) and $3711$ (blue). The images at intermediate times are not shown for clarity. For the same GBs, the displacement $(\Delta x,\Delta y)$ with respect to the position at $t=1$ is shown in Fig. SM5e. Clearly, the trajectories are close to straight lines, a behavior incompatible with random motion. To quantify the tendency of the GBs to move along straight lines, we calculate the angle $\alpha$ between successive segments of the trajectories, as exemplified in Fig. SM5e. Figure SM5f shows the PDF of $\alpha$, obtained from all the trajectory segments at all the locations shown in Fig. 1c of the main text, \textit{i.e.} 52 segments from 13 different trajectories. 
The PDF is strongly peaked around $\alpha = 0$, confirming that the trajectories are incompatible with diffusive motion and are rather suggestive of straight-line displacements as in ballistic motion, consistent with the DLS results. \begin{figure}[h!] \includegraphics[width=9cm]{figSM6} \begin{center} \noindent \textbf{Figure SM6:} (Color online) Cumulative distribution function of the velocity components $V_\alpha$ of the grain boundaries in the asymptotic, stationary regime ($\alpha = x$ (resp., $y$) for the component parallel (resp., perpendicular) to the direction of the applied strain). Symbols (labeled CM): data obtained by analyzing the confocal microscopy trajectories shown in Fig. 1c of the main text ($\gamma = 3.6\%$). Lines: data obtained by light scattering for the five strain amplitudes reported in the main text. \label{fig:SM6} \end{center} \end{figure} We measure the GB velocity over two time intervals, $t\in [2617-3130]$ and $t\in [3130-3711]$. We find comparable average velocities, consistent with the notion that the sample dynamics become stationary at large $t$, as seen by DLS. We calculate the cumulative distribution of the GB velocity, using all trajectories and both time intervals, and compare it to the cumulative velocity distributions obtained from the DLS data analysis. The results are shown in Fig. SM6 for the components of the velocity parallel and perpendicular to the shear direction. Several comments are in order. First, there is an overall good agreement between the microscopy and DLS data. This agreement is particularly remarkable given that the sample composition (nanoparticle kind, size and concentration) has been separately optimized according to the specific requirements of each experiment. As a consequence, the grain size --although of the same order of magnitude-- is not identical in the samples used for microscopy and light scattering (see Figs. 1b and 1d in the main text). 
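The cumulative distributions of Fig. SM6 are standard empirical CDFs; a minimal sketch of the construction (NumPy assumed, function name illustrative):

```python
import numpy as np

def empirical_cdf(samples):
    """Return sorted sample values v and F(v) = fraction of samples <= v."""
    v = np.sort(np.asarray(samples, dtype=float))
    F = np.arange(1, v.size + 1) / v.size
    return v, F
```

The same routine is applied to the velocity components from both time intervals, pooled over all trajectories.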
Second, the microscopy data measured for the $x$ and $y$ directions overlap, thus indicating that plasticity is essentially isotropic. Third, the order of magnitude of the GB velocity in the stationary regime is very small, as also seen in Figs. SM3 and SM5. This highlights a key requirement of our experiments, \textit{i.e.} sensitivity to small-scale motion. In this respect, light scattering is superior to microscopy: for the former, the smallest rms displacement that can be reliably measured is $\Delta r_{\mathrm{min}} \sim 0.1\um$, corresponding to a decay of 5\% of $g_2-1$ at the largest scattering vector. For microscopy, $\Delta r_{\mathrm{min}} \sim 0.34 \um$ (corresponding to 1 pixel), more than three times larger than for DLS. An additional advantage of light scattering is better statistics: for DLS, the probed volume is $V_{\mathrm{scatt}} \sim 1 ~\mathrm{mm}^3$, about 80 times larger than $V_{\mathrm{micro}} \sim 0.013~\mathrm{mm}^3$, the volume accessible to confocal microscopy. These advantages motivate our choice of DLS as the main quantitative probe of the GB dynamics.
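As a cross-check of the velocity analysis above, the numerical integration of Eq.~(\ref{eq:pdf}) can be reproduced with a few lines of code. The sketch below (NumPy assumed; reduced units $V_{0,x}=1$; tolerances are illustrative) verifies the two features quoted for the Levy law: a flat core for $|V_x|\lesssim V_{0,x}$ and a power-law tail with exponent $-p-1$:

```python
import numpy as np

def levy_pdf(V, p=1.54, V0=1.0, u_max=20.0, n=500_000):
    """Unnormalized P_V(V): the Fourier integral of exp[-(u*V0)^p], as in Eq. (pdf)."""
    u, du = np.linspace(0.0, u_max, n + 1, retstep=True)
    y = np.exp(-(u * V0) ** p) * np.cos(u * V)
    # trapezoidal rule; the integrand is negligible beyond u_max
    return (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * du
```

With $p=1.54$ the core is nearly flat, $P_V(0.5)/P_V(0)\approx 0.9$, and the log-log slope of the tail measured between $V=20$ and $V=40$ is close to $-(p+1)=-2.54$.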
\section{Introduction} One way of understanding D-branes is that they are solutions of the string field theory equation of motion. Different solutions of the string field theory equation of motion represent different two dimensional conformal field theory (CFT) backgrounds. Inspired by Schnabl's analytic construction of the solution of the open string field theory (OSFT) equation of motion representing the tachyon vacuum \cite{Schnabl05} (see \cite{Okawa1,Ellwood:2006ba,RZ06,ORZ,Fuchs2,Fuchs0,Fuchs1,Bonora:2007tm,Erler:2006hw,Erler:2006ww,Imbimbo:2006tz} for more development on this), recently several more solutions have been obtained \cite{Schnabl:2007az,KORZ,Fuchs3,Okawa:2007ri,Okawa3,Fuchs:2007gw,Kiermaier:2007ki} for both bosonic and supersymmetric OSFT. These new solutions describe conformal field theories deformed by exactly marginal operators. Among them are the solutions representing the CFT of lower-dimensional D-branes. It is a well established fact \cite{Sen:1999mh,Recknagel:1998ih,Callan:1994ub,Polchinski:1994my,Callan:1993mw,Kogetsu:2004te} that the boundary conformal field theory (BCFT) describing a $Dp$--brane in bosonic string theory is identical to that of a $D(p+1)$--brane deformed by an exactly marginal boundary operator. More precisely, one can deform the former BCFT into the latter by adding an exactly marginal boundary term \begin{eqnarray} \delta S_{ws}={\tilde\lambda}\int dt\sqrt{2}{\rm cos}(X(t)) \end{eqnarray} to the world-sheet action, where $X$ is the direction transverse to the $Dp$--brane, $t$ is a coordinate on the world-sheet boundary and ${\tilde\lambda}$ is a free parameter. At ${\tilde\lambda}=\pm{1\over 2}$ this deformation produces a periodic array of $Dp$--branes, with Dirichlet boundary conditions on $X$, placed at $x=(2n+1)\pi$ if we choose the plus sign and at $x=2n\pi$ if we choose the minus sign. 
An alternative description of marginal deformations in the framework of string field theory was considered in \cite{Sen:1990hh,Sen:2000hx,Takahashi:2002ez,Kluson:2002av,Kluson:2003xu,Sen:2004cq}. It was shown that switching on a boundary marginal deformation operator gives rise to a string field theory configuration corresponding to a new classical solution of the equation of motion of OSFT formulated around the original undeformed BCFT. In these investigations, mainly the level truncation method in the Siegel gauge was used, and switching on the marginal boundary operator on the world-sheet was interpreted as giving vacuum expectation values (vev) to the fields associated with the tachyonic and the massless open string modes. The rolling tachyon solution by Sen \cite{Sen:2002nu} is the best example of such a description of marginal deformations in the string field theory framework, where the vev was turned on only for the tachyonic mode. A recent construction of analytic solutions for marginal deformations in OSFT \cite{Schnabl:2007az,KORZ} used the recursive technique developed in \cite{Sen:2002nu} in a new gauge introduced by Schnabl in \cite{Schnabl05} ($B\Psi=0$), where $B$ is the antighost zero mode in the conformal frame of the sliver. The ansatz for the solutions was given by a series expansion in a parameter $\lambda$, which to first order can be identified with the coupling constant ${\tilde\lambda}$ of the exactly marginal operator mentioned above. One can then solve the equation of motion at each order of $\lambda$. These techniques were very effective for obtaining solutions generated by a marginal deformation operator $V(z)$ that has a regular OPE with itself. When the OPE is singular, divergences arise as the separation between boundary insertions approaches zero, and one needs to add counter terms at each order of $\lambda$ to regularize them. 
However, the form of the counter terms was obtained only up to third order, by a clever guess, and their form at higher orders is not known. One purpose of this paper is to study the origin of the divergences in the case of marginal deformations with singular OPE and to develop a method to determine the counter terms necessary to cancel the divergences at any order. In an earlier work \cite{KORZ} it was mentioned as an open problem that some of the counter terms violate the gauge condition even though the solutions were constructed to respect the gauge. In this paper we will demonstrate by explicit calculations that in the case of a marginal deformation with singular OPE, unlike the regular case, only a piece of the solution can respect the Schnabl gauge, so it is not surprising to have counter terms outside the gauge. The rest of the paper is organized as follows. In section 2 we will consider solutions with both regular and singular OPE. We will show that the main difference between these solutions is that the first one can be expanded only in terms of states with positive $L$ eigenvalue, where $L$ is the Virasoro operator $L_0$ in the conformal frame of the sliver, while the second one contains the eigenvalues $0$ and $-1$ as well. In the same section we will show that the divergences in the case of singular OPE arise from inverting the $L$ operator on zero eigenvalue states and from using the Schwinger representation of $L^{-1}$ on negative eigenvalue states. Knowing the origin of the divergences, we can easily determine the form of the counter terms to add at each level to regularize the solution. In section 3 we will use the procedure developed in section 2 to write a solution representing an array of D24--branes, obtained when an exactly marginal boundary deformation is turned on along the 25--th direction. In section 4 we will discuss our results. 
\section{The action of $B/L$ and the OPE of $V$} The linearized string field theory equation of motion ($Q_B\Psi=0$) is satisfied by the state $\Psi^{(1)}=cV(0)|0\rangle$ corresponding to the operator $cV(0)$, for any dimension one matter primary operator $V$. An ansatz for a new class of solutions of the non-linear equation of motion ($Q_B\Psi+\Psi\ast\Psi=0$) is made as an expansion in a parameter $\lambda$, \begin{eqnarray} \Psi_{\lambda}=\sum_{n=1}^{\infty}\lambda^{n}\Psi^{(n)}, \end{eqnarray} with $\Psi^{(n)}$ satisfying \begin{eqnarray} Q_B\Psi^{(n)}=\Phi^{(n)},~~~~~~n>1 \label{eom} \end{eqnarray} where $\Phi^{(n)}$ is BRST exact and is given by \begin{eqnarray} \Phi^{(n)}=-\sum_{k=1}^{n-1}\Psi^{(n-k)}\ast\Psi^{(k)}\label{phin} \end{eqnarray} If $\Psi$ is in the Schnabl gauge ($B\Psi_\lambda=0$) and there is no overlap between $\Phi^{(n)}$ and the kernel of $L$, the solution can be written as \begin{eqnarray} \Psi^{(n)}={B\over L}\Phi^{(n)}\label{psin} \end{eqnarray} Furthermore, if $\Phi^{(n)}$ does not contain states with negative $L$ eigenvalues we can write \begin{eqnarray} \Psi^{(n)}=\int_{0}^{\infty}dT~Be^{-TL}\Phi^{(n)}\label{sol1} \end{eqnarray} In this section we will show that such a solution is allowed only when $V(z)$ has a regular OPE with itself, while in the case of singular OPE only part of the solution can be written as in (\ref{sol1}). Here we notice that, if not for the action of $L^{-1}$, in (\ref{psin}) the operators would be inserted at finite distances from each other along the real axis of the conformal frame of the sliver and everything would be regular, even if the matter primary operator had a singular OPE with itself. However, the action of $L^{-1}$ deletes a strip of a certain width and makes the operators collide. Therefore, the origin of any singularity is the action of $L^{-1}$ on states of zero $L$ eigenvalue, or of its Schwinger representation on states of negative $L$ eigenvalue. We will see this in detail below. 
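The failure of the Schwinger representation on non-positive eigenvalues can already be illustrated on a single $L$ eigenstate: on an eigenvalue $l$, the cutoff integral $\int_0^{T_{max}}dT\,e^{-Tl}$ converges to $1/l$ only for $l>0$, while it grows without bound as the cutoff is removed for $l\le 0$. A minimal numerical sketch (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def schwinger_inverse(l, T_max, n=200_000):
    """Truncated Schwinger integral int_0^{T_max} exp(-T*l) dT  (-> 1/l for l > 0)."""
    T, dT = np.linspace(0.0, T_max, n + 1, retstep=True)
    y = np.exp(-T * l)
    return (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * dT  # trapezoidal rule
```

For $l>0$ the result is insensitive to the cutoff once $T_{max}\gg 1/l$; for $l=0$ it equals the cutoff itself, and for $l<0$ it diverges exponentially, mirroring the $e^{\Lambda}$ and $\Lambda$ terms encountered below.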
Let us begin with the regular OPE case, where \begin{eqnarray} \lim_{z_1\to z_2}V(z_1)V(z_2)={\rm regular} \end{eqnarray} Using this we can easily verify that the commutation relation for the modes of $V$ is \begin{eqnarray} [V_m,V_n]=\oint{dz_2\over 2\pi i}{\rm Res}_{z_1\to z_2}z_1^m z_2^n V(z_1)V(z_2)=0,~~~~~~\forall~ m,n \end{eqnarray} It is also true that for $m\ge 0$, $V_m|0\rangle=0$, as the conformal dimension of $V$ is one. We start our computation with the lowest level of (\ref{phin}), \begin{eqnarray} \Phi^{(2)}=-\Psi^{(1)}\ast\Psi^{(1)}=-cV(0)|0\rangle\ast cV(0)|0\rangle \end{eqnarray} In the conformal frame of the sliver \begin{eqnarray} \Phi^{(2)}=-{\tilde c}{\tilde V}(0)|0\rangle\ast {\tilde c}{\tilde V}(0)|0\rangle.\label{phi2} \end{eqnarray} Note that, as $cV$ is a primary operator of conformal dimension zero, there is no associated conformal factor in front. This star product can easily be carried out, as it is the simplest case of the star product of wedge states with insertions \begin{eqnarray} U_r^\dagger U_r{\tilde\phi_1}(z_1)|0\rangle\ast U_s^\dagger U_s{\tilde\phi_2}(z_2)|0\rangle=U_{r+s-1}^\dagger U_{r+s-1}{\tilde\phi_1}(z_1+{s-1\over 2}){\tilde\phi_2}(z_2-{r-1\over 2})|0\rangle\nonumber\\ \label{star1} \end{eqnarray} which we can write, after an obvious shift of coordinate (${\tilde z}_i\to{\tilde z}_i+{r-1\over 2}$), as \begin{eqnarray} U_r^\dagger U_r{\tilde\phi_1}(z_1)|0\rangle\ast U_s^\dagger U_s{\tilde\phi_2}(z_2)|0\rangle=U_{r+s-1}^\dagger U_{r+s-1}{\tilde\phi_1}(z_1+{r+s-2\over 2}){\tilde\phi_2}(z_2)|0\rangle\nonumber\\ \label{starprod} \end{eqnarray} where $U_r^\dagger U_r=e^{-{1\over 2}(r-2)L^+}$ with $L^+=L+L^\dagger$. If we have more than two states to star multiply, we use (\ref{star1}) associatively and perform the appropriate shift of coordinate at the end, as the shift we have just made is not associative. 
In our simple case, $r=s=2$, we find \begin{eqnarray} \Phi^{(2)}=-U_3^\dagger U_3{\tilde c}{\tilde V}\left(1\right) {\tilde c}{\tilde V}\left(0\right)|0\rangle. \end{eqnarray} Expanding both ${\tilde c}(z)$ and ${\tilde V}(z)$ in their oscillator modes we can write \begin{eqnarray} \Phi^{(2)}=-e^{-{1\over 2}L^+}\sum_{l}\sum_{m}{\tilde c}_l{\tilde c}_{1}{\tilde V}_m {\tilde V}_{-1} |0\rangle. \end{eqnarray} As all commutations and anticommutations of the oscillator modes appearing in this expression are trivial, the range of the indices will be \begin{eqnarray} \Phi^{(2)}=-\sum_{r=0}^{\infty}{1\over (-2)^r r!}(L^+)^r\sum_{l=-\infty}^{1}\sum_{m=-\infty}^{-1}{\tilde c}_l{\tilde c}_{1}{\tilde V}_m {\tilde V}_{-1} |0\rangle.\label{phi2r} \end{eqnarray} Here we notice that each term of this multiple sum is an eigenstate of $L$ with eigenvalue $l_0=r-(l+m)\ge 1$ for ($r,l,m$) in these ranges. Therefore, we conclude that if $V$ has a regular OPE with itself, there is no overlap between $\Phi^{(2)}$ and the kernel of $L$, and $\Phi^{(2)}$ does not contain any term with negative $L$ eigenvalue. For higher order $\Phi^{(n)}$ we will have a similar expression with more ${\tilde V}_m, {\tilde c}_l$ and $B^+=B+B^\dagger$ insertions. With $l,m$ still in the range given above, and noting that $B^+$ raises the $L$ eigenvalue by one, we see that the higher order $\Phi^{(n)}$ also do not contain negative or zero $L$ eigenvalues. Therefore, it is safe to invert $L$ or to use the Schwinger representation of $L^{-1}$ on $\Phi^{(n)}$ for all $n>1$ when $V$ has a regular OPE with itself. Next let us consider the case where $V$ has a singular OPE, in particular \begin{eqnarray} V(z_1)V(z_2)={1\over (z_1-z_2)^2}+{\rm regular}.\label{OPE} \end{eqnarray} The commutation relation and the action on the vacuum of the oscillator modes will be \begin{eqnarray} [V_m,V_n] =m\delta_{m,-n},~~~~~~~~~~ V_l|0\rangle=0,~~\forall~l\ge 0. 
\end{eqnarray} Therefore, unlike the case in equation (\ref{phi2r}) we can not drop all the positive modes of V and hence $\Phi^{(2)}$ is written as \begin{eqnarray} \Phi^{(2)}=-\sum_{r=0}^{\infty}{1\over (-2)^r r!}(L^+)^r\sum_{l=-\infty}^{1}\sum_{m=-\infty}^{1}{\tilde c}_l{\tilde c}_1{\tilde V}_m {\tilde V}_{-1} |0\rangle\label{phi2ir} \end{eqnarray} or \begin{eqnarray} \Phi^{(2)}&=&-\sum_{r=0}^{\infty}{1\over (-2)^r r!}(L^+)^r\sum_{l=-\infty}^{1}\sum_{m=-\infty}^{-1}{\tilde c}_l{\tilde c}_1{\tilde V}_m {\tilde V}_{-1} |0\rangle\nonumber\\ &-&\sum_{r=0}^{\infty}{1\over (-2)^r r!}(L^+)^r\sum_{l=-\infty}^{1}{\tilde c}_l{\tilde c}_1|0\rangle.\label{phi2ir2} \end{eqnarray} The first line is exactly what we have in the case of regular OPE and hence there is no ($l_0\le 0$) state in the first line. The $L$ eigenvalue of each term in the second line is ($l_0=r-(l+1)\ge -1$). Therefore, in this case there is an overlap between the kernel of $L$ and $\Phi^{(2)}$ and it contains negative $L$ eigenvalue terms as well. The only choices which give ($l_0=0$) are \begin{eqnarray} (r=0,l=-1),~~~~ (r=1,l=0) \end{eqnarray} and the only one which gives ($l_0=-1$) is \begin{eqnarray} (r=0,l=0) \end{eqnarray} The $(r=0,l=-1)$ case is ruled out by twist symmetry \cite{Schnabl05}, therefore, $\Phi^{(2)}$ can be written as \begin{eqnarray} \Phi^{(2)}&=&-\sum_{r=0}^{\infty}{1\over (-2)^r r!}(L^+)^r\sum_{l=-\infty}^{1}\sum_{m=-\infty}^{-1}{\tilde c}_l{\tilde c}_1{\tilde V}_m {\tilde V}_{-1} |0\rangle\nonumber\\ &-&\sum_{r'}{1\over (-2)^{r'} r'!}(L^+)^{r'}\sum_{l'}{\tilde c}_{l'}{\tilde c}_{1}|0\rangle\nonumber\\ &+&\left(-{\tilde c}_0{\tilde c}_1|0\rangle+{1\over 2}L^+{\tilde c}_0{\tilde c}_1|0\rangle\right) \end{eqnarray} where the primed indices are the corresponding unprimed indices without the cases which give ($l_0=0$) or ($l_0=-1$). 
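The mode algebra used here follows from the double contour integral of the OPE (\ref{OPE}), $[V_m,V_n]=\oint{dz_2\over 2\pi i}{\rm Res}_{z_1\to z_2}\,z_1^m z_2^n V(z_1)V(z_2)$; for concrete mode numbers the singular part $1/(z_1-z_2)^2$ indeed reproduces $m\delta_{m,-n}$. A small symbolic check (SymPy assumed; the function name is illustrative):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def mode_commutator(m, n):
    """[V_m, V_n] from the singular part of the OPE, V(z1)V(z2) ~ 1/(z1 - z2)**2."""
    integrand = z1**m * z2**n / (z1 - z2)**2
    inner = sp.residue(integrand, z1, z2)   # residue as z1 -> z2
    return sp.residue(inner, z2, 0)         # remaining z2 contour integral
```

The regular part of the OPE contributes no residue, so it drops out of the commutator, in agreement with the vanishing result of the regular case.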
The last line is BRST exact so that we can write \begin{eqnarray} \Phi^{(2)}&=&-\sum_{r=0}^{\infty}{1\over (-2)^r r!}(L^+)^r\sum_{l=-\infty}^{1}\sum_{m=-\infty}^{-1}{\tilde c}_l{\tilde c}_1{\tilde V}_m {\tilde V}_{-1} |0\rangle\nonumber\\ &-&\sum_{r'}{1\over (-2)^{r'} r'!}(L^+)^{r'}\sum_{l'}{\tilde c}_{l'}{\tilde c}_{1}|0\rangle\nonumber\\ &+& Q_B\left({\tilde c}_1|0\rangle-{1\over 2}L^+{\tilde c}_1|0\rangle\right)\nonumber\\ &=& Q_B\left({\tilde c}_1|0\rangle-{1\over 2}L^+{\tilde c}_1|0\rangle\right)+\Phi^{(2)}_{>} \end{eqnarray} where $\Phi^{(2)}_{>}$ contains only $l_0>0$ states. Up to some $Q_B$ closed term $\Psi^{(2)}$ is given by \begin{eqnarray} \Psi^{(2)}={\tilde c}_1|0\rangle-{1\over 2}L^+{\tilde c}_1|0\rangle+\Psi^{(2)}_{>} \end{eqnarray} where $\Psi^{(2)}_{>}$ satisfies $Q_B\Psi^{(2)}_{>}=\Phi^{(2)}_{>}$. Assuming $\Psi^{(2)}_{>}$ is in the Schnabl gauge we can write \begin{eqnarray} \Psi^{(2)}&=&{\tilde c}_1|0\rangle-{1\over 2}L^+{\tilde c}_1|0\rangle+\int_{0}^{\infty}dT~Be^{-TL}\Phi^{(2)}_{>}\nonumber\\ &=&{\tilde c}_1|0\rangle-{1\over 2}L^+{\tilde c}_1|0\rangle+\lim_{\Lambda\to\infty}\int_{0}^{\Lambda}dT~Be^{-TL}[\Phi^{(2)}+\left({\tilde c}_0{\tilde c}_1|0\rangle-{1\over 2}L^+{\tilde c}_0{\tilde c}_1|0\rangle\right)]\nonumber \end{eqnarray} Replacing $\int_{0}^{\Lambda}dT~Be^{-TL}$ by ${B\over L}-e^{-\Lambda L}{B\over L}$ on the terms with $l_0=0$ and $l_0=-1$ we obtain \begin{eqnarray} \Psi^{(2)}=\lim_{\Lambda\to\infty}\left(\int_{0}^{\Lambda}dT~Be^{-TL}\Phi^{(2)}+e^\Lambda{\tilde c}_1|0\rangle-{1\over 2}\Lambda\left[L^+{\tilde c}_1|0\rangle+B^+{\tilde c}_0{\tilde c}_1|0\rangle\right]-{1\over 2}L^+{\tilde c}_1|0\rangle\right)\nonumber\\ =\lim_{\Lambda\to\infty}\left(-{1\over 2}(\Lambda+1)L^+{\tilde c}_1|0\rangle-{1\over 2}\Lambda B^+{\tilde c}_0{\tilde c}_1|0\rangle+e^\Lambda{\tilde c}_1|0\rangle-\int_{e^{-\Lambda}}^{1}dt~~\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast B_L^+\Psi^{(1)} \right)\nonumber\\ \label{psi2} \end{eqnarray} This solution is 
obtained by inverting $L$ only on the positive eigenvalue terms of $\Phi^{(2)}$ so that it is regular. We also notice that one can not fit the entire $\Psi^{(2)}$ into the Schnabl gauge, only the portion which is related to the positive eigenvalue terms of $\Phi^{(2)}$ can satisfy the gauge condition. The fact that this solution is regular will be apparent when we will use it to calculate the tachyon profile of the array of D24--brane solutions in the next section. Now we use the identity (see \cite{Schnabl05}) \begin{eqnarray} \phi_1\ast B_L^+\phi_2=(-1)^{\phi_1}B_L^+(\phi_1\ast\phi_2)-(-1)^{\phi_1}(B_1\phi_1)\ast\phi_2 \label{idt} \end{eqnarray} and the fact that $B_1U_t^\dagger U_t|0\rangle=0$ and write $\Psi^{(2)}$ as \begin{eqnarray} \Psi^{(2)}&=&\lim_{\Lambda\to\infty}\left(\int_{e^{-\Lambda}}^{1}dt~~\{ B_L^+[\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}]-[B_1\Psi^{(1)}]\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\}\right.\nonumber\\ &+&\left.{\over} e^\Lambda{\tilde c}_1|0\rangle-{1\over 2}(\Lambda+1)L^+{\tilde c}_1|0\rangle-{1\over 2}\Lambda B^+{\tilde c}_0{\tilde c}_1|0\rangle\right). \end{eqnarray} This last form is convenient to calculate $\Phi^{(3)}$ which is given by \begin{eqnarray} \Phi^{(3)}=-\Psi^{(1)}\ast\Psi^{(2)}-\Psi^{(2)}\ast\Psi^{(1)}. 
\end{eqnarray} With the help of the identity (\ref{idt}) again, we obtain \begin{eqnarray} \Phi^{(3)}&=&\lim_{\Lambda\to\infty}\left(\int_{e^{-\Lambda}}^{1}dt~~ \{B_L^+[\Psi^{(1)}\ast\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}]-[B_1\Psi^{(1)}]\ast\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\right.\nonumber\\ &+&\left.\Psi^{(1)}\ast [B_1\Psi^{(1)}]\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\}-e^\Lambda\Psi^{(1)}\ast{\tilde c}_1|0\rangle+{1\over 2}\Psi^{(1)}\ast L^+{\tilde c}_1|0\rangle\right.\nonumber \\ &-&\left.{1\over 2}\Lambda Q_B[\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle ]\right)\nonumber\\ &-&\lim_{\Lambda\to\infty}\left(\int_{e^{-\Lambda}}^{1}dt~~ \{B_L^+[\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\ast\Psi^{(1)}]-[B_1\Psi^{(1)}]\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\ast\Psi^{(1)}\}\right.\nonumber\\ &+&\left.{\over} e^\Lambda{\tilde c}_1|0\rangle \ast \Psi^{(1)}-{1\over 2}L^+{\tilde c}_1|0\rangle \ast \Psi^{(1)}-{1\over 2}\Lambda Q_B[B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}]\right).\label{Phi3} \end{eqnarray} Now we will need the following re-writings \begin{eqnarray} L^+\phi_1\ast \phi_2=-2{\partial\over\partial s} U_s^\dagger U_s\phi_1\ast \phi_2\left|_{s=2}\right.\nonumber\\ \phi_1\ast L^+\phi_2=-2{\partial\over\partial s}\phi_1\ast U_s^\dagger U_s\phi_2\left|_{s=2}\right.. 
\end{eqnarray} Therefore, \begin{eqnarray} \Phi^{(3)}&=&\lim_{\Lambda\to\infty}\left(\int_{e^{-\Lambda}}^{1}dt~~ \{B_L^+[\Psi^{(1)}\ast\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}]-[B_1\Psi^{(1)}]\ast\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\right.\nonumber\\ &+&\left.\Psi^{(1)}\ast [B_1\Psi^{(1)}]\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\}-{\over} e^\Lambda\Psi^{(1)}\ast{\tilde c}_1|0\rangle-{\partial\over\partial s}[\Psi^{(1)}\ast U_s^\dagger U_s{\tilde c}_1|0\rangle]\left|_{s=2}\right.\right.\nonumber \\ &-&\left.{1\over 2}\Lambda Q_B[ \Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle] {\over}\right)\nonumber\\ &-&\lim_{\Lambda\to\infty}\left(\int_{e^{-\Lambda}}^{1}dt~~ \{B_L^+[\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\ast\Psi^{(1)}]-[B_1\Psi^{(1)}]\ast U_t^\dagger U_t|0\rangle\ast \Psi^{(1)}\ast\Psi^{(1)}\}\right.\nonumber\\ &+&{\over} e^\Lambda{\tilde c}_1|0\rangle \ast \Psi^{(1)}+{\partial\over\partial s}[ U_s^\dagger U_s{\tilde c}_1|0\rangle \ast \Psi^{(1)}]\left|_{s=2}\right.- \left.{1\over 2}\Lambda Q_B[B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}]{\over}\right)\label{Phi31} \end{eqnarray} Since $B_1\Psi^{(1)}=V(0)|0\rangle$, we can use the standard formula for the star product of wedge states with insertions to perform the star product. As usual, our aim is to single out the terms with negative or zero eigenvalues of $L$, so that we can use the Schwinger representation (\ref{sol1}) of ${B\over L}$ on the remaining terms of $\Phi^{(3)}$ to obtain $\Psi^{(3)}$. It can easily be verified that the $Q_B$ exact terms in $\Phi^{(3)}$ do not contain $l_0\le 0$ terms; therefore, we will leave these terms as they are. 
\begin{eqnarray} \Phi^{(3)}&=&\lim_{\Lambda\to\infty}\left[\int_{e^{-\Lambda}}^{1}dt~~ \left\{B_L^+U_{t+3}^\dagger U_{t+3}{\tilde c}{\tilde V}\left({t+1}\right){\tilde c}{\tilde V}\left({t}\right){\tilde c}{\tilde V}\left(0\right)|0\rangle\right.\right.\nonumber\\ &-&U_{t+3}^\dagger U_{t+3}{\tilde V}\left({t+1}\right){\tilde c}{\tilde V}\left({t}\right){\tilde c}{\tilde V}\left(0\right)|0\rangle\nonumber\\ &+&\left.U_{t+3}^\dagger U_{t+3}{\tilde c}{\tilde V}\left({t+1}\right){\tilde V}\left({t}\right){\tilde c}{\tilde V}\left(0\right)|0\rangle\right\}\nonumber\\ &-&e^{\Lambda}U_{3}^\dagger U_{3}{\tilde c}{\tilde V}\left({1}\right){\tilde c}\left({0}\right)|0\rangle+{1\over 2}U_{3}^\dagger U_{3}L^+{\tilde c}{\tilde V}\left({1}\right){\tilde c}\left({0}\right)|0\rangle\nonumber\\ &-&\left.{1\over 2}U_{3}^\dagger U_{3}\partial({\tilde c}{\tilde V})\left({1}\right){\tilde c}\left({0}\right)|0\rangle-{1\over 2}\Lambda Q_B[ \Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle]\right]\nonumber\\ &-&\lim_{\Lambda\to\infty}\left[\int_{e^{-\Lambda}}^{1}dt~~ \left\{B_L^+U_{t+3}^\dagger U_{t+3}{\tilde c}{\tilde V}\left({t+1}\right){\tilde c}{\tilde V}\left({1}\right){\tilde c}{\tilde V}\left({0}\right)|0\rangle\right.\right.\nonumber\\ &-&\left.U_{t+3}^\dagger U_{t+3}{\tilde V}\left({t+1}\right){\tilde c}{\tilde V}\left({1}\right){\tilde c}{\tilde V}\left({0}\right)|0\rangle\right\}\nonumber\\ &+&e^{\Lambda}U_{3}^\dagger U_{3}{\tilde c}\left({1}\right){\tilde c}{\tilde V}\left({0}\right)|0\rangle-{1\over 2}U_{3}^\dagger U_{3}L^+{\tilde c}\left({1}\right){\tilde c}{\tilde V}\left({0}\right)|0\rangle\nonumber\\ &+&\left.{1\over 2}U_{3}^\dagger U_{3}\partial{\tilde c}\left({1}\right){\tilde c}{\tilde V}\left({0}\right)|0\rangle-{1\over 2}\Lambda Q_B[B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}]\right] \end{eqnarray} As we did for $\Phi^{(2)}$, after expanding both ${\tilde c}$ and ${\tilde V}$ in modes and also expanding $U_s^\dagger U_s$ in power of $L^+$ we see that each term in the multiple 
summation is an eigenstate of $L$. We would like to focus on the terms which contain $l_0\le 0$ and which are not $Q_B$ exact. Here we see that the $e^\Lambda$, the $(\partial c)V$ and the $\partial c$ terms contain such states. It is also easy to see that some contribution comes from lines $2,3$ and $7$. Using the commutation relation for the $V$ modes we can separate these terms from the others, so that \begin{eqnarray} \Phi^{(3)}&=&\lim_{\Lambda\to\infty}\left(\left[1-2e^\Lambda+\int_{e^{-\Lambda}}^{1}dt~~f(t)\right]{\tilde c}_0{\tilde c}_1{\tilde V}_{-1}|0\rangle+\Phi^{(3)}_{>}(\rm{non-exact})\right.\nonumber\\ &-& \left.{1\over 2}\Lambda Q_B[\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle-B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}]\right)\label{phi32} \end{eqnarray} where $\Phi^{(3)}_{>}(\rm{non-exact})$ contains only terms which have $l_0>0$ and are not $Q_B$ exact, and \begin{eqnarray} f(t)=2+{2\over t^2}+{2\over (1+t)^2}. \end{eqnarray} Here we notice that, unlike the $\Psi^{(2)}$ case, we now have $Q_B$ non--exact $l_0=0$ terms; therefore, we cannot tell apart every $l_0=0$ term of $\Psi^{(3)}$. However, there is still a piece of (\ref{Phi3}) which is $Q_B$ exact. It is convenient to write $\Phi^{(3)}$ as \begin{eqnarray} \Phi^{(3)}=-\lim_{\Lambda\to\infty}{1\over 2}\Lambda Q_B\left(\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle-B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}\right)+\Phi^{(3)}_{rest}.\label{conv} \end{eqnarray} With this we can see that the most general $\Psi^{(3)}$, up to some $Q_B$ closed addition, is \begin{eqnarray} \Psi^{(3)}=-\lim_{\Lambda\to\infty}{1\over 2}\Lambda\left(\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle-B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}\right)+\Psi^{(3)}_{rest}, \end{eqnarray} where $\Psi^{(3)}_{rest}$ is defined as \begin{eqnarray} Q_B\Psi^{(3)}_{rest}=\Phi^{(3)}_{rest}. 
\end{eqnarray} We assume that $\Psi^{(3)}_{rest}$ is in the Schnabl gauge, so we can formally write \begin{eqnarray} \Psi^{(3)}_{0}&=&-\lim_{\Lambda\to\infty}{1\over 2}\Lambda\left(\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle-B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}\right)\nonumber\\ &&+\lim_{\Gamma\to\infty}\left(\int_{0}^{\Gamma}dT~Be^{-TL}\Phi^{(3)}_{rest} \right) \end{eqnarray} This has a $Q_B$ closed divergent term, which arises from some of the $l_0=0$ terms of $\Phi^{(3)}_{rest}$ and needs to be regularized. From (\ref{phi32}) it is not difficult to realize that the regularized $\Psi^{(3)}$ will be \begin{eqnarray} \Psi^{(3)}_{reg}&=&-\lim_{\Lambda\to\infty}{1\over 2}\Lambda\left(\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle-B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}\right)\nonumber\\ &+&\lim_{\Gamma\to\infty}\left(\int_{0}^{\Gamma}dT~Be^{-TL}\Phi^{(3)}_{rest}-\lim_{\Lambda\to\infty}\left[ -2e^\Lambda+\int_{e^{-\Lambda}}^{1}dt~~f(t)\right]\Gamma{\tilde c}_1{\tilde V}_{-1}|0\rangle\right)\nonumber\\ \end{eqnarray} Note that the added counter terms are all $Q_B$ closed and are also in the Schnabl gauge; therefore, they affect neither the equation of motion nor the gauge condition. 
From equations (\ref{Phi3}) and (\ref{conv}) we can easily read off $\Phi^{(3)}_{rest}$, and we finally obtain \begin{eqnarray} \Psi^{(3)}_{reg}&=&-\lim_{\Lambda\to\infty}{1\over 2}\Lambda\left(\Psi^{(1)}\ast B^+{\tilde c}_1|0\rangle-B^+{\tilde c}_1|0\rangle\ast\Psi^{(1)}\right)\nonumber\\ &+&\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}\left\{-\left[-2e^\Lambda+\int_{e^{-\Lambda}}^{1}dt~~f(t)\right]\Gamma{\tilde c}_1{\tilde V}_{-1}|0\rangle\right.\nonumber\\ &-&e^\Lambda\int_{e^{-\Gamma}}^{1}dt_2~~{1\over t_2}\left[\Psi^{(1)}\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast B_L^+{\tilde c}_1|0\rangle+{\tilde c}_1|0\rangle\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast B_L^+\Psi^{(1)}\right]\nonumber\\ &+&{1\over 2}\int_{e^{-\Gamma}}^{1}dt_2\left({1\over t_2}\left[-\Psi^{(1)}\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast B^+{\tilde c}_1|0\rangle+B^+{\tilde c}_1|0\rangle\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast \Psi^{(1)}\right]\right.\nonumber\\ &+&\left.\Psi^{(1)}\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast B_L^+L^+{\tilde c}_1|0\rangle+L^+{\tilde c}_1|0\rangle\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast B_L^+\Psi^{(1)}{\over}\right)\nonumber\\ &+& \left.\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2~t_2\left[\Psi^{(1)}\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast(-B_L^+)\Psi^{(1)}\ast U_{t_1t_2}^\dagger U_{t_1t_2}|0\rangle\ast(-B_L^+)\Psi^{(1)}\right]\right.\nonumber\\ &+& \left.\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2~t_2\left[\Psi^{(1)}\ast U_{t_1t_2}^\dagger U_{t_1t_2}|0\rangle\ast(-B_L^+)\Psi^{(1)}\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast(-B_L^+)\Psi^{(1)}\right] \right\}\nonumber\\ \label{Psi3} \end{eqnarray} In the third and fourth lines it is clear that the integrands are not well defined in the region $t_2\to 0$. However, the singularities coming from this region are cancelled partly by the corresponding counter terms in the second line, the ($-2e^\Lambda$) term, and partly by similar divergences coming from the last two lines. 
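The role of the $e^{\Lambda}$ counter term can be checked explicitly on the $t$ integral of $f(t)$: subtracting $2e^{\Lambda}$ from $\int_{e^{-\Lambda}}^{1}dt\,f(t)$ leaves a finite $\Lambda\to\infty$ limit, i.e., the $1/t^2$ divergence of $f$ is precisely compensated. A small symbolic verification (SymPy assumed):

```python
import sympy as sp

t, Lam = sp.symbols('t Lambda', positive=True)
f = 2 + 2 / t**2 + 2 / (1 + t)**2               # f(t) of Eq. (phi32)
I = sp.integrate(f, (t, sp.exp(-Lam), 1))       # diverges as 2*exp(Lambda)
regulated = sp.simplify(I - 2 * sp.exp(Lam))    # subtract the e^Lambda counter term
lim = sp.limit(regulated, Lam, sp.oo)           # finite limit: divergence cancelled
```

The remaining $\Gamma$-proportional pieces are handled by the explicit counter terms in the second line of (\ref{Psi3}), as described above.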
There are other divergences arising from the last two lines. These will be cancelled by the remaining part of the counter term in the second line, and there is no further divergence related to $t_2\to 0$. Since the divergences related to $t_1\to 0$ have already been regularized at level 2, this result is perfectly regular. We will demonstrate this cancellation of divergences in the next section using a particular example. We would like to emphasize that, as at level 2, here again the entire solution cannot be in the Schnabl gauge; only the part which is obtained from the $Q_B$ non-exact piece of $\Phi^{(3)}$ is in the gauge. \newpage For the level 4 calculation we would like to focus entirely on the terms which have zero or negative $L$ eigenvalues. Separating these terms from the rest we can write $\Phi^{(4)}$ as \begin{eqnarray} \Phi^{(4)}&=&-\left[\Psi^{(3)}\ast\Psi^{(1)}+\Psi^{(1)}\ast\Psi^{(3)}+\Psi^{(2)}\ast\Psi^{(2)}\right]_{>}\nonumber\\ &+&\lim_{\Lambda\to\infty}\lim_{\Gamma\to\infty}\left\{\left[-\Lambda+\left(4e^{\Lambda}-2\int_{e^{-\Lambda}}^{1}dtf(t)\right)\Gamma +e^{\Lambda}\int_{e^{-\Gamma}}^{1}dt_2\left(-{2\over t_2}-{2\over t_2(1+t_2)^2}\right)\right.\right.\nonumber\\ &+&\int_{e^{-\Gamma}}^{1}dt_2\left(-{1\over t_2}-{1-t_2\over t_2(1+t_2)^2}\right)-\int_{e^{-\Gamma}}^{1}dt_2{t_2-3\over (1+t_2)^3}\nonumber\\ &+&\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2~t_2\left({2\over t_2^2(l_2+1)^2}+{2\over l_2^2(t_2+1)^2}+{2\over (l_2-t_2)^2}+t_2\to t_1t_2\right)\nonumber\\ &-&\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Lambda}}^{1}dt_2\left({t_2\over (l_2'+1)^2}+{t_2\over (t_1+1)^2(t_2+1)^2}+{1\over t_1t_2^2}\right)\nonumber\\ &+&\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Lambda}}^{1}dt_2\left({t_2+1\over (l_2'+1)^2}+{t_2+1\over t_1^2t_2^2}+{1\over (t_2+1)(t_1+1)^2}\right)\nonumber\\ &-&e^{\Lambda}\left(-\int_{e^{-\Lambda}}^{1}dt_1{1\over t_1^2}+{e^{-\Lambda}\over 2}\int_{e^{-\Lambda}}^{1}dt_1{1\over t_1^2}-\int_{e^{-\Lambda}}^{1}dt_2{1\over 
t_2}+\int_{e^{-\Lambda}}^{1}dt_2{t_2+1\over t_2^2}-e^{\Lambda}-{(\Lambda+1)}\right)\nonumber\\ &-&\left.{(\Lambda+1)\over 2}\int_{e^{-\Lambda}}^{1}dt_2{1\over t_2}-{\Lambda\over 2}\int_{e^{-\Lambda}}^{1}dt_2{1\over t_2^2}+{\Lambda(\Lambda+1)\over 4}\right]{\bf Q_BL^{+}{\tilde c}_1|0\rangle}\nonumber\\ &+&\left[\Lambda+\left(-2e^{\Lambda}+\int_{e^{-\Lambda}}^{1}dtf(t)\right)\Gamma+e^{\Lambda}\int_{e^{-\Gamma}}^{1}dt_2\left({1+t_2\over t_2}+{1\over t_2(1+t_2)}\right)\right.\nonumber\\ &+&\int_{e^{-\Gamma}}^{1}dt_2\left({1+t_2\over 2t_2}+{1-t_2\over 2t_2(1+t_2)}\right)+\int_{e^{-\Gamma}}^{1}dt_2\left( 1+{t_2-1\over 2(1+t_2)}\right)\nonumber\\ &-&\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2~t_2(l_2+1)\left({1\over t_2^2(l_2+1)^2}+{1\over l_2^2(t_2+1)^2}+{1\over (l_2-t_2)^2}+t_2\to t_1t_2\right)\nonumber\\ &+&\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Lambda}}^{1}dt_2{l_2'+1\over 2}\left({t_2\over (l_2'+1)^2}+{t_2\over (t_1+1)^2(t_2+1)^2}+{1\over t_1t_2^2}\right)\nonumber\\ &-&\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Lambda}}^{1}dt_2{l_2'+1\over 2}\left({t_2+1\over (l_2'+1)^2}+{t_2+1\over t_1^2t_2^2}+{1\over (t_2+1)(t_1+1)^2}\right)\nonumber\\ &-&e^{\Lambda}\int_{e^{-\Lambda}}^{1}dt_1{t_1+1\over 2t_1^2}+{\Lambda+1\over 2}\int_{e^{-\Lambda}}^{1}dt_1{t_1+3\over 2t_1^2}-{\Lambda\over 2}\int_{e^{-\Lambda}}^{1}dt_1{t_1+1\over 2t_1^2}\nonumber\\ &-&e^\Lambda\left(\int_{e^{-\Lambda}}^{1}dt_2\left({t_2+1\over 2t_2}-{(t_2+1)^2\over 2t_2^2}\right)+{e^{\Lambda}\over 2}+{3(\Lambda+1)\over 2}+e^{-\Lambda}{(\Lambda+1)\over 2}\int_{e^{-\Lambda}}^{1}dt_2{1\over t_2}\right)\nonumber\\ &+&{3(\Lambda+1)\over 4}\int_{e^{-\Lambda}}^{1}dt_2{t_2+1\over t_2}-(\Lambda+1)^2+{\Lambda\over 2}\int_{e^{-\Lambda}}^{1}dt_2{t_2+1\over 2t_2^2}\nonumber\\ &-&\left.\left.{3\Lambda(\Lambda+1)\over 8}\right]{\bf Q_BL^{+}{\tilde c}_1|0\rangle}\right\} \end{eqnarray} where $l_2=t_2+t_1t_2$ and $l_2'=t_1+t_2$. 
Since the terms in the square brackets are just numerical factors we may write $\Phi^{(4)}$ as \begin{eqnarray} \Phi^{(4)}=\Phi^{(4)}_>+A{\bf Q_B{\tilde c}_1|0\rangle}+B{\bf Q_BL^{+}{\tilde c}_1|0\rangle}. \end{eqnarray} We see that $\Phi^{(4)}$ has the same form as $\Phi^{(2)}$, in that the terms with zero or negative $L$ eigenvalues are $Q_B$ exact. Therefore, we follow the same procedure as at level two to solve for $\Psi^{(4)}$. Up to some $Q_B$ closed term the ansatz for $\Psi^{(4)}$ is \begin{eqnarray} \Psi^{(4)}=A{\tilde c}_1|0\rangle+BL^+{\tilde c}_1|0\rangle+\Psi^{(4)}_{>} \end{eqnarray} where $\Psi^{(4)}_{>}$ satisfies $Q_B\Psi^{(4)}_{>}=\Phi^{(4)}_{>}$. Assuming $\Psi^{(4)}_{>}$ is in the Schnabl gauge we can write \begin{eqnarray} \Psi^{(4)}&=&A{\tilde c}_1|0\rangle+BL^+{\tilde c}_1|0\rangle+\int_{0}^{\infty}dT~Be^{-TL}\Phi^{(4)}_{>} \end{eqnarray} In obtaining this solution we applied $L^{-1}$, or its Schwinger form, only to terms with positive definite $L$ eigenvalues. Therefore, at this level we did not produce any new divergent terms. Since we have already regularized all the lower-level $\Psi^{(i)}$'s, it is clear that $\Psi^{(4)}$ is regular. As at the lower levels, we see that the solution at this level contains a gauge-condition-violating term, which is characteristic of the solutions with singular OPE. Now let us generalize our procedure to arbitrary level. By now it is clear that divergences arise only when there are terms with zero or negative $L$ eigenvalues in $\Phi^{(n)}$. Noting that $\Phi^{(n)}$ has to be of ghost number two and has to be twist even, one can easily see that the only terms which can appear in $\Phi^{(n)}$ and can have a negative or zero eigenvalue are $\left({\tilde c}_0{\tilde c}_1|0\rangle,~~L^+{\tilde c}_0{\tilde c}_1|0\rangle,~~ {\tilde c}_0{\tilde c}_1{\tilde V}_{-1}|0\rangle\right)$, which are exactly what we have at levels two and three. 
In particular, if $n$ is even only the first two of these terms (which are $Q_B$ exact) appear. The reason is that applying the commutation relation for each pair of $V$'s kills all the $V$ operators, and hence a term of the third kind cannot appear. In this case we follow the procedure of level two to solve for $\Psi^{(n)}$. For odd $n$, where we have an odd number of $V$ operators and the pairing leaves one $V$ unpaired, only the last term appears, and it gives rise to a new divergent term in $\Psi^{(n)}$. However, this new divergent term is $Q_B$ closed and satisfies the Schnabl gauge, so that we can subtract it out to get a regular solution. Therefore, our procedure can be used at any level. \section{The tachyon profile} In this section we consider the special case of a marginal deformation corresponding to a periodic array of D24--branes. The dimension one boundary matter primary operator $V(z)$ giving such a solution is \begin{eqnarray} V(z)={1\over {\sqrt 2}}[V^+(z)+V^-(z)],~~~~~~ {\rm with}~~V^\pm=e^{\pm iX(z)} \end{eqnarray} where we choose $\alpha'=1$ and $X(z)=X^{25}(z)$. We can easily see that the OPE of $V$ with itself is given by (\ref{OPE}), and hence this is an example of the singular OPE solutions we saw in section 2. Our aim in this section is to calculate the $x$ dependence of the tachyon field level by level and to verify that the solutions are regular and indeed correspond to an array of D24--branes. The calculation beyond the third level is too complicated, so we restrict our treatment in this section to the first three levels. Actually, the overall shape of the tachyon profile does not change when we consider higher-level contributions; what changes is the depth of its minima, to which we do not intend to attach any physical meaning, for the reason given in the discussion section. Since the result at level one is trivial we start with the level two calculation. 
At level two the $x$ dependence of $\Psi^{(2)}$ must be of the form \begin{eqnarray} \Psi^{(2)}=\left(e^{2iX(0)}+e^{-2iX(0)}\right)\left[\beta_{2}^{2}c_1|0\rangle+...\right]+\left[\beta_{0}^{2}c_1|0\rangle+...\right]. \end{eqnarray} The dots indicate higher-level space-time fields, and the coefficients $\beta_n^{2}$ are given by \begin{eqnarray} \beta_{n}^{2}=\langle\phi_{\pm n}, \Psi^{(2)}\rangle,~~~~~ \phi_{\pm n}=e^{\pm inX(0)}c\partial c(0)|0\rangle \end{eqnarray} where we have ignored the irrelevant space-time volume factor. By momentum conservation $\beta_2^2$ gets a contribution only from the last term of (\ref{psi2}), which is given by \begin{eqnarray} \beta_{2}^{2}&=&{1\over 2}\left\langle\phi_{-2},\lim_{\Lambda\to\infty}\int_{e^{-\Lambda}}^{1}dt~~cV^+(0)|0\rangle\ast U_t^\dagger U_t|0\rangle\ast (-B_L^+)cV^{+}(0)|0\rangle\right\rangle\nonumber\\ &=&{1\over 2}\left\langle\phi_{2},\lim_{\Lambda\to\infty}\int_{e^{-\Lambda}}^{1}dt~~cV^-(0)|0\rangle\ast U_t^\dagger U_t|0\rangle\ast (-B_L^+)cV^{-}(0)|0\rangle\right\rangle\nonumber \end{eqnarray} Each of the $V^{\pm}$'s alone has a regular OPE, so the above quantity has been calculated in \cite{KORZ}, and the answer is \begin{eqnarray} \beta_2^2={1\over 2}(0.15206). \end{eqnarray} $\beta_0^2$ gets a contribution from all the terms in (\ref{psi2}). 
Using the definitions \begin{eqnarray} L^+=-2(K_1^L-K_1),~~~~~B^+=-2(B_1^L-B_1) \end{eqnarray} and noting that ($K_1 c_1|0\rangle+B_1c_0c_1|0\rangle=0$) we can rewrite $\Psi^{(2)}$ in the following more convenient way: \begin{eqnarray} \Psi^{(2)}=\lim_{\Lambda\to\infty}\left(\Lambda \psi_0'-{1\over 2}L^+{\tilde c}_1|0\rangle+ e^\Lambda{\tilde c}_1|0\rangle-\int_{e^{-\Lambda}}^{1}dt~~\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast B_L^+\Psi^{(1)}\right)\nonumber\\ =\lim_{\Lambda\to\infty}\left(-\Lambda \psi_0'+e^\Lambda \exp\left[-{e^{-\Lambda}L^+\over 2}\right]{\tilde c}_1|0\rangle-\int_{e^{-\Lambda}}^{1}dt~~\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast B_L^+\Psi^{(1)}\right)\nonumber \end{eqnarray} where $\left(\psi_0'=K_1^Lc_1|0\rangle+B_1^Lc_0c_1|0\rangle\right)$ is defined in \cite{Okawa1}. With the help of the identity ($L^+=2L^+_L+K_1$), in the limit $\Lambda\to\infty$ it is not difficult to show that \begin{eqnarray} \exp\left[-{e^{-\Lambda}L^+\over 2}\right]{\tilde c}_1|0\rangle=U_{e^{-\Lambda}+1}^\dagger U_{e^{-\Lambda}+1}|0\rangle\ast\left({\tilde c}_1|0\rangle-{1\over 2}e^{-\Lambda}{\tilde c}_0|0\rangle\right) \end{eqnarray} so that \begin{eqnarray} \Psi^{(2)}&=&\lim_{\Lambda\to\infty}\left[-\Lambda \psi_0'+e^\Lambda U_{e^{-\Lambda}+1}^\dagger U_{e^{-\Lambda}+1}|0\rangle\ast\left({\tilde c}_1|0\rangle -{1\over 2}e^{-\Lambda}{\tilde c}_0|0\rangle\right)\right.\nonumber\\ &+&\left.\int_{e^{-\Lambda}}^{1}dt~~\Psi^{(1)}\ast U_t^\dagger U_t|0\rangle\ast(-B_L^+)\Psi^{(1)}\right]. 
\end{eqnarray} Therefore, \begin{eqnarray} \beta_0^2&=&\lim_{\Lambda\to\infty}\left(-\Lambda\langle\phi_0, \psi_0'\rangle-e^\Lambda \left\langle f\circ\phi_0(0)\left({\tilde c}-{1\over 2}e^{-\Lambda}\partial{\tilde c}\right)(e^{-\Lambda}+1)\right\rangle_{e^{-\Lambda}+2}\right.\nonumber\\ &+&\left.\int_{e^{-\Lambda}}^{1}dt~~\left\langle f\circ\phi_0(0) {\tilde c}{\tilde V}^+(1){\cal B}{\tilde c}{\tilde V}^-(t+1)\right\rangle_{t+2}\right).\label{beta0} \end{eqnarray} The subscripts indicate the width of the strip over which the correlators are taken. Noting that $\phi_0=Q_Bc(0)|0\rangle$ we have \begin{eqnarray} \langle\phi_0, \psi_0'\rangle=\langle c_{-1}, Q_B\psi_0'\rangle=0 \end{eqnarray} After a simple calculation the remaining terms in the first line of (\ref{beta0}) give \begin{eqnarray} e^\Lambda \left\langle f\circ\phi_0(0)\left({\tilde c}-{1\over 2}e^{-\Lambda}\partial{\tilde c}\right)(e^{-\Lambda}+1)\right\rangle_{e^{-\Lambda}+2}={2\over \pi}(e^\Lambda+1)+{\cal O}(e^{-\Lambda}).\label{beta22} \end{eqnarray} Note that when we take the star product of wedge states with insertions (Eq.~(\ref{starprod})), the operator of the last state in the star product is inserted first on the strip obtained by gluing together the strips of the individual states. This operator ordering is opposite to the one we use when we calculate the correlator in (\ref{beta0}), and as a result we get an extra minus sign. The ghost part of the last line of (\ref{beta0}) has been calculated in \cite{KORZ}, and the matter part calculation is straightforward. 
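As a cross-check on this counterterm structure, one can verify numerically that the small-$t$ behavior of the trigonometric integrand to which the last line of (\ref{beta0}) evaluates is exactly $2/(\pi t^2)$, so that its $\int_{e^{-\Lambda}}^1 dt$ integral produces a $(2/\pi)e^\Lambda$ divergence matching the subtraction in Eq.~(\ref{beta22}). The sketch below (ours, not part of the original Mathematica computation) encodes that integrand as `F`:

```python
import math

def F(t):
    # Trigonometric integrand to which the last correlator in (beta0)
    # evaluates (ghost part from KORZ times the matter part):
    # (pi/(t+2)) [1 - ((2+t)/(2 pi)) sin(2 pi/(2+t))]
    #            * sin^2(pi/(2+t)) / sin^2(pi t/(2+t))
    w = 2.0 + t
    bracket = 1.0 - (w / (2.0 * math.pi)) * math.sin(2.0 * math.pi / w)
    return (math.pi / w) * bracket \
        * math.sin(math.pi / w) ** 2 / math.sin(math.pi * t / w) ** 2

# Near t -> 0 the integrand behaves as 2/(pi t^2); integrating down to
# e^{-Lambda} therefore yields the (2/pi) e^Lambda piece removed by the
# counterterm (2/pi)(e^Lambda + 1) of Eq. (beta22).
for t in (1e-2, 1e-3, 1e-4):
    print(t, t * t * F(t))  # approaches 2/pi = 0.63662...
```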
Finally, we obtain \begin{eqnarray} \int_{e^{-\Lambda}}^{1}dt~~\left\langle f\circ\phi_0(0) {\tilde c}{\tilde V}^+(1){\cal B}{\tilde c}{\tilde V}^-(t+1)\right\rangle_{t+2}&=&\int_{e^{-\Lambda}}^{1}dt~~{\pi\over t+2}\left[1-{2+t\over 2\pi}{\rm sin}\left({2\pi\over 2+t}\right)\right]\nonumber\\ &\times&{\rm sin}^2\left({\pi\over 2+t}\right){\rm sin}^{-2}\left({\pi t\over 2+t}\right) \end{eqnarray} Putting everything together, and using Mathematica, we obtain \begin{eqnarray} \beta_{0}^2=-{\sqrt{ 27}\over 4}=-1.29904 \end{eqnarray} which is regular, as we anticipated. Up to level 2 the tachyon profile is given by \begin{eqnarray} T(x)=-{\rm cos}(x)+(0.15206){\rm cos}(2x)-1.29904 \end{eqnarray} if we choose $\lambda=-1$ and \begin{eqnarray} T(x)={\rm cos}(x)+(0.15206){\rm cos}(2x)-1.29904 \end{eqnarray} if we choose $ \lambda=+1$.\\ \begin{figure}[htbp] \hspace{-0.5cm} \begin{center} \includegraphics[scale=1]{t2n.eps} \end{center} \caption{\emph{\small The level 2 approximation of the tachyon profile for $\lambda=-1$}} \label{fig1} \end{figure} \begin{figure}[htbp] \hspace{-0.5cm} \begin{center} \includegraphics[scale=1]{t2p.eps} \end{center} \caption{\emph{\small The level 2 approximation of the tachyon profile for $\lambda=+1$ }} \label{fig2} \end{figure} Now let us proceed to the level three calculation, where $\Psi^{(3)}$ should be of the form \begin{eqnarray} \Psi^{(3)}=\left(e^{3iX(0)}+e^{-3iX(0)}\right)\left[\beta_{3}^{3}c_1|0\rangle+...\right]+\left(e^{iX(0)}+e^{-iX(0)}\right)\left[\beta_{1}^{3}c_1|0\rangle+...\right] \end{eqnarray} with \begin{eqnarray} \beta_{n}^{3}=\langle\phi_{\pm n}, \Psi^{(3)}\rangle,~~~~~ \phi_{\pm n}=e^{\pm inX(0)}c\partial c(0)|0\rangle \end{eqnarray} By momentum conservation only the last line of (\ref{Psi3}) matters in the calculation of $\beta_{3}^{3}$, which is given by \begin{eqnarray} \beta_{3}^{3}&=&{1\over 
\sqrt{8}}\left\langle\phi_{-3},\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2~~cV^+(0)|0\rangle\ast U_{t_1}^\dagger U_{t_1}|0\rangle\right.\nonumber\\ &&~~~~~~~~~~~~~~~~~\left.\ast (B_L^+)cV^{+}(0)|0\rangle\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast (B_L^+)cV^{+}(0)|0\rangle\right\rangle\nonumber\\ &=&{1\over \sqrt{8}}\left\langle\phi_{3},\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2~~cV^-(0)|0\rangle\ast U_{t_1}^\dagger U_{t_1}|0\rangle\right.\nonumber\\ &&~~~~~~~~~~~~~\left.\ast (B_L^+)cV^{-}(0)|0\rangle\ast U_{t_2}^\dagger U_{t_2}|0\rangle\ast (B_L^+)cV^{-}(0)|0\rangle\right\rangle. \end{eqnarray} All the operators involved have regular OPEs, and the result is again that of \cite{KORZ}. \begin{eqnarray} \beta_{3}^{3}={1\over\sqrt{8}}(2.148\times10^{-3}) \end{eqnarray} The calculation of $\beta_{1}^{3}$ is tedious but straightforward; it receives contributions from all the terms in (\ref{Psi3}). Next we list the contribution of each of them, where $l_i$ stands for the contribution of the $i^{th}$ line in (\ref{Psi3}). 
\begin{eqnarray} l_1=-\lim_{\Lambda\to\infty}{\Lambda\over\sqrt{2}}\left[{4\over 3}\left(1-{3\sqrt{3}\over 4\pi}\right)-1\right], \end{eqnarray} \begin{eqnarray} l_2=\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}\left[{\Gamma\over\sqrt{2}}\left(-2e^\Lambda+\int_{e^{-\Lambda}}^{1}dt~~f(t)\right)\right], \end{eqnarray} \begin{eqnarray} l_3=-\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}{4e^\Lambda\over\sqrt{2}}\int_{e^{-\Gamma}}^{1}dt_2~~{1\over t_2}\left\{{1\over t_2+2}\left[1-{2+t_2\over 2\pi}{\rm sin}\left({2\pi\over 2+t_2}\right)\right]\right\}, \end{eqnarray} \begin{eqnarray} l_4=-\lim_{\Gamma\to\infty}{4\over\sqrt{2}}\int_{e^{-\Gamma}}^{1}dt_2~~{1\over t_2}\left\{{1\over t_2+2}\left[1-{2+t_2\over 2\pi}{\rm sin}\left({2\pi\over 2+t_2}\right)\right]-{1\over 4}\right\}, \end{eqnarray} \begin{eqnarray} l_5={1\over\sqrt{8}}(0.734828) \end{eqnarray} \begin{eqnarray} l_6&=&\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}{2\pi^2\over\sqrt{8}}\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2{t_2\over(2+l_2)^3}\left[1-{2+l_2\over 2\pi}{\rm sin}\left({2\pi\over 2+l_2}\right)\right] \nonumber\\ &\times&\left[{\rm sin}^{-2}\left({\pi t_2\over 2+l_2}\right){\rm sin}^{-2}\left({\pi (1+t_2)\over 2+l_2}\right){\rm sin}^{-2}\left({2\pi\over 2+l_2}\right){\rm sin}^{2}\left({\pi t_1t_2\over 2+l_2}\right){\rm sin}^{2}\left({\pi\over 2+l_2}\right)\right.\nonumber\\ &+&{\rm sin}^{-2}\left({\pi\over 2+l_2}\right){\rm sin}^{-2}\left({\pi t_2\over 2+l_2}\right){\rm sin}^{-2}\left({\pi t_1t_2\over 2+l_2}\right){\rm sin}^{2}\left({\pi(1+t_2)\over 2+l_2}\right){\rm sin}^{2}\left({2\pi\over 2+l_2}\right)\nonumber\\ &+&\left.{\rm sin}^{-2}\left({\pi (1+t_2)\over 2+l_2}\right){\rm sin}^{-2}\left({\pi t_1t_2\over 2+l_2}\right){\rm sin}^{-2}\left({2\pi\over 2+l_2}\right){\rm sin}^{2}\left({\pi t_2\over 2+l_2}\right){\rm sin}^{2}\left({\pi\over 2+l_2}\right)\right]\nonumber\\ \end{eqnarray} \begin{eqnarray} 
l_7&=&\lim_{\Gamma\to\infty}\lim_{\Lambda\to\infty}{2\pi^2\over\sqrt{8}}\int_{e^{-\Lambda}}^{1}dt_1\int_{e^{-\Gamma}}^{1}dt_2{t_2\over(2+l_2)^3}\left[1-{2+l_2\over 2\pi}{\rm sin}\left({2\pi\over 2+l_2}\right)\right] \nonumber\\ &\times&\left[{\rm sin}^{-2}\left({\pi t_1t_2\over 2+l_2}\right){\rm sin}^{-2}\left({\pi (1+t_1t_2)\over 2+l_2}\right){\rm sin}^{-2}\left({2\pi\over 2+l_2}\right){\rm sin}^{2}\left({\pi t_2\over 2+l_2}\right){\rm sin}^{2}\left({\pi\over 2+l_2}\right)\right.\nonumber\\ &+&{\rm sin}^{-2}\left({\pi\over 2+l_2}\right){\rm sin}^{-2}\left({\pi t_1t_2\over 2+l_2}\right){\rm sin}^{-2}\left({\pi t_2\over 2+l_2}\right){\rm sin}^{2}\left({\pi(1+t_1t_2)\over 2+l_2}\right){\rm sin}^{2}\left({2\pi\over 2+l_2}\right)\nonumber\\ &+&\left.{\rm sin}^{-2}\left({\pi (1+t_1t_2)\over 2+l_2}\right){\rm sin}^{-2}\left({\pi t_2\over 2+l_2}\right){\rm sin}^{-2}\left({2\pi\over 2+l_2}\right){\rm sin}^{2}\left({\pi t_1t_2\over 2+l_2}\right){\rm sin}^{2}\left({\pi\over 2+l_2}\right)\right]\nonumber\\ \end{eqnarray} where $l_2=t_2+t_1t_2$. Note that we have made a change of sign on the first five terms for the same reason we gave after equation (\ref{beta22}). Each of these terms can be evaluated numerically using Mathematica, and we finally obtain the finite answer \begin{eqnarray} \beta_{1}^{3}=0.798956. 
\end{eqnarray} With this we can write the level three approximation of the tachyon profile as \begin{eqnarray} T(x)=-2.59791{\rm cos}(x)+(0.15206){\rm cos}(2x)-1.29904-1.51887\times 10^{-3}{\rm cos}(3x)\nonumber\\ \end{eqnarray} for $\lambda=-1$ and \begin{eqnarray} T(x)=2.59791{\rm cos}(x)+(0.15206){\rm cos}(2x)-1.29904+1.51887\times 10^{-3}{\rm cos}(3x)\nonumber\\ \end{eqnarray} for $\lambda=+1$.\\ \begin{figure}[htbp] \hspace{-0.5cm} \begin{center} \includegraphics[scale=1]{t3n.eps} \end{center} \caption{\emph{\small The level 3 approximation of the tachyon profile for $\lambda=-1$ }} \label{fig3} \end{figure} \begin{figure}[htbp] \hspace{-0.5cm} \begin{center} \includegraphics[scale=1]{t3p.eps} \end{center} \caption{\emph{\small The level 3 approximation of the tachyon profile for $\lambda=+1$ }} \label{fig4} \end{figure} At both the level two and level three approximations, our result confirms the conformal field theory description: the ${\rm cos}(x)$ boundary deformation gives a solution representing a periodic array of D--branes placed at odd integer multiples of $\pi$ when the coupling ${\tilde\lambda}$ is positive, and at even integer multiples of $\pi$ when the coupling ${\tilde\lambda}$ is negative. In both cases the D-brane is situated at the minimum of the interaction potential switched on along the boundary of the world-sheet. To first-order approximation, the tachyon profile and this interaction potential can be identified, which means that the first-level approximation of the tachyon profile indicates the location of the D-branes. We have just shown that including higher-level contributions does not change the location of these minima, so even with higher-level contributions the minima of the tachyon profile mark the location of the D-branes. 
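For concreteness, the numerical coefficients in the level-3 profile assemble directly from the $\beta$'s quoted above (each momentum $\pm n$ pair contributes $2\beta\cos(nx)$, on top of the tree-level $\cos(x)$), and a grid search confirms the minima at even or odd multiples of $\pi$ depending on the sign of $\lambda$. This is our own illustrative sketch, not part of the original computation:

```python
import math

# Mode coefficients quoted in the text
beta0_2 = -math.sqrt(27) / 4          # zero mode, = -1.29904
beta2_2 = 0.5 * 0.15206               # cos(2x) mode at level 2
beta1_3 = 0.798956                    # cos(x) correction at level 3
beta3_3 = 2.148e-3 / math.sqrt(8)     # cos(3x) mode at level 3

c1 = 1.0 + 2.0 * beta1_3              # -> 2.59791
c2 = 2.0 * beta2_2                    # -> 0.15206
c3 = 2.0 * beta3_3                    # -> 1.51887e-3

def T(x, lam):
    # Level-3 tachyon profile for lam = +1 or -1
    return lam * (c1 * math.cos(x) + c3 * math.cos(3 * x)) \
        + c2 * math.cos(2 * x) + beta0_2

grid = [-math.pi + 2.0 * math.pi * k / 2000 for k in range(2000)]
xmin_neg = min(grid, key=lambda x: T(x, -1.0))   # minimum at x = 0
xmin_pos = min(grid, key=lambda x: T(x, +1.0))   # minimum at x = +-pi
print(round(c1, 5), round(c3, 8), xmin_neg, xmin_pos)
```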
\section{Conclusion} In this paper we first verified that an explicit expansion of $\Phi^{(n)}$ in terms of definite $L$ eigenvalue states contains zero and negative eigenvalues only when the matter primary operator $V(z)$ has a singular OPE. This fact helped us to identify the terms which give rise to divergences in the case of singular OPE marginal deformations as those with zero or negative eigenvalues. As there are very few such terms with the right ghost number and twist, we conclude that one can determine exactly the form of the counterterms which have to be subtracted at any level of the expansion in powers of $\lambda$ to cancel the divergences associated with these terms. We have also seen that, unlike the regular OPE case, where the entire solution satisfies the Schnabl gauge, only part of the solution can satisfy the Schnabl gauge in the case of singular OPE. We have shown this explicitly up to level 4, and the same holds for higher levels, as their gauge-violating terms are the same as those of the lowest levels. In our computations we have considered only the case where the OPE is given by (\ref{OPE}). However, as what matters is the commutation relation between the modes of $V(z)$, we believe that the treatment in this paper can be generalized to any matter primary operator with an arbitrary singular OPE, and hence a different commutation relation for the modes of $V(z)$. In the second part of the paper we have considered the ${\rm cos}(x)$ marginal deformation, which, from the world-sheet CFT point of view, is known to represent a periodic array of D--branes located at the minima of the world-sheet potential. Using our results from the first part we calculated the tachyon profile up to level 3 and obtained a result which agrees with the world-sheet description. 
Earlier, in the string field theory framework, the tachyon profile of a lump solution was obtained in \cite{Moeller:2000jy} using the level truncation method, with the transverse direction compactified on a circle. Their result indicates that the lump solution represents a single D--brane placed at $x=0$, which coincides with ours if we restrict our solution to one period of the potential. Lastly, we would like to comment on the depth of the minima of the tachyon profile, which seems to increase as we go to higher and higher levels. As we mentioned before, to first-order approximation, the tachyon profile is related to the world-sheet boundary interaction potential. This might lead one to the conclusion that the depth of the minima of the tachyon profile is related to the height of the potential. However, this cannot be true, since at each order our solutions are determined only up to additional $Q_B$-closed terms, which, if taken into account, would affect the depth of the minima of the tachyon profile. Therefore, we do not take the depth of the minima of the tachyon profile seriously here; all we need is its position, which is the position of the lower-dimensional D--brane. \acknowledgments D.D.T. would like to thank Y. Okawa for his kind responses to several questions. C. Park would like to thank the Isaac Newton Institute for Mathematical Sciences, where he was visiting while this work was in progress, for its hospitality. This work is supported by the Science Research Center Program of the Korean Science and Engineering Foundation through the Center for Quantum SpaceTime (CQUeST) of Sogang University with grant number R11-2005-021.
\section{Introduction} \label{sec:introduction} The study of scattering amplitudes has played a central role in the development of theoretical physics, and has led to some of the most precise predictions in all of science~\cite{Aoyama:2012wj,Aoyama:2017uqe,Laporta:2017okg}. These predictions have traditionally been made with the use of Feynman diagrams, which provide an intuitive picture for scattering amplitudes as the sum over all ways a given configuration of incoming particles can scatter into a configuration of outgoing particles. However, we now know that there are completely different ways of formulating scattering amplitudes that make no reference to particle trajectories, or even any notion of space-time. These novel ways of thinking about scattering amplitudes have mainly arisen from investigations of ${\cal N}=4$ supersymmetric Yang-Mills (SYM) theory in four dimensions, the theory that we focus on in this white paper. Part of the motivation for recasting scattering amplitudes in new and more abstract ways comes from the incredible simplicity these quantities exhibit, relative to the complexity of the calculations currently required to compute them. This has been especially true in ${\cal N} = 4$ SYM theory, in which seemingly-miraculous cancellations have led to the discovery of beautiful mathematical structures that make contact with many branches of modern mathematics, including combinatorics, algebraic geometry, number theory, and the theory of motives. The endeavor to understand the simplicity of amplitudes has correspondingly led to a rich and productive interplay between amplitudes researchers and mathematicians. Our understanding of the {\it planar} limit of ${\cal N}=4$ SYM theory, in which the number of colors in the SU($N_c$) gauge group becomes large, is especially well developed. There are currently three independent descriptions of scattering amplitudes in this regime, illustrated in Figure~\ref{Fig:SolveNeq4Figure}. 
A weak-coupling formulation makes contact with perturbative methods involving Feynman diagrams~\cite{Feynman:1949zx,Caron-Huot:2020bkp}; a ``holographic" strong-coupling formulation employs minimal-area surfaces in Anti-de Sitter space~\cite{Alday:2007hr,Alday:2010vh}; and a pentagon operator product expansion (POPE) approach exploits the two-dimensional integrability of a dual string picture at finite coupling in various kinematic limits~\cite{Alday:2010ku,Basso:2013vsa,Basso:2014pla}. These formulations are all mutually consistent but make use of different physics, are formulated in mathematically distinct ways, and make different properties of amplitudes manifest. A major task for the future will be to find a single unifying description of amplitudes in this theory that properly matches each of these formulations in the appropriate limit. Mathematically, this question can be framed as a search for the functions that are able to express the markedly varied behavior exhibited by amplitudes, from weak to strong coupling and in arbitrary kinematics. \begin{figure}[t] \begin{center} \includegraphics[width=6.5in]{SolveNeq4.pdf} \end{center} \caption{The three approaches that should be unified to solve ${\cal N}=4$ super-Yang-Mills theory in the planar limit: weak coupling via perturbation theory, strong coupling via minimal surfaces, and near-collinear kinematics via the pentagon operator product expansion at any coupling.} \label{Fig:SolveNeq4Figure} \end{figure} Many interesting facets of these questions deserve attention in the coming years. They include: How do gluonic and stringy descriptions morph into each other as the coupling and kinematics are varied? What kind of singularities show up, and what is the physics associated to them? How do holographic dualities, string theory, and even space-time itself, emerge dynamically from planar gauge theories? 
Solving scattering in planar ${\cal N}{=}4$ SYM theory will provide a quantitative test for our physical and mathematical expectations, and will lead to an improved intuition that can be applied to more general and realistic quantum field theories. More immediately, the goal of this white paper is to lay out a set of concrete goals that are within the reach of current technology, and that will allow us to make progress on these overarching questions. Before outlining these goals, we provide a brief overview of the state of knowledge about ${\cal N}{=}4$ SYM theory, starting with a review of its particle content and symmetries in section~\ref{sec:background}. In section~\ref{sec:amplituheron} we describe what is known about amplitudes in this theory at the level of the integrand, while their properties as functions are described in section~\ref{sec:mathematical_properties}. The more detailed understanding we have of certain kinematic limits is described in section~\ref{sec:kinematic_limits}, and in section~\ref{sec:bootstrap} we explain how this understanding can be combined with knowledge of the analytic properties of amplitudes to in some cases bootstrap them directly. Finally, we highlight some of the research questions that we expect will be important in the coming decade in section~\ref{sec:outlook}. \section{${\cal N} = 4$ Supersymmetric Yang-Mills Theory} \label{sec:background} The field content of SYM theory in four dimensions consists of a gauge field $A_\mu$, four Weyl fermion gluinos $\psi^{a A}$, and six scalar fields that are conventionally packaged into a two-index antisymmetric field $\phi^{AB} = - \phi^{BA}$. The indices $A,B$ are vector indices of an unbroken $SU(4)$ R-symmetry that is possessed by the theory. Since supersymmetry transformations relate the fields to each other, gauge invariance requires that $\psi^{a A}$ and $\phi^{AB}$ must transform in the adjoint representation of the gauge group, similar to $A_\mu$. 
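As a quick consistency check on this field content (our illustration, not from the white paper), the on-shell bosonic and fermionic states balance, as supersymmetry requires; the same $16$ states are graded binomially by powers of the Grassmann coordinates introduced below:

```python
from math import comb

# On-shell degrees of freedom: 2 gluon helicities plus 6 real scalars
# (bosons), and 4 Weyl gluinos with 2 helicity states each (fermions)
bosons = 2 + 6
fermions = 4 * 2

# Equivalently, the 16 states organize into C(4, k) states at Grassmann
# degree k = 0..4: 1 + 4 + 6 + 4 + 1
counts = [comb(4, k) for k in range(5)]
print(bosons, fermions, counts)  # 8 8 [1, 4, 6, 4, 1]
```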
The $\mathcal{N}=4$ supersymmetric (indeed, superconformal) Lagrangian~\cite{Brink:1976bc} for this field content is unique up to the choice of gauge group and the value of a single complex, dimensionless coupling constant. The on-shell degrees of freedom of the $\mathcal{N} = 4$ supermultiplet consist of a positive helicity gluon $g^+$, four $+\frac{1}{2}$ helicity gluino states $\tilde{g}_A$, six scalars $S_{AB}$, four $-\frac{1}{2}$ helicity gluino states $\smash{\overline{\tilde{g}}^A}$, and the negative helicity gluon $g^-$. It is useful to package this collection of on-shell states into an on-shell superfield~\cite{Nair:1988bq} \begin{equation} \Phi(p^{a \dot{a}}, \eta^A) = g^+(p) + \eta^A \tilde{g}_A(p) + \tfrac{1}{2} \eta^A \eta^B S_{AB}(p) + \tfrac{1}{6} \eta^A \eta^B \eta^C \epsilon_{ABCD} \overline{\tilde{g}}^D(p) + \tfrac{1}{24} \eta^A \eta^B \eta^C \eta^D \epsilon_{ABCD} g^-(p)\,, \end{equation} in terms of which superamplitudes are constructed. This object is a function of an on-shell four-momentum $p$, which is often parametrized in terms of spinor helicity variables as \begin{equation} \label{eq:spinorhelicity} p^{a \dot{a}} = \lambda^a \widetilde{\lambda}^{\dot{a}}\,, \end{equation} as well as four independent superspace coordinates $\eta^A$ satisfying \begin{equation} \{ \eta^A, \eta^B \} = 0\,. \end{equation} For more details on these conventions, see for instance~\cite{Elvang:2013cua}. 
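The bispinor parametrization of Eq.~(\ref{eq:spinorhelicity}) automatically enforces the massless condition, since $p^2$ is the determinant of the rank-one matrix $\lambda^a \widetilde{\lambda}^{\dot a}$. A minimal numerical illustration of this fact (ours, with arbitrary complex spinors):

```python
import random

random.seed(0)
# Random complex two-component spinors lambda^a and lambda-tilde^{adot}
lam = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
lamt = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]

# Bispinor momentum p^{a adot} = lambda^a lambda-tilde^{adot}
p = [[lam[a] * lamt[ad] for ad in range(2)] for a in range(2)]

# p^2 is (proportional to) det p^{a adot}; a rank-one outer product has
# vanishing determinant, so the momentum is automatically light-like
p2 = p[0][0] * p[1][1] - p[0][1] * p[1][0]
print(abs(p2))  # zero up to floating-point rounding
```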
The conservation of momentum and half of the components of supermomentum (those corresponding to the generators $Q^{a A} = \lambda^a \eta^A$)\footnote{The conservation of the $\overline{Q}^{\dot{a}}_A = \widetilde{\lambda}^{\dot{a}} \frac{\partial}{\partial \eta^A}$ is not manifest in this formalism, but implies non-trivial and powerful differential constraints on superamplitudes~\cite{Caron-Huot:2011zgw}.} can be made manifest by writing the $n$-particle superamplitude ${\cal{A}}_n$ with an explicit prefactor of \begin{equation} \delta^4\left(\sum_{i=1}^n \lambda_i^a \widetilde{\lambda}_i^{\dot{a}}\right) \delta^8\left(\sum_{i=1}^n \lambda_i^a \eta_i^A\right). \label{eq:deltas} \end{equation} (The sole exception is the so-called $\overline{\rm MHV}$ three-point amplitude, which exists due to the peculiarities of massless three-point kinematics in four dimensions.) In light of Eq.~(\ref{eq:deltas}), the Grassmann Taylor expansion of ${\mathcal{A}}_n$ evidently begins at ${\mathcal{O}}(\eta^8)$, and, thanks to R-symmetry, can only contain terms with $8 + 4 k$ powers of $\eta$. Terms in the expansion of ${\cal A}_n$ are conventionally denoted \begin{equation} \label{eq:pexpansion} {\cal A}_n(\lambda_i^a, \widetilde{\lambda}_i^{\dot{a}}, \eta_i^A) = \sum_{k = 0}^{n-4} {\cal A}_n^{{\rm N}^{k}{\rm MHV}}(\lambda_i^a, \widetilde{\lambda}_i^{\dot{a}}, \eta_i^A)\,, \end{equation} where ${\cal A}_n^{{\rm N}^{k}{\rm MHV}}$ is homogeneous of degree $4k+8$ in the $\eta$'s and MHV stands for maximally helicity violating. Because of the overall supermomentum conserving delta function, we can say that ${\cal A}_n^{{\rm N}^{k}{\rm MHV}}$ is equal to $\delta^8(q)$ times a homogeneous polynomial in the $\eta$'s of degree $4k$. The terms with $k=0,1,2,\ldots$ are referred to as MHV, NMHV (next-to-MHV), NNMHV (next-to-next-to-MHV), etc. 
In the planar limit, where only single-trace color structures contribute,\footnote{The color-stripped planar $n$-gluon amplitudes ${\cal A}_n$ are the coefficients of a single-trace color factor ${\rm Tr}(T^{a_1}T^{a_2}\cdots T^{a_n})$, and naturally transform under the dihedral group $D_n$.} it is useful to trivialize (super)momentum conservation by formulating this constraint geometrically. If we place the $n$ four-momentum vectors $p_i^{a \dot{a}}$ of the scattering particles head to tail in the order dictated by the color trace, they form a closed polygon in Minkowski space with light-like edges. Such a configuration may alternatively be described by the locations of its vertices, which we denote by $x_i$ and call dual coordinates. Specifically we have \begin{equation} \label{eq:dualvar} x^{a \dot{a}}_i - x^{a \dot{a}}_{i+1} = p^{a \dot{a}}_i\,, \end{equation} and we similarly have $n$ Grassmann dual coordinates $\theta_i^{a A}$ obeying \begin{equation} \theta_{i}^{aA} - \theta^{aA}_{i{+}1} = \lambda_i^a \eta^A_i \qquad \text{(no sum on $i$).} \end{equation} This notation not only serves to trivialize (super)momentum conservation (via periodicity in $i \rightarrow i+n$), \begin{equation} \delta^4(p) \delta^8(q) = \delta^4(x_{n+1} - x_1) \delta^8(\theta_{n+1} - \theta_1)\,, \end{equation} it also helps to expose a striking property of planar scattering amplitudes in SYM theory called dual (super)conformal symmetry~\cite{Drummond:2007au,Drummond:2008vq}---which is simply superconformal symmetry in $(x,\theta)$ space. One of the most remarkable developments in the understanding of scattering amplitudes in SYM theory is the discovery of the amplitude/Wilson loop correspondence~\cite{Alday:2007hr,Drummond:2007aua,Brandhuber:2007yx,Drummond:2007cf,Drummond:2007au,Bern:2008ap,Drummond:2008aq,Alday:2008yw,Adamo:2011pv}. 
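Before turning to the correspondence, the way dual coordinates trivialize momentum conservation can be made concrete: conservation of $\sum_i p_i$ is exactly the statement that the polygon built from Eq.~(\ref{eq:dualvar}) closes. A toy sketch of ours (light-likeness of each $p_i$, also required of physical kinematics, is not imposed here):

```python
import random

random.seed(0)
n = 6
# n momenta summing to zero: pick n-1 at random and fix the last one
p = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n - 1)]
p.append([-sum(q[mu] for q in p) for mu in range(4)])

# Dual coordinates: x_{i+1} = x_i - p_i, starting from an arbitrary x_1
x = [[0.0] * 4]
for q in p:
    x.append([x[-1][mu] - q[mu] for mu in range(4)])

# Momentum conservation <=> periodicity x_{n+1} = x_1 (the polygon closes)
closure = max(abs(x[n][mu] - x[0][mu]) for mu in range(4))
print(closure)  # zero up to rounding
```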
We saw that it was natural to picture the kinematic configuration $(p_1,\ldots,p_n)$ of $n$ null momenta satisfying energy-momentum conservation as a polygon with light-like edges and vertices located at the $n$ dual coordinates $(x_1,\ldots,x_n)$. Let $\langle W \rangle$ denote the expectation value of a Wilson loop associated to this polygon. In its simplest form, the amplitude/Wilson loop correspondence is the statement of the exact equivalence \begin{equation} \label{eq:Wilsonloop} \log \frac{{\cal{A}}_n^{\rm MHV}(p_1,\ldots,p_n)}{ {\cal{A}}_n^{\rm MHV}(p_1,\ldots,p_n)\rvert_{\rm tree-level}} = \log\,\langle W(x_1,\ldots,x_n) \rangle \end{equation} in planar SYM theory. A generalization of this formula is also known to hold for non-MHV amplitudes, when the Wilson loop is suitably decorated by the insertion of certain operators on its edges. Both sides of Eq.~(\ref{eq:Wilsonloop}) are divergent. The left-hand side has the usual infrared divergences of massless gauge theories, while the right-hand side has ultraviolet divergences arising from gluon exchange between adjacent edges of the polygon near its corners. Fortunately the divergences of both sides are very well-understood and take a simple factorized form. In dimensional regularization to $D = 4 - 2 \epsilon$ we can write~\cite{Magnea:1990zb,Catani:1998bh,Sterman:2002qn,Bern:2005iz} \begin{equation} \label{eq:logW} \log\,\langle W(x_1,\ldots,x_n) \rangle = \sum_{i=1}^n \Div(x_{i-1,i+1}^2; \epsilon) + \Fin_n(x_{ij}^2) \end{equation} where $x_{i,j}^2 = (x_i - x_j)^2$, \begin{equation} \Div(x^2; \epsilon) = - \frac{1}{4} \sum_{L=1}^\infty g^{2L} ( - x^2 \mu^2 )^{L \epsilon} \left[ \frac{\Gamma_{\rm cusp}^{(L)}}{(L \epsilon)^2} + \frac{\Gamma_{\rm collinear}^{(L)}}{L \epsilon} \right] \end{equation} and $\Fin_n(x_{ij}^2)$ is free of infrared and ultraviolet divergences. 
For a gauge group $SU(N)$ and gauge coupling $g_{\rm YM}$, we define \begin{equation} g^2 \equiv \frac{g_{\rm YM}^2 N}{16 \pi^2} = \frac{\lambda}{16 \pi^2} \,, \end{equation} where $\lambda$ is the 't Hooft coupling of planar SYM theory, $\mu$ is an arbitrary mass parameter, and the two sequences of numbers denoted $\Gamma^{(L)}$ are respectively the $L$-loop cusp and collinear anomalous dimensions.\footnote{The collinear anomalous dimension for amplitudes differs from that for Wilson loops; the difference drops out for suitable finite ratios.} The Wilson loop has conformal symmetry in $x$-space (this is the dual conformal symmetry of the corresponding amplitude), except that this is broken by the UV divergences at the cusps. In the $\epsilon \to 0$ limit, the breaking of the conformal symmetry manifests itself as an anomalous Ward identity~\cite{Drummond:2007au} \begin{equation} \label{eq:ward} K^\mu \Fin_n(x_{ij}^2) = \frac{1}{2} \Gamma_{\text{cusp}}(g^2) \sum_{i=1}^n x_{i,i+1}^\mu \log \left( x_{i,i+2}^2/ x_{i-1,i+1}^2\right) \end{equation} for the generator of special conformal transformations \begin{equation} K^\mu = \sum_{i=1}^n \left[ 2 x_i^\mu x_i^\nu \frac{\partial}{\partial x_{i \nu}} - x_i^2 \frac{\partial}{\partial x_{i \mu}} \right]. \end{equation} One particular solution to the conformal Ward identity~(\ref{eq:ward}) is a function of the $x_{ij}^2$ known as the BDS ansatz~\cite{Bern:2005iz}.\footnote{More precisely, the term ``BDS ansatz'' usually refers to the sum of this particular solution and the divergent terms displayed explicitly in Eq.~(\ref{eq:logW}).} Therefore, the Ward identity completely determines the Wilson loop expectation value (and hence, the MHV amplitude) up to the addition of any homogeneous solution of Eq.~(\ref{eq:ward}), i.e.~up to any dual conformally invariant function of the $x_{ij}^2$.
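In particular, any cross ratio $x_{ij}^2 x_{kl}^2/(x_{ik}^2 x_{jl}^2)$ is annihilated by $K^\mu$; equivalently, it is invariant under finite special conformal transformations, under which $x_{ij}^2 \to x_{ij}^2/(\Omega_i \Omega_j)$ with $\Omega_i = 1 - 2\, b\cdot x_i + b^2 x_i^2$. A small numerical sketch of ours (generic points in Euclidean signature are used, since only the invariance of the ratio is being tested):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))        # six generic points in R^4 (Euclidean)
b = 0.05 * rng.standard_normal(4)      # parameter of a special conformal transformation

def sct(x, b):
    # x^mu -> (x^mu - b^mu x^2) / (1 - 2 b.x + b^2 x^2), applied to each point
    x2 = np.sum(x * x, axis=1)
    omega = 1.0 - 2.0 * (x @ b) + np.dot(b, b) * x2
    return (x - np.outer(x2, b)) / omega[:, None]

def cross_ratio(x, i, j, k, l):
    # x_{ij}^2 x_{kl}^2 / (x_{ik}^2 x_{jl}^2), with 1-indexed labels
    d2 = lambda a, c: np.sum((x[a - 1] - x[c - 1]) ** 2)
    return d2(i, j) * d2(k, l) / (d2(i, k) * d2(j, l))

u_before = cross_ratio(x, 1, 3, 4, 6)
u_after = cross_ratio(sct(x, b), 1, 3, 4, 6)
v_before = cross_ratio(x, 2, 4, 5, 1)
v_after = cross_ratio(sct(x, b), 2, 4, 5, 1)
```

The invariance is exact because each $\Omega_i$ appears once in the numerator and once in the denominator of the cross ratio.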
The finite, dual conformally invariant quantity obtained by subtracting the BDS ansatz from Eq.~(\ref{eq:Wilsonloop}) is called the MHV remainder function $R_n$. Because the polygon edges are light-like, $x_{i,i+1}^2=0$, it is impossible to form any non-trivial dual conformal cross ratios for $n<6$. So the first nontrivial instance of the remainder function is for $n=6$, where three independent cross ratios \begin{equation} \label{eq:six_point_uvw} u = \frac{x_{13}^2 x_{46}^2}{x_{14}^2 x_{36}^2}, \qquad v = \frac{x_{24}^2 x_{51}^2}{x_{25}^2 x_{41}^2}, \qquad w = \frac{x_{35}^2 x_{62}^2}{x_{36}^2 x_{52}^2}\, \end{equation} can be defined. In general, there are $3(n-5)$ independent variables, and this is the dimensionality of the phase space for $n$-point scattering in planar SYM, five variables fewer than in a generic theory. In practice, it is often convenient to parametrize dual-conformally-invariant kinematics using momentum twistors~\cite{Hodges:2009hk}, which are four-component objects defined by \begin{equation} Z^I_i = (\lambda_i^a, x_i^{b \dot{a} }\lambda_{i b})\, \end{equation} for each particle index $i$, where $I= (a, \dot{a})$ is a combined $SU(2,2)$ index. Momentum twistors are defined only up to overall rescalings $Z_i^I \to t_i Z_i^I$, and as such represent points in $\mathbb{CP}^3$. They also transform linearly under dual conformal transformations. In $n$-particle kinematics, they can be assembled into a $4 \times n$ matrix \begin{equation} Z \in \text{Gr}(4,n)/ \text{GL}(1)^{n-1}\, , \end{equation} which corresponds to a point in the Grassmannian of four-dimensional subspaces of $\mathbb{C}^n$, modulo independent rescalings on its columns.
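A standard consequence of these definitions is the relation $x_{ij}^2 = \langle i{-}1\,i\,j{-}1\,j\rangle/(\langle i{-}1\,i\rangle\langle j{-}1\,j\rangle)$, where $\langle ijkl\rangle = \det(Z_iZ_jZ_kZ_l)$, so that dual conformal cross ratios can be evaluated directly from such determinants, e.g.~$u = \langle 6123\rangle\langle 3456\rangle/(\langle 6134\rangle\langle 2356\rangle)$. The following sketch (ours, in Python/numpy; the $\epsilon_{ab}$ convention and the choice $x_1=0$ drop out of any cross ratio) builds momentum twistors from spinor-helicity data and checks this numerically for $u$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# complex spinor-helicity data with total momentum  sum_i lam_i lamt_i^T = 0
lam = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
lamt = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
P = sum(np.outer(lam[i], lamt[i]) for i in range(n - 2))
sol = -np.linalg.solve(np.column_stack([lam[n - 2], lam[n - 1]]), P)
lamt[n - 2], lamt[n - 1] = sol[0], sol[1]

# dual coordinates as 2x2 bispinors: x_i - x_{i+1} = lam_i lamt_i^T, x_1 = 0
x = [np.zeros((2, 2), dtype=complex)]
for i in range(n - 1):
    x.append(x[i] - np.outer(lam[i], lamt[i]))

# momentum twistors Z_i = (lam_i^a, x_i^{b adot} lam_{i b}); 0-indexed labels below
EPS = np.array([[0.0, 1.0], [-1.0, 0.0]])
Z = [np.concatenate([lam[i], x[i].T @ (EPS @ lam[i])]) for i in range(n)]

def br(i, j, k, l):                    # four-bracket <ijkl>, labels mod n
    return np.linalg.det(np.column_stack([Z[m % n] for m in (i, j, k, l)]))

def x2(i, j):                          # x_{ij}^2 = det(x_i - x_j)
    return np.linalg.det(x[i % n] - x[j % n])

# the cross ratio u, once from dual coordinates and once from four-brackets
u_x = x2(0, 2) * x2(3, 5) / (x2(0, 3) * x2(2, 5))
u_Z = br(5, 0, 1, 2) * br(2, 3, 4, 5) / (br(5, 0, 2, 3) * br(1, 2, 4, 5))
```

Because each $Z_i$ appears the same number of times in the numerator and denominator, the individual rescalings of the momentum twistors cancel in any such cross ratio.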
Up to these rescalings, every element of $\text{Gr}(4,n)$ thus specifies a point in $n$-particle kinematics; in particular, the value of dual conformal cross ratios such as those shown in Eq.~\eqref{eq:six_point_uvw} can be computed by making the replacement \begin{equation} x_{ij}^2 \rightarrow \text{det} (Z_{i-1} Z_i Z_{j-1} Z_j ) \, , \end{equation} as all other factors in these ratios cancel out. In the literature, these determinants are usually denoted by four-brackets as $\langle i j k l \rangle = \text{det} (Z_i Z_j Z_k Z_l)$. \section{Amplitude Integrands and the Amplituhedron} \label{sec:amplituheron} At tree level, scattering amplitudes are rational functions of kinematical variables. In the planar limit they are especially simple, and poles can only appear when squared sums of consecutive momenta vanish, namely when $(p_i{+}p_{i{+}1}{+}{\cdots}{+}p_j)^2=0$. Amplitudes factorize on these poles into pairs of subamplitudes, as dictated by unitarity. At loop level, while amplitudes generally evaluate to complicated transcendental functions, it is possible (in the planar case) to define a rational $n$-point N$^k$MHV $\ell$-loop \emph{integrand} ${\cal I}_{n,k}^{\rm \ell-loop}$, which is a function of both external kinematics and loop momenta. We can think about these rational functions as the integrands one would get from summing over all Feynman diagrams prior to integration; however, because of the power-counting properties of $\mathcal{N}=4$ SYM theory, these functions are also uniquely determined by the requirement that they satisfy all possible cuts of the amplitude \cite{Arkani-Hamed:2010zjl}. In practice, this fact can be used to compute ${\cal I}_{n,k}^{\rm \ell-loop}$ much more efficiently than would be possible using Feynman diagrams. Although one must integrate over the loop momenta in these integrands to obtain the full loop-level amplitude, it also proves interesting to study these rational functions prior to integration. 
In doing so, it is usually advantageous to make use of momentum twistors, as defined in section~\ref{sec:background}. These variables make the dual conformal symmetry of planar ${\cal N}=4$ SYM amplitudes completely manifest, and furnish the space of kinematics with a nice geometric interpretation. Namely, in momentum twistor space, we have $n$ ordered momentum twistors $Z_i$ representing the external momenta, and $\ell$ lines $(AB)_j$ representing the independent loop momenta, which are each represented by a pair of points $Z_A$ and $Z_B$. In this framework, the cuts of the amplitude only have support on special configurations of the lines $(AB)_j$, in which they intersect the external lines $Z_iZ_{i{+}1}$ in a specified way. Given the function ${\cal I}_{n,k}^{\rm \ell-loop}$, which is constructed to match the predictions of field theory on all of the amplitude's cuts, the full integrand form $\Omega_{n,k}^{\rm \ell-loop}$ for the $n$-point N$^k$MHV amplitude at $\ell$ loops is given by \begin{equation} \Omega_{n,k}^{\rm \ell-loop} = d\mu_1 d\mu_2\dots d\mu_\ell\,{\cal I}_{n,k}^{\rm \ell-loop} \, , \end{equation} where $d\mu_j = \langle AB\,d^2A\rangle\langle AB\,d^2B\rangle$ for each loop momentum $(AB)_j$. In order to carry out these integrals over the loop momenta, one must generally regularize them in the infrared. The integrand form $\Omega_{n,k}^{\rm \ell-loop}$ can also be obtained as the canonical differential form on the \emph{Amplituhedron geometry} \cite{Arkani-Hamed:2013jha,Arkani-Hamed:2013kca}. This geometry is defined as a special configuration of momentum twistors $Z_i$ and lines $(AB)_j$ which are subject to certain positivity conditions.
For $\Omega_{n,k}^{\rm \ell-loop}$, the momentum twistors are chosen to satisfy \begin{equation} \langle i\,i{+}1\,j\,j{+}1\rangle>0 \, , \quad \mbox{where the series $\{\langle 1234\rangle, \langle 1235\rangle,\dots,\langle 123n\rangle\}$ has $k$ sign flips.} \end{equation} In addition, each loop momentum line $(AB)$ must satisfy \begin{equation} \!\! \langle AB\,i\,i{+}1\rangle>0 \, , \quad \mbox{where the series $\{\langle AB12\rangle, \langle AB13\rangle,\dots,\langle AB1n\rangle\}$ has $k{+}2$ sign flips.} \end{equation} For each pair of lines $(AB)_i$, $(AB)_j$, we also require that $\langle (AB)_i(AB)_j\rangle>0$. This defines the loop Amplituhedron space ${\cal A}_{n,k}^{(\ell)}$. We can think about the Amplituhedron as being parametrized by a set of variables $x_i$ (the degrees of freedom in $Z_i$ and $(AB)_j$) which are subject to certain polynomial inequalities. The boundaries of the Amplituhedron correspond to the loci where either $\langle i\,i{+}1\,j\,j{+}1\rangle=0$ or $\langle (AB)_i\,j\,j{+}1\rangle=0$, which are exactly the poles of the loop integrand ${\cal I}_{n,k}^{\rm \ell-loop}$. In general, we can define a canonical form $\smash{\omega_{n,k}^{\rm \ell-loop}}$ that has logarithmic singularities on the boundaries of the space $\smash{\cal A}_{n,k}^{(\ell)}$. Namely, this form has a singularity of the form $\smash{dx/x}$ whenever we approach one of the boundaries of the Amplituhedron corresponding to $x = 0$ (and nowhere else). This canonical form is guaranteed to exist for $\smash{\cal A}_{n,k}^{(\ell)}$, and to be unique. The amplitude form $\smash{\Omega_{n,k}^{\rm \ell-loop}}$ can then be obtained from $\smash{\omega_{n,k}^{\rm \ell-loop}}$ by a simple replacement \begin{equation} \Omega_{n,k}^{\rm \ell-loop} = \omega_{n,k}^{\rm \ell-loop}(dZ_i \rightarrow \eta_i) \, , \end{equation} where the differentials $dZ_i$ are replaced by the fermionic variables $\eta_i$.
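For $k=0$ these positivity conditions are satisfied, for example, by points on the moment curve $Z_i = (1, t_i, t_i^2, t_i^3)$ with $t_1 < \cdots < t_n$, for which every ordered $4\times 4$ minor is a positive Vandermonde determinant. A small numerical sketch of ours (0-indexed labels) confirms the positivity of the ordered minors and the zero-sign-flip condition characterizing the MHV sector:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 8
t = np.sort(rng.uniform(0.0, 1.0, n))                 # ordered parameters t_1 < ... < t_n
Z = np.array([[1.0, ti, ti**2, ti**3] for ti in t])   # moment curve; row i is Z_i

def br(i, j, k, l):                                   # four-bracket <ijkl>
    return np.linalg.det(Z[[i, j, k, l]])

# every ordered minor is a Vandermonde determinant, hence positive
minors = [br(*c) for c in combinations(range(n), 4)]

# sign-flip test: the sequence {<012 i>} for i = 3..n-1 should have zero
# sign flips, identifying this configuration as k = 0 (MHV)
seq = [br(0, 1, 2, i) for i in range(3, n)]
flips = sum(1 for a, c in zip(seq, seq[1:]) if a * c < 0)
```

In particular, the non-wrapping conditions $\langle i\,i{+}1\,j\,j{+}1\rangle>0$ are automatic here, since they are a subset of the ordered minors.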
This provides a completely geometric reformulation of ${\cal N}=4$ tree-level amplitudes, as well as the integrands of planar loop-level amplitudes. In particular, the usual physical properties of amplitudes, such as their singularity and branch cut structure, emerge as nontrivial consequences of the positivity conditions that define the Amplituhedron geometry. Moreover, the computational problem of obtaining a particular loop integrand $\Omega_{n,k}^{\rm \ell-loop}$ as a sum of Feynman diagrams, or via recursion relations, is translated into the mathematical problem of triangulating the Amplituhedron space. One promising direction in the effort to extend the Amplituhedron picture to other theories is the formulation of the positive geometry in the Mandelstam or spinor helicity space \cite{Arkani-Hamed:2017mur,He:2018okq,Damgaard:2019ztj}. While a closed-form expression for the integrand of planar amplitudes in $\mathcal{N}=4$ SYM theory remains an open problem, the geometric problem has been solved for some all-loop-order cuts \cite{Arkani-Hamed:2018rsk}. Moreover, the connection to Wilson loops and infrared-finite quantities has allowed for the development of interesting geometric approximations for all-loop quantities including the cusp anomalous dimension \cite{Arkani-Hamed:2021iya}. The Amplituhedron picture has also been studied extensively from the purely mathematical perspective \cite{Karp:2017ouj,Lukowski:2020dpn,Parisi:2021oql}, as it provides a substantial generalization of the positive Grassmannian \cite{Postnikov:2006kva,Arkani-Hamed:2012zlh}. As a mathematical structure, it is closely related to cluster algebras and other interesting algebraic structures that remain intact after one carries out the integration over loop momenta, as we discuss in the next section.
\section{Mathematical Properties at Loop Level} \label{sec:mathematical_properties} At loop level, amplitudes generally evaluate to transcendental functions that have an exceedingly complicated branch cut structure. Since our \emph{a priori} understanding of this analytic structure remains limited beyond special cases such as $2 \to 2$ scattering, most amplitudes of interest remain prohibitively difficult to evaluate using current technology. Despite this general situation, several infinite classes of amplitudes in planar $\mathcal{N}=4$ SYM theory have been uncovered over the last two decades whose analytic structure is simple enough to be understood either to all loop orders or at all particle multiplicities. In particular, the MHV amplitudes in this theory are known to two loops for any number of particles~\cite{Caron-Huot:2011zgw,Golden:2013xva,Golden:2014xqa}, while its six- and seven-particle amplitudes have been computed to high loop orders~\cite{Caron-Huot:2020bkp} and are not expected to exhibit new types of analytic structure at higher orders in perturbation theory. One of the advantageous features of these classes of amplitudes is that they can be expressed in terms of multiple polylogarithms, or iterated integrals over logarithmic integration kernels~\cite{Chen,G91b,Goncharov:1998kja,Remiddi:1999ew,Borwein:1999js,Moch:2001zr}, whose properties as special functions are well understood.
In particular, the analytic structure of multiple polylogarithms can be systematically exposed using the symbol~\cite{Goncharov:2010jf}, which maps multiple polylogarithms to a tensor product of logarithms that encodes all the logarithmic and algebraic branch cuts of the original function.\footnote{Technically, this is only true modulo contributions proportional to transcendental constants such as $\zeta_3$; however, these terms can be captured as well by upgrading the symbol to a full coaction~\cite{Gonch2,Brown:2011ik,Brown1102.1312,Duhr:2012fh}.} Moreover, as the identities that hold between logarithms are completely understood (up to algebraic identities between their arguments), it is easy to find all identities between the symbols of multiple polylogarithms; these identities can then be uplifted to the original space of multiple polylogarithms through the inclusion of further contributions proportional to transcendental constants~\cite{Duhr:2011zq,Duhr:2012fh}.\footnote{We highlight that it can still prove highly nontrivial to find all identities between multiple polylogarithms when complicated algebraic functions appear in the arguments of these logarithms; see for instance~\cite{Bourjaily:2019igt}.} Algorithms also exist for systematically expressing multiple polylogarithms in terms of so-called fibration bases, which allow these functional relations to be imposed on an expression systematically~\cite{FBThesis,Anastasiou:2013srw,Panzer:2014caa}. A great deal has been learned about the specific logarithmic arguments---or symbol letters---that appear in polylogarithmic amplitudes in planar $\mathcal{N}=4$ SYM theory from the study of the theory's two-loop MHV amplitudes. In~\cite{Golden:2013xva} it was shown that the symbol letters that appear in the $n$-particle instance of this class of amplitudes can always be chosen to be cluster coordinates defined on the Grassmannian $\text{Gr}(4,n)$~\cite{1021.16017}.
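As a tiny illustration of how the symbol trivializes functional identities (a sketch of ours, with symbols stored as dictionaries from tuples of letters to integer coefficients), one can check Euler's reflection identity $\mathrm{Li}_2(x) + \mathrm{Li}_2(1-x) + \log x \log(1-x) = \pi^2/6$ at symbol level, where the constant on the right-hand side maps to zero:

```python
from collections import defaultdict

def add_symbols(*symbols):
    # add tensor-product expressions stored as {tuple of letters: coefficient}
    out = defaultdict(int)
    for s in symbols:
        for term, coeff in s.items():
            out[term] += coeff
    return {term: coeff for term, coeff in out.items() if coeff != 0}

S_Li2_x = {("1-x", "x"): -1}                      # S[Li_2(x)]     = -(1-x) (x) x
S_Li2_1mx = {("x", "1-x"): -1}                    # S[Li_2(1-x)]   = -x (x) (1-x)
S_loglog = {("x", "1-x"): 1, ("1-x", "x"): 1}     # S[log x log(1-x)], a shuffle of two logs

# symbol of Li_2(x) + Li_2(1-x) + log(x) log(1-x): a constant, so it vanishes
euler = add_symbols(S_Li2_x, S_Li2_1mx, S_loglog)
```

The two-letter tensors cancel pairwise, leaving the empty symbol, exactly as expected for a transcendental constant.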
This initial observation has led to a prolific body of work that has tied the analytic structure of these amplitudes to cluster algebras~\cite{Golden:2014pua,Golden:2014xqa,Golden:2014xqf,Drummond:2017ssj,Bourjaily:2018aeq,Drummond:2018dfd,Golden:2018gtk,Drummond:2018caf,Golden:2019kks,Golden:2021ggj}, and to closely related algebraic structures such as tropical fans and polytopes~\cite{Drummond:2019qjk,Drummond:2019cxm,Arkani-Hamed:2019rds,Henke:2019hve,Drummond:2020kqg,Mago:2020kmp,Chicherin:2020umh,Mago:2020nuv,Herderschee:2021dez,He:2021esx,Mago:2021luw,Henke:2021ity,Ren:2021ztg,Yang:2022gko}. Of particular note are the cluster adjacency conditions~\cite{Drummond:2017ssj}, which state that a pair of symbol letters can appear in adjacent entries of the symbol of (appropriately normalized) amplitudes only when the two letters appear together in a cluster. In all known cases, this requirement has been observed to be equivalent to the implications of the extended Steinmann relations~\cite{Steinmann,Steinmann2,Cahill:1973qp,Caron-Huot:2016owq,Caron-Huot:2019bsq}, which restrict these amplitudes from having nonzero double discontinuities in partially-overlapping momentum channels (at any depth in the symbol). Similar constraints on the analytic properties of amplitudes have also been deduced more directly from the Landau equations~\cite{landau1959} and cut integrals~\cite{Cutkosky:1960sp}; see for instance~\cite{Abreu:2014cla,Dennen:2015bet,Bloch:2015efx,Kreimer:2016tqq,Dennen:2016mdk,Prlina:2017azl,Abreu:2017ptx,Prlina:2017tvx,Prlina:2018ukf,Bourjaily:2020wvq,Benincasa:2020aoj,Hannesdottir:2021kpd,Hannesdottir:2022bmo} for recent work in this direction. The six- and seven-particle amplitudes in planar $\mathcal{N}=4$ SYM theory are especially simple, insofar as the cluster coordinates defined on $\text{Gr}(4,6)$ and $\text{Gr}(4,7)$ appear to describe the full set of symbol letters that appear in these amplitudes at any loop order.
As we will review in section~\ref{sec:bootstrap}, this expectation has been leveraged to bootstrap the six-particle amplitude through seven loops~\cite{Dixon:2011pw,Dixon:2011nj,Dixon:2013eka,Dixon:2014voa,Dixon:2014iba,Dixon:2015iva,Caron-Huot:2016owq,Dixon:2016apl,Caron-Huot:2019vjl,Dixon:2020cnr}, and the seven-particle amplitude through four loops~\cite{Drummond:2014ffa,Dixon:2016nkn,Drummond:2018caf,Dixon:2020cnr}. Access to such high-loop data has, in turn, made it possible to uncover additional types of structure in these amplitudes, such as interesting number-theoretic symmetries under the cosmic Galois group~\cite{Cartier2001,2008arXiv0805.2568A,2008arXiv0805.2569A,Brown:2015fyf}. These symmetries restrict what numerical constants are expected to appear in these amplitudes perturbatively; for instance, the constant $\zeta_3$ is not expected to appear in the six-particle amplitude at a particular kinematic point, at any loop order, when it is `cosmically normalized' in the way described in~\cite{Caron-Huot:2019bsq}. Notably, similar number-theoretic symmetry properties have been observed in massless $\phi^4$ theory~\cite{Schnetz:2013hqa,Panzer:2016snt}, QED~\cite{Laporta:2017okg,Schnetz:2017bko}, and string theory~\cite{Schlotterer:2012ny}. Functions beyond multiple polylogarithms also appear in planar $\mathcal{N}=4$ SYM theory, for instance at two loops for ten particles~\cite{Caron-Huot:2012awx,Paulos:2012nu,Nandan:2013ip,Bourjaily:2017bsb,Kristensson:2021ani,Wilhelm:2022wow}.
A great deal of work has gone into understanding the next class of special functions that naturally arises, which involves integrals over elliptic curves~\cite{SABRY1962401,Broadhurst:1993mw,Berends:1993ee,Bauberger:1994by,Bauberger:1994hx,Bauberger:1994nk,Laporta:2004rb,Groote:2005ay,MullerStach:2012az,Groote:2012pa,Bloch:2013tra,Adams:2013kgc,Adams:2014vja,Adams:2015gva,Adams:2015ydq,Adams:2016xah,Remiddi:2016gno,Adams:2017tga,Adams:2017ejb,Bogner:2017vim,Remiddi:2017har,Bourjaily:2017bsb,Broedel:2017siw,Chen:2017soz,Adams:2018yfj,Broedel:2018iwv,Adams:2018bsn,Adams:2018kez,Honemann:2018mrb,Bogner:2019lfa,Broedel:2019hyg,Broedel:2019kmn,Bourjaily:2020hjv,Bourjaily:2021vyj,Kristensson:2021ani,Wilhelm:2022wow}. Even more complicated integrals, such as integrals over hyperelliptic curves~\cite{Huang:2013kh,Hauenstein:2014mda} and over Calabi-Yau manifolds of unbounded dimension~\cite{Groote:2005ay,Brown:2010bw,Bloch:2014qca,Bloch:2016izu,mirrors_and_sunsets,Primo:2017ipr,Bourjaily:2018ycu,Bourjaily:2018yfy,Bourjaily:2019hmc,Klemm:2019dbm,Bonisch:2020qmm,Bonisch:2021yfw} also appear. The evaluation of amplitudes that involve functions more complicated than multiple polylogarithms remains an active area of research; see~\cite{Bourjaily:2022bwx} for a white paper devoted to this topic, and~\cite{Weinzierl:2022eaz} for a more pedagogical introduction. \section{Special Kinematic Limits} \label{sec:kinematic_limits} Much more is known about the structure of amplitudes in $\mathcal{N}=4$ SYM theory in special kinematic limits. One important class of examples are multi-Regge limits, where all outgoing particles are strongly ordered in rapidity. Amplitudes exponentiate in these limits and admit an effective description as an expansion in large logarithms. 
This exponentiation is especially well understood in the planar limit of this theory~\cite{Bartels:2009vkz,Fadin:2011we,Bartels:2011ge,Lipatov:2012gk,Dixon:2012yy,Bartels:2013jna,Basso:2014pla,Dixon:2014iba,DelDuca:2016lad,DelDuca:2018hrv,DelDuca:2019tur}, where it has been shown that the coefficients multiplying these large logarithms are always expressible in terms of specific classes of single-valued multiple polylogarithms~\cite{Dixon:2012yy,DelDuca:2016lad}. Moreover, predictions for these expansions are available at all loop orders and for any number of particles~\cite{Basso:2014pla,DelDuca:2019tur}. For a recent introduction to this topic, see for instance~\cite{DelDuca:2022skz}. Another interesting limit that has been studied in great detail is the near-collinear limit of planar amplitudes in $\mathcal{N}=4$ SYM theory. In the dual theory, this limit admits a non-perturbative description in terms of the so-called pentagon operator product expansion (POPE)~\cite{Alday:2010ku,Basso:2013vsa,Basso:2013aha,Basso:2014koa,Basso:2014jfa,Basso:2014nra,Belitsky:2014sla,Belitsky:2014lta,Basso:2014hfa,Belitsky:2015efa,Basso:2015rta,Basso:2015uxa,Belitsky:2016vyq}, by means of which the amplitude can be computed as an expansion in terms of flux-tube excitations crossing the Wilson loop. While in principle the POPE encodes the full amplitude, it is not yet known how to resum this expansion beyond one loop~\cite{Lam:2016rel,Bork:2019aud}. A form factor operator product expansion (FFOPE) has recently been developed in planar $\mathcal{N}=4$ SYM theory~\cite{Sever:2020jjx,Sever:2021nsq,Sever:2021xga}, which leverages the duality between form factors and Wilson loops in a periodic target space~\cite{Alday:2007he,Maldacena:2010kp,Brandhuber:2010ad}. Further limits have also been studied for six and seven particles, where a number of amplitudes have been computed in general kinematics.
These include multi-particle factorization limits~\cite{Dixon:2015iva,Dixon:2016nkn}, and the behavior of the six-particle amplitude at the origin of its kinematic space, which is conjecturally known to all loop orders~\cite{Basso:2020xts}. The amplitude also becomes singular when the dual Wilson polygon crosses itself, and an evolution equation has been derived that governs these singularities, as well as a proposed all-orders resummation~\cite{Dixon:2016epj,Caron-Huot:2019vjl}. \section{Perturbative Bootstrap Calculations} \label{sec:bootstrap} The BDS ansatz for planar amplitudes in $\mathcal{N}=4$ SYM theory needs to be corrected for amplitudes involving more than five particles, starting at two loops. As reviewed in section~\ref{sec:background}, these corrections take the form of finite functions of dual-conformally-invariant cross ratios. In an impressive calculation, the first nontrivial correction---the two-loop correction to the six-particle amplitude---was integrated directly using Mellin-Barnes techniques, and found to be expressible in terms of multiple polylogarithms~\cite{DelDuca:2009au,DelDuca:2010zg}. Symbol methods were subsequently used to put this function into a more parsimonious form, which makes it clear that only nine symbol letters appear~\cite{Goncharov:2010jf}: \begin{equation} \mathcal{S}_6 = \{u, v, w, 1-u, 1-v, 1-w, y_u, y_v, y_w \} \, , \end{equation} where $u$, $v$, and $w$ were defined in~\eqref{eq:six_point_uvw}, and \begin{equation} y_u = \frac{u-z_+}{u-z_-}\,, \qquad y_v = \frac{v-z_+}{v-z_-}\,, \qquad y_w = \frac{w - z_+}{w - z_-}\, , \end{equation} where \begin{equation} z_\pm = \frac{1}{2}\Bigl[-1+u+v+w \pm \sqrt{\Delta}\Bigr], \qquad \Delta = (1-u-v-w)^2 - 4 u v w \, . \end{equation} In other words, this function only develops logarithmic branch cuts on the nine codimension-one surfaces where these letters vanish in the space of dual-conformally-invariant cross ratios.
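Note that $z_\pm$ are precisely the two roots of $z^2 - (u+v+w-1)\,z + uvw = 0$, so that $\Delta = (z_+ - z_-)^2$; a quick numerical check of these relations at a sample point (values ours):

```python
import numpy as np

u, v, w = 0.08, 0.05, 0.11                    # a sample point where Delta > 0
Delta = (1 - u - v - w) ** 2 - 4 * u * v * w
zp = 0.5 * (-1 + u + v + w + np.sqrt(Delta))
zm = 0.5 * (-1 + u + v + w - np.sqrt(Delta))

# the remaining three symbol letters of S_6
y_u, y_v, y_w = [(r - zp) / (r - zm) for r in (u, v, w)]
```

For kinematics with $\Delta < 0$ the square root, and hence the $y$ letters, become complex, and the same formulas apply with a complex square root.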
Equipped with this insight into the analytic structure of the two-loop contribution, a bootstrap approach to computing the six-particle remainder function at higher loops was initiated in~\cite{Dixon:2011pw}. This approach starts from the assumption that no further symbol letters appear in the dual-conformally-invariant correction to the BDS ansatz, and tries to identify the unique polylogarithmic function that has all the right properties to encode the amplitude. This assumption turns out to be valid, and bootstrap methods have now been used to determine the amplitude through seven loops~\cite{Dixon:2011pw,Dixon:2011nj,Dixon:2013eka,Dixon:2014voa,Dixon:2014iba,Dixon:2015iva,Caron-Huot:2016owq,Dixon:2016apl,Caron-Huot:2019vjl}. As part of this work, the mathematical properties of the six-particle amplitude have been studied in great depth, and are now known to include: \begin{enumerate} \item[(i)] {\bf Dihedral Symmetry --} The amplitude is invariant under relabelings of its external legs that respect the original planar ordering. \item[(ii)] {\bf Branch Cut Conditions --} When formulated in the Euclidean region, the amplitude should only develop branch cuts where one of the Mandelstam invariants vanishes or approaches infinity. \item[(iii)] {\bf Final Entry Conditions --} Only certain letters are allowed to appear in the last entry of the symbol, as prescribed by the action of the $\bar{Q}$ equation~\cite{Caron-Huot:2011zgw,Caron-Huot:2011dec,Bullimore:2011kg}. \item[(iv)] {\bf Extended Steinmann Relations --} When appropriately normalized, the amplitude never involves sequential discontinuities in partially-overlapping three-particle momentum channels~\cite{Steinmann,Steinmann2,Cahill:1973qp,Caron-Huot:2019bsq}. This turns out to be equivalent to the cluster adjacency conditions proposed in~\cite{Drummond:2017ssj}.
\item[(v)] {\bf Cosmic Galois Coaction Principle --} The span of functions of fixed weight that appear in the first entry of the coaction of the amplitude stabilizes after a certain number of loop orders; this observation restricts the space of functions that are expected to show up to all higher loop orders~\cite{Caron-Huot:2019bsq}. \item[(vi)] {\bf Multi-Regge Kinematics --} In this limit, the outgoing particles are strongly ordered in rapidity and the amplitude exponentiates. It can be independently computed as an expansion in large logarithms using an effective description in terms of an impact factor and BFKL eigenvalue~\cite{Bartels:2009vkz,Fadin:2011we,Bartels:2011ge,Lipatov:2012gk,Dixon:2012yy,Basso:2014pla,Dixon:2014iba}. \item[(vii)] {\bf Near-Collinear Kinematics --} The expansion of the amplitude around collinear limits is described at finite coupling by the POPE~\cite{Alday:2010ku,Basso:2013vsa,Basso:2013aha,Basso:2014koa,Basso:2014jfa,Basso:2014nra,Belitsky:2014sla,Belitsky:2014lta,Basso:2014hfa,Belitsky:2015efa,Basso:2015rta,Basso:2015uxa,Belitsky:2016vyq}, which makes it possible to independently compute the first terms in this near-collinear expansion at fixed loop order. \item[(viii)] {\bf Self-Crossing Kinematics --} Kinematics in which the transverse momentum of a pair of outgoing gluons vanishes and the amplitude becomes singular. The singular terms are governed by an evolution equation, and have been determined to high loop order~\cite{Dixon:2016epj}. \item[(ix)] {\bf Behavior Near the `Origin' --} The behavior of the MHV amplitude near the `origin' of six-particle kinematics, where $u$, $v$, and $w$ all vanish, is conjecturally understood to all loop orders~\cite{Basso:2020xts}, and it is the exponential of a quadratic form in the logarithms of $u,v,w$. 
\end{enumerate} It is believed that the six-particle amplitude is determined by (a subset of) these constraints to all orders in perturbation theory; however, in practice constructing the explicit functions becomes too computationally intensive beyond seven or eight loops. To illustrate the power of this approach, we present the number of free parameters that remain at various steps of the bootstrap procedure for the MHV amplitude through six loops in Table~\ref{tab:six_MHV}. After the amplitude has been uniquely determined, the remaining constraints act as cross checks. The NMHV helicity configuration works in a similar fashion. \renewcommand{\arraystretch}{1.25} \begin{table}[!t] \centering \begin{tabular}[t]{l c c c c c c} \hline Constraint & $L=1$\, & $L=2$\, & $L=3$\, & $L=4$\, & $L=5$\, & $L=6$ \\\hline 1. ${\cal H}^{\rm hex}$ & 6 & 27 & 105 & 372 & 1214 & 3692?\\\hline\hline 2. Symmetry & 2 & 7 & 22 & 66 & 197 & 567 \\\hline 3. Final-entry & 1 & 4 & 11 & 30 & 85 & 236 \\\hline 4. Collinear & 0 & 0 & $0^*$ & $0^*$ & $1^{*3}$ & $6^{*2}$ \\\hline 5. LL MRK & 0 & 0 & 0 & 0 & $0^*$ & $1^{*2}$ \\\hline 6. NLL MRK & 0 & 0 & 0 & 0 & $0^*$ & $1^*$ \\\hline 7. NNLL MRK & 0 & 0 & 0 & 0 & 0 & $1$ \\\hline 8. N$^3$LL MRK & 0 & 0 & 0 & 0 & 0 & 1 \\\hline 9. Full MRK & 0 & 0 & 0 & 0 & 0 & 1 \\\hline 10. $T^1$ OPE & 0 & 0 & 0 & 0 & 0 & 1 \\\hline 11. $T^2$ OPE & 0 & 0 & 0 & 0 & 0 & 0 \\\hline \end{tabular} \caption{The number of free parameters that remain in the BDS-like normalized ans\"{a}tze for the MHV six-particle amplitude after each constraint is applied. The initial ansatz is formed out of a general linear combination of the functions in the ${\cal H}^{\rm hex}$ space, which includes all polylogarithms that involve just the letters in $\mathcal{S}_6$, and that satisfy conditions (ii), (iv), and (v).
The superscripts ``$*$'' (or ``$*n$'') denote an additional ambiguity (or $n$ ambiguities) arising from the cosmic normalization constant $\rho$. The ``$?$'' indicates an ambiguity about the number of weight 12 odd functions that are ``dropouts'', namely that are allowed at symbol level but not function level. The numbers in this table were taken from~\cite{Caron-Huot:2019vjl}, where further details can be found.} \label{tab:six_MHV} \end{table} A similar bootstrap approach has also been employed to compute the seven-particle amplitude and certain three-point form factors in planar SYM. In the former case, 42 symbol letters appear in the two-loop MHV amplitude~\cite{Caron-Huot:2011zgw}, and the same letters have been found to be sufficient for expressing both the MHV and NMHV amplitudes through four loops~\cite{Drummond:2014ffa,Dixon:2016nkn,Drummond:2018caf,Dixon:2020cnr}. In the latter case only six symbol letters appear, and the form factor has been bootstrapped through eight loops~\cite{Brandhuber:2012vm,Dixon:2020bbt,Dixon:2022rse}. Surprisingly, this form factor has also been shown to be dual to the six-particle amplitude evaluated on a two-dimensional kinematic surface, order-by-order in perturbation theory~\cite{Dixon:2021tdw}. The duality reverses all entries in the symbol (the antipode map). It is not yet known whether antipodal duality appears in a wider class of processes. More generally, while the planar two-loop $n$-particle MHV amplitude is known to involve $\frac{3}{2} n^3- 15 n^2 + \frac{77}{2} n$ letters~\cite{Caron-Huot:2011zgw}, additional letters are expected to appear at higher loops for eight or more particles. Such letters explicitly appear in the three-loop eight-point MHV amplitude, which was recently computed with the help of the $\bar{Q}$ equation~\cite{Li:2021bwg}.
This fact makes bootstrap computations harder to pursue for more than seven particles, since there doesn't yet exist a reliable method for predicting the symbol letters that will appear in these amplitudes (although much work has been devoted to this question; see for instance~\cite{Arkani-Hamed:2012zlh,Golden:2013xva,Golden:2014xqa,Golden:2014pua,Drummond:2017ssj,Drummond:2018dfd,Golden:2018gtk,Drummond:2018caf,Golden:2019kks,Drummond:2019qjk,Drummond:2019cxm,Arkani-Hamed:2019rds,Henke:2019hve,Drummond:2020kqg,Mago:2020kmp,Chicherin:2020umh,Mago:2020nuv,Herderschee:2021dez,He:2021esx,Mago:2021luw,Henke:2021ity,Ren:2021ztg}). Further data on this question can be gathered by computing three-loop MHV amplitudes at higher points, which should also be possible with the help of the $\bar{Q}$ equation, using input from our knowledge of this theory's two-loop NMHV amplitudes~\cite{He:2019jee,He:2020vob}. At higher points, amplitudes and form factors in $\mathcal{N}=4$ SYM theory are expected to involve functions beyond multiple polylogarithms, even in the planar limit. While it is expected that bootstrap approaches can also be applied to amplitudes that involve these more general types of functions---as indeed, these quantities are expected to exhibit many of the same algebraic and analytic features as the amplitudes that have already been bootstrapped---more technology for dealing with these functions is needed to make this approach feasible. Notably, however, a great deal of progress in this direction has recently been made in the case of elliptic polylogarithms, and similar advances are expected in the coming years in our understanding of the more general types of integrals that appear~\cite{Bourjaily:2022bwx}. \section{Outlook} \label{sec:outlook} While an impressive amount is already known about the amplitudes in $\mathcal{N}=4$ SYM theory, there are many directions in which our understanding can be improved. 
We now highlight some of the important questions and research directions in which we expect progress can be made in the coming years: \begin{itemize} \item Much of the recent progress in this theory has been made on the planar limit, where significant simplifications occur. While initial results have also been achieved for four- and five-particle amplitudes with full color dependence~\cite{Henn:2016jdu,Abreu:2018aqd,Chicherin:2018yne}, it will become increasingly important to develop tools that scale well with the number of kinematic variables in order to compute amplitudes at higher points. \item There is an ongoing search for the putative dual Amplituhedron, whose volume (rather than associated differential form) should reproduce tree-level amplitudes and loop integrands. The dual geometry for NMHV tree-level amplitudes was discovered some time ago~\cite{Hodges:2009hk,Arkani-Hamed:2010wgm}, and some encouraging results are available in the literature~\cite{Arkani-Hamed:2014dca,Ferro:2015grk,Herrmann:2020qlt,Herrmann:2020oud}; however, an explicit and general construction is still missing. \item Another important direction is to extend the positive geometry construction beyond the planar limit. The Grassmannian formulation for on-shell diagrams \cite{Arkani-Hamed:2012zlh} does extend to non-planar diagrams \cite{Arkani-Hamed:2014bca,Franco:2015rma,Bourjaily:2016mnp,Herrmann:2016qea,Heslop:2016plj}, and there is evidence that many of the analytic properties that have been observed in the planar limit also persist outside of this limit, such as the absence of poles at infinity and the existence of only logarithmic singularities in the integrand~\cite{Arkani-Hamed:2014via,Bern:2014kca,Bern:2015ple,Bourjaily:2019iqr,Bourjaily:2019gqu}. However, how to uniquely define the non-planar integrand, and whether it can be geometrically formulated, remain important open questions. 
\item As the POPE gives a non-perturbative formulation of this theory's planar amplitudes as expansions near two-particle collinear limits, it should in principle be possible to resum the contributions at fixed loop order. Currently the technology for doing this only exists at one loop~\cite{Lam:2016rel,Belitsky:2017wdo,Bork:2019aud,Bork:2020aut}. Barring a full resummation algorithm, it would also be interesting to be able to read properties of the amplitudes off of these sums, such as what combinations of kinematic variables these amplitudes depend on. \item While the general class of special functions that can appear in perturbative quantum field theory remains unclear, this question can be answered for specific classes of amplitudes with the help of integrand-level basis reduction techniques~\cite{Melrose:1965kb,Passarino:1978jh,Bern:1994zx,Bern:1994cg,Britto:2004nc,Ossola:2006us,Mastrolia:2010nb,Bourjaily:2013mma,Bourjaily:2015jna,Bourjaily:2017wjl,Bourjaily:2020qca}. It would be interesting to catalog what types of functions might appear in $\mathcal{N}=4$ SYM theory at a given multiplicity or loop order. \item The bootstrap approach described in section~\ref{sec:bootstrap} has proven to be wildly successful, and has given rise to some of the highest-order results with nontrivial kinematic dependence in any quantum field theory. However, these techniques currently remain restricted to functions that can be expressed in terms of multiple polylogarithms. As such, it will be important to extend them to function spaces involving elliptic curves and other higher-dimensional varieties; see the Snowmass white paper~\cite{Bourjaily:2022bwx}. A preliminary step could be to bootstrap higher-point amplitudes first on suitable lower-dimensional surfaces. \item One of the main long-term goals in understanding the amplitudes of $\mathcal{N}=4$ SYM theory is to find a closed-form representation of even the simplest amplitudes at finite coupling. 
(One could argue this has already been done, up to constants, for the four- and five-particle amplitude in the form of the BDS ansatz~\cite{Bern:2005iz}, but these amplitudes benefit an exceptional amount from dual conformal symmetry~\cite{Drummond:2006rz, Bern:2006ew, Bern:2007ct, Alday:2007hr, Bern:2008ap, Drummond:2008vq}.) Some hints as to what form amplitudes might take at finite coupling come from resumming ladder integrals~\cite{Caron-Huot:2018dsv}; it would already be extremely interesting to find a similar finite-coupling formulation of the planar six-particle amplitude, or the planar three-particle form factor studied in~\cite{Brandhuber:2012vm,Dixon:2020bbt,Dixon:2022rse}. Such a formulation would represent a resummation of the POPE mentioned above. \item Amplitudes and form factors have been observed to exhibit interesting number-theoretic properties under the Galois coaction, not only in $\mathcal{N}=4$ SYM theory~\cite{Caron-Huot:2019bsq} but also in massless $\phi^4$ theory~\cite{Schnetz:2013hqa,Panzer:2016snt}, electromagnetism~\cite{Laporta:2017okg,Schnetz:2017bko}, and string theory~\cite{Schlotterer:2012ny}. It would be useful to find a physical explanation for these number-theoretic properties, so as to be able to make predictions about the number-theoretic properties of as-yet-uncomputed amplitudes. \item The $\bar{Q}$ equation has been utilized in a number of impressive calculations to compute amplitudes at all multiplicity~\cite{Caron-Huot:2011zgw,Caron-Huot:2011dec,He:2020vob,Li:2021bwg}. However, its utility currently remains limited to the MHV and NMHV sectors, as amplitudes in other sectors involve contributions that are in the kernel of the $\bar{Q}$ equation. It would be extremely interesting to understand these additional contributions in order to extend the reach of these methods. \item The analytic structure of amplitudes is not well understood beyond four-particle scattering. 
One step that would improve our understanding of their analytic structure in the case of $\mathcal{N}=4$ SYM theory would be to elucidate the physics behind the connection between the singularity structure of some of its amplitudes and cluster algebras (and geometric structures closely related to cluster algebras, such as tropical fans and polytopes)~\cite{Arkani-Hamed:2012zlh,Golden:2013xva,Golden:2014xqa,Golden:2014pua,Drummond:2017ssj,Drummond:2018dfd,Golden:2018gtk,Drummond:2018caf,Golden:2019kks,Drummond:2019qjk,Drummond:2019cxm,Arkani-Hamed:2019rds,Henke:2019hve,Drummond:2020kqg,Mago:2020kmp,Chicherin:2020umh,Mago:2020nuv,Herderschee:2021dez,He:2021esx,Mago:2021luw,Henke:2021ity,Ren:2021ztg}. Progress is also being made on how the analytic structure of generic Feynman integrals can be better understood using the Landau equations~\cite{landau1959} and cut integrals~\cite{Cutkosky:1960sp}; see for instance~\cite{Abreu:2014cla,Dennen:2015bet,Bloch:2015efx,Kreimer:2016tqq,Dennen:2016mdk,Prlina:2017azl,Abreu:2017ptx,Prlina:2017tvx,Prlina:2018ukf,Bourjaily:2020wvq,Benincasa:2020aoj,Hannesdottir:2021kpd,Hannesdottir:2022bmo}. \item Almost all of the work on amplitudes in $\mathcal{N}=4$ SYM theory has been carried out at the origin of the theory's moduli space (although see~\cite{Caron-Huot:2014gia,Sakata:2017pue,Herderschee:2019dmc}). It would thus be interesting to better understand how the mathematical structure of its amplitudes change when some of its scalars are given a vacuum expectation value. \end{itemize} Of course, this list highlights just some of the topics that merit study over the next decade. In particular, we expect that many unanticipated research directions will arise with the identification of further types of mathematical structure in $\mathcal{N}=4$ SYM theory. 
With luck, some of these discoveries will give us a glimpse into the appropriate mathematical language in which our current descriptions of amplitudes in this theory can be seen to be encoded in expressions that are valid both at finite coupling and in general kinematics. \section*{Acknowledgements} We thank Zvi Bern and Benjamin Basso for stimulating discussions. This work is supported in part by the US Department of Energy under contracts DE--SC0009988 (N.A-H.), DE--AC02--76SF00515 (L.D.), DE--SC0010010 (M.S.~and A.V.), DE--SC0009999 (J.T.), in part by the funds of the University of California (J.T.), and in part by Simons Investigator Award \#376208 (A.V.). \bibliographystyle{JHEP}
\section{Introduction} Since the discovery of cosmic acceleration in 1998 \cite{1538-3881-116-3-1009,0004-637X-517-2-565}, considerable efforts have been devoted in cosmology to understand the physical mechanism responsible for it. The $\Lambda$CDM model interprets the acceleration of the universe as a consequence of the cosmological constant. Although this model matches cosmological observations well \cite{ade2016planck}, the cosmological constant suffers from some theoretical problems. If the cosmological constant originates from the vacuum energy in quantum field theory, extreme fine-tuning is required to explain its smallness \cite{RevModPhys.61.1}. It is also difficult to explain its closeness to the present matter density of the universe \cite{RevModPhys.61.1}. This motivates the search for alternative explanations for the cosmic acceleration. Two types of approaches have been considered. One can either introduce a new kind of matter whose role is to trigger acceleration, or modify the behavior of gravity on cosmological scales \cite{0253-6102-56-3-24,Joyce20151}. In the first approach, dark energy is introduced as a new energy form, which has positive energy density but negative pressure. In the second approach, various attempts to modify gravity have been presented. For recent reviews on modified gravity, see \cite{CLIFTON20121,NOJIRI20171,burrage2017tests,ANDP:ANDP201400058}. Lovelock's theorem states that General Relativity (GR) represents the most general theory describing a single metric that in four dimensions has field equations with at most second-order derivatives \cite{lovelock1971einstein}. As a result of this theorem, one way to modify Einstein's field equations is to permit the field equations to be higher than second order. In this paper, we will consider the so-called $f(R)$ gravity which has fourth order field equations. The Ricci scalar $R$ in the gravity action is replaced by a general function of Ricci scalar $f(R)$. 
For reviews on $f(R)$ gravity, see \cite{RevModPhys.82.451,de2010f}. The $f(R)$ gravity does not introduce any new type of matter and can lead to the late time acceleration of the universe\cite{Xu2014,starobinsky2007disappearing}. When cast into the scalar-tensor theory, the $f(R)$ gravity implies a strong coupling between the scalar field and matter. This would violate all experimental constraints on deviations from Newton's gravitation \cite{Brax2008}. Certain constraints have to be imposed on the function $f(R)$ for the model to be linearly stable \cite{PhysRevD.74.104017,PhysRevD.72.124005} and pass local gravitational tests \cite{PhysRevD.77.023507}. The first attempt $f(R)=R-\mu^{2(n+1)}/ R^n$ proposed by Carroll \textit{et al.} in \cite{PhysRevD.70.043528} failed these constraints right away. However, since then, models that evade them have been found \cite{PhysRevD.68.123512,NOJIRI2007343}. Fortunately, the chameleon mechanism can alleviate these constraints. Imposing the chameleon mechanism, the scalar field can develop an environment dependent mass \cite{Brax2017,PhysRevLett.93.171104,PhysRevD.69.044026}. When the ambient matter density is large enough, its mass becomes large, and the corresponding fifth force range is short. Thus the scalar field can be hidden in the high density environment and the fifth force cannot be detected \cite{Brax2008}. The parametrized post-Newtonian (PPN) formalism is useful to study different theories of gravity \cite{Will2014,will1993theory,misner1973gravitation,weinberg1972gravitation}. In the PPN formalism, the PN (weak field and slow motion) limit of different theories are characterized by a set of PPN parameters and the most important two parameters are $\gamma$ and $\beta$. These two parameters can be directly measured by the solar system experiments. 
The GR prediction ($\gamma=1$ and $\beta=1$) is consistent with the observations \cite{bertotti2003test}, which provide constraints on various modified gravity models \cite{PhysRevD.72.044022,PhysRevD.72.083505}. Meanwhile, binary pulsar systems can emit gravitational waves and provide a good test for gravitational theories \cite{Will2014,will1993theory,neutron,hou2017constraints,Stairs2003}. Since these systems lose energy due to gravitational radiation, their orbital periods decay \footnote{The LIGO Scientific Collaboration and Virgo Collaboration have detected gravitational waves \cite{PhysRevLett.116.061102,PhysRevLett.116.241103,PhysRevLett.118.221101,Abbott2017a,Abbott2017}. This is an important milestone and opens new windows in gravitational physics and astrophysics.}. Several authors have considered this effect in $f(R)$ gravity \cite{dyadina2016verification,de2013testing,de2015probing} for some specific models. However, in these works, the authors have ignored the chameleon mechanism. Although some authors have applied the chameleon mechanism to $f(R)$ gravity when they study the PN limit, they only calculate the PPN parameter $\gamma$ \cite{PhysRevD.76.063505,Hu:2007nk}. In this paper, we give a comprehensive investigation of various constraints on the general $f(R)$ gravity with chameleon mechanism. Following the method developed in our previous work \cite{Zhang2016}, we first calculate the PPN parameters $\gamma$ and $\beta$, the effective cosmological constant, and the effective gravitational constant in the general $f(R)$ gravity. Considering the current observations on solar-system and cosmological scales, we derive the combined constraint for the general $f(R)$ gravity. Binary pulsar systems are a good testing ground for alternative theories of gravity. In previous work \cite{Zhang2017}, we derived the orbital period derivative for quasicircular binary systems in scalar-tensor gravity with chameleon mechanism.
Here, applying a similar analysis to $f(R)$ gravity, we obtain the orbital period derivative for quasicircular neutron star-white dwarf (NS-WD) binary systems. Using the observational data of PSR J0348+0432 \cite{Antoniadis1233232} and PSR J1738+0333 \cite{freire2012relativistic}, we also obtain the binary pulsar constraints on $f(R)$ gravity. We find that the chameleon mechanism cannot apply to Palatini $f(R)$ gravity. Thus, in the paper, we mainly focus on metric $f(R)$ gravity. Applying the general results to specific $f(R)$ models, including the Starobinsky, Hu-Sawicki, and Tsujikawa models, we obtain the constraints on the model parameters. The paper is organized as follows: In Sec. \ref{fr_cha}, we review $f(R)$ gravity and the chameleon mechanism. In Sec. \ref{constraint}, we study various observational constraints on $f(R)$ gravity, and obtain the parameter constraints on the specific models. We conclude in Sec. \ref{conclusion}. Throughout this paper, the metric convention is chosen as $(-,+,+,+)$, and Greek indices $(\mu,\nu,\cdots)$ run over $0,1,2,3$. We set the units to $c=\hbar=1$, and therefore the reduced Planck mass is $M_\text{Pl}=\sqrt{1/8\pi G}$, where $G$ is the gravitational constant. \section{$f(R)$ gravity and Chameleon mechanism}\label{fr_cha} The $f(R)$ gravity arises from a straightforward generalization of the Ricci scalar $R$ in the gravitational action to a general function $f(R)$. When varying the action, there exist two formalisms: the metric formalism and the Palatini formalism. In the Palatini formalism, the connection is not taken to be the Levi-Civita connection of the metric \textit{a priori} and one varies the action assuming that the metric and the connection are independent variables. Although these two formalisms lead to the same field equations in GR \cite{wald1984general}, this is no longer true for $f(R)$ gravity. We will investigate each formalism in turn.
\subsection{Metric $f(R)$ gravity} The total action for metric $f(R)$ gravity takes the form \cite{RevModPhys.82.451} \begin{equation} \label{fr} S={1\over 16\pi G}\int d^4 x\sqrt{-g}\,f(R)+ S_m(g_{\mu\nu},\Psi_m), \end{equation} where $\Psi_m$ denotes all the matter fields. Variation with respect to the metric $g_{\mu\nu}$ gives the field equations \cite{RevModPhys.82.451} \begin{equation} f'(R)R_{\mu\nu}-\frac12 f(R)g_{\mu\nu}-[\nabla_\mu\nabla_\nu-g_{\mu\nu}\square]f'(R)=8\pi G T_{\mu\nu}, \end{equation} where a prime denotes differentiation with respect to $R$ and $\square=\nabla^\mu\nabla_\mu$. Since the field equations contain the second derivative of $R$ and $R$ includes second derivatives of the metric, the field equations are fourth-order partial differential equations in the metric. Handling fourth-order equations can be troublesome, but $f(R)$ gravity can be recast as a scalar-tensor theory via a conformal transformation and the corresponding field equations become second order. A conformal transformation of the metric also makes the scalar degree of freedom explicit. Introducing a new field $\chi$, we obtain a dynamically equivalent action \cite{RevModPhys.82.451} \begin{equation} \label{equiv} S={1\over 16\pi G}\int d^4 x\sqrt{-g}\,[f(\chi)+f'(\chi)(R-\chi)]+ S_m(g_{\mu\nu},\Psi_m). \end{equation} Varying this action with respect to $\chi$, we have $f''(\chi)(R-\chi)=0$. If $f''(\chi)\neq 0$, we have $R=\chi$. Substituting this into Eq. \eqref{equiv} leads to Eq. \eqref{fr}. Redefining the field by $\theta=f'(\chi)$ and setting $U(\theta)=\theta \chi(\theta)-f(\chi(\theta))$, we have \begin{equation}\label{jordan} S={1\over 16\pi G}\int d^4 x\sqrt{-g}\,[\theta R-U(\theta)]+ S_m(g_{\mu\nu},\Psi_m).
\end{equation} The action \eqref{jordan} is in the Jordan frame, which should be transformed into the Einstein frame to utilize the results of the prior studies \cite{Zhang2016,Zhang2017}, although the chameleon mechanism also works in the Jordan frame \cite{PhysRevD.80.104002}. Defining the metric in Einstein frame as $\tilde{g}_{\mu\nu}=\theta g_{\mu\nu}$, we get the Einstein frame action as follows \cite{RevModPhys.82.451}, \begin{equation} S_E ={1\over 16\pi G}\int d^4 x\sqrt{-\tilde{g}}[\tilde{R}-\frac{3}{2\theta^2}(\tilde{\partial}\theta)^2-\frac{U(\theta)}{\theta^2}]+S_m(\theta^{-1}\tilde{g}_{\mu\nu},\Psi_m), \end{equation} where $(\tilde{\partial}\theta)^2=\tilde{g}^{\mu\nu}\partial_\mu\theta \partial_\nu\theta$ and $\tilde{R}$ is the Ricci scalar of $\tilde{g}_{\mu\nu}$. To change the kinetic term into the standard form, we introduce another scalar field $\phi$ that satisfies the following relation $3(\tilde{\partial}\theta)^2/32\pi G\theta^2=(\tilde{\partial}\phi)^2/2$, that is $\frac{d\phi}{d\theta}=-\sqrt{\frac{3}{16 \pi G}}\frac{1}{\theta}$. Solving this differential equation, we have $\theta=\exp(-\sqrt{\frac{16\pi G}{3}}\phi)$. The scalar field $\phi$ can be directly related to the Jordan frame Ricci scalar by \begin{equation}\label{relation2} f'(R)=\exp(-\sqrt{\frac{16\pi G}{3}}\phi). \end{equation} Therefore, the action in the Einstein frame has the form \cite{RevModPhys.82.451}, \begin{equation}\label{s-t2} S_E=\int d^4 x \sqrt{-\tilde{g}}[\frac{\tilde{R}}{16\pi G}-\frac12(\tilde{\partial}\phi)^2-V(\phi)]+S_m(A^2(\phi)\tilde{g}_{\mu\nu},\Psi_m), \end{equation} where the bare potential is \begin{equation} V(\phi)=\frac{f'(R)R-f(R)}{16\pi G f'(R)^2}. \end{equation} The conformal coupling function is \cite{RevModPhys.82.451} \begin{equation}\label{A} A(\phi)=\frac{1}{\sqrt{f'(R)}}=\exp(\frac{\xi\phi}{M_\text{Pl}}) \end{equation} with the conformal coupling parameter $\xi=1/\sqrt{6}$. 
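As a quick numerical sanity check of this field redefinition (a Python sketch added for illustration, not part of the original derivation; the value assigned to $G$ is arbitrary since it cancels), one can verify by finite differences that $\theta=\exp(-\sqrt{16\pi G/3}\,\phi)$ brings the kinetic term to canonical normalization:

```python
import math

G = 1.0  # Newton's constant; arbitrary here, it cancels in the check
kappa = math.sqrt(16 * math.pi * G / 3)

def theta(phi):
    # field redefinition theta = exp(-sqrt(16*pi*G/3) * phi)
    return math.exp(-kappa * phi)

# numerical derivative d(theta)/d(phi) at an arbitrary sample point
phi0, h = 0.37, 1e-6
dtheta = (theta(phi0 + h) - theta(phi0 - h)) / (2 * h)

# 3/(32*pi*G) * (d theta/d phi)^2 / theta^2 should equal the canonical 1/2
prefactor = 3 / (32 * math.pi * G) * dtheta**2 / theta(phi0) ** 2
print(round(prefactor, 6))  # → 0.5
```

The prefactor is independent of the sample point $\phi_0$, as it must be for the kinetic term to be canonical everywhere.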
Variation of $S_E$ with respect to $\tilde{g}_{\mu\nu}$ and $\phi$ gives the field equations \begin{eqnarray} \tilde{R}_{\mu\nu}&=&8\pi G [\tilde{S}_{\mu\nu}+\partial_\mu\phi\partial_\nu\phi+V(\phi)\tilde{g}_{\mu\nu}],\label{metric}\\ \tilde{\square}\phi&=&\frac{\text{d} V}{\text{d} \phi}-\frac{\tilde{T}}{A}\frac{\text{d}A}{\text{d}\phi},\label{scalar} \end{eqnarray} with \begin{equation} \tilde{S}_{\mu\nu}\equiv\tilde{T}_{\mu\nu}-\frac12\tilde{g}_{\mu\nu}\tilde{T}, \end{equation} where $\tilde{T}_{\mu\nu}\equiv(-2/\sqrt{-\tilde{g}})\delta S_m/\delta\tilde{g}_{\mu\nu}$ is the energy-momentum tensor of matter in the Einstein frame, and $\tilde{\square}\equiv\tilde{g}^{\mu\nu}\tilde{\nabla}_\mu\tilde{\nabla}_\nu$. The covariant derivatives $\tilde{\nabla}_\mu$ obey $\tilde{\nabla}_\mu\tilde{g}_{\alpha\beta}=0$. The scalar field equation can be rewritten as follows: \begin{equation} \tilde{\square}\phi=\frac{\text{d}V_\text{eff}}{\text{d}\phi}, \end{equation} with the effective potential \begin{equation}\label{Veff} V_\text{eff}(\phi)\equiv V(\phi)+\rho[A(\phi)-1]. \end{equation} Here the matter is assumed to be nonrelativistic, and $\rho\equiv-\tilde{T}/A$ is the conserved energy density in the Einstein frame, which is independent of $\phi$ \cite{PhysRevLett.93.171104}. \subsection{Chameleon mechanism} An important consequence of the conformal coupling function $A(\phi)$ is that matter will generally feel a fifth force mediated by the scalar field. Since the conformal coupling parameter $\xi$ is of order unity, the fifth force will have a significant impact on the motion of particles \cite{Brax2008}. In order to evade the fifth-force constraints, the mass of the field should be sufficiently large in high-density environments \cite{PhysRevLett.83.3585}. At the same time, for the scalar field to drive the accelerated expansion of the Universe, its mass on cosmological scales must be of order the Hubble scale.
Thus a mechanism is needed to screen the scalar field in local environments while letting it accelerate the Universe on large scales \cite{Brax2008,Hu:2007nk}. The behavior of the scalar field is governed by the effective potential $V_\text{eff}(\phi)$. An essential element of the model is the fact that $V_\text{eff}(\phi)$ depends explicitly on the matter density, as seen in Eq. \eqref{Veff}. The shape of the effective potential is determined by the function $f(R)$. For a suitably chosen function $f(R)$, the effective potential can have a minimum. We denote by $\phi_\text{min}$ the value at the minimum, that is \cite{Zhang2016}, \begin{equation} \left.\frac{\text{d}V_\text{eff}}{\text{d}\phi}\right |_{\phi_\text{min}}=0. \end{equation} The mass of small fluctuations around the minimum is \cite{Brax2008}, \begin{equation}\label{mass} \left. m^2_\text{eff}=\frac{\text{d}^2V_\text{eff}}{\text{d}\phi^2}\right |_{\phi_\text{min}}=\left[\frac{\text{d}^2V}{\text{d}\phi^2}+\frac{\xi^2}{M_\text{Pl}^2}\rho\exp(\frac{\xi\phi}{M_\text{Pl}})\right ]_{\phi_\text{min}}. \end{equation} It can be observed that the scalar field has a density-dependent mass. When the density of the environment is large enough, the mass becomes large, and the corresponding fifth force range is so small that it cannot be detected by gravitational experiments \cite{Brax2008}. Laboratory constraints can be greatly alleviated if the mass develops a strong dependence on the ambient density of matter. Theories in which such a dependence is realized are said to possess a chameleon mechanism. Therefore, if the following three conditions can be satisfied in some regions of $\phi$ space, the $f(R)$ model can have a chameleon mechanism \cite{Joyce20151}: (1) $V'(\phi)<0$: The effective potential $V_\text{eff}$ has a minimum; (2) $V''(\phi)>0$: The mass squared $m^2_\text{eff}$ is positive; (3) $V'''(\phi)<0$: The mass can increase with density. Using Eq.
\eqref{relation2}, these conditions can be translated into \cite{Brax2008} \begin{eqnarray} V'(\phi)&=&\frac{\xi M_\text{Pl}}{f'^2}[Rf'-2f]<0,\label{chameleon1}\\ V''(\phi)&=&\frac13[\frac{R}{f'}+\frac{1}{f''}-\frac{4f}{f'^2}]>0,\\ V'''(\phi)&=&\frac{2\xi}{3M_\text{Pl}}[\frac{3}{f''}+\frac{f'f'''}{f''^3}+\frac{R}{f'}-\frac{8f}{f'^2}]<0\label{chameleon3}. \end{eqnarray} \subsection{Palatini $f(R)$ gravity} Previous discussions have focused on the metric formalism. We now discuss the Palatini formalism. The action in the Palatini formalism is formally the same as in the metric formalism. However, the Ricci tensor is constructed from the independent connection and is not related to the metric tensor. The Palatini action takes the form \cite{RevModPhys.82.451} \begin{equation} S_p={1\over 16\pi G}\int d^4 x\sqrt{-g}\,f(\mathcal R)+ S_m(g_{\mu\nu},\Psi_m). \end{equation} Here $\mathcal{R}\equiv g^{\mu\nu}\mathcal{R}_{\mu\nu}$ and the Ricci tensor $\mathcal{R}_{\mu\nu}$ is determined by the independent connection $\Gamma^\mu_{\alpha\beta}$. Variations with respect to the metric and the connection yield the following equations, respectively \cite{RevModPhys.82.451}, \begin{equation} f'(\mathcal{R})\mathcal{R}_{(\mu\nu)}-\frac12 f(\mathcal{R})g_{\mu\nu}= 8\pi G T_{\mu\nu}, \end{equation} and \begin{equation}\label{connection} \nabla_\mu[\sqrt{-g}(\delta^\mu_\alpha f'g^{\beta\nu}-\frac12\delta^\beta_\alpha f'g^{\mu\nu}-\frac12 \delta^\nu_\alpha f'g^{\beta\mu})]=0. \end{equation} Transforming the action into the Einstein frame, we obtain \cite{RevModPhys.82.451} \begin{equation} S_{E}'=\int d^4 x \sqrt{-\tilde{g}}[\frac{\tilde{R}}{16\pi G}-V(\theta)]+S_m(\theta^{-1}\tilde{g}_{\mu\nu},\Psi_m), \end{equation} which yields the scalar field equation, \begin{equation} 2\theta \frac{d V}{d \theta}+\tilde{T}=0.
\end{equation} Note that the scalar field $\theta$ is algebraically related to $\tilde{T}$, i.e., $\theta=\theta(\tilde{T})$, which is non-dynamical and cannot propagate in spacetime. Therefore, we cannot define a mass for the scalar field $\theta$ as we did above. As a result of the non-dynamical nature of the scalar field, the chameleon mechanism does not apply to Palatini $f(R)$ gravity. There exists another significant difference between Palatini $f(R)$ gravity and the chameleon theory. The fifth force is produced by the gradient of the scalar field: in chameleon theories, a compact object in a homogeneous background generates a scalar field with a Yukawa profile, so a test particle in the homogeneous background feels a fifth force. In Palatini $f(R)$ gravity, by contrast, the scalar field has no gradient in a homogeneous background and therefore does not mediate a fifth force. In addition, there are other serious shortcomings of Palatini $f(R)$ gravity \cite{RevModPhys.82.451}. So, in the rest of this paper, we will only focus on metric $f(R)$ gravity. \subsection{Stability issues} More recent attention has focused on the stability issues of metric $f(R)$ gravity. These include Ostrogradski instability \cite{Woodard2007}, Frolov instability \cite{PhysRevLett.101.061103}, Dolgov-Kawasaki instability \cite{DOLGOV20031} and instability of de Sitter space \cite{PhysRevD.72.124005}. A scrutiny of these issues is needed to make sure that $f(R)$ gravity is viable. The first two stability issues can be bypassed in the specific models discussed below \cite{Woodard2007,PhysRevD.80.064002}. Dolgov and Kawasaki \cite{DOLGOV20031} found that the Ricci scalar is unstable in the $f(R)$ model proposed by Carroll \textit{et al.} \cite{PhysRevD.70.043528}. Their analysis was generalized to general functions by Faraoni \cite{PhysRevD.74.104017}. The origin of this issue is that the mass squared of the scalar degree of freedom is negative.
Since the mass squared has the same sign as $f''(R)$, the stability condition can be written as \cite{RevModPhys.82.451} \begin{equation} f''(R)>0,\quad \text{for}\; R\geq R_0(>0), \end{equation} where $R_0$ is the Ricci scalar today. This condition is satisfied for all the models studied in the following section. In order to investigate the stability of de Sitter space, we consider a spatially flat Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) universe. The vacuum field equations take the form \cite{RevModPhys.82.451} \begin{eqnarray} H^2&=&\frac{1}{3 f'}(\frac{R f'}{2}-\frac{f}{2}-3H\dot f'),\nonumber\\ \dot H&=&-\frac{1}{2f'}(\ddot{f'}-H\dot{f'}), \label{frid1} \end{eqnarray} where an overdot denotes differentiation with respect to $t$. The stationary points of the dynamical system \eqref{frid1} are de Sitter spacetimes with constant Hubble parameter $H$. The condition for the existence of de Sitter space is \cite{RevModPhys.82.451} \begin{equation}\label{dS} Rf'-2f=0. \end{equation} The stability condition of de Sitter space with respect to inhomogeneous linear perturbations reads \cite{PhysRevD.72.124005} \begin{equation}\label{dS_stable} \frac{f'}{f''}-R\geq 0. \end{equation} If the solution to Eq. \eqref{dS} meets the stability condition \eqref{dS_stable}, the Universe will enter a stable de Sitter phase in the future \cite{Xu2014}. We now impose the stability condition of de Sitter space on the specific $f(R)$ models. We will investigate the following well-studied models \begin{eqnarray} (A)\; f(R)&=&R- m^2\frac{c_1(R/m^2)^n}{c_2(R/m^2)^n+1} \;(c_1,c_2,n>0),\label{Hu}\\ (B)\; f(R)&=&R-\mu R_c \tanh\frac{R}{R_c}\;(\mu,R_c>0),\label{Tsu}\\ (C)\; f(R)&=&R-\mu R_c[1-(1+\frac{R^2}{R_c^2})^{-k}]\;(\mu,k,R_c>0).\label{Star} \end{eqnarray} The models (A), (B) and (C) were proposed by Hu and Sawicki \cite{Hu:2007nk}, Tsujikawa \cite{PhysRevD.77.023507} and Starobinsky \cite{starobinsky2007disappearing}, respectively.
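As an illustration of the Dolgov-Kawasaki condition $f''(R)>0$ stated above, one can check it numerically for model (B); the snippet below is a sketch added for illustration (Python; the parameter values $\mu=R_c=1$ and the sample curvatures are purely illustrative):

```python
import math

def f(R, mu, Rc):
    # model (B), Tsujikawa: f(R) = R - mu * Rc * tanh(R / Rc)
    return R - mu * Rc * math.tanh(R / Rc)

def fpp(R, mu, Rc, h=1e-4):
    # second derivative f''(R) by central finite differences
    return (f(R + h, mu, Rc) - 2 * f(R, mu, Rc) + f(R - h, mu, Rc)) / h**2

mu, Rc = 1.0, 1.0  # illustrative values; R_c sets the curvature scale
samples = [0.3, 0.5, 1.0, 2.0, 3.0]
print(all(fpp(R, mu, Rc) > 0 for R in samples))  # → True
```

For this model one can also verify analytically that $f''(R)=2\mu\tanh(R/R_c)\,\mathrm{sech}^2(R/R_c)/R_c$, which is manifestly positive for $R>0$.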
In the model (A), the mass scale is chosen to be \cite{Hu:2007nk} \begin{equation} m^2=\frac{8\pi G \bar{\rho}_0}{3}, \end{equation} where $\bar{\rho}_0 $ is the average matter density in the universe today. In the models (B) and (C), $R_c$ roughly corresponds to the order of the observed cosmological constant for $\mu=\mathcal{O}(1)$. During the whole expansion history of the Universe, the Ricci scalar is in the high-curvature regime, i.e., $R\gg m^2$ or $R\gg R_c$ \cite{Hu:2007nk}. Thus, the model (A) can be approximated by \begin{equation}\label{Hu:ap} f(R)=R-\frac{c_1}{c_2}m^2+\frac{c_1}{c_2^2}m^2(\frac{m^2}{R})^n \end{equation} and the model (C) can be approximated by \begin{equation} f(R)=R-\mu R_c+\mu R_c(\frac{R_c}{R})^{2k}. \end{equation} It can be observed that the free parameters of model (A) are in one-to-one correspondence with those of model (C) through the relations $m^2c_1/c_2 \rightarrow\mu R_c$, $m^{2(n+1)}c_1/c_2^2\rightarrow\mu R_c ^{2k+1}$ and $n\rightarrow2k$. So we only study the models (A) and (B) in the following. The model (A) can be expressed in another useful form \cite{PhysRevD.77.023507} \begin{equation}\label{usefull} f(R)=R-\alpha R_c \frac{(R/R_c)^n}{(R/R_c)^n+1}, \end{equation} where $\alpha=c_1c_2^{{1}/{n}-1}$ and $R_c=m^2c_2^{-{1}/{n}}$. The following relation holds at the de~Sitter point \cite{de2010f}: \begin{equation}\label{alpha} \alpha=\frac{(1+x^n)^2}{x^{n-1}(2+2x^n-n)}, \end{equation} where $x\equiv R/R_c$. The stability condition \eqref{dS_stable} implies the relation \cite{de2010f}, \begin{equation} 2x^{2n}-(n-1)(n+4)x^n+(n-1)(n-2)\geq 0. \end{equation} Thus for each specific $n$, the above inequality gives a bound on $x$ and this bound can be transformed into a bound on $\alpha$ through Eq. \eqref{alpha}. For instance, when $n=2$, one has $x\geq\sqrt{3}$ and $\alpha\geq 8\sqrt{3}/9$. In the following section, we will come back to discuss this inequality.
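The de Sitter relation \eqref{alpha} and the $n=2$ bounds quoted above can be verified numerically; the following sketch (Python, added for illustration; the sample point $x=2$, $n=3$ is arbitrary) checks the condition $Rf'-2f=0$ by finite differences, and the stability boundary $x\geq\sqrt{3}$, $\alpha\geq 8\sqrt{3}/9$:

```python
import math

def f(x, n, alpha):
    # model (A) in the form of Eq. (usefull), in units where R_c = 1
    return x - alpha * x**n / (x**n + 1)

def alpha_de_sitter(x, n):
    # Eq. (alpha): value of alpha solving R f'(R) - 2 f(R) = 0
    return (1 + x**n) ** 2 / (x ** (n - 1) * (2 + 2 * x**n - n))

# check the de Sitter condition numerically at an arbitrary sample point
x, n = 2.0, 3
a = alpha_de_sitter(x, n)
h = 1e-6
fp = (f(x + h, n, a) - f(x - h, n, a)) / (2 * h)  # numerical f'(x)
print(abs(x * fp - 2 * f(x, n, a)) < 1e-8)  # → True

def stable(x, n):
    # stability inequality at the de Sitter point, from Eq. (dS_stable)
    return 2 * x ** (2 * n) - (n - 1) * (n + 4) * x**n + (n - 1) * (n - 2) >= 0

# for n = 2 the stable region is x >= sqrt(3), i.e. alpha >= 8*sqrt(3)/9
x_min = math.sqrt(3)
print(stable(x_min + 1e-6, 2), stable(x_min - 1e-3, 2))  # → True False
print(math.isclose(alpha_de_sitter(x_min, 2), 8 * math.sqrt(3) / 9))  # → True
```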
\section{Constraints on $f(R)$ gravity}\label{constraint} In this section, we consider the observational constraints on metric $f(R)$ gravity on cosmological scales, in the solar system, and in binary pulsar systems, respectively. \subsection{Cosmological constraints}\label{cosmos_const} In order to satisfy the tests on cosmological scales, the $f(R)$ models should mimic the $\Lambda$CDM model at late times and provide an effective cosmological constant. Similar to the previous work \cite{Zhang2016}, in this paper we do not consider the cosmological perturbations of $f(R)$ gravity \cite{de2010f}. We leave this issue for the general $f(R)$ gravity as future work. The bare potential $V(\phi)$ in the action \eqref{s-t2} provides the effective cosmological constant that accelerates the expansion of the universe, which is given by \cite{Zhang2016} \begin{equation}\label{cosmos} \Lambda_\text{eff}=8\pi G V_\text{VEV}=\left. \frac{R f'(R)-f(R)}{2 f'(R)^2}\right |_{R=R_\infty}, \end{equation} where $R_\infty$ is the background value of the Ricci scalar. In order to mimic the $\Lambda$CDM model, we require $\Lambda_\text{eff}$ to equal the observed cosmological constant $\Lambda$, which accelerates the cosmic expansion. Now, we can apply the cosmological constraint \eqref{cosmos} to specific $f(R)$ models. For model (A), we substitute Eq. \eqref{Hu:ap} into Eq. \eqref{cosmos}, and obtain \begin{equation} \Lambda_\text{eff} \approx \frac{c_1}{2c_2} m^2, \end{equation} which, in turn, implies that \begin{equation}\label{Hu:cosmos} \frac{c_1}{c_2}\approx \frac{2\Lambda_\text{eff}}{m^2}=6 \frac{\Omega_\Lambda}{\Omega_m}=13.5. \end{equation} Note that we adopted the density parameters $\Omega_m=0.308$ and $\Omega_\Lambda=0.692$ \cite{ade2016planck}. This expression for $c_1/c_2$ is consistent with Eq. (26) in \cite{Hu:2007nk}. It can now be seen from Eq. \eqref{Hu:ap} that there are two remaining parameters, $n$ and $c_1/c_2^2$, in this model. 
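These constraints can be chained together numerically. The sketch below is our own arithmetic: it evaluates $c_1/c_2=6\Omega_\Lambda/\Omega_m$, combines the $n=2$ stability bound $\alpha\geq 8\sqrt{3}/9$ with the relation $\alpha=c_1c_2^{1/n-1}$ to bound $c_1/c_2^2$, and then converts the result into a bound on $|f'(R_0)-1|$ using $f'(R)=1-n(c_1/c_2^2)(m^2/R)^{n+1}$ from Eq. \eqref{Hu:ap} together with $R_0=m^2(12/\Omega_m-9)$ \cite{Hu:2007nk}:

```python
import math

Omega_m, Omega_L = 0.308, 0.692

# cosmological constraint: c1/c2 = 6 Omega_L / Omega_m
print(6 * Omega_L / Omega_m)             # 13.48..., quoted as 13.5
c1_over_c2 = 13.5                        # use the quoted value below

# n = 2: stability requires alpha >= 8 sqrt(3)/9, and
# alpha = c1 c2^(1/n - 1) gives c1/c2^2 = (c1/c2) * ((c1/c2)/alpha)^n
n = 2
alpha_min = 8 * math.sqrt(3) / 9
c1_over_c2sq_max = c1_over_c2 * (c1_over_c2 / alpha_min) ** n
print(round(c1_over_c2sq_max))           # 1038

# corresponding bound on |f'(R0) - 1|, with R0/m^2 = 12/Omega_m - 9
R0_over_m2 = 12 / Omega_m - 9
fp0_bound = n * c1_over_c2sq_max / R0_over_m2 ** (n + 1)
print(fp0_bound)                         # ~0.077
```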
Using the relation $\alpha=c_1c_2^{{1}/{n}-1}$ and the cosmological constraint \eqref{Hu:cosmos}, we have \begin{equation} \frac{c_1}{c_2^2}=13.5(\frac{13.5}{\alpha})^n. \end{equation} Thus, in the case $n=2$, the stability condition $\alpha\geq 8\sqrt{3}/9$ implies an upper bound on $c_1/c^2_2$: \begin{equation} \frac{c_1}{c_2^2}\leq 1038. \end{equation} Using Eq. \eqref{Hu:ap}, we have \begin{equation} \frac{c_1}{c_2^2}=\frac{1-f'(R_0)}{n}\left(\frac{R_0}{m^2}\right)^{n+1}. \end{equation} For a spatially flat FLRW universe, the scalar curvature at the present epoch is $R_0=m^2(12/ {\Omega_m}-9)$ \cite{Hu:2007nk}. Consequently, for different $n$, we can obtain different upper bounds on $|f'(R_0)~-~1|$. The results are presented in Fig. \ref{huc1c2} with a dotted line. Similarly, for the model (B), the cosmological constraint is \begin{equation} \label{Tsu:cosmos} \Lambda_\text{eff} \approx \frac{\mu R_c}{2}, \end{equation} and the stability condition \eqref{dS_stable} implies that \cite{de2010f} \begin{equation}\label{stableB} \mu >0.905. \end{equation} \subsection{Solar system constraints}\label{local} In the solar system, the gravitational field is weak and the velocities of the planets are small compared with the speed of light. Thus we can apply the PPN formalism to solar system tests. In the PN limit, the spacetime metric predicted by different metric theories of gravity has the same structure and can be characterized by ten PPN parameters \cite{Will2014}. Among them, the most important parameters are $\gamma$ and $\beta$. Here, we derive the PPN parameters $\gamma$ and $\beta$ and the effective gravitational constant $G_\text{eff}$ in the general metric $f(R)$ gravity with chameleon mechanism. 
For a scalar-tensor theory with action \eqref{s-t2}, the solution to the scalar field equation \eqref{scalar} is given by \cite{Zhang2017} \begin{equation} \phi(r)=\phi_\infty -\epsilon M_\text{Pl}\frac{G M_E}{r}e^{-m_\infty r}, \end{equation} where the screened parameter is defined as \begin{equation} \epsilon\equiv\frac{\phi_\infty-\phi_0}{M_\text{Pl}\Phi_E}. \end{equation} The parameters $M_\text{E}$ and $\Phi_\text{E}\equiv G M_\text{E}/r$ are the mass and the Newtonian potential at the surface of the source object in the Einstein frame, respectively. The quantity $\phi_0$ is the field inside the source object and $\phi_\infty$ is the field in the background environment. $m_{\infty}$ is the effective mass of the scalar field at $\phi=\phi_{\infty}$. In order to solve the metric field equations, we make use of the PPN formalism introduced in \cite{Will2014,will1993theory}. In this formalism, the gravitational field of the source is weak, $GM/r\ll 1$, and the typical velocity $\vec{v}$ of the source is small, i.e., $v^2\sim GM/r\ll 1$. Thus, we can use the perturbative expansion method to solve the field equations, and all dynamical quantities can be expanded to $\mathcal{O}(n)\propto v^{2n}$. The metric field $g_{\mu\nu}$ can be expanded around the Minkowski background as follows: \begin{equation} g_{\mu\nu}=\eta_{\mu\nu}+\accentset{(1)}h_{\mu\nu}+\accentset{(2)}h_{\mu\nu}+ \mathcal{O}(3). \end{equation} We solve the field equations \eqref{metric} and \eqref{scalar} using the PPN method \cite{will1993theory}, and transform the metric to the Jordan frame. 
Making use of the following definitions of $\gamma$ and $\beta$ \cite{Zhang2016}: \begin{eqnarray} \accentset{(1)}h_{\text J 00}=\frac{2G_\text{eff}M_\text J}{\chi},~~~ \accentset{(1)}h_{\text J \chi\chi}=\gamma \frac{2G_\text{eff}M_\text J}{\chi},~~~ \accentset{(2)}h_{\text J 00}=-\beta \frac{4G^2_\text{eff}M^2_\text J}{2\chi^2}, \end{eqnarray} where $M_J$ and $\chi$ are the mass and radial coordinate in the Jordan frame, respectively, we obtain the PPN parameters \begin{eqnarray} \label{ppn} \gamma=1-\frac{2A_1}{A_\text{VEV}}M_\text{Pl}\epsilon,~~ \beta=1-M_\text{Pl}^2(\frac{A^2_1}{2A^2_\text{VEV}}-\frac{A_2}{A_\text{VEV}})\epsilon^2,~~ G_\text{eff}=G A^2_\text{VEV}(1+\frac{A_1}{A_\text{VEV}}M_\text{Pl}\epsilon),\label{G} \end{eqnarray} where $A_\text{VEV}$, $A_1$ and $A_2$ are the expansion coefficients of $A(\phi)$, i.e., \begin{equation} A(\phi)=A_\text{VEV}+A_1(\phi-\phi_\infty)+A_2(\phi-\phi_\infty)^2+\cdots. \end{equation} Note that here we have taken the limit $m_\infty r\ll 1$, since in the solar system the distance $r$ is always much less than the Compton wavelength $m^{-1}_\infty$ \cite{Zhang2016}. Applying this to the general metric $f(R)$ gravity, using Eqs. \eqref{relation2} and \eqref{A} we obtain the expansion coefficients \begin{eqnarray} A_\text{VEV}=\frac{1}{\sqrt{f'(R_\infty)}}, ~~~ A_1=\frac1{\sqrt{6}M_\text{Pl}}\frac{1}{\sqrt{f'(R_\infty)}},~~~A_2=\frac1{12M^2_\text{Pl}}\frac{1}{\sqrt{f'(R_\infty)}}. \end{eqnarray} Following the discussion of \cite{Hu:2007nk}, $R_\infty=8\pi G \rho_g$, where $\rho_g=10^{-24}\text{g\,cm}^{-3}$ is the average galactic density in the solar vicinity. In the solar system, the source object of the scalar field is the Sun and the background is the Milky Way. Since the density of the Sun is much higher than the galactic background, we have $\phi_\infty\gg\phi_0$. Then the screened parameter can be approximated by \begin{equation} \epsilon=\frac{\phi_\infty}{M_\text{Pl}\Phi_\text{E}}. 
\end{equation} Substituting the above parameters into Eq. \eqref{G}, we obtain \begin{eqnarray} \gamma=1+\frac{\ln f'(R_\infty)}{\Phi_\text{E}},~~~ \beta=1,~~~ G_\text{eff}=\frac{G}{f'(R_\infty)}(1-\frac{\ln f'(R_\infty)}{2\Phi_\text{E}}).\label{Geff} \end{eqnarray} We find that the expression for the parameter $\gamma$ is consistent with Eq. (64) in \cite{Hu:2007nk}. The parameter $\beta$ is unity regardless of the functional form of $f(R)$. As can be seen from Eq. \eqref{G}, when the conformal coupling function $A(\phi)$ has the exponential form, the two terms in the bracket of the expression for $\beta$ cancel each other out, and Eq. \eqref{A} shows that the conformal coupling function $A(\phi)$ always has the exponential form in metric $f(R)$ gravity. This suggests that experimental tests of the parameter $\beta$ cannot distinguish between GR and metric $f(R)$ gravity. The relation between $\gamma$ and $G_\text{eff}$ is \begin{equation} \frac{G_\text{eff}}{G}-1\approx-\frac{\gamma-1}{2}. \end{equation} Using the Cassini constraint $|\gamma-1|<2.3\times 10^{-5}$ \cite{bertotti2003test} and the Newtonian potential at the surface of the Sun, $\Phi_\text{E}=2.12\times 10^{-6}$, we obtain the constraint on general $f(R)$ gravity as follows: \begin{equation} |\ln f'(R_\infty)|=|\gamma-1|\Phi_\text{E}<4.9\times10^{-11}. \end{equation} Since $\ln f'(R_\infty)\approx f'(R_\infty)-1$, we have \begin{equation}\label{solar} |f'(R_\infty)-1|<4.9\times10^{-11}. \end{equation} Note that this is a general constraint for any metric $f(R)$ gravity with chameleon mechanism, which is independent of the form of $f(R)$. We can apply this solar system constraint to the models (A) and (B). In the model (A), using Eq. 
\eqref{Hu:ap}, we have \cite{Hu:2007nk} \begin{equation} \Big(\frac{1-f'(R_\infty)}{1-f'(R_0)}\Big)^{\frac{1}{n+1}}=\frac{R_0}{8\pi G \rho_g}=8.14\times 10^{-7} \frac{R_0}{m^2}\frac{\Omega_m h^2}{0.13}\Big(\frac{\rho_g}{10^{-24}~\text{g cm}^{-3}}\Big)^{-1}. \end{equation} Here, $R_\infty=8\pi G \rho_g$ and $\rho_g=10^{-24}\text{g\,cm}^{-3}$ is the average galactic density in the solar vicinity. We adopted the physical matter density $\Omega_m h^2=0.1415$ \cite{ade2016planck}. Applying inequality \eqref{solar} to the above equation, we have \begin{equation} |f'(R_0)-1|<4.9\times 10^{-11}\Big(\frac{8\pi G \rho_g}{R_0}\Big)^{n+1}. \end{equation} The equivalence principle places a bound on the parameter of model (A) \cite{PhysRevD.77.107501} \begin{equation} n>1.8. \end{equation} As shown in Fig. \ref{huc1c2}, in the region $1.8<n<3$, the solar system constraint (solid line) is fairly weak compared with the stability condition (dotted line) and is sensitive to the value of $n$. Similarly, in the model (B) we have \begin{equation} 1-f'(R_\infty)=\frac{\mu}{\cosh^2\frac{\mu R_\infty}{2\Lambda_\text{eff}}}, \end{equation} where Eq. \eqref{Tsu:cosmos} was used to eliminate $R_c$ in terms of $\mu$. The solar system constraint \eqref{solar} yields \begin{equation}\label{solarB} \mu > 9.5\times 10^{-5}. \end{equation} Compared with the stability condition \eqref{stableB}, this shows that the solar system constraint on $f(R)$ gravity is weaker for this model. Assuming that the cosmological constraint \eqref{cosmos} and the solar system constraint \eqref{solar} are both satisfied, we have checked that in the models (A) and (B) the conditions for the chameleon mechanism \eqref{chameleon1}-\eqref{chameleon3} can all be satisfied. 
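Several of the numbers in this subsection can be checked in a few lines of Python. The sketch below is our own arithmetic: it verifies the exact cancellation behind $\beta=1$ (with $M_\text{Pl}$ set to $1$), the product $|\gamma-1|\Phi_\text{E}\approx 4.9\times10^{-11}$, and, under the additional assumptions $\Lambda_\text{eff}=3\Omega_\Lambda H_0^2$ and $\rho_\text{crit}=1.878\times10^{-29}h^2~\text{g\,cm}^{-3}$ (so that $\mu R_\infty/(2\Lambda_\text{eff})=a\mu$ with $a=(\rho_g/\rho_\text{crit})/(2\Omega_\Lambda)$), the model (B) bound $\mu\gtrsim 9.5\times10^{-5}$:

```python
import math

# (1) beta = 1: A1^2/(2 A_vev^2) - A2/A_vev vanishes for every f'(R_inf)
def beta_deficit(fp):
    A_vev = 1 / math.sqrt(fp)        # M_Pl set to 1
    A1 = A_vev / math.sqrt(6)
    A2 = A_vev / 12
    return A1**2 / (2 * A_vev**2) - A2 / A_vev

for fp in (0.5, 0.9, 0.99, 1.0, 2.0):
    print(beta_deficit(fp))          # ~0 in every case

# (2) Cassini bound times the solar potential
print(2.3e-5 * 2.12e-6)              # 4.876e-11, quoted as 4.9e-11

# (3) model (B): solve mu / cosh^2(a mu) = 4.9e-11 by bisection
h2 = 0.1415 / 0.308                  # h^2 from Omega_m h^2 and Omega_m
rho_crit = 1.878e-29 * h2            # g cm^-3 (assumed value)
a = (1e-24 / rho_crit) / (2 * 0.692) # mu R_inf/(2 Lambda_eff) = a * mu

def one_minus_fprime(mu):
    return mu / math.cosh(a * mu)**2

lo, hi = 5e-5, 2e-4                  # one_minus_fprime is decreasing here
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if one_minus_fprime(mid) > 4.9e-11:
        lo = mid
    else:
        hi = mid
print(lo)                            # ~9.5e-5 with these inputs
```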
\begin{figure} \begin{center} \includegraphics[width=8cm, height=6.2cm]{huc1c2.pdf} \caption{Maximum value of $|f'(R_0)-1|$ in the Hu-Sawicki model allowed by the solar system constraint (solid line), the pulsar constraint (dashed line) and the stability condition (dotted line), respectively. Note that for the dotted line we have taken the cosmological constraint into account.}\label{huc1c2} \end{center} \end{figure} \subsection{Binary pulsar constraints}\label{pulsar_const} It is well known that compact binary systems lose orbital energy through gravitational radiation, so the orbital period decays. In different theories of gravity, the decay rates are different \cite{will1993theory,Will2014}, which provides another independent opportunity to test the metric $f(R)$ gravity. In a binary system, when the difference between the screened parameters of the two compact stars is significant, the dipole radiation dominates the orbital decay rate. Since the screened parameter is inversely proportional to the surface gravitational potential, neutron star-white dwarf (NS-WD) systems are the best testbeds to constrain the parameters of $f(R)$ gravity. In the previous work \cite{Zhang2017}, we studied this effect in the most general scalar-tensor gravity with screening mechanism. For a quasicircular ($e\ll 1$) NS-WD binary system, the orbital period derivative is given by \cite{Zhang2017} \begin{equation}\label{pdot} \dot{P}=\dot{P}^{\rm GR}\left[1+\frac{5}{192}\Big(\frac{P}{2\pi Gm}\Big)^{2/3}(\epsilon_{\rm WD}-\epsilon_{\rm NS})^2\right]. 
\end{equation} Here, $P$ denotes the orbital period, $m=m_{\rm NS}+m_{\rm WD}$ is the total mass, $\mu=m_{\rm NS}m_{\rm WD}/m$ is the reduced mass, and $\epsilon_{\rm WD}={\phi_\infty}/{M_{\rm Pl} \Phi_{\rm WD}}$ and $\epsilon_{\rm NS}={\phi_\infty}/{M_{\rm Pl} \Phi_{\rm NS}}$ are the screened parameters of the white dwarf and the neutron star, respectively, while \begin{equation} \dot{P}^{\rm GR}=-\frac{192\pi}{5}\left(\frac{2\pi Gm}{P}\right)^{5/3}\!\!\left(\frac{\mu}{m}\right) \end{equation} represents the GR prediction for the orbital period derivative. The second term in Eq. \eqref{pdot} corresponds to the scalar dipole radiation correction. We apply this result to the general metric $f(R)$ gravity with chameleon mechanism. Using Eq. \eqref{relation2}, the orbital period derivative translates into \begin{equation} \frac{\dot{P}}{\dot{P}^{\rm GR}}=1+\frac{15}{384}\Big(\frac{P}{2\pi Gm}\Big)^{2/3}\Big(\frac{\ln f'(R_\infty)}{\Phi_{\rm WD}}-\frac{\ln f'(R_\infty)}{\Phi_{\rm NS}}\Big)^2 \, . \end{equation} It can be seen that in the special case $f(R)=R-2\Lambda$, the above result reduces to $\dot{P}=\dot{P}^{\rm GR}$. Because $\Phi_{\rm NS}/\Phi_{\rm WD}\sim 10^4$, the orbital period derivative can be approximated by \begin{equation}\label{Aobs} \frac{\dot{P}}{\dot{P}^{\rm GR}}=1+\frac{15}{384}\Big(\frac{P}{2\pi Gm}\Big)^{2/3}\Big(\frac{\ln f'(R_\infty)}{\Phi_{\rm WD}}\Big)^2 \, . \end{equation} Since all the pulsar observations agree well with the GR predictions within the errors \cite{Stairs2003,Antoniadis1233232,freire2012relativistic}, the observed value of the period derivative can be expressed as \begin{equation} \frac{\dot{P}^{\rm obs}}{\dot{P}^{\rm GR}}=1+\delta\pm\sigma, \end{equation} where $\delta$ is the fractional deviation of the observed $\dot{P}^{\rm obs}$ from the GR prediction and $\sigma$ is the observational uncertainty. 
Thus the background field value $f'(R_\infty)$ cannot deviate too much from unity, that is, \begin{equation}\label{psr_bound} |\ln f'(R_\infty)|\approx|f'(R_\infty)-1|<(|\delta|+2\sigma)^{\frac12}(\frac{m}{M_\odot})^{\frac13}(\frac{P}{1 \text{d}})^{-\frac13}(\frac{m_{\rm WD}}{M_\odot})(\frac{R_{\rm WD}}{R_\odot})^{-1}\times 7.63\times 10^{-9} \end{equation} at the 95\% confidence level. Up to now, more than 2500 pulsars have been observed \cite{neutron}. However, most of them are isolated and their masses cannot be determined. Table 2 in \cite{neutron} lists fifteen NS-WD systems with low-eccentricity orbits which have accurate mass measurements. Among these fifteen NS-WD systems, only PSR J0348+0432 and PSR J1738+0333 have accurately measured white dwarf radii. Thus we use these two NS-WD systems to constrain $f(R)$ gravity and list the relevant parameters in Table \ref{psr}. In the PSR J0348+0432 case (see Table \ref{psr}), $\delta=0.05$ and $\sigma=0.18$. Substituting the parameters into inequality \eqref{psr_bound}, we obtain the upper bound \begin{equation} |f'(R_\infty)-1|<3.583\times10^{-8} \end{equation} at the 95\% confidence level. Similarly, using the observational data of PSR J1738+0333, we obtain \begin{equation} |f'(R_\infty)-1|<3.579\times10^{-8} \end{equation} at the 95\% confidence level. Compared with the solar system constraint \eqref{solar}, the pulsar constraint is three orders of magnitude weaker. Applying the pulsar constraint to the model (A), we obtain \begin{equation} |f'(R_0)-1|<3.6\times 10^{-8}\Big(\frac{8\pi G \rho_g}{R_0}\Big)^{n+1}. \end{equation} The above result is also shown in Fig. \ref{huc1c2} with a dashed line. Similarly, applying the pulsar constraint to the model (B), we obtain \begin{equation} \mu > 5.4\times 10^{-5}. \end{equation} Consistently, we find that both of these are weaker than the corresponding solar system constraints. 
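The two pulsar bounds follow directly from substituting Table \ref{psr} into inequality \eqref{psr_bound}; the following check is our own arithmetic:

```python
def psr_bound(delta, sigma, m, P_day, m_wd, R_wd):
    # right-hand side of inequality (psr_bound), 95% C.L.;
    # masses and radius in solar units, period in days
    return ((abs(delta) + 2 * sigma)**0.5 * m**(1 / 3) * P_day**(-1 / 3)
            * m_wd / R_wd * 7.63e-9)

# PSR J0348+0432: Pdot_obs/Pdot_GR = 1.05 +/- 0.18
print(psr_bound(0.05, 0.18, 2.18, 0.102424062722, 0.172, 0.065))    # ~3.583e-8
# PSR J1738+0333: Pdot_obs/Pdot_GR = 0.93 +/- 0.13
print(psr_bound(-0.07, 0.13, 1.65, 0.3547907398724, 0.181, 0.037))  # ~3.579e-8
```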
\begin{center} \begin{table*}[htb] \caption{Parameters of the binary systems with 1-$\sigma$ uncertainties.} \label{psr} \begin{tabular}{l r r} \hline \hline PSR & J0348+0432 \cite{Antoniadis1233232} & J1738+0333 \cite{freire2012relativistic} \\ \hline Eccentricity, $e$ & $\sim10^{-6}$ & $(3.4\pm1.1) \times 10^{-7}$\\ Period, $P$ (day) & 0.102424062722(7) &0.3547907398724(13) \\ Period derivative, $\dot{P}$ ($10^{-14}$) & $-27.3\pm 4.5$ & $-2.59\pm0.32$\\ $\dot{P}^{\rm obs}/\dot{P}^{\rm GR}$ & $1.05\pm0.18$ & $0.93\pm0.13$\\ Total mass, $m$ ($M_\odot$) & $2.18\pm0.04$ & $1.65_{-0.06}^{+0.07}$ \\ WD mass, $m_{\rm WD}$ ($M_\odot$) & $0.172\pm0.003$ & $0.181_{-0.007}^{+0.008}$ \\ WD radius, $R_{\rm WD}$ ($R_\odot$) & $0.065\pm0.005$ & $0.037_{-0.003}^{+0.004}$\\ \hline \hline \end{tabular} \end{table*} \end{center} \section{Conclusions}\label{conclusion} $f(R)$ gravity has been extensively studied as an explanation of the accelerating expansion of the universe. In this paper, we have studied the general $f(R)$ gravity through its scalar-tensor representation. In this theory, the chameleon mechanism is crucial for $f(R)$ gravity to escape the fifth force constraints. However, due to the non-dynamical nature of the scalar field in Palatini $f(R)$ gravity, this mechanism does not apply to that theory. Therefore, we focused on the metric $f(R)$ gravity with chameleon mechanism. We calculated the PPN parameters $\gamma$ and $\beta$ for the general $f(R)$ gravity, and found that $\beta=1$ in the limit $m_\infty r\ll 1$. As a result, the observed value of $\beta$ cannot constrain the parameters of $f(R)$ models. Applying the Cassini spacecraft measurement of $\gamma$, we obtained the constraint $|f'(R_\infty)-1|<4.9\times10^{-11}$ on the metric $f(R)$ gravity, which is consistent with previous works. To pass the cosmological tests, the metric $f(R)$ gravity should provide an effective cosmological constant. 
We also calculated the effective cosmological constant in $f(R)$ gravity. In general, the cosmological constraint removes one free parameter of a given specific $f(R)$ model. In addition, we calculated the orbital period derivative $\dot{P}$ of binary pulsar systems in the metric $f(R)$ gravity. Since GR has survived the binary pulsar tests, the $\dot{P}$ in the metric $f(R)$ gravity cannot deviate too much from that in GR. We found that the pulsar constraint from the observations of PSR J0348+0432 and PSR J1738+0333 is $|f'(R_\infty)-1|<3.6\times10^{-8}$. This is weaker than the current constraints derived from solar system observations. We also studied the stability condition of de Sitter space. Compared with the observational constraints (binary pulsar and solar system), this theoretical constraint is more stringent in the Hu-Sawicki and Tsujikawa models. With the chameleon mechanism, the metric $f(R)$ gravity with suitable parameters can pass the cosmological, solar system and binary pulsar tests at the same time. \section*{Acknowledgements} This work is supported by NSFC Grants Nos. 11773028, 11603020, 11633001, 11173021, 11322324, 11653002 and 11421303, the project of Knowledge Innovation Program of Chinese Academy of Science, the Fundamental Research Funds for the Central Universities and the Strategic Priority Research Program of the Chinese Academy of Sciences Grant No. XDB23010200.
\section{Introduction} \label{section:introduction} The classical Lehmer conjecture says that there is an absolute constant~$C>0$ such that if~$\a\in{\bar{\QQ}}^*$ is not a root of unity, then its absolute logarithmic height satisfies \[ h(\a) \ge \frac{C}{\bigl[\mathbb{Q}(\a):\mathbb{Q}\bigr]}. \] Various authors have extended this conjecture to elliptic curves and to higher dimensional abelian varieties. We review these conjectures and some of the progress made in proving them in Section~\ref{section:surveyolderresults}. The main results of the present paper are: (1) an explicit Fourier expansion of the ``Bernoulli-part'' of the canonical height on abelian surfaces defined over non-archimedean local fields (Theorem~\ref{theorem:fourierexpansionofL}); (2) a lower bound for a torsion-and-difference average of the Bernoulli part of the height on abelian surfaces defined over function fields (Theorem~\ref{theorem:hlen23len23}). This is an analogue of the key lemma in~\cite{hindrysilverman:lehmer}, which dealt with elliptic curves. We also prove: (3) a Lehmer-type lower bound with exponent~$2$ for the canonical height of non-torsion points on abelian surfaces defined over function fields that is conditional on the assumption that the torsion-and-difference average of the ``intersection part'' of the canonical height is at least as large as a certain local-global constant (Corollary~\ref{corollary:conditionallehmer}). Before explaining the statements of our results in more detail, we briefly recall the local decomposition of the canonical height on abelian varieties. See Section~\ref{section:overviewlocalhts} for further details and references. Let~$k$ be an algebraically closed field of characteristic~$0$, let~$K/k$ be a $1$-dimensional function field, let~$A/K$ be an abelian variety, and let~$\ASD\in\operatorname{Div}(A)$ be an ample symmetric divisor on~$A$. 
The associated canonical height \[ {\hat h}_{A,\ASD} : A({\bar K}) \longrightarrow \mathbb{R}_{\ge0} \] may be decomposed as a sum of normalized local canonical heights \[ \hat\lambda_{A,\ASD,v} : \bigl(A\setminus|\ASD|\bigr)({\bar K}_v) \longrightarrow \mathbb{R}, \] one for each absolute value on~${\bar K}$, where the normalization condition \begin{equation} \label{eqn:normcondlhatADv} \lim_{N\to\infty} \frac{1}{N^{2g}} \sum_{P\in A[N]\setminus|\ASD|} \hat\lambda_{A,\ASD,v}(P) = 0 \end{equation} serves to uniquely determine~$\hat\lambda_{A,\ASD,v}$. The local height further decomposes into an ``intersection-part'' and a ``Bernoulli-part,'' which we denote respectively by~$\hat\lambda^\Int_{A,\ASD,v}$ and~$\hat\lambda^{\textup{Bern}}_{A,\ASD,v}$. The intersection part is given by \[ \hat\lambda^\Int_{A,\ASD,v}(P) = \left( \parbox{.55\hsize}{ intersection index of $\overline{P}$ and $\overline{\ASD}$ on the $v$-fiber of the N\'eron model of~$A$ } \right) - \kappa^\Int_{A,\ASD,v}, \] where the constant~$\kappa^\Int_{A,\ASD,v}$ is chosen so that~$\hat\lambda_{A,\ASD,v}^\Int$ itself satisfies the normalization condition~\eqref{eqn:normcondlhatADv}, and then the Bernoulli part is what's left over, i.e., \[ \hat\lambda_{A,\ASD,v}^{\vphantom{\Int}}(P) = \hat\lambda^\Int_{A,\ASD,v}(P)+\hat\lambda^{\textup{Bern}}_{A,\ASD,v}(P). \] We also note that \[ \text{$A$ has potential good reduction at $v$} \quad\Longrightarrow\quad \hat\lambda^{\textup{Bern}}_{A,\ASD,v}=\kappa_{A,\ASD,v}^\Int=0. \] (See Section~\ref{section:overviewlocalhts} for further details.) For any finite extension~$L/K$, we write~$M_L$ for an appropriately normalized set of absolute values on~$L$, and we let \[ M_L^{\textup{bad}}(A) = \{ v\in M_L : \text{$A$ has bad reduction at $v$} \}. 
\] Then for~$P\in{A(L)}\setminus|\ASD|$, we define an ``inter\-sec\-tion-part'' and a ``Bernoulli-part'' of the global canonical height via \begin{align*} {\hat h}_{A,\ASD}^{\textup{Bern}}(P) &= \frac{1}{[L:K]} \sum_{v\in M_L^{\textup{bad}}(A)} \hat\lambda_{A,\ASD,v}^{\textup{Bern}}(P),\\ {\hat h}_{A,\ASD}^\Int(P) &= \frac{1}{[L:K]} \sum_{v\in M_L} \hat\lambda_{A,\ASD,v}^\Int(P). \end{align*} With this notation, there exists a \emph{local-global height constant}~$\kappa_{A,\ASD}$ so that \begin{equation} \label{eqn:hPhPBhPIkapap} {\hat h}_{A,\ASD}(P) = {\hat h}_{A,\ASD}^{\textup{Bern}}(P) + {\hat h}_{A,\ASD}^\Int(P) - \kappa_{A,\ASD} \quad\text{for all~$P\in{A({\bar K})}\setminus|\ASD|$.} \end{equation} We note that if~$\dim(A)=1$, i.e., if~$A$ is an elliptic curve, then $\kappa_{A,\ASD}=0$. However, if~$\dim(A)\ge2$, then~$\kappa_{A,\ASD}$ is generally positive. Our main results are a Fourier series calculation and the following lower bound for the Bernoulli part of the canonical height in the case that~$A$ is an abelian surface defined over a function field. This theorem, and the Fourier averaging lemmas that we prove along the way, are analogues of the key lemmas and results in~\cite{hindrysilverman:lehmer}, where similar results are proven for elliptic curves. However, we note that the main theorem in~\cite{hindrysilverman:lehmer} is an unconditional Lehmer-type lower bound for the canonical height on elliptic curves (with non-integral $j$-invariant), while our result for abelian surfaces only gives a lower bound for a suitable average of the Bernoulli part of the height. \begin{theorem}[Theorem~$\ref{theorem:hlen23len23}$ and Corollary~$\ref{corollary:conditionallehmer}$] \label{theorem:mainthmintro} Fix the following quantities\textup: \begin{notation} \item[$k$] an algebraically closed field of characteristic~$0$. \item[$K/k$] a $1$-dimensional function field. 
\item[$(A,\ThetaDivisor)/K$] an abelian variety~$A$ defined over~$K$ with an effective symmetric principal polarization $\Theta\in\operatorname{Div}_K(A)$. \item[${\hat h}_{A,\ThetaDivisor}$] the canonical height on~$A$ for the divisor $\ThetaDivisor$. \item[${\hat h}_{A,\ThetaDivisor}^{\textup{Bern}},{\hat h}_{A,\ThetaDivisor}^\Int$] the Bernoulli and intersection parts of the canonical height on~$A$ for the divisor $\ThetaDivisor$. \end{notation} Assume that for every place~$v$ of~$K$, the abelian variety~$A$ has either potential good reduction at~$v$ or totally multiplicative reduction at~$v$, and that~$A$ has at least one place of multiplicative reduction. \begin{parts} \Part{(a)} There are constants~$\Cl[DZ]{jj10},\Cl[DZ]{jj11},\Cl[DZ]{jj8},\Cl[DZ]{jj9}>0$ and an integer~$d\ge1$ so that for all finite extensions~$L/K$ and all sets of points $\Sigma\subset{A(L)}$ of~${\hat h}_{A,\ThetaDivisor}$-height at most~$\Cr{jj10}$, there is a subset~$\Sigma_0\subseteq\Sigma$ with~$\#\Sigma_0\ge\Cr{jj11}\#\Sigma$ so that the following double average\footnote{The averaging notation~$\operatorname{\hbox{\normalfont\calligra{Avg}}}$ is fairly self-explanatory, but see Section~\ref{section:avgperiodicfuncs} for the precise definition.} of the Bernoulli part of the heights of the points in~$\Sigma_0$ satisfies \[ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}(P-Q+T) \ge \frac{\Cr{jj8}}{[L:K]^{2/3}} - \frac{\Cr{jj9}}{\#\Sigma}. 
\] \Part{(b)} Suppose that the subset~$\Sigma_0$ in~\textup{(a)} can always be chosen so that it satisfies the further estimate \begin{equation} \label{eqn:AvgPQAvgdIntgeLK23} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^\Int(P-Q+T) \ge \kappa_{\ThetaDivisor}, \end{equation} where~$\kappa_{\ThetaDivisor}$ is the constant appearing in~\eqref{eqn:hPhPBhPIkapap}. Then there is a constant~$\Cl[DZ]{jj12}>0$ so that every non-torsion $P\in{A({\bar K})}$ satisfies \[ {\hat h}_{A,\ThetaDivisor}(P) \ge \frac{\Cr{jj12}}{\bigl[K(P):K\bigr]^2}. \] \end{parts} \end{theorem} We make several remarks, but again we refer the reader to Section~\ref{section:surveyolderresults} for more details of the history and known results surrounding Lehmer's conjecture. To ease notation, we write \[ D = \bigl[K(P):K\bigr] \] for the degree of the field of definition of~$P$ (although we note that later we assign a different meaning to~$D$). \begin{remark} Our proof uses the Fourier averaging technique that has previously been used for the classical Lehmer conjecture~\cite[Blanksby--Montgomery~(1971)]{MR0296021} and for Lehmer's conjecture on elliptic curves~\cite[Hindry--Silverman~(1990)]{hindrysilverman:lehmer}. A crucial ingredient in the one-dimensional cases is that the Fourier series associated to the local heights has non-negative coefficients, a fact that is no longer true in the higher dimensional case. Thus our proof has two key components. First we compute the relevant two-dimensional Fourier series attached to a periodic two-variable quadratic form with its associated hexagonal fundamental domain. Second we deal with the issue that the Fourier series has both positive and negative Fourier coefficients via a subsidiary averaging process over a suitable collection of torsion points. 
\end{remark} \begin{remark} Currently the best known result for general abelian surfaces over number fields\footnote{Presumably Masser's proof carries over to the function field setting, where the~$\epsilon$ might well be superfluous.} is due to Masser. More generally it is proven in~\cite[Masser 1984]{MR766295} that on an abelian variety of dimension~$g$, there is a Lehmer estimate \begin{equation} \label{eqn:masserhAPD2g62g} {\hat h}_A(P) \ge \frac{C_\epsilon(A/K)}{D^{2g+6+2/g+\epsilon}}. \end{equation} Thus for abelian surfaces, i.e., for~$g=2$, Masser's lower bound is $O(D^{-11})$, which may be compared to our conditional lower bound of~$O(D^{-2})$ and with the conjectural lower bound of~$O(D^{-1/2})$. Masser's proof uses auxiliary polynomials and methods from Diophantine approximation, a technique that has long been used in studying Lehmer's conjecture. \end{remark} \begin{remark} One would of course like to prove a result for number fields that is analogous to Theorem~\ref{theorem:mainthmintro}, much as was done in~\cite{hindrysilverman:lehmer} for elliptic curves. However, the Fourier expansion for the archimedean local height is likely to include negative Fourier coefficients, just as in the non-archimedean case. And these negative Fourier coefficients would vitiate the argument used in the present paper, since we rely on the fact that our absolute values are discrete, and thus that the component groups on the N\'eron model are finite and are well-behaved under finite extension. For archimedean absolute values, the ``fiber'' on the ``N\'eron model'' should be viewed as having ``bad reduction'' with ``component group'' equal to a real torus. We thus have no consistent way to calculate which multiples of a point lie on (or near) the ``identity component'' in an archimedean topology. 
\end{remark} \begin{remark} Fourier averaging techniques have also been used successfully for studying Lang's height lower bound conjecture, in which one fixes a field~$K$ and varies the abelian variety. Lang's conjecture asserts (roughly) that for all abelian varieties~$A/K$ of dimension~$g$ and all points~$P\in{A(K)}$ whose multiples are Zariski dense in~$A$, we have \[ {\hat h}_A(P) \ge c_1(K,g) h(A/K) - c_2(K,g), \] where~$h(A/K)$ is an appropriate height of the abelian variety. For~$g=1$, this was proven for function fields, and conditionally for number fields assuming Szpiro's conjecture, using Fourier averaging~\cite[Hindry--Silverman]{hindrysilverman:lehmer}. The difficulty in directly extending these proofs to abelian surfaces, even conditionally on a Szpiro-type conjecture, is again the two-fold problem of negative Fourier coefficients and that pesky~$\kappa_{A,\ASD}$ constant. However, see~\cite[David (1993)]{MR1254751} and~\cite[Pazuki (2013)]{MR3081000} for Lang-style lower bounds for abelian varieties in which the lower bound has a correction term that measures the distance to the boundary of moduli space. \end{remark} \begin{remark} We also hope that it may be possible to extend our results to abelian varieties of dimension three or greater. However, it seems a challenging problem to write down an explicit formula for the Fourier series of the Bernoulli part of the local height in higher dimensions, since our two-dimensional hexagonal fundamental domain (see Figure~\ref{figure:hexagonandsquare}) would be replaced by a~$g$-dimensional parallelepiped. If this could be done, however, we would not be surprised if it could be used to prove a function field Lehmer-type bound with exponent~$2$ for the Bernoulli-part of the height. \end{remark} \section{Survey of Previous Results and Methods} \label{section:surveyolderresults} In this section we give a brief overview of the study of Lehmer-type height lower bounds. 
We continue with the notation \[ D = \bigl[K(P):K\bigr]. \] Lehmer's original conjecture~\cite[Lehmer~(1933)]{MR1503118}, actually phrased as a question, says that\footnote{We assume throughout this historical survey (Section~\ref{section:surveyolderresults}) that ``trivial counter-examples'' are excluded. Thus for Lehmer's original conjecture, we assume that~$\a$ is non-zero and not a root of unity, for elliptic curves~$P$ is a non-torsion point, and for abelian varieties we assume that the iterates of~$P$ are Zariski dense.} \[ h(\a) \ge CD^{-1}. \] General Lehmer-type estimates\footnote{We use the phrase ``Lehmer-type estimate'' to mean a height lower bound that decays at worst polynomially in the degree~$D$. We note that it is relatively easy to obtain exponentially decaying bounds.} were proven in the classical case in~\cite[Blanksby--Montgomery~1971]{MR0296021} using Fourier series methods and in~\cite[Stewart~1978]{MR507748} using auxiliary polynomials. Both proofs give bounds of the form \[ h(\a) \ge CD^{-2}(\log D)^{-1}. \] Stewart's methods were applied in~\cite[Dobrowolski~1979]{dobrowolski:lehmer} to achieve the following bound, which is ever-so-close to Lehmer's conjecture: \[ h(\a) \ge CD^{-1} \left(\frac{\log\log D}{\log D}\right)^3. \] Dobrowolski's innovation was to use the Frobenius $p$-power map for suitably many~$p$ to greatly increase the power of the vanishing lemma. The first general Lehmer-type estimate for elliptic curves was given in~\cite[Anderson--Masser~(1980)]{MR591611}, where a lower bound of roughly~$D^{-10}$ was given. This was subsequently improved in~\cite[Masser~(1989)]{masser:lehmer} to \[ {\hat h}_E(P) \ge C D^{-3}(\log D)^{-2}. \] Masser's proof uses auxiliary polynomials. A Fourier series proof of the same precision was given in~\cite[Zhang~(1989)]{zhang1989unpublished}. Stronger results are known for restricted collections of elliptic curves. 
Notable is the result of Laurent~\cite[Laurent (1983)]{laurent:lehmer}, who proves a Dobrowolski-type bound \[ {\hat h}_E(P) \ge CD^{-1} \left(\frac{\log\log D}{\log D}\right)^3 \quad\text{if $E$ has complex multiplication.} \] And building on Zhang's ideas, Masser's result was improved in~\cite[Hindry--Silverman~(1990)]{hindrysilverman:lehmer} to \[ {\hat h}_E(P) \ge C D^{-2}(\log D)^{-2} \quad\text{if $j(E)$ is non-integral.} \] There are also many results proving Lehmer-type estimates for points defined over restricted types of fields. One of the earliest such results is the proof~\cite[Smyth~(1971)]{smyth:lehmer} that Lehmer's conjecture is true for all~$\a\in{\bar{\QQ}}^*$ such that~$\a^{-1}$ is not a ${\bar{\QQ}}/\mathbb{Q}$-Galois conjugate of~$\a$. (One says that~$\a$ is non-reciprocal.) Even stronger results are known for points defined over abelian extensions of the ground field~$K$. It is shown in~\cite[Amoroso--Dvornicich~(2000)]{MR1740514} that \[ h(\a) \ge C(K) > 0 \quad\text{for all non-zero non-roots of unity $\a\in K^{\textup{ab}}$,} \] and analogous estimates for points defined over~$K^{\textup{ab}}$ were proven for elliptic curves in~\cite[Baker~(2003)]{MR1979685} and~\cite[Silverman~(2004)]{MR2029512}, and then for abelian varieties in~\cite[Baker--Silverman~(2004)]{MR2067482}. Note that in these abelian extension results, the lower bounds are independent of~$D$. Under the weaker assumption that~$K(P)/K$ is a Galois extension, it is shown in~\cite[Galateau--Mah{\'e}~(2017)]{MR3598828} that the elliptic Lehmer conjecture is true. We next consider higher dimensional abelian varieties. For a simple abelian variety~$A/K$ of dimension~$g$ and appropriate choice of canonical height, the current conjecture~\cite{MR1799933,MR766295} appears to be \[ {\hat h}_A(P) \ge C D^{-1/g}, \] although no one has yet managed to obtain even~$D^{-1}$. 
It is shown in~\cite[Masser~(1984)]{MR766295} that \[ {\hat h}_A(P) \ge C_\epsilon D^{-2g-6-2/g-\epsilon}, \] and if~$A$ has complex multiplication, then Dobrowolski-type bounds have been proven in~\cite[David--Hindry~(2000)]{MR1799933} and~\cite[Ratazzi~(2008)]{MR2445828}. For the $g$-fold product $E^g$ of an elliptic curve, the estimate \[ {\hat h}_{E^g}(P) \ge C D^{-1-1/2g}(\log D)^{-2/g} \] is proven in~\cite[Galateau--Mah{\'e}~(2017)]{MR3598828}. A Lehmer-type conjecture involves fixing one geometric object such as~$\mathbb{G}_m$,~$E$, or~$A$ defined over a field~$K$, and finding height lower bounds for points defined over extensions of~$K$. Dem'janenko and Lang conjectured a different sort of height lower bound for elliptic curves by fixing a field~$K$ and allowing the elliptic curve to vary. The original conjecture had the form \begin{multline*} {\hat h}_E(P) \ge c_1(K)\log{\operatorname{\mathsf{N}}}{\mathcal D}_{E/K} - c_2(K) \\ \text{for all $E/K$ and all non-torsion $P\in E(K)$,} \end{multline*} and this has been generalized to abelian varieties with the log-discri\-mi\-nant replaced by an appropriate height of the abelian variety, e.g., the height~$h(A/K)$ used by Faltings in his proof of the Mordell conjecture. A Fourier series argument was used in~\cite[Hindry--Silverman~(1988)]{hindrysilverman:integralpts} to prove that Lang's conjecture for elliptic curves is a consequence of Szpiro's conjectured inequality relating the discriminant and the conductor of an elliptic curve, so in particular Lang's conjecture is a theorem over one-dimensional characteristic~$0$ function fields. However, for higher dimensional abelian varieties, the best known estimates include an error term that grows as the moduli point of the abelian variety approaches the boundary of the associated moduli space; see for example~\cite[David~(1993)]{MR1254751}. 
The definition of the canonical height of points on abelian varieties can be extended to assign a canonical height to subvarieties of higher dimension, and one can formulate a Lehmer conjecture and prove Lehmer-type lower bounds for these higher dimensional heights. See for example the series of papers by David and Philippon~\cite{MR1478502,MR1949109,MR2355454}. In this brief section, we have only touched on some of the work done on Lehmer's conjecture. For additional information, the reader might consult the lengthy (unpublished) survey article~\cite[Verger-Gaugry~(2019)]{VergerGaugrysurvey} that includes an extensive bibliography of articles related to the conjectures of Lehmer and Schinzel-Zassenhaus. \section{An Overview of Canonical Local and Global Heights} \label{section:overviewlocalhts} We follow the exposition in Hindry's notes~\cite{hindrynotesonlocalheights}; see also the articles~\cite{MR1418354,MR1458753,MR1662481} by Werner (especially~\cite{MR1458753}). We set the following notation:\footnote{For comparison with~\cite{hindrynotesonlocalheights}, we note that Hindry's~$i_v(D,P)$ is our~$\left\langle\overline{D}\cdot\overline{P}\right\rangle_{{\mathcal A},v}$, and we have adopted his $B_{D,v}\bigl(j_v(P)\bigr)$ notation; cf.\ \cite[(3.10) and (3.11)]{hindrynotesonlocalheights}. We also point the reader to the brief discussion of the function field setting in~\cite[Section~5]{hindrynotesonlocalheights}.} \begin{notation} \item[$k$] an algebraically closed field of characteristic~$0$. \item[$K/k$] a $1$-dimensional function field over $k$. \item[$M_L$] for finite extensions~$L/K$, a complete set of absolute values on~$L$, normalized so that~$w(L^*)=\mathbb{Z}$ for all~$w\in{M_L}$. \item[$A/K$] an abelian variety of dimension~$g$ defined over~$K$. \item[${\mathcal A}$] the N\'eron model of $A/K$. \item[${\mathcal A}^{\circ}$] the identity component of the N\'eron model of $A/K$. 
\end{notation} For~$P=[x_0,\ldots,x_N]\in\mathbb{P}^N({\bar K})$, the \emph{Weil height} of~$P$ is defined by choosing a finite extension~$L/K$ with~$P\in\mathbb{P}^N(L)$ and setting \[ h(P) = \frac{1}{[L:K]}\sum_{w\in M_L} \max_{0\le i\le N}\bigl\{-w(x_i)\bigr\}. \] The value is independent of the choice of~$L$.\footnote{Those who are familiar with the theory of Weil heights may wonder where the local factor~$[L_w:K_w]$ has gone. The answer is that there is no residue degree, since our scalar field~$k$ is algebraically closed, and the ramification degree is already absorbed in the way that we have normalized the absolute values in~$M_K$ and~$M_L$, i.e., if~$\a\in{K^*}$ and~$w\in{M_L}$ lies over~$v\in{M_K}$, then $w(\a)=e(w/v)v(\a)$ already includes the ramification degree.} \begin{theorem} \label{theorem:neronfncexist} \textup{(N\'eron)} Let~$A/K$ be an abelian variety. There exists a unique collection of functions \[ \hat\lambda_{\ASD,v} : A({\bar K}_v)\setminus|\ASD| \longrightarrow \mathbb{R}, \quad\text{where $\ASD\in\operatorname{Div}_K(A)$ and $v\in M_K$,} \] so that the following are true\textup: \begin{parts} \Part{(a)} The map~$\hat\lambda_{\ASD,v}$ is continuous, where we give~$A({\bar K}_v)$ the $v$-adic topology. 
\Part{(b)} For all $\ASD,\ASD'\in\operatorname{Div}_K(A)$ and all $v\in{M_K}$, \[ \hat\lambda_{\ASD+\ASD',v} = \hat\lambda_{\ASD,v} + \hat\lambda_{\ASD',v} \quad\text{on $A({\bar K}_v)\setminus\bigl(|\ASD|\cup|\ASD'|\bigr)$.} \] \Part{(c)} For all morphisms $\varphi:A\to{B}$ of abelian varieties over~$K$ and all $\ASD\in\operatorname{Div}_K(B)$, \[ \hat\lambda_{A,\varphi^*\ASD,v} = \hat\lambda_{B,\ASD,v}\circ \varphi \quad\text{on $A({\bar K}_v)\setminus|\varphi^*\ASD|$.} \] \Part{(d)} For all rational functions~$f\in{K(A)}$, \[ \hat\lambda_{\div(f),v} = v\circ f \quad\text{on $A({\bar K}_v)\setminus\bigl|\div(f)\bigr|$.} \] \Part{(e)} \textup{(Normalization)} For all $\ASD\in\operatorname{Div}_K(A)$ and all~$v\in{M_K}$, we have\footnote{Without this normalization, which N\'eron did not impose in his original formulation, the function~$\hat\lambda_{\ASD,v}$ is only well-defined up to an~$M_K$-constant. We also mention that if the absolute value on~$K$ is archimedean, then the normalization condition is equivalent to $\int_{A({\bar K}_v)} \hat\lambda_{\ASD,v}(P)\,d\mu(P) = 0$, where~$\mu$ is Haar measure on~$A({\bar K}_v)\cong{A(\mathbb{C})}$.} \begin{equation} \label{eqn:limn2gPAn0} \lim_{N\to\infty} N^{-2g} \sum_{P\in A[N]\setminus|\ASD|} \hat\lambda_{\ASD,v}(P)=0. \end{equation} \Part{(f)} \textup{(Good Reduction)} If~$A$ has potential good reduction at~$v$, then\footnote{N\'eron proved that this formula is true up to a constant. See~\cite{MR1413570} for a proof that the average of the intersection multiplicities over torsion points goes to~$0$, which implies that the constant vanishes.} \[ \hat\lambda_{\ASD,v}(P) = \left\langle\overline{\ASD}\cdot\overline{P}\right\rangle_{{\mathcal A},v} \] is the intersection index over~$v$ of the closures of~$\ASD$ and~$P$ in~${\mathcal A}$. 
In the case of potential good reduction we have \[ \hat\lambda_{\ASD,v}(P) \ge 0 \quad\text{for all~$P\in A({\bar K}_v)\setminus|\ASD|$.} \] \Part{(g)} \textup{(Bad Reduction)} Let \[ j_v : A(K) \longrightarrow ({\mathcal A}/{\mathcal A}^\circ)_v(k) \] be the homomorphism that sends a point to its image in the group of components of the N\'eron model over~$v$. Then there is a function\footnote{N\'eron further proved that the values of~$\mathbb{B}_{\ASD,v}$ are rational numbers with denominators dividing~$2\#({\mathcal A}/{\mathcal A}^\circ)_v(k)$.} \[ \mathbb{B}_{\ASD,v} : ({\mathcal A}/{\mathcal A}^\circ)_v(k) \longrightarrow \mathbb{R} \] so that \begin{equation} \label{eqn:LDvPivDPBBDvP} \hat\lambda_{\ASD,v}(P) = \left\langle\overline{\ASD}\cdot\overline{P}\right\rangle_{{\mathcal A},v} + \mathbb{B}_{\ASD,v}\bigl(j_v(P)\bigr) - \kappa_{\ASD,v}, \end{equation} where again~$\kappa_{\ASD,v}$ is chosen so that~\eqref{eqn:limn2gPAn0} holds. \Part{(h)} \textup{(Local-Global Decomposition)} There is a constant $\kappa_{A,\ASD}$ so that for all finite extensions~$L/K$ and all $P\in{A(L)}\setminus|\ASD|$,\footnote{\textbf{Important Note}: When the local heights are normalized via~\eqref{eqn:limn2gPAn0}, then their weighted sum will generally differ from the global height by a non-zero constant that we have denoted~$\kappa_{A,\ASD}$; see~\cite[Appendix]{hindrynotesonlocalheights} for an example. However, if~$\dim(A)=1$, then~$\kappa_{A,\ASD}=0$, which is why this issue does not arise when working with elliptic curves.} \[ {\hat h}_\ASD(P) = \frac{1}{[L:K]} \sum_{w\in M_L} \hat\lambda_{\ASD,w}(P) - \kappa_{A,\ASD}. 
\] \end{parts} \end{theorem} \begin{definition} \label{definition:intandbernlochts} With notation as in Theorem~\ref{theorem:neronfncexist}, we define the \emph{normalized intersection local height} and the \emph{normalized Bernoulli local height} to be, respectively, \begin{equation} \label{eqn:deflambdaintandbern} \hat\lambda_{\ASD,v}^\Int(P) = \left\langle\overline{\ASD}\cdot\overline{P}\right\rangle_{{\mathcal A},v} - \kappa_{\ASD,v}^\Int \quad\text{and}\quad \hat\lambda_{\ASD,v}^{\textup{Bern}}(P) = \mathbb{B}_{\ASD,v}\bigl(j_v(P)\bigr) - \kappa_{\ASD,v}^{\textup{Bern}}, \end{equation} where the constants~$\kappa_{\ASD,v}^\Int$ and~$\kappa_{\ASD,v}^{\textup{Bern}}$ are chosen to ensure the normalization formulas \begin{equation} \label{eqn:normalizationIntBern} \frac{1}{N^{2g}}\sum_{T\in A[N]} \hat\lambda_{\ASD,v}^\Int(T) \xrightarrow[N\to\infty]{} 0 \quad\text{and}\quad \frac{1}{N^{2g}}\sum_{T\in A[N]} \hat\lambda_{\ASD,v}^{\textup{Bern}}(T) \xrightarrow[N\to\infty]{} 0. \end{equation} We note that Theorem~\ref{theorem:neronfncexist}(f) says that if~$A$ has potential good reduction at~$v$, then \[ \hat\lambda_{\ASD,v}=\hat\lambda_{\ASD,v}^\Int,\quad \kappa_{\ASD,v}^\Int=0,\quad\text{and}\quad \hat\lambda_{\ASD,v}^{\textup{Bern}}=0, \] so the added complication of~\eqref{eqn:deflambdaintandbern} and~\eqref{eqn:normalizationIntBern} is only needed if~$A$ does not have potential good reduction. \end{definition} \begin{definition} \label{definition:globalintbernhts} We define the \emph{global intersection height} and the \emph{global Bernoulli height} as follows: For~$P\in{A({\bar K})\setminus|\ASD|}$, \begin{align*} {\hat h}_{A,\ASD}^\Int(P) &= \frac{1}{\bigl[K(P):K\bigr]} \sum_{w\in M_{K(P)}} \hat\lambda_{A,\ASD,w}^\Int(P), \\* {\hat h}_{A,\ASD}^{\textup{Bern}}(P) &= \frac{1}{\bigl[K(P):K\bigr]} \sum_{w\in M_{K(P)}}\hat\lambda_{A,\ASD,w}^{\textup{Bern}}(P). 
\end{align*} \end{definition} The next result summarizes how our various normalizations and normalizing constants fit together. \begin{proposition} \label{proposition:hteqintplusbernhts} With notation as in Theorem~$\ref{theorem:neronfncexist}$ and Definitions~$\ref{definition:intandbernlochts}$ and~$\ref{definition:globalintbernhts}$, we have \begin{align} \label{eqn:lhatexacteqlhatintpluslhatbern} \hat\lambda_{\ASD,v}(P) &= \hat\lambda_{\ASD,v}^\Int(P) + \hat\lambda_{\ASD,v}^{\textup{Bern}}(P). \\ \label{eqn:hhatsumintbernparts} {\hat h}_{\ASD}(P) &= {\hat h}_{\ASD}^\Int(P) + {\hat h}_{\ASD}^{\textup{Bern}}(P) - \kappa_{A,\ASD}. \end{align} \end{proposition} \begin{proof} Using~\eqref{eqn:limn2gPAn0},~\eqref{eqn:LDvPivDPBBDvP}, \eqref{eqn:deflambdaintandbern}, and \eqref{eqn:normalizationIntBern}, we see that \begin{align*} 0 &= \lim_{N\to\infty} \frac{1}{N^{2g}}\sum_{T\in A[N]} \Bigl(\hat\lambda_{\ASD,v}(T) - \hat\lambda_{\ASD,v}^\Int(T) - \hat\lambda_{\ASD,v}^{\textup{Bern}}(T) \Bigr) \\ &= \lim_{N\to\infty} \frac{1}{N^{2g}}\sum_{T\in A[N]} (-\kappa_{\ASD,v}^{\vphantom{\Int}}+\kappa_{\ASD,v}^\Int+\kappa_{\ASD,v}^{\textup{Bern}}) \\ &= -\kappa_{\ASD,v}^{\vphantom{\Int}}+\kappa_{\ASD,v}^\Int+\kappa_{\ASD,v}^{\textup{Bern}}. \end{align*} Thus~$\kappa_{\ASD,v}^{\vphantom{\Int}}=\kappa_{\ASD,v}^\Int+\kappa_{\ASD,v}^{\textup{Bern}}$, which gives~\eqref{eqn:lhatexacteqlhatintpluslhatbern}. Then, taking~$L=K(P)$, \begin{align*} {\hat h}_\ASD(P) + \kappa_{A,\ASD} &= \frac{1}{[L:K]} \sum_{w\in M_L} \hat\lambda_{\ASD,w}(P) \quad\text{from Theorem~\ref{theorem:neronfncexist}(h),} \\ &= \frac{1}{[L:K]} \sum_{w\in M_L} \Bigl( \hat\lambda_{\ASD,w}^\Int(P)+\hat\lambda_{\ASD,w}^{\textup{Bern}}(P) \Bigr) \quad\text{from \eqref{eqn:lhatexacteqlhatintpluslhatbern},} \\ &= {\hat h}_{\ASD}^\Int(P) + {\hat h}_{\ASD}^{\textup{Bern}}(P) \quad\text{from Definition~\ref{definition:globalintbernhts},} \end{align*} which proves~\eqref{eqn:hhatsumintbernparts}. 
\end{proof} \begin{remark} \label{remark:lhatberndefeverywhere} We note that~$j_v$ and N\'eron's Bernoulli function~$\mathbb{B}_{\ASD,v}$ are defined at all points, so the Bernoulli-part of the local height is well-defined everywhere, \[ \hat\lambda_{\ASD,v}^{\textup{Bern}} : A({\bar K}) \longrightarrow \mathbb{R}. \] This is in contrast to the intersection-part~$\hat\lambda_{\ASD,v}^\Int$ of the local height, which is only defined off of the support of the divisor~$\ASD$, since if~$P\in|\ASD|$, then the local intersection index~$\left\langle\overline{\ASD}\cdot\overline{P}\right\rangle_{{\mathcal A},v}$ is not defined. \end{remark} \section{Local Heights for Completely Split Multiplicative Reduction} \label{section:localhtformulanonarch} We continue with our discussion of (local) heights based on the material in~\cite{hindrynotesonlocalheights}. For this section, we fix a non-archimedean place~$v\in{M_K}$ such that~${\mathcal A}_v^\circ\cong\mathbb{G}_m^g$ is a split torus. There is then a $v$-adic uniformization \[ \mathbb{G}_m^g(K_v)/\Omega \xrightarrow{\;\;\sim\;\;} A(K_v), \] where~$\Omega$ is a (multiplicative) lattice, say spanned by the columns of\footnote{The $q_{ij}$ may live in a multi-quadratic extension of~$K$.} \[ \Omega = \operatorname{Multiplicative-Span} \left( \begin{array}{c|c|c|c} q_{11}^2 & q_{12}^2 & \cdots & q_{1g}^2 \\ q_{21}^2 & q_{22}^2 & \cdots & q_{2g}^2 \\ \vdots & \vdots & \ddots & \vdots \\ q_{g1}^2 & q_{g2}^2 & \cdots & q_{gg}^2 \\ \end{array} \right). 
\] We define matrices \[ {\boldsymbol q} = \begin{pmatrix} q_{11} & q_{12} & \cdots & q_{1g} \\ q_{21} & q_{22} & \cdots & q_{2g} \\ \vdots & \vdots & \ddots & \vdots \\ q_{g1} & q_{g2} & \cdots & q_{gg} \\ \end{pmatrix} \;\text{and}\; Q = \begin{pmatrix} v(q_{11}) & v(q_{12}) & \cdots & v(q_{1g}) \\ v(q_{21}) & v(q_{22}) & \cdots & v(q_{2g}) \\ \vdots & \vdots & \ddots & \vdots \\ v(q_{g1}) & v(q_{g2}) & \cdots & v(q_{gg}) \\ \end{pmatrix}, \] where~${\boldsymbol q}$ and~$Q$ are symmetric, and~$Q$ is positive-definite. In general, when we apply~$v$ to vectors and matrices with entries in~$K_v$, we mean the associated vector or matrix obtained by applying~$v$ to the entries. So for example, we have~$Q=v({\boldsymbol q})$, and for~${\boldsymbol u}\in\mathbb{G}_m^g(K_v)$, we have~$v({\boldsymbol u})=\bigl(v(u_1),\ldots,v(u_g)\bigr)$. We introduce notation that will make it easier to work with linear algebra on multiplicative spaces. For (column) vectors \[ {\boldsymbol u}=(u_1,\ldots,u_g)\in\mathbb{G}_m^g(K_v) \quad\text{and}\quad {\boldsymbol m}=(m_1,\ldots,m_g)\in\mathbb{Z}^g, \] we define\footnote{The intuition is that ${}^t{\boldsymbol m}\star{\boldsymbol u}$ is $\exp({}^t{\boldsymbol m}\log{\boldsymbol u})$.} \[ {}^t{\boldsymbol m}\star{\boldsymbol u} = \prod_{i=1}^g u_i^{m_i}. \] Similarly, for the multiplicative period matrix~${\boldsymbol q}$ and integer vectors ${\boldsymbol m},{\boldsymbol n}\in\mathbb{Z}^g$, we define \[ {}^t{\boldsymbol m}\star {\boldsymbol q}\star {\boldsymbol n} = \prod_{i,j=1}^g q_{ij}^{m_in_j}. \] In particular, we note that \[ v({}^t{\boldsymbol m}\star {\boldsymbol q}\star {\boldsymbol n}) = {}^t{\boldsymbol m} Q {\boldsymbol n} = \sum_{i,j=1}^g m_in_jv(q_{ij}) \] is the value of the bilinear form associated to the positive-definite matrix~$Q$. 
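To sanity-check the $\star$-notation, one can model~$v$ by the $2$-adic valuation on~$\mathbb{Q}$ and take~$q_{ij}=2^{Q_{ij}}$; the identities $v({}^t{\boldsymbol m}\star{\boldsymbol u})={}^t{\boldsymbol m}\,v({\boldsymbol u})$ and $v({}^t{\boldsymbol m}\star{\boldsymbol q}\star{\boldsymbol n})={}^t{\boldsymbol m} Q{\boldsymbol n}$ can then be verified with exact rational arithmetic. A minimal Python sketch (the helper names \texttt{v2}, \texttt{star\_mu}, and \texttt{star\_mqn} are ours, not notation from the text):

```python
from fractions import Fraction

def v2(x):
    """2-adic valuation of a nonzero rational number."""
    x = Fraction(x)
    num, den, k = abs(x.numerator), x.denominator, 0
    while num % 2 == 0:
        num //= 2
        k += 1
    while den % 2 == 0:
        den //= 2
        k -= 1
    return k

def star_mu(m, u):
    """t_m * u = prod_i u_i^{m_i}: multiplicative pairing of an
    integer vector with a point of G_m^g."""
    r = Fraction(1)
    for mi, ui in zip(m, u):
        r *= Fraction(ui) ** mi
    return r

def star_mqn(m, q, n):
    """t_m * q * n = prod_{i,j} q_{ij}^{m_i n_j}."""
    r = Fraction(1)
    for i, mi in enumerate(m):
        for j, nj in enumerate(n):
            r *= Fraction(q[i][j]) ** (mi * nj)
    return r

# Period valuation matrix Q (symmetric, positive definite), q_ij = 2^{Q_ij}.
Q = [[2, 1], [1, 3]]
q = [[2 ** Q[i][j] for j in range(2)] for i in range(2)]

m, n = [1, -2], [3, 1]
mQn = sum(m[i] * Q[i][j] * n[j] for i in range(2) for j in range(2))
assert v2(star_mqn(m, q, n)) == mQn            # v(t_m * q * n) = t_m Q n

u = [Fraction(8), Fraction(1, 2)]              # v(u) = (3, -1)
assert v2(star_mu(m, u)) == m[0] * 3 + m[1] * (-1)
```

Of course this only illustrates the bookkeeping; in the text, $v$ is a normalized absolute value on~$K_v$ rather than the $2$-adic valuation on~$\mathbb{Q}$.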
Just as in the classical case over~$\mathbb{C}$, the change-of-basis formula for the multiplicative period matrix~${\boldsymbol q}$ may be described using the \emph{symplectic group} \begin{multline*} \Symp_{2g}(\mathbb{Z}) = \left\{ \begin{pmatrix} A&B\\ C&D\\ \end{pmatrix} \in \operatorname{Mat}_{2g\times2g}(\mathbb{Z}): \right. \\ \left. {\vrule height15pt depth0pt width0pt}^t\!\! \begin{pmatrix} A&B\\ C&D\\ \end{pmatrix} \begin{pmatrix} 0&I\\ -I&0\\ \end{pmatrix} \begin{pmatrix} A&B\\ C&D\\ \end{pmatrix} = \begin{pmatrix} 0&I\\ -I&0\\ \end{pmatrix} \right\}. \end{multline*} For our purposes, it suffices to describe the action of~$\Symp_{2g}(\mathbb{Z})$ on the period valuation matrix~$Q$. It is given by the formula \begin{equation} \label{eqn:symp2gaction} \begin{pmatrix} A&B\\ C&D\\ \end{pmatrix}\star Q = (AQ+B)(CQ+D)^{-1}. \end{equation} The following normalization lemma for the $2$-dimensional case will be used later. \begin{lemma} \label{lemma:quadformwbpositive} Let~$Q$ be a positive definite symmetric $2$-by-$2$ matrix. Then the~$\Symp_4(\mathbb{Z})$ equivalence class of~$Q$ via the action~\eqref{eqn:symp2gaction} contains a matrix \[ \begin{pmatrix} a & b \\ b & c \\ \end{pmatrix} \in \Symp_4(\mathbb{Z})\star Q \] satisfying \begin{equation} \label{eqn:acgeb20le12blealec} ac > b^2 \quad\text{and}\quad 0\le 2b \le a \le c. \end{equation} We will say that a matrix~$\SmallMatrix{a&b\\b&c\\}$ satisfying~\eqref{eqn:acgeb20le12blealec} is \emph{normalized}. \end{lemma} \begin{proof} Standard reduction theory of positive definite binary quadratic forms (Gaussian reduction) tells us that there is a matrix~$A\in\operatorname{SL}_2(\mathbb{Z})$ such that \[ A Q \,{}^t\!A = \begin{pmatrix} a & b \\ b & c \\ \end{pmatrix} \quad\text{with}\quad 0\le |2b| \le a \le c. 
\] We note that \[ A Q \,{}^t\!A = \begin{pmatrix} A & 0\\ 0 & {}^t\!A^{-1}\\ \end{pmatrix} \star Q, \quad\text{where}\quad \begin{pmatrix} A & 0\\ 0 & {}^t\!A^{-1}\\ \end{pmatrix} \in \Symp_4(\mathbb{Z}). \] This completes the proof if~$b\ge0$. And if~$b<0$, then we can change the sign of~$b$ using the following element of~$\Symp_4(\mathbb{Z})$: \[ \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1\\ \end{pmatrix} \star \begin{pmatrix} a&b\\ b&c\\ \end{pmatrix} = \begin{pmatrix} a&-b\\ -b&c\\ \end{pmatrix}. \] This completes the proof of Lemma~\ref{lemma:quadformwbpositive}. \end{proof} \begin{definition} The \emph{theta function} associated to the half-period matrix~${\boldsymbol q}$ is the function \begin{gather*} \ThetaFunction(\,\cdot\,,{\boldsymbol q}) : \mathbb{G}_m^g(K_v) \longrightarrow K_v,\\ \ThetaFunction({\boldsymbol u},{\boldsymbol q}) = \sum_{{\boldsymbol m}\in\mathbb{Z}^g} ({}^t{\boldsymbol m}\star{\boldsymbol q}\star{\boldsymbol m})({}^t{\boldsymbol m}\star{\boldsymbol u}). \end{gather*} \end{definition} Written out in full, we have \[ \ThetaFunction({\boldsymbol u},{\boldsymbol q}) = \sum_{{\boldsymbol m}\in\mathbb{Z}^g} \prod_{i,j=1}^g q_{ij}^{m_im_j} \cdot \prod_{k=1}^g u_k^{m_k}. \] The positive-definiteness of~$Q=v({\boldsymbol q})$ ensures that the sum converges for all~${\boldsymbol u}\in\mathbb{G}_m^g({\bar K}_v)$. We next compute the transformation formula for~$\ThetaFunction$ when~${\boldsymbol u}$ is translated by an element of~$\Omega$. We observe that an element of~$\Omega$ is a product of powers of the columns of the matrix whose entries are~$q_{ij}^2$, so it is an element of~$\mathbb{G}_m^g(K_v)$ of the form~${\boldsymbol q}\star2{\boldsymbol n}$ with~${\boldsymbol n}\in\mathbb{Z}^g$. \begin{proposition} Let~${\boldsymbol n}\in\mathbb{Z}^g$ and~${\boldsymbol u}\in\mathbb{G}_m^g(K_v)$. 
\begin{parts} \vspace{2\jot} \Part{(a)} $\displaystyle \ThetaFunction\bigl({\boldsymbol u}\cdot ({\boldsymbol q}\star2{\boldsymbol n}),{\boldsymbol q}\bigr) = ({}^t{\boldsymbol n}\star{\boldsymbol q}\star{\boldsymbol n})^{-1} ({}^t{\boldsymbol n}\star{\boldsymbol u})^{-1} \ThetaFunction({\boldsymbol u},{\boldsymbol q}). $ \vspace{2\jot} \Part{(b)} $\displaystyle v\Bigl(\ThetaFunction\bigl({\boldsymbol u}\cdot ({\boldsymbol q}\star2{\boldsymbol n}),{\boldsymbol q}\bigr)\Bigr) = v\Bigl(\ThetaFunction({\boldsymbol u},{\boldsymbol q})\Bigr) - {}^t{\boldsymbol n}{Q}{\boldsymbol n}-{}^t{\boldsymbol n}{v({\boldsymbol u})}. $ \end{parts} \end{proposition} \begin{proof} We give the elementary verification in Appendix~\ref{section:verifyformulas}; see Proposition~\ref{proposition:qprop1}. \end{proof} \begin{proposition} The function \begin{gather*} \Lambda(\,\cdot\,,{\boldsymbol q}) : \mathbb{G}_m^g(K_v) \longrightarrow \mathbb{R},\\ \Lambda({\boldsymbol u},{\boldsymbol q}) = v\bigl(\ThetaFunction({\boldsymbol u},{\boldsymbol q})\bigr) + {\dfrac{1}{4}} {}^tv({\boldsymbol u}) Q^{-1} v({\boldsymbol u}), \end{gather*} is~$\Omega$-invariant, and hence descends to a function \[ \Lambda(\,\cdot\,,{\boldsymbol q}) : A(K_v)\cong \mathbb{G}_m^g(K_v)/\Omega \longrightarrow \mathbb{R}. \] \end{proposition} \begin{proof} We give the elementary verification in Appendix~\ref{section:verifyformulas}; see Proposition~\ref{proposition:qprop2}. \end{proof} \begin{theorem} \label{theorem:lDlDIlDBkk} Let~$(A,\ThetaDivisor)/K_v$ be a principally polarized abelian surface having totally split multiplicative reduction, where~$\ThetaDivisor\in\operatorname{Div}_K(A)$ is an effective symmetric principal polarization, and let~${\boldsymbol q}\subset\mathbb{G}_m^g(K_v)$ be an associated multiplicative period matrix. 
\begin{parts} \Part{(a)} There is a $2$-torsion point~$T_0\in{A[2]}$ so that \[ \ThetaDivisor=\ThetaDivisorT+T_0 \quad\text{with}\quad \ThetaDivisorT = \div\bigl(\ThetaFunction(\,\cdot\,,{\boldsymbol q})\bigr). \] \Part{(b)} Let \[ {\mathcal P} : \mathbb{G}_m^g({\bar K}_v) \longrightarrow A({\bar K}_v) \] denote the $v$-adic analytic uniformization of~$A$. Then there is a $\kappa'_v\in\mathbb{Q}$ so that for all ${\boldsymbol u}\in\mathbb{G}_m^g(K_v)$, \[ \hat\lambda_{\ThetaDivisorT,v}\bigl({\mathcal P}({\boldsymbol u})\bigr) = v\bigl(\ThetaFunction({\boldsymbol u},{\boldsymbol q})\bigr) + {\dfrac{1}{4}} {}^tv({\boldsymbol u}) Q^{-1} v({\boldsymbol u}) - \kappa'_v. \] \Part{(c)} Write \begin{align*} \hat\lambda_{\ThetaDivisorT,v}(P) &= \left\langle\overline{\ThetaDivisorT}\cdot\overline{P}\right\rangle_{{\mathcal A},v} + \mathbb{B}_{\ThetaDivisorT,v}\bigl(j_v(P)\bigr) - \kappa_v \\ &= \hat\lambda_{\ThetaDivisorT,v}^\Int(P) + \hat\lambda_{\ThetaDivisorT,v}^{\textup{Bern}}(P) \end{align*} as in~\eqref{eqn:LDvPivDPBBDvP} and~\eqref{eqn:lhatexacteqlhatintpluslhatbern}. Then \begin{align} \hat\lambda_{\ThetaDivisor,v}^\Int(P+T_0) &= \max_{\substack{{\boldsymbol u}\in\mathbb{G}_m^g({\bar K}_v)\\ {\mathcal P}({\boldsymbol u})=P\\}} v\bigl( \ThetaFunction({\boldsymbol u},{\boldsymbol q}) \bigr) - \kappa_v^\Int, \notag\\ \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(P+T_0) &=\min_{\substack{{\boldsymbol u}\in\mathbb{G}_m^g({\bar K}_v)\\ {\mathcal P}({\boldsymbol u})=P\\}} {\dfrac{1}{4}} {}^tv({\boldsymbol u}) Q^{-1} v({\boldsymbol u}) - \kappa_v^{\textup{Bern}}. \label{eqn:lBThvPmin} \end{align} \end{parts} \end{theorem} In the next section we are going to give an explicit formula for the Fourier series of the Bernoulli local height~$\hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}$ when~$\dim(A)=2$. For notational reasons, it is easier to renormalize the lattice and work with the standard torus~$\mathbb{R}^g/\mathbb{Z}^g$. 
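As an aside, we note that the normalization in Lemma~\ref{lemma:quadformwbpositive} is entirely constructive: it is classical Gaussian reduction of positive definite binary quadratic forms. A minimal Python sketch (the function name \texttt{normalize\_form} is ours), which preserves the discriminant $D=ac-b^2$ at every step:

```python
from fractions import Fraction

def normalize_form(a, b, c):
    """Gaussian reduction of a positive definite binary quadratic form
    a*x^2 + 2*b*x*y + c*y^2; returns (a, b, c) with 0 <= 2b <= a <= c.
    The discriminant D = a*c - b^2 is unchanged at every step."""
    assert a > 0 and c > 0 and a * c > b * b
    while True:
        # x -> x - t*y with t the nearest integer to b/a forces |2b| <= a.
        t = round(Fraction(b, a))
        b, c = b - a * t, a * t * t - 2 * b * t + c
        if a <= c:
            break
        a, c = c, a          # swap x and y; b is unchanged
    return (a, abs(b), c)    # a sign flip (x,y) -> (-x,y) makes b >= 0

assert normalize_form(10, 7, 6) == (2, 1, 6)   # D = 11 for both forms
```

Each step is one of the $\Symp_4(\mathbb{Z})$-moves used in the proof of the lemma, so the output form is $\Symp_4(\mathbb{Z})$-equivalent to the input.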
Roughly speaking, we want to write ${\boldsymbol u}\in\mathbb{G}_m^g({\bar K}_v)$ as a (multiplicative) linear combination of the lattice vectors. But since we only require the valuations, we define a function \begin{equation} \label{eqn:xuQinvvu} {\boldsymbol x} : \mathbb{G}_m^g({\bar K}_v) \longrightarrow \mathbb{R}^g, \quad {\boldsymbol x}({\boldsymbol u}) = Q^{-1}v({\boldsymbol u}). \end{equation} For~$P\in{A({\bar K}_v)}$, we write~$P={\mathcal P}({\boldsymbol u}_P)$ for some choice of~${\boldsymbol u}_P\in\mathbb{G}_m^g({\bar K}_v)$, and then we set \[ {\boldsymbol x}_P = {\boldsymbol x}({\boldsymbol u}_P) \in\mathbb{R}^g. \] We note that~${\boldsymbol x}_P$ is well defined in~$\mathbb{R}^g/\mathbb{Z}^g$. We associate to the period valuation matrix~$Q$ the ``periodic quadratic form'' \begin{equation} \label{eqn:LQRgR} L_Q : \mathbb{R}^g \longrightarrow \mathbb{R},\quad L_Q({\boldsymbol x}_0) = \min_{\substack{{\boldsymbol x}\in\mathbb{R}^g\\ {\boldsymbol x}\equiv{\boldsymbol x}_0\pmodintext{\mathbb{Z}^g}\\}} {}^t{\boldsymbol x} Q {\boldsymbol x}, \end{equation} and we write its associated Fourier series as \[ L_Q({\boldsymbol x}) = \sum_{{\boldsymbol n}\in\mathbb{Z}^g} \FC_Q({\boldsymbol n})e^{2\pi i {}^t{\boldsymbol n}{\boldsymbol x}}. 
\] Then \begin{align} \label{equaton:LQhat0kappavB} {\dfrac{1}{4}} \FC_Q({\boldsymbol{0}}) &= \int_{\mathbb{R}^g/\mathbb{Z}^g} {\dfrac{1}{4}} L_Q({\boldsymbol x})\,d{\boldsymbol x} \notag\\ &= \lim_{\substack{N\to\infty\\\text{$N$ even}\\}} N^{-g} \sum_{{\boldsymbol t}\in N^{-1}\mathbb{Z}^g/\mathbb{Z}^g} {\dfrac{1}{4}} L_Q({\boldsymbol t}) \notag\\ &= \lim_{\substack{N\to\infty\\\text{$N$ even}\\}} N^{-2g} \sum_{T\in A[N]} \Bigl( \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(T) + \kappa_v^{\textup{Bern}} \Bigr) \quad\text{from \eqref{eqn:lBThvPmin},}\notag\\ &= \kappa_v^{\textup{Bern}} \quad\text{from \eqref{eqn:normalizationIntBern}.} \end{align} The Bernoulli local height is then given by the formula \begin{align*} \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(P+T_0) &= \min_{{\mathcal P}({\boldsymbol u})=P} {\dfrac{1}{4}} {}^tv({\boldsymbol u}) Q^{-1} v({\boldsymbol u}) - \kappa_v^{\textup{Bern}} &&\text{from \eqref{eqn:lBThvPmin},} \\ &= \min_{{\mathcal P}({\boldsymbol u})=P} {\dfrac{1}{4}} {}^t{\boldsymbol x}({\boldsymbol u}) Q {\boldsymbol x}({\boldsymbol u}) - \kappa_v^{\textup{Bern}} &&\text{from \eqref{eqn:xuQinvvu} and ${}^tQ=Q$,} \\ &= {\dfrac{1}{4}} L_Q({\boldsymbol x}_P) - \kappa_v^{\textup{Bern}} &&\text{from \eqref{eqn:LQRgR},} \\ &= {\dfrac{1}{4}} L_Q({\boldsymbol x}_P) - {\dfrac{1}{4}} \FC_Q({\boldsymbol{0}}) &&\text{from \eqref{equaton:LQhat0kappavB}.} \end{align*} We record this result as a proposition. 
\begin{proposition} \label{proposition:lhatDvBLQxminusLhatQ} With notation as in this and the previous section, \[ \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(P+T_0) = {\dfrac{1}{4}} L_Q({\boldsymbol x}_P) - {\dfrac{1}{4}} \FC_Q({\boldsymbol{0}}) \quad\text{for all $P\in A({\bar K}_v)$.} \] \end{proposition} \begin{figure}[p] \begin{picture}(300,300)(-150,-175) \thicklines \color{orange} \polygon*(0,100)(100,100)(74.58,66.10) \color{black} \polygon(0,100)(100,100)(74.58,66.10) \color{red} \polygon*(100,0)(100,100)(74.58,66.10) \color{black} \polygon(100,0)(100,100)(74.58,66.10) \color{blue} \polygon*(0,-100)(-100,-100)(-66.10,-74.58) \color{black} \polygon(0,-100)(-100,-100)(-66.10,-74.58) \color{green} \polygon*(-100,0)(-100,-100)(-66.10,-74.58) \color{black} \polygon(-100,0)(-100,-100)(-66.10,-74.58) \color{yellow} \polygon*(-100,0)(-100,100)(0,100)(74.58,66.1)(100,0)(100,-100)(0,-100)(-66.10,-74.58) \color{black} \thinlines \put(0,0){\vector(1,0){120}} \put(0,0){\vector(-1,0){120}} \put(0,0){\vector(0,1){120}} \put(0,0){\vector(0,-1){120}} \thicklines \put(-100,100){\line(1,0){200}} \put(-100,-100){\line(1,0){200}} \put(100,-100){\line(0,1){200}} \put(-100,-100){\line(0,1){200}} \put(70,82){\makebox(0,0)[b]{\textbf{I}}} \put(90,68){\makebox(0,0)[t]{\textbf{II}}} \put(-65,-82){\makebox(0,0)[t]{\color{white}\textbf{III}}} \put(-85,-70){\makebox(0,0)[t]{\textbf{IV}}} \put(72,64){\makebox(0,0)[tr]{$\boldsymbol Q_{12}$}} \put(74.58,66.10){\circle*{5}} \put(-64,-72){\makebox(0,0)[bl]{$\boldsymbol Q_{34}$}} \put(-66.10,-74.58){\circle*{5}} \put(150,-110){\makebox(0,0)[rt]{$\begin{aligned} Q_{12} &= \left( \tfrac{c(a-b)}{2(ac-b^2)}, \tfrac{a(c-b)}{2(ac-b^2)} \right) \\ Q_{34} &= \left( -\tfrac{a(c-b)}{2(ac-b^2)}, -\tfrac{c(a-b)}{2(ac-b^2)} \right) \\ \end{aligned}$}} \end{picture} \begin{picture}(300,250)(-150,-150) \thicklines \color{orange} \polygon*(0,-100)(100,-100)(74.58,-133.90) \color{black} \polygon(0,-100)(100,-100)(74.58,-133.90) \color{red} 
\polygon*(-100,0)(-100,100)(-125.42,66.10) \color{black} \polygon(-100,0)(-100,100)(-125.42,66.10) \color{blue} \polygon*(0,100)(-100,100)(-66.10,133.90) \color{black} \polygon(0,100)(-100,100)(-66.10,133.90) \color{green} \polygon*(100,0)(100,-100)(125.42,-66.10) \color{black} \polygon(100,0)(100,-100)(125.42,-66.10) \color{yellow} \polygon*(-100,0)(-100,100)(0,100)(74.58,66.1)(100,0)(100,-100)(0,-100)(-66.10,-74.58) \color{black} \thinlines \put(0,0){\vector(1,0){120}} \put(0,0){\vector(-1,0){120}} \put(0,0){\vector(0,1){120}} \put(0,0){\vector(0,-1){120}} \thicklines \put(-100,100){\line(1,0){200}} \put(-100,-100){\line(1,0){200}} \put(100,-100){\line(0,1){200}} \put(-100,-100){\line(0,1){200}} \end{picture} \caption{The hexagon where $F=L$, and the associated decomposition of the unit square as an octagon and four triangles} \label{figure:hexagonandsquare} \end{figure} \section{A Hexagonal Fourier Calculation} \label{section:hexfouriercalcquadform} \begin{definition} \label{definition:glossary} Figure~\ref{figure:notationFLabc} gives a list of notation and conventions that will remain fixed throughout the remainder of this article. \end{definition} \begin{figure}[ht] \begin{center} \framebox{ $ \begin{aligned} {\boldsymbol e}(x) &= e^{2\pi i x},\quad \operatorname{\textbf{Sin}}(x)=\sin(2\pi x), \quad\text{and}\quad \operatorname{\textbf{Cos}}(x)=\cos(2\pi x).\\ a,b,c &\in\mathbb{R} \quad \text{with}\quad a,c>0 \quad\text{and}\quad D = ac-b^2 > 0. \\ \a&=a-b \quad\text{and}\quad \gamma=c-b. \\ F(x,y) &= ax^2 + 2bxy + cy^2\\ &= \a x^2 + b(x+y)^2 + \gamma y^2.\\ L(x,y) &= \min_{m,n\in\mathbb{Z}} F(x+m,y+n),\\ {\boldsymbol F}_0 &:= {\boldsymbol F}_0(m,n) = c\a m+a\gamma n, \\ {\boldsymbol F}_1 &:= {\boldsymbol F}_1(m,n) = c m - b n, \\ {\boldsymbol F}_2 &:= {\boldsymbol F}_2(m,n) = a n - b m, \\ {\boldsymbol F}_3 &:= {\boldsymbol F}_3(m,n) = \gamma m + \a n = {\boldsymbol F}_1 + {\boldsymbol F}_2.\\ |L| &= |a,b,c| = \max\bigl\{|a|,|b|,|c|\bigr\}. 
\end{aligned} $ } \end{center} \par\noindent \parbox[][][l]{.95\hsize}{ If we need to specify~$a,b,c$ in the notation, we write \begin{align*} F(x,y) &= F_{a,b,c}(x,y) = F(a,b,c;x,y),\\ L(x,y) &= L_{a,b,c}(x,y) = L(a,b,c;x,y), \end{align*} and similarly for~${\boldsymbol F}_0,\ldots,{\boldsymbol F}_3$. We say that~$F$,~$L$, and~$(a,b,c)$ are \emph{normalized} if they satisfy (cf.\ Lemma~\ref{lemma:quadformwbpositive}) \[ \framebox{$c\ge a \ge 2b \ge 0.$} \] \par If we are working over the $v$-adic completion of a field, all of the associated quantities will have a subscript~$v$, e.g.,~$(a_v,b_v,c_v)$ and $L_v$ and~${\boldsymbol F}_{i,v}$. } \caption{Notation and Conventions and Formulas} \label{figure:notationFLabc} \end{figure} We note that the following formal identities are true in the polynomial ring $\mathbb{Z}[a,b,c,m,n]$: \begin{equation} \label{eqn:idsforF0toF3} \left. \begin{array}{c} \begin{aligned} {\boldsymbol F}_0 - \a{\boldsymbol F}_1 & = Dn,\\ {\boldsymbol F}_0 - \gamma{\boldsymbol F}_2 & = Dm,\\ {\boldsymbol F}_0 + b{\boldsymbol F}_3 & = D(m+n),\\ \end{aligned} \\[7\jot] \begin{aligned} a {\boldsymbol F}_1 + b {\boldsymbol F}_2 & = Dm, \hspace{1em} & b {\boldsymbol F}_1 + c {\boldsymbol F}_2 & = Dn,\\ \a {\boldsymbol F}_1 + b {\boldsymbol F}_3 & = Dm, & -\gamma {\boldsymbol F}_1 + c {\boldsymbol F}_3 & = Dn,\\ -\a {\boldsymbol F}_2 + a {\boldsymbol F}_3 & = Dm, & \gamma {\boldsymbol F}_2 + b {\boldsymbol F}_3 & = Dn.\\ \end{aligned} \\ \end{array} \right\} \end{equation} Our next result gives the Fourier expansion of~$L(x,y)$, which is the~$\mathbb{Z}^2$-periodic version of the quadratic form~$F$. 
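These identities are elementary but are used constantly in what follows, so a mechanical verification is cheap insurance. The following Python sketch (an editorial sanity check, not part of the paper) confirms all nine identities of~\eqref{eqn:idsforF0toF3} on random integer data:

```python
# Editorial sanity check of the nine identities in (eqn:idsforF0toF3);
# the names F0,...,F3, alpha, gamma, D mirror Figure (notationFLabc).
import random

random.seed(0)
for _ in range(1000):
    a, b, c, m, n = (random.randint(-20, 20) for _ in range(5))
    alpha, gamma = a - b, c - b
    D = a * c - b * b
    F0 = c * alpha * m + a * gamma * n
    F1 = c * m - b * n
    F2 = a * n - b * m
    F3 = gamma * m + alpha * n
    assert F3 == F1 + F2
    assert F0 - alpha * F1 == D * n
    assert F0 - gamma * F2 == D * m
    assert F0 + b * F3 == D * (m + n)
    assert a * F1 + b * F2 == D * m and b * F1 + c * F2 == D * n
    assert alpha * F1 + b * F3 == D * m and -gamma * F1 + c * F3 == D * n
    assert -alpha * F2 + a * F3 == D * m and gamma * F2 + b * F3 == D * n
print("all nine identities hold")
```

Since the identities are formal (they hold in $\mathbb{Z}[a,b,c,m,n]$), checking them on random integers exercises every term.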
\begin{theorem} \label{theorem:fourierexpansionofL} With notation as in Figure~$\ref{figure:notationFLabc}$, and in particular with~$L:(\mathbb{R}/\mathbb{Z})^2\to\mathbb{R}$ being the periodic function \[ L(x,y) = \min_{\substack{\xi\in x+\mathbb{Z}\\ \eta\in y+\mathbb{Z}\\}} a\xi^2 + 2b\xi\eta+c\eta^2, \] the Fourier expansion \[ L(x,y) = \sum_{m,n\in\mathbb{Z}} \FC(m,n){\boldsymbol e}(mx+ny) \] of~$L(x,y)$ has Fourier coefficients given by the following formulas:\footnote{We note that since~$ac\ne0$, the assumptions~${\boldsymbol F}_1=0$ and~$(m,n)\ne(0,0)$ imply that~$n\ne0$, and similarly~${\boldsymbol F}_2=0$ and~$(m,n)\ne(0,0)$ imply that~$m\ne0$. Further, the fact that~$D=ac-b^2\ne0$ tells us that at least one of~$\a=a-b$ and~$\gamma=c-b$ is non-zero, so~${\boldsymbol F}_3=0$ and~$(m,n)\ne(0,0)$ implies that at least one of~$m$ and~$n$ is non-zero. And if we normalize~$a,b,c$, then~$\a\gamma\ne0$, so ${\boldsymbol F}_3=0$ and~$(m,n)\ne(0,0)$ implies that both~$m$ and~$n$ are non-zero.} \[ \FC(m,n) = \begin{cases} \dfrac{a^2c+ac^2-2ab^2-2b^2c+2b^3}{12D} \quad \text{if $(m,n)=(0,0)$,}\hidewidth \\[3\jot] \dfrac{(-1)^n \a c^2}{2 \pi^2 D n^2} &\text{if ${\boldsymbol F}_1=cm-bn=0$, $(m,n)\ne(0,0)$,} \\[3\jot] \dfrac{(-1)^m \gamma a^2}{2 \pi^2 D m^2} &\text{if ${\boldsymbol F}_2=an-bm=0$, $(m,n)\ne(0,0)$,} \\[3\jot] \dfrac{(-1)^{m+n+1} \a \gamma b}{2 \pi^2 D m n} &\text{if ${\boldsymbol F}_3=\gamma m+\a n=0$, $(m,n)\ne(0,0)$,} \\[3\jot] \dfrac{D^2 \displaystyle \operatorname{\textbf{Sin}} \left(\frac{ c \a m + a \gamma n }{2D}\right)} {2 \pi ^3 (c m - b n)(a n - b m) ( \gamma m + \a n ) } \quad\text{otherwise.} \hidewidth\\ \end{cases} \] \end{theorem} \begin{proof} The region \[ {\mathcal H} = \bigl\{ (x,y)\in\mathbb{R}^2 : F(x,y)=L(x,y) \bigr\} \] where~$F$ and~$L$ are equal is a hexagon, as shown in the bottom illustration in Figure~\ref{figure:hexagonandsquare}.
The intersection of~${\mathcal H}$ with the unit square \[ {\mathcal S} = \bigl\{ (x,y)\in\mathbb{R}^2 : |x|\le\tfrac12,\;|y|\le\tfrac12 \bigr\} \] is the central octagon in both illustrations in Figure~\ref{figure:hexagonandsquare}. The set difference~${\mathcal S}\setminus{\mathcal H}$ consists of four triangles, which are shifted versions of the set difference~${\mathcal H}\setminus{\mathcal S}$, again as shown in Figure~\ref{figure:hexagonandsquare}. We label the four triangular regions as follows: \[ \begin{array}{l@{\quad}r@{}l} \hline \color{orange} \text{Region I} & L(x,y)&{}=F(x,y-1) \\ \hline \color{red} \text{Region II} & L(x,y)&{}=F(x-1,y) \\ \hline \color{blue} \text{Region III} & L(x,y)&{}=F(x,y+1) \\ \hline \color{green} \text{Region IV} & L(x,y)&{}=F(x+1,y) \\ \hline \end{array} \] We start with some observations that we use in the computation of the Fourier coefficients of~$L$. The functions~$F$ and~$L$ have the following symmetries: \begin{align*} F_{a,b,c}( x,y) &= F_{a,b,c}( -x,-y) = F_{c,b,a}( y,x) , \\ L_{a,b,c}( x,y) &= L_{a,b,c}( -x,-y) = L_{c,b,a}( y,x) . \end{align*} The sign change symmetry identifies Regions~I and~III, leading to the equality \begin{multline*} \DI_{\textup{III}} \bigl\{ F_{a,b,c}(x,y+1) - F_{a,b,c}(x,y) \bigr\}{\boldsymbol e}(mx+ny) \\* = \DI_{\textup{I}} \bigl\{ F_{a,b,c}(x,y-1) - F_{a,b,c}(x,y) \bigr\}{\boldsymbol e}(-mx-ny), \end{multline*} and similarly for Regions~II and~IV. The reflection symmetry together with the parameter swap $a\leftrightarrow{c}$ identifies Regions~I and~II, leading to the equality \begin{multline*} \DI_{\textup{II}} \bigl\{ F_{a,b,c}(x-1,y) - F_{a,b,c}(x,y) \bigr\}{\boldsymbol e}(mx+ny) \\* = \DI_{\textup{I}} \bigl\{ F_{c,b,a}(x,y-1) - F_{c,b,a}(x,y) \bigr\}{\boldsymbol e}(nx+my), \end{multline*} and similarly for Regions~III and~IV.
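The region decomposition above asserts that on the closed unit square the minimum defining~$L$ is always attained at one of the five translates $F(x,y)$, $F(x,y\mp1)$, $F(x\mp1,y)$. The following Python sketch (an editorial plausibility check, not part of the paper; the triple $(a,b,c)=(2,1,4)$ is an arbitrary normalized example) tests this claim at random sample points:

```python
# Editorial check: for a normalized triple, the minimum defining L on the
# closed unit square is attained at one of the five translates used in
# Regions I-IV (or at F(x,y) itself on the central octagon).
import random

random.seed(2)
a, b, c = 2, 1, 4                     # arbitrary normalized example
F = lambda x, y: a*x*x + 2*b*x*y + c*y*y

for _ in range(2000):
    x = random.uniform(-0.5, 0.5)
    y = random.uniform(-0.5, 0.5)
    L = min(F(x + i, y + j) for i in range(-3, 4) for j in range(-3, 4))
    five = (F(x, y), F(x, y - 1), F(x, y + 1), F(x - 1, y), F(x + 1, y))
    assert min(abs(L - v) for v in five) < 1e-12
```

The shift range $\{-3,\dots,3\}$ is more than enough here, since farther translates already exceed the maximum of $F$ on the square.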
\par Using these observations, we find that \begin{align*} \FC & (m,n) = \DI_{\mathcal S} L(x,y){\boldsymbol e}(mx+ny) \\ &= \DI_{\mathcal S} F(x,y){\boldsymbol e}(mx+ny) \\ &\qquad {}+ \left( \DI_{\textup{I}} + \DI_{\textup{II}} + \DI_{\textup{III}} + \DI_{\textup{IV}} \right) \bigl\{ L(x,y)-F(x,y) \bigr\} {\boldsymbol e}(mx+ny) \\ &= \DI_{\mathcal S} F(x,y){\boldsymbol e}(mx+ny) \\ & \qquad {}+ \DI_{\textup{I}} \bigl\{ F(x,y-1) - F(x,y) \bigr\}{\boldsymbol e}(mx+ny) \\ & \qquad {}+ \DI_{\textup{II}} \bigl\{ F(x-1,y) - F(x,y) \bigr\}{\boldsymbol e}(mx+ny) \\ & \qquad {}+ \DI_{\textup{III}} \bigl\{ F(x,y+1) - F(x,y) \bigr\}{\boldsymbol e}(mx+ny) \\ & \qquad {}+ \DI_{\textup{IV}} \bigl\{ F(x+1,y) - F(x,y) \bigr\}{\boldsymbol e}(mx+ny) \\ &= \DI_{\mathcal S} F(x,y)\operatorname{\textbf{Cos}}(mx+ny) \\ & \qquad {}+ 2\DI_{\textup{I}} \bigl\{ F(x,y-1) - F(x,y) \bigr\}\operatorname{\textbf{Cos}}(mx+ny) \\ & \qquad {}+ 2\DI_{\textup{II}} \bigl\{ F(x-1,y) - F(x,y) \bigr\}\operatorname{\textbf{Cos}}(mx+ny) \\ &= \int_{-\frac12}^{\frac12} \int_{-\frac12}^{\frac12} (ax^2+2bxy+cy^2) \operatorname{\textbf{Cos}}(mx+ny)\, dx\, dy \\ & \qquad {}+ 2 \int_{\tfrac{a(c-b)}{2(ac-b^2)}}^{\tfrac12} \int_{\tfrac{c}{b}(\tfrac12-y)}^{\tfrac12-\tfrac{c-b}{a-b}(\tfrac12-y)} (c - 2 b x - 2 c y)\operatorname{\textbf{Cos}}(mx+ny) \, dx \, dy \\ & \qquad {}+ 2 \int_{\tfrac{c(a-b)}{2(ac-b^2)}}^{\tfrac12} \int_{\tfrac{a}{b}(\tfrac12-x)}^{\tfrac12-\tfrac{a-b}{c-b}(\tfrac12-x)} (a - 2 a x - 2 b y)\operatorname{\textbf{Cos}}(mx+ny) \, dy \, dx. \end{align*} It is now an easy task\footnote{Easy using a computer algebra system such as Mathematica, otherwise it is a feasible, but tedious, task.} to compute these integrals. We start with the case that~$m$ and~$n$ are non-zero integers, and we assume for the moment that \[ {\boldsymbol F}_1(m,n){\boldsymbol F}_2(m,n){\boldsymbol F}_3(m,n)\ne 0. 
\] Then the integral over Regions~I and~III is given explicitly by \begin{align*} &\left( \DI_{\textup{I}} + \DI_{\textup{III}} \right) \bigl\{ L(x,y)-F(x,y) \bigr\} {\boldsymbol e}(mx+ny) \\ &= 2 \int_{\tfrac{a(c-b)}{2(ac-b^2)}}^{\tfrac12} \int_{\tfrac{c}{b}(\tfrac12-y)}^{\tfrac12-\tfrac{c-b}{a-b}(\tfrac12-y)} \hspace*{-1.0em} (c - 2 b x - 2 c y)\cdot \operatorname{\textbf{Cos}}(mx+ny) \, dx \, dy \\ &= \frac{b (a-b)}{2 \pi ^2 m ((c - b) m + (a - b) n)} (-1)^{m+n} \\ &\qquad{} + \frac{1}{2 \pi ^3 (c m - b n)} \cdot \left( \frac{a c - b^2}{(c - b) m + (a - b) n} \right)^2 \cdot \\ &\hspace{10em} {} \operatorname{\textbf{Sin}} \left(\frac{ c (a - b) m + a (c - b) n }{2(a c - b^2)}\right) \\ &= \frac{(-1)^{m+n} b \a}{2 \pi ^2 m {\boldsymbol F}_3} + \frac{D^2}{2 \pi ^3 {\boldsymbol F}_1 {\boldsymbol F}_3^2} \operatorname{\textbf{Sin}} \left(\frac{{\boldsymbol F}_0 }{2 D}\right). \end{align*} Further, as noted earlier, the integral over Regions~II and~IV is the same with the swaps~$a\leftrightarrow{c}$ and~$m\leftrightarrow{n}$.\footnote{We note that these swaps leave~${\boldsymbol F}_0$ and~${\boldsymbol F}_3$ invariant, while swapping $\a\leftrightarrow\gamma$ and ${\boldsymbol F}_1\leftrightarrow{\boldsymbol F}_2$.} Hence \begin{align*} \left( \DI_{\textup{I}} \right. & \left.
+ \DI_{\textup{II}} + \DI_{\textup{III}} + \DI_{\textup{IV}} \right) \bigl\{ L(x,y)-F(x,y) \bigr\} {\boldsymbol e}(mx+ny) \,dx\,dy\\ &= \left\{\frac{(-1)^{m+n} b \a}{2 \pi ^2 m {\boldsymbol F}_3} + \frac{D^2}{2 \pi ^3 {\boldsymbol F}_1 {\boldsymbol F}_3^2} \operatorname{\textbf{Sin}} \left(\frac{{\boldsymbol F}_0 }{2 D}\right) \right\} \\ &\omit\hfill$\displaystyle + \left\{\frac{(-1)^{m+n} b \gamma}{2 \pi ^2 n {\boldsymbol F}_3} + \frac{D^2}{2 \pi ^3 {\boldsymbol F}_2 {\boldsymbol F}_3^2} \operatorname{\textbf{Sin}} \left(\frac{{\boldsymbol F}_0 }{2 D}\right) \right\}$\\ &= \frac{(-1)^{m+n}b}{2\pi^2 m n} + \frac{D^2{\displaystyle\operatorname{\textbf{Sin}} \left({ {\boldsymbol F}_0 }/{2D}\right)}} {2 \pi^3 {\boldsymbol F}_1 {\boldsymbol F}_2 {\boldsymbol F}_3}. \end{align*} On the other hand, the integral of~$F$ over the square is simply \[ \DI_{{\mathcal S}} F(x,y)\operatorname{\textbf{Cos}}(mx+ny) = \frac{(-1)^{m+n+1}b}{2\pi^2 m n}, \] which cancels the first term in the sum of the four-triangle integrals.\footnote{Presumably this cancellation is not a coincidence!} Hence \begin{equation} \label{eqn:hatLmngen} \FC(m,n) = \frac{D^2{\displaystyle\operatorname{\textbf{Sin}} \left({ {\boldsymbol F}_0 }/{2D}\right)}} {2 \pi^3 {\boldsymbol F}_1 {\boldsymbol F}_2 {\boldsymbol F}_3}. \end{equation} One can check by a direct computation that the formula~\eqref{eqn:hatLmngen} for $\FC(m,n)$ is valid if one, but not both, of~$m$ and~$n$ is~$0$. (This despite the fact that~$m$ and~$n$ seem to appear in the denominators of some of the intermediate calculations.) We note that we can use~\eqref{eqn:idsforF0toF3} to rewrite the formula for~$\FC(m,n)$ so that the argument of the sine function is instead related to one of the other~${\boldsymbol F}_i$.
Thus \begin{align*} \operatorname{\textbf{Sin}}\left(\frac{{\boldsymbol F}_0}{2D}\right) &= \operatorname{\textbf{Sin}}\left(\frac{\a{\boldsymbol F}_1+Dn}{2D}\right) = (-1)^n \operatorname{\textbf{Sin}}\left(\frac{\a{\boldsymbol F}_1}{2D}\right),\\ \operatorname{\textbf{Sin}}\left(\frac{{\boldsymbol F}_0}{2D}\right) &= \operatorname{\textbf{Sin}}\left(\frac{\gamma{\boldsymbol F}_2+Dm}{2D}\right) = (-1)^m \operatorname{\textbf{Sin}}\left(\frac{\gamma{\boldsymbol F}_2}{2D}\right),\\ \operatorname{\textbf{Sin}}\left(\frac{{\boldsymbol F}_0}{2D}\right) &= \operatorname{\textbf{Sin}}\left(\frac{-b{\boldsymbol F}_3+D(m+n)}{2D}\right) = (-1)^{m+n+1} \operatorname{\textbf{Sin}}\left(\frac{b{\boldsymbol F}_3}{2D}\right). \end{align*} Substituting these into~\eqref{eqn:hatLmngen} gives three additional formulas for~$\FC(m,n)$, \begin{align} \label{eqn:hatLmngen1} \FC(m,n) &= \frac{(-1)^nD^2}{2\pi^3} \frac{\operatorname{\textbf{Sin}}\bigl(\a{\boldsymbol F}_1/2D\bigr)}{{\boldsymbol F}_1{\boldsymbol F}_2{\boldsymbol F}_3} , \\ \label{eqn:hatLmngen2} \FC(m,n) &= \frac{(-1)^mD^2}{2\pi^3} \frac{\operatorname{\textbf{Sin}}\bigl(\gamma{\boldsymbol F}_2/2D\bigr)}{{\boldsymbol F}_1{\boldsymbol F}_2{\boldsymbol F}_3} , \\ \label{eqn:hatLmngen3} \FC(m,n) &= \frac{(-1)^{m+n+1}D^2}{2\pi^3} \frac{\operatorname{\textbf{Sin}}\bigl(b{\boldsymbol F}_3/2D\bigr)}{{\boldsymbol F}_1{\boldsymbol F}_2{\boldsymbol F}_3} . \end{align} \par Continuing with our assumption that~$(m,n)\ne(0,0)$, we consider the case that one of~${\boldsymbol F}_1,{\boldsymbol F}_2,{\boldsymbol F}_3$ vanishes. An important observation is that~\eqref{eqn:idsforF0toF3} and the fact that~$D\ne0$ tells us that at most one of these~${\boldsymbol F}_i(m,n)$ can vanish. \par We fix an integer pair~$(m,n)\ne(0,0)$ and take a sequence of values of~$(a,b,c)$ for which one of the~${\boldsymbol F}_i$ tends to~$0$.
The integrals that occur in the computation of~$\FC(a,b,c;m,n)$ are integrals of a continuous function~$L$ over a compact set, where~$L$ depends continuously on~$(a,b,c)$, so we can move the limit as~${\boldsymbol F}_i(a,b,c;m,n)\to0$ across the integral. \par Thus~\eqref{eqn:hatLmngen1} yields \begin{align} \label{eqn:FCasF1to0} \lim_{{\boldsymbol F}_1\to0} \FC(m,n) &= \lim_{{\boldsymbol F}_1\to0} \frac{(-1)^nD^2}{2\pi^3} \frac{\operatorname{\textbf{Sin}}\bigl(\a{\boldsymbol F}_1/2D\bigr)}{{\boldsymbol F}_1{\boldsymbol F}_2{\boldsymbol F}_3} \notag\\ &= \lim_{{\boldsymbol F}_1\to0} \frac{(-1)^nD^2}{2\pi^3{\boldsymbol F}_2{\boldsymbol F}_3}\cdot \frac{\sin\bigl(2\pi\a{\boldsymbol F}_1/2D\bigr)}{2\pi\a{\boldsymbol F}_1/2D}\cdot\frac{2\pi\a}{2D}\notag\\ &= \frac{(-1)^nD\a}{2\pi^2} \lim_{{\boldsymbol F}_1\to0} \frac{1}{{\boldsymbol F}_2{\boldsymbol F}_3} . \end{align} As ${\boldsymbol F}_1=cm-bn\to0$, we have (note that $c\ne0$) \begin{align*} \lim_{{\boldsymbol F}_1\to0} {\boldsymbol F}_2 &= \lim_{cm\to bn} an-bm = \lim_{cm\to bn} an-\frac{b}{c}\cdot cm = an-\frac{b}{c}\cdot bn = \frac{Dn}{c}, \\ \lim_{{\boldsymbol F}_1\to0} {\boldsymbol F}_3 &= \lim_{cm\to bn} \gamma m + \a n = \lim_{cm\to bn} \frac{\gamma}{c}\cdot cm + \a n = \frac{\gamma}{c}\cdot bn + \a n = \frac{Dn}{c}. \end{align*} Substituting these two limits into~\eqref{eqn:FCasF1to0} yields \[ \lim_{{\boldsymbol F}_1\to0} \FC(m,n) = \frac{(-1)^n\a c^2}{2\pi^2 D n^2} . \] Similar computations using~\eqref{eqn:hatLmngen2} and~\eqref{eqn:hatLmngen3} give the analogous formulas for the values of~$\FC(m,n)$ as~${\boldsymbol F}_2\to0$ and as~${\boldsymbol F}_3\to0$. We leave the details to the reader. \par It remains to compute~$\FC(0,0)$, for which the relevant integrals are easy and left as an exercise. This concludes the proof of Theorem~\ref{theorem:fourierexpansionofL}.
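As an independent numerical sanity check (an editorial addition, not part of the proof), one can approximate the Fourier coefficients by a midpoint Riemann sum and compare them with the closed-form values above, here for the normalized triple $(a,b,c)=(2,1,2)$:

```python
# Editorial numerical check of the Fourier coefficients of L for the
# normalized triple (a,b,c) = (2,1,2), where D = 3 and alpha = gamma = 1.
from math import cos, sin, pi, isclose

a, b, c = 2, 1, 2
D, al, ga = a*c - b*b, a - b, c - b

def L(x, y):
    # for |x|,|y| <= 1/2 the minimum is attained with shifts in {-1,0,1}
    return min(a*(x+i)**2 + 2*b*(x+i)*(y+j) + c*(y+j)**2
               for i in (-1, 0, 1) for j in (-1, 0, 1))

def FC_num(m, n, N=200):
    # midpoint Riemann sum for the integral of L(x,y)*cos(2pi(mx+ny)) over S
    pts = [-0.5 + (k + 0.5) / N for k in range(N)]
    return sum(L(x, y) * cos(2*pi*(m*x + n*y)) for x in pts for y in pts) / N**2

# the (0,0) coefficient: (a^2 c + a c^2 - 2ab^2 - 2b^2 c + 2b^3)/(12D) = 5/18
assert isclose(FC_num(0, 0), 5/18, abs_tol=1e-3)
# (m,n) = (1,2) has F1 = cm - bn = 0:  (-1)^n alpha c^2 / (2 pi^2 D n^2)
assert isclose(FC_num(1, 2), al*c*c/(2*pi**2*D*4), abs_tol=1e-3)
# (m,n) = (1,1) is generic:  D^2 Sin(F0/2D) / (2 pi^3 F1 F2 F3)
F0, F1, F2, F3 = c*al + a*ga, c - b, a - b, ga + al
assert isclose(FC_num(1, 1), D*D*sin(pi*F0/D)/(2*pi**3*F1*F2*F3), abs_tol=1e-3)
```

The midpoint rule converges quadratically here, so $N=200$ leaves a comfortable margin against the $10^{-3}$ tolerance.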
\end{proof} \section{Averaging (Periodic) Functions over (Torsion) Points} \label{section:avgperiodicfuncs} We introduce a convenient notation for the expected value (average) of a function over a set, and in particular over the $d$-torsion points of an abelian group. \begin{definition} \label{definition:Avg} Let~$S$ be a finite set, and let~$f:S\to\mathbb{R}$ be a real-valued function. We write \[ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{x\in S} f(x) = \frac{1}{\#S} \sum_{x\in S} f(x). \] Similarly, \[ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{x,y\in S\\ x\ne y\\}} f(x-y) = \frac{1}{\#S^2-\#S}\sum_{\substack{x,y\in S\\ x\ne y\\}}f(x-y). \] If~$S=A$ is an abelian group and~$d\ge1$, by a slight abuse of notation we write \[ (\operatorname{\hbox{\normalfont\calligra{Avg}}}_d f)(x) = \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{t\in A[d]}f(x+t) = \frac{1}{\#A[d]}\sum_{t\in A[d]} f(x+t) \] for the average of~$f$ at the $d$-torsion translates of~$x$, and we call~$\operatorname{\hbox{\normalfont\calligra{Avg}}}_df$ the~\emph{$d$-average of~$f$}. \end{definition} \begin{example} We illustrate Definition~\ref{definition:Avg} with three examples: \begin{parts} \Part{(1)} For a function~$L:(\mathbb{R}/\mathbb{Z})^2\to\mathbb{R}$ such as the one defined in Theorem~\ref{theorem:fourierexpansionofL}, we have \[ (\operatorname{\hbox{\normalfont\calligra{Avg}}}_d L)(x,y) = \frac{1}{d^2} \sum_{i=0}^{d-1} \sum_{j=0}^{d-1} L\left(x+\frac{i}{d},y+\frac{j}{d}\right). \] \Part{(2)} For an abelian variety~$A$ of dimension~$g$ and a function~$\lambda:A\to\mathbb{R}$, we have \[ (\operatorname{\hbox{\normalfont\calligra{Avg}}}_d\lambda)(P) = \frac{1}{d^{2g}} \sum_{T\in A[d]} \lambda(P+T). 
\] \Part{(3)} For any integer~$m$ and the function~${\boldsymbol e}_m(x)=e^{2\pi i m x}$, we have \[ (\operatorname{\hbox{\normalfont\calligra{Avg}}}_d{\boldsymbol e}_m)(x) = \begin{cases} {\boldsymbol e}_m(x) &\text{if $d\mid m$, } \\ 0 &\text{if $d\nmid m$. } \\ \end{cases} \] \end{parts} \end{example} \begin{definition} The \emph{2nd periodic Bernoulli polynomial} is the function defined by \[ \text{$\mathbb{B}_2(x) = x^2-x+\dfrac16$ for $0\le x\le 1$, and $\mathbb{B}_2(x+n)=\mathbb{B}_2(x)$ for $n\in\mathbb{Z}$.} \] The well-known Fourier expansion of~$\mathbb{B}_2(x)$ is \begin{equation} \label{eqn:B2fourier} \mathbb{B}_2(x) = \frac{1}{2\pi^2} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{{\boldsymbol e}(kx)}{k^2}, \end{equation} from which we immediately obtain the distribution relation \begin{equation} \label{eqn:B2distributionrelation} (\operatorname{\hbox{\normalfont\calligra{Avg}}}_N\mathbb{B}_2)(x) = \frac{1}{N^2}\mathbb{B}_2(Nx). \end{equation} \end{definition} We recall a Fej\'er kernel type estimate for~$\mathbb{B}_2$. \begin{lemma} \label{lemma:avg2ndBernpoly} Let~$R\ge1$ be an integer, and let \[ T \subset \frac{1}{R}\mathbb{Z} \quad\text{with}\quad N=\#T \] be a set of~$N$ distinct rational numbers whose denominators divide~$R$. Then \[ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{s,t\in T\\ s\ne t\\}} \; \mathbb{B}_2(s-t) \ge \frac{1}{6R^2} - \frac{1}{6(N-1)}. \] \end{lemma} \begin{proof} Let~$T=\{t_1,\ldots,t_N\}$.
We compute \begin{align*} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{s,t\in T\\ s\ne t\\}} {} & \; \mathbb{B}_2(s-t) \\ &= \frac{1}{N^2-N} \sum_{\substack{i,j=1\\i\ne j\\}}^N \mathbb{B}_2(t_i-t_j) \\ &= \frac{1}{N^2-N} \sum_{\substack{i,j=1\\i\ne j\\}}^N \frac{1}{2\pi^2} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{{\boldsymbol e}\bigl(k(t_i-t_j)\bigr)}{k^2} \quad\text{from \eqref{eqn:B2fourier},} \\ &= \frac{1}{2\pi^2(N^2-N)} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{1}{k^2} \sum_{\substack{i,j=1\\i\ne j\\}}^N {\boldsymbol e}\bigl(k(t_i-t_j)\bigr) \\ &= \frac{1}{2\pi^2(N^2-N)} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{1}{k^2} \biggl\{ \underbrace{\left| \sum_{i=1}^N {\boldsymbol e}(k t_i) \right|^2}_{\hidewidth \substack{ \text{this quantity is always $\ge0$,} \\ \text{and if $R\mid k$, then it equals $N^2$} \\ }\hidewidth } - N \biggr\} \\ &\ge \frac{1}{2\pi^2(N^2-N)} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{N^2-N}{(Rk)^2} - \frac{1}{2\pi^2(N^2-N)} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{N}{k^2} \\ &= \frac{1}{2\pi^2 R^2} \cdot 2\zeta(2) - \frac{1}{2\pi^2 (N-1)} \cdot 2\zeta(2) \\ &= \frac{1}{6R^2} - \frac{1}{6(N-1)}. \end{align*} This completes the proof of Lemma~\ref{lemma:avg2ndBernpoly}. \end{proof} We next express certain $d$-averages of the function~$L(x,y)$ in Theorem~\ref{theorem:fourierexpansionofL} in terms of $d$-averages of the second Bernoulli polynomial. \begin{corollary} \label{corollary:avglambdaberntobern2} Let~$a,b,c\in\mathbb{Z}$ with $D=ac-b^2>0$, let~$\a=a-b$ and~$\gamma=c-b$, and let~$d$ be an integer satisfying \begin{equation} \label{eqn:deqiuv02Dabc2q} d \equiv 0 \left(\bmod \frac{2D}{\gcd(a,b,c)^2} \right).
\end{equation} Then the~$d$-average of the~$\mathbb{Z}^2$-periodic function \[ L(x,y) = \min_{\substack{\xi\in x+\mathbb{Z}\\ \eta\in y+\mathbb{Z}\\}} a\xi^2 + 2b\xi\eta+c\eta^2 \] is given by the formula \begin{align} \label{eqn:AvgdLxy3B2s} \operatorname{\hbox{\normalfont\calligra{Avg}}}_d L(x,y) = \FC(0,0) &+ \frac{ \a (c,b)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (b x + c y)}{\gcd(c,b)} \right) \notag\\* &+ \frac{ \gamma (a,b)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (a x + b y)}{\gcd(a,b)} \right) \notag\\* &+ \frac{ b(\a,\gamma)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (\a x -\gamma y)}{\gcd(\a,\gamma)} \right). \end{align} \end{corollary} \begin{proof} The congruence condition~\eqref{eqn:deqiuv02Dabc2q} says that~$d$ satisfies \[ \frac{d\gcd(a,b,c)^2}{D}\in\mathbb{Z}, \] which in turn implies that \[ \operatorname{\textbf{Sin}}\left( \frac{c\a d m + a \gamma d n}{2D} \right) = \sin\left( \pi\cdot\frac{d\gcd(a,b,c)^2}{D}\cdot \frac{c\a m+a\gamma n}{\gcd(a,b,c)^2} \right) = 0, \] since~$a,c,\a,\gamma$ are all divisible by~$\gcd(a,b,c)$. Then Theorem~\ref{theorem:fourierexpansionofL} says that the associated Fourier coefficient satisfies \[ \FC(dm,dn) = 0 \quad\text{unless}\quad {\boldsymbol F}_1(m,n){\boldsymbol F}_2(m,n){\boldsymbol F}_3(m,n)=0. \] We note that if~$D\ne0$ and~$(m,n)\ne(0,0)$, then at most one of the linear forms~${\boldsymbol F}_1,{\boldsymbol F}_2,{\boldsymbol F}_3$ may vanish, so aside from~$\FC(0,0)$, the Fourier series splits into three sums. 
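Before carrying out the computation, it is reassuring to test the claimed identity numerically. The Python sketch below (an editorial check, not part of the proof) takes $(a,b,c)=(2,1,2)$, for which $D=3$, $\gcd(a,b,c)=1$, and $d=6$ is admissible, and confirms that the two sides of~\eqref{eqn:AvgdLxy3B2s} agree to machine precision; the reduction step assumes, as is easily checked for this triple, that the minimum defining~$L$ is attained with shifts in $\{-1,0,1\}$:

```python
# Editorial check of (eqn:AvgdLxy3B2s) for (a,b,c) = (2,1,2):
# here D = 3, gcd(a,b,c) = 1, the congruence condition allows d = 6,
# and all three gcd's (c,b), (a,b), (alpha,gamma) equal 1.
from math import isclose

a, b, c, d = 2, 1, 2, 6
D, al, ga = a*c - b*b, a - b, c - b
FC00 = (a*a*c + a*c*c - 2*a*b*b - 2*b*b*c + 2*b**3) / (12*D)   # = 5/18

def L(x, y):
    x -= round(x); y -= round(y)       # reduce mod Z^2 to |x|,|y| <= 1/2
    return min(a*(x+i)**2 + 2*b*(x+i)*(y+j) + c*(y+j)**2
               for i in (-1, 0, 1) for j in (-1, 0, 1))

def B2(t):                             # periodic second Bernoulli polynomial
    t %= 1.0
    return t*t - t + 1/6

def lhs(x, y):                         # the d-average of L
    return sum(L(x + i/d, y + j/d) for i in range(d) for j in range(d)) / d**2

def rhs(x, y):                         # the Bernoulli-polynomial side
    return (FC00 + al/(D*d*d) * B2(d*(b*x + c*y))
                 + ga/(D*d*d) * B2(d*(a*x + b*y))
                 + b /(D*d*d) * B2(d*(al*x - ga*y)))

for (x, y) in [(0.0, 0.0), (0.1, 0.23), (-0.37, 0.41)]:
    assert isclose(lhs(x, y), rhs(x, y), abs_tol=1e-9)
```

At the origin, for instance, both sides equal $61/216$ exactly.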
We compute (note that $d$ is even, so the signs~$(-1)^{dm}$ and~$(-1)^{dn}$ are all equal to~$1$) \begin{align*} \operatorname{\hbox{\normalfont\calligra{Avg}}}_d L(x,y) - \FC(0,0) &= \hspace{2em} \sideset{}{^\prime}\sum_{\hidewidth \substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_1(m,n){\boldsymbol F}_2(m,n){\boldsymbol F}_3(m,n)=0\\}\hidewidth } \FC(dm,dn) {\boldsymbol e}(dmx+dny) \\ &= \sideset{}{^\prime}\sum_{\substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_1(m,n)=0\\}} \dfrac{(-1)^{dn} \a c^2}{2 \pi^2 D (dn)^2} {\boldsymbol e}(dmx+dny) \\ &\qquad{}+ \sideset{}{^\prime}\sum_{\substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_2(m,n)=0\\}} \dfrac{(-1)^{dm} \gamma a^2}{2 \pi^2 D (dm)^2} {\boldsymbol e}(dmx+dny) \\ &\qquad{}+ \sideset{}{^\prime}\sum_{\substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_3(m,n)=0\\}} \dfrac{(-1)^{dm+dn+1} \a \gamma b}{2 \pi^2 D (dm)(dn)} {\boldsymbol e}(dmx+dny)\\ &\omit\hfill \begin{tabular}[t]{l} using the formulas for $\FC(m,n)$\\ from Theorem~\ref{theorem:fourierexpansionofL},\\ \end{tabular} \\ &= \frac{ \a c^2}{2 \pi^2 D d^2} \sideset{}{^\prime}\sum_{\substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_1(m,n)=0\\}} \dfrac{1}{n^2} {\boldsymbol e}(dmx+dny) \\ &\qquad{}+ \frac{ \gamma a^2}{2 \pi^2 D d^2} \sideset{}{^\prime}\sum_{\substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_2(m,n)=0\\}} \dfrac{1}{m^2} {\boldsymbol e}(dmx+dny) \\ &\qquad{}+ \frac{ \a \gamma b}{2 \pi^2 D d^2} \sideset{}{^\prime}\sum_{\substack{m,n\in\mathbb{Z}\\ {\boldsymbol F}_3(m,n)=0\\}} \dfrac{-1}{mn} {\boldsymbol e}(dmx+dny).
\end{align*} We rewrite the last three sums using \begin{align*} \bigl\{ (m,n)\in\mathbb{Z}^2 : {\boldsymbol F}_1(m,n)=0 \bigr\} &= \left\{ \left( \frac{bk}{(c,b)}, \frac{ck}{(c,b)} \right) : k\in\mathbb{Z} \right\}, \\ \bigl\{ (m,n)\in\mathbb{Z}^2 : {\boldsymbol F}_2(m,n)=0 \bigr\} &= \left\{ \left( \frac{ak}{(a,b)}, \frac{bk}{(a,b)} \right) : k\in\mathbb{Z} \right\}, \\ \bigl\{ (m,n)\in\mathbb{Z}^2 : {\boldsymbol F}_3(m,n)=0 \bigr\} &= \left\{ \left( \frac{\a k}{(\a,\gamma)}, \frac{-\gamma k}{(\a,\gamma)} \right) : k\in\mathbb{Z} \right\}. \end{align*} This yields \begin{align} \label{Avgdfourierseriesz} \operatorname{\hbox{\normalfont\calligra{Avg}}}_d L(x,y) - \FC(0,0) &= \frac{ \a (c,b)^2}{2 \pi^2 D d^2} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{1}{k^2} {\boldsymbol e}\left( \frac{d (b x + c y)}{(c,b)} k \right) \notag\\ &\qquad{}+ \frac{ \gamma (a,b)^2}{2 \pi^2 D d^2} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{1}{k^2} {\boldsymbol e}\left( \frac{d (a x + b y)}{(a,b)} k \right) \notag\\ &\qquad{}+ \frac{b(\a,\gamma)^2}{2 \pi^2 D d^2} \sideset{}{^\prime}\sum_{k\in\mathbb{Z}} \frac{1}{k^2} {\boldsymbol e}\left( \frac{d (\a x -\gamma y)}{(\a,\gamma)} k \right). \end{align} Using the Fourier series~\eqref{eqn:B2fourier} for~$\mathbb{B}_2$ for the three sums in~\eqref{Avgdfourierseriesz} gives the desired result. \end{proof} \section{Two Lower Bounds for the Local Height} \label{section:twolowerboundslocalheights} In this section we prove two lower bounds for averages of the Bernoulli part of the local height, one via Fourier averaging and one via the pigeonhole principle. Both estimates will be used in the proof of our Lehmer-type lower bound for the global height. The notation in Figure~\ref{figure:setupforlocallemmas} is used in the statement of both lemmas. \begin{figure} \framebox{\begin{minipage}{0.95\linewidth} \begin{tabular}{cl} $K_v$ & \parbox[t]{0.75\linewidth}{ a field that is complete with respect to a non-archimedean absolute value~$v$. 
} \\[5\jot] $(A,\Theta)/K_v$ & \parbox[t]{0.75\linewidth}{ an abelian variety~$A$ defined over~$K_v$ with an effective symmetric principal polarization $\Theta$, and such that~$A$ has totally split multiplicative reduction. } \\[9\jot] $(a,b,c)$ & \parbox[t]{0.75\linewidth}{ a normalized period valuation triple for $A/K_v$, i.e., if the period matrix is~${\boldsymbol q}$, then \\ \hspace*{2em} $a=v(q_{11}),\; b=v(q_{12})=v(q_{21}),\; c=v(q_{22}).$ } \\[9\jot] $D$ & ${}=ac-b^2$. \\ $d$ & \parbox[t]{0.65\linewidth}{ a positive integer satisfying \\ \hspace*{3em} $d \equiv 0 \left(\bmod \dfrac{2D}{\gcd(a,b,c)^2} \right).$ } \\[10\jot] $\Sigma$ & a finite subset of $A(K_v)$. \\ $N$ & ${}=\#\Sigma$. \end{tabular} \end{minipage} } \caption{Notation and Setup for Lemmas \ref{lemma:fourieravgbound} and \ref{lemma:pigeonholebound}.} \label{figure:setupforlocallemmas} \end{figure} \subsection{A Local Height Lower Bound via Fourier Averaging} \label{section:localhtbdviafourier} The main result of this section is an abelian surface analogue of the elliptic curve result~\cite[Proposition~1.2]{hindrysilverman:lehmer}. In order to handle the fact that for abelian surfaces, many of the Fourier coefficients of the Bernoulli-part of the local height are negative, the proof includes an average over~$d$-torsion points that eliminates the negative coefficients. For our eventual application to Lehmer-type height bounds, it is crucial that the value of~$d$ does not change when the base field is replaced by a (ramified) extension. \begin{lemma} \label{lemma:fourieravgbound} With notation as in Figure~$\ref{figure:setupforlocallemmas}$, we have \footnote{We recall that although~$\hat\lambda_{\ThetaDivisor,v}$ is only defined on the complement of the support of its associated divisor, we can extend~$\hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}$ to all of~$A(K_v)$. 
See Remark~\ref{remark:lhatberndefeverywhere}.} \begin{multline*} \smash[b]{ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma\\P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} }\; \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(P-Q+T) \\* \ge \frac{1}{24d^2} \left( \frac{\a+\gamma+b}{D} - \frac{\a(c,b)^2+\gamma(a,b)^2+b(\a,\gamma)^2}{D(N-1)} \right). \end{multline*} \end{lemma} \begin{proof} We first note that since we are averaging over~$d$-torsion points and~$d$ is even, we may as well replace the principal polarization~$\ThetaDivisor$ with the divisor of~$\theta({\boldsymbol u},{\boldsymbol q})$, since they differ by a $2$-torsion point that will disappear when we take the average; cf.\ Theorem~\ref{theorem:lDlDIlDBkk}. \par An important observation is that for any point~$P$, the vector~$(x_P,y_P)$ is given by the coordinates of~$P$ in the group~$\mathbb{Z}^2/A\mathbb{Z}^2$ relative to the basis given by the columns of the matrix~$A=\SmallMatrix{a&b\\b&c\\}$. Thus \begin{multline} \label{eqn:xPyPabcduPvP} \begin{pmatrix} x_P \\ y_P \\ \end{pmatrix} = \begin{pmatrix} a&b\\ b&c\\ \end{pmatrix}^{-1} \begin{pmatrix} u_P \\ v_P \\ \end{pmatrix} = \frac{1}{D} \begin{pmatrix} c&-b\\ -b&a\\ \end{pmatrix} \begin{pmatrix} u_P \\ v_P \\ \end{pmatrix} \\ \quad\text{for some $u_P,v_P\in\mathbb{Z}$.} \end{multline} This yields the useful formulas \begin{equation} \label{eqn:bxcyvaxbyu} bx_P+cy_P=v_P,\quad ax_P+by_P=u_P,\quad \a x_P-\gamma y_P=u_P-v_P. \end{equation} We also note that for any points~$P$ and~$Q$, we have \begin{equation} \label{eqn:xPminusQxPminusxQ} x_{P-Q}\equiv x_P-x_Q\pmodintext{\mathbb{Z}} \quad\text{and}\quad y_{P-Q}\equiv y_P-y_Q\pmodintext{\mathbb{Z}}. \end{equation} To ease notation, we drop the~$\gcd$ from the notation~$\gcd(a,b)$.
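The linear-algebra identities \eqref{eqn:xPyPabcduPvP} and \eqref{eqn:bxcyvaxbyu} can be confirmed mechanically; the following Python sketch (an editorial check using exact rational arithmetic, not part of the paper) does so for random integer data:

```python
# Editorial check of (eqn:xPyPabcduPvP) and (eqn:bxcyvaxbyu) in exact
# rational arithmetic: (x,y) = A^{-1}(u,v) with A = [[a,b],[b,c]].
from fractions import Fraction as Fr
import random

random.seed(1)
for _ in range(200):
    a, b, c = random.randint(1, 9), random.randint(0, 4), random.randint(1, 9)
    D = a*c - b*b
    if D <= 0:
        continue
    u, v = random.randint(-10, 10), random.randint(-10, 10)
    x = Fr(c*u - b*v, D)
    y = Fr(-b*u + a*v, D)
    assert b*x + c*y == v
    assert a*x + b*y == u
    assert (a - b)*x - (c - b)*y == u - v
```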
We compute \begin{align*} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma\\P\ne Q\\}} & \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \; 4 \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(P-Q+T) \\* &= \smash[b]{ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma\\P\ne Q\\}} } \operatorname{\hbox{\normalfont\calligra{Avg}}}_{d} \Bigl( L(x_{P-Q},y_{P-Q}) - \FC(0,0) \Bigr) \\* &\omit\hfill\quad\text{from Proposition~\ref{proposition:lhatDvBLQxminusLhatQ},} \\ &= \smash[b]{ \frac{1}{N^2-N} \sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} } \biggl\{ \frac{ \a (c,b)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (b x_{P-Q} + c y_{P-Q})}{(c,b)} \right) \\* &\hspace{8em} +\frac{ \gamma (a,b)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (a x_{P-Q} + b y_{P-Q})}{(a,b)} \right) \\* &\hspace{8em} +\frac{ b(\a,\gamma)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (\a x_{P-Q} -\gamma y_{P-Q})}{(\a,\gamma)} \right) \biggr\} \\* &\omit\hfill from Corollary~\ref{corollary:avglambdaberntobern2}, \\ &= \frac{ \a (c,b)^2}{ D d^2} \frac{1}{N^2-N} \sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} \mathbb{B}_2\left( \frac{d (bx_P+cy_P) - d(bx_Q+cy_Q)}{(c,b)} \right) \\* &+\frac{ \gamma (a,b)^2}{ D d^2} \frac{1}{N^2-N} \sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} \mathbb{B}_2\left( \frac{d(ax_P+by_P) - d(ax_Q+by_Q)}{(a,b)} \right) \\* &+\frac{ b(\a,\gamma)^2}{ D d^2} \smash[b]{ \frac{1}{N^2-N} \sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} } \mathbb{B}_2\left( \frac{d(\a x_P-\gamma y_P) - d(\a x_Q-\gamma y_Q)}{(\a,\gamma)} \right) \\* &\omit\hfill from \eqref{eqn:xPminusQxPminusxQ}, \\ &= \frac{ \a (c,b)^2}{ D d^2} \frac{1}{N^2-N} \sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} \mathbb{B}_2\left( \frac{d (v_P - v_Q)}{(c,b)} \right) \\* &+\frac{ \gamma (a,b)^2}{ D d^2} \frac{1}{N^2-N} \sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} \mathbb{B}_2\left( \frac{d(u_P-u_Q)}{(a,b)} \right) \\* &+\frac{ b(\a,\gamma)^2}{ D d^2} \frac{1}{N^2-N} \smash[b]{ 
\sum_{\substack{P,Q\in\Sigma\\P\ne Q\\}} } \mathbb{B}_2\left( \frac{d\bigl( (u_P-v_P)-(u_Q-v_Q)\bigr)}{(\a,\gamma)} \right) \\* &\omit\hfill from \eqref{eqn:bxcyvaxbyu}, \\ &\ge \frac{ \a (c,b)^2}{ D d^2} \cdot\frac{1}{6}\left( \frac{1}{(c,b)^2} - \frac{1}{N-1} \right) \\* &+\frac{ \gamma (a,b)^2}{ D d^2} \cdot\frac{1}{6}\left( \frac{1}{(a,b)^2} - \frac{1}{N-1} \right) \\* &+\frac{ b(\a,\gamma)^2}{ D d^2} \cdot\frac{1}{6}\left( \frac{1}{(\a,\gamma)^2} - \frac{1}{N-1} \right) \\* &\omit\hfill from Lemma \ref{lemma:avg2ndBernpoly}, since $d,u_P,v_P\in\mathbb{Z}$. \end{align*} A little bit of algebra yields the desired result, which concludes the proof of Lemma~\ref{lemma:fourieravgbound}. \end{proof} \subsection{A Local Height Lower Bound via the Pigeonhole Principle} \label{section:pigeonholdbound} The main result of this section is an analogue for abelian surfaces of~\cite[Lemma~4]{MR747871} and~\cite[Proposition~1.3]{hindrysilverman:lehmer}. However, the proof is intrinsically more complicated than in the case of elliptic curves, since it relies on a lower bound for the average of the local height over a carefully chosen set of torsion points, and that lower bound ultimately relies on the explicit Fourier expansion of the periodic quadratic form given in Theorem~\ref{theorem:fourierexpansionofL}. \begin{lemma} \label{lemma:pigeonholebound} With notation as in Figure~$\ref{figure:setupforlocallemmas}$, there exists a subset~$\Sigma'\subseteq\Sigma$ containing \[ \#\Sigma' \ge 6^{-3} \#\Sigma \] elements such that for all distinct $P,Q\in\Sigma'$ we have \[ \operatorname{\hbox{\normalfont\calligra{Avg}}}_d \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(P-Q) \ge \frac{ \a (c,b)^2 + \gamma (a,b)^2 + b(\a,\gamma)^2}{144 D d^2}. 
\] \end{lemma} \begin{proof}[Proof of Lemma $\ref{lemma:pigeonholebound}$] As in the proof of Lemma~\ref{lemma:fourieravgbound}, the fact that we're taking the~$d$-average with~$d$ even means that we may replace the principal polarization~$\ThetaDivisor$ with the divisor of~$\theta({\boldsymbol u},{\boldsymbol q})$. \par We start with the formula \begin{align} \label{eqn:AvgdLxy3B2st} \operatorname{\hbox{\normalfont\calligra{Avg}}}_d 4 \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(R) &= \operatorname{\hbox{\normalfont\calligra{Avg}}}_d L(x_R,y_R) - \FC(0,0) \quad\text{from Proposition~\ref{proposition:lhatDvBLQxminusLhatQ},} \notag\\* &= \frac{ \a (c,b)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (b x_R + c y_R)}{\gcd(c,b)} \right) \quad\text{from Corollary~\ref{corollary:avglambdaberntobern2},} \notag\\* &+ \frac{ \gamma (a,b)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (a x_R + b y_R)}{\gcd(a,b)} \right) \notag\\* &+ \frac{ b(\a,\gamma)^2}{ D d^2} \mathbb{B}_2\left( \frac{d (\a x_R -\gamma y_R)}{\gcd(\a,\gamma)} \right). \end{align} \par To ease notation, we momentarily define \[ \|\,\cdot\,\|_\mathbb{Z}:\mathbb{R}\longrightarrow \left[0,\frac12\right],\quad \|t\|_\mathbb{Z} = \min_{n\in\mathbb{Z}} |t+n|, \] i.e.,~$\|t\|_\mathbb{Z}$ is the distance from~$t$ to the closest integer to~$t$. It is easy to check that for all~$t\in\mathbb{R}$, the periodic Bernoulli polynomial satisfies \[ \|t\|_\mathbb{Z} \le\frac16 \quad\Longrightarrow\quad \mathbb{B}_2(t) \ge \frac{1}{36}, \] since by periodicity and symmetry $\mathbb{B}_2(-t)=\mathbb{B}_2(t)$, it suffices to check for~$0\le{t}\le\frac16$. Hence if~$R$ satisfies the three inequalities \begin{equation} \label{eqn:3ineqforR} \left. 
\hspace*{4em} \begin{aligned} \left\|\dfrac{d (b x_R + c y_R)}{(c,b)}\right\|_\mathbb{Z}&\le \dfrac16, \\ \left\|\dfrac{d (a x_R + b y_R)}{(a,b)}\right\|_\mathbb{Z}&\le \dfrac16, \\ \left\|\dfrac{d (\a x_R -\gamma y_R)}{(\a,\gamma)}\right\|_\mathbb{Z}&\le \dfrac16,\\ \end{aligned} \hspace*{4em} \right\} \end{equation} then each of the three Bernoulli polynomial values appearing in~\eqref{eqn:AvgdLxy3B2st} is at least~$1/36$. This proves that \[ \operatorname{\hbox{\normalfont\calligra{Avg}}}_d \hat\lambda_{\ThetaDivisor,v}^{\textup{Bern}}(R) \ge \frac{ \a (c,b)^2 + \gamma (a,b)^2 + b(\a,\gamma)^2}{144 D d^2} \quad\text{if $R$ satisfies \eqref{eqn:3ineqforR}.} \] \par We consider the map \begin{align*} \Sigma &\longrightarrow (\mathbb{R}/\mathbb{Z})^3,\\ R &\longmapsto \left(\frac{d (b x_R + c y_R)}{(c,b)},\frac{d (a x_R + b y_R)}{(a,b)},\frac{d (\a x_R -\gamma y_R)}{(\a,\gamma)}\right). \end{align*} We divide the centered fundamental domain for~$(\mathbb{R}/\mathbb{Z})^3$ into~$6^3$ equally sized cubes whose sides have length~$6^{-1}$. Then the pigeonhole principle ensures that we can find a subset \[ \Sigma'\subseteq\Sigma \quad\text{with}\quad \#\Sigma' \ge 6^{-3}\#\Sigma \] such that the points in~$\Sigma'$ all lie in the same small cube. It follows that for all pairs~$P,Q\in\Sigma'$ we have {\small \begin{align} \label{eqn:PQdif1} \left\| \frac{d (b x_{P-Q} + c y_{P-Q})}{(c,b)} \right\|_\mathbb{Z} &= \left\| \frac{d (b x_P + c y_P)}{(c,b)} - \frac{d (b x_Q + c y_Q)}{(c,b)} \right\|_\mathbb{Z} \le \frac16,\\ \label{eqn:PQdif2} \left\| \frac{d (a x_{P-Q} + b y_{P-Q})}{(a,b)} \right\|_\mathbb{Z} &= \left\| \frac{d (a x_P + b y_P)}{(a,b)} - \frac{d (a x_Q + b y_Q)}{(a,b)} \right\|_\mathbb{Z} \le \frac16,\\ \label{eqn:PQdif3} \left\| \frac{d (\a x_{P-Q} - \gamma y_{P-Q})}{(\a,\gamma)} \right\|_\mathbb{Z} &= \left\| \frac{d (\a x_P - \gamma y_P)}{(\a,\gamma)} - \frac{d (\a x_Q - \gamma y_Q)}{(\a,\gamma)} \right\|_\mathbb{Z} \le \frac16.
\end{align} }\ignorespaces We note that the three equalities in~\eqref{eqn:PQdif1},~\eqref{eqn:PQdif2} and~\eqref{eqn:PQdif3} are justified because the quantities~$x_{P-Q}$ and~$y_{P-Q}$ are multiplied by integers, because they satisfy \[ x_{P-Q}\equiv x_P-x_Q\pmodintext{\mathbb{Z}} \quad\text{and}\quad y_{P-Q}\equiv y_P-y_Q\pmodintext{\mathbb{Z}}, \] and because we are using the norm on~$\mathbb{R}/\mathbb{Z}$. Thus all differences of points in~$\Sigma'$ satisfy~\eqref{eqn:3ineqforR}, which completes the proof of Lemma~\ref{lemma:pigeonholebound}. \end{proof} \section{A Bound for Small Differences Lying on $\Theta$} \label{section:diffsontheta} As noted earlier, the Bernoulli part of the local height~$\hat\lambda_{\ASD,v}^{\textup{Bern}}$ is defined at every point, but the intersection part~$\hat\lambda_{\ASD,v}^\Int$ is defined only away from the support of the associated divisor~$\ASD$. That means that if we want to use the local-global decomposition of the global height~${\hat h}_\ASD$ described in Theorem~\ref{theorem:neronfncexist}(h), we must restrict to points lying in the complement~$A({\bar K})\setminus|\ASD|$ of the support of~$\ASD$. However, since ultimately we want to study points of small height, it will suffice to use the following lemma, whose proof relies on Ullmo and Zhang's proof of the Bogomolov conjecture. \begin{lemma} \label{lemma:diffsontheta} Let~${\bar K}$ be an algebraically closed field of characteristic~$0$, let~$A/{\bar K}$ be an abelian surface, let~$\Theta\subset{A}$ be an irreducible curve of genus at least~$2$, and let~${\hat h}_A$ be a canonical height on~$A$ relative to some ample symmetric divisor.
There are constants~$\Cl[DZ]{bg1},\Cl[DZ]{bg2}>0$ that depend only on~$A/{\bar K}$,~$\Theta$, and~${\hat h}_A$ so that for all finite subsets \begin{equation} \label{eqn:SiginnhPlebg1} \Sigma\subset \Theta \cap \bigl\{ P \in A({\bar K}) : {\hat h}_A(P) \le \Cr{bg1} \bigr\} \end{equation} there exists a subset~$\Sigma'\subset\Sigma$ satisfying \[ \#\Sigma' \ge \Cr{bg2} \cdot \#\Sigma \quad\text{and}\quad (P-Q+A_{\textup{tors}})\cap\Theta=\emptyset~\text{for all distinct $P,Q\in\Sigma'$.} \] \end{lemma} \begin{proof} The Bogomolov conjecture for (curves on) abelian varieties, which was proven by Ullmo~\cite{MR1609514} and Zhang~\cite{MR1609518}, says that there is a constant~$\Cl[DZ]{bg3}>0$, depending only on~$A,\Theta,{\hat h}_A$, such that the set \[ \Xi = \Xi(A,\Theta,{\hat h}_A) := \Bigl( \Theta \cap \bigl\{ P\in A({\bar K}) : {\hat h}_A(P)\le\Cr{bg3} \bigr\}\Bigr) \quad\text{is finite.} \] In other words, only finitely many points of~$A$ lie on~$\Theta$ and have small height. \par We set~$\Cr{bg1}=\frac14\Cr{bg3}$. Then \begin{align*} P,Q\in\Sigma & \quad\text{and}\quad T\in A_{\textup{tors}} \quad\text{and}\quad P-Q+T\in\Theta\\ &\quad\Longrightarrow\quad {\hat h}_A(P-Q+T) = {\hat h}_A(P-Q) \le 2{\hat h}_A(P)+2{\hat h}_A(Q) \\ &\omit\hfill parallelogram formula, \\ &\quad\Longrightarrow\quad {\hat h}_A(P-Q+T) \le 4\,\Cr{bg1} \\ &\omit\hfill from \eqref{eqn:SiginnhPlebg1}, since $P,Q\in\Sigma$, \\ &\quad\Longrightarrow\quad {\hat h}_A(P-Q+T) \le \Cr{bg3}, \quad\text{since $\Cr{bg1}=\frac14\Cr{bg3}$, } \\ &\quad\Longrightarrow\quad P-Q+T \in \Xi. \end{align*} To ease notation, we let \[ N = \#\Sigma \qquad\text{and}\qquad \nu = \nu(A,\Theta,{\hat h}_A) := \max\bigl\{ \#\Xi, 2 \bigr\}, \] and we let \[ \Sigma = \{P_1,P_2,\ldots,P_N\}. \] \par We build the set~$\Sigma'$ one step at a time.
We first consider the differences of~$P_1$ with the other elements of~$\Sigma$, translated by torsion points, i.e., we consider the sets \[ P_1-P_2+A_{\textup{tors}},\;P_1-P_3+A_{\textup{tors}},\;\ldots,\;P_1-P_N+A_{\textup{tors}}. \] The implication proven earlier shows that at most~$\#\Xi\le\nu$ of these sets may contain a point lying on~$\Theta$, so relabeling the elements of~$\Sigma$, we have shown that \begin{align*} (P_1-P_2+A_{\textup{tors}})\cap\Theta&=\emptyset,\\ (P_1-P_3+A_{\textup{tors}})\cap\Theta&=\emptyset,\\ \omit\hfill$\vdots$\hfill\\ (P_1-P_{N-\nu}+A_{\textup{tors}})\cap\Theta&=\emptyset. \end{align*} \par We next consider the differences of~$P_2$ with the higher-indexed elements of~$\Sigma$, again translated by torsion points, \[ P_2-P_3+A_{\textup{tors}},\; P_2-P_4+A_{\textup{tors}},\;\ldots,\;P_2-P_{N-\nu}+A_{\textup{tors}}. \] As in the previous step, at most~$\nu$ of these sets contain a point lying on~$\Theta$, so relabeling again, we have shown that \[ (P_2-P_3+A_{\textup{tors}})\cap\Theta=\emptyset,\;\ldots,\; (P_2-P_{N-2\nu}+A_{\textup{tors}})\cap\Theta=\emptyset. \] Continuing in this fashion, at the~$k$th step (until we run out of points in~$\Sigma$), we will have shown that \[ (P_k-P_{k+1}+A_{\textup{tors}})\cap\Theta=\emptyset,\;\ldots,\; (P_k-P_{N-k\nu}+A_{\textup{tors}})\cap\Theta=\emptyset. \] This works as long as \[ N-k\nu > k,\quad\text{and thus as long as}\quad k < \frac{N}{\nu+1}. \] Since~$\nu\ge2$ by assumption, we may certainly run the above algorithm until~$k=\lceil{N/2\nu}\rceil$. Then by construction the set \[ \Sigma' = \{P_1,P_2,\ldots,P_k\} \] has the property that \[ (P_i-P_j+A_{\textup{tors}}) \cap \Theta = \emptyset \quad\text{for all $1\le i<j\le k$,} \] and the size of the set~$\Sigma'$ satisfies \[ \#\Sigma' \ge \left\lceil\frac{N}{2\nu}\right\rceil \ge \frac{1}{2\nu}\#\Sigma. \] This completes the proof of Lemma~\ref{lemma:diffsontheta} with~$\Cr{bg2}=1/2\nu$.
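As a sanity check on the counting in this greedy construction, here is an illustration with the hypothetical values $\nu=2$ and $N=24$ (not part of the argument):

```latex
% Illustration with the hypothetical values \nu = 2 and N = 24.
% The construction may be run while k < N/(\nu+1) = 8, and
% k = \lceil N/2\nu \rceil = \lceil 24/4 \rceil = 6 satisfies this, with
% N - k\nu = 24 - 12 = 12 > 6 = k points still available at the last step.
\[
\#\Sigma' = k = 6 \ge \frac{N}{2\nu} = \frac{24}{4} = 6,
\qquad\text{consistent with } \Cr{bg2} = \frac{1}{2\nu} = \frac14 .
\]
```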
\end{proof} \section{A Lehmer-Type Height Bound for Abelian Surfaces} \label{section:lehmerabeliansurface} In this section we prove an unconditional, albeit somewhat technical, lower bound for average values of the Bernoulli part of the canonical height. We also prove a corollary giving an exponent~$2$ Lehmer-type lower bound for the canonical height that is conditional on the assumption that the average of the intersection part of the canonical height is at least as large as the local-global constant~$\kappa_\Theta$ appearing in Theorem~\ref{theorem:neronfncexist}(h). \begin{theorem} \label{theorem:hlen23len23} We set the following notation\textup: \begin{notation} \item[$k$] an algebraically closed field of characteristic~$0$. \item[$K/k$] a $1$-dimensional function field. \item[$(A,\ThetaDivisor)/K$] an abelian variety~$A$ defined over~$K$ with an irreducible effective symmetric principal polarization $\Theta\in\operatorname{Div}_K(A)$. \item[${\hat h}_{A,\ThetaDivisor}$] the canonical height on~$A$ for the divisor $\ThetaDivisor$. \item[${\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}$] the Bernoulli part of the canonical height on~$A$ for the divisor $\ThetaDivisor$; see Definition~$\ref{definition:globalintbernhts}$. \end{notation} Assume that for every place~$v$ of~$K$, the abelian variety~$A$ has either potential good reduction at~$v$ or totally multiplicative reduction at~$v$, and that~$A$ has at least one place of multiplicative reduction.\footnote{For ease of exposition, we have excluded abelian surfaces having partial multiplicative reduction (surfaces with fibers ${\mathcal A}_v^\circ={\mathcal E}\rtimes\mathbb{G}_m$ where~${\mathcal E}$ is an elliptic curve), although we expect that these cases could be handled similarly. We also note that although the assumption that~$A$ have at least one place of potential multiplicative reduction is required for our proof, it is a relatively weak assumption.
For example, if~$A/K$ has everywhere good reduction and is not isotrivial, then it necessarily has a non-simple fiber~${\mathcal A}_v$, i.e., a fiber that is isogenous to a product of elliptic curves.} There are constants~$\Cl[DZ]{jj1},\Cl[DZ]{jj2},\Cl[DZ]{jj3},\Cl[DZ]{jj4}>0$ and an integer~$d\ge1$ that depend only on~$A/K$ so that the following holds\textup: \par For all finite extensions~$L/K$ and all sets of points \begin{equation} \label{eqn:SigmainPALhPleC} \Sigma \subseteq \bigl\{ P \in A(L) : {\hat h}_{A,\ThetaDivisor}(P) \le \Cr{jj1} \bigr\}, \end{equation} there is a subset $\Sigma_0\subseteq\Sigma$ having the following three properties\textup: \begin{gather} \label{eqn:subset0geCsubset} \#\Sigma_0 \ge \Cr{jj2}\cdot \#\Sigma \\ \label{eqn:PQTnotinThetadistPQTtors} P-Q+T \notin|\Theta| \quad\text{for all distinct $P,Q\in\Sigma_0$ and all $T\in A_{\textup{tors}}$.} \\ \label{eqn:AvgPQAvgdBerngeLK23} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}(P-Q+T) \ge \frac{\Cr{jj3}}{[L:K]^{2/3}} - \frac{\Cr{jj4}}{\#\Sigma}. 
\end{gather} \end{theorem} \begin{corollary} \label{corollary:conditionallehmer} With notation as in Theorem~$\ref{theorem:hlen23len23}$, suppose that for every finite~$L/K$ and every set of points~$\Sigma$ satisfying~\eqref{eqn:SigmainPALhPleC}, there is a subset~$\Sigma_0\subseteq\Sigma$ satisfying \eqref{eqn:subset0geCsubset},~\eqref{eqn:PQTnotinThetadistPQTtors},~\eqref{eqn:AvgPQAvgdBerngeLK23}, and also\footnote{We note that~\eqref{eqn:PQTnotinThetadistPQTtors} ensures that~${\hat h}_{A,\ThetaDivisor}^\Int$ is well-defined at all of the~$P-Q-T$ points under consideration.} \begin{equation} \label{eqn:AvgPQAvgdIntgeLK23} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^\Int(P-Q+T) \ge \kappa_{\ThetaDivisor}, \end{equation} where~$\kappa_{\ThetaDivisor}$ is the constant appearing in Theorem~\textup{\ref{theorem:neronfncexist}(h)}. Then every non-torsion~$P\in{A({\bar K})}$ satisfies \[ {\hat h}_{A,\ThetaDivisor}(P) \ge \frac{\Cl[DZ]{jj5}}{\bigl[K(P):K\bigr]^2}. \] \end{corollary} \begin{remark} The assumption~\eqref{eqn:AvgPQAvgdIntgeLK23} in Corollary~\ref{corollary:conditionallehmer} says roughly that (on average) the intersection part of the local heights, by itself, is sufficient to compensate for the difference between the canonical height and the sum of the local heights. It is unclear to the authors whether this is likely to be true, but we have included it in order to explain how the somewhat technical estimate in Theorem~\ref{theorem:hlen23len23} can be incorporated into the proof of a Lehmer-type estimate, as was done unconditionally for elliptic curves in~\cite{hindrysilverman:lehmer}. 
\end{remark} \begin{proof}[Proof of Theorem~$\ref{theorem:hlen23len23}$] We first replace~$K$ by a finite extension over which~$A$ has everywhere good or totally multiplicative reduction, which may require some adjustment in the constants. We let \[ n = [L:K]. \] As in the statement of the theorem, all of the constants may depend on~$A/K$, but they are independent of~$L$,~$n$ and~$P\in{A(L)}$. We let \[ S = \{v\in M_K : \text{$A$ has bad reduction at $v$} \}. \] For each~$v\in{S}$ we fix a uniformization \[ \mathbb{G}_m^2({\bar K}_v)\longrightarrow A({\bar K}_v) \] with kernel spanned (multiplicatively) by the columns of the matrix \[ {\boldsymbol q}_v = \begin{pmatrix} q_{v,11} & q_{v,12} \\ q_{v,21} & q_{v,22} \\ \end{pmatrix} \] whose associated $\ThetaFunction$-function has divisor equal to a translation of~$\Theta$ by a $2$-torsion point. The valuation matrix \[ Q_v = v({\boldsymbol q}_v) = \begin{pmatrix} v(q_{v,11}) & v(q_{v,12}) \\ v(q_{v,21}) & v(q_{v,22}) \\ \end{pmatrix} = \begin{pmatrix} a_v & b_v \\ b_v & c_v \\ \end{pmatrix} \] is symmetric and positive-definite. As usual, we let \[ \a_v=a_v-b_v\quad\text{and}\quad \gamma_v=c_v-b_v. \] After a change of basis as described in Lemma~\ref{lemma:quadformwbpositive}, we may assume that the triple is normalized, and thus that \[ D_v = a_vc_v-b_v^2 > 0 \quad\text{and}\quad 0\le 2b_v\le a_v\le c_v. \] To ease notation, we define two functions on~$\mathbb{Z}^3$, where we note that the expressions~$\operatorname{\xi}(a,b,c)$ and~$\operatorname{\Delta}(a,b,c)$ are the quantities appearing in both Lemma~\ref{lemma:fourieravgbound} and Lemma~\ref{lemma:pigeonholebound}: \begin{align} \operatorname{\Delta}(a,b,c) &= \dfrac{D}{\gcd(a,b,c)^2}. \label{eqn:abcFunctionThree} \\ \operatorname{\xi}(a,b,c) &= \dfrac{\a\gcd(c,b)^2 + \gamma\gcd(a,b)^2 + b\gcd(\a,\gamma)^2}{D}.
\label{eqn:abcFunctionOne} \end{align} For the proof of Theorem~\ref{theorem:hlen23len23}, it is crucial to observe that these functions satisfy the homogeneity formulas \[ \operatorname{\xi}(ea,eb,ec)=e\operatorname{\xi}(a,b,c) \quad\text{and}\quad \operatorname{\Delta}(ea,eb,ec)=\operatorname{\Delta}(a,b,c), \] since these homogeneity properties allow us to control the height bounds for ramified extensions~$L_w/K_v$. \par For~$w\in{M_L}$ with~$w\mid{v}$, we denote the ramification index of~$w/v$ by~$e_w$, so~$w|_K=e_wv$. In particular, the valuations of the multiplicative periods of~$A$ are multiplied by~$e_w$ when we move from~$K$ to~$L$. Thus for places~$v$ of bad reduction, we have \begin{equation} \label{eqn:aweqavetc} \left. \begin{aligned} a_w = e_w a_v,\quad b_w &= e_w b_v,\quad c_w = e_w c_v, \\ \a_w = e_w \a_v,\quad \gamma_w &= e_w \gamma_v, \\ D_w = a_wc_w-b_w^2 &= e_w^2 D_v, \\ \operatorname{\Delta}(a_w,b_w,c_w) &= \operatorname{\Delta}(a_v,b_v,c_v), \\ \operatorname{\xi}(a_w,b_w,c_w) &= e_w\operatorname{\xi}(a_v,b_v,c_v). \\ \end{aligned} \right\} \end{equation} We define the integer~$d$ by the formula \[ d = 2 \operatorname{LCM} \bigl\{ \operatorname{\Delta}(a_v,b_v,c_v) : v \in S \bigr\}. \] We note that~$d$ depends only on~$A/K$, i.e., it is independent of the extension field~$L/K$. We may thus replace~$L$ with the compositum of~$L$ and~$K\bigl(A[d]\bigr)$, at the potential cost of multiplying~$n=[L:K]$ by up to~$d^4$. Since~$d$ depends only on~$A/K$, this requires only an adjustment of various constants. We henceforth assume that \[ A[d] \subset A(L). \] \par We choose a place~$v_0\in{M_K}$ such that the fiber at~$v_0$ of the N\'eron model of~$A$ is a torus, i.e., ${\mathcal A}_{v_0}(k)\cong\mathbb{G}_m^2(k)$. (By assumption, there is at least one such place.) Then among the~$w\in{M_L}$ lying over~$v_0$, we choose~$w_0$ to have the largest ramification index, i.e., \[ e_{w_0} = \max\{ e_w : w\in M_L,\,w\mid v_0 \}.
\] We also let \[ M_{A/K}^{\textup{bad}} = \{v\in M_K : \text{$A$ has bad reduction at $v$} \}, \] and similarly for~$M_{A/L}^{\textup{bad}}$. \par Let~$\Sigma$ be a set satisfying~\eqref{eqn:SigmainPALhPleC}. We start by applying Lemma~\ref{lemma:pigeonholebound} to~$\Sigma\subset{A(L)}\subset{A(L_{w_0})}$ to find a subset~$\Sigma'\subseteq\Sigma$ satisfying \begin{equation} \label{eqn:NSig6n3S} \#\Sigma' \ge 6^{-3} \#\Sigma \end{equation} and such that for all distinct $P,Q\in\Sigma'$ we have \begin{equation} \label{eqn:avgdinlehpf} \operatorname{\hbox{\normalfont\calligra{Avg}}}_d \hat\lambda_{\ThetaDivisor,w_0}^{\textup{Bern}}(P-Q) \ge \frac{ \operatorname{\xi}(a_{w_0},b_{w_0},c_{w_0})}{144 d^2} = \frac{ e_{w_0}\operatorname{\xi}(a_{v_0},b_{v_0},c_{v_0})}{144 d^2}. \end{equation} \par We next apply Lemma~\ref{lemma:diffsontheta} to the set~$\Sigma'$ to find a subset~$\Sigma_0\subseteq\Sigma'$ satisfying\footnote{If we only want the lower bound on the Bernoulli part of the height, it is not necessary to use Lemma~\ref{lemma:diffsontheta}, since the Bernoulli part of the height is defined on all of~$A$. However, any application to the global height will need to also include the intersection part of the height, which is not defined on the support of~$\ThetaDivisor$.} \begin{equation} \label{eqn:sigmaprimeprime} N := \#\Sigma_0 \ge \Cr{bg2} \cdot \#\Sigma' \end{equation} and \begin{equation} \label{eqn:pminusqplustnotintheta} P-Q+T \notin|\Theta| \quad\text{for all distinct $P,Q\in\Sigma_0$ and all $T\in A_{\textup{tors}}$.} \end{equation} \par We now estimate the double average~\eqref{eqn:AvgPQAvgdBerngeLK23} for the set~$\Sigma_0$ and the integer~$d$. We note that~\eqref{eqn:pminusqplustnotintheta} ensures that the points~$P-Q+T$ appearing in this calculation do not lie on the divisor~$\Theta$, and thus the local heights are well-defined at all such points. 
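Since the homogeneity formulas~\eqref{eqn:aweqavetc} for~$\operatorname{\xi}$ and~$\operatorname{\Delta}$ are used repeatedly in the estimates below, we record the routine degree count behind them:

```latex
% Replacing (a,b,c) by (ea,eb,ec) scales \a = a-b, \gamma = c-b, and every gcd
% by e, and scales D = ac - b^2 by e^2, so
\[
\operatorname{\xi}(ea,eb,ec)
  = \frac{e\a\,\bigl(e\gcd(c,b)\bigr)^2
        + e\gamma\,\bigl(e\gcd(a,b)\bigr)^2
        + eb\,\bigl(e\gcd(\a,\gamma)\bigr)^2}{e^2 D}
  = e\operatorname{\xi}(a,b,c),
\]
\[
\operatorname{\Delta}(ea,eb,ec)
  = \frac{e^2 D}{\bigl(e\gcd(a,b,c)\bigr)^2}
  = \operatorname{\Delta}(a,b,c).
\]
```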
Thus \begin{align} \label{eqn:AvgAvglBern} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} & \;\; {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}(P-Q+T) \notag\\ &= \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \sum_{w\in M_{A/L}^{\textup{bad}}} \frac{1}{n} \l^{\textup{Bern}}_{\ThetaDivisor,w}(P-Q+T) \notag\\ &= \sum_{w\in M_{A/L}^{\textup{bad}}} \frac{1}{n} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \l^{\textup{Bern}}_{\ThetaDivisor,w}(P-Q+T). \end{align} We split the sum in~\eqref{eqn:AvgAvglBern} into three pieces: \begin{parts} \Part{(1)} For the place~$w_0$, we use the lower bound from Lemma~\ref{lemma:pigeonholebound}. \Part{(2)} For the places~$w$ dividing~$v_0$ that are not equal to~$w_0$, we use the lower bound provided by the full strength of Lemma~\ref{lemma:fourieravgbound}. \Part{(3)} For the places~$w$ with $w\in{M_{A/L}^{\textup{bad}}}$ that do not divide~$v_0$, we again use Lemma~\ref{lemma:fourieravgbound}, but we discard the positive contribution coming from the~$(\a_w+\gamma_w+b_w)/D_w$ terms. \end{parts} Carrying out these three estimates yields the following: \begin{align} (1)\quad & \frac{1}{n} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \l^{\textup{Bern}}_{\ThetaDivisor,w_0}(P-Q+T) \notag\\ &\quad{}\ge \frac{1}{n} \cdot \frac{ e_{w_0}\operatorname{\xi}(a_{v_0},b_{v_0},c_{v_0})}{144 d^2} \quad\text{from~\eqref{eqn:avgdinlehpf},}\notag \\ &\quad{}= \Cl[DZ]{dz1}\cdot \frac{e_{w_0}}{n}.
\label{eqn:avgdestimate1} \\ (2)\enspace& \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\, w\ne w_0\\}} \frac{1}{n} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \l^{\textup{Bern}}_{\ThetaDivisor,w}(P-Q+T) \notag\\ &\quad{}\ge \smash[b]{ \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\,w\ne w_0\\}} } \frac{1}{n}\cdot\frac{1}{24d^2} \left( \frac{\a_w+\gamma_w+b_w}{D_w} - \frac{\operatorname{\xi}(a_w,b_w,c_w)}{N-1} \right) \notag \\ &\omit\hfill applying Lemma~\ref{lemma:fourieravgbound} to $\Sigma_0$ and $w$, \notag \\ &= \frac{1}{24 n d^2} \smash[b]{ \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\,w\ne w_0\\}} } \left( \frac{e_w(\a_{v_0}+\gamma_{v_0}+b_{v_0})}{e_w^2D_{v_0}} - \frac{e_w\operatorname{\xi}(a_{v_0},b_{v_0},c_{v_0})}{N-1} \right) \notag \\ &\omit\hfill using the homogeneity formulas~\eqref{eqn:aweqavetc}, \notag \\ &= \frac{\a_{v_0}+\gamma_{v_0}+b_{v_0}}{24 n d^2 D_{v_0}} \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\,w\ne w_0\\}} \frac{1}{e_w} - \frac{\operatorname{\xi}(a_{v_0},b_{v_0},c_{v_0})}{24nd^2(N-1)} \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\,w\ne w_0\\}} e_w \notag \\ &= \frac{\a_{v_0}+\gamma_{v_0}+b_{v_0}}{24 n d^2 D_{v_0}} \biggl( \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\,w\ne w_0\\}} \frac{1}{e_w} \biggr) - \frac{\operatorname{\xi}(a_{v_0},b_{v_0},c_{v_0})(n-e_{w_0})}{24nd^2(N-1)} \notag \\ &\omit\hfill since $\sum_{w\mid v}e_w=n$ for all $v$, \notag \\ &\ge \frac{\Cl[DZ]{dz2}}{n} \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\w\mid v_0,\,w\ne w_0\\}} \frac{1}{e_w} - \frac{\Cl[DZ]{dz3}}{(N-1)}.
\label{eqn:avgdestimate2} \\ (3)\enspace& \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\ w\nmid v_0\\}} \frac{1}{n} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \l^{\textup{Bern}}_{\ThetaDivisor,w}(P-Q+T) \notag\\ &\ge\frac{1}{n} \smash[b]{ \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\ w\nmid v_0\\}} } \frac{1}{24d^2} \left( - \frac{\operatorname{\xi}(a_w,b_w,c_w)}{N-1} \right) \notag \\ &\omit\hfill applying Lemma~\ref{lemma:fourieravgbound} to $\Sigma_0$ and $w$, \notag \\ &= \frac{1}{n} \smash[b]{ \sum_{\substack{w\in M_{A/L}^{\textup{bad}}\\ w\nmid v_0\\}} } \frac{1}{24d^2} \left( - \frac{e_w \operatorname{\xi}(a_v,b_v,c_v)}{N-1} \right) \notag \\ &\omit\hfill using the homogeneity formulas~\eqref{eqn:aweqavetc}, \notag \\ &= - \frac{1}{24d^2n} \sum_{\substack{v\in M_{A/K}^{\textup{bad}}\\ v\ne v_0\\}} \left( \frac{\operatorname{\xi}(a_v,b_v,c_v)}{N-1} \right) \sum_{\substack{w\in M_L\\ w\mid v\\}} e_w \notag \\ &= - \frac{1}{24d^2(N-1)} \smash[b]{ \sum_{\substack{v\in M_{A/K}^{\textup{bad}}\\ v\ne v_0\\}} } \operatorname{\xi}(a_v,b_v,c_v) \notag \\ &\omit\hfill since $\smash{\sum_{w\mid v}e_w=n}$, \notag \\ &= \smash[t]{ - \frac{\Cl[DZ]{dz4}}{N-1}. } \label{eqn:avgdestimate3} \end{align} Substituting the sum of the three estimates~\eqref{eqn:avgdestimate1},~\eqref{eqn:avgdestimate2},~\eqref{eqn:avgdestimate3} into~\eqref{eqn:AvgAvglBern}, we find that \begin{multline} \label{eqn:maxRSAKRCrdz34} \smash[b]{ \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} } \;\; {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}(P-Q+T) \\ \ge \frac{1}{n} \biggl\{ \Cr{dz1} e_{w_0} + \Cr{dz2} \sum_{\substack{w\in M_L\\w\mid v_0,\,w\ne w_0\\}} \frac{1}{e_w} \biggr\} - \frac{\Cr{dz3}+\Cr{dz4}}{N-1}.
\end{multline} Since \[ e_{w_0} = \max\{ e_w : w\mid v_0 \} \quad\text{and}\quad \sum_{w\mid{v_0}}e_w=n, \] we can apply Lemma~\ref{lemma:holderineq} to the quantity in braces in~\eqref{eqn:maxRSAKRCrdz34} to obtain the following lower bound, with newly relabeled constants depending on~$A/K$ and where we have used~\eqref{eqn:NSig6n3S} and~\eqref{eqn:sigmaprimeprime} to estimate $N=\#\Sigma_0$ in terms of~$\#\Sigma$.\footnote{We remark that in order to apply Lemma~\ref{lemma:holderineq}, the integer~$n$ must satisfy $n^2\ge\Cr{dz2}/\Cr{dz1}$. There is no harm in our making this assumption, since these constants are given explicitly by \[ \Cr{dz2} = \frac{\a_{v_0}+\gamma_{v_0}+b_{v_0}}{24d^2D_{v_0}} \quad\text{and}\quad \Cr{dz1} = \frac{\operatorname{\xi}(a_{v_0},b_{v_0},c_{v_0}) }{144d^2} \ge \frac{\a_{v_0}+\gamma_{v_0}+b_{v_0}}{144d^2D_{v_0}}, \] and thus $\Cr{dz2}/\Cr{dz1}\le6$. Hence it suffices to assume that~$n\ge3$. } \begin{align*} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}(P-Q+T) &\ge \frac{1}{n} \cdot \Cl[DZ]{dz5}n^{1/3} - \frac{\Cl[DZ]{dz6}}{N-1} \\ &\ge \frac{\Cr{jj3}}{n^{2/3}} - \frac{\Cr{jj4}}{\#\Sigma}. \end{align*} This completes the proof of Theorem~\ref{theorem:hlen23len23}. \end{proof} \begin{proof}[Proof of Corollary~$\ref{corollary:conditionallehmer}$] Let~$P_0\in{A({\bar K})}$ be a non-torsion point, and to ease notation, let \[ L = K(P_0) \quad\text{and}\quad n = [L:K]. \] We take~$M$ to be the largest integer satisfying \begin{equation} \label{eqn:M2leChP0} M^2 \le \frac{\Cr{jj1}}{{\hat h}_{A,\Theta}(P_0)}, \end{equation} where~$\Cr{jj1}$ is the constant appearing in~\eqref{eqn:SigmainPALhPleC}.
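Before carrying out the details, it may help to record the exponent bookkeeping behind this choice of~$M$; the following is a heuristic summary of the computation that follows, with constants suppressed:

```latex
% Heuristic exponent count with constants suppressed: taking M \asymp n^{2/3}
% makes the error term \Cr{jj4}/M at most a fixed fraction of the main term
% \Cr{jj3}/n^{2/3}, so
\[
M^2\,{\hat h}_{A,\ThetaDivisor}(P_0) \;\gg\; \frac{1}{n^{2/3}}
\quad\Longrightarrow\quad
{\hat h}_{A,\ThetaDivisor}(P_0)
  \;\gg\; \frac{1}{n^{4/3}}\cdot\frac{1}{n^{2/3}}
  \;=\; \frac{1}{n^{2}},
\]
% which is the exponent 2 appearing in the statement of the corollary.
```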
We consider the set of points \[ \Sigma = \{ mP_0 : 0 \le m \le M-1 \} \subset\bigl\{ P\in A(L) : {\hat h}_{A,\Theta}(P) \le \Cr{jj1} \bigr\}, \] where the inclusion follows from~${\hat h}_{A,\Theta}(mP_0)=m^2{\hat h}_{A,\Theta}(P_0)$ and our choice of~$M$. \par Then, according to~\eqref{eqn:subset0geCsubset},~\eqref{eqn:AvgPQAvgdBerngeLK23}, and~\eqref{eqn:AvgPQAvgdIntgeLK23}, we can find a subset~$\Sigma_0\subseteq\Sigma$ with $\#\Sigma_0\ge\Cr{jj2}\#\Sigma=\Cr{jj2}M$ that satisfies \begin{align} \label{eqn:AvgPQAvgdBerngeLK23x} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}(P-Q+T) &\ge \frac{\Cr{jj3}}{n^{2/3}} - \frac{\Cr{jj4}}{M}. \\ \label{eqn:AvgPQAvgdIntgeLK23x} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}^\Int(P-Q+T) &\ge \kappa_{\ThetaDivisor}. \end{align} Proposition~\ref{eqn:hhatsumintbernparts} says that \[ {\hat h}_{A,\ThetaDivisor} = {\hat h}_{A,\ThetaDivisor}^\Int + {\hat h}_{A,\ThetaDivisor}^{\textup{Bern}}-\kappa_\ThetaDivisor, \] so adding~\eqref{eqn:AvgPQAvgdBerngeLK23x} to~\eqref{eqn:AvgPQAvgdIntgeLK23x} yields \begin{equation} \label{eqn:Avgjj3423} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}(P-Q+T) \ge \frac{\Cr{jj3}}{n^{2/3}} - \frac{\Cr{jj4}}{M}.
\end{equation} \par But for any points~$P,Q\in\Sigma$ and for any torsion point~$T\in{A_{\textup{tors}}}$, we have \begin{align*} {\hat h}_{A,\ThetaDivisor}(P-Q+T) & = {\hat h}_{A,\ThetaDivisor}(P-Q) \\ & \le 2{\hat h}_{A,\ThetaDivisor}(P)+2{\hat h}_{A,\ThetaDivisor}(Q) \\ & \le 4 \max_{P\in\Sigma} {\hat h}_{A,\ThetaDivisor}(P) \\ & \le 4 \max_{0\le m < M} {\hat h}_{A,\ThetaDivisor}(mP_0) \\ & \le 4M^2 {\hat h}_{A,\ThetaDivisor}(P_0). \end{align*} Hence \begin{equation} \label{eqn:AvgM2hP0} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{\substack{P,Q\in\Sigma_0\\ P\ne Q\\}} \operatornamewithlimits{\hbox{\normalfont\calligra{Avg}}}_{T\in A[d]} \;\; {\hat h}_{A,\ThetaDivisor}(P-Q+T) \le 4M^2 {\hat h}_{A,\ThetaDivisor}(P_0). \end{equation} Combining~\eqref{eqn:Avgjj3423} and~\eqref{eqn:AvgM2hP0} yields \[ 4M^2 {\hat h}_{A,\ThetaDivisor}(P_0) \ge \frac{\Cr{jj3}}{n^{2/3}} - \frac{\Cr{jj4}}{M}. \] Setting~$M$ to be the smallest integer satisfying \begin{equation} \label{eqn:Mge2Cn23} M \ge \frac{2\Cr{jj4}n^{2/3}}{\Cr{jj3}} \end{equation} yields (after adjusting constants) \[ n^{4/3} {\hat h}_{A,\ThetaDivisor}(P_0) \ge \frac{\Cr{jj5}}{n^{2/3}}. \] \par This completes the proof of Corollary~\ref{corollary:conditionallehmer} provided that we can justify choosing~$M$ to satisfy~\eqref{eqn:Mge2Cn23}, since we earlier in~\eqref{eqn:M2leChP0} assumed that~$M$ satisfies an upper bound. In other words, we need to check that there is an integer~$M$ in the interval \[ \frac{2\Cr{jj4}n^{2/3}}{\Cr{jj3}} \le M \le \sqrt{\frac{\Cr{jj1}}{{\hat h}_{A,\ThetaDivisor}(P_0)}}. \] But if there is no such~$M$, then we find that \[ \sqrt{\frac{\Cr{jj1}}{{\hat h}_{A,\ThetaDivisor}(P_0)}} \le \frac{2\Cr{jj4}n^{2/3}}{\Cr{jj3}} + 1, \] and squaring both sides and adjusting constants, we see that \[ {\hat h}_{A,\ThetaDivisor}(P_0) \ge \frac{\Cl[DZ]{jj6}}{n^{4/3}}, \] which is an even stronger inequality than the one that we are trying to prove.
\end{proof} The following is a more precise and fully explicated version of~\cite[Lemma~3.1]{hindrysilverman:lehmer}. \begin{lemma} \label{lemma:holderineq} Let $\a,\b,n>0$ be real numbers satisfying \begin{equation} \label{eqn:n2gebetaalpha} n^2 \ge \b/\a, \end{equation} and let $e_0,\ldots,e_r>0$ be real numbers satisfying \[ e_0 = \max\{e_0,\ldots,e_r\} \quad\text{and}\quad n = e_0+\cdots+e_r. \] Then \begin{equation} \label{eqn:alphae0betaei} \a e_0 + \b \sum_{i=1}^r \frac{1}{e_i} \ge (\a^2 \b n)^{\frac{1}{3}}. \end{equation} \end{lemma} \begin{proof} Since~$e_0$ is the largest of the~$e_i$ and~$n$ is the sum of the~$e_i$, we can estimate \begin{equation} \label{eqn:e0genr1} e_0 \ge \frac{e_0+\cdots+e_r}{r+1} = \frac{n}{r+1}. \end{equation} We compute \begin{align} \label{eqn:r2ge4nr1ei1} r^2&= \left( \sum_{i=1}^r e_i^{1/2}\cdot e_i^{-1/2} \right)^2 \notag \\ &\le \left( \sum_{i=1}^r e_i \right)\left( \sum_{i=1}^r e_i^{-1} \right) &&\text{Cauchy--Schwarz inequality,} \notag\\ &= (n-e_0)\left( \sum_{i=1}^r e_i^{-1} \right) &&\text{since $e_0+\cdots+e_r=n$,} \notag\\ &\le \frac{rn}{r+1} \left( \sum_{i=1}^r e_i^{-1} \right) &&\text{using \eqref{eqn:e0genr1}.} \end{align} We use this estimate to bound the left-hand side of~\eqref{eqn:alphae0betaei} as \begin{align} \label{eqn:ae0bsumi1rinf} \a e_0 + \b \sum_{i=1}^r \frac{1}{e_i} &\ge \a \frac{n}{r+1} + \b \frac{r^2+r}{n} \quad\text{using \eqref{eqn:e0genr1} and \eqref{eqn:r2ge4nr1ei1},} \notag\\ &\ge \inf_{t>0} \left\{ \frac{\a n}{t+1} + \frac{\b}{n}(t^2+t) \right\} \notag\\ &= \inf_{x>1} \left\{ \frac{\a n}{x} + \frac{\b}{n}(x^2-x) \right\} \quad\text{setting $x=t+1$,} \notag\\ &= (\a^2\b n)^{1/3} \inf_{u>\gamma} \left\{ \frac{1}{u}+ u^2 - \gamma u \right\} \\ &\omit\hfill setting $\gamma=\left(\dfrac{\b}{\a n^2}\right)^{1/3}$ \hspace*{-10pt} and $u=\gamma x$. \notag \end{align} To ease notation, we let \[ f(\gamma,u) = u^{-1} + u^2 - \gamma u.
\] The fact that \[ \frac{d^2\phantom u}{du^2}(u^{-1}+u^2-\gamma u) = 2u^{-3} + 2 > 0 \quad\text{for all $u>0$} \] shows that~$f(\gamma,u)$ has at most one minimum on the half-line~$u>0$, and then the fact that~$f(\gamma,u)\to\infty$ as~$u\to0^+$ and as~$u\to\infty$ shows that it has a unique minimum. We thus get a well-defined function \[ F(w) = \inf_{u>0} f(w,u) = \inf_{u>0} \{ u^{-1} + u^2 - wu \} \quad\text{for $w\in\mathbb{R}$.} \] \par We claim that~$F(w)$ is a strictly decreasing function. To see why, we note that our earlier discussion shows that \[ F(w) = f\bigl(w,U(w)\bigr) = U(w)^{-1}+U(w)^2-wU(w), \] where~$u=U(w)$ is the unique real solution to the equation \[ \frac{\partial f}{\partial u}(w,u) = -u^{-2} + 2u - w = 0. \] Hence \begin{align*} \frac{dF}{dw} &= \frac{d\phantom w}{dw}f\bigl(w,U(w)\bigr) \\ & = \frac{\partial f}{\partial w}\bigl(w,U(w)\bigr) + \underbrace{ \frac{\partial f}{\partial u}\bigl(w,U(w)\bigr) }_{\text{this is 0}}\cdot \frac{dU}{dw}(w) \\ &= -U(w) < 0. \end{align*} \par Returning to our earlier calculation and using the assumption~\eqref{eqn:n2gebetaalpha} that~$\gamma\le1$, we find that \begin{align*} \a e_0 + \b \sum_{i=1}^r \frac{1}{e_i} &\ge (\a^2\b n)^{1/3} \inf_{u>\gamma} \left\{ u^{-1} + u^2 - \gamma u \right\} \quad\text{from~\eqref{eqn:ae0bsumi1rinf},} \\ &\ge (\a^2\b n)^{1/3} F(\gamma) \quad\text{by definition of $F(w)$,} \\ &\ge (\a^2\b n)^{1/3} F(1) \quad\begin{tabular}[t]{l} for all $0\le\gamma\le1$, since $F(w)$\\ is a decreasing function,\\ \end{tabular} \\ &= (\a^2\b n)^{1/3} \quad\text{since it is easy to compute $F(1)=1$.} \end{align*} This completes the proof of Lemma~\ref{lemma:holderineq}. \end{proof} \begin{acknowledgement} The authors would like to thank Dan Abramovich, Matt Baker, and David Grant for their helpful advice. \end{acknowledgement} \bibliographystyle{plain}
\section{Introduction} Low-dimensional factor models are commonly used in many fields to account for latent variables in a panel dataset. Conditions for identification in low-dimensional factor models have been derived in \cite{AndersonRubin1956}. These conditions indicate problematic points in the parameter space where identification is lost. Weak identification arises when the true value of the parameters is close, in some sense, to one of these problematic points. Several papers address the weak identification problem in generalized method of moments (GMM) models, including \cite{StockWright2000} and \cite{Kleibergen2005}. This paper describes how to reparameterize low-dimensional factor models to fit the weak identification theory developed for GMM models. Papers covering weak identification theory for GMM models require a classification of each parameter as weakly identified or strongly identified. Reparameterizations that satisfy the weak identification classification can be challenging to find. \cite{HanMcCloskey2019} provide a general strategy for finding a reparameterization based on solving a sequence of differential equations. Most of the nonlinear models for which this classification has been solved have relatively few parameters. For example, \cite{AndrewsMikusheva2016Geometric} verify the weak identification classification in a simplified small-scale DSGE model with six parameters. \cite{AndrewsMikusheva2016Geometric} state, ``even in this simple highly stylized model, deriving the weakly and strongly identified directions in the parameter space is messy, and such derivations will be difficult if not impossible in richer, more empirically relevant models.''\footnote{\cite{AndrewsMikusheva2016Geometric}, Section S8, pg. 27.} In this paper, the low-dimensional factor models that we classify constitute a class of empirically relevant models.
Furthermore, identification-robust hypothesis tests benefit from a reparameterization that makes the nuisance parameters strongly identified. There are two types of identification-robust hypothesis tests in the weak identification literature: ones that require strongly identified nuisance parameters, such as the K test from \cite{Kleibergen2005}, and ones that allow for weakly identified nuisance parameters, such as the test in \cite{ChaudhuriZivot2011}. The reparameterizations described in this paper make available the identification-robust hypothesis tests that require strongly identified nuisance parameters. Otherwise, those tests would have to be projected over the nuisance parameters, making them very conservative. This paper focuses on weak identification in low-dimensional factor models with one or two factors. The primary reason for this focus is that the conditions for identification given in \cite{AndersonRubin1956} are necessary and sufficient in this case. (Also see Chapter 7 in \cite{Bollen1989} for a good discussion on the identification conditions in low-dimensional factor models.) With three or more factors, the conditions given in \cite{AndersonRubin1956} are either necessary or sufficient, but not both. To the author's knowledge, no necessary and sufficient conditions for identification in low-dimensional factor models with three or more factors are available. In the two-factor model, we also focus on the case that one of the factors is strong, in the sense that it has at least three nonzero factor loadings. The primary reason for this focus is to keep the reparameterization manageable. We leave the case of three or more factors or the case of two factors with both factors weak (in the sense that neither factor has three nonzero factor loadings) to future research. We include simulations comparing various identification-robust hypothesis tests. 
We find that tests that require the reparameterization to plug in strongly identified nuisance parameters have good size and power properties. In contrast, tests that allow for weakly identified nuisance parameters can be very conservative under weak identification. We also document the fact that estimates of the number of factors frequently include weakly identified factors. These simulations complement simulations by \cite{BriggsMacCallum2003} and \cite{Ximenez2006}, which compare estimators of the factors when the factor loadings are near zero. Empirical applications of factor models are often weakly identified. Anytime the number of factors is unknown, the researcher must consider the possibility that one of the factors is weak. \cite{Attanasio2020AER} use a factor model with one factor to model parental investments in children. We note evidence of a second factor that may be weakly identified and compute identification-robust confidence intervals in this application. \cite{Cox_weak_id_w_bounds} uses these reparameterizations to analyze the combination of weak identification with bounds in low-dimensional factor models. The idea is that inequalities on the parameters can shrink the identified set and confidence intervals. \cite{Cox_weak_id_w_bounds} proposes an identification-robust quasi-likelihood ratio test that uses information from the inequalities when identification is weak. It should be pointed out that weak identification in low-dimensional factor models is different from weak factors in high-dimensional factor models, as in \cite{Onatski2012} or \cite{Freyaldenhoven2022}. In high-dimensional factor models, the primary problem is an accumulation of noise from an increasing number of variables that do not have much information about the factors. In low-dimensional factor models, the problem is loss of identification. 
In fact, one strong factor in a low-dimensional factor model, with a finite number of fixed nonzero factor loadings, would count as a weak factor in a high-dimensional factor model. There is also a related approach to factor models that allows for nonzero covariances between the errors. In the high-dimensional setting, this is handled by assuming an approximate factor structure, as in \cite{ChamberlainRothschild1983} or \cite{Bai2003}. In the low-dimensional setting, \cite{Williams2020} presents strategies for identifying factor model parameters using only a subset of the covariance restrictions. \cite{Williams2020} does not consider weak identification or identification-robust inference. The remainder of the paper proceeds as follows. Section 2 describes a general low-dimensional factor model. Section 3 gives the reparameterization for a factor model with one factor. Section 4 gives the reparameterization for a factor model with two factors. Section 5 presents the simulations. Section 6 presents the empirical application. Section 7 concludes. An appendix contains additional details on the reparameterizations, simulations, and empirical application. \section{A Low-Dimensional Factor Model} \label{Section2} Suppose a researcher observes $p$ variables in a dataset, $W_i=(W_{1i}, W_{2i}, ..., W_{pi})'$, for $i=1,...,n$. A factor model for $W_i$ hypothesizes a common variable or factor that contributes to the variation of multiple $W_{ji}$. The factor model is defined by the equation, \begin{equation} W_i=\Lambda f_i+\epsilon_i, \label{model} \end{equation} where $\Lambda$ is a $p\times m$ matrix of factor loadings, $f_i$ is an $m$-vector of unobserved factors, and $\epsilon_i$ is a $p$-vector of unobserved errors. In low-dimensional factor models, $p$ is fixed as the sample size increases. See \cite{LawleyMaxwell1971} and \cite{Anderson1984} for traditional presentations of low-dimensional factor models.
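To fix ideas, the model in (\ref{model}) is easy to simulate. The following sketch (in Python, with hypothetical parameter values chosen purely for illustration) draws a panel from a one-factor model with $p=3$ and confirms that the sample covariance matrix of $W_i$ is close to the model-implied covariance matrix $\Lambda\Sigma\Lambda'+\Phi$:

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
rng = np.random.default_rng(0)
n, p = 50_000, 3
Lambda = np.array([[1.0], [0.8], [0.5]])        # p x m loadings, m = 1
sigma2 = 2.0                                    # Var(f_i)
Phi = np.diag([1.0, 0.7, 0.4])                  # diagonal error covariance

f = rng.normal(0.0, np.sqrt(sigma2), size=(n, 1))          # factors
eps = rng.normal(0.0, 1.0, size=(n, p)) @ np.sqrt(Phi)     # errors
W = f @ Lambda.T + eps                          # observed panel, n x p

Omega_model = sigma2 * (Lambda @ Lambda.T) + Phi
Omega_sample = np.cov(W, rowvar=False)
print(np.max(np.abs(Omega_sample - Omega_model)))          # small for large n
```

Because $f_i$ is uncorrelated with the (mutually uncorrelated) errors, every off-diagonal entry of the sample covariance matrix is driven by the factor alone.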
The model in (\ref{model}) can be generalized to allow $W_i$ to be the errors from a system of linear regressions; see Section \ref{RegressionErrors} in the appendix. The objective in factor models is to estimate $\Lambda$ and the covariance matrix of the factors. Let $\Sigma$ denote the covariance matrix of the factors and let $\Phi$ denote the covariance matrix of the errors. We assume $\Phi$ is a diagonal matrix and that $f_i$ is uncorrelated with $\epsilon_i$. These assumptions ensure $f_i$ is the only source of common variation. Factor models imply a covariance matrix relationship between the factors and the observed variables. If we let $\Omega$ denote the covariance matrix of $W_i$, then \begin{equation} \Omega=\Lambda\Sigma\Lambda'+\Phi. \label{covariance_equation} \end{equation} This equation relates the factor-model parameters that we are interested in ($\Lambda$, $\Sigma$, and $\Phi$) to the covariance matrix of the observed variables. Identification and estimation of factor models are focused on exploiting this relationship. For identification, notice that (\ref{covariance_equation}) implies an indeterminacy in the $\Sigma$ and $\Lambda$ parameters. For any $m\times m$ invertible matrix, $M$, both $(\Lambda,\Sigma)$ and $(\Lambda M,M\inv\Sigma (M\inv)')$, the latter corresponding to the rotated factors $M\inv f_i$, imply the same value of $\Omega$. This means that $m^2$ additional restrictions are needed to identify $\Lambda$ and $\Sigma$ from (\ref{covariance_equation}). \cite{BaiLi2012} describe five sets of restrictions that are commonly used. In this paper, we assume the first $m$ rows of $\Lambda$ are $I_m$, the $m$-dimensional identity matrix. This corresponds to IC1 in \cite{BaiLi2012}. The weak identification analysis is the same under different restrictions, once the factors are appropriately rotated and rescaled. Other requirements for identification using (\ref{covariance_equation}) are the subject of Sections \ref{Section3} and \ref{Section4}.
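The rotation indeterminacy can be verified numerically. The sketch below (parameter values are hypothetical, chosen only for illustration) checks that rotating the loadings by an arbitrary invertible $M$, while counter-rotating the factor covariance, leaves $\Omega$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
Lambda = rng.normal(size=(5, 2))            # hypothetical 5 x 2 loadings
A = rng.normal(size=(2, 2))
Sigma = A @ A.T + np.eye(2)                 # positive-definite factor covariance
Phi = np.diag(rng.uniform(0.5, 1.5, size=5))
M = np.array([[2.0, 1.0], [0.0, 3.0]])      # any invertible 2 x 2 matrix

Omega1 = Lambda @ Sigma @ Lambda.T + Phi

# Rotated parameters leave Omega unchanged: (Lambda M) (M^{-1} Sigma M^{-1}')
# (Lambda M)' = Lambda Sigma Lambda'.
Minv = np.linalg.inv(M)
Lambda2 = Lambda @ M
Sigma2 = Minv @ Sigma @ Minv.T
Omega2 = Lambda2 @ Sigma2 @ Lambda2.T + Phi

print(np.allclose(Omega1, Omega2))          # True
```

This is why $m^2$ restrictions (the free entries of $M$) must be imposed before $\Lambda$ and $\Sigma$ can be separated.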
For estimation, (\ref{covariance_equation}) can be used as moments in a GMM model. Let $vec(A)$ denote the vectorization operator for a matrix $A$ and let $vech(A)$ denote the vectorization operator for a square symmetric matrix $A$ that takes only the values at or below the diagonal. Collect the factor model parameters as $\gamma=(vec(\Lambda)',vech(\Sigma)',diag(\Phi)')'$, and let $\Omega(\gamma)=\Lambda\Sigma\Lambda'+\Phi$. We can write the sample moments as $g(\gamma,W_i)=vech(W_iW'_i-\Omega(\gamma))$. The GMM estimator then minimizes \begin{equation} Q_n(\gamma)=n\bar g_n(\gamma)'\widehat V^{-1}_n\bar g_n(\gamma), \label{GMM_objective} \end{equation} where $\bar g_n(\gamma)=n\inv \isum g(\gamma,W_i)$ and $\widehat V_n$ is an estimator of the asymptotic variance of $n^{-1/2}\sum_{i=1}^n vech(W_iW'_i)$. Using (\ref{covariance_equation}) as moments in a GMM model allows us to apply the weak identification theory that has been developed for GMM. We note that the maximum likelihood (ML) estimator is a more common estimator for low-dimensional factor models, where the factors and errors are assumed to be jointly normally distributed. The GMM estimator will be asymptotically equivalent to ML when the GMM objective function is efficiently weighted and the model is correctly specified. \cite{Cox_weak_id_w_bounds} considers weak identification in a class of minimum distance models that covers the ML estimator for low-dimensional factor models. \section{One Factor} \label{Section3} We consider a factor model with one factor and three observed variables. Three observed variables is the minimum number needed to identify the factor model parameters. The reparameterization presented in this section is extended to a one-factor model with more observed variables in Section \ref{1FMV} in the appendix. With one factor and three observed variables, there are six parameters in the model.
They are the variance of the factor, $\sigma^2$, assumed to be positive, the variances of the errors, $\Phi=\text{diag}(\phi_1,\phi_2,\phi_3)$, and two factor loadings, $\lambda_2$ and $\lambda_3$. The factor loading matrix is \begin{equation} \Lambda = \left[\begin{array}{c}1\\\lambda_2\\\lambda_3\end{array}\right]. \end{equation} Note that we use $\lambda_1=1$ as the additional restriction. It assigns the units of the factor to be the same as the units of $W_{1i}$. Identification is determined by equation (\ref{covariance_equation}), which writes the covariance matrix of $W_i$ as a nonlinear function of the parameters. Let $\omega_j=\text{Var}(W_{ji})$ for $j\in\{1,2,3\}$, $\rho_2=\text{Cov}(W_{1i},W_{2i})$, $\rho_3=\text{Cov}(W_{1i},W_{3i})$, and $\tau=\text{Cov}(W_{2i},W_{3i})$. We can write out equation (\ref{covariance_equation}) as \begin{equation} \left[\begin{array}{ccc}\omega_1&\rho_2&\rho_3\\\rho_2&\omega_2&\tau\\\rho_3&\tau&\omega_3\end{array}\right]=\left[\begin{array}{ccc}\sigma^2+\phi_1&\lambda_2\sigma^2&\lambda_3\sigma^2\\\lambda_2\sigma^2&\lambda_2^2\sigma^2+\phi_2&\lambda_2\lambda_3\sigma^2\\\lambda_3\sigma^2&\lambda_2\lambda_3\sigma^2&\lambda_3^2\sigma^2+\phi_3\end{array}\right]. \label{Ex1_covariance_equation} \end{equation} Equation (\ref{Ex1_covariance_equation}) is composed of six scalar equations that set a nonlinear function of six unknown factor model parameters, $\gamma=(\lambda_2,\lambda_3,\sigma^2,\phi_1,\phi_2,\phi_3)'$, equal to six identified variances/covariances of $W_i$, $vech(\Omega)=(\omega_1,\rho_2,\rho_3,\omega_2,\tau,\omega_3)'$. The factor model parameters are identified if and only if these equations can be inverted for a unique value of $\gamma$. \cite{AndersonRubin1956} give a simple necessary and sufficient condition on the factor loadings for $\gamma$ to be identified from (\ref{Ex1_covariance_equation}). If both $\lambda_2\neq 0$ and $\lambda_3\neq 0$, then $\gamma$ is identified.
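When both loadings are nonzero, the inversion can be carried out in closed form. The sketch below (hypothetical parameter values, for illustration only) builds the variances/covariances from a chosen $\gamma$ and recovers $\gamma$ back from them, using $\sigma^2=\rho_2\rho_3/\tau$:

```python
import numpy as np

# Hypothetical true values with lambda2 and lambda3 both nonzero.
lam2, lam3, sigma2 = 0.8, -0.5, 2.0
phi = np.array([1.0, 0.7, 0.4])
Lambda = np.array([1.0, lam2, lam3])
Omega = sigma2 * np.outer(Lambda, Lambda) + np.diag(phi)

# Invert the six scalar equations in the covariance relationship.
rho2, rho3, tau = Omega[0, 1], Omega[0, 2], Omega[1, 2]
sigma2_hat = rho2 * rho3 / tau            # sigma^2 = rho2 * rho3 / tau
lam2_hat = rho2 / sigma2_hat
lam3_hat = rho3 / sigma2_hat
phi_hat = np.diag(Omega) - sigma2_hat * np.array([1.0, lam2_hat, lam3_hat])**2

print(sigma2_hat, lam2_hat, lam3_hat)     # approximately 2.0, 0.8, -0.5
```

The recovery fails exactly when $\tau=0$, i.e., when $\lambda_2\lambda_3=0$, which is the source of the identification failure discussed next.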
When $\lambda_2\neq 0$ and $\lambda_3\neq 0$, then $\sigma^2$ can be identified by $\sigma^2=\tau\inv\rho_2\rho_3$. In this case, the other parameters in $\gamma$ are then easily identified. When $\lambda_3=0$ (or, by symmetry, $\lambda_2=0$), it is easy to see that $\gamma$ cannot be identified. The equations given by $\rho_3=\lambda_3\sigma^2$ and $\tau=\lambda_2\lambda_3\sigma^2$ are always zero. While these two equations can identify $\lambda_3=0$, there are still five other parameters to be identified and only four other equations. Thus, the identified set for $\gamma$ is given by a curve in the parameter space that satisfies $\lambda_3=0$ and these four other equations. Weak identification theory for GMM models requires knowledge that classifies each parameter as weakly identified or strongly identified. This rules out models where the identified set is determined by a curve in the parameter space. To satisfy this classification, such models require a careful reparameterization. \cite{HanMcCloskey2019} discuss this problem and provide a general strategy for finding a reparameterization based on solving a sequence of differential equations using the Jacobian of the moments. Here, we give a closed-form reparameterization. \begin{reparameterization}\label{reparameterization1} Let \begin{align*} \rho_2&=\lambda_2\sigma^2\\ \rho_3&=\lambda_3\sigma^2\\ \omega_1&=\sigma^2+\phi_1\\ \omega_2&=\lambda_2^2\sigma^2+\phi_2\\ \omega_3&=\lambda_3^2\sigma^2+\phi_3\\ \beta&=\sigma^2, \end{align*} and let $\pi=(\rho_2,\rho_3,\omega_1,\omega_2,\omega_3)'$. The model in (\ref{model}) with one factor and three observed variables can equivalently be parameterized by $\gamma=(\lambda_2,\lambda_3,\sigma^2,\phi_1,\phi_2,\phi_3)'$ or $\theta=(\pi',\beta)'$. \end{reparameterization} \textbf{Remarks.} (1) Reparameterization \ref{reparameterization1} uses the scalar equations in (\ref{Ex1_covariance_equation}) to define the new parameters. 
The strategy is simple: replace an original parameter with a corresponding identified parameter from the covariance matrix of $W_i$. Note that we cannot replace $\sigma^2$ with $\tau=\lambda_2\lambda_3\sigma^2$ because that replacement would not be invertible when $\lambda_2\lambda_3=0$. Thus, we leave $\sigma^2$ in the parameters by simply redefining it to be $\beta$. The non-invertibility of $\tau$ for $\sigma^2$ becomes the key focus of the identification analysis. (2) In the new parameterization, the identified parameters are the ones in $\pi$, while the weakly identified parameter is $\beta$. Thus, each parameter can be classified as strongly or weakly identified. (3) Estimation theory in weakly identified models tends to require further structure than just classifying the parameters; see Assumption C in \cite{StockWright2000} and Assumption A in \cite{AndrewsCheng2012}. The reparameterized factor model does not satisfy this further structure. Assumption C in \cite{StockWright2000} is very restrictive in nonlinear models; see the discussion in Section 2 in \cite{AndrewsGuggenberger2017}. Assumption A in \cite{AndrewsCheng2012} requires identification to be determined by whether a vector of strongly identified parameters is zero. Identification in the reparameterized factor model is determined by whether a \textit{function of} the strongly identified parameters is zero. It is unclear if there exists a further reparameterization that satisfies Assumption A in \cite{AndrewsCheng2012} without adding assumptions to the model. The reparameterized model does satisfy the structure required by Assumption 1 in \cite{Cox_weak_id_w_bounds} when the objective function in (\ref{GMM_objective}) is recast as a minimum distance objective function. \qed \medskip Using Reparameterization \ref{reparameterization1}, we can write the moments as functions of $\theta=(\pi',\beta)'$. 
The reparameterized moments are \begin{equation} g(\theta,W_i)=vech(W_iW'_i)-vech(\Omega(\theta))= \left(\begin{array}{rcl} W_{1i}^2&-&\omega_1\\ W_{1i}W_{2i}&-&\rho_2\\ W_{1i}W_{3i}&-&\rho_3\\ W_{2i}^2&-&\omega_2\\ W_{2i}W_{3i}&-&\beta\inv\rho_2\rho_3\\ W_{3i}^2&-&\omega_3 \end{array}\right). \label{reparameterized_moments} \end{equation} The only nontrivial moment is the one with $\beta\inv\rho_2\rho_3$, corresponding to the $\tau$ parameter in (\ref{Ex1_covariance_equation}). It is easy to see that $\beta$ is identified if and only if $\rho_2\rho_3\neq 0$. This is equivalent to the condition from \cite{AndersonRubin1956}, except stated in terms of the new parameters.\footnote{This demonstrates an important part of the classification needed for weak identification in GMM models. The identification status of potentially weakly identified parameters must be determined only by the values of strongly identified parameters. According to this principle, the condition from \cite{AndersonRubin1956} is insufficient because the factor loadings are not strongly identified. Reparameterization \ref{reparameterization1} is careful to translate this condition to an equivalent one on the new parameters.} Weak identification arises when the true value of the parameters is considered as a sequence of values indexed by the sample size that converges to a non-identified limit at a particular rate. Denote this sequence of true values by a subscript $n$. To determine the correct rate, we look at the columns of the Jacobian of the moments. Write out the Jacobian of (\ref{reparameterized_moments}): \begin{equation} \frac{\partial}{\partial\theta'}g(\theta,W_i)=G(\theta)=-\left[\begin{array}{cccccc}0&0&1&0&0&0\\1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&1&0&0\\\beta\inv\rho_3&\beta\inv\rho_2&0&0&0&-\beta^{-2}\rho_2\rho_3\\0&0&0&0&1&0\end{array}\right]. \end{equation} The only column that can go to zero is the last one, which depends on $-\beta^{-2}\rho_2\rho_3$. 
If this value converges to zero at the $n^{-1/2}$ rate, then we have weak identification. \begin{definition}\label{def1} A sequence of parameters, $\theta_n=(\pi'_n,\beta_n)'=(\rho_{2n},\rho_{3n},\omega_{1n},\omega_{2n},\omega_{3n},\beta_n)'$, induces weak identification of the model in (\ref{model}) with one factor and three observed variables if $\sqrt{n}\rho_{2n}\rho_{3n}\rightarrow b$ for some $b\in\R$. \end{definition} \textbf{Remark.} In Definition \ref{def1}, the key value that determines the strength of identification is the product between $\rho_{2n}$ and $\rho_{3n}$. This means that weak identification can arise from one converging to zero at the $n^{-1/2}$ rate while the other converges to a nonzero value. Weak identification can also arise from both converging to zero at slower rates than $n^{-1/2}$. For example, if $\rho_{2n}=n^{-1/3}$ and $\rho_{3n}=n^{-1/6}$, then the model is weakly identified. The practical consequence of this slower convergence rate is that the influence of weak identification covers a larger neighborhood of $(\rho_2,\rho_3)=(0,0)$ in the parameter space---one that shrinks slower than $n^{-1/2}$. \qed\medskip Reparameterization \ref{reparameterization1} is useful for identification-robust hypothesis testing.\footnote{By inverting identification-robust hypothesis tests, Reparameterization \ref{reparameterization1} is also useful for constructing identification-robust confidence intervals.} When testing a hypothesis on the variance of the factor, $H_0: \sigma^2=\sigma^2_0$, for some hypothesized value, $\sigma^2_0$, all of the nuisance parameters in the reparameterized model are strongly identified. This means that we can use the versions of the identification-robust test statistics in \cite{StockWright2000}, \cite{Kleibergen2005}, and \cite{AndrewsMikusheva2016Functional} that plug in estimators of the nuisance parameters. Otherwise, we would have to project these tests over the nuisance parameters, making them very conservative. 
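To illustrate the plug-in approach, here is a minimal sketch of an S-type test in the spirit of \cite{StockWright2000} for $H_0:\sigma^2=\beta_0$ in the reparameterized model. It is a simplified stand-in for the procedures used in the simulations, not the exact implementation: the strongly identified nuisance vector $\pi$ is concentrated out by numerical minimization, and the minimized objective is compared with a $\chi^2_1$ critical value (six moments minus five estimated nuisance parameters):

```python
import numpy as np
from scipy import optimize, stats

def vech(A):
    """Stack the lower triangle of a symmetric matrix column by column."""
    p = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(p)])

def s_test(W, beta0, alpha=0.05):
    """Plug-in S-type test of H0: sigma^2 = beta0 (one factor, p = 3)."""
    n = W.shape[0]
    h = np.stack([vech(np.outer(w, w)) for w in W])   # n x 6 sample moments
    hbar = h.mean(axis=0)
    Vhat = np.cov(h, rowvar=False)                    # weighting-matrix estimate

    def Q(pi):                                        # reparameterized objective
        rho2, rho3, w1, w2, w3 = pi
        m = np.array([w1, rho2, rho3, w2, rho2 * rho3 / beta0, w3])
        g = hbar - m
        return n * g @ np.linalg.solve(Vhat, g)

    pi_start = hbar[[1, 2, 0, 3, 5]]                  # moment-matching start
    res = optimize.minimize(Q, pi_start, method="Nelder-Mead")
    crit = stats.chi2.ppf(1 - alpha, df=1)
    return res.fun, res.fun > crit                    # statistic, reject H0?
```

Because $\pi$ is strongly identified under Reparameterization \ref{reparameterization1}, the concentrated statistic can plug in the nuisance estimates rather than project over them.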
Another alternative is to use identification-robust tests that are designed to handle weakly identified nuisance parameters, such as \cite{ChaudhuriZivot2011}, \cite{DAndrews2017}, or \cite{IAndrews2018}. These options are compared in the simulations. Section \ref{OtherHypotheses} in the appendix shows how Reparameterization \ref{reparameterization1} is useful for testing other hypotheses in factor models. \section{Two Factors} \label{Section4} We consider a factor model with two factors and five observed variables. Five observed variables is the minimum number needed to identify the factor model parameters. The reparameterization presented in this section is extended to a two-factor model with more observed variables in Section \ref{2FMV} in the appendix. With two factors and five observed variables, there are 14 parameters in the model. The covariance matrix of the factors, $\Sigma$, contains the variances of the factors, $\sigma^2_1$ and $\sigma^2_2$, and the covariance between the factors, $\sigma_{12}$. The covariance matrix of the errors contains five parameters, $\Phi=\text{diag}(\phi_1, \phi_2, \phi_3, \phi_4, \phi_5)$. The factor loading matrix contains six parameters: \begin{equation} \Lambda=\left[\begin{array}{cc}1&0\\0&1\\\lambda_{31}&\lambda_{32}\\\lambda_{41}&\lambda_{42}\\\lambda_{51}&\lambda_{52}\end{array}\right]. \end{equation} Note that we use $\left[\begin{array}{cc}\lambda_{11}&\lambda_{12}\\\lambda_{21}&\lambda_{22}\end{array}\right]=\left[\begin{array}{cc}1&0\\0&1\end{array}\right]$ as the additional restrictions. These restrictions assign the units of the factors to be the same as the units of $W_{1i}$ and $W_{2i}$. They also require $W_{1i}$ to not depend on the second factor and $W_{2i}$ to not depend on the first factor. In this paper, we assume $\Sigma$, the covariance matrix of the factors, is positive definite. We also assume $\lambda_{31}\neq 0$ and $\lambda_{41}\neq 0$.
This requires at least one of the factors to be strong, in the sense that it has at least three nonzero factor loadings. It further requires the researcher to know which factor is strong, as well as which three variables have the nonzero factor loadings. This significantly simplifies the reparameterization, for reasons given below. Identification is determined by equation (\ref{covariance_equation}), which writes the covariance matrix of $W_i$ as a nonlinear function of the parameters. Let $\lambda_j$ denote the $2\times 1$ vector containing the $j$th row of $\Lambda$ for $j\in\{1,...,5\}$. Specifically, $\lambda_1=(1,0)'$ and $\lambda_2=(0,1)'$, because of the additional restrictions, while $\lambda_j=(\lambda_{j1},\lambda_{j2})'$ for $j\in\{3,4,5\}$. We can write out equation (\ref{covariance_equation}) as \begin{equation} \Omega=\left[\begin{array}{ccccc}\sigma_1^2+\phi_1&\cdot&\cdot&\cdot&\cdot\\\sigma_{12}&\sigma_2^2+\phi_2&\cdot&\cdot&\cdot\\\lambda'_1\Sigma\lambda_3&\lambda'_2\Sigma\lambda_3&\lambda'_3\Sigma\lambda_3+\phi_3&\cdot&\cdot\\\lambda'_1\Sigma\lambda_4&\lambda'_2\Sigma\lambda_4&\lambda'_3\Sigma\lambda_4& \lambda'_4\Sigma\lambda_4+\phi_4&\cdot\\\lambda'_1\Sigma\lambda_5&\lambda'_2\Sigma\lambda_5&\lambda'_3\Sigma\lambda_5& \lambda'_4\Sigma\lambda_5&\lambda'_5\Sigma\lambda_5+\phi_5\end{array}\right]. \label{Ex2_covariance_equation} \end{equation} Equation (\ref{Ex2_covariance_equation}) is composed of 15 scalar equations that set a nonlinear function of 14 unknown factor model parameters, $\gamma=(vec(\Lambda)',vech(\Sigma)',diag(\Phi)')'$, equal to 15 identified variances/covariances of $W_i$. The factor model parameters are identified if and only if these equations can be inverted for a unique value of $\gamma$. \cite{AndersonRubin1956} give a necessary and sufficient condition on the factor loadings for $\gamma$ to be identified from (\ref{Ex2_covariance_equation}).
If, for any row deleted from $\Lambda$, the remaining rows can be rearranged into two $2\times 2$ full-rank matrices, then the factor model parameters are identified. Otherwise, they are not. This condition is more complicated than in the one-factor case. Still, it is a necessary and sufficient condition that we can use to characterize weak identification in factor models. We next give the reparameterization that satisfies the weak identification classification in GMM models. We follow the same strategy as Reparameterization \ref{reparameterization1} and define the new parameters to be elements of the covariance matrix of $W_i$. \begin{reparameterization}\label{reparameterization2} Let \begin{alignat*}{3} \omega_1&=Var(W_{1i})&&=\sigma_1^2+\phi_1\\ \omega_2&=Var(W_{2i})&&=\sigma_2^2+\phi_2\\ \omega_3&=Var(W_{3i})&&=\lambda_{31}^2\sigma_1^2+2\lambda_{31}\lambda_{32}\sigma_{12}+\lambda_{32}^2\sigma_2^2+\phi_3\\ \omega_4&=Var(W_{4i})&&=\lambda_{41}^2\sigma_1^2+2\lambda_{41}\lambda_{42}\sigma_{12}+\lambda_{42}^2\sigma_2^2+\phi_4\\ \omega_5&=Var(W_{5i})&&=\lambda_{51}^2\sigma_1^2+2\lambda_{51}\lambda_{52}\sigma_{12}+\lambda_{52}^2\sigma_2^2+\phi_5\\ \rho_{31}&=Cov(W_{1i},W_{3i})&&=\lambda_{31}\sigma_1^2+\lambda_{32}\sigma_{12}\\ \rho_{41}&=Cov(W_{1i},W_{4i})&&=\lambda_{41}\sigma_1^2+\lambda_{42}\sigma_{12}\\ \rho_{51}&=Cov(W_{1i},W_{5i})&&=\lambda_{51}\sigma_1^2+\lambda_{52}\sigma_{12}\\ \rho_{32}&=Cov(W_{2i},W_{3i})&&=\lambda_{31}\sigma_{12}+\lambda_{32}\sigma_2^2\\ \rho_{42}&=Cov(W_{2i},W_{4i})&&=\lambda_{41}\sigma_{12}+\lambda_{42}\sigma_2^2\\ \rho_{52}&=Cov(W_{2i},W_{5i})&&=\lambda_{51}\sigma_{12}+\lambda_{52}\sigma_2^2\\ \chi&=Cov(W_{3i}, W_{4i})&&=\lambda_{31}\lambda_{41}\sigma_1^2+(\lambda_{31}\lambda_{42}+\lambda_{32}\lambda_{41})\sigma_{12}+\lambda_{32}\lambda_{42}\sigma_2^2. 
\end{alignat*} Also let $\beta=\sigma^2_2$, $\omega=(\omega_1,\omega_2,\omega_3,\omega_4,\omega_5)'$, $\rho=(\rho_{31},\rho_{41},\rho_{51},\rho_{32},\rho_{42},\rho_{52})'$, and $\pi=(\rho',\omega',\chi,\sigma_{12})'$. The model in (\ref{model}) with two factors and five observed variables can equivalently be parameterized by $\gamma=(vec(\Lambda)',vech(\Sigma)',diag(\Phi)')'$ or $\theta=(\pi',\beta)'$. \end{reparameterization} \textbf{Remarks.} (1) Reparameterization \ref{reparameterization2} uses the scalar equations in (\ref{Ex2_covariance_equation}) to define the new parameters. The strategy is the same as before: replace an original parameter with a corresponding identified parameter from the covariance matrix of $W_i$. Appendix \ref{Reparam2Details} shows that this reparameterization is well-defined and invertible. 13 of the 14 parameters can be replaced in a way that is invertible. The remaining parameter is $\beta=\sigma^2_2$, the variance of the second factor. $\beta$ is potentially weakly identified depending on the value of $\pi$. (2) As before, in the new parameterization, the identified parameters are the ones in $\pi$, while the weakly identified parameter is $\beta$. Thus, each parameter can be classified as strongly or weakly identified. \qed \medskip There are two remaining covariances that were not used in Reparameterization \ref{reparameterization2}. 
We can write them out using the new parameters: \begin{align} Cov(W_{3i},W_{5i})&=\lambda_{31}\lambda_{51}\sigma_1^2+(\lambda_{31}\lambda_{52}+\lambda_{32}\lambda_{51})\sigma_{12}+\lambda_{32}\lambda_{52}\sigma_2^2\label{tau1}\\ &=\frac{\rho_{32}(\rho_{52}\rho_{41}-\rho_{51}\rho_{42})+\chi(\beta\rho_{51}-\sigma_{12}\rho_{52})}{\beta\rho_{41}-\sigma_{12}\rho_{42}}=:\tau_{35}(\pi,\beta)\nonumber\\ Cov(W_{4i},W_{5i})&=\lambda_{41}\lambda_{51}\sigma_1^2+(\lambda_{41}\lambda_{52}+\lambda_{42}\lambda_{51})\sigma_{12}+\lambda_{42}\lambda_{52}\sigma_2^2\label{tau2}\\ &=\frac{\rho_{42}(\rho_{52}\rho_{31}-\rho_{51}\rho_{32})+\chi(\beta\rho_{51}-\sigma_{12}\rho_{52})}{\beta\rho_{31}-\sigma_{12}\rho_{32}}=:\tau_{45}(\pi,\beta). \nonumber \end{align} Note that the denominators in the above expressions are nonzero because $\beta\rho_{41}-\sigma_{12}\rho_{42}=(\sigma_1^2\sigma_2^2-\sigma_{12}^2)\lambda_{41}\neq 0$ and $\beta\rho_{31}-\sigma_{12}\rho_{32}=(\sigma_1^2\sigma_2^2-\sigma_{12}^2)\lambda_{31}\neq 0$.\footnote{This explains why we need $\lambda_{31}\neq 0$ and $\lambda_{41}\neq 0$. When $\lambda_{31}=0$ or $\lambda_{41}=0$, Reparameterization \ref{reparameterization2} is not well defined. It is unclear if it is possible to find another reparameterization for this case.} The functions $\tau_{35}(\pi,\beta)$ and $\tau_{45}(\pi,\beta)$ defined in (\ref{tau1}) and (\ref{tau2}) are key for analyzing identification of $\beta$. If, for a fixed value of $\pi$, either function can be inverted for the value of $\beta$, then $\beta$ is identified. We give a condition that determines the set of values of $\pi$ for which $\tau_{35}(\pi,\beta)$ or $\tau_{45}(\pi,\beta)$ can be inverted for $\beta$. \begin{proposition}\label{prop1} Let $\pi=(\rho',\omega',\chi,\sigma_{12})'$, where $\rho=(\rho_{31},\rho_{41},\rho_{51},\rho_{32},\rho_{42},\rho_{52})'$. 
$\tau_{35}(\pi,\beta)$ can be inverted for the value of $\beta$ if and only if $(\rho_{52}\rho_{41}-\rho_{51}\rho_{42})(\rho_{32}\rho_{41}-\chi\sigma_{12})\neq 0$. $\tau_{45}(\pi,\beta)$ can be inverted for the value of $\beta$ if and only if $(\rho_{52}\rho_{31}-\rho_{51}\rho_{32})(\rho_{31}\rho_{42}-\chi\sigma_{12})\neq 0$. \end{proposition} \textbf{Remark.} Let $s_1(\pi)=(\rho_{52}\rho_{41}-\rho_{51}\rho_{42})(\rho_{32}\rho_{41}-\chi\sigma_{12})$ and $s_2(\pi)=(\rho_{52}\rho_{31}-\rho_{51}\rho_{32})(\rho_{31}\rho_{42}-\chi\sigma_{12})$. Proposition \ref{prop1} says that $\beta$ is identified if and only if $(s_1(\pi),s_2(\pi))$ $\neq (0,0)$. This characterization of identification is more useful for us than the one in \cite{AndersonRubin1956} because it is stated in terms of strongly identified parameters. It is also possible to test empirically because it amounts to testing a function of strongly identified parameters. \qed\medskip Using Reparameterization \ref{reparameterization2}, we can write the moments as functions of $\theta=(\pi',\beta)'$. The reparameterized moments are \begin{equation} g(\theta,W_i)=vech(W_iW'_i)-vech(\Omega(\theta))= \left(\begin{array}{rcl} W_{1i}^2&-&\omega_1\\ W_{1i}W_{2i}&-&\sigma_{12}\\ W_{1i}W_{3i}&-&\rho_{31}\\ W_{1i}W_{4i}&-&\rho_{41}\\ W_{1i}W_{5i}&-&\rho_{51}\\ W_{2i}^2&-&\omega_2\\ W_{2i}W_{3i}&-&\rho_{32}\\ W_{2i}W_{4i}&-&\rho_{42}\\ W_{2i}W_{5i}&-&\rho_{52}\\ W_{3i}^2&-&\omega_3\\ W_{3i}W_{4i}&-&\chi\\ W_{3i}W_{5i}&-&\tau_{35}(\pi,\beta)\\ W_{4i}^2&-&\omega_4\\ W_{4i}W_{5i}&-&\tau_{45}(\pi,\beta)\\ W_{5i}^2&-&\omega_5 \end{array}\right). \label{Ex2_reparameterized_moments} \end{equation} Notice that $\sigma_{12}$, the covariance between the two factors, is automatically identified without the reparameterization from the covariance between $W_{1i}$ and $W_{2i}$. There are only two nontrivial moments, given by $\tau_{35}(\pi,\beta)$ and $\tau_{45}(\pi,\beta)$. 
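The expressions in (\ref{tau1}) and (\ref{tau2}) can be checked numerically. The sketch below (hypothetical parameter values with $\lambda_{31}\neq0$ and $\lambda_{41}\neq0$, for illustration only) builds $\Omega$ from $(\Lambda,\Sigma,\Phi)$, reads off the reparameterized values, and confirms that $\tau_{35}(\pi,\beta)$ and $\tau_{45}(\pi,\beta)$ reproduce the two remaining covariances; it also evaluates $s_1(\pi)$ and $s_2(\pi)$ from Proposition \ref{prop1}:

```python
import numpy as np

# Hypothetical two-factor parameter values (lambda31 and lambda41 nonzero).
Lambda = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.7, 0.4],
                   [-0.5, 0.9],
                   [0.3, 0.6]])
Sigma = np.array([[1.5, 0.3],
                  [0.3, 0.8]])                       # positive definite
Phi = np.diag([0.9, 1.1, 0.8, 1.0, 0.7])
Omega = Lambda @ Sigma @ Lambda.T + Phi

# Reparameterized values read off Omega, plus beta = sigma_2^2.
s12 = Omega[1, 0]
rho31, rho41, rho51 = Omega[2, 0], Omega[3, 0], Omega[4, 0]
rho32, rho42, rho52 = Omega[2, 1], Omega[3, 1], Omega[4, 1]
chi = Omega[3, 2]
beta = Sigma[1, 1]

tau35 = (rho32 * (rho52 * rho41 - rho51 * rho42)
         + chi * (beta * rho51 - s12 * rho52)) / (beta * rho41 - s12 * rho42)
tau45 = (rho42 * (rho52 * rho31 - rho51 * rho32)
         + chi * (beta * rho51 - s12 * rho52)) / (beta * rho31 - s12 * rho32)
print(np.isclose(tau35, Omega[4, 2]), np.isclose(tau45, Omega[4, 3]))  # True True

# Identification check: beta is identified iff (s1, s2) != (0, 0).
s1 = (rho52 * rho41 - rho51 * rho42) * (rho32 * rho41 - chi * s12)
s2 = (rho52 * rho31 - rho51 * rho32) * (rho31 * rho42 - chi * s12)
print(abs(s1) > 0 or abs(s2) > 0)                                      # True
```

At these parameter values both $s_1(\pi)$ and $s_2(\pi)$ are bounded away from zero, so $\beta$ is strongly identified.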
To define weak identification sequences, we evaluate the derivative of these two moments with respect to $\beta$: \begin{equation}\label{jacobian2} \frac{\partial}{\partial\beta}\left[\begin{array}{c}\tau_{35}(\pi,\beta)\\\tau_{45}(\pi,\beta)\end{array}\right]=\left[\begin{array}{c}s_1(\pi)(\rho_{41}\beta-\sigma_{12}\rho_{42})^{-2}\\s_2(\pi)(\rho_{31}\beta-\sigma_{12}\rho_{32})^{-2}\end{array}\right]. \end{equation} Since $\rho_{41}\beta-\sigma_{12}\rho_{42}=(\sigma_1^2\sigma_2^2-\sigma_{12}^2)\lambda_{41}\neq 0$ and $\rho_{31}\beta-\sigma_{12}\rho_{32}=(\sigma_1^2\sigma_2^2-\sigma_{12}^2)\lambda_{31}\neq 0$, the only way (\ref{jacobian2}) can go to zero is if $s_1(\pi)$ and $s_2(\pi)$ converge to zero. Weak identification arises when these converge at the $n^{-1/2}$ rate. \begin{definition}\label{def2} A sequence of parameters, $\theta_n=(\pi'_n,\beta_n)'$, induces weak identification of the model in (\ref{model}) with two factors and five observed variables if $\sqrt{n}s_1(\pi_n)\rightarrow b_1$ and $\sqrt{n}s_2(\pi_n)\rightarrow b_2$ for some $(b_1,b_2)\in\R^2$. \end{definition} \textbf{Remark.} In Definition \ref{def2}, the key values that determine the strength of identification are $s_1(\pi)$ and $s_2(\pi)$. Both must converge to zero at the $n^{-1/2}$ rate for weak identification. While these functions are somewhat complicated, the important point is that they depend only on strongly identified parameters. This is a consequence of the required classification for weak identification in GMM models. Notice that $s_1(\pi)$ and $s_2(\pi)$ are themselves products of components, each of which can go to zero. There are many possible sequences of parameters $\pi_n$ converging at various rates to a limit $\pi_0$ that induce $n^{-1/2}$ convergence of $s(\pi)$.
For example, $\rho_{52}\rho_{41}-\rho_{51}\rho_{42}$ and $\rho_{41}\rho_{32}-\chi\sigma_{12}$ can each converge to zero at the $n^{-1/4}$ rate, which yields convergence of $s_1(\pi_n)$ at the $n^{-1/2}$ rate. \qed\medskip Reparameterization \ref{reparameterization2} is useful for identification-robust hypothesis testing. When testing a hypothesis on the variance of the second factor, $H_0: \sigma^2_2=\sigma^2_{2,0}$, for some hypothesized value, $\sigma^2_{2,0}$, all of the nuisance parameters in the reparameterized model are strongly identified. This means that we can use the versions of the identification-robust test statistics in \cite{StockWright2000}, \cite{Kleibergen2005}, and \cite{AndrewsMikusheva2016Functional} that plug in estimators of the nuisance parameters. Otherwise, we would have to project these tests over the nuisance parameters, making them very conservative. Another alternative is to use identification-robust tests that are designed to handle weakly identified nuisance parameters, such as \cite{ChaudhuriZivot2011}, \cite{DAndrews2017}, or \cite{IAndrews2018}. These options are compared in the simulations. Section \ref{OtherHypotheses} in the appendix shows how Reparameterization \ref{reparameterization2} is useful for testing other hypotheses in factor models. \section{Simulations} In this section, we use simulations to compare identification-robust hypothesis tests in factor models with one or two factors. \subsection{Identification-Robust Hypothesis Tests} We compare various identification-robust hypothesis tests. We divide the tests into two types: ``original parameterization tests,'' which can be implemented without Reparameterizations \ref{reparameterization1} or \ref{reparameterization2}, and ``reparameterization tests,'' which require Reparameterizations \ref{reparameterization1} or \ref{reparameterization2} to be implemented.
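The rate arithmetic in the remark on weak identification sequences above can be checked numerically. The following sketch (illustrative values; $\sigma_{12}=0$, so the $\chi\sigma_{12}$ term drops out) constructs a drifting sequence $\pi_n$ whose two factors each shrink at the $n^{-1/4}$ rate and verifies that $\sqrt{n}\,s_1(\pi_n)$ stays constant.

```python
import math

def s1(rho32, rho41, rho42, rho51, rho52, chi, sigma12):
    # s1(pi) from Proposition 1.
    return (rho52 * rho41 - rho51 * rho42) * (rho32 * rho41 - chi * sigma12)

# Drifting sequence pi_n (illustrative values, not from the paper):
# rho41 = 1, rho42 = 0 and sigma12 = 0 make each factor of s1 equal to the
# drifting coordinate, so each factor shrinks at the n^{-1/4} rate and
# s1(pi_n) itself at the n^{-1/2} rate.
for n in [100, 10_000, 1_000_000]:
    rho52_n = n ** -0.25  # first factor:  rho52*rho41 - rho51*rho42 = n^{-1/4}
    rho32_n = n ** -0.25  # second factor: rho32*rho41 - chi*sigma12 = n^{-1/4}
    s1_n = s1(rho32_n, 1.0, 0.0, 0.3, rho52_n, 0.5, 0.0)
    print(n, math.sqrt(n) * s1_n)  # -> 1.0 (up to rounding) for every n
```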
The original parameterization tests that we include are the projected \cite{AndersonRubin1949} test (AR-Proj) from \cite{StockWright2000}, the projected K test (K-Proj) from \cite{Kleibergen2005}, the two-step test (CZ) from \cite{ChaudhuriZivot2011}, the conditional linear combination test (CLC) from \cite{IAndrews2018}, and three two-step tests from \cite{DAndrews2017} denoted AR-AR, AR-LM, and AR-QLR. The reparameterization tests that we include are the plug-in \cite{AndersonRubin1949} test (AR-Plug) from \cite{StockWright2000}, the plug-in K test (K-Plug) and CLR test (CLR-Plug) from \cite{Kleibergen2005}, and the plug-in conditional likelihood ratio test (AM-Plug) from \cite{AndrewsMikusheva2016Functional}. \medskip \textbf{Remarks on the Identification-Robust Tests.} (1) The plug-in versions of the full-vector tests from \cite{StockWright2000}, \cite{Kleibergen2005}, and \cite{AndrewsMikusheva2016Functional} are available because, after the reparameterization, the nuisance parameters are strongly identified. (2) \cite{IAndrews2018} focuses on identification-robust confidence sets. To convert this to a hypothesis test, we invert the robust confidence set defined in equation (12) in that paper, with $\gamma=0.05$ and $\alpha=0.05$. This ensures the nominal size of the CLC test is $0.05$ and the CLC test is comparable to the other identification-robust tests. This differs from the recommendation in \cite{IAndrews2018} of reporting two confidence sets, one identification-robust and one not, together with a coverage distortion size. (3) The CZ test is computed with $\tau=0.045$ and $\zeta=0.005$ to ensure the nominal size is $0.05$ and the CZ test is comparable to the other identification-robust tests computed. For the same reason, the tests from \cite{DAndrews2017} are computed with $\alpha_1=0.005$ and $\alpha_2=0.045$. (4) Several other identification-robust tests are omitted. 
This includes the versions of the AR, K, and CLR tests developed for empirical likelihood in \cite{GuggenbergerSmith2005, GuggenbergerSmith2008}, \cite{Otsu2006}, and \cite{GuggenbergerRamalhoSmith2012}. We expect these tests to be similar to the GMM versions of the AR, K, and CLR tests, and thus omit them. \cite{AndrewsMikusheva2016Geometric} propose a geometric test for curved null hypotheses in minimum distance models. Our hypotheses have unbounded curvature, and thus this test reduces to the AR-Proj test. \cite{AndrewsGuggenberger2019} propose identification and singularity-robust tests. Our moments have a nonsingular variance matrix, and thus the tests should asymptotically reduce to the AR and CLR tests. \qed \subsection{One Factor Simulations} In the one-factor model, we take the variance of the factor, $\sigma^2$, to be $1$. We take all the variances of the errors, $\phi_j$, to be $1$ for $j\in\{1,2,3\}$. Let $b\ge 0$. We consider two different specifications of the factor loadings. In the first specification, $\lambda_2=1$ and $\lambda_3=n^{-1/2}b$. In the second specification, $\lambda_2=\lambda_3=n^{-1/4}\sqrt{b}$. Note that the same value of $b$ should lead to the same ``strength'' of identification, measured in terms of $\sqrt{n}\rho_2\rho_3$. With these parameter values, $f_i$ and $\epsilon_i$ are simulated iid with a joint normal distribution. $W_i$ is calculated using (\ref{model}). In the simulations, we take $n=500$ and report results using $1000$ simulation draws. We test the hypothesis $H_0: \sigma^2=1.5$. When $b=0$, the parameter values are observationally equivalent to a vector of parameter values under the null, and thus the simulated rejection probabilities estimate a null rejection probability. When $b\neq 0$, the alternative hypothesis is true, and the simulated rejection probabilities estimate the power function. 
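The one-factor design can be sketched as follows, assuming the model (\ref{model}) takes the form $W_{ji}=\lambda_j f_i+\epsilon_{ji}$ with the loading on the first variable normalized to one (an assumption for illustration). The deterministic check confirms that both loading specifications imply the same identification strength $b$.

```python
import numpy as np

def simulate(n, lam2, lam3, seed=0):
    # One-factor DGP with sigma^2 = phi_1 = phi_2 = phi_3 = 1:
    # W_ji = lambda_j * f_i + eps_ji, lambda_1 = 1 (assumed normalization).
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(n)
    eps = rng.standard_normal((n, 3))
    lam = np.array([1.0, lam2, lam3])
    return f[:, None] * lam + eps  # n x 3 matrix of observables

n, b = 500, 2.0
# Specification 1: (lambda_2, lambda_3) = (1, n^{-1/2} b).
lam2_a, lam3_a = 1.0, b / np.sqrt(n)
# Specification 2: lambda_2 = lambda_3 = n^{-1/4} sqrt(b).
lam2_b = lam3_b = n ** -0.25 * np.sqrt(b)

# With sigma^2 = 1, rho_j = lambda_j, so sqrt(n)*rho2*rho3 = b in both cases.
for lam2, lam3 in [(lam2_a, lam3_a), (lam2_b, lam3_b)]:
    strength = np.sqrt(n) * lam2 * lam3
    W = simulate(n, lam2, lam3)
    print(strength, W.shape)  # -> 2.0 (500, 3) for both specifications
```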
\begin{table}[t] \scalebox{\shrinkageparameter}{ \begin{threeparttable} {\scriptsize \caption{Rejection Probabilities of Nominal 5\% Tests in a One-Factor Model}\label{table-1F-Power} \begin{center} \begin{tabular}{cccccccccccccc} \hline\hline\vspace{-0.2cm}&&&&&&&&&&&&&\\ &\multicolumn{5}{c}{$(\lambda_2,\lambda_3)=(1,n^{-1/2}b)$}&&\multicolumn{5}{c}{$(\lambda_2,\lambda_3)=n^{-1/4}(\sqrt{b},\sqrt{b})$}&&\\ \cline{2-6}\cline{8-12}\vspace{-0.1cm}\\ {Test}&{$b=0$}&{$b=1$}&{$b=2$}&{$b=5$}&{$b=10$}&& {$b=0$}&{$b=1$}&{$b=2$}&{$b=5$}&{$b=10$}&&{Time}\\ \hline\vspace{-0.2cm}\\ \multicolumn{14}{c}{Original Parameterization Tests}\\ \hline\vspace{-0.2cm}\\ AR-Proj&0.1&0.1&0.1&1.1&9.4&&0.1&0.1&0.2&1.7&11.0&&0.01\\ CZ&2.4&3.3&4.1&14.9&50.8&&2.7&3.8&5.6&19.9&55.7&&0.06\\ CLC&4.0&4.4&6.1&18.8&54.6&&4.1&4.8&7.4&23.3&58.9&&0.04\\ AR-AR&2.1&2.6&3.8&13.9&48.9&&2.8&3.2&4.8&18.6&53.6&&0.03\\ AR-LM&0&0&0&0&0.4&&0&0&0&0&1.1&&0.01\\ \hline\vspace{-0.2cm}\\ \multicolumn{14}{c}{Reparameterization Tests}\\ \hline\vspace{-0.2cm}\\ AR-Plug&5.7&6.1&8.0&23.2&62.2&&5.9&6.2&9.7&28.1&66.4&&0.01\\ AM-Plug&11.1&12.7&15.1&27.7&64.0&&10.7&10.1&14.5&29.1&67.0&&150\\ \hline \end{tabular} \begin{tablenotes} \item {\em Note:} The parameters in the data generating process are $\sigma^2=\phi_1=\phi_2=\phi_3=1$ and $n=500$. This implies that $\sqrt{n}\rho_2\rho_3=b$ for all specifications. The entries in the table denote the rejection probabilities for testing $H_0: \sigma^2=1.5$, reported in percentages out of 1000 simulations. The entries in the ``Time'' column report the average time to compute each test in seconds per simulation. \end{tablenotes} \end{center} } \end{threeparttable} } \end{table} Table \ref{table-1F-Power} reports the simulated rejection probabilities for the identification-robust tests in the one-factor model with $b\in\{0,1,2,5,10\}$. 
\medskip \textbf{Remarks on Table \ref{table-1F-Power}.} (1) The CLR-Plug and K-Plug tests are omitted because they reduce to the AR-Plug test when the number of moments is equal to the number of parameters. Similarly, the K-Proj test is omitted because it reduces to the AR-Proj test. (2) The columns with $b=0$ correspond to null rejection probabilities. We see that the AR-Proj and AR-LM tests are extremely conservative, the CZ and AR-AR tests are moderately conservative, and the CLC test is slightly conservative. Conversely, the AR-Plug test has slight over-rejection under the null, and the AM-Plug test has significant finite-sample over-rejection.\footnote{In unreported simulations, we increased the sample size and found that the null rejection probability of the AM-Plug test approaches 5\%. This suggests the over-rejection of the AM-Plug test is a finite-sample result.} (3) The columns with $b>0$ show the power function of the tests. As the strength of identification increases, the power of the tests increases. The AM-Plug and AR-Plug tests have the highest power, followed by the CLC, CZ, and AR-AR tests. This ranking follows the ranking based on null rejection probability. Overall, the AR-Plug and CLC tests provide a good trade-off between size and power. (4) We also note that the AM-Plug test is much more computationally expensive than the other tests. \qed \medskip We also investigate the problem of estimating the number of factors in these specifications. We consider estimators that come from the model selection literature for GMM following \cite{Andrews1999}. With only three observed variables, we compare a zero-factor model and a one-factor model and choose the one that minimizes AIC or BIC. We also report the probability of rejecting the specification of the zero-factor model using a GMM J-test. Table \ref{table-1F-Number} reports these estimates for $b\in\{0,1/3,2/3,1,2\}$. 
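As an illustration of the J-test in this design, the sketch below builds an AR/CUE-type overidentification statistic from the three cross-moments that a zero-factor model forces to zero (the variances are fitted exactly and drop out, leaving three degrees of freedom). This is a hypothetical simplified implementation, not necessarily the one behind the reported numbers; the $\chi^2_3$ critical value $7.815$ is hard-coded.

```python
import numpy as np

def j_stat_zero_factor(W):
    # Moments that are zero under a zero-factor model: the three pairwise
    # cross-products (the variances are fitted exactly and drop out).
    g = np.column_stack([W[:, 0] * W[:, 1], W[:, 0] * W[:, 2], W[:, 1] * W[:, 2]])
    gbar = g.mean(axis=0)
    V = np.cov(g, rowvar=False)       # estimated variance of the moments
    n = W.shape[0]
    return n * gbar @ np.linalg.solve(V, gbar)

rng = np.random.default_rng(0)
n = 500
f = rng.standard_normal(n)
eps = rng.standard_normal((n, 3))

W_null = eps                                           # zero factors
W_alt = f[:, None] * np.array([1.0, 1.0, 1.0]) + eps   # one strong factor

crit = 7.815  # chi-squared(3) critical value at the 5% level
j_null = j_stat_zero_factor(W_null)
j_alt = j_stat_zero_factor(W_alt)
print(j_null, j_alt, crit)  # j_alt rejects decisively; j_null is chi2_3-sized
```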
\medskip \begin{table}[t] \scalebox{\shrinkageparameter}{ \begin{threeparttable} {\scriptsize \caption{Estimates of the Number of Factors in a One-Factor Model}\label{table-1F-Number} \begin{center} \begin{tabular}{cccccccccccc} \hline\hline\vspace{-0.2cm}&&&&&&&&&&&\\ &\multicolumn{5}{c}{$(\lambda_2,\lambda_3)=(1,n^{-1/2}b)$}&&\multicolumn{5}{c}{$(\lambda_2,\lambda_3)=n^{-1/4}(\sqrt{b},\sqrt{b})$}\\ \cline{2-6}\cline{8-12}\vspace{-0.2cm}\\ {Estimator}&{$b=0$}&{$b=1/3$}&{$b=2/3$}&{$b=1$}&{$b=2$}&& {$b=0$}&{$b=1/3$}&{$b=2/3$}&{$b=1$}&{$b=2$}\\ \hline\vspace{-0.2cm}\\ AIC&100&100&100&100&100&&12.3&76.1&95.7&99.4&100\\ BIC&100&100&100&100&100&&0&11.7&45.6&75.2&99.3\\ J-Test&100&100&100&100&100&&5.4&64.0&91.2&98.6&100\\ \hline \end{tabular} \begin{tablenotes} \item {\em Note:} The parameters in the data generating process are $\sigma^2=\phi_1=\phi_2=\phi_3=1$ and $n=500$. This implies that $\sqrt{n}\rho_2\rho_3=b$ for all specifications. The entries for rows AIC and BIC denote the percentage of simulations out of 1000 for which one factor was estimated. One minus the entry gives the percentage of simulations for which zero factors were estimated. The entry for the J-Test row denotes the percentage of simulations out of 1000 for which the GMM J-Test for the specification of a zero-factor model rejects. \end{tablenotes} \end{center} } \end{threeparttable} } \end{table} \textbf{Remarks on Table \ref{table-1F-Number}.} (1) When $\lambda_2=1$, AIC and BIC both estimate one factor always. The J-test for the specification of the zero-factor model also rejects always. This emphasizes the fact that weakly identified factors can still be consistently detected by statistical methods. (2) When both $\lambda_2$ and $\lambda_3$ are close to zero, the AIC and BIC estimates do not always detect the factors. 
In theory, both $\lambda_2$ and $\lambda_3$ need to converge to zero at close to the $n^{-1/2}$ rate in order for BIC to estimate zero factors with a probability bounded away from zero asymptotically. This is apparent in Table \ref{table-1F-Number} from the small values of $b$ for which the AIC and BIC detect the factor and for which the J-test rejects with high probability. \qed \subsection{Two Factor Simulations} \label{two-factor-simulations} In the two-factor model, we take the covariance matrix of the factors to be $\Sigma=I_2$. We take all the variances of the errors, $\phi_j$, to be $1$ for $j\in\{1,2,3,4,5\}$. Let $b_1, b_2\ge 0$. We consider three different specifications of the factor loadings. In the first specification, $\lambda_{31}=\lambda_{41}=\lambda_{52}=1$, $\lambda_{51}=0$, and $(\lambda_{32},\lambda_{42})=n^{-1/2}(b_1,b_2)$. In this specification, the second factor is weak because it only has two nonzero factor loadings in the limit. In the second specification, $\lambda_{31}=\lambda_{41}=\lambda_{51}=\lambda_{52}=1$ and $(\lambda_{32},\lambda_{42})=(1-n^{-1/2}b_2,1-n^{-1/2}b_1)$. In this specification, both factors are strong, but they cannot be separately identified because $\lambda_3$, $\lambda_4$, and $\lambda_5$ are collinear in the limit. In the third specification, $\lambda_{31}=\lambda_{41}=1$, $\lambda_{51}=0$, and $(\lambda_{32},\lambda_{42},\lambda_{52})=n^{-1/4}(\sqrt{b_1},b_1^{-1/2}b_2,\sqrt{b_1})$. (When $b_1=0$, we take $b_1^{-1/2}b_2=0$.) In this specification, the second factor is weak because it only has one nonzero factor loading in the limit. Also note that the convergence rate is slower relative to the first specification. In all specifications, the same value of $(b_1,b_2)$ should lead to the same ``strength'' of identification, measured in terms of $\sqrt{n}(s_1(\pi),s_2(\pi))$. With these parameter values, $f_i$ and $\epsilon_i$ are simulated iid with a joint normal distribution. $W_i$ is calculated using (\ref{model}). 
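The claim that the same $(b_1,b_2)$ leads to the same identification strength across the three specifications can be checked directly. The sketch below is illustrative: it uses $b_2=1$ rather than the $b_2=0$ of the tables, and it uses the mapping $\rho_{j1}=\lambda_{j1}$, $\rho_{j2}=\lambda_{j2}$ implied by $\Sigma=I_2$ and $\sigma_{12}=0$ (so the $\chi\sigma_{12}$ terms in $s_1$ and $s_2$ drop out).

```python
import numpy as np

def s(lam):
    # lam[j] = (lambda_j1, lambda_j2) for j = 3, 4, 5. With Sigma = I_2 and
    # sigma12 = 0, rho_j1 = lambda_j1 and rho_j2 = lambda_j2, and the
    # chi*sigma12 terms vanish, so s1 and s2 reduce to the expressions below.
    (l31, l32), (l41, l42), (l51, l52) = lam
    s1 = (l52 * l41 - l51 * l42) * (l32 * l41)
    s2 = (l52 * l31 - l51 * l32) * (l31 * l42)
    return np.array([s1, s2])

n, b1, b2 = 500, 2.0, 1.0
r = 1 / np.sqrt(n)
# The three loading specifications from the text:
spec1 = [(1.0, b1 * r), (1.0, b2 * r), (0.0, 1.0)]
spec2 = [(1.0, 1.0 - b2 * r), (1.0, 1.0 - b1 * r), (1.0, 1.0)]
spec3 = [(1.0, n ** -0.25 * np.sqrt(b1)),
         (1.0, n ** -0.25 * b2 / np.sqrt(b1)),
         (0.0, n ** -0.25 * np.sqrt(b1))]
for spec in (spec1, spec2, spec3):
    print(np.sqrt(n) * s(spec))  # -> approximately (b1, b2) = (2, 1)
```

Specifications one and three hit $(b_1,b_2)$ exactly; specification two matches up to an $O(n^{-1/2})$ term, consistent with $\sqrt{n}(s_1(\pi),s_2(\pi))=(b_1,b_2)+o(1)$.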
In the simulations, we take $n=500$ and report results using $1000$ simulation draws. We test the hypothesis $H_0: \sigma^2_2=1.5$. When $(b_1,b_2)=(0,0)$, the parameter values are observationally equivalent to a vector of parameter values under the null, and thus the simulated rejection probabilities estimate a null rejection probability. When $(b_1,b_2)\neq (0,0)$, the alternative hypothesis is true, and the simulated rejection probabilities estimate the power function. \begin{table}[p] \begin{center} \rotatebox{270}{ \scalebox{0.9}{ \begin{threeparttable} {\scriptsize \caption{Rejection Probabilities of Nominal 5\% Tests in a Two-Factor Model}\label{table-2F-Power} \begin{center} \begin{tabular}{cccccccccccccccccccc} \hline\hline\vspace{-0.2cm}&&&&&&&&&&&&&&&&&&&\\ &\multicolumn{5}{c}{$\lambda_{32}=n^{-1/2}b_1$, $\lambda_{42}=0$, $\lambda_{51}=0$, $\lambda_{52}=1$}&&\multicolumn{5}{c}{$\lambda_{32}=1$, $\lambda_{42}=1-n^{-1/2}b_1$, $\lambda_{51}=1$, $\lambda_{52}=1$}&&\multicolumn{5}{c}{$(\lambda_{32},\lambda_{52})=n^{-1/4}(\sqrt{b_1},\sqrt{b_1})$, $\lambda_{42}=0$, $\lambda_{51}=0$}&\\ \cline{2-6}\cline{8-12}\cline{14-18}\vspace{-0.1cm}\\ {Test}&{$b_1=0$}&{$b_1=2$}&{$b_1=5$}&{$b_1=10$}&{$b_1=20$}&&{$b_1=0$}&{$b_1=2$}&{$b_1=5$}&{$b_1=10$}&{$b_1=20$}&&{$b_1=0$}&{$b_1=2$}&{$b_1=5$}&{$b_1=10$}&{$b_1=20$}&&{Time}\\ \hline\vspace{-0.2cm}\\ \multicolumn{20}{c}{Original Parameterization Tests}\\ \hline\vspace{-0.2cm}\\ AR-Proj&0&0&0.1&0.4&2.6&&0&0&0&0&0.9&&0&0&0.1&0.4&2.7&&0.05\\ K-Proj&0&0&0.1&0.2&2.2&&0&0&0&0&0.8&&0&0&0.1&0.2&2.5&&0.25\\ CZ&0&0&0.1&3.9&59.4&&0&0&0&0.3&42.9&&0&0.1&1.4&14.7&60.6&&0.16\\ CLC&0&0&0&0.1&28.0&&0&0&0&0&22.4&&0&0&0&1.2&30.4&&0.36\\ AR-AR&0.5&1.0&2.2&11.4&54.2&&0.5&0.7&1.5&5.2&30.6&&0.7&1.1&2.7&14.0&54.8&&0.09\\ AR-LM&0&0&0&0&0.8&&0&0&0&0&0&&0&0&0&0&0.8&&0.06\\ AR-QLR&0&0&0&0.2&2.1&&0&0&0&0&0.7&&0&0&0.1&0.3&2.2&&0.06\\ \hline\vspace{-0.2cm}\\ \multicolumn{20}{c}{Reparameterization Tests}\\ \hline\vspace{-0.2cm}\\ 
AR-Plug&6.0&7.0&14.0&38.7&77.3&&4.9&5.8&8.8&19.5&57.5&&5.9&8.0&16.7&43.6&77.5&&0.04\\ K-Plug&4.1&6.1&15.1&47.3&83.9&&4.7&5.1&8.3&25.6&69.3&&3.7&6.8&21.7&52.4&84.3&&0.05\\ CLR-Plug&5.9&6.5&14.6&47.4&84.1&&5.0&5.9&9.1&25.4&69.5&&5.7&7.3&19.4&53.3&84.2&&0.91\\ \hline \end{tabular} \begin{tablenotes} \item {\em Note:} The parameters in the data generating process are $\sigma^2_1=\sigma^2_2=\phi_1=\phi_2=\phi_3=\phi_4=\phi_5=\lambda_{31}=\lambda_{41}=1$, $\sigma_{12}=0$, $b_2=0$, and $n=500$. This implies that $\sqrt{n}(s_1(\pi),s_2(\pi))=(b_1,b_2)+o(1)$ for all specifications. The entries in the table denote the rejection probabilities for testing $H_0: \sigma^2_2=1.5$, reported in percentages out of 1000 simulations. The entries in the ``Time'' column report the average time to compute each test in seconds per simulation. \end{tablenotes} \end{center} } \end{threeparttable} } } \end{center} \end{table} Table \ref{table-2F-Power} reports the simulated rejection probabilities for the identification-robust tests in the two-factor model with $b_1\in\{0,2,5,10,20\}$ and $b_2=0$. \medskip \textbf{Remarks on Table \ref{table-2F-Power}.} (1) The AM-Plug test is omitted because of computational cost. A projected version of the CLR test is omitted because (a) we expect it to be very conservative like the AR-Proj and K-Proj tests, and (b) it is computationally difficult to project the CLR test because one must recalculate the critical value at each point. (2) The columns with $(b_1,b_2)=(0,0)$ correspond to null rejection probabilities. We see that the reparameterization tests have reasonable rejection probabilities under the null. On the contrary, all the original parameterization tests are extremely conservative. This is expected for the projected versions of the full-vector tests. This is somewhat surprising for the CZ, CLC, AR-AR, AR-LM, and AR-QLR tests, which include modifications to correct for the conservativeness of the projections. 
The CZ, CLC, AR-LM, and AR-QLR tests are based on an efficient LM statistic, which is designed to be efficient under strong identification. In addition, the CZ, AR-AR, AR-LM, and AR-QLR tests restrict the projection to a first-stage confidence set, which should not lead to conservativeness under strong identification. Under weak identification, however, the validity of the tests still relies on projection. The CLC test is based on a linear combination of the AR statistic and the efficient LM statistic, which is shown to be admissible for a full-vector hypothesis in \cite{Andrews2016}. Still, the test must be projected over all the nuisance parameters. While these modifications work well under strong identification, they are unable to overcome the conservativeness under weak identification. (3) The columns with $b_1>0$ show the power function of the tests. As the strength of identification increases, measured using $b_1$, the power of the tests increases. The AR-Plug, K-Plug, and CLR-Plug tests have broadly similar power; the AR-Plug test is slightly less powerful, while the CLR-Plug test takes somewhat more time to compute. Overall, the reparameterization tests are recommended, with the K-Plug test performing particularly well. \qed \medskip We also investigate the problem of estimating the number of factors in these specifications. We consider estimators that come from the model selection literature for GMM following \cite{Andrews1999}. With five observed variables, we compare a one-factor model and a two-factor model and choose the one that minimizes AIC or BIC. We also report the probability of rejecting the specification of the one-factor model using a GMM J-test. Table \ref{table-2F-Number} reports these estimates for $b_1\in\{0,1,1.5,2,5\}$ and $b_2=0$.
\medskip \begin{table}[t] \begin{center} \scalebox{\shrinkageparameter}{ \begin{threeparttable} {\scriptsize \caption{Estimates of the Number of Factors in a Two-Factor Model}\label{table-2F-Number} \begin{center} \begin{tabular}{cccccc} \hline\hline\vspace{-0.2cm}&&&&&\\ &\multicolumn{5}{c}{$(\lambda_{32},\lambda_{52})=n^{-1/4}(\sqrt{b_1},\sqrt{b_1})$, $\lambda_{42}=0$, $\lambda_{51}=0$}\\ \cline{2-6}\vspace{-0.2cm}\\ {Estimator}&{\hspace{4mm}$b_1=0$\hspace{4mm}}&{\hspace{4mm}$b_1=1$\hspace{4mm}}&{\hspace{4mm}$b_1=1.5$\hspace{4mm}}&{\hspace{4mm}$b_1=2$\hspace{4mm}}&{\hspace{4mm}$b_1=5$\hspace{4mm}}\\ \hline\vspace{-0.2cm}\\ AIC&17.4&96.6&99.6&99.9&100\\ BIC&10.9&47.7&72.2&87.3&99.9\\ J-Test&15.1&92.0&98.8&99.9&100\\ \hline \end{tabular} \begin{tablenotes} \item {\em Note:} The parameters in the data generating process are $\sigma^2_1=\sigma^2_2=\phi_1=\phi_2=\phi_3=\phi_4=\phi_5=\lambda_{31}=\lambda_{41}=1$, $\sigma_{12}=\lambda_{51}=0$, $b_2=0$, and $n=500$. This implies that $\sqrt{n}(s_1(\pi),s_2(\pi))=(b_1,b_2)$ for all specifications. The entries for rows AIC and BIC denote the percentage of simulations out of 1000 for which two factors were estimated. One minus the entry gives the percentage of simulations for which one factor was estimated. The entry for the J-Test row denotes the percentage of simulations out of 1000 for which the GMM J-Test for the specification of a one-factor model rejects. \end{tablenotes} \end{center} } \end{threeparttable} } \end{center} \end{table} \textbf{Remarks on Table \ref{table-2F-Number}.} (1) The estimates for the first two specifications are omitted. This is because, in those specifications, AIC and BIC both estimate two factors always. The J-test for the specification of the one-factor model also rejects always. This emphasizes the fact that the second factor can still be consistently detected by statistical methods. 
(2) In the third specification, when $\lambda_{32}$ and $\lambda_{52}$ both converge to zero at the $n^{-1/4}$ rate, the AIC and BIC estimates do not always detect the second factor. In theory, both need to converge to zero at close to the $n^{-1/2}$ rate in order for BIC to estimate one factor with a probability bounded away from zero asymptotically. This is apparent in Table \ref{table-2F-Number} from the small values of $b_1$ for which the AIC and BIC detect the factor and for which the J-test rejects with high probability. \qed \section{Empirical Application} We consider an empirical application to a factor model used by \cite{Attanasio2020AER}. \cite{Attanasio2014} report improved cognition and language development in the treated children following a randomized intervention in Colombia. The dataset includes a variety of variables measuring time spent with the child and material investments in the child. \cite{Attanasio2020AER} specify a factor model for these variables and label the common factor ``parental investments.'' They argue that changes in the parental investment factor between the treatment and control groups explains the effects of the intervention found in \cite{Attanasio2014}. \cite{Attanasio2020AER} consider a factor model for the variables measuring time and material investments. Table C.1 in \cite{Attanasio2020AER} reports a variety of estimates of the number of factors. The estimates range from one to four factors. Ultimately, \cite{Attanasio2020AER} specify a model with one factor for the treatment group and one factor for the control group. 
We consider a factor model for only the variables measuring material investments and find evidence of two factors for each group.\footnote{See Section \ref{empirical_application_details} in the appendix for a list of the variables and details on the factor model specification.} The AIC and BIC both estimate two factors, and the J-tests for the specification of a one-factor model reject with p-values less than $10^{-8}$ for both groups. We estimate the variances of the factors in two-factor models for the treatment and control groups allowing for weak identification. Let $\sigma^2_{c,1}$ and $\sigma^2_{c,2}$ denote the variances of the factors for the control group, and let $\sigma^2_{t,1}$ and $\sigma^2_{t,2}$ denote the variances of the factors for the treatment group. Table \ref{EmpiricalResults2} reports point estimates and confidence intervals (CIs) for the variances of the factors. Table \ref{EmpiricalResults2} includes a standard CI calculated using a t-statistic, an original-parameterization identification-robust CI formed by inverting the AR-AR test, and a reparameterization identification-robust CI formed by inverting the CLR-Plug test. The AR-AR test is chosen to represent the original-parameterization identification-robust CIs because it is the least conservative in the simulations with two factors in Section \ref{two-factor-simulations}. \begin{table}[t] \begin{center} \scalebox{\shrinkageparameter}{ \begin{threeparttable} {\scriptsize \caption{Parental Investment Factors}\label{EmpiricalResults2} \begin{tabular}{lccccc} \hline\hline\vspace{-0.2cm}\\ &\multicolumn{2}{c}{Control}&&\multicolumn{2}{c}{Treatment}\\ \cline{2-3}\cline{5-6}\vspace{-0.2cm}\\ &{$\sigma^2_{c,1}$}&{$\sigma^2_{c,2}$}&&{$\sigma^2_{t,1}$}&{$\sigma^2_{t,2}$}\\ \hline\vspace{-0.2cm}\\ Point Estimate&0.99&0.28&&1.01&0.08\\ Standard CI&[0.85,1.14]&[0.17,0.39]&&[0.84,1.18]&[0.04,0.12]\\ Orig.-Param. Id.-Robust CI&[0.79,1.71]&[0.09,0.41]&&[0.93,10]&[0,0.12]\\ Reparam. 
Id.-Robust CI&[0.90,1.36]&[0.18,0.34]&&[0.98,10]&[0.02,0.08]\\ \hline \end{tabular} {\scriptsize \begin{tablenotes} \item {\em Note:} The point estimate minimizes the GMM objective function. The standard CI is calculated using a t-statistic. The original parameterization identification-robust CI is calculated by inverting the AR-AR test. The reparameterization identification-robust CI is calculated by inverting the CLR-Plug test. \end{tablenotes} } } \end{threeparttable} } \end{center} \end{table} \textbf{Remarks.} (1) The standard CI is only valid under strong identification. The identification-robust CIs are valid under both strong and weak identification. Here, some of the identification-robust CIs are substantially wider than the standard CI, indicating that weak identification is relevant. Still, the identification-robust confidence intervals are not necessarily wider than the standard CI. They are not nested. (2) The original-parameterization identification-robust CIs contain the reparameterization identification-robust CIs. This is consistent with the simulation results, which showed that the CLR-Plug test is less conservative and more powerful than the AR-AR test. (3) The identification-robust CIs can be exceptionally long, especially the CI for $\sigma^2_{t,1}$. For $\sigma^2_{t,1}$, the upper bound of both identification-robust CIs is 10. This upper bound is an arbitrary truncation of the parameter values considered. If larger parameter values were considered, the upper bound would probably be larger and possibly unbounded. This is a common feature of identification-robust CIs under weak identification. \cite{Cox_weak_id_w_bounds} shows how to use additional inequalities to make identification-robust CIs more informative in this case. In factor models, useful inequalities come from nonnegativity of the variances of the idiosyncratic errors. 
\cite{Cox_weak_id_w_bounds} shows that using these inequalities leads to identification-robust CIs that are reasonable in length. \qed \section{Conclusion} This paper describes how to reparameterize low-dimensional factor models with one or two factors to fit the weak identification theory developed for GMM models. The reparameterizations are useful for identification-robust hypothesis testing. Simulations and an empirical application show the benefit of using the reparameterizations in order to use identification-robust hypothesis tests that require strongly identified nuisance parameters.
\section{Introduction and plan of the paper} The objective of this paper is to study potential examples of {\it twisted holography}, in the sense of \cite{Mezei:2017kmw,Costello:2017fbo,Costello:2018zrm,Ishtiaque:2018str}. All our examples will take the form of some collection of protected SCFT correlation functions encoded in a topological quantum-mechanical system \cite{Gaiotto:2010be,Dimofte:2011py,Beem:2016cbd,Dedushenko:2016jxl,Dedushenko:2017avn}. We conjecture them to be holographically dual to twisted M-theory \cite{Costello:2016nkh,Gaiotto:2019wcc} on appropriate backgrounds. In all of the examples, we will identify hidden structures in the SCFT correlation functions which support the conjecture. We will leave detailed calculations on the twisted M-theory side to future work. Here we reserve the term ``twisted M-theory'' for the five-dimensional holomorphic-topological theory which describes topologically twisted M-theory on an $\Omega$-deformed $\bC_{\epsilon_1} \times \bC_{\epsilon_2} \times \bC_{\epsilon_3}$ background \cite{Costello:2016nkh}. This theory has a triality symmetry \cite{Gaiotto:2019wcc} which permutes the $\Omega$ deformation parameters $\epsilon_i$. We will find an analogous triality symmetry emerging in a very non-trivial way in the protected SCFT correlation functions. Our main example is the protected sphere correlation functions for the three-dimensional ${\cal N}=8$ ``M2 brane'' SCFT, i.e. the SCFT which appears at low energy on a stack of $N$ $M2$ branes in flat space. We study the correlation functions in the UV description of the SCFT provided by the ${\cal N}=4$ ADHM gauge theory, i.e. the D2-D6 worldvolume SQFT. Because this theory is self-mirror, we can compute the correlation functions either as ``Higgs branch'' correlation functions or as ``Coulomb branch'' correlation functions.
Adopting some tricks from the study of the sphere partition function \cite{Marino:2011eh}, we define a grand canonical version of the correlation functions and take a careful large $N$ limit. We conjecture a decomposition of the protected correlation functions into ``perturbative'' and ``non-perturbative'' pieces. The perturbative piece manifests a hidden triality invariance, broken by the non-perturbative piece. We conjecture a concise, purely algebraic characterization of the perturbative piece. The perturbative piece of the protected correlation functions has the correct structure to be holographically dual to a perturbative twisted M-theory background, which should be a deformation of $S^1 \times \bC \times \bC$. We conduct extensive numerical and algebraic tests of the conjectures. We also consider some other examples: \begin{itemize} \item The Higgs branch sphere correlation functions for the three-dimensional ${\cal N}=4$ SCFT associated to M2 branes at an $A_1$ singularity. We push the analysis as far as for the previous case. The conjectural dual background is a perturbative deformation of $S^1 \times \frac{\bC\times \bC}{\mathbb{Z}_2}$. \item The Higgs branch sphere correlation functions for the three-dimensional ${\cal N}=4$ SCFT associated to M2 branes at an $A_n$ singularity. We only do a partial analysis. The conjectural dual background is a perturbative deformation of $S^1 \times \frac{\bC\times \bC}{\mathbb{Z}_{n+1}}$. \item The line defect junction Schur indices for the four-dimensional ${\cal N}=4$ SYM with $U(N)$ gauge group. These are the natural 4d analogues of Coulomb branch correlation functions, except that they involve BPS line defects wrapping a compact circle in the geometry. We define a somewhat peculiar grand canonical version of the correlation functions.
Concrete examples of grand canonical correlation functions manifest exact triality invariance up to an overall normalization and some analytic subtleties, without any need of a perturbative expansion. The conjectural dual background is a perturbative deformation of $S^1 \times \bC^* \times \bC^*$. \end{itemize} \section{Protected correlation functions for M2 branes} The low energy super-conformal field theory residing on the world-volume of $N$ M2 branes is of considerable theoretical interest. At large $N$, it gives the best understood example of a holographic duality which is {\it not} based on a 't Hooft expansion, as the expected gravitational dual is given by M-theory on an $AdS_4 \times S^7$ background \cite{Maldacena:1997re}. There is a particularly interesting collection of protected correlation functions of local operators on a three-sphere which are exactly computable via localization \cite{Dedushenko:2016jxl,Dedushenko:2017avn}. These correlation functions played a crucial role in a recent, strikingly successful conformal bootstrap analysis \cite{Agmon:2019imm}. Furthermore, it has been proposed \cite{Mezei:2017kmw} that the correlation functions should be holographically dual to an analogous protected sector of M-theory on $AdS_4 \times S^7$, giving a notable example of ``twisted holography''. The protected sphere correlation functions are computed by a topological $U(N)$-gauged matrix quantum mechanics, with a schematic action \begin{equation} \frac{1}{\epsilon_1} \int_{S^1} \Tr \left[\epsilon_2 A_t + X D_t Y + J D_t I \right] dt \end{equation} for adjoint fields $X,Y$ and (anti)fundamental fields $I,J$. The correlation functions are computed with anti-periodic boundary conditions for the fields on the circle. Intuitively, the quantum mechanics describes the supersymmetric motion of the M2 branes along four of the eight transverse directions.
The basic observables \begin{equation} O_{l m} = \mathrm{STr}\, X^{l+m} Y^{l-m} \end{equation} deform the algebra of holomorphic functions of the transverse positions of the $N$ M2 branes in $\bC\times \bC$. The main claim of \cite{Mezei:2017kmw} is that the topological quantum mechanics should have a two-dimensional holographic dual description encoding the corresponding protected sector of M-theory on $AdS_4 \times S^7$. The two-dimensional theory was presented as a 2d gauge theory with an infinite-dimensional gauge symmetry, which is essentially the algebra of complex symplectomorphisms of $\bC\times \bC$. The effective action of the ``gravitational'' 2d gauge theory was not determined a priori, but should be derived order-by-order by comparison with the topological QM. In principle, comparison with supergravity localization could then give information about the low energy effective action of M-theory. We would like to sharpen the proposal by identifying the holographic dual as a five-dimensional holomorphic (symplectic)-topological theory defined on an $AdS_2 \times S^3$-like background which arises from the localization of the full M-theory background. The natural five-dimensional candidate is the $\Omega$-deformed twisted M-theory defined in \cite{Costello:2016nkh}. This theory is uniquely renormalizable in an appropriate sense, with no adjustable parameters in the effective action beyond the $\Omega$ deformation parameters. The three $\Omega$ deformed factors of the $\bC_{\epsilon_1} \times \bC_{\epsilon_2} \times \bC_{\epsilon_3}$ transverse geometry should correspond to the normal directions to $AdS_2 \times S^3$ in $AdS_4 \times S^7$, in this order. In particular, the triality symmetry permuting these factors should hold perturbatively, but may be broken by instanton corrections which explore the full transverse geometry.
At the local level, this twisted holographic duality is already demonstrated in \cite{Costello:2017fbo}: the OPE of local operators in the topological quantum mechanics can be reproduced by perturbative calculations in twisted M-theory on an $\bR \times \bC \times \bC$ background. The M2 brane backreaction can be treated perturbatively, as the $N$ dependence of OPE coefficients is polynomial. The emergent triality invariance of the OPE was demonstrated in \cite{Gaiotto:2019wcc}. \footnote{The local holographic duality can be justified by a simple argument, completely analogous to the argument given in \cite{Costello:2018zrm} for D3 brane twisted holography. The argument involves the topological twist and $\Omega$ deformation of the conventional Maldacena argument \cite{Maldacena:1997re} for holography, where the world-volume theory of a stack of D-branes in flat space becomes dual to the near-horizon limit of the back-reacted geometry. We can start from $N$ M2 branes in flat space and apply the deformation. The bulk M-theory becomes Costello's 5d holomorphic-topological theory defined on $\bR \times \bC^2$. The M2 brane world-volume theory becomes precisely the auxiliary 1d quantum mechanical system discussed above. The topological quantum mechanics is coupled in a unique gauge-invariant way to the bulk twisted M-theory \cite{Costello:2016nkh}. We naturally deduce that the 1d quantum mechanics should be dual to Costello's theory on whatever 5d background is produced by the M2 branes back-reaction. } Our objective is to study the full correlation functions of the system, where the topological direction is compactified to a circle. This introduces two new phenomena: \begin{itemize} \item The $N$-dependence of correlation functions is much richer and definitely not polynomial. A careful analysis is required to disentangle the systematic large $N$ expansion of the correlation functions. 
\item The topological quantum mechanics is not an {\it absolute} theory: there is a non-trivial space of solutions of the OPE Ward identities, analogous to the space of conformal blocks of a two-dimensional chiral algebra. The protected sphere correlation functions of the physical theory give a very specific element of this space of solutions. Other solutions may or may not have a useful physical meaning. \end{itemize} One of the most exciting aspects of twisted holography is the possibility of studying, in an exactly solvable model, aspects of quantum gravity such as sums over semiclassical saddles with different geometries. The precise holographic interplay between the space of possible solutions of Ward identities and the sum over geometries is not currently understood. At the very least, it should select geometries of the twisted gravity theory which can be extended to geometries for the underlying physical gravity theory. As a preparation for a full holographic analysis, we will accomplish two objectives on the field theory side: \begin{itemize} \item We will disentangle the $N$ dependence of correlation functions and identify a ``perturbative part'' which may match perturbative holographic calculations around a dominant semiclassical saddle. The perturbative part enjoys the emergent triality symmetry which is expected from the twisted M-theory side. \item We will characterize the full space of solutions of the OPE Ward identities and identify a set of conjectural quadratic constraints which uniquely characterize the perturbative part of the physical correlation functions in a purely algebraic way. We will test the conjecture both numerically and analytically. \end{itemize} We expect the quadratic constraints, somewhat analogous to the ``string equation'' in topological gravity, to play an important role in a direct proof of the twisted holography correspondence. \subsection{The BPS algebra} The M2 brane SCFT has a variety of different gauge theory UV descriptions.
We focus on the description which arises from the worldvolume theory of $N$ D2 branes in the presence of a single D6 brane, i.e. a ${\cal N}=4$ $U(N)$ gauge theory coupled to an adjoint hypermultiplet and a single fundamental hypermultiplet. \footnote{The localization analysis of protected sphere correlation functions is not currently available in other descriptions, such as the ABJM theory.} The protected sphere correlation functions of the M2 brane theory can be identified with the protected Higgs branch correlation functions of the ${\cal N}=4$ theory \cite{Dedushenko:2016jxl}. Alternatively, they can be identified with the protected Coulomb branch correlation functions of the same theory \cite{Dedushenko:2017avn}. The two descriptions are isomorphic, but the isomorphism is very non-trivial. The Higgs branch presentation only involves polynomials in the elementary fields and preserves the most symmetry. The Coulomb branch presentation involves disorder (monopole) operators, but reveals a hidden commutative subalgebra with useful properties. We refer to \cite{Costello:2017fbo,Gaiotto:2019wcc} for a detailed discussion and only review here the results we need for calculations. \subsubsection{Higgs branch presentation} The ``quantum'' Higgs branch algebra $\aA_{N}$ is defined as a quantum Hamiltonian reduction \cite{Yagi:2014toa,Bullimore:2015lsa}. The operators in the algebra are gauge-invariant polynomials in adjoint elementary fields $(X,Y)$ and (anti)fundamental fields $(I,J)$. The elementary fields have non-trivial commutation relations \begin{equation} [X^a_b, Y^c_d]=\epsilon_1 \delta^a_d\delta^c_b \qquad \qquad [J^b, I_a]=\epsilon_1 \delta^b_a \end{equation} and one quotients by the ideal generated by the F-term relation \begin{equation} X^a_c Y^c_b - X^c_b Y^a_c + I_b J^a = \epsilon_2 \delta^a_b \end{equation} i.e. the relation can be assumed to hold when placed at the very left (or right) of an operator.
The algebra $\aA_{N}$ has an $SU(2)$ global symmetry rotating $(X,Y)$ as a doublet. This is an inner automorphism of the algebra, generated by \begin{equation} \frac{1}{\epsilon_1} \Tr X^2 \qquad \qquad \frac{1}{\epsilon_1}\Tr Y^2\qquad \qquad \frac{1}{2\epsilon_1} \Tr (XY + YX) \end{equation} With the help of the commutation relations and F-term relation, every gauge-invariant operator can be simplified to a polynomial in the elementary symmetrized traces \begin{equation} O_{lm} = \mathrm{STr} X^{l+m} Y^{l-m} \end{equation} This claim is not immediately obvious. One can define a collection of moves which, applied recursively, will lead to the desired result: \begin{enumerate} \item We can apply commutation relations until the operator ordering agrees with the ordering of gauge contractions, so that we have a polynomial in expressions of the form $\mathrm{Tr} P(X,Y)$ or $I P(X,Y) J$ where $P(X,Y)$ is some sequence of $X$ and $Y$ fields. Each commutation produces extra terms with fewer symbols, to be simplified recursively. \item We can use the F-term relation to reorder the $X$ and $Y$ fields in each sequence, so that we have a polynomial in expressions of the form $\mathrm{Tr} S(X,Y)$ or $I S(X,Y) J$ where $S(X,Y)$ is a symmetrized sequence of $X$ and $Y$ fields. Each application of the F-term relations produces extra terms with fewer $X$ and $Y$ symbols, to be simplified recursively. \item We can use the F-term relation to map $I S(X,Y) J$ to a polynomial in $\mathrm{Tr} S'(X,Y)$ operators with the same number of or fewer $X$ and $Y$ symbols, to be simplified recursively. \end{enumerate} The operators $O_{l,-l}, \cdots O_{l,l}$ form an irreducible representation of the $SU(2)$ global symmetry rotating $(X,Y)$ as a doublet. If we only use the above transformations to reduce a gauge-invariant operator, such as a commutator $[O_{lm}, O_{l'm'}]$, to polynomials in symmetrized traces, the rank $N$ only enters the calculation as the value of $O_{0,0} = \mathrm{Tr} \,1$.
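The index structure of these commutation relations, and the way the rank enters only through $O_{0,0} = \Tr 1 = N$, can be made concrete in a small explicit realization. The following sketch (our illustration, not part of the paper, with representation conventions chosen by us) realizes $X^a_b$ as multiplication by commuting variables and $Y^c_d$ as $-\epsilon_1\, \partial/\partial X^d_c$, which reproduces $[X^a_b, Y^c_d] = \epsilon_1 \delta^a_d \delta^c_b$ on polynomials:

```python
# Illustrative check (ours, not from the paper): realize X^a_b as
# multiplication by commuting variables and Y^c_d as -eps1 * d/dX^d_c,
# which reproduces [X^a_b, Y^c_d] = eps1 delta^a_d delta^c_b.
import sympy as sp

N = 3
eps1 = sp.Rational(1, 2)  # arbitrary nonzero value of epsilon_1
Xv = sp.Matrix(N, N, lambda a, b: sp.Symbol(f'x_{a}{b}'))

def Xop(a, b):
    return lambda f: sp.expand(Xv[a, b] * f)

def Yop(c, d):
    return lambda f: sp.expand(-eps1 * sp.diff(f, Xv[d, c]))

def TrX(f):   # multiplication by Tr X
    return sp.expand(Xv.trace() * f)

def TrY(f):   # -eps1 * sum_a d/dX^a_a
    return sp.expand(-eps1 * sum(sp.diff(f, Xv[a, a]) for a in range(N)))

def TrX2(f):  # multiplication by Tr X^2
    return sp.expand((Xv * Xv).trace() * f)

def comm(A, B, f):
    return sp.expand(A(B(f)) - B(A(f)))

# a generic test polynomial in the matrix entries
f = (Xv[0, 1] + Xv[2, 2])**2 + Xv[1, 0] * Xv[0, 0] + 1
```

One can then verify directly that $[\Tr X, \Tr Y] = \epsilon_1 N$, i.e. the rank appears exactly as the value of $O_{0,0}$, and that $[\Tr X^2, \Tr Y] = 2\epsilon_1 \Tr X$, as expected from the $SU(2)$ generators listed above.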
Following \cite{Costello:2017fbo}, we define a universal algebra $\aA$ with generators $O_{lm}$ and commutation relations \begin{equation} [O_{lm}, O_{l'm'}] = \cdots \end{equation} computed by a recursive application of the rules above, with $N$ left arbitrary. For any specific value of $N$, the $O_{lm}$ generators will satisfy further polynomial constraints due to the trace relations. For example, for $N=1$ one has $O_{lm} O_{l'm'} = O_{l+l',m+m'}$. These constraints can be thought of as an algebra morphism $\aA \to \aA_{N}$. The universal algebra $\aA$ will play an important role in our large $N$ analysis. In the following, we will find it useful to consider a slightly different normalization and labelling of the basic generators: \begin{equation} t_{m,n} = \frac{1}{\epsilon_1} \mathrm{STr} X^{m} Y^{n} \end{equation} We can also package together operators belonging to the same irreducible $SU(2)$ representation into a standard generating function: \begin{equation} t_n(u) = \sum_{m=0}^n {n \choose m} u_1^m u_2^{n-m} t_{n,m} \end{equation} In this normalization, the commutation relations are a non-linear deformation \begin{equation} [ t_{a,b}, t_{c,d} ] = (ad-bc)t_{a+c-1,b+d-1} + O(\epsilon_i) \end{equation} of the commutation relations of the Lie algebra $\mathfrak{s}$ of complex Hamiltonian symplectomorphisms of $\bC^2$. This is the gauge algebra employed in \cite{Mezei:2017kmw} and identified there as area-preserving diffeomorphisms of the two-sphere. The presentation of $\aA$ as a deformation of $U(\mathfrak{s})$ will be useful throughout the paper. \subsubsection{A concise presentation} Notably, the commutation relations defining $\aA$ can all be derived recursively from a simple generating set \footnote{This set is actually a bit redundant.
For example, the last relation is in the $SU(2)$ orbit of \begin{equation} [ t_{3,0}, t_{0,d} ] = 3 d\, t_{2,d-1} + \sigma_2 \frac{d (d - 1) (d - 2)}{4} t_{0,d-3}+ \frac32 \sigma_3 \sum_{m=0}^{d-3} (m + 1)(d - m - 2) t_{0,m} t_{0,d-3-m} \end{equation}}: \begin{align} [ t_{0,0}, t_{c,d} ] &= 0 \cr [ t_{1,0}, t_{c,d} ] &= d \,t_{c,d-1}\cr [ t_{0,1}, t_{c,d} ] &= -c\, t_{c-1,d}\cr [ t_{2,0}, t_{c,d} ] &= 2 d\, t_{c+1,d-1} \cr [ t_{1,1}, t_{c,d} ] &= (d-c) \,t_{c,d} \cr [ t_{0,2}, t_{c,d} ] &= - 2 c \,t_{c-1,d+1} \cr [ t_{3,0}, t_{c,d} ] &= 3 d\, t_{c+2,d-1} + \sigma_2 \frac{d (d - 1) (d - 2)}{4} t_{c,d-3}+ \cr &+ \frac32 \sigma_3 \sum_{m=0}^{d-3} \sum_{n=0}^c \frac{{m + n + 1 \choose n + 1} (n + 1) {d - m + c - n - 2 \choose c - n + 1} (c - n + 1)}{{d + c\choose c}} t_{n,m} t_{c-n,d-3-m} \end{align} Here we employed some convenient combinations of the $\epsilon_i$ parameters: \begin{equation} \sigma_2 \equiv \epsilon_1^2 + \epsilon_1\epsilon_2 + \epsilon_2^2 \qquad \qquad \sigma_3 = -\epsilon_1 \epsilon_2 (\epsilon_1 + \epsilon_2) \end{equation} The commutation relations preserve the scaling symmetry which assigns weight $1$ to $\epsilon_i$ and $\frac{n+m}{2}-1$ to $t_{n,m}$. In $SU(2)_R$ invariant notation, with $(u,v) = u_1 v_2 - u_2 v_1$, the generating relations become \begin{align} [ t_{0}, t_{c}(v) ] &= 0 \cr [ t_{1}(u), t_{c}(v) ] &= c (u,v)t_{c-1}(v) \cr [ t_{2}(u), t_{c}(v) ] &= 2 (u,v) u \cdot \partial_v t_c(v) \cr [ t_{3}(u), t_{c}(v) ] &= \frac{3}{c+1} (u,v) (u \cdot \partial_v)^2 t_{c+1}(v) + \sigma_2 \frac{c (c -1) (c - 2)}{4} (u,v)^3 t_{c-3}(v) +\cr &+\frac32 \sigma_3 (u,v)^3 \sum_{m=0}^{c-3} (m+1)(c-m-2) t_m(v) t_{c-m-3}(v) \end{align} Notice that $t_{2,0}$, $t_{1,1}$, $t_{0,2}$ are the generators of infinitesimal $SU(2)_R$ rotations. Commutators with $t_{1,0} = \epsilon_1^{-1} \Tr X$ and $t_{0,1} = \epsilon_1^{-1} \Tr Y$ are also very easy to compute. 
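These generating relations can be probed directly in the simplest truncation. In the following sketch (ours, with representation conventions chosen by us, not fixed by the paper) we take $N=1$, realize $X$ as multiplication by $x$ and $Y = -\epsilon_1 \partial_x$ (so that $[X,Y]=\epsilon_1$), and implement $\mathrm{STr}$ as Weyl (symmetric) ordering, with $t_{0,0} = 1/\epsilon_1$. The linear and quadratic relations then hold verbatim; in the cubic relation with $(c,d)=(0,3)$ the $\sigma_2$ and $\sigma_3$ terms collapse to the constant $\frac{3\sigma_2}{2}t_{0,0} + \frac{3\sigma_3}{2}t_{0,0}^2 = \frac{3\epsilon_1}{2}$, as they must, since this representation cannot depend on $\epsilon_2$:

```python
# Sketch (ours): check the generating relations in the N = 1 truncation,
# with X = multiplication by x, Y = -eps * d/dx (so [X, Y] = eps),
# and STr realized as Weyl (symmetric) ordering; then t_{0,0} = 1/eps.
from itertools import permutations
import sympy as sp

x = sp.symbols('x')
eps = sp.Rational(1, 3)  # arbitrary nonzero value of epsilon_1

def t(c, d):
    """t_{c,d} = STr X^c Y^d / eps as an operator on polynomials in x."""
    words = set(permutations('X' * c + 'Y' * d))
    def op(f):
        total = 0
        for word in words:
            g = f
            for letter in reversed(word):  # rightmost factor acts first
                g = x * g if letter == 'X' else -eps * sp.diff(g, x)
            total += sp.expand(g)
        return sp.expand(total / (len(words) * eps))
    return op

def comm(A, B, f):
    return sp.expand(A(B(f)) - B(A(f)))
```

Operator identities are checked by acting on a basis of monomials $x^k$; since both sides are differential operators of bounded order, agreement on enough monomials is conclusive.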
The only laborious calculation is the reorganization of $[ t_{3,0}, t_{0,n} ]$ into a polynomial in the $t$'s. Once that is done, we can reconstruct $[ t_{3,0}, t_{c,d} ]$ by taking commutators with $t_{2,0}$. We refer the reader to the Appendices of \cite{Oh:2020hph} for an example of the algebraic manipulations which can be employed to derive the above commutation relations. A further $SU(2)_R$ rotation gives us $[t_{2,1}, t_{c,d}]$ and in particular $t_{n+1,0} = n^{-1} [t_{n,0},t_{2,1}]$, which can be used to recursively compute $[t_{n,0},t_{c,d}]$ and then all other commutators. This presentation of the algebra makes manifest an important hidden property: triality invariance. Define $\epsilon_3 = - \epsilon_1 - \epsilon_2$. Then \begin{equation} \sigma_2 = \frac12 \sum_{i=1}^3 \epsilon_i^2 \qquad \qquad \sigma_3 = \prod_{i=1}^3 \epsilon_i \end{equation} and $\aA$ is manifestly invariant under permutations of the $\epsilon_i$! These are identified with the $\Omega$-deformation parameters of the dual twisted M-theory background. Triality is broken at finite $N$ by the value of $t_{0,0}= \frac{N}{\epsilon_1}$. \footnote{The algebra $\aA$ is conjecturally equipped with a three-parameter family of truncations $\aA_{N_1, N_2, N_3}$ which specialize the central generator $t_{0,0}$ as \begin{equation} t_{0,0} = \frac{N_1}{\epsilon_1}+\frac{N_2}{\epsilon_2}+\frac{N_3}{\epsilon_3} \end{equation} and should describe protected local operators at the intersection of three mutually orthogonal stacks of M2 branes. } This makes it obvious that triality can at best be a property of correlation functions in some large $N$ limit. \subsubsection{The Coulomb branch presentation} The ``quantum'' Coulomb branch algebra of a three-dimensional ${\cal N}=4$ gauge theory has a more intricate practical definition \cite{Braverman:2016wma,Bullimore:2015lsa}, mostly due to the fact that it involves monopole operators.
It always includes a commutative subalgebra defined by gauge-invariant polynomials in a single adjoint vectormultiplet field. Denote as $\aC_N$ the quantum Coulomb branch of a ${\cal N}=4$ $U(N)$ gauge theory coupled to an adjoint hypermultiplet and a single fundamental hypermultiplet, with $\epsilon_1$ being the quantization parameter and $\epsilon_2$ the ``quantum mass parameter'' for the adjoint hypermultiplet. As this gauge theory is self-mirror, $\aC_N$ must be isomorphic to $\aA_N$ and provide an alternative presentation of the M2 brane protected algebra. The isomorphism, though, is far from trivial. The quantum Coulomb branch algebras $\aC_N$ can be identified with certain spherical Cherednik algebras \cite{Kodera:2016faj}, which can also be identified with $\aA_N$. The $\aC_N$ algebras have a uniform-in-$N$ description as truncations of a shifted $\mathfrak{gl}(1)$ affine Yangian algebra $\aC$ \cite{Kodera:2016faj,2019arXiv190307734W} \footnote{More precisely, it can be given as a subalgebra of the affine Yangian reviewed in \cite{2014arXiv1404.5240T}, with $e^{\mathrm{here}}_n=e^{\mathrm{there}}_n$, $h^{\mathrm{here}}_n=\psi^{\mathrm{there}}_{n+1}$, $f^{\mathrm{here}}_n=f^{\mathrm{there}}_{n+1}$ and $\psi^{\mathrm{there}}_0=0$.} equipped with algebra morphisms $\aC \to \aC_N$. The algebra $\aC$ is triality-invariant and conjecturally isomorphic to $\aA$ \cite{Gaiotto:2019wcc}. Conjecturally, we can build the isomorphism as follows. Define recursively \begin{equation} e_n = - \frac12 [ t_{2,2},e_{n-1}] \qquad \qquad f_n = \frac12 [ t_{2,2},f_{n-1}] \end{equation} starting from $e_0 = t_{0,1}$ and $f_0 = t_{1,0}$. Then one observes that \begin{equation} [e_n, f_m] = h_{n+m} \end{equation} and the $h_{n}$ commute with each other and with $t_{1,1}$.
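These statements can be probed in the simplest truncation. The sketch below (ours, with the same kind of conventions as before: $N=1$, $X = x\,\cdot$, $Y = -\epsilon_1\partial_x$, and $\mathrm{STr}$ realized as Weyl ordering) builds the $e_n$ and $f_n$ recursively and checks on monomials that $[e_n, f_m]$ depends only on $n+m$, and that the resulting $h_n$ commute with each other and with $t_{1,1}$:

```python
# Sketch (ours): probe the e_n, f_n, h_{n+m} structure in the N = 1
# truncation, with X = multiplication by x, Y = -eps * d/dx and STr
# implemented as Weyl (symmetric) ordering.
from itertools import permutations
import sympy as sp

x = sp.symbols('x')
eps = sp.Rational(1, 3)  # arbitrary nonzero epsilon_1

def t(c, d):
    """t_{c,d} = STr X^c Y^d / eps as an operator on polynomials in x."""
    words = set(permutations('X' * c + 'Y' * d))
    def op(f):
        total = 0
        for word in words:
            g = f
            for letter in reversed(word):  # rightmost factor acts first
                g = x * g if letter == 'X' else -eps * sp.diff(g, x)
            total += sp.expand(g)
        return sp.expand(total / (len(words) * eps))
    return op

def comm(A, B):
    return lambda f: sp.expand(A(B(f)) - B(A(f)))

def scale(c, A):
    return lambda f: sp.expand(c * A(f))

t22 = t(2, 2)
e = [t(0, 1)]   # e_0 = t_{0,1}
f_ = [t(1, 0)]  # f_0 = t_{1,0}
for n in range(1, 3):
    e.append(scale(-sp.Rational(1, 2), comm(t22, e[n - 1])))
    f_.append(scale(sp.Rational(1, 2), comm(t22, f_[n - 1])))

def equal_ops(A, B, kmax=6):
    return all(sp.simplify(A(x**k) - B(x**k)) == 0 for k in range(kmax))
```

In this realization one finds, for instance, $h_1 = [e_0,f_1] = [e_1,f_0]$, with all the $h_n$ given by polynomials in the Euler operator $x\partial_x$, which makes their commutativity manifest.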
Furthermore, $t_{2,2}$ is a polynomial in the $h_n$ and we can find other polynomials $d_n$ such that \begin{equation} [d_n, e_m] = - n e_{n+m-1} \qquad \qquad [d_n, f_m] = n f_{n+m-1} \end{equation} These relations, the explicit relation between $d_n$ and $h_m$'s and several more Serre and quadratic relations define the affine Yangian. In the following, we only need to know that the commuting generators $d_n$ exist, are given in the Coulomb branch presentation of the affine Yangian as traces of specific polynomials in the adjoint vectormultiplet field, and match specific polynomials in the $t_{a,b}$ generators. \subsection{Correlation functions as twisted traces} Protected sphere correlation functions for any $N$ behave as correlation functions of a topological 1d system. We can compute correlation functions of any ordered sequence of operators, with a twisted cyclicity relation \begin{equation} \langle O_1 \cdots O_k t_{n,m} \rangle^{(N)} = (-1)^{n+m} \langle t_{n,m} O_1 \cdots O_k \rangle^{(N)} \end{equation} In other words, the collection of all protected sphere correlation functions gives a {\it twisted trace} on the quantized Higgs branch algebra $\aA_N$. We can immediately promote the correlation functions to a twisted trace for the universal algebra $\aA$, without any loss of information. Any operator in $\aA$ which vanishes in $\aA_N$ will vanish when inserted in the correlation functions pulled back from $\aA_N$. The twisted cyclicity relations are the basic OPE Ward identities satisfied by the correlation functions. They are rather constraining.
For example, they determine correlation functions involving ``odd'' operators in terms of correlation functions involving even operators only, because if $n+m$ is odd \begin{equation} \langle O_1 \cdots O_k t_{n,m} \rangle = \frac12 \langle [O_1 \cdots O_k, t_{n,m}] \rangle \end{equation} It is easy to see that the odd twisted trace relations do not put any constraint on correlation functions containing even operators only. Even operators, instead, give symmetries of the correlation functions: if $n+m$ is even, we have \begin{equation} \langle [O_1 \cdots O_k, t_{n,m}] \rangle = 0 \end{equation} We have found ample evidence for the following conjecture: the twisted trace relations allow one to express any correlation function as a linear combination of the ``extremal'' correlators \begin{equation} \langle t_{2,0}^{\sum_i n_i} \prod_i t_{0,2n_i} \rangle \end{equation} and do not impose any further relations on the extremal correlators. Thus the values of extremal correlators parameterize the space of solutions of the OPE Ward identities. The reduction to extremal correlators proceeds recursively by the transformation \begin{equation} \langle \cdots t_{n,m} \cdots \rangle \to \langle \cdots t_{n,m} \cdots \rangle - c \langle [t_{n-1,m+1}, \cdots t_{2,0} \cdots ]\rangle \end{equation} where $c$ is a number selected to set to $1$ the coefficient of $\langle \cdots t_{n,m} \cdots \rangle$ in the commutator and $n+m>2$ or $n=m=1$. The transformation never requires $c$ to depend on $\sigma_i$. In particular, the reduction to the basis of extremal correlators appears to be a property of twisted traces on $U(\mathfrak{s})$ which is inherited by $\aA$. For each $N$, the actual protected correlation functions will produce a specific solution of the twisted trace relations. As the space of solutions is linear, any linear combination of correlation functions $\langle O_1 \cdots O_k \rangle^{(N)}$ for different values of $N$ will define a twisted trace for $\aA$.
In the following, we will often employ a very special {\it Grand Canonical} linear combination: \begin{equation} \langle O_1 \cdots O_k \rangle_\mu = \sum_{N=0}^\infty e^{\frac{2 \pi \mu N}{\epsilon_1}} \langle O_1 \cdots O_k \rangle^{(N)} \end{equation} which satisfies \begin{equation} \partial_\mu \langle O_1 \cdots O_k \rangle_\mu = 2 \pi \langle O_1 \cdots O_k t_{0,0} \rangle_\mu \end{equation} \subsubsection{Higgs branch localization} In the Higgs branch presentation, protected sphere correlation functions are computed by $N$-dimensional integrals over the eigenvalues of complexified holonomies: \begin{equation} \langle O_1 \cdots O_k \rangle^{(N)} = \frac{1}{N!} \left[\prod_{i=1}^N \int_{-\infty}^\infty d\sigma_i e^{2 \pi i \zeta \sigma_i}\right] \left[\prod_{i<j} 4 \sinh^2 \pi(\sigma_i - \sigma_j)\right] \langle O_1 \cdots O_k \rangle^{(N)}_{\mathrm{hyper}} \end{equation} where the ``free hypermultiplet'' correlation functions \begin{equation} \langle O_1 \cdots O_k t_{n,m} \rangle^{(N)}_{\mathrm{hyper}}(\sigma_i) \end{equation} are computed by Wick contractions from Green functions \begin{equation} \langle X^a_b Y^c_d \rangle = \epsilon_1 \delta^a_d \delta^c_b \frac{1}{1+e^{2 \pi (\sigma_a- \sigma_c) }} \qquad \qquad \langle Y^c_d X^a_b \rangle = -\epsilon_1 \delta^a_d \delta^c_b \frac{1}{1+e^{2 \pi (\sigma_c- \sigma_a) }} \end{equation} and partition function \begin{equation} \langle 1 \rangle^{(N)}_{\mathrm{hyper}} = \prod_i \frac{1}{2 \cosh \pi \sigma_i} \prod_{i,j} \frac{1}{2 \cosh \pi(\sigma_i - \sigma_j)} \end{equation} The FI parameter $\zeta$ is given by \begin{equation} \zeta = i \left(\frac12 + \frac{\epsilon_2}{\epsilon_1} \right) \end{equation} The integral remains convergent as long as \begin{equation} -1<\mathrm{Re} \frac{\epsilon_2}{\epsilon_1} <0 \end{equation} The finite $N$ partition function can be analytically continued outside of the strip, with poles along the real axis which become denser as $N$ increases. 
\footnote{The localization integral could also be modified by a mass $m$ for the adjoint hypermultiplet. This is equivalent, though, to the insertion of $e^{2 \pi m t_{1,1}}$ in correlation functions and does not add new information. The role of mass and FI parameters is exchanged in the mirror symmetric picture we employ later on in the Coulomb branch description of the algebra. } The integral is straightforward but combinatorially daunting as a function of $N$. The systematic large $N$ expansion is poorly understood, but is expected to involve powers of $N^{-\frac12}$. Several calculations at the leading order in $N$ were done in \cite{Mezei:2017kmw}. \subsubsection{Grand canonical ensemble and free Fermi gas} The large $N$ analysis is somewhat simpler in a grand-canonical ensemble, where one adds up correlation functions with different values of $N$: \begin{equation} \langle O_1 \cdots O_k \rangle_\mu = \sum_{N=0}^\infty e^{\frac{2 \pi \mu N}{\epsilon_1}} \langle O_1 \cdots O_k \rangle^{(N)} \end{equation} Then the large $N$ limit is probed at large positive values of $\frac{\mu}{\epsilon_1}$. 
The reason for the simplification is the Cauchy determinant identity, which allows one to combine the integration measure and the adjoint hypermultiplet partition function into a single determinant \cite{Kapustin:2010xq}: \begin{equation} \frac{\prod_{i<j} 4 \sinh^2 \pi(\sigma_i - \sigma_j)}{\prod_{i,j} 2 \cosh \pi(\sigma_i - \sigma_j)} = \sum_{s\in S_N} (-1)^s \prod_i \frac{1}{2 \cosh \pi (\sigma_i - \sigma_{s(i)})} \end{equation} and thus the correlation function as \begin{equation} \langle O_1 \cdots O_k \rangle^{(N)} = \frac{1}{N!} \sum_{s\in S_N} (-1)^s \left[\prod_{i=1}^N \int_{-\infty}^\infty \frac{ d\sigma_i e^{2 \pi i \zeta \sigma_i}}{4 \cosh \pi \sigma_i \cosh \pi (\sigma_i - \sigma_{s(i)})}\right] \frac{\langle O_1 \cdots O_k \rangle^{(N)}_{\mathrm{hyper}}}{\langle 1 \rangle^{(N)}_{\mathrm{hyper}}} \end{equation} The grand canonical partition function is then written as the partition function of a free Fermi gas \cite{Marino:2011eh} \begin{equation} Z(\mu) \equiv \langle 1 \rangle_\mu = \det \left[1+ e^{\frac{2 \pi \mu}{\epsilon_1}} \hat \rho \right] \end{equation} with single-particle density operator $\hat \rho$ given by an integral operator with kernel \begin{equation} \rho(\sigma, \sigma') = \frac{e^{2 \pi i \zeta \sigma}}{4 \cosh \pi \sigma \cosh \pi (\sigma - \sigma')} \end{equation} The large $\frac{\mu}{\epsilon_1}$ limit of the partition function is well understood. We will review it momentarily. More general correlation functions also have a Fermi gas interpretation. Indeed, the grand canonical sum of expectation values of the form \begin{equation} \frac{1}{N!} \sum_{s\in S_N} (-1)^s \left[\prod_{i=1}^N \int_{-\infty}^\infty \frac{ d\sigma_i e^{2 \pi i \zeta \sigma_i}}{4 \cosh \pi \sigma_i \cosh \pi (\sigma_i - \sigma_{s(i)})}\right] \sum_{i_1<i_2 \cdots<i_n} f_n(\sigma_{i_a}) \end{equation} can be written as the free Fermi gas expectation value of an operator acting on $n$ particles by multiplication by $f_n(\sigma_{i_a})$. 
It is easy to see that a general correlation function with $n$ $X$ fields will insert in the integral a function of up to $n$ variables. \subsubsection{Large $\mu$ limit} The partition function has a very nice behaviour for large positive $\frac{\mu}{\epsilon_1}$ \cite{Nosaka:2015iiw}: \begin{equation} Z(\mu) \equiv \langle 1 \rangle_\mu \sim Z_0(\epsilon_i) e^{\frac{4 \pi}{3 \sigma_3} \mu^3 + \frac{\pi \sigma_2}{4 \sigma_3} \mu} \end{equation} up to exponentially suppressed corrections. In particular, the perturbative expansion of the free energy truncates to a cubic polynomial in $\mu$, with no $\mu^{-1}$ corrections. A striking feature of this perturbative expression is the triality invariance of the coefficients of $\mu^3$ and $\mu$. The whole partition function is definitely not triality invariant. Indeed, the original integral is invariant only under the trivial Weyl symmetry $\epsilon_2 \leftrightarrow \epsilon_3$. It is also worth noticing that the prefactor $\frac{1}{\sigma_3}$ is the ``equivariant volume'' of the internal $\bC_{\epsilon_1} \times \bC_{\epsilon_2} \times \bC_{\epsilon_3}$ factor of the conjectural dual twisted M-theory background, and appears naturally as an overall prefactor in the twisted M-theory action. The parameter $\sigma_3$ thus plays a loop-counting role in twisted M-theory, and the perturbative expressions we find below are compatible with that interpretation. The leading coefficient $Z_0(\epsilon_i)$ has a conjectural expression \begin{equation} \log Z_0(\epsilon_i) = \frac12 A(1) + \frac14 A\left(- 2 \frac{\epsilon_2}{\epsilon_1} \right)+ \frac14 A\left(- 2 \frac{\epsilon_3}{\epsilon_1} \right) \end{equation} with \begin{equation} A(z) = \frac{2 \zeta_3}{z \pi^2}\left(1-\frac{z^3}{16}\right)+ \frac{z^2}{\pi^2} \int_0^\infty \frac{x dx}{e^{z x}-1}\log (1-e^{- 2 x}) \qquad \qquad \mathrm{Re}\,z>0 \end{equation} The range of definition of $A(z)$ covers the physical strip $-1<\mathrm{Re} \frac{\epsilon_2}{\epsilon_1} <0$. 
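The integral representation of $A(z)$ is straightforward to evaluate numerically. The sketch below (ours) splits the integral at $x=1$ to tame the integrable logarithmic singularity at the origin; as a consistency check, it verifies the small-$z$ behaviour $A(z) \sim 2\zeta(3)/(\pi^2 z)$, which follows from $x/(e^{zx}-1)\to 1/z$ in the integrand:

```python
# Numerical sketch (ours) of the conjectural function A(z) from its
# integral representation, valid for Re z > 0.
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def A(z):
    # log(1 - e^{-2x}) written via expm1 for numerical stability
    integrand = lambda x: x / np.expm1(z * x) * np.log(-np.expm1(-2.0 * x))
    i1, _ = quad(integrand, 0.0, 1.0, limit=200)
    i2, _ = quad(integrand, 1.0, np.inf, limit=200)
    return (2 * zeta(3) / (z * np.pi**2) * (1 - z**3 / 16)
            + z**2 / np.pi**2 * (i1 + i2))
```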
The function $A(z)$ is rather singular as $z$ approaches the imaginary axis, so it is not clear that $Z_0(\epsilon_i)$ can be analytically continued beyond the physical strip. Within the physical strip, it is not triality invariant. In the following, we will typically strip $Z_0(\epsilon_i)$ off perturbative expressions by rescaling the correlation functions. Our main conjectural claim is that the grand-canonical correlation functions also have a truncated perturbative expansion at large positive $\frac{\mu}{\epsilon_1}$, i.e. the ratio \begin{equation} Z(\mu)^{-1} \langle t_{m_1,n_1} \cdots t_{m_a,n_a} \rangle_\mu \end{equation} approaches a polynomial in $\mu$ up to exponentially suppressed corrections. We can thus define a ``perturbative part'' of correlation functions: \begin{equation} \langle t_{m_1,n_1} \cdots t_{m_a,n_a} \rangle^{\mathrm{pert}}_\mu \equiv e^{\frac{4 \pi}{3 \sigma_3} \mu^3 + \frac{\pi \sigma_2}{4 \sigma_3} \mu} \left[ Z(\mu)^{-1} \langle t_{m_1,n_1} \cdots t_{m_a,n_a} \rangle_\mu \right]_{\mathrm{pert}} \end{equation} Furthermore, because we have no inverse powers of $\mu$ in the expansion, we can simply set $\mu=0$ and encode the full $\mu$ dependence into $t_{0,0}$ insertions. Experimentally, we find that the perturbative correlation functions $\langle t_{m_1,n_1} \cdots t_{m_a,n_a} \rangle^{\mathrm{pert}}_0$ are triality invariant. They are Laurent polynomials in $\sigma_3$ and polynomials in $\sigma_2$, of appropriate weight under the rescaling of $\epsilon_i$. They are natural candidates to match holographic calculations in some semiclassical saddle for twisted M-theory. In the remainder of this section, we will find a simple conjectural characterization of perturbative correlation functions. \subsubsection{Coulomb branch localization} The Coulomb branch presentation of the $\aA$ algebra allows for an alternative localization calculation of the correlation functions. 
The calculation of general correlation functions is rather cumbersome, as it requires explicit ``Abelianized'' expressions for the monopole operators. Correlation functions of the commutative $d_n$ generators, though, are much simpler. We can write \begin{equation} \langle d_{n_1} \cdots d_{n_k} \rangle^{(N)} = \frac{1}{N!} \sum_{s\in S_N} (-1)^s \left[\prod_{i=1}^N \int_{-\infty}^\infty \frac{ d\sigma_i}{4 \cosh \pi \sigma_i \cosh \pi (\sigma_i - \sigma_{s(i)}+ \zeta)}\right] \prod_j \left[ \sum_i p_{n_j}(\sigma_i)\right] \end{equation} Here the $p_n(\sigma)$ polynomials are given by a generating series \begin{equation} \partial_z^2 \log \Gamma\left(\frac12 - i \sigma + z \right) = \sum_n \frac{p_n(\sigma)}{z^{n+1}} \end{equation} Because the $d_n$ generators commute, we can define a generating function \begin{equation} Z_d(\tau_i) = \langle e^{\sum_n \tau_n d_n} \rangle^{\mathrm{pert}}_0 \equiv e^{F_d(\tau_i)} \end{equation} where $F_d(\tau_i)$ is the generating function of connected correlation functions $\langle d_{n_1} \cdots d_{n_k}\rangle^{\mathrm{pert}}_c$. With some numerical experimentation, we find a simple conjectural pattern: \begin{equation} \langle d_{n_1} \cdots d_{n_k}\rangle^{\mathrm{pert}}_c = \sum_m c_{n_*;m} \sigma_2^m \sigma_3^{-\frac23 m + \frac13 \sum_i (n_i-1)} \end{equation} where the only non-vanishing terms have a power of $\sigma_3$ greater than or equal to $-1$, as expected for a loop-counting parameter.
For example, we have \begin{align} \langle d_0 d_0 d_0\rangle^{\mathrm{pert}}_c &= \frac{1}{6 \pi^2 \sigma_3} \cr \langle d_0 \rangle^{\mathrm{pert}}_c &= \frac{\pi \sigma_2}{4 \sigma_3} \cr \langle d_2 d_0 d_0 d_0 d_0 \rangle^{\mathrm{pert}}_c &= -\frac{2}{\pi^4 \sigma_3} \cr \langle d_2 d_0 d_0 \rangle^{\mathrm{pert}}_c &= -\frac{5 \sigma_2}{12 \pi^2 \sigma_3} \cr \langle d_2 d_0 \rangle^{\mathrm{pert}}_c &= -\frac{1}{3 \pi^2} \cr \langle d_2 \rangle^{\mathrm{pert}}_c &= -\frac{3 \sigma_2^2}{64 \sigma_3} \cr \langle d_1 d_1 d_0 d_0 d_0 \rangle^{\mathrm{pert}}_c &= -\frac{2}{\pi^4 \sigma_3} \cr \langle d_1 d_1 d_0 \rangle^{\mathrm{pert}}_c &= -\frac{\sigma_2}{3 \pi^2 \sigma_3} \cr \langle d_1 d_1 \rangle^{\mathrm{pert}}_c &= -\frac{1}{12 \pi^2}-\frac{1}{64} \end{align} etcetera. \subsubsection{A recursion relation} Inspection of the numerical data reveals a very simple recursion relation satisfied by $d_0$ insertions: \begin{equation} \label{eq:rec} \pi^2 \partial_{\tau_0}^2 F_d(\tau_i) + \left(\sum_n n \tau_n \partial_{\tau_{n-1}} \right)^2 F_d(\tau_i) = \sum \lambda_n \tau_n \end{equation} where $\lambda_n$ are functions of $\sigma_2$ and $\sigma_3$ only. This gives a quadratic relation on the perturbative correlation functions. Experimentally, we find that this recursion relation combines with the twisted trace relations to uniquely fix all correlation functions! In order to understand the origin of this recursion relation, it is useful to consider the Fermi gas representation of the free energy \begin{equation} F_d(\tau_i) = 2 \pi \Tr \log \left(1+ \hat \rho^{-1}_C [\tau_i] \right) \end{equation} where the Coulomb branch density operator is represented by the kernel \begin{equation} \rho_C(\sigma, \sigma';\tau_i) = \frac{e^{\sum_{n=0}^\infty \tau_n p_n (\sigma)} }{4 \cosh \pi \sigma \cosh \pi (\sigma - \sigma'+\zeta)} \end{equation} In the following we will set $\epsilon_1$ to $1$ for simplicity. 
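The explicit correlators above can be checked against the conjectured pattern: in every case the power of $\sigma_3$ should equal $-\tfrac23 m + \tfrac13 \sum_i (n_i - 1)$, and should never drop below $-1$. A quick consistency sketch (ours), with the exponents read off from the list above:

```python
from fractions import Fraction

# Each entry: (labels n_i of the inserted d_{n_i},
#              power m of sigma_2, observed power of sigma_3).
data = [
    ((0, 0, 0),       0, -1),   # <d0 d0 d0>_c = 1/(6 pi^2 sigma_3)
    ((0,),            1, -1),   # <d0>_c       = pi sigma_2/(4 sigma_3)
    ((2, 0, 0, 0, 0), 0, -1),
    ((2, 0, 0),       1, -1),
    ((2, 0),          0,  0),
    ((2,),            2, -1),
    ((1, 1, 0, 0, 0), 0, -1),
    ((1, 1, 0),       1, -1),
    ((1, 1),          0,  0),   # both terms of <d1 d1>_c carry sigma_3^0
]
mismatches = [(ns, m, obs) for ns, m, obs in data
              if Fraction(-2 * m, 3) + Fraction(sum(n - 1 for n in ns), 3) != obs]
```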
It can be restored by a trivial rescaling of $\sigma$ and $\tau_i$. We also assume large positive $\tau_0 \equiv 2 \pi \mu$. It is useful to observe that $\rho_C(\sigma, \sigma';\tau_i)$ has limited range, and if $|\sigma| \gg 1$ it is well approximated by \begin{equation} \rho_C(\sigma, \sigma';\tau_i) \sim \frac{e^{\pm \pi \sigma + \sum_{n=0}^\infty \tau_n p_n (\sigma)} }{2 \cosh \pi (\sigma - \sigma'+\zeta)} \end{equation} up to exponential corrections. The differential operator $i \sum_n n \tau_n \partial_{\tau_{n-1}}$ acts as a translation of the argument of the $p_n(\sigma)$ polynomials. That means the combinations \begin{equation} \pi \tau_0 \pm i \sum_n n \tau_n \partial_{\tau_{n-1}} \end{equation} act as a uniform translation on $\rho_C(\sigma, \sigma';\tau_i)$ in the regions $\pm \sigma \gg 1$. This suggests that the differential operator in the recursion relation \ref{eq:rec} annihilates the perturbative contribution to the free energy from the regions $\pi |\sigma| > \tau_0 - c$ where $c$ is some appropriate cutoff. It would be nice to complete this argument and show the origin of the linear source on the right hand side of \ref{eq:rec}. We can give here some explicit examples of conjectural perturbative correlators. 
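The translation interpretation rests on the identity $\partial_\sigma p_n = i\, n\, p_{n-1}$: since the generating series is $\psi'(z + \tfrac12 - i\sigma)$, differentiating in $\sigma$ acts as $-i\,\partial_z$, which on the $z^{-n-1}$ expansion produces exactly $i\, n\, p_{n-1}$. A self-contained sympy sketch (ours), regenerating the $p_n$ from the standard trigamma asymptotics:

```python
import sympy as sp

# Verify d/dsigma p_n(sigma) = i n p_{n-1}(sigma) for the first few n.
s, w = sp.symbols('sigma w')
x = 1 / w + sp.Rational(1, 2) - sp.I * s
K = 7
trigamma = 1/x + 1/(2*x**2) + sum(sp.bernoulli(2*k) * x**(-2*k - 1)
                                  for k in range(1, K))
ser = sp.expand(sp.series(trigamma, w, 0, K + 1).removeO())
p = [sp.expand(ser.coeff(w, n + 1)) for n in range(K - 1)]
ok = all(sp.simplify(sp.diff(p[n], s) - sp.I * n * p[n - 1]) == 0
         for n in range(1, K - 1))
```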
We have two-point functions \begin{align} \langle t_{1}(u) t_{1}(v) \rangle^{\mathrm{pert}}_\mu &=\left[ \frac{1}{\sigma_3}\mu^2+ \frac{\sigma_2}{16 \sigma_3} \right] (u,v)\cr \langle t_{2}(u) t_{2}(v) \rangle^{\mathrm{pert}}_\mu &=\left[ \frac{16}{3 \pi \sigma_3}\mu^3+ \frac{4 \sigma_2}{3 \pi \sigma_3} \mu + \frac{1}{6 \pi^2} + \frac{1}{32} \right] (u,v)^2\cr \langle t_{3}(u) t_{3}(v) \rangle^{\mathrm{pert}}_\mu &=\left[ \frac{3}{\sigma_3}\mu^4+ \frac{15 \sigma_2}{8 \sigma_3} \mu^2 + \frac{3}{2 \pi} \mu + \frac{27}{256} \frac{\sigma_2^2}{\sigma_3}\right] (u,v)^3 \end{align} and three-point functions \begin{align} \langle t_{1}(u) t_{1}(v) t_{2}(w) \rangle^{\mathrm{pert}}_\mu &=\left[ \frac{1}{\sigma_3}\mu^2+ \frac{\sigma_2}{16 \sigma_3} \right] (u,w)(v,w)\cr \langle t_{1}(u) t_{2}(v) t_{3}(w) \rangle^{\mathrm{pert}}_\mu &=\left[ \frac{8}{\pi \sigma_3}\mu^3+ \frac{2 \sigma_2}{\pi \sigma_3} \mu + \frac{1}{4 \pi^2} + \frac{3}{64} \right] (u,w)(v,w)^2\cr \langle t_{2}(u) t_{2}(v) t_{2}(w) \rangle^{\mathrm{pert}}_\mu&=\left[ \frac{32}{3\pi \sigma_3}\mu^3+ \frac{8 \sigma_2}{3\pi \sigma_3} \mu + \frac{1}{3 \pi^2} + \frac{1}{16} \right] (u,v)(u,w)(v,w) \end{align} We attach to the paper submission a Mathematica notebook which can compute general correlation functions. \section{M2 branes at an $A_1$ singularity} The 3d ${\cal N}=4$ SQFT which flows to the world-volume theory of $N$ M2 branes at an $A_1$ singularity has two mirror descriptions. The well-known UV description as a stack of $N$ D2 branes in the presence of 2 D6 branes gives an ADHM quiver with two flavours. A mirror description of the latter is a two-node necklace quiver with $U(N)$ gauge groups and a single flavour at the first node \cite{Porrati:1996xi, deBoer:1996mp}. We are interested in the Higgs branch protected correlators of the latter theory, or the Coulomb branch of the former. 
The corresponding algebra $\aA_N^{(2)}$ is conjecturally associated to twisted M-theory backgrounds where the $\bC \times \bC$ factor is replaced by the $A_1$ singularity or its deformation/resolution. \footnote{The opposite choices are also interesting, but are associated to a more intricate version of twisted M-theory, where the $A_1$ singularity lies in the $\Omega$ deformed directions. We will not study it here. } The 3d ${\cal N}=4$ SCFT has an $SU(2)$ flavour symmetry inherited by the algebra $\aA_N^{(2)}$. It is the geometric isometry group of the $A_1$ singularity. In the Higgs branch description, it acts on the pair of bifundamental hypermultiplets. It is hidden in the Coulomb branch description, much as in the case of $\aA_N$. If we denote the doublet of bifundamental hypermultiplets as $X_\alpha$, $Y_\alpha$ and the fundamentals as $I,J$, then the F-term relations take the schematic form \begin{align} \epsilon^{\alpha \beta} X_\alpha Y_\beta + J I &= z_1 1_{N \times N} \cr \epsilon^{\alpha \beta} Y_\alpha X_\beta &= z_2 1_{N \times N} \end{align} In a manner similar to the case of $\aA_N$, we can reduce all operators to polynomials in the $SU(2)$ irreps of spin $k$: \begin{equation} \epsilon_1^{-1} \Tr X_{(\alpha_1} Y_{\alpha_2} \cdots X_{\alpha_{2k-1}} Y_{\alpha_{2k})} \end{equation} The $\Tr X_{(\alpha_1} Y_{\alpha_2)}$ are the $SU(2)$ generators. We will label the elementary operators $t^{(2)}_{a,b}$ by the $SU(2)$ quantum numbers as for $\aA_N$, so that $\ell = \frac{a+b}{2}$ and $m = \frac{a-b}{2}$. Notice that $a-b$ is now always even. The Coulomb branch description $\aC^{(2)}_N$ of the algebra makes it easier to see its triality properties. Indeed, the Abelianized monopole operators have expressions which are simply identical to those of $\aC_N$, except that some operators are missing. 
More precisely, the elementary monopole operators in $\aC^{(2)}_N$ can be built within $\aC_N$ from the first few $d_n$ generators, together with $e_0$ and $f_1 + z f_0$. Conjecturally, these elements in $\aC$ generate the correct universal $\aC^{(2)}$. It is easy to check that $e_0$, $f_1 + z f_0$, $d_1 + \frac{z}{2} d_0$ generate a $\mathfrak{su}(2)$ Lie algebra, which we identify with the global $SU(2)$ symmetry of $\aA_N^{(2)}$. Similarly, we embed \begin{equation} t^{(2)}_{2n,0} = t_{n,0} \end{equation} and act with $t^{(2)}_{0,2}= f_1 + z f_0$ to build a full conjectural embedding of $\aA_N^{(2)}$ into $\aA_N$ and lift it to an embedding/definition of $\aA^{(2)}$ into $\aA$. We can use that embedding to derive a concise conjectural presentation for the commutators defining $\aA^{(2)}$: \begin{align} [ t^{(2)}_{0,0}, t^{(2)}_{c,d} ] &= 0 \cr [ t^{(2)}_{2,0}, t^{(2)}_{c,d} ] &= d\, t^{(2)}_{c+1,d-1} \cr [ t^{(2)}_{1,1}, t^{(2)}_{c,d} ] &= \frac12 (d-c) \,t^{(2)}_{c,d} \cr [ t^{(2)}_{0,2}, t^{(2)}_{c,d} ] &= - c \,t^{(2)}_{c-1,d+1} \cr [ t^{(2)}_{2 d,0}, t^{(2)}_{0,4} ] &= 4 d\, t^{(2)}_{2d-1,3} + \frac{2d(d-1)}{2d+1} (\sigma_2 d^2 -z^2)t^{(2)}_{2d-3,1}+\cr +& \sigma_3 \sum_{k=1}^{d-1}\frac{2 (d-k)(2k-1)(2d-k+1)}{2d+1} \left( t^{(2)}_{2k-2,0} t^{(2)}_{2d-2k-1,1}+t^{(2)}_{2d-2k-1,1}t^{(2)}_{2k-2,0} \right) \end{align} Notice the explicit triality invariance. The algebra $\aA_N^{(2)}$ is a deformation of the algebra of Hamiltonian symplectomorphisms on $\frac{\bC \times \bC}{\bZ_2}$: \begin{equation} [ t^{(2)}_{a,b}, t^{(2)}_{c,d} ] = \frac12 (ad-bc)t^{(2)}_{a+c-1,b+d-1} + O(\epsilon_i) \end{equation} \subsection{Correlation functions} We can solve the twisted trace conditions in the same manner as for the case of $\aA$, conjecturally reducing any correlation function to a linear combination of $\langle (t^{(2)}_{2,0})^{\sum_i n_i} \prod_i t_{0,2n_i} \rangle$ extremal correlators. The localization expressions for the correlation functions can be also manipulated in a familiar way. 
On the Higgs branch side, we can employ the Cauchy identity \begin{equation} \frac{\prod_{i<j} 4 \sinh \pi(\sigma_i - \sigma_j)\sinh \pi(\sigma'_i - \sigma'_j)}{\prod_{i,j} 2 \cosh \pi(\sigma_i - \sigma'_j)} = \sum_{s\in S_N} (-1)^s \prod_i \frac{1}{2 \cosh \pi (\sigma_i - \sigma'_{s(i)})} \end{equation} to arrive at a standard Fermi gas description of the grand canonical partition function, involving the integral operator with kernel \begin{equation} \rho^{(2)}_H(\sigma, \sigma'') = \frac{e^{2 \pi i \zeta_1 \sigma}}{2 \cosh \pi \sigma} \int_{-\infty}^\infty \frac{e^{2 \pi i \zeta_2 \sigma'}\, d\sigma'}{4\cosh \pi (\sigma - \sigma')\cosh \pi (\sigma' - \sigma'')} \end{equation} where the parameters $\zeta_1$, $\zeta_2$ are (affine) linearly related to $z_1$, $z_2$ or $\epsilon_2$, $z$. On the Coulomb branch side, one has a Fermi gas description with Fourier-transformed kernel: \begin{equation} \rho^{(2)}_C(\sigma, \sigma') = \frac{1}{8 \cosh \pi \sigma \cosh \pi (\sigma +\zeta')\cosh \pi (\sigma - \sigma'+\zeta)} \end{equation} where $\zeta' = -i z$. The $d_n$ insertions are controlled by the same $p_n(\sigma)$ polynomials. We define the grand canonical perturbative correlation functions as before. The main difference is that now we have \begin{equation} Z^{(2)}(\mu) \sim Z^{(2)}_0(\epsilon_i) e^{\frac{2 \pi}{3 \sigma_3} \mu^3 + \frac{\pi z^2}{2 \sigma_3} \mu} \end{equation} Experimentally, we find that the perturbative, grand canonical perturbative connected correlators of the $d_n$ generators satisfy a recursion relation \begin{equation} \label{eq:rec2} 4 \pi^2 \partial_{\tau_0}^2 F_d(\tau_i) + \left(\sum_n n \tau_n \partial_{\tau_{n-1}} \right)^2 F_d(\tau_i) = \sum \lambda_n \tau_n \end{equation} analogous to \ref{eq:rec}, which determines them uniquely. We also find a simple recursion for the $z$ dependence: \begin{equation} \label{eq:rec2z} \partial_{z} F_d(\tau_i) + \frac12 \left(\sum_n n \tau_n \partial_{\tau_{n-1}} \right) F_d(\tau_i) = \sum \lambda'_n \tau_n. 
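The Cauchy identity above is just the statement that the signed sum over permutations is the determinant of the matrix $1/(2\cosh\pi(\sigma_i-\sigma'_j))$, which Cauchy's determinant formula evaluates in product form. A quick numerical spot-check (ours) for $N=3$:

```python
import itertools
import math
import random

random.seed(7)
N = 3
sig = [random.uniform(-1, 1) for _ in range(N)]
sigp = [random.uniform(-1, 1) for _ in range(N)]

# left-hand side: the sinh/cosh product form
lhs = 1.0
for i in range(N):
    for j in range(i + 1, N):
        lhs *= 4 * math.sinh(math.pi * (sig[i] - sig[j])) \
                 * math.sinh(math.pi * (sigp[i] - sigp[j]))
for i in range(N):
    for j in range(N):
        lhs /= 2 * math.cosh(math.pi * (sig[i] - sigp[j]))

# right-hand side: signed sum over S_N, i.e. a determinant
rhs = 0.0
for perm in itertools.permutations(range(N)):
    sign = (-1) ** sum(perm[i] > perm[j]
                       for i in range(N) for j in range(i + 1, N))
    term = 1.0
    for i in range(N):
        term /= 2 * math.cosh(math.pi * (sig[i] - sigp[perm[i]]))
    rhs += sign * term
```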
\end{equation} One final, unexplained experimental observation is that the grand canonical perturbative correlation functions \section{M2 branes at an $A_k$ singularity} The SCFT associated to $M2$ branes at an $A_k$ singularity can be obtained either from a necklace quiver of $k+1$ nodes with a single flavour or as an ADHM quiver with $k+1$ flavours. We consider the Higgs branch correlators in the former theory, or Coulomb branch correlators in the latter and take the uniform-in-$N$ limit. We do not have a concise presentation of the resulting algebra. We expect it to admit generators $t^{(k)}_{a,b}$ with $a-b$ a multiple of $k$, as well as a triality invariant presentation which deforms \begin{equation} [ t^{(k)}_{a,b}, t^{(k)}_{c,d} ] = \frac12 (ad-bc)t^{(k)}_{a+c-1,b+d-1} + O(\epsilon_i) \end{equation} depending on $\sigma_2$, $\sigma_3$ and the $k$ deformation parameters $z_i$. Using the Coulomb branch description, one can conjecturally embed it into the Coulomb branch for the theory with no flavours \cite{Gaiotto:2019wcc}. The embedding includes $e_0$, the first few $d_n$'s and \begin{equation} f_k + \left[ \sum_i z_i \right] f_{k-1} + \left[ \sum_{i<j} z_i z_j \right] f_{k-2} + \cdots+ \left[ \prod_i z_i \right] f_0 \end{equation} Coulomb branch correlators of the $d_n$'s can be computed as before from localization integrals and the Fermi gas construction. We expect recursion relations of the form \begin{equation} \label{eq:reck} k^2 \pi^2 \partial_{\tau_0}^2 F_d(\tau_i) + \left(\sum_n n \tau_n \partial_{\tau_{n-1}} \right)^2 F_d(\tau_i) = \sum \lambda_n \tau_n \end{equation} as well as \begin{equation} \label{eq:reckz} k \partial_{z_a} F_d(\tau_i) + \left(\sum_n n \tau_n \partial_{\tau_{n-1}} \right) F_d(\tau_i) = \sum \lambda'_{a,n} \tau_n. \end{equation} Given an explicit presentation of the algebra, one should be able to compute all correlation functions via twisted trace relations and the recursion relations. We leave it for future work. 
\section{Hidden triality in the Schur index} The final collection of protected correlation functions we will consider will be the Schur indices for line defect junctions in 4d ${\cal N}=4$ SYM with $U(N)$ gauge group. Recall that the Schur index is a specialization of the superconformal index which is available for any 4d ${\cal N}=2$ SQFT \cite{Gadde:2011uv}. It can be thought of as a supersymmetric partition function on $S^1 \times S^3$. It can be decorated by collections of BPS line defects wrapping the $S^1$ factor of the $S^1 \times S^3$ space-time geometry \cite{Dimofte:2011py}. From the point of view of the superconformal index, the resulting correlation functions count local operators at supersymmetric junctions of half-BPS line defects. These ``Schur correlation functions'' \footnote{Not to be confused with a different, and presumably incompatible, ``Higgs branch'' generalization of the Schur index, which inserts local operators rather than line defects and gives rise to torus conformal blocks of a certain chiral algebra \cite{Dedushenko:2019yiw}. These also are potential targets for twisted holography calculations \cite{Costello:2018zrm}, but will be discussed elsewhere.} have many properties in common with Coulomb branch sphere correlation functions in 3d ${\cal N}=4$ SQFTs. In particular, the OPE of line defects gives a quantization of the algebra of functions on the Coulomb branch of the 4d theory compactified on a circle. The Schur correlation functions behave as a twisted trace on the algebra, with a twist which is trivial for 4d SCFTs. From this point on, with ``Coulomb branch'' we will always refer to the Coulomb branch of the 4d theory compactified on the circle, and with ``quantum Coulomb branch algebra'' we will always refer to the non-commutative algebra of line operators which arises from a twisted circle compactification \cite{Gaiotto:2010be}, which controls the OPE in the Schur correlators. 
We are interested in the Schur correlation functions of 4d ${\cal N}=4$ $U(N)$ SYM, possibly deformed by an ${\cal N}=2^*$ flavour fugacity. In order to relate this to the M-theory considerations in the previous sections, we may notice a few facts: \begin{itemize} \item The Coulomb branch of ${\cal N}=2^*$ $U(N)$ SYM \cite{Donagi:1995cf} (in the generic complex structure relevant here) is a multiplicative analogue of the Higgs or Coulomb branch algebras of the M2 brane theory. The $X$ and $Y$ adjoint matrices are replaced by $GL(N)$ group elements $U$ and $V$ and the moment map relation is replaced by the constraint \begin{equation} \zeta U V - \zeta^{-1} V U = J I \end{equation} \item The analogy with the M2 brane theory becomes stronger after a standard string duality, mapping D3 branes wrapping a circle to M2 branes with a transverse $\bC^* \times \bC^*$ geometry. Wilson loops map to BPS operators charged under rotations of one $\bC^*$ factor. 't Hooft loops map to BPS operators charged under rotations of the second $\bC^*$ factor. S-duality acts geometrically on $\bC^* \times \bC^*$ as $u \to u^a v^b$, $v \to u^c v^d$. \item If we take the uniform-in-$N$ limit and turn off the mass deformation parameters, the Poisson algebra of functions on the Coulomb branch becomes the universal enveloping algebra $U(\mathfrak{t})$ of the Lie algebra of Hamiltonian symplectomorphisms of $\bC^* \times \bC^*$. The uniform-in-$N$ limit of the quantum, mass deformed Coulomb branch algebra is a two-parameter deformation of that. It is a natural candidate for the Koszul dual to the algebra of observables of twisted M-theory on $\bR \times \bC^* \times \bC^*$. \end{itemize} We will denote as $L_{0,1}$ the BPS operator associated to the fundamental Wilson loop, $L_{0,-1}$ the anti-fundamental one and as $L_{m,n}$ their S-duality images, with $m$,$n$ co-prime. 
We plan to make manifest a large $N$ hidden triality of both OPE and correlation functions, mixing the quantization parameter $q$ and the complexified fugacity $\zeta$ for the ${\cal N}=2^*$ deformation. \subsection{The quantum Coulomb branch algebra} The quantum Coulomb branch algebra $\aB_N$ can be presented in an Abelianized form \cite{Drukker:2009id,Alday:2009fs,Gomis:2011pf,Bullimore:2015lsa}, where the Wilson line defects are given as symmetric polynomials in gauge fugacities $\sigma_i$, such as the fundamental and anti-fundamental \begin{equation} L_{0,1}= \sum_i \sigma_i \qquad \qquad L_{0,-1}= \sum_i \sigma_i^{-1} \end{equation} More general Wilson-'t Hooft operators are given as intricate difference operators acting on the $\sigma_i$ by linear combinations of $\sigma_i \to q^n \sigma_i$ transformations. Explicit expressions are available for the elementary 't Hooft operators $L_{\pm 1,n}$ of magnetic charge $\pm 1$ and general electric charge $n$ (aligned to the magnetic charge) as Macdonald difference operators. It is possible to find explicit transformations manifesting the $SL(2,\bZ)$ S-duality symmetry of $\aB_N$. For example, one could realize the transformation kernels for the $S$ transformation as supersymmetric indices \cite{Gang:2013sqa,Cordova:2016uwk} of the 3d ${\cal N}=4$ $T[U(N)]$ gauge theories \cite{Gaiotto:2008ak}, generalizing the classical results of \cite{Gaiotto:2013bwa}. Appropriate S-duality transformations map the $L_{0,\pm 1}$ operators into the $L_{\pm 1,n}$. Alternative, manifestly S-dual presentations of the algebra in terms of skeins on a punctured torus are also available \cite{Bullimore:2013xsa}. Mathematically, the algebra $\aB_N$ should coincide with the spherical DAHA algebra $\text{\bf S\"H}_N$ and the uniform-in-$N$ limit is presented in an explicitly $SL(2,\bZ)$-invariant and triality invariant form as $\text{\bf S\"H}_\infty$ in reference \cite{2009arXiv0905.2555S}. 
Following that reference, we will normalize \begin{equation} \ell_{0,\pm1} = \frac{1}{q^{-\frac12}- q^{\frac12}} L_{0,\pm 1} \qquad \qquad \ell_{\pm1,n} = \frac{1}{q^{-\frac12}- q^{\frac12}} L_{\pm 1,n} \end{equation} This rescaling is compatible with S-duality. It is analogous to the $\epsilon_1^{-1}$ factor in the definition of $t_{n,m}$ for the M2 brane algebra. It is instructive to rediscover some of the relations from \cite{2009arXiv0905.2555S}. From the definition, we find \begin{equation} [\ell_{1,n},\ell_{0,\pm 1}] = \pm \ell_{1,n\pm1} \qquad \qquad [\ell_{-1,n},\ell_{0,\pm 1}] = \mp \ell_{-1,n\pm1} \end{equation} Because of S-duality, it must be possible to define $\ell_{m,n}$, with $m$ and $n$ coprime, such that if $m n' -n m'=1$ we have \begin{equation} [\ell_{m,n},\ell_{m',n'}] = \ell_{m+m',n+n'} \end{equation} and furthermore $[\ell_{m,n},\ell_{-m,-n}]=0$. Such $\ell_{m,n}$ can be found explicitly by applying the above relation recursively, starting from the expressions for $\ell_{0,\pm 1}$ and $\ell_{\pm 1,n}$. We denote $\ell_{m,n}$ with $m$ and $n$ coprime as ``minimal'' generators. Using these definitions, we can then compute more general commutators, such as \begin{equation} [\ell_{1,0},\ell_{1,3}]= (q_1 + q_2 + q_3) \ell_{2,3} + (1-q_1)(1-q_2)(1-q_3) \ell_{1,1}\ell_{1,2} \end{equation} where we defined $q_1 = q$, $q_2 = \zeta$, $q_3 = q_1^{-1} q_2^{-1}$. We can also write that as \begin{equation} [\ell_{1,0},\ell_{1,3}]= (q_1^{-1} + q_2^{-1} + q_3^{-1}) \ell_{1,1}\ell_{1,2}-(q_1 + q_2 + q_3) \ell_{1,2}\ell_{1,1} \end{equation} which implies the S-dual image \begin{equation} [\ell_{a,b},\ell_{a+3 c,b+3d}]= (q_1^{-1} + q_2^{-1} + q_3^{-1}) \ell_{a+c,b+d}\ell_{a+2 c,b+2 d}-(q_1 + q_2 + q_3) \ell_{a+2 c,b+2 d}\ell_{a+c,b+d} \end{equation} whenever $(a d - b c) = \pm 1$. These commutators are invariant under triality transformations permuting the $q_i$, as expected. 
Another important observation is that commutators $[\ell_{1,n},\ell_{-1,n'}]$ give generators $\ell_{0,n+n'}$ built from Wilson line defects of higher charge, which all commute with $\ell_{0,\pm 1}$. With the help of S-duality, we can get canonical definitions of $\ell_{n,m}$ for non-coprime $n$,$m$. When $\zeta=1$, it is known that the quantum Coulomb branch algebra reduces to the symmetric product of $N$ copies of the quantum torus algebra $x y = q y x$. Correspondingly, $\aB$ reduces to the universal enveloping algebra of the Lie algebra \begin{equation} [\ell_{m,n},\ell_{m',n'}] = [m n' - n m']_q \ell_{m+m',n+n'} \end{equation} of the quantum torus algebra. Setting $q\to 1$ as well gives the universal enveloping algebra $U(\mathfrak{t})$ of the Lie algebra $\mathfrak{t}$ of Hamiltonian symplectomorphisms of $\bC^* \times \bC^*$: \begin{equation} [\ell_{m,n},\ell_{m',n'}] = (m n' - n m') \ell_{m+m',n+n'} \end{equation} as desired. Next, we will test the triality properties of the correlators. We can begin by studying somewhat heuristically the consequences of the trace relations. \subsection{Reduction to Wilson line correlators} We have not worked out the precise space of solutions of trace relations. We expect the analysis to proceed in a manner analogous to that for $U(\mathfrak{t})$. We can give an example of such a reduction in $U(\mathfrak{t})$. 
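The quantum-torus Lie algebra quoted above can be checked directly. A sketch (our conventions, which may differ from the paper's by an overall sign): with $xy = qyx$, the Weyl-ordered monomials $W_{m,n} = q^{-mn/2} x^m y^n$ obey $W_{m,n} W_{m',n'} = q^{k/2} W_{m+m',n+n'}$ with $k = mn' - nm'$, so $[W_a, W_b] = (q^{k/2}-q^{-k/2}) W_{a+b}$, proportional to the $q$-number $[k]_q$:

```python
import sympy as sp
from itertools import product

q = sp.symbols('q', positive=True)

def product_exponent(m, n, mp_, np_):
    # exponent of q in W_{m,n} W_{m',n'} relative to W_{m+m',n+n'}:
    # the two Weyl prefactors, the reordering y^n x^{m'} = q^{-n m'} x^{m'} y^n,
    # and the prefactor of the resulting W.
    return (sp.Rational(-m * n, 2) - sp.Rational(mp_ * np_, 2) - n * mp_
            + sp.Rational((m + mp_) * (n + np_), 2))

# the exponent equals (m n' - n m')/2 for all small charges
ok = all(product_exponent(m, n, mp_, np_) == sp.Rational(m * np_ - n * mp_, 2)
         for m, n, mp_, np_ in product(range(-2, 3), repeat=4))

# the resulting structure constant is the q-number [k]_q, e.g. k = 3 gives q + 1 + 1/q
k = 3
qnum = sum(q ** sp.Rational(k - 1 - 2 * j, 2) for j in range(k))
ratio = (q ** sp.Rational(k, 2) - q ** sp.Rational(-k, 2)) \
        / (q ** sp.Rational(1, 2) - q ** sp.Rational(-1, 2))
```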
We have relations such as \begin{equation} \langle \ell_{0,1} \ell_{0,1} \ell_{0,-2} \rangle = \frac12 \langle \ell_{0,1} \ell_{0,1} [\ell_{-1,-1},\ell_{1,-1}] \rangle = \frac12 \langle \ell_{-1,0} \ell_{0,1} \ell_{1,-1}\rangle+\frac12 \langle \ell_{0,1} \ell_{-1,0} \ell_{1,-1}\rangle \end{equation} which allows us to write \begin{equation} \langle \ell_{0,1} \ell_{-1,0} \ell_{1,-1}\rangle=\langle \ell_{0,1} \ell_{0,1} \ell_{0,-2} \rangle + \langle \ell_{-1,1}\ell_{1,-1}\rangle \end{equation} Because of S-duality, \begin{equation} \langle \ell_{-1,1}\ell_{1,-1}\rangle = \langle \ell_{0,1}\ell_{0,-1}\rangle \end{equation} and thus we have reduced the non-trivial three-point function $\langle \ell_{0,1} \ell_{-1,0} \ell_{1,-1}\rangle$ to a linear combination of Wilson line correlation functions. It is reasonable to hope that all correlation functions may be expressible as linear combinations of Wilson line correlation functions, perhaps satisfying some further constraints. This would be analogous to the reduction to correlation functions of the $d_n$ operators in the 3d case. We thus focus on Schur correlation functions of Wilson lines. 
\subsection{Wilson line correlation functions} In the presence of Wilson line defect insertions, the Schur index is a contour integral of a ratio of theta functions multiplied by appropriate characters of the gauge group: \begin{equation} \langle \prod_a W_{R_a} \rangle^{(N)} = \frac{1}{N!} \left[\prod_{i=1}^N \oint_{|\sigma_i|=1} d\sigma_i\right] \frac{(q)_\infty^{3N} \prod_{i<j} \theta(\sigma_i/\sigma_j;q)\theta(\sigma_j/\sigma_i;q)}{\prod_{i,j} \theta(\sigma_i/\sigma_j \zeta^{-1};q)} \prod_a \chi_{R_a}(\sigma_*) \end{equation} with $|q|<|\zeta|^{-1}<1$ and \begin{equation} \theta(\zeta;q) = (\zeta^{\frac12}-\zeta^{-\frac12})\prod_{n=1}^\infty (1-q^n)(1-\zeta q^n)(1-\zeta^{-1} q^n) = \sum_{n \in \bZ} (-1)^n \zeta^{n+\frac12} q^{\frac12(n^2 + n)} \end{equation} We have \begin{equation} \theta(e^{ 2 \pi i n} q^m \zeta;q) = (-1)^{n+m} q^{-\frac{m^2}{2}} \zeta^{-m}\theta(\zeta;q) \end{equation} and $\theta(\zeta^{-1};q) = - \theta(\zeta;q)$. In order to proceed, we would like some analogue of a grand canonical partition function. Consider the following function: \begin{equation} G(\zeta,u;q) = \frac{\theta(\zeta u;q) (q)_\infty^3}{\theta(\zeta;q) \theta(u;q)} \end{equation} It satisfies \begin{align} G(e^{ 2 \pi i n} q^m \zeta,u;q) &= u^{-m} G(\zeta,u;q) \cr G(\zeta,e^{ 2 \pi i n} q^m u;q) &= \zeta^{-m} G(\zeta,u;q) \end{align} It has a useful Fourier expansion valid in the fundamental region $|q|<|\zeta|<1$: \begin{equation} G(\zeta,u;q) =- \sum_n \frac{\zeta^n}{1-u q^n} \end{equation} Notice \begin{equation} G(q \zeta^{-1},u;q) =-u^{-1} G(\zeta,u^{-1};q) \end{equation} Among other things, the $G(\zeta,u;q)$ function is used to define the two-point function of a complex fermion on the torus coupled to a Spin$_c$ bundle. 
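The theta-function formulas above are easy to check numerically. A sketch (ours) at sample real parameters, truncating the rapidly convergent sums and products: the product and sum forms of $\theta$, the oddness $\theta(\zeta^{-1}) = -\theta(\zeta)$, and the Fourier expansion of $G$ in the annulus $|q|<|\zeta|<1$:

```python
# Numeric sanity checks of the theta function and of G(zeta, u; q).
n_max, q = 40, 0.15

def theta_sum(z, q):
    return sum((-1) ** n * z ** (n + 0.5) * q ** (0.5 * (n * n + n))
               for n in range(-n_max, n_max + 1))

def theta_prod(z, q):
    out = z ** 0.5 - z ** (-0.5)
    for n in range(1, n_max + 1):
        out *= (1 - q ** n) * (1 - z * q ** n) * (1 - q ** n / z)
    return out

def q_pochhammer(q):
    out = 1.0
    for n in range(1, n_max + 1):
        out *= 1 - q ** n
    return out

def G(z, u, q):
    return theta_sum(z * u, q) * q_pochhammer(q) ** 3 \
           / (theta_sum(z, q) * theta_sum(u, q))

z, u = 0.5, 0.35          # inside the fundamental region |q| < |z| < 1
fourier = -sum(z ** n / (1 - u * q ** n) for n in range(-n_max, n_max + 1))
```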
Because of bosonization, it obeys an interesting Frobenius determinant formula \begin{equation} \det_{i,j} G(v_i/w_j,u;q) = \frac{\theta(u \prod_i v_i/w_i;q)}{\theta(u;q)}\frac{(q)_\infty^{3N} \prod_{i<j} \theta(v_i/v_j;q)\theta(w_j/w_i;q)}{\prod_{i,j} \theta(v_i/w_j;q)} \end{equation} We are particularly interested in the case where $w_i = v_i \zeta$, in which case we have \begin{equation} \det_{i,j} G(v_i/v_j\zeta^{-1},u;q) = \frac{\theta(u \zeta^{-N};q)}{\theta(u;q)}\frac{(q)_\infty^{3N}\prod_{i<j} \theta(v_i /v_j;q)\theta(v_j /v_i;q)}{\prod_{i,j} \theta(v_i/v_j\zeta^{-1};q)} \end{equation} so that \cite{Bourdier:2015wda} \begin{equation} \frac{\theta(u \zeta^{-N};q)}{\theta(u;q)}\langle \prod_a W_{R_a} \rangle^{(N)} = \frac{1}{N!} \left[\prod_{i=1}^N \oint_{|\sigma_i|=1} d\sigma_i\right] \det_{i,j} G(\sigma_i/ \sigma_j \zeta^{-1},u;q) \prod_a \chi_{R_a}(\sigma_*) \end{equation} This means we can define grand canonical correlation functions as \begin{equation} \langle \prod_a W_{R_a} \rangle_{\xi,u} = \sum_N \xi^N \frac{\theta(u \zeta^{-N};q)}{\theta(u;q)} \langle \prod_a W_{R_a} \rangle^{(N)} \end{equation} and they will have a free Fermi gas interpretation. Notice that we introduced two new fugacities, $\xi$ and $u$. This is a bit redundant, but will be very useful. \subsection{Explicit examples and triality invariance} The single particle density operator $\hat \rho$ is an integral operator which acts on functions on $S^1$ as convolution with $G(\zeta,u;q)$. In Fourier transform, it acts on functions on $\bZ$ as multiplication by $-\frac{\zeta^{-n}}{1-u q^n}$. As a consequence, we can immediately compute the grand canonical partition function \begin{equation} Z(\xi,u;\zeta,q) = \prod_{n=-\infty}^\infty \left(1- \frac{\xi \zeta^{-n}}{1-u q^n} \right) =\prod_{n=-\infty}^\infty \frac{1-u q^n- \xi \zeta^{-n} }{1-u q^n} \end{equation} in the fundamental region $|q|<|\zeta|^{-1}<1$. 
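The Frobenius determinant formula can also be checked numerically. A sketch (ours) for $N=2$, evaluating every theta function by its rapidly convergent sum representation at sample real parameters:

```python
import math

# Numeric check of det_{ij} G(v_i/w_j, u; q) against the Frobenius formula, N = 2.
n_max, q, u = 40, 0.15, 0.45
v = [1.3, 0.7]
w = [0.9, 1.8]

def theta(z):
    return sum((-1) ** n * z ** (n + 0.5) * q ** (0.5 * (n * n + n))
               for n in range(-n_max, n_max + 1))

pochh = math.prod(1 - q ** n for n in range(1, n_max + 1))   # (q)_infinity
pochh3 = pochh ** 3

def G(z):
    return theta(z * u) * pochh3 / (theta(z) * theta(u))

lhs = G(v[0] / w[0]) * G(v[1] / w[1]) - G(v[0] / w[1]) * G(v[1] / w[0])
rhs = (theta(u * v[0] * v[1] / (w[0] * w[1])) / theta(u)
       * pochh3 ** 2
       * theta(v[0] / v[1]) * theta(w[1] / w[0])
       / (theta(v[0] / w[0]) * theta(v[0] / w[1])
          * theta(v[1] / w[0]) * theta(v[1] / w[1])))
```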
Notice that the naive Weyl symmetry acting on the ${\cal N}=2^*$ fugacity $\zeta \to q^{-1} \zeta^{-1}$, i.e. $q_2 \leftrightarrow q_3$, must be accompanied by a redefinition of the auxiliary fugacities: \begin{equation} Z(\xi,u;q^{-1} \zeta^{-1},q) = \prod_{n=-\infty}^\infty \frac{1-u q^n- \xi \zeta^{n} q^n}{1-u q^n}= \prod_{n=-\infty}^\infty \frac{1-u^{-1} q^{n}+ \xi u^{-1} \zeta^{-n} }{1-u^{-1} q^{n}} \end{equation} i.e. $Z(\xi,u;\zeta,q) = Z(-\xi u^{-1} ,u^{-1};q^{-1} \zeta^{-1},q)$. We can also compute some correlation functions \cite{Drukker:2015spa}. The Wilson line operators map to very simple operators in the Fermi gas description. For example, $W_{0,\pm 1} = \sum_i \sigma_i^{\pm 1}$ maps to an operator acting on single fermions. In Fourier transform, the fermion modes are labelled by an integer, and $W_{0,\pm 1}$ acts on the integer label as a shift by $\pm 1$. We find \begin{equation} Z(\xi,u;\zeta,q)^{-1} \langle W_{0,1} W_{0,-1} \rangle = \mathrm{Tr} \frac{\xi \hat \rho}{1+ \xi \hat \rho} \hat W_{0,1} \hat W_{0,-1} - \mathrm{Tr} \frac{\xi \hat \rho}{1+ \xi \hat \rho} \hat W_{0,1}\frac{\xi \hat \rho}{1+ \xi \hat \rho} \hat W_{0,-1} \end{equation} where $\hat W_{0,\pm 1}$ acts by shift $f_n \to f_{n\pm 1}$. As a consequence, we have \begin{equation} q^{-1} (1-q)^2 \frac{\langle \ell_{0,1} \ell_{0,-1} \rangle}{Z(\xi,u;\zeta,q)} = - \sum_n \frac{\xi \zeta^{-n}}{1-u q^n-\xi \zeta^{-n}} - \sum_n \frac{\xi^2 \zeta^{-2n-1}}{\left(1-u q^n-\xi \zeta^{-n}\right)\left(1-u q^{n+1}-\xi \zeta^{-n-1}\right)} \end{equation} which is also invariant under the Weyl symmetry $\zeta \to q^{-1} \zeta^{-1}$, $u \to u^{-1}$, $\xi \to -\xi u^{-1}$. 
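The Weyl identity $Z(\xi,u;\zeta,q) = Z(-\xi u^{-1},u^{-1};q^{-1}\zeta^{-1},q)$ follows factor-by-factor (multiply numerator and denominator of the $n$-th factor by $-u^{-1}q^{-n}$ and relabel $n \to -n$), and is easy to confirm numerically. A sketch (ours), truncating the convergent product:

```python
# Numeric check of Z(xi, u; zeta, q) = Z(-xi/u, 1/u; 1/(q*zeta), q),
# truncating the product over n to [-n_max, n_max].
n_max = 60

def Z(xi, u, zeta, q):
    out = 1.0
    for n in range(-n_max, n_max + 1):
        out *= (1 - u * q ** n - xi * zeta ** (-n)) / (1 - u * q ** n)
    return out

# sample point in the fundamental region |q| < |zeta|^{-1} < 1
q, zeta, u, xi = 0.3, 2.0, 0.5, 0.1
lhs = Z(xi, u, zeta, q)
rhs = Z(-xi / u, 1 / u, 1 / (q * zeta), q)
```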
We can manipulate that expression in two ways, as \begin{equation} q^{-1} (1-q)^2 Z(\xi,u;\zeta,q)^{-1} \langle \ell_{0,1} \ell_{0,-1} \rangle = - \sum_n \frac{\xi \zeta^n\left(1-u q^{n+1}\right) }{\left(1-u q^n-\xi \zeta^n\right)\left(1-u q^{n+1}-\xi \zeta^{n+1}\right)} \end{equation} or \begin{equation} q^{-1} (1-q)^2 Z(\xi,u;\zeta,q)^{-1} \langle \ell_{0,1} \ell_{0,-1} \rangle = - \sum_n \frac{\xi \zeta^{n+1}\left(1-u q^{n}\right) }{\left(1-u q^n-\xi \zeta^n\right)\left(1-u q^{n+1}-\xi \zeta^{n+1}\right)} \end{equation} and then take a linear combination \begin{equation} q^{-1} (1-q)^2(\zeta-1) Z(\xi,u;\zeta,q)^{-1} \langle \ell_{0,1} \ell_{0,-1} \rangle = (q-1) \sum_n \frac{u q^{n}\xi \zeta^{n+1} }{\left(1-u q^n-\xi \zeta^n\right)\left(1-u q^{n+1}-\xi \zeta^{n+1}\right)} \end{equation} to a neat final form \begin{equation} Z(\xi,u;\zeta,q)^{-1} \langle \ell_{0,1} \ell_{0,-1} \rangle =- \frac{1}{(1-\zeta)(1-q)} \sum_n \frac{u q^{n+1}\xi \zeta^{-n} }{\left(1-u q^n-\xi \zeta^{-n}\right)\left(1-u q^{n+1}-\xi \zeta^{-n-1}\right)} \end{equation} Here we get to the crucial point: this expression converges for most values of $\zeta$, $q$, except at $|\zeta||q|=1$. It can be thought of as an analytic continuation of the original expression. It has a manifest non-trivial triality symmetry $q \leftrightarrow \zeta$, $\xi \leftrightarrow u$, which together with the $\zeta \to q^{-1} \zeta^{-1}$, $u \to u^{-1}$, $\xi \to -\xi u^{-1}$ Weyl transformation generates a full $S_3$ triality group. We can also take a different linear combination \begin{equation} Z(\xi,u;\zeta,q)^{-1} \langle \ell_{0,1} \ell_{0,-1} \rangle = \frac{1}{(1-q)(1-q^{-1} \zeta^{-1})} \sum_n \frac{\xi \zeta^{-n-1}}{\left(1-u q^n-\xi \zeta^{-n}\right)\left(1-u q^{n+1}-\xi \zeta^{-n-1}\right)} \end{equation} which converges away from $|\zeta|=1$. In conclusion, the correlation function is well-defined away from $|\zeta|=|q|=1$ and triality invariant! 
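The claimed $q \leftrightarrow \zeta$, $\xi \leftrightarrow u$ symmetry of the final expression above can be seen term-by-term: with $D_n = 1 - u q^n - \xi \zeta^{-n}$, the swap followed by the relabeling $n \to -n-1$ maps the summand back to itself, and the prefactor is manifestly symmetric. A numeric sketch (ours), with parameters chosen inside the convergence region $|q\zeta|<1$:

```python
# Numeric check of the triality swap q <-> zeta, xi <-> u of the two-point function.
n_max = 60

def two_point(xi, u, zeta, q):
    def D(n):
        return 1 - u * q ** n - xi * zeta ** (-n)
    s = sum(u * q ** (n + 1) * xi * zeta ** (-n) / (D(n) * D(n + 1))
            for n in range(-n_max, n_max))
    return -s / ((1 - zeta) * (1 - q))

q, zeta, u, xi = 0.3, 0.5, 0.2, 0.1
a = two_point(xi, u, zeta, q)
b = two_point(u, xi, q, zeta)   # swap q <-> zeta together with xi <-> u
```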
Parsing through the definitions of the $\ell_{0,n}$, we find that the expression \begin{equation} \tilde \ell_{0,2} = \frac{1}{q^{-1}-q} \sum_i \sigma_i^2 \end{equation} is a triality-invariant linear combination of $\ell_{0,2}$ and $\ell_{0,1}^2$. We have again a nice and triality-invariant expression \begin{equation} Z(\xi,u;\zeta,q)^{-1} \langle \tilde \ell_{0,2} \tilde \ell_{0,-2} \rangle = -\frac{1}{(1-\zeta^2)(1-q^2)} \sum_n \frac{u q^{n+2}\xi \zeta^{-n} }{\left(1-u q^n-\xi \zeta^{-n}\right)\left(1-u q^{n+2}-\xi \zeta^{-n-2}\right)} \end{equation} We can also compute with some work \begin{align} \frac{ \langle \ell_{0,1} \ell_{0,1} \tilde \ell_{0,-2} \rangle}{Z(\xi,u;\zeta,q)} = \sum_n \frac{(1-\zeta)^{-1}(1-q)^{-1}(1-q\zeta)^{-1} u q^{n+2}\xi \zeta^{-n} }{\left(1-u q^n-\xi \zeta^{-n}\right)\left(1-u q^{n+1}-\xi \zeta^{-n-1}\right)\left(1-u q^{n+2}-\xi \zeta^{-n-2}\right)} \end{align} which is again triality invariant. Based on these examples, it is natural to conjecture that the normalized grand canonical Schur correlators are all triality invariant. \noindent {\bf Acknowledgements.} We thank Tadashi Okazaki for collaboration at an early stage of the project. We thank Jihwan Oh, Yehao Zhou for providing proofs of some conjectural commutation relations. This research was supported in part by a grant from the Krembil Foundation. J.A. and D.G. are supported by the NSERC Discovery Grant program and by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
\section{Introduction} Integrable systems have been of interest in many branches of mathematics and physics. They have many applications in real life, for example in optical fibers, vortex filaments, ocean and water waves, and gravitational fields \cite{KangXia2019}-\cite{Osborne2010}. The Korteweg--de Vries (KdV), nonlinear Schr\"{o}dinger (NLS) and sine-Gordon equations are all well-known examples of integrable systems, whose intrinsic solutions are solitons. The soliton phenomena arising from these systems exhibit both linear and nonlinear effects. \\ Nonlocal PT-symmetric, reverse-spacetime and reverse-time reductions have been studied for the NLS and KdV equations under the inverse scattering transformation \cite{AblowitzMusslimani2016}-\cite{AblowitzSegur1981}. This motivates us to investigate solutions of a sixth-order six-component NLS-type AKNS system based on the Riemann-Hilbert formulation \cite{Yang2018}-\cite{AblowitzFokas2003}, giving an extension to the dynamical behaviours of the solitons and their solutions \cite{AlleAhmedMa}. Throughout this paper, we formulate the AKNS hierarchy for the six-component AKNS system of sixth order and solve the Riemann-Hilbert problem, with the contour being the real line and the jump matrix taken to be the identity matrix \cite{KangXiaMa2019}-\cite{MaActa2022}. This paper is outlined as follows. In section 2, we construct the AKNS hierarchy associated with the corresponding sixth-order six-component integrable system. In section 3, we formulate the Riemann-Hilbert problems associated with the corresponding matrix spectral problems. In section 4, we obtain general soliton solutions in the case where the reflection coefficients are zero \cite{Yang2010}-\cite{DrazinJohnson1989}, while in section 5, we present explicit exact one- and two-soliton solutions and explore their dynamical behaviours along with three- and four-soliton solutions. Finally, in the last section, we give concluding remarks. 
\section{Six-component AKNS Hierarchy} \subsection{Six-component AKNS hierarchy of coupled sixth-order integrable equations} Consider the $4 \times 4$ matrix spatial spectral problem \cite{Ma2018} \begin{align}\label{Spatialspectralproblem} \psi_{x} &= {\rm i} U \psi, \end{align} where $\psi$ is the eigenfunction and the spectral matrix $U(u,\lambda)$ is given by \begin{equation}\label{Umatrix} U(u,\lambda)=\begin{pmatrix} \alpha_{1} \lambda & p_{1} & p_{2} & p_{3} \\ r_{1} & \alpha_{2} \lambda & 0 & 0 \\ r_{2} & 0 & \alpha_{2} \lambda & 0 \\ r_{3} & 0 & 0 & \alpha_{2} \lambda \end{pmatrix} =\lambda \Lambda + P(u), \end{equation} where $\Lambda=\textit{diag}(\alpha_{1},\alpha_{2},\alpha_{2},\alpha_{2})$, $\lambda$ is the spectral parameter, $\alpha_{1},\alpha_{2}$ are two real constants and $u= (p,r^T)^T$ is a vector of six potentials, where $p=(p_1,p_2,p_3)$ and $r=(r_1,r_2,r_3)^T$ are vector functions of $(x,t)$ and $\{p_{i},r_{i} \}_{i=1,2,3} \in \mathcal{S}(\mathbb{R})$, the Schwartz space, so that $p_{i},r_{i} \rightarrow 0$ as $x \rightarrow \pm \infty$; thus \begin{equation} P=\begin{pmatrix} 0 & p_{1} & p_{2} & p_{3} \\ r_{1} & 0 & 0 & 0 \\ r_{2} & 0 & 0 & 0 \\ r_{3} & 0 & 0 & 0 \end{pmatrix}. \end{equation} Let us now construct the AKNS soliton hierarchy. To do so, we need to solve the stationary zero curvature equation \begin{equation}\label{stationaryZC} W_{x}={\rm i}[U,W], \end{equation} whose solution we take in the form \begin{equation} W=\begin{pmatrix} a & b_{1} & b_{2} & b_{3} \\ c_{1} & d_{11} & d_{12} & d_{13} \\ c_{2} & d_{21} & d_{22} & d_{23} \\ c_{3} & d_{31} & d_{32} & d_{33} \end{pmatrix}, \end{equation} where $a,b_{i},c_{i},d_{ij}$ are scalar components for $i,j \in \{1,2,3\}$. 
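The entrywise content of the stationary zero curvature equation can be checked with a computer algebra system. The following sympy sketch (an illustration only, with generic symbolic entries; none of the variable names below come from the paper) builds $U=\lambda\Lambda+P$ and a generic $W$ and confirms that the $(1,1)$ and $(1,2)$ entries of ${\rm i}[U,W]$ reproduce the equations for $a_x$ and $b_{1,x}$ stated in the system derived below.

```python
import sympy as sp

# symbols mirroring the text: U = lambda*Lambda + P, W the generic 4x4 matrix
lam, a1, a2 = sp.symbols('lambda alpha1 alpha2')
p = sp.symbols('p1:4')
r = sp.symbols('r1:4')
a = sp.Symbol('a')
b = sp.symbols('b1:4')
c = sp.symbols('c1:4')
d = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'd{i+1}{j+1}'))

Lam = sp.diag(a1, a2, a2, a2)
P = sp.Matrix([[0, p[0], p[1], p[2]],
               [r[0], 0, 0, 0],
               [r[1], 0, 0, 0],
               [r[2], 0, 0, 0]])
U = lam*Lam + P
W = sp.Matrix([[a, b[0], b[1], b[2]],
               [c[0], d[0, 0], d[0, 1], d[0, 2]],
               [c[1], d[1, 0], d[1, 1], d[1, 2]],
               [c[2], d[2, 0], d[2, 1], d[2, 2]]])

comm = U*W - W*U   # W_x = i[U, W], entry by entry

# (1,1) entry reproduces a_x = i(-sum_i b_i r_i + sum_i c_i p_i)
rhs_a = -sum(b[i]*r[i] for i in range(3)) + sum(c[i]*p[i] for i in range(3))
assert sp.expand(comm[0, 0] - rhs_a) == 0

# (1,2) entry reproduces b_{1,x} = i(alpha*lam*b_1 - a p_1 + d11 p1 + d21 p2 + d31 p3)
rhs_b1 = (a1 - a2)*lam*b[0] - a*p[0] + d[0, 0]*p[0] + d[1, 0]*p[1] + d[2, 0]*p[2]
assert sp.expand(comm[0, 1] - rhs_b1) == 0
```

The remaining entries can be checked in exactly the same way.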
From the stationary zero curvature equation we get: \begin{align}\label{recursivesystem6order} \begin{cases} a_{x}&={\rm i} \big( -\sum\limits_{i=1}^{3}b_{i}r_{i}+\sum\limits_{i=1}^{3}c_{i}p_{i} \big), \\ b_{i,x}&= {\rm i}(\alpha\lambda b_{i}-a p_{i} +d_{1i}p_{1}+d_{2i}p_{2}+d_{3i}p_{3}), \quad i \in \{1,2,3\}, \\ c_{i,x}&= {\rm i}(-\alpha\lambda c_{i} +a r_{i}-d_{i1}r_{1}-d_{i2}r_{2}-d_{i3}r_{3}), \quad i \in \{1,2,3\}, \\ d_{ij,x}&={\rm i}(b_{j}r_{i}-c_{i}p_{j}), \quad i,j \in \{1,2,3\}, \end{cases} \end{align} where $\alpha=\alpha_{1}-\alpha_{2}$. We expand $W$ in Laurent series: \begin{equation} W = \sum\limits_{m= 0}^{\infty}W_{m}\lambda^{-m} \quad \text{with} \quad W_{m}=\begin{pmatrix} a^{[m]} & b_{1}^{[m]} & b_{2}^{[m]} & b_{3}^{[m]} \\ c_{1}^{[m]} & d_{11}^{[m]} & d_{12}^{[m]} & d_{13}^{[m]} \\ c_{2}^{[m]} & d_{21}^{[m]} & d_{22}^{[m]} & d_{23}^{[m]} \\ c_{3}^{[m]} & d_{31}^{[m]} & d_{32}^{[m]} & d_{33}^{[m]} \\ \end{pmatrix}, \end{equation} explicitly, \begin{align} a&= \sum\limits_{m= 0}^{\infty}a^{[m]}\lambda^{-m}, \hspace{0.5cm} b_{i}= \sum\limits_{m= 0}^{\infty}b_{i}^{[m]}\lambda^{-m}, \\ c_{i}&= \sum\limits_{m= 0}^{\infty}c_{i}^{[m]}\lambda^{-m}, \hspace{0.5cm} d_{ij}= \sum\limits_{m= 0}^{\infty}d_{ij}^{[m]}\lambda^{-m}, \end{align} for $i,j \in \{1,2,3\}$ and $m \geq 0$. 
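Substituting the Laurent series into the system above and matching powers of $\lambda^{-m}$ yields recursion relations for the coefficients. As a small sympy sanity check (illustrative only, not part of the derivation), one can verify that the order-$m=2$ closed forms quoted later in the text, $b^{[2]}_j=-{\rm i}(\beta/\alpha^2)p_{j,x}$, $c^{[2]}_k={\rm i}(\beta/\alpha^2)r_{k,x}$ and $d^{[2]}_{kj}=(\beta/\alpha^2)p_jr_k$, are consistent with the $d$-equation of the system:

```python
import sympy as sp

x, alpha, beta = sp.symbols('x alpha beta')
pj = sp.Function('p_j')(x)
rk = sp.Function('r_k')(x)

# order-2 closed forms quoted later in the text
b2_j = -sp.I*beta/alpha**2 * sp.diff(pj, x)
c2_k =  sp.I*beta/alpha**2 * sp.diff(rk, x)
d2_kj = beta/alpha**2 * pj*rk

# the d-equation d_{kj,x} = i(b_j r_k - c_k p_j), applied at order m = 2
lhs = sp.diff(d2_kj, x)
rhs = sp.I*(b2_j*rk - c2_k*pj)
assert sp.simplify(lhs - rhs) == 0
```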
The system (\ref{recursivesystem6order}) generates the recursive relations: \begin{align}\label{recursive6order} & b_{i}^{[0]}=0, \, c_{i}^{[0]}=0, \quad \text{for} \quad i \in \{1,2,3\}, \\ & a_{x}^{[0]}=0, \\ & d_{ij,x}^{[0]}=0, \quad \text{for} \quad i,j \in \{1,2,3\}, \\ & b_{i}^{[m+1]}= \frac{1}{\alpha} (-{\rm i} b_{i,x}^{[m]}+a^{[m]}p_{i} -d_{1i}^{[m]}p_{1}-d_{2i}^{[m]}p_{2}-d_{3i}^{[m]}p_{3}), \quad i \in \{1,2,3\}, \\ & c_{i}^{[m+1]}= \frac{1}{\alpha} ({\rm i} c_{i,x}^{[m]}+a^{[m]} r_{i} -d_{i1}^{[m]}r_{1}-d_{i2}^{[m]}r_{2} -d_{i3}^{[m]}r_{3}), \quad i \in \{1,2,3\}, \\ & a_{x}^{[m]}={\rm i}(-\sum\limits_{i=1}^{3}b_{i}^{[m]}r_{i} +\sum\limits_{i=1}^{3}c_{i}^{[m]}p_{i}), \\[3mm] & d_{ij,x}^{[m]}={\rm i}(b_{j}^{[m]}r_{i}-c_{i}^{[m]}p_{j}), \quad i,j \in \{1,2,3\}, \end{align} where all the involved functions are defined as follows: \begin{flalign} &\begin{aligned} \begin{cases} a^{[0]} &=\beta_{1}, \quad a^{[1]} =0, \quad a^{[2]} =-\frac{\beta}{\alpha^{2}} \mathbf{T_{0,0}}, \quad a^{[3]} =- {\rm i} \frac{\beta}{\alpha^{3}} (\mathbf{T_{0,1}}-\mathbf{T_{1,0}}), \\ a^{[4]} &=\frac{\beta}{\alpha^{4}} \bigg[ 3\mathbf{T_{0,0}}^2 +\mathbf{T_{0,2}}-\mathbf{T_{1,1}}+\mathbf{T_{2,0}} \bigg], \\ a^{[5]} &={\rm i} \frac{\beta}{\alpha^{5}} \bigg[ 6 \mathbf{T_{0,0}} ( \mathbf{T_{0,1}} -\mathbf{T_{1,0}} ) + \mathbf{T_{0,3}}-\mathbf{T_{3,0}}+\mathbf{T_{2,1}}-\mathbf{T_{1,2}} \bigg] , \\ a^{[6]} &=-\frac{\beta}{\alpha^{6}} \Bigg[ 10 \mathbf{T_{0,0}}^{3} + 10 \mathbf{T_{0,0}} (\mathbf{T_{0,2}}+\mathbf{T_{2,0}}) + 5 \Big( \mathbf{T_{1,0}}^{2} + \mathbf{T_{0,1}}^{2} \Big) \\& \hspace{6cm} + (\mathbf{T_{0,4}} + \mathbf{T_{4,0}} -\mathbf{T_{1,3}} -\mathbf{T_{3,1}}{}{} + \mathbf{T_{2,2}}) \Bigg], \\ a^{[7]} &=-{\rm i} \frac{\beta}{\alpha^{7}} \bigg[ 30 \mathbf{T_{0,0}}^{2} (\mathbf{T_{0,1}}-\mathbf{T_{1,0}}) +5 \mathbf{T_{0,0}} (\mathbf{T_{2,1}}-\mathbf{T_{1,2}}) \\& \hspace{2cm} +10 \mathbf{T_{0,0}} (\mathbf{T_{0,3}}-\mathbf{T_{3,0}}) +10 \mathbf{T_{1,1}} 
(\mathbf{T_{0,1}}-\mathbf{T_{1,0}}) + 5 ( \mathbf{T_{0,1}} \mathbf{T_{2,0}} -\mathbf{T_{1,0}} \mathbf{T_{0,2}} ) \\& \hspace{3cm} + 20 ( \mathbf{T_{0,1}} \mathbf{T_{0,2}} -\mathbf{T_{1,0}} \mathbf{T_{2,0}} ) + 5 (\mathbf{T_{5,0}} -\mathbf{T_{4,1}} +\mathbf{T_{3,2}} -\mathbf{T_{2,3}} +\mathbf{T_{1,4}} -\mathbf{T_{0,5}}) \bigg], \end{cases} \end{aligned}& \\ &\begin{aligned} \begin{cases} b^{[0]}_{k} &=0, \quad b^{[1]}_{k} =\frac{\beta}{\alpha} p_{k}, \quad b^{[2]}_{k} =-{\rm i} \frac{\beta}{\alpha^2} p_{k,x}, \quad b^{[3]}_{k} =-\frac{\beta}{\alpha^3} \bigg[ p_{k,xx} + 2\mathbf{T_{0,0}} p_{k} \bigg], \\ b^{[4]}_{k} &={\rm i} \frac{\beta}{\alpha^{4}} \bigg[ p_{k,xxx} +3 \mathbf{T_{0,0}} p_{k,x} +3 \mathbf{T_{1,0}} p_{k} \bigg] , \\ b^{[5]}_{k} &=\frac{\beta}{\alpha^{5}} \bigg[ p_{k,xxxx}+ 4 \mathbf{T_{0,0}} p_{k,xx}+(6 \mathbf{T_{1,0}} +2\mathbf{T_{0,1}})p_{k,x} +(4 \mathbf{T_{2,0}} +2 \mathbf{T_{1,1}} +2 \mathbf{T_{0,2}} +6 \mathbf{T_{0,0}}^{2})p_{k} \bigg], \\ b^{[6]}_{k} &= -{\rm i} \frac{\beta}{\alpha^{6}} \Bigg[ p_{k,xxxxx} + 5\mathbf{T_{0,0}} p_{k,xxx} + (10 \mathbf{T_{1,0}} + 5 \mathbf{T_{0,1}}) p_{k,xx} \\& + \Bigg( 10 \mathbf{T_{2,0}} + 5 \mathbf{T_{0,2}} + 10 \mathbf{T_{1,1}} + 10 \mathbf{T_{0,0}}^{2} \Bigg) p_{k,x} + \Bigg( 5 \mathbf{T_{3,0}} + 5 \mathbf{T_{2,1}} + 5 \mathbf{T_{1,2}} + 20 \mathbf{T_{0,0}} \mathbf{T_{1,0}} \Bigg) p_{k} \Bigg], \\ b^{[7]}_{k} &= -\frac{\beta}{\alpha^{7}} \Bigg[ p_{k,xxxxxx} + 6 \mathbf{T_{0,0}} p_{k,xxxx} + (9\mathbf{T_{0,1}}+15\mathbf{T_{1,0}}) p_{k,xxx} \\& +(15\mathbf{T_{0,0}}^{2}+11\mathbf{T_{0,2}}+20\mathbf{T_{2,0}}+25\mathbf{T_{1,1}})p_{k,xx} \\ &+\bigg( \mathbf{T_{0,0}}(15\mathbf{T_{0,1}}+45\mathbf{T_{1,0}})+15\mathbf{T_{3,0}}+4\mathbf{T_{0,3}}+20\mathbf{T_{1,2}}+25\mathbf{T_{2,1}} \bigg) p_{k,x} \\ &+ \bigg( (20\mathbf{T_{0,0}}^{3}+\mathbf{T_{0,0}}(20\mathbf{T_{0,2}}+35\mathbf{T_{2,0}}+25\mathbf{T_{1,1}})) +10\mathbf{T_{0,1}}^{2} \\& \hspace{3cm} +25\mathbf{T_{1,0}}^{2}+20\mathbf{T_{1,0}} 
\mathbf{T_{0,1}}+2\mathbf{T_{0,4}}+6\mathbf{T_{4,0}}+4\mathbf{T_{1,3}}+9\mathbf{T_{3,1}}+11\mathbf{T_{2,2}} \bigg) p_{k} \Bigg], \end{cases} \end{aligned}& \end{flalign} \begin{flalign} &\begin{aligned} \begin{cases} c^{[0]}_{k} &=0 , \quad c^{[1]}_{k} =\frac{\beta}{\alpha} r_{k} , \quad c^{[2]}_{k} ={\rm i} \frac{\beta}{\alpha^2} r_{k,x} , \quad c^{[3]}_{k} =-\frac{\beta}{\alpha^3} \bigg[ r_{k,xx} + 2 \mathbf{T_{0,0}} r_{k} \bigg] , \\ c^{[4]}_{k} &=-{\rm i} \frac{\beta}{\alpha^{4}} \bigg[ r_{k,xxx} +3 \mathbf{T_{0,0}} r_{k,x} +3\mathbf{T_{0,1}} r_{k} \bigg] , \\ c^{[5]}_{k} &=\frac{\beta}{\alpha^{5}} \bigg[ r_{k,xxxx} +4 \mathbf{T_{0,0}} r_{k,xx} +(6 \mathbf{T_{0,1}} +2\mathbf{T_{1,0}})r_{k,x} +(4 \mathbf{T_{0,2}} +2 \mathbf{T_{1,1}} +2 \mathbf{T_{2,0}} +6 \mathbf{T_{0,0}}^{2})r_{k} \bigg], \\ c^{[6]}_{k} &= {\rm i} \frac{\beta}{\alpha^{6}} \Bigg[ r_{k,xxxxx} + 5\mathbf{T_{0,0}} r_{k,xxx} + (10 \mathbf{T_{0,1}} + 5 \mathbf{T_{1,0}}) r_{k,xx} \\& + \Bigg( 10 \mathbf{T_{0,2}} + 5 \mathbf{T_{2,0}} + 10 \mathbf{T_{1,1}} + 10 \mathbf{T_{0,0}}^{2} \Bigg) r_{k,x} + \Bigg( 5 \mathbf{T_{0,3}} + 5 \mathbf{T_{1,2}} + 5 \mathbf{T_{2,1}} + 20 \mathbf{T_{0,0}} \mathbf{T_{0,1}} \Bigg) r_{k} \Bigg], \\ c^{[7]}_{k} &= -\frac{\beta}{\alpha^{7}} \Bigg[ r_{k,xxxxxx} + 6 \mathbf{T_{0,0}} r_{k,xxxx} + (15\mathbf{T_{0,1}}+9\mathbf{T_{1,0}}) r_{k,xxx} \\& +(15\mathbf{T_{0,0}}^{2}+20\mathbf{T_{0,2}}+11\mathbf{T_{2,0}}+25\mathbf{T_{1,1}})r_{k,xx} \\ &+\bigg( \mathbf{T_{0,0}}(45\mathbf{T_{0,1}}+15\mathbf{T_{1,0}})+4\mathbf{T_{3,0}}+15\mathbf{T_{0,3}}+25\mathbf{T_{1,2}}+20\mathbf{T_{2,1}} \bigg) r_{k,x} \\ &+ \bigg( (20\mathbf{T_{0,0}}^{3}+\mathbf{T_{0,0}}(35\mathbf{T_{0,2}}+20\mathbf{T_{2,0}}+25\mathbf{T_{1,1}})) +25\mathbf{T_{0,1}}^{2} \\& \hspace{3cm} +10\mathbf{T_{1,0}}^{2}+20\mathbf{T_{1,0}} \mathbf{T_{0,1}}+6\mathbf{T_{0,4}}+2\mathbf{T_{4,0}}+9\mathbf{T_{1,3}}+4\mathbf{T_{3,1}}+11\mathbf{T_{2,2}} \bigg) r_{k} \Bigg], \end{cases} \end{aligned}& \\ &\begin{aligned} \begin{cases} 
d^{[0]}_{kj} &= \beta_{2}, \, \text{for} \, \, k=j, \quad \text{and} \quad d^{[0]}_{kj}=0, \, \, \text{for} \, \, k \neq j, \quad \text{where} \quad k,j \in \{1,2,3\} \\ d^{[1]}_{kj} &= 0 , \quad d^{[2]}_{kj} = \frac{\beta}{\alpha^{2}} p_{j} r_{k} , \quad d^{[3]}_{kj} = -{\rm i} \frac{\beta}{\alpha^{3}} (p_{j,x} r_{k} - p_{j} r_{k,x}) , \\ d^{[4]}_{kj} &= - \frac{\beta}{\alpha^{4}} \bigg[ 3 \mathbf{T_{0,0}} p_{j} r_{k} +p_{j,xx} r_{k} - p_{j,x} r_{k,x} + p_{j} r_{k,xx} \bigg] , \\ d^{[5]}_{kj} &= {\rm i} \frac{\beta}{\alpha^{5}} \bigg[ 2 (\mathbf{T_{1,0}} - \mathbf{T_{0,1}}) p_{j} r_{k} +4 \mathbf{T_{0,0}} (p_{j,x} r_{k} - p_{j} r_{k,x}) + p_{j,xxx} r_{k} - p_{j} r_{k,xxx} + p_{j,x} r_{k,xx} -p_{j,xx} r_{k,x} \bigg], \\ d^{[6]}_{kj} &= \frac{\beta}{\alpha^{6}} \bigg[ \bigg( \mathbf{T_{0,0}}^{2} + 5(\mathbf{T_{0,2}} + \mathbf{T_{1,1}} + \mathbf{T_{2,0}}) \bigg) p_{j} r_{k} +5 \mathbf{T_{0,1}} p_{j} r_{k,x} +5 \mathbf{T_{1,0}} p_{j,x} r_{k} \\ & +5 \mathbf{T_{0,0}} (p_{j} r_{k,xx} -p_{j,x} r_{k,x} +p_{j,xx} r_{k}) +p_{j,xxxx} r_{k}-p_{j,xxx} r_{k,x} +p_{j,xx} r_{k,xx}-p_{j,x} r_{k,xxx}+p_{j} r_{k,xxxx} \bigg] \\ d^{[7]}_{kj} &= -{\rm i} \frac{\beta}{\alpha^{7}} \bigg[ \bigg( \mathbf{T_{0,0}} (\mathbf{T_{1,0}} -\mathbf{T_{0,1}}) -4(\mathbf{T_{3,0}}-\mathbf{T_{0,3}}) +(\mathbf{T_{2,1}}-\mathbf{T_{1,2}}) \bigg) p_{j} r_{k} \\ &+\bigg( 15 \mathbf{T_{0,0}}^{2} +8 \mathbf{T_{0,2}} +11 \mathbf{T_{2,0}} +13 \mathbf{T_{1,1}} \bigg) p_{j,x} r_{k} -\bigg( 15 \mathbf{T_{0,0}}^{2} +11 \mathbf{T_{0,2}} +8 \mathbf{T_{2,0}} +13 \mathbf{T_{1,1}} \bigg) p_{j} r_{k,x} \\ & +\bigg( 3 \mathbf{T_{0,1}} + 9 \mathbf{T_{1,0}} \bigg) p_{j,xx} r_{k} +\bigg( 3 \mathbf{T_{0,1}} - 3 \mathbf{T_{1,0}} \bigg) p_{j,x} r_{k,x} -\bigg( 9 \mathbf{T_{0,1}} + 3 \mathbf{T_{1,0}} \bigg) p_{j} r_{k,xx} \\ &+ 6 \mathbf{T_{0,0}} (p_{j,xxx} r_{k} -p_{j,xx} r_{k,x}+p_{j,x} r_{k,xx} -p_{j} r_{k,xxx}) \\ &+p_{j,xxxxx} r_{k}-p_{j,xxxx} r_{k,x}+p_{j,xxx} r_{k,xx} -p_{j,xx} r_{k,xxx}+p_{j,x} r_{k,xxxx}-p_{j} 
r_{k,xxxxx} \bigg], \end{cases} \end{aligned}& \end{flalign} where $\alpha=\alpha_{1}-\alpha_{2}$, $\beta=\beta_{1}-\beta_{2}$ and \begin{small} \begin{align*} \begin{cases} &\mathbf{T_{0,0}}=\sum\limits_{i=1}^{3} p_{i} r_{i}, \\[5mm] &\mathbf{T_{0,1}}=\sum\limits_{i=1}^{3} p_{i} r_{i,x}, \, \mathbf{T_{1,0}}=\sum\limits_{i=1}^{3} p_{i,x} r_{i}, \\[5mm] &\mathbf{T_{0,2}}=\sum\limits_{i=1}^{3} p_{i} r_{i,xx}, \, \mathbf{T_{2,0}}=\sum\limits_{i=1}^{3} p_{i,xx} r_{i}, \, \mathbf{T_{1,1}}=\sum\limits_{i=1}^{3} p_{i,x} r_{i,x}, \\[5mm] &\mathbf{T_{0,3}}=\sum\limits_{i=1}^{3} p_{i} r_{i,xxx}, \, \mathbf{T_{1,2}}=\sum\limits_{i=1}^{3} p_{i,x} r_{i,xx}, \, \mathbf{T_{2,1}}=\sum\limits_{i=1}^{3} p_{i,xx} r_{i,x}, \, \mathbf{T_{3,0}}=\sum\limits_{i=1}^{3} p_{i,xxx} r_{i}, \\[6mm] &\mathbf{T_{0,4}}=\sum\limits_{i=1}^{3} p_{i} r_{i,xxxx}, \, \mathbf{T_{1,3}}=\sum\limits_{i=1}^{3} p_{i,x} r_{i,xxx}, \, \mathbf{T_{2,2}}=\sum\limits_{i=1}^{3} p_{i,xx} r_{i,xx}, \\& \mathbf{T_{3,1}}=\sum\limits_{i=1}^{3} p_{i,xxx} r_{i,x}, \, \mathbf{T_{4,0}}=\sum\limits_{i=1}^{3} p_{i,xxxx} r_{i}. \end{cases} \end{align*} \end{small} \\ We always assume that $b^{[m]}=(b^{[m]}_{1},b^{[m]}_{2},b^{[m]}_{3})$ and $c^{[m]}=(c^{[m]}_{1},c^{[m]}_{2},c^{[m]}_{3})^{T}$, for $m \in \{1,2,3,4,5,6,7\}$. To derive the sixth-order six-component AKNS integrable hierarchy, we take the Lax matrix \begin{equation} V^{[6]}=V^{[6]}(u,\lambda)=(\lambda^{6} W)_{+}= \sum\limits_{m= 0}^{6}W_{m}\lambda^{6-m}, \end{equation} where the modification terms are taken to be zero. \newline We begin with the spatial and temporal equations of the spectral problems, with the associated Lax pair $\{U,V\}$: \cite{Ma2018} \begin{align}\label{AKNSsystem} \psi_{x} &= {\rm i} U \psi, \\ \psi_{t} &= {\rm i} {V} \psi, \end{align} where $V=V^{[6]}$ and $\psi$ is the eigenfunction. 
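The closed-form coefficients listed above can be checked against the recursion relations with a computer algebra system. The sympy sketch below (illustrative only) confirms, for instance, that $b^{[3]}_k$ follows from $b^{[2]}_k$, $a^{[2]}$ and $d^{[2]}_{jk}$ via $b^{[m+1]}_k=\frac{1}{\alpha}(-{\rm i}\,b^{[m]}_{k,x}+a^{[m]}p_k-\sum_j d^{[m]}_{jk}p_j)$, and that $a^{[3]}$ is consistent with $a^{[m]}_x={\rm i}(-\sum_i b^{[m]}_ir_i+\sum_i c^{[m]}_ip_i)$:

```python
import sympy as sp

x, alpha, beta = sp.symbols('x alpha beta')
p = [sp.Function(f'p{i}')(x) for i in range(1, 4)]
r = [sp.Function(f'r{i}')(x) for i in range(1, 4)]
# T_{m,n} = sum_i (d^m p_i/dx^m)(d^n r_i/dx^n), as defined in the text
T = lambda m, n: sum(sp.diff(p[i], x, m)*sp.diff(r[i], x, n) for i in range(3))

# closed forms quoted in the text
a2 = -beta/alpha**2 * T(0, 0)
a3 = -sp.I*beta/alpha**3 * (T(0, 1) - T(1, 0))
b2 = [-sp.I*beta/alpha**2 * sp.diff(pk, x) for pk in p]
b3 = [-beta/alpha**3 * (sp.diff(pk, x, 2) + 2*T(0, 0)*pk) for pk in p]
c3 = [-beta/alpha**3 * (sp.diff(rk, x, 2) + 2*T(0, 0)*rk) for rk in r]
d2 = sp.Matrix(3, 3, lambda k, j: beta/alpha**2 * p[j]*r[k])

# recursion b^{[3]}_k = (1/alpha)(-i b^{[2]}_{k,x} + a^{[2]} p_k - sum_j d^{[2]}_{jk} p_j)
for k in range(3):
    rec = (-sp.I*sp.diff(b2[k], x) + a2*p[k]
           - sum(d2[j, k]*p[j] for j in range(3)))/alpha
    assert sp.simplify(rec - b3[k]) == 0

# a^{[3]}_x = i(-sum_i b^{[3]}_i r_i + sum_i c^{[3]}_i p_i)
rhs = sp.I*(-sum(b3[i]*r[i] for i in range(3)) + sum(c3[i]*p[i] for i in range(3)))
assert sp.simplify(sp.diff(a3, x) - rhs) == 0
```

The higher coefficients $a^{[4]},\dots,a^{[7]}$, $b^{[4]}_k,\dots,b^{[7]}_k$ can be verified in the same fashion.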
\newline The Lax matrix operator $V$ is determined by the compatibility condition $\psi_{xt}=\psi_{tx}$ which leads to the zero curvature equation: \begin{equation} U_{t} - V_{x} + {\rm i} [U,V] = 0 , \end{equation} that gives the six-component system of soliton equations \begin{equation} u_{t}=\begin{pmatrix} p^{T} \\ r \end{pmatrix}_{t} ={\rm i} \begin{pmatrix} \alpha {b}^{[7]T} \\ -\alpha c^{[7]} \end{pmatrix} \end{equation} where ${b}^{[7]}$ and ${c}^{[7]}$ are defined earlier, and \begin{equation} V= \begin{pmatrix} \Vbold{11}{} & \Vbold{12}{} & \Vbold{13}{} & \Vbold{14}{} \\ \Vbold{21}{} & \Vbold{22}{} & \Vbold{23}{} & \Vbold{24}{} \\ \Vbold{31}{} & \Vbold{32}{} & \Vbold{33}{} & \Vbold{34}{} \\ \Vbold{41}{} & \Vbold{42}{} & \Vbold{43}{} & \Vbold{44}{} \end{pmatrix} \end{equation} where \begin{align} \nonumber \Vbold{11}{} &= a^{[0]} \lambda^{6} + a^{[2]} \lambda^{4} + a^{[3]} \lambda^{3} + a^{[4]} \lambda^{2} + a^{[5]} \lambda + a^{[6]}, & \Vbold{12}{} &= b_{1}^{[1]} \lambda^{5} + b_{1}^{[2]} \lambda^{4} +b_{1}^{[3]} \lambda^{3} + b_{1}^{[4]} \lambda^{2} + b_{1}^{[5]} \lambda + b_{1}^{[6]}, \\ \nonumber \Vbold{13}{} &= b_{2}^{[1]} \lambda^{5} + b_{2}^{[2]} \lambda^{4} +b_{2}^{[3]} \lambda^{3} + b_{2}^{[4]} \lambda^{2} + b_{2}^{[5]} \lambda + b_{2}^{[6]}, & \Vbold{14}{} &= b_{3}^{[1]} \lambda^{5} + b_{3}^{[2]} \lambda^{4} +b_{3}^{[3]} \lambda^{3} + b_{3}^{[4]} \lambda^{2} + b_{3}^{[5]} \lambda + b_{3}^{[6]}, \\ \nonumber \Vbold{21}{} &= c_{1}^{[1]} \lambda^{5} + c_{1}^{[2]} \lambda^{4} +c_{1}^{[3]} \lambda^{3} + c_{1}^{[4]} \lambda^{2} + c_{1}^{[5]} \lambda + c_{1}^{[6]}, & \Vbold{22}{} &= d_{11}^{[0]} \lambda^{6} + d_{11}^{[2]} \lambda^{4} + d_{11}^{[3]} \lambda^{3} + d_{11}^{[4]} \lambda^{2} + d_{11}^{[5]} \lambda + d_{11}^{[6]}, \\ \nonumber \Vbold{23}{} &= d_{12}^{[2]} \lambda^{4} + d_{12}^{[3]} \lambda^{3} + d_{12}^{[4]} \lambda^{2} + d_{12}^{[5]} \lambda + d_{12}^{[6]}, & \Vbold{24}{} &= d_{13}^{[2]} \lambda^{4} + d_{13}^{[3]} \lambda^{3} + d_{13}^{[4]} 
\lambda^{2} + d_{13}^{[5]} \lambda + d_{13}^{[6]}, \\ \nonumber \Vbold{31}{} &= c_{2}^{[1]} \lambda^{5} + c_{2}^{[2]} \lambda^{4} +c_{2}^{[3]} \lambda^{3} + c_{2}^{[4]} \lambda^{2} + c_{2}^{[5]} \lambda + c_{2}^{[6]}, & \Vbold{32}{} &= d_{21}^{[2]} \lambda^{4} + d_{21}^{[3]} \lambda^{3} + d_{21}^{[4]} \lambda^{2} + d_{21}^{[5]} \lambda + d_{21}^{[6]}, \\ \nonumber \Vbold{33}{} &= d_{22}^{[0]} \lambda^{6} + d_{22}^{[2]} \lambda^{4} + d_{22}^{[3]} \lambda^{3} + d_{22}^{[4]} \lambda^{2} + d_{22}^{[5]} \lambda + d_{22}^{[6]}, & \Vbold{34}{} &= d_{23}^{[2]} \lambda^{4} + d_{23}^{[3]} \lambda^{3} + d_{23}^{[4]} \lambda^{2} + d_{23}^{[5]} \lambda + d_{23}^{[6]}, \\ \nonumber \Vbold{41}{} &= c_{3}^{[1]} \lambda^{5} + c_{3}^{[2]} \lambda^{4} +c_{3}^{[3]} \lambda^{3} + c_{3}^{[4]} \lambda^{2} + c_{3}^{[5]} \lambda + c_{3}^{[6]}, & \Vbold{42}{} &= d_{31}^{[2]} \lambda^{4} + d_{31}^{[3]} \lambda^{3} + d_{31}^{[4]} \lambda^{2} + d_{31}^{[5]} \lambda + d_{31}^{[6]}, \\ \nonumber \Vbold{43}{} &= d_{32}^{[2]} \lambda^{4} + d_{32}^{[3]} \lambda^{3} + d_{32}^{[4]} \lambda^{2} + d_{32}^{[5]} \lambda + d_{32}^{[6]}, & \Vbold{44}{} &= d_{33}^{[0]} \lambda^{6} + d_{33}^{[2]} \lambda^{4} + d_{33}^{[3]} \lambda^{3} + d_{33}^{[4]} \lambda^{2} + d_{33}^{[5]} \lambda + d_{33}^{[6]}. 
\end{align} Thus, we deduce the coupled AKNS system of sixth-order equations \cite{Ma2018}: \begin{align} \nonumber p_{k,t} &= -{\rm i} \frac{\beta}{\alpha^{6}} \Bigg[ p_{k,xxxxxx} +6 (\sum\limits_{i=1}^{3} p_{i} r_{i}) p_{k,xxxx} +(9 \sum\limits_{i=1}^{3} p_{i} r_{i,x} +15 \sum\limits_{i=1}^{3} p_{i,x} r_{i}) p_{k,xxx} \\ \nonumber &+ \bigg( 15 (\sum\limits_{i=1}^{3} p_{i} r_{i})^{2} + 11 \sum\limits_{i=1}^{3} p_{i} r_{i,xx} + 20 \sum\limits_{i=1}^{3} p_{i,xx} r_{i} + 25 \sum\limits_{i=1}^{3} p_{i,x} r_{i,x} \bigg) p_{k,xx} \\ \nonumber &+ \bigg( (\sum\limits_{i=1}^{3} p_{i} r_{i}) (15 \sum\limits_{i=1}^{3} p_{i} r_{i,x} +45 \sum\limits_{i=1}^{3} p_{i,x} r_{i}) + 15 \sum\limits_{i=1}^{3} p_{i,xxx} r_{i} + 4 \sum\limits_{i=1}^{3} p_{i} r_{i,xxx} \\& \nonumber + 20 \sum\limits_{i=1}^{3} p_{i,x} r_{i,xx} + 25 \sum\limits_{i=1}^{3} p_{i,xx} r_{i,x} \bigg) p_{k,x} \\& \nonumber + \bigg( 20 (\sum\limits_{i=1}^{3} p_{i} r_{i})^{3} +(\sum\limits_{i=1}^{3} p_{i} r_{i}) (20 \sum\limits_{i=1}^{3} p_{i} r_{i,xx} +35 \sum\limits_{i=1}^{3} p_{i,xx} r_{i} +25 \sum\limits_{i=1}^{3} p_{i,x} r_{i,x}) \\& \nonumber + 10 (\sum\limits_{i=1}^{3} p_{i} r_{i,x})^{2} + 20 (\sum\limits_{i=1}^{3} p_{i,x} r_{i}) (\sum\limits_{i=1}^{3} p_{i} r_{i,x}) + 25 (\sum\limits_{i=1}^{3} p_{i,x} r_{i})^{2} \\& + 2 \sum\limits_{i=1}^{3} p_{i} r_{i,xxxx} + 4 \sum\limits_{i=1}^{3} p_{i,x} r_{i,xxx} + 11 \sum\limits_{i=1}^{3} p_{i,xx} r_{i,xx} + 9 \sum\limits_{i=1}^{3} p_{i,xxx} r_{i,x} + 6 \sum\limits_{i=1}^{3} p_{i,xxxx} r_{i} \bigg) p_{k} \Bigg], \nonumber \end{align} \vspace{-1cm} \begin{align} \nonumber r_{k,t} &= {\rm i} \frac{\beta}{\alpha^{6}} \Bigg[ r_{k,xxxxxx} +6 (\sum\limits_{i=1}^{3} p_{i} r_{i}) r_{k,xxxx} +(15 \sum\limits_{i=1}^{3} p_{i} r_{i,x} +9 \sum\limits_{i=1}^{3} p_{i,x} r_{i}) r_{k,xxx} \\ \nonumber &+ \bigg( 15 (\sum\limits_{i=1}^{3} p_{i} r_{i})^{2} + 20 \sum\limits_{i=1}^{3} p_{i} r_{i,xx} + 11 \sum\limits_{i=1}^{3} p_{i,xx} r_{i} + 25 \sum\limits_{i=1}^{3} p_{i,x} r_{i,x} \bigg) r_{k,xx} \\ \nonumber &+ \bigg( (\sum\limits_{i=1}^{3} p_{i} r_{i}) (45 \sum\limits_{i=1}^{3} p_{i} r_{i,x} +15 \sum\limits_{i=1}^{3} p_{i,x} r_{i}) + 4 \sum\limits_{i=1}^{3} p_{i,xxx} r_{i} + 15 \sum\limits_{i=1}^{3} p_{i} r_{i,xxx} \\& \nonumber + 25 \sum\limits_{i=1}^{3} p_{i,x} r_{i,xx} + 20 \sum\limits_{i=1}^{3} p_{i,xx} r_{i,x} \bigg) r_{k,x} \\& \nonumber + \bigg( 20 (\sum\limits_{i=1}^{3} p_{i} r_{i})^{3} +(\sum\limits_{i=1}^{3} p_{i} r_{i}) (35 \sum\limits_{i=1}^{3} p_{i} r_{i,xx} +20 \sum\limits_{i=1}^{3} p_{i,xx} r_{i} +25 \sum\limits_{i=1}^{3} p_{i,x} r_{i,x}) \\& \nonumber + 25 (\sum\limits_{i=1}^{3} p_{i} r_{i,x})^{2} + 20 (\sum\limits_{i=1}^{3} p_{i,x} r_{i}) (\sum\limits_{i=1}^{3} p_{i} r_{i,x}) + 10 (\sum\limits_{i=1}^{3} p_{i,x} r_{i})^{2} \\& \label{coupledequs} + 6 \sum\limits_{i=1}^{3} p_{i} r_{i,xxxx} + 9 \sum\limits_{i=1}^{3} p_{i,x} r_{i,xxx} + 11 \sum\limits_{i=1}^{3} p_{i,xx} r_{i,xx} + 4 \sum\limits_{i=1}^{3} p_{i,xxx} r_{i,x} + 2 \sum\limits_{i=1}^{3} p_{i,xxxx} r_{i} \bigg) r_{k} \Bigg], \end{align} where $k \in \{1,2,3\}$. \subsection{Nonlocal reverse-time six-component AKNS system} We study the nonlocal reverse-time reduction by imposing a specific symmetry on the spectral matrix, \begin{equation}\label{Ureduction} U^{T}(x,-t,-\lambda)=-CU(x,t,\lambda)C^{-1}, \end{equation} where $C= \begin{pmatrix} 1 & 0 \\ 0 & \Sigma \end{pmatrix}$ and $\Sigma$ is a constant invertible symmetric $3 \times 3$ matrix, in other words $\det \Sigma \neq 0$ and $\Sigma^{T}=\Sigma$. Because $U(x,t,\lambda)=\lambda \Lambda+P(x,t)$, for $P= \begin{pmatrix} 0 & p \\ r & 0 \end{pmatrix}$, using the reduction (\ref{Ureduction}) we can easily prove that \begin{equation}\label{Pequ} P^{T}(x,-t)=-C P(x,t) C^{-1}. \end{equation} It follows from (\ref{Pequ}) that \begin{equation}\label{prrelation} p^{T}(x,-t)=- \Sigma r(x,t) \quad \text{i.e.} \quad r(x,t) = -\Sigma^{-1} p^{T}(x,-t). 
\end{equation} Similarly, from $V(x,t,\lambda)=\lambda^{6} \Omega + Q(x,t,\lambda)$ along with (\ref{prrelation}), one can prove by a tedious calculation that \begin{equation} Q^{T}(x,-t,-\lambda) = C Q(x,t,\lambda) C^{-1}, \end{equation} and \begin{equation}\label{VQequ} V^{T}(x,-t,-\lambda) = C V(x,t,\lambda) C^{-1}, \end{equation} where $\Omega=\textit{diag}(\beta_{1},\beta_{2},\beta_{2},\beta_{2})$. \\ It is interesting that the two nonlocal Lax matrices $U^{T}(x,-t,-\lambda)$ and $V^{T}(x,-t,-\lambda)$ satisfy the equivalent zero curvature equation: \begin{equation} U_{t}^{T}(x,-t,-\lambda) + V_{x}^{T}(x,-t,-\lambda) +{\rm i} \big[ U^{T}(x,-t,-\lambda), V^{T}(x,-t,-\lambda) \big] = 0. \end{equation} By taking $\Sigma=\textit{diag}(\rho^{-1}_{1},\rho^{-1}_{2},\rho^{-1}_{3})$, where $\rho_{1}, \rho_{2}, \rho_{3}$ are non-zero real constants, we deduce from (\ref{prrelation}) the nonlocal relation between the components of the vectors $p$ and $r$, that is \begin{equation} r_{i}(x,t) = -\rho_{i} p_{i}(x,-t) \quad \text{for} \quad i \in \{1,2,3\}. 
\end{equation} Hence, we can reduce the coupled equations (\ref{coupledequs}) to the nonlocal reverse-time six-order equation: \begin{align*} p_{k,t}(x,t) &= -{\rm i} \frac{\beta}{\alpha^{6}} \Bigg[ p_{k,xxxxxx}(x,t) \\ & -6 \bigg( \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i}(x,-t) \bigg) p_{k,xxxx} \\& - \bigg( 9 \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,x}(x,-t) +15 \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i}(x,-t) \bigg) p_{k,xxx} \\ &+ \bigg( 15 \big( \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i}(x,-t) \big)^{2} - 11 \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,xx}(x,-t) \\& \hspace{3cm} - 20 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xx}(x,t) p_{i}(x,-t) - 25 \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i,x}(x,-t) \bigg) p_{k,xx} \\ &+ \Bigg( \bigg( \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i}(x,-t) \bigg) \bigg( 15 \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,x}(x,-t) +45 \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i}(x,-t) \bigg) \\& \hspace{2cm} - 15 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xxx}(x,t) p_{i}(x,-t) - 4 \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,xxx}(x,-t) \\& \hspace{3cm} - 20 \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i,xx}(x,-t) - 25 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xx}(x,t) p_{i,x}(x,-t) \Bigg) p_{k,x} \\& + \Bigg( -20 \bigg( \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i}(x,-t) \bigg)^{3} +\bigg( \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i}(x,-t) \bigg) \bigg( 20 \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,xx}(x,-t) \\& +35 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xx}(x,t) p_{i}(x,-t) +25 \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i,x}(x,-t) \bigg) \\& + 10 \big(\sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,x}(x,-t) \big)^{2} + 20 \bigg( \sum\limits_{i=1}^{3} \rho_{i}p_{i,x}(x,t) p_{i}(x,-t) \bigg) \bigg( \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,x}(x,-t) \bigg) \\& + 25 \big( \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i}(x,-t)\big)^{2} - 2 \sum\limits_{i=1}^{3} \rho_{i} p_{i}(x,t) p_{i,xxxx}(x,-t) 
\\& - 4 \sum\limits_{i=1}^{3} \rho_{i} p_{i,x}(x,t) p_{i,xxx}(x,-t) - 11 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xx}(x,t) p_{i,xx}(x,-t) \\& - 9 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xxx}(x,t) p_{i,x}(x,-t) - 6 \sum\limits_{i=1}^{3} \rho_{i} p_{i,xxxx}(x,t) p_{i}(x,-t) \Bigg) p_{k} \Bigg] \end{align*} for $k \in \{1,2,3\}$. \\[3mm] When $\rho_{i} < 0$ for all $i \in \{1,2,3\}$, the nonlinear terms take the focusing sign, and we obtain the focusing nonlocal reverse-time six-component sixth-order equation. Otherwise, if the $\rho_{i}$ are not all negative, we obtain mixed focusing and defocusing cases. \section{Riemann-Hilbert problems} The spatial and temporal spectral problems of the six-component sixth-order AKNS equations can be written as \begin{equation}\label{spatialequ} \psi_{x}=i U \psi = i (\lambda \Lambda + P) \psi, \end{equation} \begin{equation}\label{temporalequ} \psi_{t}=i V \psi = i (\lambda^{6} \Omega + Q) \psi, \end{equation} where $\Lambda=\textit{diag}(\alpha_{1},\alpha_{2},\alpha_{2},\alpha_{2})$, $\Omega=\textit{diag}(\beta_{1},\beta_{2},\beta_{2},\beta_{2})$, and \begin{equation} P= \begin{pmatrix}\label{Pequation} 0 & p_{1} & p_{2} & p_{3} \\ r_{1} & 0 & 0 & 0 \\ r_{2} & 0 & 0 & 0 \\ r_{3} & 0 & 0 & 0 \end{pmatrix}, \quad Q= \begin{pmatrix} \Qbold{11}{} & \Qbold{12}{} & \Qbold{13}{} & \Qbold{14}{} \\ \Qbold{21}{} & \Qbold{22}{} & \Qbold{23}{} & \Qbold{24}{} \\ \Qbold{31}{} & \Qbold{32}{} & \Qbold{33}{} & \Qbold{34}{} \\ \Qbold{41}{} & \Qbold{42}{} & \Qbold{43}{} & \Qbold{44}{} \end{pmatrix}, \end{equation} \begin{align*} \Qbold{11}{} &= a^{[2]} \lambda^{4} + a^{[3]} \lambda^{3} + a^{[4]} \lambda^{2} + a^{[5]} \lambda + a^{[6]}, & \Qbold{12}{} &= b_{1}^{[1]} \lambda^{5} + b_{1}^{[2]} \lambda^{4} +b_{1}^{[3]} \lambda^{3} + b_{1}^{[4]} \lambda^{2} + b_{1}^{[5]} \lambda + b_{1}^{[6]}, \\ \Qbold{13}{} &= b_{2}^{[1]} \lambda^{5} + b_{2}^{[2]} \lambda^{4} +b_{2}^{[3]} \lambda^{3} + b_{2}^{[4]} \lambda^{2} + 
b_{2}^{[5]} \lambda + b_{2}^{[6]}, & \Qbold{14}{} &= b_{3}^{[1]} \lambda^{5} + b_{3}^{[2]} \lambda^{4} +b_{3}^{[3]} \lambda^{3} + b_{3}^{[4]} \lambda^{2} + b_{3}^{[5]} \lambda + b_{3}^{[6]}, \\ \Qbold{21}{} &= c_{1}^{[1]} \lambda^{5} + c_{1}^{[2]} \lambda^{4} +c_{1}^{[3]} \lambda^{3} + c_{1}^{[4]} \lambda^{2} + c_{1}^{[5]} \lambda + c_{1}^{[6]}, & \Qbold{22}{} &= d_{11}^{[2]} \lambda^{4} + d_{11}^{[3]} \lambda^{3} + d_{11}^{[4]} \lambda^{2} + d_{11}^{[5]} \lambda + d_{11}^{[6]}, \\ \Qbold{23}{} &= d_{12}^{[2]} \lambda^{4} + d_{12}^{[3]} \lambda^{3} + d_{12}^{[4]} \lambda^{2} + d_{12}^{[5]} \lambda + d_{12}^{[6]}, & \Qbold{24}{} &= d_{13}^{[2]} \lambda^{4} + d_{13}^{[3]} \lambda^{3} + d_{13}^{[4]} \lambda^{2} + d_{13}^{[5]} \lambda + d_{13}^{[6]}, \\ \Qbold{31}{} &= c_{2}^{[1]} \lambda^{5} + c_{2}^{[2]} \lambda^{4} +c_{2}^{[3]} \lambda^{3} + c_{2}^{[4]} \lambda^{2} + c_{2}^{[5]} \lambda + c_{2}^{[6]}, & \Qbold{32}{} &= d_{21}^{[2]} \lambda^{4} + d_{21}^{[3]} \lambda^{3} + d_{21}^{[4]} \lambda^{2} + d_{21}^{[5]} \lambda + d_{21}^{[6]}, \\ \Qbold{33}{} &= d_{22}^{[2]} \lambda^{4} + d_{22}^{[3]} \lambda^{3} + d_{22}^{[4]} \lambda^{2} + d_{22}^{[5]} \lambda + d_{22}^{[6]}, & \Qbold{34}{} &= d_{23}^{[2]} \lambda^{4} + d_{23}^{[3]} \lambda^{3} + d_{23}^{[4]} \lambda^{2} + d_{23}^{[5]} \lambda + d_{23}^{[6]}, \\ \Qbold{41}{} &= c_{3}^{[1]} \lambda^{5} + c_{3}^{[2]} \lambda^{4} +c_{3}^{[3]} \lambda^{3} + c_{3}^{[4]} \lambda^{2} + c_{3}^{[5]} \lambda + c_{3}^{[6]}, & \Qbold{42}{} &= d_{31}^{[2]} \lambda^{4} + d_{31}^{[3]} \lambda^{3} + d_{31}^{[4]} \lambda^{2} + d_{31}^{[5]} \lambda + d_{31}^{[6]}, \\ \Qbold{43}{} &= d_{32}^{[2]} \lambda^{4} + d_{32}^{[3]} \lambda^{3} + d_{32}^{[4]} \lambda^{2} + d_{32}^{[5]} \lambda + d_{32}^{[6]}, & \Qbold{44}{} &= d_{33}^{[2]} \lambda^{4} + d_{33}^{[3]} \lambda^{3} + d_{33}^{[4]} \lambda^{2} + d_{33}^{[5]} \lambda + d_{33}^{[6]}. 
\end{align*} Throughout the presentation of this paper, we assume that $\alpha=\alpha_{1}-\alpha_{2}<0$ and $\beta=\beta_{1}-\beta_{2}<0$. \newline To find soliton solutions we start with an initial condition $(p(x,0),r^{T}(x,0))^{T}$ and evolve in time to reach $(p(x,t),r^{T}(x,t))^{T}$. Taking $p_{i}$ and $r_{i}$ in the Schwartz space, they decay rapidly, i.e., $p_{i} \rightarrow 0$ and $r_{i} \rightarrow 0$ as $x,t \rightarrow \pm \infty$ for $i \in \{1,2,3\}$. Therefore, from the spectral problems (\ref{spatialequ}) and (\ref{temporalequ}), the asymptotic behaviour of the fundamental matrix $\psi$ can be written as \begin{equation} \psi(x,t) \leadsto e^{i \lambda \Lambda x+i \lambda^{6}\Omega t}. \end{equation} Hence, the solution of the spectral problems can be written in the form \begin{equation}\label{psiequ} \psi(x,t) = \phi(x,t) e^{i \lambda \Lambda x+i \lambda^{6}\Omega t}. \end{equation} The Jost solution of the eigenfunction (\ref{psiequ}) requires that \cite{Yang2010,DrazinJohnson1989} \begin{equation}\label{boundary} \quad \phi(x,t) \rightarrow I_{4},\quad \text{as}\quad x,t \rightarrow \pm \infty, \end{equation} where $I_{4}$ is the $4\times 4$ identity matrix. The Lax pair (\ref{spatialequ}) and (\ref{temporalequ}) can be rewritten in terms of $\phi$ using equation (\ref{psiequ}), giving the equivalent expression of the spectral problems \begin{equation}\label{spatialphi} \phi_{x} = i \lambda [\Lambda,\phi] +i P \phi, \end{equation} \begin{equation}\label{temporalphi} \phi_{t} = i \lambda^{6} [\Omega,\phi] +i Q \phi. \end{equation} To construct the Riemann-Hilbert problems and their solutions in the reflectionless case, we are going to use the adjoint scattering equations of the spectral problems $\psi_{x}=iU\psi$ and $\psi_{t}=iV^{[6]}\psi$. 
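The equivalence between the $\psi$- and $\phi$-forms of the spatial spectral problem can be verified symbolically. The sympy sketch below (an illustration with generic entries, spatial part only) substitutes $\psi=\phi e^{i\lambda\Lambda x}$ into $\psi_x=i(\lambda\Lambda+P)\psi$ and confirms that the residual vanishes precisely when $\phi_x=i\lambda[\Lambda,\phi]+iP\phi$:

```python
import sympy as sp

x, lam, a1, a2 = sp.symbols('x lambda alpha1 alpha2')
Lam = sp.diag(a1, a2, a2, a2)
P = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'P{i}{j}'))
phi = sp.Matrix(4, 4, lambda i, j: sp.Function(f'phi{i}{j}')(x))
# e^{i lam Lam x} is diagonal since Lam is diagonal
E = sp.diag(*[sp.exp(sp.I*lam*Lam[k, k]*x) for k in range(4)])

psi = phi*E
residual = sp.diff(psi, x) - sp.I*(lam*Lam + P)*psi

# impose phi_x = i*lam*[Lam, phi] + i*P*phi and check the residual vanishes
phi_x = sp.I*lam*(Lam*phi - phi*Lam) + sp.I*P*phi
subs = {sp.diff(phi[i, j], x): phi_x[i, j] for i in range(4) for j in range(4)}
res = residual.subs(subs).applyfunc(sp.simplify)
assert res == sp.zeros(4, 4)
```

The temporal equation $\phi_t=i\lambda^{6}[\Omega,\phi]+iQ\phi$ follows by the same computation with $\Lambda,\,P$ replaced by $\lambda^{5}\Omega,\,Q$.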
Their adjoints are \begin{equation}\label{psiadjointequation} \tilde{\psi}_{x} = -i \tilde{\psi} U, \end{equation} \begin{equation}\label{psiadjointtemporalequation} \tilde{\psi}_{t} = -i \tilde{\psi} V^{[6]}, \end{equation} and the equivalent spectral adjoint equations read \begin{equation}\label{adjointspatialphi} \tilde{\phi}_{x} = -i \lambda [\tilde{\phi},\Lambda]-i\tilde{\phi} P, \end{equation} \begin{equation}\label{adjointtemporalphi} \tilde{\phi}_{t} = -i \lambda^{6} [\tilde{\phi},\Omega]-i\tilde{\phi} Q. \end{equation} Because ${\rm tr}(iP)=0$ and ${\rm tr}(iQ)=0$, using Liouville's formula \cite{Yang2010}, it is easy to see that $(\det(\phi))_{x}=0$, that is, $\det(\phi)$ is a constant, and utilizing the boundary condition (\ref{boundary}), we conclude \begin{equation} \det(\phi)=1, \end{equation} hence the Jost matrix $\phi$ is invertible. \newline Furthermore, as $\phi^{-1}_{x}=-\phi^{-1} \phi_{x} \phi^{-1}$, we can derive from (\ref{spatialphi}) \begin{equation}\label{adjointspatialphiinv} \phi^{-1}_{x} = -i \lambda [\phi^{-1},\Lambda]-i\phi^{-1} P. \end{equation} Thus, we can see that both $(\phi^{+})^{-1}$ and $(\phi^{-})^{-1}$ satisfy the spatial adjoint equation (\ref{adjointspatialphi}). We can also show that both satisfy the temporal adjoint equation (\ref{adjointtemporalphi}) as well. \par Notice that if the eigenfunction $\phi(x,t,\lambda)$ is a solution of the spectral problem (\ref{spatialphi}), then $\phi^{-1}(x,t,\lambda)$ is a solution of the adjoint spectral problem (\ref{adjointspatialphi}), implying that $C \phi^{-1}(x,t,\lambda)$ is also a solution of (\ref{adjointspatialphi}) with the same eigenvalue, since $C$ is a constant matrix. In a similar way, the nonlocal $\phi^{T}(x,-t,-\lambda) C$ is also a solution of the spectral adjoint problem (\ref{adjointspatialphi}). 
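The constancy of $\det(\phi)$ can also be observed numerically. The sketch below uses illustrative parameter choices only ($\alpha_1=-1$, $\alpha_2=1$, $\lambda=0.7$, sample sech-type Schwartz-class potentials with $r_i=-p_i$; none of these come from the text): it integrates $\phi_x=i\lambda[\Lambda,\phi]+iP\phi$ from $\phi=I_4$ at the left and checks that $\det(\phi)$ stays at $1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (not from the text)
a1, a2, lam = -1.0, 1.0, 0.7
Lam = np.diag([a1, a2, a2, a2])

def P(x):
    # sample Schwartz-class potentials; r_i = -p_i purely for illustration
    p = np.array([0.8, 0.5, 0.3])/np.cosh(x)
    M = np.zeros((4, 4))
    M[0, 1:] = p
    M[1:, 0] = -p
    return M

def rhs(x, y):
    # real/imaginary split of phi_x = i*lam*[Lam, phi] + i*P(x)*phi
    phi = y[:16].reshape(4, 4) + 1j*y[16:].reshape(4, 4)
    dphi = 1j*lam*(Lam@phi - phi@Lam) + 1j*P(x)@phi
    return np.concatenate([dphi.real.ravel(), dphi.imag.ravel()])

y0 = np.concatenate([np.eye(4).ravel(), np.zeros(16)])
sol = solve_ivp(rhs, [-15.0, 15.0], y0, rtol=1e-10, atol=1e-12)
phi_end = sol.y[:16, -1].reshape(4, 4) + 1j*sol.y[16:, -1].reshape(4, 4)

# det(phi) is conserved because the coefficient matrix is trace-free
det_drift = abs(np.linalg.det(phi_end) - 1.0)
assert det_drift < 1e-6
```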
Since the boundary condition is the same for both solutions as $x \rightarrow \pm \infty$, uniqueness of the solution guarantees that \begin{equation}\label{phiTphiminus} \phi^{T}(x,-t,-\lambda) = C \phi^{-1}(x,t,\lambda) C^{-1}. \end{equation} As a result, if $\lambda$ is an eigenvalue of the spectral problems, then $-\lambda$ is also an eigenvalue and the relation (\ref{phiTphiminus}) holds. \newline We now work with the spatial spectral problem (\ref{spatialphi}), assuming that the time is $t=0$. \newline For notational simplicity, we write $Y^{+}$ and $Y^{-}$ to indicate that the boundary conditions of a quantity $Y$ are set as $x \rightarrow +\infty$ and $x \rightarrow -\infty$, respectively. \newline We know that \begin{equation}\label{phiboundary} \phi^{\pm} \rightarrow I_{4} \quad \text{when} \quad x \rightarrow \pm \infty. \end{equation} From (\ref{psiequ}), this allows us to write \begin{equation}\label{psiphirelation} \psi^{\pm} = \phi^{\pm} e^{i \lambda \Lambda x}. \end{equation} Both $\psi^{+}$ and $\psi^{-}$ are fundamental matrix solutions of the spatial spectral equation (\ref{spatialequ}), so they are linearly related, i.e., there exists a scattering matrix $S(\lambda)$ such that \begin{equation}\label{psiminusplus} \psi^{-} = \psi^{+} S(\lambda). \end{equation} Substituting (\ref{psiphirelation}) into (\ref{psiminusplus}) leads to \begin{equation}\label{psiplusminus} \phi^{-} = \phi^{+} e^{i \lambda \Lambda x} S(\lambda) e^{-i \lambda \Lambda x}, \quad \text{for} \quad \lambda \in \mathbb{R}, \end{equation} where \begin{equation} S(\lambda)=(s_{ij})_{4 \times 4}= \begin{pmatrix} s_{11} & s_{12} & s_{13} & s_{14} \\ s_{21} & s_{22} & s_{23} & s_{24} \\ s_{31} & s_{32} & s_{33} & s_{34} \\ s_{41} & s_{42} & s_{43} & s_{44} \end{pmatrix}. \end{equation} Given that $det(\phi^{\pm})=1$, we obtain \begin{equation} det(S(\lambda))=1.
\end{equation} In addition, we can show from (\ref{psiplusminus}) and (\ref{phiTphiminus}) that $S(\lambda)$ possesses the involution relation \begin{equation}\label{Sequ} S^{T}(-\lambda) = C S^{-1}(\lambda) C^{-1}. \end{equation} We deduce from (\ref{Sequ}) that \begin{equation}\label{s11hats11relation} \hat{s}_{11}({\lambda})=s_{11}(-\lambda), \end{equation} where $S^{-1}=(\hat{s}_{ij})_{4 \times 4}$, $i,j \in \{1,2,3,4\}$, is the inverse scattering data matrix. \\[3mm] Recall that $\phi^{-} = \phi^{+} e^{i \lambda \Lambda x} S(\lambda) e^{-i \lambda \Lambda x}$ and $\phi^{\pm} \rightarrow I_{4}$ as $x \rightarrow \pm \infty$. In order to formulate Riemann-Hilbert problems we need to analyse the analyticity of the Jost matrices $\phi^{\pm}$. \newline To do so, we can use Volterra integral equations to write the solutions $\phi^{\pm}$ in a unique manner by means of the spatial spectral problem (\ref{spatialphi}): \begin{align}\label{Volt1} \phi^{-}(x,\lambda) &= I_{4} + i \int\limits^{x}_{- \infty} e^{i \lambda \Lambda (x-y)} P(y) \phi^{-}(y,\lambda) e^{i \lambda \Lambda (y-x)} dy, \\ \phi^{+}(x,\lambda) &= I_{4} - i \int\limits^{+ \infty}_{x} e^{i \lambda \Lambda (x-y)} P(y) \phi^{+}(y,\lambda) e^{i \lambda \Lambda (y-x)} dy. \end{align} We write the matrix $\phi^{-}$ as \begin{equation} \phi^{-}= \begin{pmatrix} \phi^{-}_{11} & \phi^{-}_{12} & \phi^{-}_{13} & \phi^{-}_{14} \\ \phi^{-}_{21} & \phi^{-}_{22} & \phi^{-}_{23} & \phi^{-}_{24} \\ \phi^{-}_{31} & \phi^{-}_{32} & \phi^{-}_{33} & \phi^{-}_{34} \\ \phi^{-}_{41} & \phi^{-}_{42} & \phi^{-}_{43} & \phi^{-}_{44} \end{pmatrix}, \end{equation} and denote $\phi^{+}$ similarly.
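As a quick consistency check (not part of the original derivation), differentiating the first Volterra equation in (\ref{Volt1}) with respect to $x$, and using $i\int_{-\infty}^{x} e^{i \lambda \Lambda (x-y)} P(y) \phi^{-}(y,\lambda) e^{i \lambda \Lambda (y-x)} dy = \phi^{-}-I_{4}$, recovers the spatial equation:

```latex
\phi^{-}_{x}
= iP\phi^{-} + i\lambda\Lambda\big(\phi^{-}-I_{4}\big) - i\lambda\big(\phi^{-}-I_{4}\big)\Lambda
= i\lambda[\Lambda,\phi^{-}] + iP\phi^{-},
```

since $[\Lambda,I_{4}]=0$; this is exactly (\ref{spatialphi}), so the Volterra representation encodes both the differential equation and the boundary condition at $-\infty$.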
So from (\ref{Volt1}) the components of the first column of $\phi^{-}$ are \begin{align} \phi^{-}_{11} &= 1 + i \int_{- \infty}^{x} (p_{1}(y) \phi^{-}_{21}(y,\lambda)+p_{2}(y) \phi^{-}_{31}(y,\lambda) +p_{3}(y) \phi^{-}_{41}(y,\lambda)) dy, \\ \phi^{-}_{21} &= i \int^{x}_{- \infty} r_{1}(y) \phi^{-}_{11}(y,\lambda) e^{-i \lambda \alpha (x-y)} dy, \\ \phi^{-}_{31} &= i \int^{x}_{- \infty} r_{2}(y) \phi^{-}_{11}(y,\lambda) e^{-i \lambda \alpha (x-y)} dy, \\ \phi^{-}_{41} &= i \int^{x}_{- \infty} r_{3}(y) \phi^{-}_{11}(y,\lambda) e^{-i \lambda \alpha (x-y)} dy. \end{align} \\ Similarly, the components of the second column of $\phi^{-}$ are \begin{align} \phi^{-}_{12} &= i \int_{- \infty}^{x} \bigg( p_{1}(y) \phi^{-}_{22}(y,\lambda)+p_{2}(y) \phi^{-}_{32}(y,\lambda) +p_{3}(y) \phi^{-}_{42}(y,\lambda) \bigg) e^{i \lambda \alpha (x-y)} dy, \\ \phi^{-}_{22} &=1 + i \int^{x}_{- \infty} r_{1}(y) \phi^{-}_{12}(y,\lambda) dy , \\ \phi^{-}_{32} &= i \int^{x}_{- \infty} r_{2}(y) \phi^{-}_{12}(y,\lambda) dy , \\ \phi^{-}_{42} &= i \int^{x}_{- \infty} r_{3}(y) \phi^{-}_{12}(y,\lambda) dy, \end{align} and the components of the third column of $\phi^{-}$ are \begin{align} \phi^{-}_{13} &= i \int_{- \infty}^{x} \bigg( p_{1}(y) \phi^{-}_{23}(y,\lambda)+p_{2}(y) \phi^{-}_{33}(y,\lambda) +p_{3}(y) \phi^{-}_{43}(y,\lambda) \bigg) e^{i \lambda \alpha (x-y)} dy , \\ \phi^{-}_{23} &= i \int^{x}_{- \infty} r_{1}(y) \phi^{-}_{13}(y,\lambda) dy , \\ \phi^{-}_{33} &= 1 + i \int^{x}_{- \infty} r_{2}(y) \phi^{-}_{13}(y,\lambda) dy , \\ \phi^{-}_{43} &= i \int^{x}_{- \infty} r_{3}(y) \phi^{-}_{13}(y,\lambda) dy, \end{align} and finally the components of the fourth column of $\phi^{-}$ are \begin{align} \phi^{-}_{14} &= i \int_{- \infty}^{x} \bigg( p_{1}(y) \phi^{-}_{24}(y,\lambda)+p_{2}(y) \phi^{-}_{34}(y,\lambda) +p_{3}(y) \phi^{-}_{44}(y,\lambda) \bigg) e^{i \lambda \alpha (x-y)} dy , \\ \phi^{-}_{24} &= i \int^{x}_{- \infty} r_{1}(y) \phi^{-}_{14}(y,\lambda) dy , \\ \phi^{-}_{34} &= i 
\int^{x}_{- \infty} r_{2}(y) \phi^{-}_{14}(y,\lambda) dy , \\ \phi^{-}_{44} &= 1 + i \int^{x}_{- \infty} r_{3}(y) \phi^{-}_{14}(y,\lambda) dy. \end{align} Recall that $\alpha < 0$. If $Im(\lambda)>0$ and $y<x$, then $|e^{-i \lambda \alpha (x-y)}|=e^{Im(\lambda) \alpha (x-y)}$ decays exponentially as $y \rightarrow - \infty$, and so each integral in the first column of $\phi^{-}$ converges. As a result, the components of the first column of $\phi^{-}$ are analytic in the upper half complex plane for $\lambda \in \mathbb{C}_{+}$, and continuous for $\lambda \in \mathbb{C}_{+} \cup \mathbb{R}$. \newline In the same way, for $y > x$, the components of the last three columns of $\phi^{+}$ are analytic in the upper half plane for $\lambda \in \mathbb{C}_{+}$ and continuous for $\lambda \in \mathbb{C}_{+} \cup \mathbb{R}$. \\[3mm] It is worth mentioning that when $Im(\lambda) < 0$, the first column of $\phi^{+}$ is analytic in the lower half plane for $\lambda \in \mathbb{C}_{-}$ and continuous for $\lambda \in \mathbb{C}_{-} \cup \mathbb{R}$, and the components of the last three columns of $\phi^{-}$ are analytic in the lower half plane for $\lambda \in \mathbb{C}_{-}$ and continuous for $\lambda \in \mathbb{C}_{-} \cup \mathbb{R}$. \\[3mm] Now, let us construct the Riemann-Hilbert problems. For the upper half-plane, we note that \begin{equation}\label{psiphiplusminus} \phi^{\pm} = \psi^{\pm} e^{-i \lambda \Lambda x}. \end{equation} Let $\phi^{\pm}_{j}$ be the $j$th column of $\phi^{\pm}$ for $j \in \{1,2,3,4\}$; hence the first Jost matrix solution can be taken as \begin{equation}\label{Pplusequ} P^{+}(x,\lambda)= (\phi_{1}^{-},\phi_{2}^{+},\phi_{3}^{+},\phi_{4}^{+}) =\phi^{-} H_{1} + \phi^{+} H_{2}, \end{equation} where $H_{1}=diag(1,0,0,0)$ and $H_{2}=diag(0,1,1,1)$. \\[3mm] Therefore, $P^{+}$ is analytic for $\lambda \in \mathbb{C}_{+}$ and continuous for $\lambda \in \mathbb{C}_{+} \cup \mathbb{R}$.
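The exponential decay driving this analyticity argument can also be checked numerically. The following sketch (the values of $\alpha$ and $\lambda$ are illustrative choices satisfying $\alpha<0$ and $Im(\lambda)>0$) verifies that $|e^{-i\lambda\alpha(x-y)}|=e^{Im(\lambda)\alpha(x-y)}$ and that this modulus decays as $y \rightarrow -\infty$:

```python
import numpy as np

alpha = -2.0            # alpha = alpha_1 - alpha_2 < 0, as assumed in the text
lam = 0.3 + 0.7j        # Im(lam) > 0: upper half-plane

x = 5.0
ys = np.linspace(x - 1.0, x - 40.0, 200)   # y < x, moving towards -infinity

# Modulus of the kernel appearing in the first column of the Volterra equations.
mods = np.abs(np.exp(-1j * lam * alpha * (x - ys)))

# |exp(-i*lam*alpha*(x-y))| = exp(Im(lam)*alpha*(x-y)): a negative exponent,
# since Im(lam) > 0, alpha < 0 and x - y > 0.
expected = np.exp(lam.imag * alpha * (x - ys))

assert np.allclose(mods, expected)
assert np.all(np.diff(mods) < 0) and mods[0] < 1.0   # strict exponential decay
```

Flipping the sign of $Im(\lambda)$ reverses the behaviour, in agreement with the lower half-plane discussion.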
\\[3mm] For the lower half-plane, we can construct $P^{-}$, analytic for $\lambda \in \mathbb{C}_{-}$, which is the counterpart of $P^{+}$. To do so, we utilize the equivalent spectral adjoint equation (\ref{adjointspatialphiinv}). Because $\tilde{\phi}^{\pm}=(\phi^{\pm})^{-1}$ and $\psi^{\pm} = \phi^{\pm} e^{i \lambda \Lambda x}$, we have \begin{equation} (\phi^{\pm})^{-1} = e^{i \lambda \Lambda x} (\psi^{\pm})^{-1}. \end{equation} Let $\tilde{\phi}_{j}^{\pm}$ be the $j$th row of $\tilde{\phi}^{\pm}$ for $j \in \{1,2,3,4\}$. As above, we can take \begin{equation}\label{Pminusequ} P^{-}(x,\lambda)=\bigg( \tilde{\phi}_{1}^{-},\tilde{\phi}_{2}^{+},\tilde{\phi}_{3}^{+},\tilde{\phi}_{4}^{+} \bigg)^{T}= H_{1}(\phi^{-})^{-1}+H_{2}(\phi^{+})^{-1}. \end{equation} Hence, $P^{-}$ is analytic for $\lambda \in \mathbb{C}_{-}$ and continuous for $\lambda \in \mathbb{C}_{-} \cup \mathbb{R}$. \\[3mm] Since both $\phi^{-}$ and $\phi^{+}$ satisfy \begin{equation}\label{phiTinvequation} \phi^{T}(x,-t,-\lambda) = C \phi^{-1}(x,t,\lambda) C^{-1}, \end{equation} using (\ref{Pplusequ}), we have \begin{equation} P^{+}(x,-t,-\lambda) = \phi^{-}(x,-t,-\lambda) H_{1} + \phi^{+}(x,-t,-\lambda) H_{2}, \end{equation} or equivalently \begin{equation}\label{Pplustranspose} (P^{+})^{T}(x,-t,-\lambda) = H_{1}^{T} (\phi^{-})^{T}(x,-t,-\lambda) + H_{2}^{T} (\phi^{+})^{T}(x,-t,-\lambda). \end{equation} Substituting (\ref{phiTinvequation}) into (\ref{Pplustranspose}), we obtain the nonlocal involution property \begin{equation}\label{PplusPminusrelation} (P^{+})^{T}(x,-t,-\lambda) = C P^{-} (x,t,\lambda) C^{-1}. \end{equation} Employing the analyticity of both $P^{+}$ and $P^{-}$, we can construct the Riemann-Hilbert problems \begin{equation} P^{-}P^{+}=J, \end{equation} where $J=e^{i \lambda \Lambda x} (H_{1}+H_{2}S)(H_{1}+S^{-1}H_{2}) e^{-i \lambda \Lambda x}$ for $\lambda \in \mathbb{R}$, and $S^{-1}=(\hat{s}_{ij})_{4 \times 4}$, $i,j \in \{1,2,3,4\}$, is the inverse scattering data matrix.
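The quoted form of the jump matrix $J$ follows by direct multiplication. Writing $E=e^{i \lambda \Lambda x}$, using $\phi^{-}=\phi^{+}ESE^{-1}$ from (\ref{psiplusminus}), and noting that the diagonal matrices $H_{1}$, $H_{2}$ and $E$ commute, with $H_{1}^{2}=H_{1}$, $H_{2}^{2}=H_{2}$ and $H_{1}H_{2}=0$:

```latex
P^{-}P^{+}
= \big(H_{1}ES^{-1}E^{-1}+H_{2}\big)(\phi^{+})^{-1}\,\phi^{+}\big(ESE^{-1}H_{1}+H_{2}\big)
= E\big(H_{1}+H_{1}S^{-1}H_{2}+H_{2}SH_{1}+H_{2}\big)E^{-1}
= E\,(H_{1}+H_{2}S)(H_{1}+S^{-1}H_{2})\,E^{-1} = J.
```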
Substituting (\ref{psiplusminus}) into (\ref{Pplusequ}), we have \begin{equation}\label{Pplussimplified} P^{+}(x,\lambda) = \phi^{+} (e^{i \lambda \Lambda x} S e^{-i \lambda \Lambda x} H_{1} +H_{2}). \end{equation} Because $\phi^{+}(x,\lambda) \rightarrow I_{4}$ when $x \rightarrow + \infty$, we get \begin{equation} \lim_{x \rightarrow + \infty} P^{+} = \begin{pmatrix} s_{11}(\lambda) & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \text{for} \quad \lambda \in \mathbb{C}_{+} \cup \mathbb{R}. \end{equation} In the same way, \begin{equation} \lim_{x \rightarrow - \infty} P^{-} = \begin{pmatrix} \hat{s}_{11}(\lambda) & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \text{for} \quad \lambda \in \mathbb{C}_{-} \cup \mathbb{R}. \end{equation} Thus, if we choose \begin{equation} \begin{array}{cc}\label{GplusPplusrelation} G^{+}(x,\lambda) = P^{+}(x,\lambda) \begin{pmatrix} s_{11}^{-1} (\lambda) & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad (G^{-})^{-1}(x,\lambda) = \begin{pmatrix} \hat{s}_{11}^{-1} (\lambda) & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} P^{-}(x,\lambda) \end{array}, \end{equation} the two generalized matrices $G^{+}(x,\lambda)$ and $G^{-}(x,\lambda)$ generate the matrix Riemann-Hilbert problems on the real line for the six-component AKNS system of sixth order, given by \begin{equation} G^{+}(x,\lambda) = G^{-}(x,\lambda) G_0(x,\lambda), \quad \text{for} \quad \lambda \in \mathbb{R}, \end{equation} where the jump matrix $G_0(x,\lambda)$ can be cast as \begin{equation}\label{Gequ} G_0(x,\lambda) = \begin{pmatrix} \hat{s}_{11}^{-1} (\lambda) & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} J \begin{pmatrix} s_{11}^{-1} (\lambda) & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \end{equation} which explicitly reads \begin{equation} G_0(x,\lambda) = \begin{pmatrix}
s_{11}^{-1} \hat{s}_{11}^{-1} & \hat{s}_{12}\hat{s}_{11}^{-1} e^{i \lambda \alpha x} & \hat{s}_{13}\hat{s}_{11}^{-1} e^{i \lambda \alpha x} & \hat{s}_{14}\hat{s}_{11}^{-1} e^{i \lambda \alpha x} \\[2mm] s_{21} s_{11}^{-1} e^{-i \lambda \alpha x} & 1 & 0 & 0 \\[4mm] s_{31} s_{11}^{-1} e^{-i \lambda \alpha x} & 0 & 1 & 0 \\[4mm] s_{41} s_{11}^{-1} e^{-i \lambda \alpha x} & 0 & 0 & 1 \end{pmatrix}, \end{equation} with the canonical normalization conditions: \begin{align} G^{+}(x,\lambda) \rightarrow I_{4} \quad \text{as} \quad \lambda \in \mathbb{C}_{+} \cup \mathbb{R} \rightarrow \infty, \\ G^{-}(x,\lambda) \rightarrow I_{4} \quad \text{as} \quad \lambda \in \mathbb{C}_{-} \cup \mathbb{R} \rightarrow \infty. \end{align} From (\ref{PplusPminusrelation}) along with (\ref{GplusPplusrelation}) and using (\ref{s11hats11relation}), we deduce the nonlocal involution property \begin{equation} (G^{+})^{T}(x,-t,-\lambda) = C (G^{-})^{-1} (x,t,\lambda) C^{-1}. \end{equation} Furthermore, from (\ref{Gequ}) and (\ref{s11hats11relation}), we derive the following nonlocal involution property for $G_0$: \begin{equation} G_0^{T}(x,-t,-\lambda) = C G_0(x,t,\lambda) C^{-1}. \end{equation} \subsection{Time evolution of the scattering data} At this point, we need to determine how the scattering data evolve in time. To do so, we differentiate equation (\ref{psiplusminus}) with respect to time $t$; applying (\ref{temporalphi}) then gives \begin{equation} S_{t} = i \lambda^{6} [\Omega,S], \end{equation} and thus \begin{equation} S_{t}= \begin{pmatrix} 0 & i \beta \lambda^{6} s_{12} & i \beta \lambda^{6} s_{13} & i \beta \lambda^{6} s_{14} \\ -i \beta \lambda^{6} s_{21} & 0 & 0 & 0 \\ -i \beta \lambda^{6} s_{31} & 0 & 0 & 0 \\ -i \beta \lambda^{6} s_{41} & 0 & 0 & 0 \end{pmatrix}.
\end{equation} As a result, we have \begin{equation} \begin{cases} s_{12}(t,\lambda) = s_{12}(0,\lambda) e^{i \beta \lambda^{6} t}, \\ s_{13}(t,\lambda) = s_{13}(0,\lambda) e^{i \beta \lambda^{6} t} , \\ s_{14}(t,\lambda) = s_{14}(0,\lambda) e^{i \beta \lambda^{6} t} , \\ s_{21}(t,\lambda) = s_{21}(0,\lambda) e^{-i \beta \lambda^{6} t} , \\ s_{31}(t,\lambda) = s_{31}(0,\lambda) e^{-i \beta \lambda^{6} t} , \\ s_{41}(t,\lambda) = s_{41}(0,\lambda) e^{-i \beta \lambda^{6} t} , \end{cases} \end{equation} while $s_{11},s_{22},s_{23},s_{24},s_{32},s_{33},s_{34},s_{42},s_{43},s_{44}$ are constants. \section{Soliton solutions} \subsection{General case} The determinant of the matrix $G^{\pm}$ determines the type of soliton solutions generated using the Riemann-Hilbert problems. In the regular case, when $det(G^{\pm}) \neq 0$, we obtain a unique soliton solution. In the non-regular case, i.e., when $det(G^{\pm})=0$, discrete eigenvalues can be generated in the spectral plane. This non-regular case can be transformed into the regular case to solve for soliton solutions \cite{Yang2010}. From (\ref{Pplussimplified}) and $det(\phi^{\pm})=1$, we can show that \begin{equation}\label{ppluss11} det(P^{+}(x,\lambda))=s_{11}(\lambda), \end{equation} and, in the same way, \begin{equation}\label{pminuss11hat} det(P^{-}(x,\lambda))=\hat{s}_{11}(\lambda). \end{equation} Because $det(S(\lambda))=1$, we have $S^{-1}(\lambda)=\bigg( cof(S(\lambda)) \bigg)^{T}$ and \begin{equation} \hat{s}_{11}= \begin{vmatrix} s_{22} & s_{23} & s_{24} \\ s_{32} & s_{33} & s_{34} \\ s_{42} & s_{43} & s_{44} \end{vmatrix}, \end{equation} which must vanish in the non-regular case. \\[3mm] To give rise to soliton solutions, we require the zeros of $det(P^{+}(x,\lambda))$ and $det(P^{-}(x,\lambda))$ to be simple.
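The cofactor expression for $\hat{s}_{11}$ can be checked numerically. The sketch below uses a random complex matrix rescaled to be unimodular as a stand-in for the scattering matrix (the random data is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex 4x4 matrix rescaled so that det(S) = 1, standing in for the
# unimodular scattering matrix S.
S = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = S / np.linalg.det(S) ** 0.25       # complex fourth root: det becomes 1

assert np.isclose(np.linalg.det(S), 1.0)

# With det(S) = 1, S^{-1} is the transposed cofactor matrix, so its (1,1)
# entry equals the 3x3 minor of S with row 1 and column 1 deleted -- the
# determinant quoted in the text for s-hat_{11}.
s_hat_11 = np.linalg.inv(S)[0, 0]
minor_11 = np.linalg.det(S[1:, 1:])

assert np.isclose(s_hat_11, minor_11)
```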
When $det(P^{+}(x,\lambda))=s_{11}(\lambda)=0$, we assume $s_{11}(\lambda)$ has simple zeros at discrete eigenvalues $\lambda_{k} \in \mathbb{C}_{+}$ for $k \in \{1,2,...,N\}$, while for $det(P^{-}(x,\lambda))=\hat{s}_{11}(\lambda)=0$, we assume $\hat{s}_{11}(\lambda)$ has simple zeros at discrete eigenvalues $\hat{\lambda}_{k} \in \mathbb{C}_{-}$ for $k \in \{1,2,...,N\}$; these are the poles of the transmission coefficients \cite{DrazinJohnson1989}. \\[3mm] From $\hat{s}_{11}(\lambda)=s_{11}(-\lambda)$ and $det(P^{\pm}(x,\lambda))=0$, we have the nonlocal involution relation \begin{equation}\label{lambdahatlambdarelation} \hat{\lambda}_{k}=- \lambda_{k}. \end{equation} Each kernel $Ker(P^{+}(x,\lambda_{k}))$ is spanned by a single column vector $v_{k}$; similarly, each $Ker(P^{-}(x,\hat{\lambda}_{k}))$ is spanned by a single row vector $\hat{v}_{k}$, such that \begin{equation}\label{Pplus} P^{+}(x,\lambda_{k}) v_{k}=0 \quad \text{for} \quad k \in \{1,2,...,N\}, \end{equation} and \begin{equation}\label{Pminus} \hat{v}_{k} P^{-}(x,\hat{\lambda}_{k})=0 \quad \text{for} \quad k \in \{1,2,...,N\}. \end{equation} To obtain explicit soliton solutions, we take $G_0=I_{4}$ in the Riemann-Hilbert problems. This forces the reflection coefficients to vanish: $s_{21}=s_{31}=s_{41}=0$ and $\hat{s}_{12}=\hat{s}_{13}=\hat{s}_{14}=0$.
\\[2mm] In that case, the Riemann-Hilbert problems can be presented as follows \cite{MaAugust2020}: \begin{equation}\label{Pplussum} G^{+}(x,\lambda)=I_{4}-\sum\limits_{k,j=1}^{N} \frac{v_{k}(M^{-1})_{kj}\hat{v}_{j}}{\lambda-\hat{\lambda}_{j}}, \end{equation} and \begin{equation}\label{Pminussum} (G^{-})^{-1}(x,\lambda)=I_{4}+\sum\limits_{k,j=1}^{N} \frac{v_{k}(M^{-1})_{kj}\hat{v}_{j}}{\lambda-\lambda_{k}}, \end{equation} where $M=(m_{kj})_{N \times N}$ is a matrix defined by \begin{equation} m_{kj}= \begin{cases} \frac{\hat{v}_{k} v_{j}}{\lambda_{j}-\hat{\lambda}_{k}} & \text{if} \quad \lambda_{j} \neq \hat{\lambda}_{k} \\ 0 & \text{if} \quad \lambda_{j} = \hat{\lambda}_{k} \end{cases} \quad k,j \in \{1,2,...,N\}. \end{equation} Since the zeros $\lambda_{k}$ and $\hat{\lambda}_{k}$ are constants, independent of space and time, we can explore the spatial and temporal evolution of the scattering vectors $v_{k}(x,t)$ and $\hat{v}_{k}(x,t)$ for $1\leq k \leq N$. \\[3mm] Taking the $x$-derivative of both sides of the equation \begin{equation} P^{+}(x,\lambda_{k})v_{k}=0, \quad 1\leq k \leq N, \end{equation} and knowing that $P^{+}$ satisfies the spectral spatial equivalent equation (\ref{spatialphi}), along with (\ref{Pplus}), we obtain \begin{equation} P^{+}(x,\lambda_{k}) \Bigg( \frac{dv_{k}}{dx}-i\lambda_{k} \Lambda v_{k} \Bigg)=0 \quad \text{for} \quad k \in \{1,2,...,N\}. \end{equation} In a similar manner, taking the $t$-derivative and using the temporal equation (\ref{temporalphi}) and (\ref{Pplus}), we obtain \begin{equation} P^{+}(x,\lambda_{k}) \Bigg( \frac{dv_{k}}{dt}-i\lambda^{6}_{k} \Omega v_{k} \Bigg)=0 \quad \text{for} \quad k \in \{1,2,...,N\}.
\end{equation} For the adjoint spectral equations (\ref{adjointspatialphi}) and (\ref{adjointtemporalphi}), we obtain the analogous results \begin{equation} \Bigg( \frac{d\hat{v}_{k}}{dx}+i\hat{\lambda}_{k} \hat{v}_{k} \Lambda \Bigg) P^{-}(x,\hat{\lambda}_{k})=0, \end{equation} and \begin{equation} \Bigg( \frac{d\hat{v}_{k}}{dt}+i\hat{\lambda}^{6}_{k} \hat{v}_{k} \Omega \Bigg) P^{-}(x,\hat{\lambda}_{k})=0. \end{equation} Because $v_{k}$ spans the kernel of $P^{+}(x,\lambda_{k})$, the vectors $\frac{d v_{k}}{dx}-i\lambda_{k} \Lambda v_{k}$ and $\frac{d v_{k}}{dt}-i\lambda^{6}_{k} \Omega v_{k}$ are scalar multiples of $v_{k}$. \newline Hence, without loss of generality, we can take the space dependence of $v_{k}$ to be \begin{equation}\label{vkxderivative} \frac{d v_{k}}{dx}=i\lambda_{k} \Lambda v_{k}, \quad 1\leq k \leq N, \end{equation} and the time dependence of $v_{k}$ to be \begin{equation}\label{vktderivative} \frac{d v_{k}}{dt}=i\lambda^{6}_{k} \Omega v_{k}, \quad 1\leq k \leq N. \end{equation} Solving equations (\ref{vkxderivative}) and (\ref{vktderivative}), we conclude that \begin{equation}\label{vkequation} v_{k}(x,t)=e^{i\lambda_{k}\Lambda x + i\lambda^{6}_{k} \Omega t} w_{k} \quad \text{for} \quad k \in \{1,2,...,N\}, \end{equation} where $w_{k}$ is a constant column vector. Likewise, we get \begin{equation}\label{vkhatequation} \hat{v}_{k}(x,t)=\hat{w}_{k} e^{-i\hat{\lambda}_{k}\Lambda x - i\hat{\lambda}^{6}_{k} \Omega t} \quad \text{for} \quad k \in \{1,2,...,N\}, \end{equation} where $\hat{w}_{k}$ is a constant row vector. \\[3mm] From (\ref{Pplus}) and using the formula (\ref{PplusPminusrelation}), it is easy to see that \begin{equation} v_{k}^{T}(x,-t,-\lambda_{k}) (P^{+})^{T}(x,-t,-\lambda_{k})= v_{k}^{T}(x,-t,-\lambda_{k}) C P^{-}(x,t,\lambda_{k}) C^{-1} = 0.
\end{equation} Since $C^{-1}$ is invertible, it follows that $v_{k}^{T}(x,-t,-\lambda_{k}) C P^{-}(x,t,\lambda_{k})=0$; comparing with (\ref{Pminus}), this leads to \begin{align} v_{k}^{T}(x,-t,-\lambda_{k}) C P^{-}(x,t,\lambda_{k}) &= \hat{v}_{k}(x,t,\hat{\lambda}_{k}) P^{-}(x,t,\hat{\lambda}_{k}) \\ &=\hat{v}_{k}(x,t,-\hat{\lambda}_{k}) P^{-}(x,t,-\hat{\lambda}_{k}) =0. \end{align} From (\ref{lambdahatlambdarelation}), we have $\hat{\lambda}_{k}=-\lambda_{k}$ for $k \in \{1,2,...,N\}$, so we can take \begin{equation} \hat{v}_{k}(x,t,-\hat{\lambda}_{k})= v_{k}^{T}(x,-t,-\lambda_{k}) C. \end{equation} Thus, the involution relations (\ref{vkequation}) and (\ref{vkhatequation}) give \begin{equation}\label{vknonlocalequ} v_{k}(x,t)=e^{i\lambda_{k} \Lambda x + i\lambda_{k}^{6} \Omega t} w_{k}, \end{equation} \begin{equation}\label{vkhatnonlocalequ} \hat{v}_{k}(x,t)= w^{T}_{k} e^{-i\hat{\lambda}_{k} \Lambda x - i\hat{\lambda}_{k}^{6} \Omega t} C. \end{equation} Because the jump matrix is $G_{0}=I_{4}$, we can solve the Riemann-Hilbert problem explicitly. As a result, we can determine the potentials by computing the matrix $P^{+}$. Because $G^{+}$ is analytic in $\lambda$, we can expand it as follows: \begin{equation}\label{Gexpansion} G^{+}(x,\lambda)=I_{4}+\frac{1}{\lambda} G^{+}_{1}(x)+O(\frac{1}{\lambda^{2}}), \quad \text{when} \quad \lambda \rightarrow \infty. \end{equation} Because $G^{+}$ satisfies the spectral problem (\ref{spatialphi}), substituting the expansion and matching coefficients of like powers of $\frac{1}{\lambda}$, at order $O(1)$ we get \begin{equation}\label{G1plusequs} P=-[\Lambda,G^{+}_{1}].
\end{equation} If we denote \begin{equation} G_{1}^{+}= \begin{pmatrix} (G_{1}^{+})_{11} & (G_{1}^{+})_{12} & (G_{1}^{+})_{13} & (G_{1}^{+})_{14} \\ (G_{1}^{+})_{21} & (G_{1}^{+})_{22} & (G_{1}^{+})_{23} & (G_{1}^{+})_{24} \\ (G_{1}^{+})_{31} & (G_{1}^{+})_{32} & (G_{1}^{+})_{33} & (G_{1}^{+})_{34} \\ (G_{1}^{+})_{41} & (G_{1}^{+})_{42} & (G_{1}^{+})_{43} & (G_{1}^{+})_{44} \end{pmatrix} \end{equation} then \begin{equation} P=-[\Lambda,G_{1}^{+}]= \begin{pmatrix} 0 & -\alpha (G_{1}^{+})_{12} & -\alpha (G_{1}^{+})_{13} & -\alpha (G_{1}^{+})_{14} \\ \alpha(G_{1}^{+})_{21} & 0 & 0 & 0 \\ \alpha(G_{1}^{+})_{31} & 0 & 0 & 0 \\ \alpha(G_{1}^{+})_{41} & 0 & 0 & 0 \end{pmatrix}. \end{equation} Consequently, we can recover the potentials $p_{i}$ and $r_{i}$ for $i \in \{1,2,3\}$: \begin{align}\label{p123equations} p_{1} &= -\alpha (G_{1}^{+})_{12}, \quad r_{1}= \alpha (G_{1}^{+})_{21}, \nonumber \\ p_{2} &= -\alpha (G_{1}^{+})_{13}, \quad r_{2}= \alpha (G_{1}^{+})_{31}, \\ p_{3} &= -\alpha (G_{1}^{+})_{14}, \quad r_{3}= \alpha (G_{1}^{+})_{41}. \nonumber \end{align} It can be seen from (\ref{Gexpansion}) that \begin{equation} G_{1}^{+} = \lambda \lim_{\lambda \rightarrow \infty} (G^{+}(x,\lambda)-I_{4}), \end{equation} then using equation (\ref{Pplussum}), we deduce \begin{equation}\label{G1plussummation} G_{1}^{+} = - \sum\limits_{k,j=1}^{N} v_{k} (M^{-1})_{k,j} \hat{v}_{j}. \end{equation} In addition, by the use of equations (\ref{Pequ}) and (\ref{G1plusequs}), we can easily prove the following nonlocal involution property \begin{equation}\label{P1plusequ} (G_{1}^{+})^{T}(x,-t) = C G_{1}^{+}(x,t) C^{-1}. 
\end{equation} By substituting (\ref{G1plussummation}) into (\ref{p123equations}) and using (\ref{vknonlocalequ}) and (\ref{vkhatnonlocalequ}), we generate the $N$-soliton solution to the nonlocal reverse-time six-component AKNS system of sixth order: \begin{equation} p_{i} = \alpha \sum\limits_{k,j=1}^{N} v_{k1} (M^{-1})_{kj} \hat{v}_{j,i+1} \quad \textrm{for} \quad i \in \{1,2,3\}, \end{equation} where each $w_{k}$ is an arbitrary constant column vector in $\mathbb{C}^{4}$, and \[v_{k}=(v_{k1},v_{k2},v_{k3},v_{k4})^{T},\ \hat{v}_{k}=(\hat{v}_{k1},\hat{v}_{k2},\hat{v}_{k3},\hat{v}_{k4}).\] \section{Exact soliton solutions and dynamics} \subsection{Explicit one-soliton solution and its dynamics} A general explicit one-soliton solution in the reverse-time case, when $N=1$, $w_{1}=(w_{11},w_{12},w_{13},w_{14})^{T}$, $\lambda_{1} \in \mathbb{C}$ is arbitrary, and $\hat{\lambda}_{1}=-\lambda_{1}$, is given by \begin{align}\label{p1general} p_{1}(x,t)= \frac{ 2 \rho_{2} \rho_{3} \lambda_{1} (\alpha_{1}-\alpha_{2}) w_{11} w_{12} e^{i \lambda_{1} (\alpha_{1}+\alpha_{2})x+i\lambda_{1}^{6}(\beta_{1}-\beta_{2})t}}{\rho_{1} \rho_{2} \rho_{3} w_{11}^{2} e^{2i\lambda_{1} \alpha_{1}x} +(\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+ \rho_{1} \rho_{2} w_{14}^{2}) e^{2i\lambda_{1}\alpha_{2}x}}, \\ p_{2}(x,t)= \frac{ 2 \rho_{1} \rho_{3} \lambda_{1} (\alpha_{1}-\alpha_{2}) w_{11} w_{13} e^{i \lambda_{1} (\alpha_{1}+\alpha_{2})x+i\lambda_{1}^{6}(\beta_{1}-\beta_{2})t}}{\rho_{1} \rho_{2} \rho_{3} w_{11}^{2} e^{2i\lambda_{1} \alpha_{1}x} +(\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+ \rho_{1} \rho_{2} w_{14}^{2}) e^{2i\lambda_{1}\alpha_{2}x}}, \\ p_{3}(x,t)= \frac{ 2 \rho_{1} \rho_{2} \lambda_{1} (\alpha_{1}-\alpha_{2}) w_{11} w_{14} e^{i \lambda_{1} (\alpha_{1}+\alpha_{2})x+i\lambda_{1}^{6}(\beta_{1}-\beta_{2})t}}{\rho_{1} \rho_{2} \rho_{3} w_{11}^{2} e^{2i\lambda_{1} \alpha_{1}x} +(\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+
\rho_{1} \rho_{2} w_{14}^{2}) e^{2i\lambda_{1}\alpha_{2}x}}. \end{align} We can get the amplitude of $p_{1}$: \begin{equation}\label{p1magnitude} |p_{1}(x,t)|= 2 A e^{-\beta t Im(\lambda_{1}^{6})}, \end{equation} where \begin{equation} A= \Bigg| \frac{ 2 \lambda_{1} \rho_{2} \rho_{3} (\alpha_{1}-\alpha_{2}) w_{11} w_{12}e^{-Im(\lambda_{1} (\alpha_{1}+\alpha_{2})x)}}{\rho_{1} \rho_{2} \rho_{3} w_{11}^{2} e^{2i\lambda_{1} \alpha_{1}x} +(\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+ \rho_{1} \rho_{2} w_{14}^{2}) e^{2i\lambda_{1}\alpha_{2}x}} \Bigg|. \end{equation} We can see from the expression of $p_{1}$ that the real part of the phase vanishes, i.e. $Re({\rm i} 2 \lambda_{1} \alpha x)=0$, so the phase velocity is zero. Hence the one-soliton is not a travelling wave; it is stationary in space. \\[3mm] Fixing $x=x_{0}$, the amplitude is $|p_{1}(x,t)|= 2 A|_{x=x_{0}} e^{-\beta t Im(\lambda_{1}^{6})}$. If $Im(\lambda_{1}^{6})<0$ the amplitude decays exponentially, it grows exponentially for $Im(\lambda_{1}^{6})>0$, and when $Im(\lambda_{1}^{6})=0$ the amplitude remains constant over time. \\[3mm] In this reverse-time case, a one-soliton never collapses: its amplitude either strictly grows, strictly decays, or stays constant.
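This time dependence can be verified directly from the explicit formula (\ref{p1general}). The sketch below implements $p_{1}(x,t)$ with the parameter values of Figure \ref{1solitonplot1} and checks that, at fixed $x_{0}$, the amplitude evolves exactly like $e^{-\beta\, Im(\lambda_{1}^{6})\, t}$; for this eigenvalue $Im(\lambda_{1}^{6})<0$, so the soliton decays:

```python
import numpy as np

# Parameter values of the first one-soliton figure (label 1solitonplot1):
# (rho1, rho2, rho3, a1, a2, b1, b2) = (-1, -2, -1, -1, 1, -1, 1),
# lambda_1 = -0.1 + 0.5i, w_1 = (1, i, 2+i, 1).
r1, r2, r3 = -1.0, -2.0, -1.0
a1, a2 = -1.0, 1.0
b1, b2 = -1.0, 1.0
beta = b1 - b2                       # beta < 0, as assumed throughout
lam = -0.1 + 0.5j
w11, w12, w13, w14 = 1.0, 1j, 2.0 + 1j, 1.0

def p1(x, t):
    """One-soliton p1(x, t) from the explicit formula in the text."""
    num = (2 * r2 * r3 * lam * (a1 - a2) * w11 * w12
           * np.exp(1j * lam * (a1 + a2) * x + 1j * lam**6 * beta * t))
    den = (r1 * r2 * r3 * w11**2 * np.exp(2j * lam * a1 * x)
           + (r2 * r3 * w12**2 + r1 * r3 * w13**2 + r1 * r2 * w14**2)
           * np.exp(2j * lam * a2 * x))
    return num / den

# At fixed x0, |p1| evolves exactly like exp(-beta * Im(lam^6) * t).
x0, t1, t2 = 0.3, 0.0, 2.0
ratio = abs(p1(x0, t2)) / abs(p1(x0, t1))
predicted = np.exp(-beta * (lam**6).imag * (t2 - t1))

assert np.isclose(ratio, predicted)
assert (lam**6).imag < 0 and ratio < 1.0   # this eigenvalue gives decay
```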
\newline From the spectral plane, let $\lambda_{1}=\xi + i \eta=|\lambda_{1}| e^{i \theta}$, where $|\lambda_{1}|>0$ and $0 \leq \theta<2\pi$; then: \begin{equation} \begin{cases} \theta \in \big( \frac{n}{6} \pi,\frac{n+1}{6} \pi \big) \ \text{with $n$ even: the amplitude of the soliton is increasing,} \\ \theta \in \big( \frac{n}{6} \pi,\frac{n+1}{6} \pi \big) \ \text{with $n$ odd: the amplitude of the soliton is decreasing,} \\ \theta = \frac{n}{6} \pi \ \text{for some} \ n \in \{0,1,2,\ldots,11\}\text{: the amplitude of the soliton is constant,} \\ \theta \in \{0,\pi\}\text{: we obtain a breather with constant amplitude.} \end{cases} \end{equation} This classification is illustrated in Figure \ref{1solitonsymmetry1}. \begin{figure}[H] \centering \includegraphics[width=12cm,height=9cm]{1solitonsymmetry1paper3.png} \caption{Spectral plane of eigenvalues.} \label{1solitonsymmetry1} \end{figure} \vspace{-0.5cm} For the one-soliton solution, when $\lambda_{1}$ does not lie on the real axis, the imaginary axis or the trisectors, i.e. $\lambda_{1} \notin \{\xi, {\rm i} \eta, (1\pm {\rm i} \sqrt{3}) \xi,(1\pm {\rm i} \frac{1}{\sqrt{3}}) \xi\}$, the amplitude of the potential grows or decays exponentially, according to whether $Im(\lambda_{1}^{6})>0$ or $Im(\lambda_{1}^{6})<0$, respectively. Figures \ref{1solitonplot1} and \ref{1solitonplot2} show two examples where the amplitude decays and grows exponentially, respectively. \par The amplitude does not change when $Im(\lambda_{1}^{6})=0$, i.e., when $\lambda_{1}$ lies on the real axis, the imaginary axis or the trisectors. If $\lambda_{1}$ lies on the imaginary axis or the trisectors, then we have a fundamental soliton as seen in Figure \ref{1solitonplot3}. If $\lambda_{1}$ is purely imaginary, then the Lax matrix $U(u,\lambda)$ is a skew-Hermitian matrix.
On the other hand, if $\lambda_{1}$ lies on the real axis, we have a breather which is a periodic one-soliton with period $\frac{\pi}{|\alpha \lambda_{1} |}$ as seen in figure \ref{1solitonplot4}. This is a consequence of the Lax matrix $U(u,\lambda)$ being a Hermitian matrix. \newpage \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot1-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot1-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot1-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot1-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the one-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-2,-1,-1,1,-1,1)$, $\lambda_{1}=-0.1+0.5i$, $w_{1}=(1,i,2+i,1)$.}% \label{1solitonplot1}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot2-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot2-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot2-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot2-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the one-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-2,-1,-1,1,-1,1)$, $\lambda_{1}=0.1+0.5i$, $w_{1}=(1,i,2+i,1)$.}% \label{1solitonplot2}% \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot3-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot3-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot3-2D.png}% 
\includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot3-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the one-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,-2,-1,-1,1,-1,1)$, $\lambda_{1}=2i$, $w_{1}=(1,i,2+i,1)$.}% \label{1solitonplot3}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot4-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot4-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot4-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-onesoliton-plot4-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the one-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,1,1,-1,1,-1,1)$, $\lambda_{1}=0.5$, $w_{1}=(1,i,2+i,1)$.}% \label{1solitonplot4}% \end{center} \end{figure} \newpage \begin{remark} In the case of the one-soliton, when the AKNS system (\ref{AKNSsystem}) has higher even order $2m$, $m \in \mathbb{N}$, so that $\lambda_{1}^{2m}=|\lambda_{1}|^{2m} e^{i2m \theta}$, the amplitude of $p_{1}$ can be written in the form \begin{equation}\label{p1magnitudegeneral} |p_{1}(x,t)|= A e^{-Im(\lambda_{1}^{2m}(\beta_{1}-\beta_{2})t+ \lambda_{1} (\alpha_{1}+\alpha_{2})x)} \end{equation} where $A$ is a constant. From (\ref{p1magnitudegeneral}), the condition $Im(\lambda_{1}^{2m})=|\lambda_{1}|^{2m} \sin(2m\theta)=0$ partitions the complex plane into $4m$ sectors. \\ If $0<|\lambda_{1}|<1$ then $\lim\limits_{m \rightarrow \infty}Im(\lambda_{1}^{2m})=0$; thus, for any $\lambda_{1}$ lying inside the unit disk, the growth or decay rate of the soliton amplitude tends to zero as the order increases.
\\ If $|\lambda_{1}|=1$, i.e. $\lambda_{1}$ lies on the circle of radius 1, then the amplitude $|p_{1}(x,t)|$ will be bounded as $A e^{\beta t}\leq |p_{1}(x,t)| \leq A e^{-\beta t}$, where $\beta = \beta_{1}-\beta_{2}<0$. \\ If $|\lambda_{1}|>1$ then $\mathrm{Im}(\lambda_{1}^{2m}) \rightarrow \pm \infty$ as $m \rightarrow \infty$, so the amplitude will either grow exponentially or decay to zero exponentially. \end{remark} \subsection{Explicit two-soliton solution and its dynamics} A general explicit two-soliton solution in the reverse-time case, when $N=2$, $w_{1}=(w_{11},w_{12},w_{13},w_{14})^{T}$, $w_{2}=(w_{21},w_{22},w_{23},w_{24})^{T}$, $(\lambda_{1},\lambda_{2}) \in \mathbb{C}^{2}$ are arbitrary, and $\hat{\lambda}_{1}=-\lambda_{1}$, $\hat{\lambda}_{2}=-\lambda_{2}$, is given, if $\lambda_{1} \neq - \lambda_{2}$, by \begin{align} p_{1}(x,t) = 2 \rho_{2} \rho_{3} (\lambda_{1}+\lambda_{2}) (\alpha_{1}-\alpha_{2}) \frac{A(x,t)}{B(x,t)}, \\ p_{2}(x,t) = 2 \rho_{1} \rho_{3} (\lambda_{1}+\lambda_{2}) (\alpha_{1}-\alpha_{2}) \frac{C(x,t)}{B(x,t)}, \\ p_{3}(x,t) = 2 \rho_{1} \rho_{2} (\lambda_{1}+\lambda_{2}) (\alpha_{1}-\alpha_{2}) \frac{D(x,t)}{B(x,t)}, \end{align} where \begin{flalign} &\begin{aligned} A(x,t) = e^{i[\lambda_{2}^{6} (\beta_{1}-\beta_{2}) t +\lambda_{2} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{22} M (\lambda_{1}+\lambda_{2}) - 2 w_{12} K \lambda_{1} \bigg) w_{21} \lambda_{2} e^{i 2\alpha_{2} \lambda_{1} x} \\[-2mm] & - \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11}^{2} w_{21} w_{22} \lambda_{2} e^{i 2 \alpha_{1} \lambda_{1} x} \bigg] \\[-2mm] + e^{i[\lambda_{1}^{6} (\beta_{1}-\beta_{2}) t +\lambda_{1} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{12} N (\lambda_{1}+\lambda_{2}) - 2 w_{22} K \lambda_{2} \bigg) w_{11} \lambda_{1} e^{i 2\alpha_{2} \lambda_{2} x} \\[-2mm] & + \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11} w_{12} w_{21}^{2} \lambda_{1} e^{i 2 \alpha_{1} \lambda_{2} x} \bigg], \end{aligned}& \end{flalign}
\begin{flalign} &\begin{aligned} B(x,t) &= -4 \rho_{1} \rho_{2} \rho_{3} \lambda_{1} \lambda_{2} w_{11} w_{21} K e^{i (\lambda_{1}+\lambda_{2})(\alpha_{1}+\alpha_{2}) x} \cdot \bigg[ e^{i (\lambda_{1}^{6}-\lambda_{2}^{6})(\beta_{1}-\beta_{2}) t} + e^{-i (\lambda_{1}^{6}-\lambda_{2}^{6})(\beta_{1}-\beta_{2}) t} \bigg] \\ &+ \rho_{1} \rho_{2} \rho_{3} w_{21}^{2} M (\lambda_{1}+\lambda_{2})^{2} e^{i2 (\alpha_{1} \lambda_{2}+\alpha_{2} \lambda_{1}) x} + \rho_{1} \rho_{2} \rho_{3} w_{11}^{2} N (\lambda_{1}+\lambda_{2})^{2} e^{i2 (\alpha_{1} \lambda_{1}+\alpha_{2} \lambda_{2}) x} \\ &+ \rho_{1}^2 \rho_{2}^2 \rho_{3}^2 w_{11}^{2} w_{21}^{2} (\lambda_{1}-\lambda_{2})^{2} e^{i2 \alpha_{1} (\lambda_{1}+\lambda_{2}) x} + \bigg[ (\lambda_{1}^{2}+\lambda_{2}^{2}) MN + (2MN-4K^{2}) \lambda_{1} \lambda_{2}\bigg] e^{i2 \alpha_{2} (\lambda_{1}+\lambda_{2}) x}, \end{aligned}& \end{flalign} \vspace{-2cm} \begin{flalign} &\begin{aligned} C(x,t) = e^{i[\lambda_{2}^{6} (\beta_{1}-\beta_{2}) t +\lambda_{2} (\alpha_{1}+\alpha_{2}) x]} \cdot &\bigg[ \bigg( w_{23} M (\lambda_{1}+\lambda_{2}) - 2 w_{13} K \lambda_{1} \bigg) w_{21} \lambda_{2} e^{i 2\alpha_{2} \lambda_{1} x} \\[-2mm] & - \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11}^{2} w_{21} w_{23} \lambda_{2} e^{i 2 \alpha_{1} \lambda_{1} x} \bigg] \\[-2mm] + e^{i[\lambda_{1}^{6} (\beta_{1}-\beta_{2}) t +\lambda_{1} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{13} N (\lambda_{1}+\lambda_{2}) - 2 w_{23} K \lambda_{2} \bigg) w_{11} \lambda_{1} e^{i 2\alpha_{2} \lambda_{2} x} \\[-2mm] & + \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11} w_{13} w_{21}^{2} \lambda_{1} e^{i 2 \alpha_{1} \lambda_{2} x} \bigg], \end{aligned}& \end{flalign} \vspace{-1cm} \begin{flalign} &\begin{aligned} D(x,t) = e^{i[\lambda_{2}^{6} (\beta_{1}-\beta_{2}) t +\lambda_{2} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{24} M (\lambda_{1}+\lambda_{2}) - 2 w_{14} K \lambda_{1} \bigg) w_{21} \lambda_{2} e^{i 2\alpha_{2} \lambda_{1} 
x} \\[-2mm] & - \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11}^{2} w_{21} w_{24} \lambda_{2} e^{i 2 \alpha_{1} \lambda_{1} x} \bigg] \\[-2mm] + e^{i[\lambda_{1}^{6} (\beta_{1}-\beta_{2}) t +\lambda_{1} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{14} N (\lambda_{1}+\lambda_{2}) - 2 w_{24} K \lambda_{2} \bigg) w_{11} \lambda_{1} e^{i 2\alpha_{2} \lambda_{2} x} \\[-2mm] & + \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11} w_{14} w_{21}^{2} \lambda_{1} e^{i 2 \alpha_{1} \lambda_{2} x} \bigg], \end{aligned}& \end{flalign} and $M=\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+ \rho_{1} \rho_{2} w_{14}^{2}$, $N=\rho_{2} \rho_{3} w_{22}^{2}+ \rho_{1} \rho_{3} w_{23}^{2}+ \rho_{1} \rho_{2} w_{24}^{2}$ and $K=\rho_{2} \rho_{3} w_{12} w_{22} + \rho_{1} \rho_{3} w_{13} w_{23} + \rho_{1} \rho_{2} w_{14} w_{24}$. \\ For the two-soliton dynamics, we notice that either both solitons move (repeatedly or not) in opposite directions or both are stationary, i.e., they do not move with respect to space. In figure \ref{2solitonplot20}, we have two travelling waves that move in opposite directions, keeping the same amplitude before and after the interaction in an elastic collision, that is, with no energy radiation emitted \cite{Yang2010}. In contrast, in figure \ref{2solitonplot2}, the amplitudes of the two waves change after the interaction to new constant amplitudes, resembling Manakov waves \cite{Manakov1974}. \\[3mm] In figure \ref{2solitonplot24}, we have two solitons with exponentially decaying amplitudes that are stationary over time, i.e. they do not move in space. On the other hand, as in figure \ref{2solitonplot25}, we can have two solitons with exponentially decaying amplitudes that move apart over time. \\ Moreover, if both $\lambda_{1}$ and $\lambda_{2}$ lie on the real axis, then we obtain breather solitons with time period $\frac{2 \pi}{|\beta (\lambda_{1}^{6}-\lambda_{2}^{6})|}$, where $\beta=\beta_{1}-\beta_{2}$.
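This period can be checked numerically. The following minimal Python sketch uses the parameter values quoted in the caption of figure \ref{2solitonplot23}, i.e. $\beta_{1}-\beta_{2}=-\pi/32$ and $(\lambda_{1},\lambda_{2})=(1,2)$, and recovers the observed period of about $1.015873$:

```python
import numpy as np

# Parameter values quoted in the caption of figure 2solitonplot23:
# beta_1 = -pi/64, beta_2 = pi/64, lambda_1 = 1, lambda_2 = 2 (both real).
b1, b2 = -np.pi / 64, np.pi / 64
lam1, lam2 = 1.0, 2.0

# For real eigenvalues the relative phases of the exponentials in A(x,t) and
# B(x,t) evolve as exp(+-i (lam1^6 - lam2^6)(beta_1 - beta_2) t), hence the
# breather period:
T = 2 * np.pi / abs((lam1**6 - lam2**6) * (b1 - b2))

assert abs(T - 64 / 63) < 1e-12  # T = 64/63 = 1.015873..., as seen in the plots
```

The value $T=64/63$ matches the coincidence of the breather waves at $t=5$ and $t=6.015873016$ noted below.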
An example is shown in figure \ref{2solitonplot23}, where the breather waves coincide for $t=5$ and $t=6.015873016$. \newpage \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot20-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot20-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot20-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot20-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the two-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-1,1,-2,1,-1,2)$, $(\lambda_{1},\lambda_{2})=(1.2+0.5i,-1.2+0.5i)$, $w_{1}=(1,1-3i,-i,1+i)$, $w_{2}=(2,1-3i,-i,1+i)$.}% \label{2solitonplot20}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot2-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot2-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot2-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot2-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the two-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,1,-1,-2,1,-2,1)$, $(\lambda_{1},\lambda_{2})=(-0.4+0.8i,0.4+0.8i)$, $w_{1}=(1,1-i,-0.1+i,1+i)$, $w_{2}=(-1+2i,1-0.1i,3+i,0)$.}% \label{2solitonplot2}% \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot24-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot24-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot24-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot24-contour.png}% \caption{Spectral plane along with 3D, 2D and 
contour plots of $|p_{1}|$ in the focussing case of the two-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-2,-3,-2,1,-1,2)$, $(\lambda_{1},\lambda_{2})=(-0.01+i,-0.03+i)$, $w_{1}=(1,0,2+i,0)$, $w_{2}=(1,2-i,0,1)$.}% \label{2solitonplot24}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot25-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot25-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot25-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot25-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the two-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-2,-3,-2,1,-1,2)$, $(\lambda_{1},\lambda_{2})=(0.01+i,-0.03+i)$, $w_{1}=(1,i,2+i,1)$, $w_{2}=(1,2-i,i,1)$.}% \label{2solitonplot25}% \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot23-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot23-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot23-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-twosoliton-plot23-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the two-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-1,-1,-2,1,-1/64 \pi,1/64 \pi)$, $(\lambda_{1},\lambda_{2})=(1,2)$, $w_{1}=(1,0,2+i,1-i)$, $w_{2}=(-1,1-2i,-i,0)$.}% \label{2solitonplot23}% \end{center} \end{figure} \subsection{The dynamics of the three-soliton solution} The three-soliton solution is given, for which $N=3$, $w_{1}=(w_{11},w_{12},w_{13},w_{14})^{T}$, $w_{2}=(w_{21},w_{22},w_{23},w_{24})^{T}$, 
$w_{3}=(w_{31},w_{32},w_{33},w_{34})^{T}$, $(\lambda_{1},\lambda_{2},\lambda_{3}) \in \mathbb{C}^{3}$, and $\hat{\lambda}_{1}=-\lambda_{1}$, $\hat{\lambda}_{2}=-\lambda_{2}$, $\hat{\lambda}_{3}=-\lambda_{3}$ by \begin{align} p_{1} = \alpha \sum\limits_{k,j=1}^{3} v_{k1} (M^{-1})_{kj} \hat{v}_{j,2}, \\ p_{2} = \alpha \sum\limits_{k,j=1}^{3} v_{k1} (M^{-1})_{kj} \hat{v}_{j,3}, \\ p_{3} = \alpha \sum\limits_{k,j=1}^{3} v_{k1} (M^{-1})_{kj} \hat{v}_{j,4}. \end{align} Without loss of generality, for the three-soliton, we take all three eigenvalues in the upper-half plane in such a way that $\lambda_{i} \neq \lambda_{j}$ for $i \neq j$, $i,j \in \{1,2,3\}$. \\[3mm] Here, we can look at some of the three-soliton solution dynamics. We have two solitons moving in opposite directions interacting with one stationary soliton. After the interaction, either the three solitons keep their amplitudes or the amplitudes change to new constant values; an example is shown in figure \ref{3solitonplot7}. \\[3mm] Another behaviour could be the interaction of three solitons that are embedded into two solitons after the interaction, where the stationary soliton keeps or changes its amplitude after the collision. We may have the opposite case, where the two solitons unfold into three solitons. \\[3mm] A different class of behaviour shows that three solitons can interact and be embedded in a single soliton after the interaction. \\[2mm] In figure \ref{3solitonplot12}, we have three solitons, two of which are moving in opposite directions and one is stationary, all of them with constant amplitudes before the interaction. After the interaction, they are embedded into one stationary soliton with constant amplitude. \\[2mm] We may have the opposite case, where one stationary soliton unfolds into three different solitons, each keeping its amplitude. \\[3mm] In figure \ref{3solitonplot13}, we again have a three-soliton solution, with two solitons moving in opposite directions and interacting with an exponentially decreasing stationary soliton.
In that case, after the interaction, they are embedded into one stationary soliton whose amplitude decreases over time due to the effect of energy radiation. \\[2mm] In contrast, we can have one stationary increasing soliton that unfolds into three solitons, two of which move in opposite directions keeping their amplitudes while the third is stationary and increases exponentially over time. \\[3mm] In figure \ref{3solitonplot6}, we again have a three-soliton solution, with two solitons moving in opposite directions and interacting with an exponentially decreasing stationary soliton. They are embedded into one moving soliton that keeps its constant amplitude after the interaction, while the stationary soliton vanishes. \\[2mm] Conversely, one moving soliton can also unfold into three different solitons, where two move in opposite directions keeping their amplitudes and the third increases exponentially over time \cite{Clarke2000}. \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot7-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot7-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot7-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot7-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the three-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-1,-1,-2,1,-1,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3})=(1+0.5i,-1+0.5i,0.5i)$, $w_{1}=(1,1+2i,0,0)$, $w_{2}=(-1,1-2i,0,0)$, $w_{3}=(2+i,1+2i,1,2i)$.}% \label{3solitonplot7}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot14-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot14-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot14-2D.png}%
\includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot14-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the three-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,1,1,-2,1,-2,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3})=(1+0.5i,-1+0.5i,0.75i)$, $w_{1}=(1,0,2+i,1-i)$, $w_{2}=(-1,1-2i,-i,0)$, $w_{3}=(2+i,1+2i,1,2i)$.}% \label{3solitonplot14}% \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot12-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot12-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot12-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot12-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the three-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,1,1,-1,1,-2,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3})=(1+0.5i,-1+0.5i,0.75i)$, $w_{1}=(1,0,2+i,1-i)$, $w_{2}=(-1,5-2i,-i,0)$, $w_{3}=(2+i,1+2i,1,2i)$.}% \label{3solitonplot12}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot13-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot13-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot13-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot13-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the three-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,1,1,-2,1,-2,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3})=(1+0.5i,-1+0.5i,-0.05+0.75i)$, $w_{1}=(1,0,2+i,1-i)$, $w_{2}=(-1,5-2i,-i,0)$, $w_{3}=(2+i,1+2i,1,2i)$.}% \label{3solitonplot13}% 
\end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot6-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot6-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot6-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-threesoliton-plot6-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the three-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-1,-1,-2,1,-1,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3})=(1+0.5i,-1+0.5i,0.5+0.5i)$, $w_{1}=(1,0,2+i,1-i)$, $w_{2}=(-1,1-2i,-i,0)$, $w_{3}=(2+i,1+2i,1,2i)$.}% \label{3solitonplot6}% \end{center} \end{figure} \subsection{The dynamics of the four-soliton solution} The four-soliton solution, for which $N=4$, $w_{1}=(w_{11},w_{12},w_{13},w_{14})^{T}$, $w_{2}=(w_{21},w_{22},w_{23},w_{24})^{T}$, $w_{3}=(w_{31},w_{32},w_{33},w_{34})^{T}$, $w_{4}=(w_{41},w_{42},w_{43},w_{44})^{T}$, $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}) \in \mathbb{C}^{4}$, and $\hat{\lambda}_{1}=-\lambda_{1}$, $\hat{\lambda}_{2}=-\lambda_{2}$, $\hat{\lambda}_{3}=-\lambda_{3}$, $\hat{\lambda}_{4}=-\lambda_{4}$, is given by \begin{align} p_{1} = \alpha \sum\limits_{k,j=1}^{4} v_{k1} (M^{-1})_{kj} \hat{v}_{j,2}, \\ p_{2} = \alpha \sum\limits_{k,j=1}^{4} v_{k1} (M^{-1})_{kj} \hat{v}_{j,3}, \\ p_{3} = \alpha \sum\limits_{k,j=1}^{4} v_{k1} (M^{-1})_{kj} \hat{v}_{j,4}. \end{align} Without loss of generality, for the four-soliton, we take all four eigenvalues in the upper-half plane in such a way that $\lambda_{i} \neq \lambda_{j}$ for $i \neq j$, $i,j \in \{1,2,3,4\}$. \\[3mm] For the four-soliton dynamics, we have the interactions of four solitons. Two of them can be stationary, or all four solitons can be moving.
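Incidentally, the double sums in the expressions for $p_{1}$, $p_{2}$, $p_{3}$ are ordinary matrix products. A minimal numerical sketch, with random placeholder data standing in for the entries of $v$, $\hat{v}$ and $M$ (whose explicit forms are not reproduced here), checks that the componentwise sum agrees with the vector--matrix evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 4, 1.0

# Random placeholder data standing in for v_{k,a}, vhat_{j,a} and M_{kj};
# the shift by 10*I simply keeps M safely invertible.
v    = rng.standard_normal((N, 4)) + 1j * rng.standard_normal((N, 4))
vhat = rng.standard_normal((N, 4)) + 1j * rng.standard_normal((N, 4))
M    = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)) + 10 * np.eye(N)
Minv = np.linalg.inv(M)

# p_a = alpha * sum_{k,j} v_{k1} (M^{-1})_{kj} vhat_{j,a+1},  a = 1, 2, 3
p_sum = [alpha * sum(v[k, 0] * Minv[k, j] * vhat[j, a]
                     for k in range(N) for j in range(N))
         for a in (1, 2, 3)]
p_mat = alpha * (v[:, 0] @ Minv @ vhat[:, 1:])

assert np.allclose(p_sum, p_mat)
```

The same vectorized form evaluates any $N$-soliton once the actual $v$, $\hat{v}$ and $M$ of the Riemann-Hilbert construction are supplied.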
\\[2mm] Figure \ref{4solitonplot21} exhibits the interaction of two exponentially increasing solitons moving in opposite directions and interacting with two moving solitons with constant amplitude. After the interaction, the middle two solitons keep moving with increasing amplitude, while the two other solitons keep moving with constant amplitude. \\[2mm] We notice that the middle two solitons can instead decrease exponentially while moving and interacting with the other two solitons. \begin{remark} The speeds of the far-right and far-left solitons are larger than those of the middle two solitons, so that all four solitons collide together. \end{remark} Another behaviour is shown in figure \ref{4solitonplot23}, where two solitons moving in opposite directions interact with two stationary solitons with constant amplitudes. After the interaction, the two stationary solitons remain stationary and the two moving solitons continue to move in opposite directions, but their amplitudes can change to new constant amplitudes or stay unchanged. \\[3mm] As for figure \ref{4solitonplot30}, we have the interaction of four moving solitons. Two waves are moving in the same direction and interacting with the other two waves coming from the opposite direction. After the interaction, each of the four solitons can keep its amplitude unchanged or its amplitude can change to a new constant value. In the case that each soliton keeps its amplitude before and after the interaction, we have four travelling waves. \\[2mm] Finally, in figure \ref{4solitonplot24}, four moving solitons are embedded into three moving solitons. After the interaction, each soliton keeps its amplitude unchanged or its amplitude changes to a new constant value over time.
\newpage \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot21-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot21-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot21-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot21-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the four-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,-2,1,-2,1,-2,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})=(1+0.5i,-1+0.5i,0.02+i,-0.01+i)$, $w_{1}=(1-2i,1+3i,-i,1+i)$, $w_{2}=(-1+2i,1-3i,i,1-i)$, $w_{3}=(1+i,1+2i,0,2i)$, $w_{4}=(1,i,2+i,1)$.}% \label{4solitonplot21}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot23-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot23-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot23-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot23-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ of the four-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,1,1,-2,1,-2,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})=(0.8+0.5i,-0.8+0.5i,i,3i)$, $w_{1}=(1-0.5i,1+3i,-i,1+i)$, $w_{2}=(-1+2i,1-1.5i,i,1-i)$, $w_{3}=(30,i,2+i,1)$, $w_{4}=(-0.0005,1,2,1)$.}% \label{4solitonplot23}% \end{center} \end{figure} \newpage \begin{figure}[H] \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot30-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot30-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot30-2D.png}% 
\includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot30-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the four-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-1,-1,-1,1,-1,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})=(0.5+i,-0.5+i,0.4+0.8i,-0.4+0.8i)$, $w_{1}=(-1.5+2i,2-3i,i,1-i)$, $w_{2}=(3+2i,-1+3i,-i,1+i)$, $w_{3}=(i,1,1-2i,1-i)$, $w_{4}=(-i,1-2i,1,1+i)$.}% \label{4solitonplot30}% \end{center} \begin{center} \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot24-eigenvalues.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot24-3D.png}% \\ \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot24-2D.png}% \includegraphics[width=0.45\textwidth,height=4.5cm]{AAM-foursoliton-plot24-contour.png}% \caption{Spectral plane along with 3D, 2D and contour plots of $|p_{1}|$ in the focussing case of the four-soliton with parameters $(\rho_{1},\rho_{2},\rho_{3},\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(-1,-1,-1,-2,1,-2,1)$, $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})=(1+0.5i,-1+0.5i,0.3+0.75i,-0.3+0.75i)$, $w_{1}=(1+4i,0,2+i,i)$, $w_{2}=(-1+4i,i,1-2i,0)$, $w_{3}=(-2+i,0.5+i,-2+i,1)$, $w_{4}=(2+i,1-0.5i,1,1)$.}% \label{4solitonplot24}% \end{center} \end{figure} \newpage \section{Conclusion} To summarize: fundamental solitons interact elastically in a superposition manner, while for nonlocal solitons this is not always the case. Also, for nonlocal soliton solutions, we can have singularities at finite time. \newline In reverse-time, the pairs of symmetric eigenvalues $(\lambda,-\lambda)$ make the Riemann-Hilbert problem simpler to solve \cite{Yang2019}, where $\lambda \in \mathbb{C}_{+}$ and $-\lambda \in \mathbb{C}_{-}$.
\\[2mm] In addition, an interesting observation in this paper is that we can write explicitly the one- and two-soliton solutions for any $n$-order ($n$ even) six-component AKNS integrable system with our spectral matrix $U(u,\lambda)$. \newline That is, the general explicit one-soliton solution of the reverse-time $n$-order six-component system, when $\hat{\lambda}_{1}=-\lambda_{1}$, is given by \begin{align} p_{1}(x,t)= \frac{ 2 \rho_{2} \rho_{3} \lambda_{1} (\alpha_{1}-\alpha_{2}) w_{11} w_{12} e^{i \lambda_{1} (\alpha_{1}+\alpha_{2})x+i\lambda_{1}^{n}(\beta_{1}-\beta_{2})t}}{\rho_{1} \rho_{2} \rho_{3} w_{11}^{2} e^{2i\lambda_{1} \alpha_{1}x} +(\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+ \rho_{1} \rho_{2} w_{14}^{2}) e^{2i\lambda_{1}\alpha_{2}x}}, \end{align} and similarly for $p_{2}(x,t)$ and $p_{3}(x,t)$. \\[2mm] As for the two-soliton, the general explicit solution when $\hat{\lambda}_{1}=-\lambda_{1}$, $\hat{\lambda}_{2}=-\lambda_{2}$, if $\lambda_{1} \neq - \lambda_{2}$, is given by \begin{align} p_{1}(x,t) = 2 \rho_{2} \rho_{3} (\lambda_{1}+\lambda_{2}) (\alpha_{1}-\alpha_{2}) \frac{A(x,t)}{B(x,t)}, \end{align} where \begin{flalign} &\begin{aligned} A(x,t) = e^{i[\lambda_{2}^{n} (\beta_{1}-\beta_{2}) t +\lambda_{2} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{22} M (\lambda_{1}+\lambda_{2}) - 2 w_{12} K \lambda_{1} \bigg) w_{21} \lambda_{2} e^{i 2\alpha_{2} \lambda_{1} x} \\[-2mm] & - \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11}^{2} w_{21} w_{22} \lambda_{2} e^{i 2 \alpha_{1} \lambda_{1} x} \bigg] \\[-2mm] + e^{i[\lambda_{1}^{n} (\beta_{1}-\beta_{2}) t +\lambda_{1} (\alpha_{1}+\alpha_{2}) x]} \cdot & \bigg[ \bigg( w_{12} N (\lambda_{1}+\lambda_{2}) - 2 w_{22} K \lambda_{2} \bigg) w_{11} \lambda_{1} e^{i 2\alpha_{2} \lambda_{2} x} \\[-2mm] & + \rho_{1} \rho_{2} \rho_{3} (\lambda_{1}-\lambda_{2}) w_{11} w_{12} w_{21}^{2} \lambda_{1} e^{i 2 \alpha_{1} \lambda_{2} x} \bigg], \end{aligned}& \end{flalign} \newpage \begin{flalign}
&\begin{aligned} B(x,t) &= -4 \rho_{1} \rho_{2} \rho_{3} \lambda_{1} \lambda_{2} w_{11} w_{21} K e^{i (\lambda_{1}+\lambda_{2})(\alpha_{1}+\alpha_{2}) x} \cdot \bigg[ e^{i (\lambda_{1}^{n}-\lambda_{2}^{n})(\beta_{1}-\beta_{2}) t} + e^{-i (\lambda_{1}^{n}-\lambda_{2}^{n})(\beta_{1}-\beta_{2}) t} \bigg] \\ &+ \rho_{1} \rho_{2} \rho_{3} w_{21}^{2} M (\lambda_{1}+\lambda_{2})^{2} e^{i2 (\alpha_{1} \lambda_{2}+\alpha_{2} \lambda_{1}) x} + \rho_{1} \rho_{2} \rho_{3} w_{11}^{2} N (\lambda_{1}+\lambda_{2})^{2} e^{i2 (\alpha_{1} \lambda_{1}+\alpha_{2} \lambda_{2}) x} \\ &+ \rho_{1}^2 \rho_{2}^2 \rho_{3}^2 w_{11}^{2} w_{21}^{2} (\lambda_{1}-\lambda_{2})^{2} e^{i2 \alpha_{1} (\lambda_{1}+\lambda_{2}) x} + \bigg[ (\lambda_{1}^{2}+\lambda_{2}^{2}) MN + (2MN-4K^{2}) \lambda_{1} \lambda_{2}\bigg] e^{i2 \alpha_{2} (\lambda_{1}+\lambda_{2}) x}, \end{aligned}& \end{flalign} and $M=\rho_{2} \rho_{3} w_{12}^{2}+ \rho_{1} \rho_{3} w_{13}^{2}+ \rho_{1} \rho_{2} w_{14}^{2}$, $N=\rho_{2} \rho_{3} w_{22}^{2}+ \rho_{1} \rho_{3} w_{23}^{2}+ \rho_{1} \rho_{2} w_{24}^{2}$ and $K=\rho_{2} \rho_{3} w_{12} w_{22} + \rho_{1} \rho_{3} w_{13} w_{23} + \rho_{1} \rho_{2} w_{14} w_{24}$. Similarly for $p_{2}(x,t)$ and $p_{3}(x,t)$. \\[2mm] For higher-order nonlocal reverse-time AKNS systems, the $n$-soliton exhibits dynamics similar to (or combinations of) the dynamics discussed in this paper and in previous work \cite{AlleAhmedMa}. \newline Solving integrable equations in reverse-space, reverse-time and reverse-spacetime using other techniques, such as Darboux transformations and the Hirota bilinear method, remains an active area of investigation \cite{MatveevSalle1991}-\cite{SunMaYu}.
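As a sanity check of the closed-form one-soliton above, the following Python sketch implements $p_{1}(x,t)$ with the parameters of figure \ref{1solitonplot4} and $n=6$, and verifies that for a real eigenvalue $\lambda_{1}$ the amplitude $|p_{1}|$ is independent of $t$ and periodic in $x$ with period $\pi/|\alpha\lambda_{1}|$, reading $\alpha=\alpha_{1}-\alpha_{2}$ (an assumption about the meaning of $\alpha$ in the period quoted earlier):

```python
import numpy as np

# Parameters of figure 1solitonplot4: rho = (1,1,1), (alpha_1, alpha_2) = (-1, 1),
# (beta_1, beta_2) = (-1, 1), lambda_1 = 0.5 (real), w_1 = (1, i, 2+i, 1), n = 6.
rho1 = rho2 = rho3 = 1.0
a1, a2 = -1.0, 1.0
b1, b2 = -1.0, 1.0
n = 6
lam = 0.5
w11, w12, w13, w14 = 1.0, 1j, 2.0 + 1j, 1.0

def p1(x, t):
    num = (2 * rho2 * rho3 * lam * (a1 - a2) * w11 * w12
           * np.exp(1j * lam * (a1 + a2) * x + 1j * lam**n * (b1 - b2) * t))
    den = (rho1 * rho2 * rho3 * w11**2 * np.exp(2j * lam * a1 * x)
           + (rho2 * rho3 * w12**2 + rho1 * rho3 * w13**2 + rho1 * rho2 * w14**2)
           * np.exp(2j * lam * a2 * x))
    return num / den

x = np.linspace(-5.0, 5.0, 201)

# For real lambda_1 both exponents are purely imaginary, so |p1| is t-independent
# (the breathing is only in the phase) ...
assert np.allclose(np.abs(p1(x, 0.0)), np.abs(p1(x, 2.7)))

# ... and |p1| is periodic in x with period pi/|(a1 - a2) * lam| = pi here.
period = np.pi / abs((a1 - a2) * lam)
assert np.allclose(np.abs(p1(x + period, 1.3)), np.abs(p1(x, 1.3)))
```

For generic complex $\lambda_{1}$ with $\mathrm{Im}(\lambda_{1}^{n})\neq 0$, the first check fails: the modulus then grows or decays exponentially in $t$, in line with the discussion of the sectors above.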
\documentclass[12pt,epsf]{article} \usepackage{amssymb,amsmath} \usepackage{graphicx} \setcounter{MaxMatrixCols}{10} \renewcommand{\l}{\label} \renewcommand{\a}{\alpha} \renewcommand{\b}{\beta} \renewcommand{\r}{\rho} \renewcommand{\d}{\partial} \renewcommand{\o}{\over} \renewcommand{\(}{\left(} \renewcommand{\)}{\right)} \renewcommand{\v}[1]{\vec{#1}} \def\href#1#2{#2} \textheight 22.4cm \textwidth 15.5cm \topmargin -1cm \oddsidemargin 5mm \evensidemargin 5mm \renewcommand{\baselinestretch}{1} \input{tcilatex} \begin{document} \begin{titlepage} \hfill \vbox{ \halign{#\hfil \cr } } \hbox to \hsize{{}\hss \vtop{ \hbox{MCTP-08-51} }} \vspace*{20mm} \begin{center} {\Large \bf M2-M5
Systems in ${\cal N}=6$ Chern-Simons Theory\\} \vspace*{15mm} \vspace*{1mm} Kentaro Hanaki{\footnote {e-mail: [email protected]}} and Hai Lin{\footnote {e-mail: [email protected]}} \vspace*{1cm} {\it Department of Physics and Michigan Center for Theoretical Physics \\ University of Michigan, Ann Arbor, MI 48109, USA \\} \vspace*{1cm} \end{center} \begin{abstract} We study two aspects of M5-branes in ${\cal N}=6$ $U(N)\times U(N)$ Chern-Simons gauge theory. We first examine multiple M2-branes ending on a M5-brane. We study Basu-Harvey type cubic equations, fuzzy funnel configurations, and derive the M5-brane tension from the ${\cal N}=6$ theory. We also find a limit in which the above M2-M5 system reduces to a D2-D4 system and we recover the Nahm equation from the ${\cal N}=6$ theory. We then examine domain wall configurations in mass-deformed ${\cal N}=6$ theory with a manifest $SU(2)\times SU(2)\times U(1)$ global symmetry. We derive tensions of domain walls connecting between arbitrary M5-brane vacua of the deformed theory and observe their consistency with gravity dual expectations. \end{abstract} \end{titlepage} \vskip 1cm \section{Introduction} Multiple M2-brane theory with a manifest $SO(8)$ R-symmetry was shown \cite% {Bagger:2007jr, Gustavsson:2007vu} to be consistent with a totally antisymmetric 3-algebraic description. The only finite dimensional Euclidean 3-algebra assuming total antisymmetry was based on the $so(4)$ 3-algebra with a quantized 4-index structure constant \cite% {Papadopoulos:2008sk,Bandres:2008vf}. The corresponding theory can be presented as a $SU(2)_{k}\times SU(2)_{-k}~$Chern-Simons gauge theory \cite% {VanRaamsdonk:2008ft, Berman:2008be} coupled to 8 scalars and 8 fermions in bi-fundamental representations \cite{VanRaamsdonk:2008ft}. 
The theory was shown to arise from two M2-branes moving in an orbifold of transverse $R^{8}$ space \cite{Distler:2008mk}, and reduce to a maximally supersymmetric multiple D2-brane theory in a large $k$ and large scalar vev limit \cite% {Mukhi:2008ux, Distler:2008mk}. One-loop corrections to the couplings were considered in \cite{Gustavsson:2008bf}. Generalizations to include an arbitrary higher rank non-abelian gauge symmetry lead to the Lorentzian 3-algebra \cite{Gomis:2008uv, DeMedeiros:2008zm}, but the corresponding theory contains ghost degrees of freedom due to the Lorentzian signature \cite{Gomis:2008uv}. A ghost-removing procedure turns the theory into that of a dual description of the 3d maximally supersymmetric Yang-Mills theory \cite{Bandres:2008kj, Cecotti:2008qs, Honma:2008jd, Bergshoeff:2008ix}. Besides, infinite dimensional 3-algebras also exist \cite{Ho:2008nn, Lin:2008qp}. An alternative method to include a higher rank gauge symmetry was obtained very recently by considering the $U(N)_{k}\times U(N)_{-k}~$Chern-Simons gauge theory coupled to four $\mathcal{N}=2$ superfields in bi-fundamental representations \cite{Aharony:2008ug}. The Lagrangian of the theory exhibits a manifest $SU(4)$ R-symmetry \cite{Benna:2008zy, Bagger:2008se,Bandres:2008ry}, see also \cite{Terashima:2008sy, Gaiotto:2008cg, Hosomichi:2008jb, Gomis:2008vc}, and was proposed to arise from multiple M2-branes moving in a $Z_{k}~$quotient of the transverse $% R^{8} $ space \cite{Aharony:2008ug}. The theory was also shown to be consistent with a 3-algebraic description with a less antisymmetric structure constant \cite{Bagger:2008se}. The present paper is motivated by trying to understand better the properties of this new $\mathcal{N}=6~$theory. 
A nice feature of the previous $\mathcal{N}=8~$theory is that it admits the Basu-Harvey equation \cite{Basu:2004ed,Krishnan:2008zm} with an $SO(4)$ symmetry, and as a result, there are fuzzy funnel configurations describing multiple M2-branes gradually ending on an M5-brane wrapping a fuzzy 3-sphere. Another nice feature is that the $\mathcal{N}=8~$theory also admits a mass deformation keeping an $SO(4)\times SO(4)$~global symmetry \cite{Pope:2003jp, Bena:2004jw, Lin:2004nb, Lin:2005nh, Gomis:2008cv}, which has multiple M5-brane vacua characterized by M5-branes wrapping concentric fuzzy 3-spheres in two possible orthogonal $R^{4}~$spaces. In this paper, we will study these two aspects in the context of the $\mathcal{N}=6~$theory. The organization of this paper is as follows. In section 2.1, we derive Basu-Harvey type equations by forming perfect squares that combine the kinetic terms with F-terms or D-terms. A related discussion, with slightly different methods, was given in \cite{Gomis:2008vc,Terashima:2008sy}. In section 2.2, we analyze properties of the fuzzy funnel solutions and derive the M5-brane tension from the $\mathcal{N}=6~$theory. In section 2.3, we show a limit in which the above Basu-Harvey equations reduce to Nahm equations describing D2-D4 systems, thus giving another consistency check. In section 3.1, we derive domain wall equations in the mass-deformed $\mathcal{N}=6~$theory keeping an $SU(2)\times SU(2)\times U(1)$~global symmetry \cite{Gomis:2008vc}. In section 3.2, we analyze properties of the domain walls and compute their tensions, which are consistent with gravity dual descriptions in terms of M5-brane actions. In section 4, we briefly draw conclusions.
\section{Basu-Harvey configurations and M2-M5 system} \vspace{1pt}\label{basu_2} \vspace{1pt} \subsection{Bogomol'nyi completion} \label{basu_2.1} We begin by examining the bosonic potential in $\mathcal{N}=6~U(N)\times U(N)$ Chern-Simons theory and expressing it as a sum of several perfect squares. We basically follow the notation of \cite{Benna:2008zy}, but use a different normalization condition for the $U(N)$ generators, $\mbox{tr}(T^{a}T^{b})=(1/2)\delta ^{ab}$. In this notation, the potential can be rewritten as \begin{eqnarray} V_{scalar} &=&V_{D}+V_{F} \notag \\ &=&\frac{4\pi ^{2}}{k^{2}}\mbox{tr}~(|Z^{A}Z_{A}^{\dagger }Z^{B}-Z^{B}Z_{A}^{\dagger }Z^{A}-W^{\dagger A}W_{A}Z^{B}+Z^{B}W_{A}W^{\dagger A}|^{2} \notag \\ &&+|W^{\dagger A}W_{A}W^{\dagger B}-W^{\dagger B}W_{A}W^{\dagger A}-Z^{A}Z_{A}^{\dagger }W^{\dagger B}+W^{\dagger B}Z_{A}^{\dagger }Z^{A}|^{2}) \notag \\ &&+\frac{16\pi ^{2}}{k^{2}}\mbox{tr}\left( |\epsilon _{AC}\epsilon ^{BD}W_{B}Z^{C}W_{D}|^{2}+|\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}|^{2}\right) , \end{eqnarray} where $Z^{A},W^{\dagger A},~A=1,2~$are the lowest components of the four $\mathcal{N}=2~$superfields respectively, and are all in the $(N,\overline{N})~$representations and have overall $U(1)$ charges +1. \vspace{1pt}The classical vacuum moduli space can be determined by demanding that all the squares vanish simultaneously. In this theory there is an additional residual $Z_{k}~$symmetry which orbifolds the moduli space. Next we want to consider Basu-Harvey type BPS equations, which depend on only one of the spatial worldvolume coordinates, say $x^{2}=s$. The equations can be obtained by combining the kinetic terms and potential terms in the Hamiltonian and rewriting it as a sum of perfect squares plus some topological terms. There are two ways to make the combinations.
If we combine the kinetic terms with F-term potentials, we obtain \begin{eqnarray} H &=&\int dx^{1}ds~\mbox{tr}(|\partial _{s}W^{\dagger A}|^{2}+|\partial _{s}Z^{A}|^{2}+V_{scalar}) \notag \\ &=&\int dx^{1}ds~\mbox{tr}(|\partial _{s}W^{\dagger A}-\frac{4\pi }{k}\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}|^{2}+|\partial _{s}Z^{A}-\frac{4\pi }{k}\epsilon ^{AC}\epsilon _{BD}W^{\dagger B}Z_{C}^{\dagger }W^{\dagger D}|^{2} \notag \\ &&+\frac{4\pi ^{2}}{k^{2}}|Z^{A}Z_{A}^{\dagger }Z^{B}-Z^{B}Z_{A}^{\dagger }Z^{A}-W^{\dagger A}W_{A}Z^{B}+Z^{B}W_{A}W^{\dagger A}|^{2} \notag \\ &&+\frac{4\pi ^{2}}{k^{2}}|W^{\dagger A}W_{A}W^{\dagger B}-W^{\dagger B}W_{A}W^{\dagger A}-Z^{A}Z_{A}^{\dagger }W^{\dagger B}+W^{\dagger B}Z_{A}^{\dagger }Z^{A}|^{2}) \notag \\ &&+\frac{4\pi }{k}\epsilon _{AC}\epsilon ^{BD}\int dx^{1}~\mbox{tr}(Z^{A}W_{B}Z^{C}W_{D}+W^{\dagger A}Z_{B}^{\dagger }W^{\dagger C}Z_{D}^{\dagger }) \label{Fterm_combine01} \end{eqnarray} or, if the kinetic terms are combined with D-term potentials, we get: \begin{eqnarray} H &=&\int dx^{1}ds~\mbox{tr}(|\partial _{s}W^{\dagger A}+\frac{2\pi }{k}(W^{\dagger B}W_{B}W^{\dagger A}-W^{\dagger A}W_{B}W^{\dagger B}-Z^{B}Z_{B}^{\dagger }W^{\dagger A}+W^{\dagger A}Z_{B}^{\dagger }Z^{B})|^{2} \notag \\ &&+|\partial _{s}Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B}-W^{\dagger B}W_{B}Z^{A}+Z^{A}W_{B}W^{\dagger B})|^{2} \notag \\ &&+\frac{16\pi ^{2}}{k^{2}}|\epsilon _{AC}\epsilon ^{BD}W_{B}Z^{C}W_{D}|^{2}+\frac{16\pi ^{2}}{k^{2}}|\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}|^{2}) \notag \\ &&+\frac{\pi }{k}\int dx^{1}~\mbox{tr}(W_{A}W^{\dagger A}W_{B}W^{\dagger B}-W^{\dagger A}W_{A}W^{\dagger B}W_{B}+2W^{\dagger A}W_{A}Z^{B}Z_{B}^{\dagger } \notag \\ &&-2W_{A}W^{\dagger A}Z_{B}^{\dagger }Z^{B}+Z_{A}^{\dagger }Z^{A}Z_{B}^{\dagger }Z^{B}-Z^{A}Z_{A}^{\dagger }Z^{B}Z_{B}^{\dagger }).
\label{Dterm_combine01} \end{eqnarray} In each case, the last term is topological and does not affect the dynamics in the bulk. So we get a set of BPS equations, which minimize the energy in a given topological sector: \begin{equation} \partial _{s}W^{\dagger A}-\frac{4\pi }{k}\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}=0 \end{equation} \begin{equation} \partial _{s}Z^{A}-\frac{4\pi }{k}\epsilon ^{AC}\epsilon _{BD}W^{\dagger B}Z_{C}^{\dagger }W^{\dagger D}=0 \end{equation} \begin{equation} Z^{A}Z_{A}^{\dagger }Z^{B}-Z^{B}Z_{A}^{\dagger }Z^{A}-W^{\dagger A}W_{A}Z^{B}+Z^{B}W_{A}W^{\dagger A}=0 \end{equation} \begin{equation} W^{\dagger A}W_{A}W^{\dagger B}-W^{\dagger B}W_{A}W^{\dagger A}-Z^{A}Z_{A}^{\dagger }W^{\dagger B}+W^{\dagger B}Z_{A}^{\dagger }Z^{A}=0 \end{equation} for the F-term combination, and \begin{equation} \partial _{s}W^{\dagger A}+\frac{2\pi }{k}(W^{\dagger B}W_{B}W^{\dagger A}-W^{\dagger A}W_{B}W^{\dagger B}-Z^{B}Z_{B}^{\dagger }W^{\dagger A}+W^{\dagger A}Z_{B}^{\dagger }Z^{B})=0 \end{equation} \begin{equation} \partial _{s}Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B}-W^{\dagger B}W_{B}Z^{A}+Z^{A}W_{B}W^{\dagger B})=0 \label{Z_basu_01} \end{equation} \begin{equation} \epsilon _{AC}\epsilon ^{BD}W_{B}Z^{C}W_{D}=\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}=0 \end{equation} for the D-term combination, respectively. The topological term gives the energy of the configuration when the BPS equations are satisfied. \subsection{Fuzzy funnel solution and M5-brane tension} The new Basu-Harvey equation proposed in \cite{Terashima:2008sy, Gomis:2008vc} can be obtained by setting two of the complex scalars to zero and looking at the non-trivial equations for the other two complex scalars. For example, we can set $W^{\dagger A}=0~$and $Z^{A}\neq 0~$in (\ref{Z_basu_01}).
The scalar part of the Hamiltonian is given as a square term plus a topological term: \begin{eqnarray} H &=&\int dx^{1}ds~\mbox{tr}(|\partial _{s}Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B})|^{2}) \notag \\ &&+\frac{\pi }{k}\int dx^{1}ds~\partial _{s}\mbox{tr}(Z_{A}^{\dagger }Z^{A}Z_{B}^{\dagger }Z^{B}-Z^{A}Z_{A}^{\dagger }Z^{B}Z_{B}^{\dagger }). \end{eqnarray} The first line gives a pair of BPS equations \begin{equation} \partial _{s}Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B})=0, \label{Z_basu_02} \end{equation} where $A,B=1,2.~$As opposed to the original Basu-Harvey equation in \cite{Basu:2004ed}, which has a manifest $SO(4)$ symmetry, the equation (\ref{Z_basu_02}) has a manifest $SU(2)\times U(1)~$symmetry. As was argued in \cite{Terashima:2008sy}, this equation preserves half of the supersymmetries of the theory. For a configuration on which this equation is satisfied, the energy of the system is given by \begin{eqnarray} E &=&\frac{\pi }{k}\int dx^{1}~\mbox{tr}(Z_{A}^{\dagger }Z^{A}Z_{B}^{\dagger }Z^{B}-Z^{A}Z_{A}^{\dagger }Z^{B}Z_{B}^{\dagger }) \\ &=&2\int dsdx^{1}\mbox{tr}(\partial _{s}Z_{A}^{\dagger }\partial _{s}Z^{A}). \label{Basu_energy01} \end{eqnarray} We used the BPS equation (\ref{Z_basu_02}) to obtain the second line. To solve the BPS equation (\ref{Z_basu_02}), we may separate the $s$-dependent and independent parts: \begin{equation} Z^{A}=f(s)G^{A},\quad f(s)=\sqrt{\frac{k}{4\pi s}}, \label{funnel_profile01} \end{equation} where the $G^{A}$s are $N\times \overline{N}~$matrices satisfying \begin{equation} G^{A}=G^{B}G_{B}^{\dagger }G^{A}-G^{A}G_{B}^{\dagger }G^{B}. \end{equation} This equation is solved in \cite{Gomis:2008vc} (see also \cite{Terashima:2008sy}). One can diagonalize $G_{1}^{\dagger }~$using the $U(N)\times U(N)~$transformations and find that the other matrix $G_{2}^{\dagger }~$must be off-diagonal.
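As a quick consistency check (a numerical sketch added here, not part of the original derivation): once the matrix equation for $G^{A}$ above is imposed, (\ref{Z_basu_02}) reduces to the scalar ODE $f'(s)=-(2\pi/k)\,f(s)^{3}$, and the profile in (\ref{funnel_profile01}) can be verified against it by finite differences; the value of $k$ below is arbitrary.

```python
import math

def f(s, k):
    # funnel profile f(s) = sqrt(k / (4 pi s)) from eq. (funnel_profile01)
    return math.sqrt(k / (4.0 * math.pi * s))

def residual(s, k, eps=1e-6):
    # central-difference check of f'(s) + (2 pi / k) f(s)^3 = 0
    fprime = (f(s + eps, k) - f(s - eps, k)) / (2.0 * eps)
    return fprime + (2.0 * math.pi / k) * f(s, k) ** 3

k = 5.0
for s in (0.3, 1.0, 7.5):
    assert abs(residual(s, k)) < 1e-6
```

The residual vanishes to finite-difference accuracy at every sampled $s$, confirming that the separation ansatz is self-consistent.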
The $G_{A}^{\dagger }$s~have some nice properties: for an $N$-dimensional irreducible solution, \begin{eqnarray} (G_{1}^{\dagger })_{m,n} &=&\sqrt{m-1}\delta _{m,n},~~~(G_{2}^{\dagger })_{m,n}=\sqrt{N-m}\delta _{m+1,n}, \label{G_A_01} \\ G^{1}G_{1}^{\dagger } &=&\mathrm{diag~}(0,1,2,\ ...\ ,N-1)=~G_{1}^{\dagger }G^{1} \\ G^{2}G_{2}^{\dagger } &=&\mathrm{diag~}(N-1,N-2,~...~,1,0) \\ G_{2}^{\dagger }G^{2} &=&\mathrm{diag~}(0,N-1,N-2,~...~,1) \\ G^{A}G_{A}^{\dagger } &=&(N-1)\mathbf{1}_{N\times N},~~~~\mathrm{tr}(G^{A}G_{A}^{\dagger })=N(N-1). \label{G_A_trace01} \end{eqnarray} The eigenvalues of the matrices $G^{1}G_{1}^{\dagger }~$and $G^{2}G_{2}^{\dagger }~$may be interpreted as the squares of the radial positions of the points on a fuzzy 3-sphere projected onto 2 complex planes, respectively. Since there is an overall $Z_{k}~$residual symmetry, the solution would describe a fuzzy $S^{3}/Z_{k}.$ The energy formula (\ref{Basu_energy01}) is expressed in terms of the fields $Z^{A}$, which have mass dimension $1/2$ rather than the mass dimension $-1$ of a spatial coordinate. The correct normalization should reproduce the scalar kinetic term of the form \begin{equation} S_{kinetic}=-T_{2}\int d^3x \mbox{tr}(\partial _{\mu }X_{A}^{\dagger }\partial ^{\mu }X^{A}), \label{normal} \end{equation} where $T_{2}$ is the M2-brane tension and $X^{A}$ is the (complexified) spatial coordinate. This implies that we should relate $X^{A}$ and $Z^{A}$ by \begin{equation} X^{A}=\sqrt{\frac{1}{T_{2}}}Z^{A}. \end{equation} Using this, we can define the radius averaged over each M2-brane as \begin{eqnarray} R^{2} &=&\frac{2\mbox{tr}(X_{A}^{\dagger }X^{A})}{N}=\frac{2(N-1)}{T_{2}}f^{2} \\ &=&\frac{k(N-1)}{2\pi T_{2}}~\cdot \frac{1}{s}. \end{eqnarray} The factor of two in the numerator comes from our normalization condition $\mbox{tr}(T^{a}T^{b})=(1/2)\delta ^{ab}$. The radius vanishes for $N=1$, and there are non-trivial fuzzy 3-spheres only for $N \geq 2$.
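Before combining these ingredients, the matrix identities quoted above can be spot-checked numerically for small $N$. The pure-Python sketch below verifies the cubic equation and the trace identity; note that index conventions for the off-diagonal matrix $G_{2}^{\dagger}$ vary between references, and the convention used here is the one for which the cubic equation holds with $G^{A}=(G_{A}^{\dagger})^{\dagger}$.

```python
import math

def zeros(n):
    return [[0.0] * n for _ in range(n)]

def g_matrices(N):
    # G_1^dag diagonal with entries sqrt(m-1); G_2^dag carries sqrt(N-m) on the
    # slot coupling m to m+1 (index conventions differ between references)
    G1d, G2d = zeros(N), zeros(N)
    for m in range(1, N + 1):
        G1d[m - 1][m - 1] = math.sqrt(m - 1)
        if m < N:
            G2d[m - 1][m] = math.sqrt(N - m)
    return G1d, G2d

def mm(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dag(A):
    return [list(r) for r in zip(*A)]  # entries are real, so dagger = transpose

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

def check(N, tol=1e-12):
    G1d, G2d = g_matrices(N)
    Gd = [G1d, G2d]            # G_A^dagger
    G = [dag(G1d), dag(G2d)]   # G^A
    # trace identity tr(G^A G_A^dagger) = N(N-1)
    assert abs(sum(tr(mm(G[A], Gd[A])) for A in range(2)) - N * (N - 1)) < tol
    # cubic equation G^A = G^B G_B^dagger G^A - G^A G_B^dagger G^B
    for A in range(2):
        rhs = zeros(N)
        for B in range(2):
            t1 = mm(mm(G[B], Gd[B]), G[A])
            t2 = mm(G[A], mm(Gd[B], G[B]))
            for i in range(N):
                for j in range(N):
                    rhs[i][j] += t1[i][j] - t2[i][j]
        assert all(abs(rhs[i][j] - G[A][i][j]) < tol
                   for i in range(N) for j in range(N))
    return True

assert all(check(N) for N in (2, 3, 6))
```

Both identities hold to machine precision for every $N$ tested.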
Combining all the above results, after some algebra, we obtain \begin{eqnarray} E &=&\frac{T_{2}^{2}}{2\pi }\frac{N}{N-1}\int dx^{1}\left( \frac{2\pi ^{2}}{k}\right) R^{3}dR \\ &=&\frac{T_{2}^{2}}{2\pi }\frac{N}{N-1}\int d^{5}x. \end{eqnarray} The factor $k$ in the denominator represents the fact that this M5-brane is divided by the $Z_{k}$ orbifold action, and $\frac{2\pi ^{2}}{k}~$is the volume of an $S^{3}/Z_{k}$ with a unit radius. So the M5-brane wraps an $S^{3}/Z_{k}.~$The M5-brane tension predicted from the $\mathcal{N}=6~$theory is \begin{equation} T_{5}=\frac{T_{2}^{2}}{2\pi }\frac{N}{N-1}. \label{N=6_tension_01} \end{equation} The relation between the M2-brane and M5-brane tensions can also be derived in different ways, by matching the M-theory and type II string theory BPS spectra \cite{Schwarz:1995jq}, or by applying flux and Dirac quantization rules in eleven dimensions \cite{Duff:1995wd}: \begin{equation} T_{5}=\frac{T_{2}^{2}}{2\pi }. \label{M5tension} \end{equation} We see that for large $N$, (\ref{N=6_tension_01}) exactly agrees with the known result (\ref{M5tension}), including the numerical coefficient. The $1/N$ deviation is due to the fuzziness of the 3-sphere in the finite $N$ regime, and will disappear in the continuum limit for the fuzzy 3-sphere. \subsection{Basu-Harvey equations and reduction to Nahm equations} \label{basu_2.2} In this section we take a limit in which the M2-brane theory reduces to the D2-brane theory \cite{Pang:2008hw, Li:2008ya} and show that the Basu-Harvey equation (\ref{Z_basu_02}) studied in the last section reduces to a Nahm equation, which describes the D2-D4 system.
We take a diagonal expectation value in one of the directions, for example the direction labelled by 3, and expand the fields around the vacuum: \begin{eqnarray} Z^{1} &=&(x^{10}+ix^{20})T^{0}+X^{1}+iX^{2}\quad \label{large_v_X} \\ Z^{2} &=&((v+x^{30})+ix^{40})T^{0}+X^{3}+iX^{4} \end{eqnarray} Here, the $x$'s represent the $U(1)$ part and $T^{0}=\frac{1}{\sqrt{2N}}\mathbf{1}~$for normalization purposes, $\mbox{tr}(T^{0}T^{0})=1/2$. The $X$'s take values in $SU(N)$. We take $N$ and $v/k$ finite and fixed, and suppose $v$ is large; we will then neglect $o(1/v)~$terms in the calculation below. By plugging (\ref{large_v_X}) into the BPS equation (\ref{Z_basu_02}), we see that \begin{eqnarray} \partial _{s}Z^{2} &=&\frac{2\pi }{k}(Z^{2}Z_{1}^{\dagger }Z^{1}-Z^{1}Z_{1}^{\dagger }Z^{2}) \\ &=&\frac{2\pi v}{k\sqrt{2N}}[Z_{1}^{\dagger },Z^{1}] \\ &=&\frac{4\pi v}{k\sqrt{2N}}i[X^{1},X^{2}] \end{eqnarray} The $U(1)$ part decouples from the equations and we simply set it to zero. The $SU(N)$ part implies \begin{equation} \partial _{s}X^{3}=\frac{4\pi v}{k\sqrt{2N}}i[X^{1},X^{2}],\quad \partial _{s}X^{4}=0 \end{equation} where we have compared the hermitian and anti-hermitian parts respectively. In the same way, we can calculate the other component equation \begin{eqnarray} \partial _{s}Z^{1} &=&\frac{2\pi }{k}(Z^{1}Z_{2}^{\dagger }Z^{2}-Z^{2}Z_{2}^{\dagger }Z^{1}) \\ &=&\frac{2\pi v}{k\sqrt{2N}}[Z^{1},Z_{2}^{\dagger }+Z^{2}] \\ &=&\frac{4\pi v}{k\sqrt{2N}}([X^{1},X^{3}]+i[X^{2},X^{3}]) \end{eqnarray} So we get \begin{eqnarray} \partial _{s}X^{1} &=&\frac{4\pi v}{k\sqrt{2N}}i[X^{2},X^{3}] \\ \partial _{s}X^{2} &=&\frac{4\pi v}{k\sqrt{2N}}i[X^{3},X^{1}] \end{eqnarray} Combining the above results, we get \begin{equation} \partial _{s}X^{i}=i\frac{1}{2}g_{YM}\epsilon ^{ijk}[X^{j},X^{k}] \end{equation} where $i,j,k=1,2,3$ and $\epsilon ^{ijk}$ is the totally antisymmetric tensor.
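The Nahm equation obtained here admits the standard fuzzy-two-sphere funnel solution $X^{i}=\sigma^{i}/(2g_{YM}s)$ built from Pauli matrices; this particular solution is standard Nahm-equation lore rather than a result of this paper, but it makes for a quick numerical check of $\partial_{s}X^{i}=\frac{i}{2}g_{YM}\epsilon^{ijk}[X^{j},X^{k}]$:

```python
# Pauli matrices as 2x2 complex lists
PAULI = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

# totally antisymmetric tensor eps^{ijk} (0-indexed)
EPS = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(a, A, b, B):
    # entrywise a*A + b*B
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

def X(i, s, g):
    # candidate solution X^i = sigma^i / (2 g s)
    return [[PAULI[i][r][c] / (2.0 * g * s) for c in range(2)] for r in range(2)]

def nahm_residual(i, s, g, h=1e-6):
    # max entrywise | d/ds X^i - (i/2) g eps^{ijk} [X^j, X^k] |
    dX = lin(1.0 / (2 * h), X(i, s + h, g), -1.0 / (2 * h), X(i, s - h, g))
    rhs = [[0.0, 0.0], [0.0, 0.0]]
    for (a, b, c), sgn in EPS.items():
        if a != i:
            continue
        comm = lin(1, mm(X(b, s, g), X(c, s, g)), -1, mm(X(c, s, g), X(b, s, g)))
        rhs = lin(1, rhs, sgn * 0.5j * g, comm)
    return max(abs(dX[r][c] - rhs[r][c]) for r in range(2) for c in range(2))

g_ym, s = 0.7, 1.3
assert all(nahm_residual(i, s, g_ym) < 1e-6 for i in range(3))
```

The $1/s$ fall-off of this solution is the flat-space analogue of the funnel profile found above, now for D2-branes opening into a D4-brane.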
By using $g_{YM}=4\pi v/(k\sqrt{2N})$, as in the M2 to D2 reduction \cite{Pang:2008hw, Li:2008ya} for the $\mathcal{N}=6$ theory, we get the Nahm equation with the exact coefficient, in the large $v$ and large $k$ limit with $N~$and $v/k~$fixed and finite. This describes multiple D2-branes ending on a D4-brane wrapping an $S^{2}$, and in the reduction process the $S^{3}/Z_{k}~$reduces to the $S^{2}~$that the D4-brane wraps. \section{Domain wall configurations and M2-M5 system} \label{domain} \subsection{Domain wall equations} \vspace{1pt}\label{domain_equation}\vspace{1pt} In this section, we turn to the discussion of another aspect of the M5-branes in the $\mathcal{N}=6$ theory. \vspace{1pt}For the $\mathcal{N}=8~$M2-brane theory on flat space, we can turn on four fermion mass terms, which preserve at least $\mathcal{N}=2~$supersymmetry. The most symmetric mass deformation is the one preserving an $SO(4)\times SO(4)~$symmetry \cite{Pope:2003jp, Bena:2004jw, Lin:2004nb} and an $SU(2|2)\times SU(2|2)$ superalgebra. In this case, M5-branes can wrap either of the two geometric $S^{3}$s in orthogonal $R^{4}$s.$~$In the case of the $\mathcal{N}=6$ formulation, the most symmetric mass deformation turns out to preserve a manifest $SU(2)\times SU(2)\times U(1)~$symmetry \cite{Gomis:2008vc} (see also the related discussion in \cite{Hosomichi:2008jb,Hosomichi:2008jd}) and we expect to have an $SU(2|2)\times SU(1|1)~$superalgebra. In this case, M5-branes can wrap either of two possible geometric $(S^{3}/Z_{k})$s, where the $Z_{k}$ action, due to the residual symmetry, squashes the 3-spheres along their Hopf fiber directions while maintaining a manifest $SU(2)\times U(1)~$symmetry, as in (\ref{Z_basu_02}). We can turn on a D-term deformation corresponding to adding an FI term, as found in \cite{Gomis:2008vc}.
In our notation, we have the deformed potential \begin{eqnarray} V_{scalar} &=&V_{D}+V_{F} \notag \\ &=&\frac{4\pi ^{2}}{k^{2}}\mbox{tr}(|-\frac{k}{2\pi }\mu Z^{B}+Z^{A}Z_{A}^{\dagger }Z^{B}-Z^{B}Z_{A}^{\dagger }Z^{A}-W^{\dagger A}W_{A}Z^{B}+Z^{B}W_{A}W^{\dagger A}|^{2} \notag \\ &&+|-\frac{k}{2\pi }\mu W^{\dagger B}+W^{\dagger A}W_{A}W^{\dagger B}-W^{\dagger B}W_{A}W^{\dagger A}-Z^{A}Z_{A}^{\dagger }W^{\dagger B}+W^{\dagger B}Z_{A}^{\dagger }Z^{A}|^{2}) \notag \\ &&+\frac{16\pi ^{2}}{k^{2}}\mbox{tr}\left( |\epsilon _{AC}\epsilon ^{BD}W_{B}Z^{C}W_{D}|^{2}+|\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}|^{2}\right) \end{eqnarray} where $\mu $ is a canonical mass parameter. We perform the Bogomol'nyi completion combining the kinetic terms and D-terms, similarly to (\ref{Dterm_combine01}), and we get \begin{eqnarray} H &=&\int dx^{1}ds\mbox{tr}(|\partial _{s}W^{\dagger A}-\mu W^{\dagger A}+\frac{2\pi }{k}(W^{\dagger B}W_{B}W^{\dagger A}-W^{\dagger A}W_{B}W^{\dagger B} \notag \\ &&-Z^{B}Z_{B}^{\dagger }W^{\dagger A}+W^{\dagger A}Z_{B}^{\dagger }Z^{B})|^{2} \notag \\ &&+|\partial _{s}Z^{A}-\mu Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B}-W^{\dagger B}W_{B}Z^{A}+Z^{A}W_{B}W^{\dagger B})|^{2} \notag \\ &&+\frac{16\pi ^{2}}{k^{2}}|\epsilon _{AC}\epsilon ^{BD}W_{B}Z^{C}W_{D}|^{2}+\frac{16\pi ^{2}}{k^{2}}|\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}|^{2}) \notag \\ &&+\frac{\pi }{k}\int dx^{1}~\mbox{tr}(W_{A}W^{\dagger A}W_{B}W^{\dagger B}-W^{\dagger A}W_{A}W^{\dagger B}W_{B}+2W^{\dagger A}W_{A}Z^{B}Z_{B}^{\dagger } \notag \\ &&-2W_{A}W^{\dagger A}Z_{B}^{\dagger }Z^{B}+Z_{A}^{\dagger }Z^{A}Z_{B}^{\dagger }Z^{B}-Z^{A}Z_{A}^{\dagger }Z^{B}Z_{B}^{\dagger }) \notag \\ &&+\int dx^{1}~\mathrm{tr}(\mu W^{\dagger A}W_{A}+\mu Z^{A}Z_{A}^{\dagger }) \end{eqnarray} New boundary topological terms are produced when the BPS equations are modified.
The BPS domain wall equations are \begin{equation} \partial _{s}W^{\dagger A}-\mu W^{\dagger A}+\frac{2\pi }{k}(W^{\dagger B}W_{B}W^{\dagger A}-W^{\dagger A}W_{B}W^{\dagger B}-Z^{B}Z_{B}^{\dagger }W^{\dagger A}+W^{\dagger A}Z_{B}^{\dagger }Z^{B})=0 \label{domain_eqn_06} \end{equation} \begin{equation} \partial _{s}Z^{A}-\mu Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B}-W^{\dagger B}W_{B}Z^{A}+Z^{A}W_{B}W^{\dagger B})=0 \label{domain_eqn_07} \end{equation} \begin{equation} \epsilon _{AC}\epsilon ^{BD}W_{B}Z^{C}W_{D}=\epsilon ^{AC}\epsilon _{BD}Z^{B}W_{C}Z^{D}=0 . \label{domain_eqn_08} \end{equation} The equations are modified by just adding the linear terms. \subsection{Domain wall solutions and their tensions} \label{domain_tension}\vspace{1pt} In this section we discuss solutions of these domain wall configurations and derive their tensions. Setting $W^{\dagger A}=0~$in equations (\ref{domain_eqn_06})-(\ref{domain_eqn_08}), we need to solve \begin{equation} \partial _{s}Z^{A}-\mu Z^{A}+\frac{2\pi }{k}(Z^{B}Z_{B}^{\dagger }Z^{A}-Z^{A}Z_{B}^{\dagger }Z^{B})=0 \label{domain_01} \end{equation} We assume the ansatz \begin{eqnarray} &&Z^{A}=~h(s)G^{A},~~~~G^{A}=G^{B}G_{B}^{\dagger }G^{A}-G^{A}G_{B}^{\dagger }G^{B} \\ &&\partial _{s}h-\mu h+\frac{2\pi }{k}h^{3}=0 \label{h_eqn_domain04} \end{eqnarray} We then obtain two solutions \begin{eqnarray} h_{1}(s) &=&\sqrt{\frac{k\mu }{2\pi \left( 1-e^{-2\mu s}\right) }}~ \\ h_{2}(s) &=&~\sqrt{\frac{k\mu }{2\pi \left( 1+e^{-2\mu s}\right) }} \end{eqnarray} The first solution $h_{1}~$describes a fuzzy funnel where $s\in (0,\infty )$, and in the $\mu \rightarrow 0$ limit reproduces (\ref{funnel_profile01}). The second solution $h_{2}~$is a domain wall solution where $s\in (-\infty ,\infty )$.
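Both profiles can be checked directly against (\ref{h_eqn_domain04}) by finite differences (a quick numerical sketch; the parameter values are arbitrary):

```python
import math

def h1(s, k, mu):
    # fuzzy-funnel branch, valid for s > 0
    return math.sqrt(k * mu / (2.0 * math.pi * (1.0 - math.exp(-2.0 * mu * s))))

def h2(s, k, mu):
    # domain-wall branch, valid for all s
    return math.sqrt(k * mu / (2.0 * math.pi * (1.0 + math.exp(-2.0 * mu * s))))

def residual(h, s, k, mu, eps=1e-6):
    # central-difference check of h' - mu h + (2 pi / k) h^3 = 0
    hp = (h(s + eps, k, mu) - h(s - eps, k, mu)) / (2.0 * eps)
    return hp - mu * h(s, k, mu) + (2.0 * math.pi / k) * h(s, k, mu) ** 3

k, mu = 4.0, 0.8
assert all(abs(residual(h2, s, k, mu)) < 1e-6 for s in (-2.0, 0.0, 1.5))
assert all(abs(residual(h1, s, k, mu)) < 1e-6 for s in (0.4, 1.0, 3.0))
```

One can also confirm numerically that $h_{1}(s)\rightarrow\sqrt{k/4\pi s}$ as $\mu\rightarrow 0$, recovering the undeformed funnel profile.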
We have \begin{equation} h_{2}(-\infty )=0,~\ \ \ h_{2}(+\infty )=\sqrt{\frac{k\mu }{2\pi }}~ \end{equation} so this domain wall solution \begin{equation} Z^{A}=\sqrt{\frac{k\mu }{2\pi \left( 1+e^{-2\mu s}\right) }}G^{A} \end{equation} connects a trivial vacuum with a nontrivial fuzzy sphere vacuum $\sqrt{\frac{k\mu }{2\pi }}G^{A}$. The non-vanishing boundary terms when $W^{\dagger A}=0$ are \begin{eqnarray} H &=&\int dx^{1}ds\partial _{s}\mathrm{tr}(\mu Z^{A}Z_{A}^{\dagger })+\frac{\pi }{k}\int dx^{1}ds\partial _{s}\mbox{tr}(Z_{A}^{\dagger }Z^{A}Z_{B}^{\dagger }Z^{B}-Z^{A}Z_{A}^{\dagger }Z^{B}Z_{B}^{\dagger }) \\ &=&\int dx^{1}~\mathrm{tr}(\frac{1}{2}\mu Z^{A}Z_{A}^{\dagger })|_{s=-\infty }^{s=\infty }=2\int dx^{1}ds~\mathrm{tr}(\partial _{s}Z^{A}\partial _{s}Z_{A}^{\dagger }) \label{boundary terms_after eqn} \\ &=&\int dx^{1}(\frac{k\mu ^{2}}{4\pi })\mbox{tr}(G^{A}G_{A}^{\dagger })|_{s=-\infty }^{s=\infty } \label{general_domain 04} \\ &=&\int dx^{1}\frac{k}{4\pi }\mu ^{2}N(N-1) \label{general domain 05} \end{eqnarray} where in deriving the second line in (\ref{boundary terms_after eqn}) we have used the equation of motion (\ref{domain_01}) to simplify \begin{equation} \frac{\pi }{k}(Z^{A}Z_{B}^{\dagger }Z^{B}Z_{A}^{\dagger }-Z^{B}Z_{B}^{\dagger }Z^{A}Z_{A}^{\dagger })=-\frac{1}{2}\mu Z^{A}Z_{A}^{\dagger }+\frac{1}{2}(\partial _{s}Z^{A})Z_{A}^{\dagger } \label{eqn_domain02} \end{equation} and used the fact that $\frac{1}{2}(\partial _{s}Z^{A})Z_{A}^{\dagger }$ vanishes at both $s=-\infty ~$and $s=\infty .$ The tension of this domain wall is therefore \begin{equation} \tau =\frac{k}{4\pi }\mu ^{2}N(N-1) \label{simplest_domain02} \end{equation} It agrees with other results for slightly different theories, as discussed in \cite{Bagger:2007jr} and the second ref. in \cite{Gomis:2008cv}.
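The boundary-term arithmetic leading to (\ref{simplest_domain02}) is simple enough to spot-check numerically (a sketch with arbitrary parameter values): the only non-vanishing contribution is $(\mu/2)\,\mathrm{tr}(Z^{A}Z_{A}^{\dagger})$ at $s=+\infty$, with $h_{2}(+\infty)^{2}=k\mu/2\pi$ and $\mathrm{tr}(G^{A}G_{A}^{\dagger})=N(N-1)$.

```python
import math

def h2_inf_sq(k, mu):
    # h2(s)^2 = k mu / (2 pi (1 + exp(-2 mu s))) -> k mu / (2 pi) as s -> +inf
    return k * mu / (2.0 * math.pi)

def wall_tension(N, k, mu):
    # (mu/2) tr(Z Z^dagger) at s = +inf, using tr(G^A G_A^dagger) = N(N-1);
    # the s = -inf end vanishes since h2(-inf) = 0
    return 0.5 * mu * h2_inf_sq(k, mu) * N * (N - 1)

N, k, mu = 5, 3.0, 0.6
assert abs(wall_tension(N, k, mu) - k * mu ** 2 * N * (N - 1) / (4.0 * math.pi)) < 1e-12
```

The result reproduces $\tau = (k/4\pi)\mu^{2}N(N-1)$ exactly, as it must.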
Since (\ref{eqn_domain02}) and (\ref{boundary terms_after eqn}) are general results for general domain wall solutions, we see that the expression (\ref{general_domain 04}) should be a general result for the tension of a domain wall between two arbitrary vacua labelled by integers $\{N_{i}^{\prime }|_{s=-\infty },i=1,...,p^{\prime }\},~\{N_{i}|_{s=\infty },i=1,...,p\},$ in which the integers label the dimensions of the irreducible solutions of the $p^{\prime }~$and $p~$diagonal-block matrices in $G^{A}|_{s=-\infty }~$and$~G^{A}|_{s=\infty }~$respectively.$~$The tension of the domain wall between these two arbitrary vacua is therefore \begin{equation} \tau =\frac{k\mu ^{2}}{4\pi }\sum\limits_{i=1}^{p}N_{i}(N_{i}-1)|_{s=\infty }-\frac{k\mu ^{2}}{4\pi }\sum\limits_{i=1}^{p^{\prime }}N_{i}^{\prime }(N_{i}^{\prime }-1)|_{s=-\infty } \end{equation} The dependence of (\ref{simplest_domain02}) on mass and $N$ also agrees with the gravity dual analysis in \cite{Bena:2000zb}, based on computing the action of an M5-brane filling a 4-ball bounded by the 3-sphere on which the M5-brane constructed from M2-branes wraps. The probe M5-brane is also along the $R^{1,1}~$part of the M2-brane worldvolume directions. This computation can also be performed by calculating the action of an M5-brane wrapping an $S^{3}$ as well as the $x_{2}~$line-segment across the fermion band at $y=0$ in the gravity geometry of \cite{Lin:2004nb},\cite{Lin:2005nh}. In this gravity picture, it is suggestive that if the fermion band is narrow, the M5-brane action is expected to be small. \vspace{1pt}\vspace{1pt} \section{Conclusions and discussion} \label{discussion} In this paper we have studied two problems concerning M5-branes in the $\mathcal{N}=6$ theory. We analyzed the Basu-Harvey type equations and found evidence that the equations describe multiple M2-branes ending on an M5-brane, which wraps a fuzzy 3-sphere.
We derived the tension of the M5-brane, and it exactly agrees with the known result in the large $N$ limit. We also found that the 3-sphere is orbifolded by a $Z_{k}~$action, as the volume of the M5-brane is suppressed by $1/k.$ This is also consistent with the $SU(2)\times U(1)~$symmetry of the equations. We also derived the Nahm equation describing D2-branes ending on a D4-brane wrapping an $S^{2}~$starting from the above Basu-Harvey type equations and taking a large $k$ limit, providing further evidence for consistency. We then turned to another situation where M5-branes wrapping fuzzy 3-spheres emerge as the vacua of the mass-deformed $\mathcal{N}=6$ theory. We found domain wall solutions and computed their tensions, in agreement with known gravity analyses, thereby adding further evidence for the existence of the M5-branes in the $\mathcal{N}=6$ theory. \vspace{1pt} \vspace{0.2cm} \section*{Acknowledgments} It is a pleasure to thank A. Gustavsson, T. Klose, J. T. Liu, J. Maldacena, S. Sasaki and X. Yin for helpful conversations or correspondences. This work is supported by the U.S. Department of Energy under grant DE-FG02-95ER40899, and by the Michigan Center for Theoretical Physics.
\section{Introduction} The relationship between gas, metals and dust that defines the interstellar medium (ISM) plays a central role in the properties of star formation, and in the appearance, evolution and ultimate fate of galaxies. However, the basic quantitative relationship between gas, metals and dust is still not well-defined even for our own Galaxy. Many studies have been made over the years examining the dust-to-gas ratio in the Galaxy and, more recently, at cosmological distances. Two methods dominate these analyses: a) comparing hydrogen absorption to dust extinction or reddening, using the Ly$\alpha$ line in the UV (and sometimes H$_2$ UV lines) to get the total gas column and pairs of stars to obtain the total extinction; and b) soft X-ray photoelectric absorption, which measures the total metal column density in the foreground of bright X-ray sources; this is converted to a gas column assuming a metallicity, and compared to a dust extinction obtained from methods such as the Balmer decrement, the deviation in the optical/infrared from a blackbody spectrum, or a measurement of the dust column using the halo made by small-angle scattering of X-rays off the dust. The first method provides a genuine gas-to-dust ratio, in the sense that it measures the bulk of the gas directly. However, it suffers from the deficiency that it is insensitive to ionised gas and, unless the molecular lines are measured, also to H$_2$, where the fraction of hydrogen in H$_2$ may be $\sim 0.5$ for $A_V\gtrsim0.5$ \citep{2009ApJS..180..125R}. Furthermore, it only works along relatively low extinction lines of sight ($A_V\lesssim2-3$), since it requires spectroscopy in the UV where the extinction is far higher and stars are often too faint to observe in Ly$\alpha$ or H$_2$.
The X-ray absorption method works to very high extinctions ($A_V\sim30$ or more) and measures essentially all metals, whether they are ionised or even in the solid phase, providing a census of the total column density in metals. However, it is a measurement of the metal column, not the gas column, since the X-ray absorption is almost insensitive to hydrogen absorption and depends only weakly on helium. Therefore the X-ray absorption requires a metallicity conversion to move from a dust-to-metals to a dust-to-gas ratio. It is perhaps worth noting that most studies to date in the Galaxy have either focussed on relatively nearby sets of objects or provided a small number (between 3 and about 20 for the X-ray studies) of lines of sight to locations within a few kpc of the Sun. Most studies in the Galaxy have therefore not probed the ISM of the Milky Way in either a complete or unbiased way. These studies have consistently found a dust-to-gas ratio at a level of $N_H/A_V \sim 2\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ \citep{1978ApJ...224..132B,1981MNRAS.196..469W,1994ApJ...427..274D,1996Ap&SS.236..285R,1973A&A....26..257R,1975ApJ...198...95G,1975ApJ...198..103R,1995A&A...293..889P,2003A&A...408..581V,2009MNRAS.400.2050G,2009ApJS..180..125R}. The statistical errors quoted for some studies have been as small as a few $10^{19}$\,cm$^{-2}$\,mag$^{-1}$; however, the variation from study to study is closer to a few parts in $10^{20}$\,cm$^{-2}$\,mag$^{-1}$. This discrepancy may be related to an underestimate of the uncertainties or to an intrinsic scatter in the relation. Gamma-ray bursts (GRBs), while found at cosmological distances, are extremely bright. In this paper we use the large homogeneous sample of \emph{Swift} GRB X-ray afterglows to determine upper limits to the metal column densities along several hundred lines of sight through the Galaxy.
We compare these metal column densities to all-sky hydrogen and dust surveys to obtain a new dust-to-metals ratio and metallicity value for the Galaxy. The GRB afterglows are subject to absorption by their hosts, with a minor contribution from intervening objects, so we use a 2D two-sided Kolmogorov-Smirnov (KS) test to overcome this limitation and define the Galactic lower envelope to their absorbing column densities. Since it is X-ray selected, the sample is not afflicted by observational bias as UV studies are. However, its greatest benefit over previous samples is that it passes lines of sight at random through the Galaxy, and passes through the entire Galaxy in every direction, providing a set of sightlines less affected by the relatively local nature of some previous studies. In the next section I describe the sample, data reduction and analysis techniques. In section~\ref{sec:results}, I present the results of the study. Section~\ref{sec:discussion} contains an analysis of the relevance of the results and a comparison to previous efforts inside and outside our Galaxy. In section~\ref{sec:conclusions}, I offer my conclusions. All errors quoted are statistical uncertainties at the 68\% confidence level for one parameter of interest unless stated otherwise. \section{Sample selection, data reduction and analysis} The aim of the work is to obtain equivalent hydrogen column densities (in essence the metal column density, $N_{\rm H_{\rm X}}$) for a large number of sightlines through the Galaxy from the soft X-ray photoelectric absorption of GRB afterglows. These column densities will be a combination of the whole column density along the line of sight to the GRB, dominated by the Galactic column, and the absorption from the GRB's host galaxy. This would yield a distribution of column densities with a lower limit at the values of the Galactic column density.
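The two-dimensional, two-sided KS comparison mentioned in the introduction can be sketched with a Fasano \& Franceschini style statistic, which compares quadrant fractions around each data point used as an origin. This is a generic illustration of the technique, not necessarily the exact implementation used for the analysis here.

```python
def quadrant_fracs(sample, x0, y0):
    # fractions of the sample in the four quadrants around (x0, y0)
    n = float(len(sample))
    q = [0, 0, 0, 0]
    for x, y in sample:
        if x > x0 and y > y0:
            q[0] += 1
        elif x <= x0 and y > y0:
            q[1] += 1
        elif x <= x0 and y <= y0:
            q[2] += 1
        else:
            q[3] += 1
    return [c / n for c in q]

def ks2d_statistic(s1, s2):
    # Fasano & Franceschini (1987) style two-sample statistic: maximize, over
    # data points taken as quadrant origins, the largest difference between
    # the two samples' quadrant fractions
    d = 0.0
    for x0, y0 in list(s1) + list(s2):
        f1 = quadrant_fracs(s1, x0, y0)
        f2 = quadrant_fracs(s2, x0, y0)
        d = max(d, max(abs(a - b) for a, b in zip(f1, f2)))
    return d

grid = [(i, j) for i in range(5) for j in range(5)]
shifted = [(i + 10, j + 10) for i, j in grid]
assert ks2d_statistic(grid, grid) == 0.0      # identical samples
assert ks2d_statistic(grid, shifted) == 1.0   # completely separated samples
```

In the analysis described here, the two "samples" would be the observed $N_{\rm H_{\rm X}}$ distribution and a model distribution truncated at the Galactic lower envelope.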
By obtaining the Galactic dust and \ion{H}{i} column densities ($E(B-V)$ and $N_{\rm HI}$, respectively) along the line-of-sight to the GRB from surveys, specifically \citet{1998ApJ...500..525S} for Galactic dust and \citet{2005A&A...440..775K} for \ion{H}{i}, the dust-to-metals ratio and metallicity can be obtained by suitably fitting the two-dimensional distributions of $N_{\rm H_{\rm X}}$--$A_V$ and $N_{\rm H_{\rm X}}$--$N_{\rm HI}$ with a cut-off at the value of the Galactic relation. An $R_V=3.1$ was assumed to convert $E(B-V)$ to $A_V$, consistent with the mean and median $R_V$ values found by \citet{2007ApJ...663..320F} for 328 sources at distances up to $\sim5$\,kpc distributed across the Galactic plane. The dust column values were obtained from \citet{1998ApJ...500..525S} and reduced by 14\% following the analysis of \citet{2010ApJ...725.1175S}, with uncertainties based on the standard deviation of nearby points. The atomic hydrogen column densities were obtained from the Leiden-Argentine-Bonn survey reported in \citet{2005A&A...440..775K} and accessed through the \texttt{nH} tool in Ftools. Uncertainties in the \ion{H}{i} column density were set at 10\% \citep{2010arXiv1012.5319W}. To derive reddenings, \citet{1998ApJ...500..525S} use the $100\,\mu$m sky emission maps from COBE/DIRBE and IRAS/ISSA with temperature corrections. The conversion from far-infrared (FIR) flux to reddening is made using the excess colours of a sample of elliptical galaxies. The zodiacal light contribution is removed using the DIRBE $25\,\mu$m maps. The values of $E(B-V)$ reported should be fairly accurate, since the conversion from FIR emission is calibrated on measured reddenings, with systematic uncertainties at a level below our ultimate statistical uncertainty \citep{2010ApJ...725.1175S}.
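The adopted conversion from map reddening to extinction is a one-liner, shown here for concreteness (the function name is ours; the numbers, $R_V=3.1$ and the 14\% downward recalibration of the SFD normalization, are those stated in the text):

```python
def av_from_sfd_ebv(ebv_sfd, r_v=3.1, recalibration=0.86):
    """Convert an SFD98 map reddening E(B-V) to A_V.

    The factor 0.86 implements the 14% reduction of the SFD dust column
    following the Schlafly et al. recalibration adopted in the text;
    R_V = 3.1 is the assumed mean Galactic extinction law.
    """
    return r_v * recalibration * ebv_sfd

# e.g. a line of sight with E(B-V)_SFD = 0.10 mag
assert abs(av_from_sfd_ebv(0.10) - 0.2666) < 1e-4
```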
The accuracy of using a conversion from reddenings to absolute extinction in the Galaxy is a matter of significant debate \citep[e.g.][]{2007ApJ...663..320F}, and will evidently be different along different lines of sight. However, the standard deviation found in the conversion, i.e. in $R_V$, is 0.27 \citep{2007ApJ...663..320F}. The 21\,cm maps \citep{2005A&A...440..775K} are the combined Leiden/Dwingeloo \citep{1997agnh.book.....H} and Instituto Argentino de Radioastronomía \citep{2005A&A...440..767B} surveys detecting \ion{H}{i} over a velocity range of $-450$\,km\,s$^{-1}$ to $+400$\,km\,s$^{-1}$ at 1.3\,km\,s$^{-1}$ resolution with very high equivalent main beam efficiency ($\gtrsim0.99$). To obtain an equivalent hydrogen column density, $N_{\rm H_X}$, from low-resolution X-ray spectroscopy, the deviation from a power-law at low energies is measured. This deviation is due to photoelectric absorption by metals (primarily O, C, Si, Fe and He depending on the energy), and is fit with absorption models of the gas. I assume a neutral gas, although the total column density is not strongly affected by the first few ionisations of the metals since the absorption is due to inner shell electrons. Measured cross-sections are used for the elements, and while improvements to the cross-sections over the years have improved fits to absorption data \citep[][WAM00]{2000ApJ...542..914W}, the effects on low-resolution spectroscopy are not large, a few percent at most on the total column density (WAM00). The most important effect is the assumed abundance of the elements relative to hydrogen, and this is discussed in section~\ref{sec:results} below. The relative abundances of the various metals also have some effect on the total derived column density; however, at these resolutions, there is no way to distinguish which elements dominate the absorption, and since the majority of these elements contribute to the dust, it is not a crucial point.
Recent work suggests that the hot intergalactic medium (IGM) might increase the X-ray absorption smoothly with redshift at a low level \citep{2011ApJ...734...26B}. However, 1) any such effect is small, 2) it does not appear to be consistent with IGM absorption, since no excess absorption is detected in about a third of the quasars examined as a comparison sample, and 3) it may be related to inadequately modelled Galactic absorption. Apart from smooth IGM absorption, intervening galaxies are unlikely to contribute much to the X-ray absorption in the general case, as the observed absorption is related to the total metal column density and drops approximately as $(1+z)^{2.5}$ due to bandpass effects. Every line of sight would have to have the equivalent of a $0.1Z_{\sun}$ metallicity absorber at redshift $z=1$ with a neutral gas column density of $\sim5\times10^{21}$\,cm$^{-2}$ in order to begin to impact the result here. Such large column density intervening systems do not exist along most sightlines, as argued in \citet{2007ApJ...660L.101W}. All GRBs observed with \emph{Swift}'s X-ray telescope (XRT) were included in the sample up to the end of November 2010 (GRB\,101030A), resulting in 638 GRBs. While X-rays are detected for almost all GRBs observed by \emph{Swift}-XRT, a significant fraction do not have sufficient signal-to-noise to provide a spectrum good enough to determine the X-ray absorption. The method adopted here has been to take all pre-reduced spectra from the Swift/XRT GRB spectrum repository \citep{2009MNRAS.397.1177E} for both the windowed timing (WT) and photon counting (PC) modes using the appropriate corresponding calibration files from that archive. These data were then fit with a power-law with photo-electric absorption (\texttt{phabs(pow)} in Xspec). While the PC and WT mode spectra for a given GRB are not independent objects, they are independent measurements of the same sightline.
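The bandpass argument above can be made concrete with a rough toy calculation (an illustrative sketch, not part of the fitting): the observer-frame equivalent column of an intervening absorber scales with its metallicity and falls off as $(1+z)^{2.5}$.

```python
def effective_column(n_h, metallicity_solar, z):
    """Observer-frame equivalent hydrogen column (solar-abundance units)
    contributed by an absorber of the given intrinsic column, metallicity
    (in solar units) and redshift, using the (1+z)^2.5 bandpass scaling."""
    return n_h * metallicity_solar / (1.0 + z) ** 2.5

# The marginal case quoted in the text: 0.1 Z_sun, 5e21 cm^-2 at z = 1
n_eff = effective_column(5e21, 0.1, 1.0)
# ~9e19 cm^-2, i.e. comparable to a low Galactic column; anything less
# common or less extreme leaves the Galactic lower envelope untouched.
```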
It is well known that the absorption in the X-ray afterglows of some GRBs appears to decrease as a function of time \citep[e.g.][]{2005A&A...442L..21S,2007A&A...462..565G,2007ApJ...654L..17C}. For this reason, it is often preferable to choose the later PC data, as it is likely to be closer to the Galactic value. However, the WT data often have considerably higher signal. Therefore, where the PC and WT mode data gave results consistent within $1\sigma$ (68\% confidence), both values were used. Where the results were discrepant at $>1\sigma$, the PC value was used. The result is, however, not very sensitive to this choice, since the adopted method depends solely on the limit of the population in the $N_{\rm H_{\rm X}}$--$A_V$ plane. The cut-off, which is the relation between the X-ray column density and the \ion{H}{i} or $A_V$ columns, was obtained from the data using a two-dimensional two-sample KS (2D2SKS) test. The 2D distribution was fit using a log-normal for the Galactic column density distributions (either $A_V$ or $N_{\rm HI}$) and the sum of the normalised Galactic column and an extragalactic column for the GRB afterglows, where a log-normal was also used to reproduce the extragalactic column density distribution. The parameters for the Galactic column density log-normal distribution were obtained by fitting the $A_V$ or $N_{\rm HI}$ distributions directly. The parameters of the extragalactic log-normal distribution as well as the normalisation of the Galactic component of the GRB afterglow absorption distribution were left as free parameters in the 2D2SKS fit. This normalisation parameter is the ratio between the X-ray absorbing column density and the $A_V$ or $N_{\rm HI}$. The fit was obtained by applying the 2D2SKS test while stepping each parameter across the search space, until a maximum in the probability was found.
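The text does not spell out the 2D two-sample KS statistic itself; the sketch below follows the Fasano--Franceschini variant (comparing quadrant fractions around each data point), which is one way the grid search described above could be driven. The function name and the averaging convention are choices of this sketch, not the paper's.

```python
import numpy as np

def ks2d2s_statistic(a, b):
    """Two-sample 2D KS statistic (Fasano & Franceschini variant): the
    maximum, over all data points and the four quadrants around each,
    of the difference between the two samples' quadrant fractions."""
    def max_diff(points, a, b):
        d = 0.0
        for (x, y) in points:
            for sx in (1, -1):
                for sy in (1, -1):
                    fa = np.mean((sx * (a[:, 0] - x) > 0) & (sy * (a[:, 1] - y) > 0))
                    fb = np.mean((sx * (b[:, 0] - x) > 0) & (sy * (b[:, 1] - y) > 0))
                    d = max(d, abs(fa - fb))
        return d
    # Average the statistic taken over each sample's own points
    return 0.5 * (max_diff(a, a, b) + max_diff(b, a, b))
```

A grid search over the model parameters would then draw a synthetic sample at each grid point and keep the parameters minimising this statistic (equivalently, maximising the associated probability).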
Uncertainties in the parameters were obtained using a Monte Carlo technique based on a Gaussian distribution of the errors on each datapoint. The same method was employed to obtain the best-fit $N_{\rm HI}$--$N_{\rm H_{\rm X}}$ and $A_V$--$N_{\rm H_{\rm X}}$ relations. \section{Results} \label{sec:results} The best fit to the $A_V$--$N_{\rm H_{\rm X}}$ relation is $N_{\rm H_{\rm X}} = 2.2^{+0.4}_{-0.3}\times10^{21}$\,cm$^{-2}$\,$A_V$. The best fit for the ratio of X-ray to 21\,cm absorbing column densities is $N_{\rm H_{\rm X}}/N_{\rm HI} = 1.1^{+0.2}_{-0.1}$. The results obtained are, not surprisingly, somewhat sensitive to finding a function that reproduces the distributions with reasonable fidelity. However, whether a normal or log-normal distribution is used to reproduce the extragalactic column density distribution does not affect the results significantly. This is because the fits are sensitive to where the limit of the 2D dataset is, rather than the precise shape of the distribution. As long as the distribution as a whole is reproduced with reasonable accuracy, the nature of the function is not very important. In all of these fits the solar metallicity estimate of \citet[][AG89]{1989GeCoA..53..197A} was used. It is now generally accepted that this metallicity is $\sim45$\% higher than indicated by direct measurements of the solar spectrum \citep{2009ARA&A..47..481A} and possibly the local ISM (WAM00). It was suggested by WAM00 that X-ray absorption studies should use, among other things, updated absorption cross-sections and, in particular, updated ISM metallicity values to determine gas column densities in the Galaxy. The other improvements suggested by WAM00 have a relatively small effect on the determined gas column densities, compared to the change in metallicity they propose. Their proposed ISM abundances typically increase the equivalent hydrogen column density for a given observation by roughly the inverse of the change in the metallicity.
We fitted all of our data again using the proposed absorption model of WAM00, i.e.\ \texttt{tbabs}, with abundances set at the levels suggested in that paper for the ISM. As expected, we obtain values of the $N_{\rm H_{\rm X}}/A_V$ and $N_{\rm H_{\rm X}}$/$N_{\rm HI}$ ratios that are $\sim40\%$ higher than using AG89 metallicities with similar fractional uncertainties. However, previous works have assumed metallicities similar to the AG89 values. Therefore it is the results using AG89 metallicities that we will use to compare with previous measurements. Indeed, as discussed further below, it appears that the metallicity of a typical Galactic sightline is not consistent with the values proposed in WAM00. \begin{figure} \includegraphics[viewport=1.15 17.95 396.857941 410.8,width=\columnwidth,clip=]{nXvsEBV.pdf} \caption{The Galactic dust column ($A_V$) plotted against the total equivalent hydrogen column density measured from soft X-ray absorption toward \emph{Swift} GRBs ($N_{\rm H_X}$). The best-fit limit to the population is plotted as a solid line with 1$\sigma$ uncertainties as dashed lines and represents the Galactic relation between metals and dust: $N_{\rm H_{\rm X}}/A_V = 2.2^{+0.4}_{-0.3}\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$, assuming AG89 abundances to convert the metal column to an equivalent hydrogen column density.} \label{fig:nxebv} \end{figure} \begin{figure} \includegraphics[viewport=0.15 10.95 400.857941 412.8,width=\columnwidth,clip=]{nXvsnH.pdf} \caption{The Galactic \ion{H}{i} column density plotted against the total equivalent hydrogen column density measured from soft X-ray absorption toward \emph{Swift} GRBs ($N_{\rm H_X}$). While the soft X-ray absorption is presented in units of equivalent hydrogen column density, it is effectively a metal column density converted to equivalent column density assuming AG89 abundances (see text). The best-fit relation is super-solar and is plotted as in Fig.~\ref{fig:nxebv} above. 
The best-fit relation, corresponding to a ratio of $N_{\rm H_X}/N_{\rm HI} = 1.1^{+0.2}_{-0.1}$, would be even higher using the current \citet{2009ARA&A..47..481A} solar metallicities.} \label{fig:nxnh} \end{figure} \section{Discussion} \label{sec:discussion} The results presented here are consistent with the range of previous results within uncertainties. Early work focussing on bright X-ray sources, comparing soft X-ray metal column densities to extinction columns, by \citet{1973A&A....26..257R}, \citet{1975ApJ...198...95G}, and \citet{1975ApJ...198..103R} provided $N_{\rm H_{\rm X}}/A_V = 1.9$, 2.2, and $2.2\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ respectively. Later, \citet{1995A&A...293..889P} analysed 25 point sources and four supernova remnants (SNRs) with data from ROSAT, deriving the soft X-ray absorption from the spectra of the sources. That work resulted in a value of $N_{\rm H_{\rm X}}/A_V = 1.79\pm0.03\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$. To extend the sample to higher extinction lines of sight, \citet{2003A&A...408..581V} analysed the X-ray absorption and $J$-band extinction toward six nearby star-forming regions (dominated by observations of $\rho$\,Oph) using pre-main sequence stars. They obtained a relation $N_{\rm H_{\rm X}}/A_J = 5.6\pm0.4\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$. Converting this to $N_{\rm H_{\rm X}}/A_V$ using the mean Galactic extinction curve \citep[e.g.][]{1989ApJ...345..245C} with an $R_V=3.1$ yields an extremely low value, $N_{\rm H_{\rm X}}/A_V \sim 1\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$. In that paper the unusually low metals-to-dust ratio is ascribed to either low metallicity in the region of the Galaxy local to the Sun, or to a very flat extinction curve in the $\rho$\,Oph cloud. The former hypothesis seems implausible since it requires that the gas-to-dust ratio remains constant while the metals-to-dust ratio and metallicity are lower by a factor of two.
It therefore seems considerably more plausible that the dust grains in a dense molecular cloud are larger, and that therefore the extinction curve is simply flatter. Adopting $R_V=4$ \citep{1993AJ....105.1010V} results in $N_{\rm H_{\rm X}}/A_V = 1.7\pm0.1\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$; adopting $R_V=6$ \citep{2003A&A...408..581V} gives $1.9\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ with a similar error. Observations of 38 sightlines in the far-UV allowed \citet{2009ApJS..180..125R} to estimate the gas-to-dust ratio including the effects of molecularisation of the hydrogen. They found that for their sightlines, with $E(B-V)$ in the range 0.17--1.08, the fraction of hydrogen in molecular form, $f_{H_2}$, was typically $\sim 0.5$ with none approaching $f_{H_2}\sim1$. Their total gas-to-dust ratio was consistent with previous measurements, $N_{\rm H_{\rm X}}/A_V = 2.15\pm0.14\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$. Finally, a recent paper by \citet{2009MNRAS.400.2050G} analysed this issue very systematically by selecting 22 Galactic SNRs, deriving the metal absorption from the soft X-ray spectra and the extinction from the Balmer decrement in most cases. They obtained $N_{\rm H_{\rm X}}/A_V = 2.21\pm0.09\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$. In that work they suggest that while 143 SNRs exist in the \emph{Chandra}, XMM-\emph{Newton}, and \emph{Suzaku} archives that permit good measurements of $N_{\rm H_{\rm X}}$, only the 22 they use have reasonable dust column density measurements available. An obvious extension of that work would be to obtain extinction measurements for the remaining SNRs and increase the sample size from $\sim20$ to over a hundred. A potential improvement of that technique could be to do more than use the H$\alpha$/H$\beta$ Balmer decrement, which gives a measure only of the optical reddening.
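The $R_V$-dependent conversions quoted above can be checked against the \citet{1989ApJ...345..245C} infrared extinction law, $A_\lambda/A_V = a(x)+b(x)/R_V$ with $a(x)=0.574x^{1.61}$ and $b(x)=-0.527x^{1.61}$; the adopted J-band effective wavelength of $1.25\,\mu$m is an assumption of this sketch.

```python
# A_J/A_V from the CCM (1989) infrared extinction law:
# A_lambda/A_V = a(x) + b(x)/R_V, a = 0.574 x^1.61, b = -0.527 x^1.61
def aj_over_av(r_v, x=1.0 / 1.25):   # x in inverse microns; J band ~1.25 um
    a = 0.574 * x ** 1.61
    b = -0.527 * x ** 1.61
    return a + b / r_v

n_over_aj = 5.6e21  # cm^-2 mag^-1, the rho Oph value quoted in the text
rv4 = n_over_aj * aj_over_av(4.0)   # ~1.7e21 cm^-2 mag^-1
rv6 = n_over_aj * aj_over_av(6.0)   # ~1.9e21 cm^-2 mag^-1
```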
If the technique could be extended to other line ratios \citep[as recently done for GRB host galaxies,][]{2011MNRAS.414.2793W} it would significantly improve the understanding of the extinction properties of the dust columns along the line of sight. The method used in this paper possesses several advantages over previous studies. First, the underlying X-ray source spectrum is well-defined, almost invariably dominated by a power-law. Second, the sight-lines pass at random through the Galaxy. Third, the sight-lines are to objects outside the Galaxy, allowing us to include the entire Galactic column in any given direction; the sample is therefore a good census of the whole Galaxy. However, the drawback is clearly that we are dealing with an upper limit and therefore a large fraction of our more than 600 sightlines do not contribute much statistical power to the constraint on the metals-to-dust ratio. Partly for this reason, the uncertainties on the best-fit linear relation are fairly large, $\sim 20\%$. A curious fact among previous measurements deriving the metals-to-extinction ratios from X-ray data is that the quoted statistical errors are typically a few percent, while clearly the statistical quality of the fits is low, with very large outliers in terms of contributions to the fit statistic \citep[e.g.\ SNR G0.0+0.0 in][shows a very large deviation from the fit]{2009MNRAS.400.2050G}. From this fact alone it is obvious that either the statistical or systematic uncertainties in these studies are substantially underestimated or that the intrinsic variation in the metals-to-extinction ratios is larger than the uncertainty. In addition, the studies quoted above reach mean values that differ by significantly more than a few percent, e.g.\ the studies of \citet{2009MNRAS.400.2050G} and \citet{1995A&A...293..889P} differ by more than $4\sigma$, supporting the fact that the quoted statistical uncertainties do not represent the actual scatter in the data.
The study on the $\rho$\,Oph cloud is instructive in deciding the origin of the scatter \citep{2003A&A...408..581V}, providing what appears to be a good quality fit for a single region in the Galaxy, and hints that perhaps it is the inherent scatter in the metals-to-extinction ratio that is responsible for the large deviation from one study to the next. \subsection{The effect of metallicity} The results presented here are compatible with previous observations since, as far as can be determined, previous X-ray measurements used metallicity values similar to AG89 to determine the equivalent hydrogen column density. The comparison made in this paper with the 21\,cm \ion{H}{i} measurements indicates that the mean ISM metallicity in the Galaxy is approximately the AG89 value, rather than the current best estimate of the solar metallicity as proposed by WAM00. While the uncertainties presented in this study are relatively large, a more telling comparison is with Ly$\alpha$ studies of gas-to-dust ratios in the Galaxy \citep[e.g.][]{1978ApJ...224..132B,1981MNRAS.196..469W,1994ApJ...427..274D}, which suggest a value of $N_\ion{H}{i}$ in the range $1.6$--$1.9\times10^{21}A_V$\,cm$^{-2}$. The ratio of the typical X-ray--derived value to the Ly$\alpha$-derived value is around 1.2, immediately indicating that the mean Galactic ISM metallicity is somewhat \emph{larger} (approximately 20\%) than the solar metallicity values of AG89. Such a result is a little surprising given the debate in the X-ray literature on the correct dust-to-gas ratio. However, it has been known for many years that the ISM metallicity increases toward the Galactic centre and that the ISM metallicity becomes similar to the AG89 solar value at around 6\,kpc from the Galactic centre \citep{2000A&A...363..537R}.
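Combining the factors quoted above gives the metallicity estimate used below; the representative Ly$\alpha$ value of $1.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ (mid-range of 1.6--1.9) is an assumption of this arithmetic sketch.

```python
import math

xray_over_lya = 2.2 / 1.8    # X-ray vs Ly-alpha gas-to-dust ratios, ~1.2
ag89_over_asplund = 1.45     # AG89 solar metallicity vs Asplund et al. (2009)

ism_over_asplund = xray_over_lya * ag89_over_asplund
dex = math.log10(ism_over_asplund)
# ~1.77, i.e. roughly 75% (about 0.25 dex) above the Asplund solar value
```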
Since the ratios derived in X-rays are dominated by high-extinction sightlines, it is perhaps after all not that surprising that the metallicity obtained for a typical sightline through the Galaxy is $\sim75\%$, or 0.25\,dex, higher than the most recent measurements of the solar metallicity \citep{2009ARA&A..47..481A}. This value is particularly useful since it is a direct measure of the ISM itself without having to use stellar photospheres, and it is effectively independent of dust depletion or ionisation in the ISM. \subsection{Proxies for the soft X-ray absorbing column density} For a typical X-ray sightline through the Galaxy, therefore, a more accurate estimate of the soft X-ray absorption may be obtained by using a metallicity value somewhat higher than that of AG89. Certainly it seems that using the current solar metallicities \citep{2009ARA&A..47..481A} with \ion{H}{i} column densities is likely to lead to a large underestimate of the Galactic soft X-ray absorption. Since the Galaxy is known to show a strong metallicity gradient \citep{2000A&A...363..537R}, if the dust-to-metals ratio is roughly constant \citep{2003ARA&A..41..241D}, one would expect the dust-to-gas ratio to vary in a similar way, i.e.\ a higher dust-to-gas ratio toward the Galactic centre. Sightlines toward the Galactic centre will have higher column densities: we may therefore expect a relationship in the Galaxy between the dust-to-gas ratio and the total column density. Such a relationship is possibly indicated by \citet{1998ApJ...500..525S}, where the Galactic plane has a significantly higher dust-to-gas ratio than high latitudes, with the Galactic centre direction being especially high. In Fig.~\ref{fig:nhebv} the relationship between the Galactic 21\,cm \ion{H}{i} and dust column densities is shown.
Sightlines toward the Galactic centre ($\cos(l)\cos(b)>0.9$, open squares in Fig.~\ref{fig:nhebv}) do indeed seem to show a higher dust-to-gas ratio than other sightlines with comparable column densities. The comparison of dust to gas directly yields a non-linear relation, with higher column density sightlines showing a higher dust-to-gas ratio. The linear relation $N_\ion{H}{i} = 2.2\times10^{21}A_V$ shows clear residuals. However, $N_\ion{H}{i} = 2.00\pm0.14\times10^{21} A_V^{0.77\pm0.07}$ provides a better fit to the data, though the fit is still not good. \begin{figure} \includegraphics[viewport=1.152000 29 396.857941 378,width=\columnwidth,clip=]{compare_nh_av.pdf} \caption{Atomic hydrogen column density plotted as a function of extinction for the sightlines associated with every GRB in the sample, from the LAB \ion{H}{i} survey \citep{2005A&A...440..775K} and the dust maps of \citet{1998ApJ...500..525S}. The anticipated linear relation is plotted as a solid line, with residuals to this relation in terms of $\Delta\chi$ in the lower panel. The linear relation is clearly a poor fit to the data, suggesting that the dust-to-gas ratio increases as a function of column density. The best-fit non-linear relation is plotted as a dashed line. The non-linearity in the relation may be due to metallicity variations along different sightlines. The metallicity gradient in the Galaxy suggests that sightlines toward the Galactic centre should be more metal rich and therefore show a higher dust-to-gas ratio. Sightlines toward the Galactic centre ($\cos(l)\cos(b)>0.9$) are plotted with open squares and do indeed show a higher dust-to-gas ratio than other sightlines with comparable column densities. } \label{fig:nhebv} \end{figure} Previous studies have shown a similar effect. \citet{1998ApJ...500..525S} note that the dust content of low density high velocity \ion{H}{i} clouds is lower than in other regions of the Galaxy.
The survey of \ion{H}{i} column densities and extinction in the UV using Ly$\alpha$ absorption and reddening of stars by \citet{1994ApJ...427..274D} shows the same effect very clearly (their Fig.~4a and to a lesser extent 4b). In principle, a change in the absolute to selective extinction ratio, $R_V$, could be responsible for the apparent change in gas-to-dust ratio derived from reddening. However, since we observe the same effect in Fig.~\ref{fig:nhebv}, where the extinction data are derived from the dust emission in the far-infrared (FIR), not from reddening, this seems unlikely. Another possibility is that dust temperatures, which strongly affect the emitted flux in the FIR, might produce such an effect -- cooler lines of sight would yield a spuriously lower dust column if the assumed temperature were too high. However, \citet{1998ApJ...500..525S} accounted for the dust temperature in their analysis, and more compellingly, we observe the same effect in extinction in the UV data of \citet{1994ApJ...427..274D}. Other explanations for the correlation of gas-to-dust ratio with column density, such as ionisation of the hydrogen at low column densities or molecularisation at high column densities, seem unlikely since the gas-to-metals ratio remains effectively constant across the range we study using \ion{H}{i} observations (Fig.~\ref{fig:nxnh}). It is worth noting, however, that the deviation observed at the high column end of Fig.~\ref{fig:nhebv} is consistent with the factor of two decrease in the \ion{H}{i} column density observed in far-UV molecularisation studies, where approximately half of the hydrogen exists in molecular form along sightlines with moderate or greater extinctions \citep{1978ApJ...224..132B,2009ApJS..180..125R}. Even if molecularisation or ionisation were partly responsible, however, it does not alter the fact that the \ion{H}{i} column density is not the best proxy for the soft X-ray absorbing column density.
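To see the size of the non-linearity, the two relations above can be evaluated at a moderately extinguished sightline (the choice of $A_V=10$ is illustrative):

```python
def n_hi_linear(a_v):
    """Linear Galactic relation, N_HI = 2.2e21 A_V (cm^-2)."""
    return 2.2e21 * a_v

def n_hi_powerlaw(a_v):
    """Best-fit non-linear relation, N_HI = 2.00e21 A_V^0.77 (cm^-2)."""
    return 2.00e21 * a_v ** 0.77

a_v = 10.0
ratio = n_hi_powerlaw(a_v) / n_hi_linear(a_v)
# ~0.54: at A_V = 10 the power-law fit lies roughly a factor of two below
# the linear relation, of the size expected from molecularisation.
```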
This result immediately suggests that while using $N_\ion{H}{i}$ as a proxy for the soft X-ray absorption is reasonable, using the dust column, as suggested by \citet{1998ApJ...500..525S}, is likely to yield better results, not simply because of the higher resolution of the dust maps, but because the dust-to-metals ratio seems to be more constant than the dust-to-gas ratio. While this cannot be investigated further here due to the large uncertainty in the ratios we derive, future studies should try to determine the systematic Galactic radial and column density dependences of the dust-to-metals ratio. This latter point may be important since it is well-known that the depletion of metals out of the gas phase increases as we move into the disk and into cooler environments \citep{1996ARA&A..34..279S}, which would obviously be associated with higher column densities. It has been suggested above that the exploration of \citet{2009MNRAS.400.2050G} could be extended by increasing the sample size. Such an extension would indeed be worthwhile and allow the Galaxy to be divided into various lines of sight, related to the disk or the bulge. A more complete approach might be to use bright extragalactic objects known to have low host galaxy absorptions, which would allow a census to be taken of sightlines through arbitrary directions in the Galaxy, including the halo. Obvious candidates are blazars, where the dust extinction could be derived either as done here from Galactic dust maps, or from reddening of the optical power-law itself (where the caveats are that the data in different bands be taken simultaneously and that one needs to account for intrinsic curvature of the spectra). Indeed, for a smaller sample, the X-ray and some of the optical-UV data already exist. Blazars might be bright enough for UV spectroscopy as well, allowing H$_2$ and Ly$\alpha$ measurements to be made.
Such a programme, while observationally intensive for a large enough sample to probe a significant fraction of the Galaxy, could represent one of the best probes of the Galactic ISM yet devised. \subsection{Extragalactic metals-to-dust ratios} Studies of metals-to-dust and gas-to-dust ratios also exist outside the Local Group. Metals-to-dust ratios from foreground lensing galaxies at $z\lesssim1$ have been obtained using multiply-imaged quasars \citep{2006ApJ...637...53D, 2009ApJ...692..677D}. Results show metals-to-dust ratios consistent with Galactic values. Those objects are typically relatively high mass, evolved systems. \citet{2011arXiv1102.1469Z} showed in an analysis of GRB afterglows that GRB host galaxies, which are typically young and star-forming with low metallicities and hard radiation environments \citep{2004A&A...425..913C,2006ApJ...653L..85C,2009ApJ...691..182S,2010MNRAS.405...57S,2002ApJ...566..229C,2008ApJ...683..321F,2010arXiv1010.1783W}, have substantially lower dust-to-gas ratios than the Local Group even after accounting for metallicity. \section{Conclusions} \label{sec:conclusions} An analysis of the dust-to-metals ratio and metallicity of sightlines through the Galaxy has been presented. The Galactic metal column densities were determined using the lower bound of the distribution of soft X-ray absorptions of the afterglows of a large sample of GRBs detected by the \emph{Swift} satellite. The corresponding extinction and gas column densities were found using the dust and \ion{H}{i} maps of \citet{1998ApJ...500..525S} and \citet{2005A&A...440..775K} respectively. The metal to atomic hydrogen relation is well reproduced with a metallicity $\sim1.75$ times the solar metallicity of \citet{2009ARA&A..47..481A}. The best-fitting relation between metal and dust column densities is $N_{\rm H_X}/A_V = 2.2_{-0.3}^{+0.4}\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ (using AG89 abundances).
Previous observations are consistent with this result, suggesting that the metallicity for a typical ISM sightline is 0.25\,dex higher than the current best value for the solar metallicity. It is therefore suggested that a better reproduction of the Galactic soft X-ray absorption will be provided with a metallicity $\sim20\%$ \emph{higher} than the solar metallicity of AG89. However, it is also found that a linear representation does not reproduce the gas-to-dust relationship very well. A gas-to-dust relationship with $N_\ion{H}{i} = 2.00\pm0.14\times10^{21}\,A_V^{0.77\pm0.07}$ provides a much better fit to the data. It is very likely that this is predominantly a metallicity-gradient effect, and it is therefore concluded that while the gas-to-dust relation may be as given above, the best proxy for the Galactic soft X-ray absorption should be given by the dust column density with a relation of $N_{\rm H_X}/A_V = 2.2\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ or $N_{\rm H_X}/E(B-V) = 6.8\times10^{21}$\,cm$^{-2}$\,mag$^{-1}$ for an $R_V=3.1$. \begin{acknowledgements} The Dark Cosmology Centre is funded by the DNRF. I would like to thank Anja C. Andersen, Jens Hjorth, Daniele Malesani, Johan Fynbo, and Marten van Kerkwijk for useful discussions. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. \end{acknowledgements}
\section{Introduction} Let $\{e_n\}_{n=0}^\infty$ be a linearly dense sequence of unit vectors in a Hilbert space $\mathcal H.$ Given $x\in \mathcal H,$ define {\setlength\arraycolsep{2pt} \begin{eqnarray*} x_0&=& \langle x,e_0\rangle e_0,\\ x_n&=& x_{n-1}+ \langle x- x_{n-1},e_n\rangle e_n. \end{eqnarray*}} This scheme is called the Kaczmarz algorithm (\cite{K}). It can be shown that if the vectors $g_n$ are given by the recurrence relation \begin{equation}\label{gn} g_0=e_0,\quad g_n=e_n-\sum_{i=0}^{n-1}\langle e_n,e_i\rangle g_i \end{equation} then $g_0$ is orthogonal to $g_n$ for any $n\ge 1$ and \begin{equation}\label{xn}x_n=\sum_{i=0}^n\langle x,g_i\rangle e_i. \end{equation} By (\ref{gn}) the vectors $\{g_n\}_{n=0}^\infty$ are linearly dense in $\mathcal H.$ Also, by the definition of the algorithm, the vectors $x-x_n$ and $e_n$ are orthogonal to each other. Hence \begin{eqnarray} \|x\|^2&=&\|x-x_0\|^2+|\langle x, g_0\rangle|^2,\nonumber\\ \|x-x_{n-1}\|^2&=&\|x-x_n\|^2+ |\langle x, g_n\rangle|^2, \quad n\ge 1.\label{tight} \end{eqnarray} For $n\ge 0$ let $S_n$ denote the finite rank operator defined by the rule \begin{equation} S_ny=\sum_{j=0}^{n}\langle y,e_j\rangle g_j,\quad y\in \mathcal H. \end{equation} Observe that the formulas (\ref{gn}) and (\ref{xn}) can be restated as \begin{eqnarray} (I-S_{n-1})e_n&=&g_n\\ (I-S_n^*)x&=&x-x_{n}.\label{impor} \end{eqnarray} Moreover, by (\ref{tight}) it follows that \begin{equation}\label{finite} \|x-x_n\|^2=\|(I-S_n^*)x\|^2=\|x\|^2-\sum_{j=0}^{n}|\langle x,g_j\rangle|^2. \end{equation} In particular \begin{equation}\label{subframe} \sum_{n=0}^{\infty}|\langle x,g_n\rangle|^2\le \|x\|^2, \quad x\in \mathcal H. \end{equation} The sequence $\{e_n\}_{n=0}^\infty$ is called effective if $x_n\to x$ for any $x\in \mathcal H.$ By virtue of (\ref{finite}) this is equivalent to $\|x\|^2=\sum_{n=0}^\infty |\langle x,g_n\rangle|^2$ for any $x\in \mathcal H,$ which means $\{g_n\}_{n=0}^\infty$ is a normalized tight frame.
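A finite-dimensional numerical sketch (real scalars; eight random unit vectors spanning $\mathbb{R}^4$ stand in for a linearly dense sequence) checking the identity (\ref{xn}) and the Bessel bound (\ref{subframe}):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Unit vectors e_0, ..., e_7 spanning R^4 (a stand-in for a linearly
# dense sequence; real scalars for simplicity)
E = rng.normal(size=(8, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)

x = rng.normal(size=d)

# Kaczmarz iteration: x_n = x_{n-1} + <x - x_{n-1}, e_n> e_n
xn = np.dot(x, E[0]) * E[0]
for e in E[1:]:
    xn = xn + np.dot(x - xn, e) * e

# Auxiliary vectors from the recurrence g_0 = e_0,
# g_n = e_n - sum_{i<n} <e_n, e_i> g_i
G = [E[0]]
for n in range(1, len(E)):
    G.append(E[n] - sum(np.dot(E[n], E[i]) * G[i] for i in range(n)))

# Identity (xn): x_n = sum_i <x, g_i> e_i
xn_formula = sum(np.dot(x, g) * e for g, e in zip(G, E))
assert np.allclose(xn, xn_formula)

# Bessel bound (subframe): sum_n |<x, g_n>|^2 <= ||x||^2
assert sum(np.dot(x, g) ** 2 for g in G) <= np.dot(x, x) + 1e-12
```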
We refer to \cite{KM} and \cite{HS} for more information on the Kaczmarz algorithm. {\bf Acknowledgement.} I thank Wojtek Czaja and Pascu Gavruta for drawing my attention to Lemma 3.5.1 of \cite{c}. \section{Bessel sequences} \begin{defi} A sequence of vectors $\{g_n\}_{n=0}^\infty$ in a Hilbert space $\mathcal H$ will be called a Bessel sequence if (\ref{subframe}) holds. The sequence $\{g_n\}_{n=0}^\infty$ will be called a normalized Bessel sequence if in addition $\|g_0\|=1.$ \end{defi} Let $P_n$ denote the orthogonal projection onto $e_n^\perp,$ the orthogonal complement of the vector $e_n.$ By \cite[(1)]{HS} we have \begin{eqnarray} I-S_n^*&=&P_nP_{n-1}\ldots P_0,\label{ker0}\\ I-S_n&=&P_0\ldots P_{n-1}P_n.\label{ker} \end{eqnarray} \begin{thm} For any normalized Bessel sequence $\{g_n\}_{n=0}^\infty$ in a Hilbert space $\mathcal H$ there exists a sequence $\{e_n\}_{n=0}^\infty$ of unit vectors such that (\ref{gn}) holds. In other words, any normalized Bessel sequence can be obtained through the Kaczmarz algorithm. \end{thm} \begin{proof} We will construct the sequence $\{e_n\}_{n=0}^\infty$ recursively. Set $e_0=g_0.$ Assume the unit vectors $e_1,\ldots,e_{N-1}$ have been constructed such that the formula (\ref{gn}) holds for $n=0,\ldots, N-1.$ We want to solve for $y$ the equations \begin{equation}\label{en}(I-S_{N-1})y=g_N,\quad \|y\|=1.\end{equation} By (\ref{ker}) we have $(I-S_{N-1})e_{N-1}=0,$ i.e.\ the operator $I-S_{N-1}$ admits a nontrivial kernel. Hence the solvability of (\ref{en}) is equivalent to that of \begin{equation}\label{enl}(I-S_{N-1})y=g_N,\quad \|y\|\le 1. \end{equation} By the Fredholm alternative the equation $(I-S_{N-1})y=g_N$ is solvable if and only if $g_N$ is orthogonal to $\ker (I-S_{N-1}^*).$ We will check that this condition holds. Let $x\in \ker (I-S_{N-1}^*).$ Then by (\ref{finite}) and (\ref{subframe}) we have $$0=\|(I-S_{N-1}^*)x\|^2=\|x\|^2-\sum_{j=0}^{N-1}|\langle x,g_j\rangle|^2 \ge \sum_{j=N}^\infty|\langle x,g_j\rangle|^2.
$$ In particular $\langle x,g_N\rangle =0,$ i.e. $g_N\perp \ker (I-S_{N-1}^*).$ Let $y$ denote the unique solution to $$(I-S_{N-1})y=g_N,\qquad y\perp \ker (I-S_{N-1}).$$ The proof will be complete if we show that $\|y\|\le 1.$ Again by the Fredholm alternative we have $y\in {\rm Im}\, (I-S_{N-1}^*).$ Let $y=(I-S_{N-1}^*)x$ for some $x\in \mathcal H.$ We may assume that $x\perp \ker(I-S_{N-1}^*).$ In particular $\langle x,g_0\rangle =0,$ as (\ref{ker0}) yields $g_0\in \ker (I-S_{N-1}^*).$ By (\ref{finite}) we have $$\|y\|^2=\|(I-S_{N-1}^*)x\|^2=\|x\|^2-\sum_{j=1}^{N-1}|\langle x,g_j\rangle|^2.$$ On the other hand $$\|y\|^2=\langle x,(I-S_{N-1})y\rangle =\langle x, g_N\rangle.$$ Therefore $$\|y\|^2-\|y\|^4=\|x\|^2-\sum_{j=1}^{N}|\langle x,g_j\rangle|^2\ge 0,$$ which implies $\|y\|\le 1.$ \end{proof} \begin{cor} For any normalized tight frame $\{g_n\}_{n=0}^\infty$ in a Hilbert space $\mathcal H$ there exists an effective sequence $\{e_n\}_{n=0}^\infty$ of unit vectors such that (\ref{gn}) holds, i.e. any normalized tight frame can be obtained through the Kaczmarz algorithm. \end{cor} For a sequence $\{e_n\}_{n=0}^\infty$ of unit vectors the normalized Bessel sequence $\{g_n\}_{n=0}^\infty$ is determined uniquely. However, a given normalized Bessel sequence may correspond to many sequences of unit vectors, for two reasons. First, for certain $N$ the dimension of the space $\ker(I-S_{N-1})$ may exceed 1. 
Secondly, if we fix a unit vector $u$ in $\ker(I-S_{N-1}),$ the vector $e_N$ can be defined as $e_N=y+\lambda u$ for any complex number $\lambda$ such that $|\lambda|^2+\|y\|^2=1.$ In what follows we will indicate properties which guarantee a one-to-one correspondence between $\{e_n\}_{n=0}^\infty$ and $\{g_n\}_{n=0}^\infty.$ \begin{defi} A sequence of unit vectors $\{e_n\}_{n=0}^\infty$ will be called stable if the vectors $ \{e_n\}_{n= N}^\infty $ are linearly dense for any $N.$ A normalized Bessel sequence $\{g_n\}_{n=0}^\infty$ will be called stable if the vectors $\{g_0\}\cup\{g_n\}_{n= N}^\infty $ are linearly dense for any $N.$ \end{defi} \begin{pro} Let sequences $\{e_n\}_{n=0}^\infty$ and $\{g_n\}_{n=0}^\infty$ satisfy (\ref{gn}). The sequence $\{g_n\}_{n=0}^\infty$ is stable if and only if $\{e_n\}_{n=0}^\infty$ is stable and $\langle e_n,e_{n+1}\rangle \neq 0 $ for any $n\ge 0.$ \end{pro} \begin{proof} Assume $\{g_n\}_{n=0}^\infty$ is stable. First we will show that the kernel of $I-S_{N-1}$ is one dimensional and thus consists of the multiples of the vector $e_{N-1}$ (see (\ref{ker})). Assume for a contradiction that $\dim\ker (I-S_{N-1})\ge 2.$ By the Fredholm alternative we get $\dim\ker (I-S_{N-1}^*)\ge 2.$ Hence there exists a nonzero vector $x$ such that $x\perp g_0$ and $(I-S_{N-1}^*)x=0.$ By (\ref{tight}) we obtain $$\|x\|^2=\sum_{n=1}^{N-1}|\langle x,g_n\rangle |^2.$$ This and the condition (\ref{subframe}) imply that $x$ is orthogonal to $g_0$ and to all the vectors $\{g_n\}_{n=N}^\infty,$ which contradicts the stability assumption. Assume $\langle e_{N-1},e_{N}\rangle =0$ for some $N\ge 1.$ Then by (\ref{ker}) we have $e_{N-1},\,e_N\in \ker (I-S_{N}),$ which is a contradiction as the kernel is one dimensional. 
Concerning stability of $\{e_n\}_{n=0}^\infty,$ assume a vector $y$ is orthogonal to all the vectors $\{e_n\}_{n=N}^\infty.$ In particular $y$ is orthogonal to $e_N.$ Since $\ker(I-S_N)=\mathbb{C}e_N,$ by the Fredholm alternative $y$ belongs to ${\rm Im}\,(I-S_N^*).$ Let $y=(I-S_N^*)x$ for some $x\in \mathcal H.$ We may assume that $x\perp g_0$ as $g_0\in \ker (I-S_N^*).$ By (\ref{ker0}), since $y$ is orthogonal to $e_n$ for $n\ge N,$ we get $y=(I-S_n^*)x=(I-S_N^*)x$ for $n\ge N.$ On the other hand by (\ref{xn}) and (\ref{impor}) we obtain that $\langle x,g_n\rangle =0$ for $n\ge N+1.$ Since $x\perp g_0,$ by the stability assumption we obtain $x=0$ and thus $y=0.$ For the converse implication assume $\{e_n\}_{n=0}^\infty$ is stable and $\langle e_n,e_{n+1}\rangle \neq 0.$ By the inequality (see \cite{KM}) $$\|x-x_n\|\ge |\langle e_{n-1},e_n\rangle| \|x-x_{n-1}\|$$ we get that $x-x_n\neq 0$ for any nonzero $x\perp e_0.$ Since $x-x_n=(I-S_n^*)x,$ the kernel of $I-S_n^*$ consists only of the multiples of $e_0=g_0.$ Let $x$ be orthogonal to $\{g_0\}\cup\{g_n\}_{n\ge N+1}$ for some $N\ge 1.$ By (\ref{xn}) we obtain that $x_n=x_N$ for $n\ge N.$ By the definition of the Kaczmarz algorithm we get $x-x_N\perp e_n$ for $n\ge N+1.$ Now stability of $\{e_n\}_{n=0}^\infty$ implies $x-x_N=0.$ By (\ref{impor}) we obtain $(I-S_N^*)x=0.$ This implies $x=0$ since the kernel is one dimensional and consists of the multiples of $g_0.$ \end{proof} For sequences $\{e_n\}_{n=0}^\infty $ and $\{\sigma_ne_n\}_{n=0}^\infty , $ where $\sigma_n$ are complex numbers of absolute value 1, the Kaczmarz algorithms coincide. Therefore we will restrict our attention to {\em admissible} sequences of unit vectors $\{e_n\}_{n=0}^\infty$ such that $\langle e_n,e_{n+1}\rangle \ge 0.$ \begin{thm} Let $\{g_n\}_{n=0}^\infty$ be a stable normalized Bessel sequence. Then there exists a unique admissible sequence $\{e_n\}_{n=0}^\infty $ of unit vectors such that (\ref{gn}) holds. 
Moreover the sequence $\{e_n\}_{n=0}^\infty $ is stable. \end{thm} \begin{proof} The proof will go by induction. The vector $e_0$ is determined by $e_0=g_0.$ Assume the vectors $e_0,\ldots, e_{N-1}$ have been determined uniquely. We have to show that the problem $$ (I-S_{N-1})y=g_N,\quad \|y\|=1,\quad \langle y,e_{N-1}\rangle \ge 0$$ has a unique solution $y.$ By the proof of Proposition 2 the kernel of $I-S_{N-1}$ is one dimensional and thus consists of the multiples of the vector $e_{N-1}.$ By the proof of Theorem 1 there exists a unique solution $y_N$ to the problem $$(I-S_{N-1})y=g_N,\quad y\perp \ker (I-S_{N-1}),$$ and $\|y_N\|\le 1.$ Moreover, by this proof $\|y_N\|=1$ if and only if $$\|x\|^2-\sum_{j=1}^{N}|\langle x,g_j\rangle|^2=0,$$ where $y_N=(I-S_{N-1}^*)x$ and $x\perp \ker(I-S_{N-1}^*).$ This leads to a contradiction: by inequality (\ref{subframe}) we would get that $x$ is orthogonal to $g_0$ and to all the vectors $\{g_n\}_{n=N+1}^\infty,$ hence $x=0$ by stability, contradicting $\|y_N\|=1.$ Hence $\|y_N\|<1.$ At this stage we know that any solution to the equation $$(I-S_{N-1})y=g_N$$ is of the form $$y=y_N+\lambda e_{N-1},\quad \lambda\in \mathbb{C},$$ because $\ker (I-S_{N-1})=\mathbb{C}e_{N-1}.$ Since $\|y_N\|<1$ and $y_N\perp e_{N-1},$ there exists a unique solution $y$ satisfying $\|y\|=1$ and $\langle y, e_{N-1}\rangle \ge 0,$ namely the one corresponding to $\lambda=\sqrt{1-\|y_N\|^2}.$ \end{proof} \begin{cor} Let $\{g_n\}_{n=0}^\infty$ be a stable normalized tight frame. Then there exists a unique admissible effective sequence $\{e_n\}_{n=0}^\infty $ of unit vectors such that (\ref{gn}) holds. Moreover the sequence $\{e_n\}_{n=0}^\infty $ is stable. \end{cor} \section{Algorithm} The proof of Theorem 1 can also be given by using the Gram matrix of the sequence $\{g_n\}_{n=0}^\infty.$ This argument can be used for constructing an underlying sequence of unit vectors $\{e_n\}_{n=0}^\infty.$ This will be done below. The following lemma follows from \cite[Lemma 3.5.1]{c}. 
\begin{lem}\label{iff} The collection $\{g_n\}_{n=0}^\infty$ is a Bessel sequence if and only if the Gram matrix $G=\{\langle g_i,g_j\rangle\}_{i,j=0}^\infty $ corresponds to a contraction operator on $\ell^2(\mathbb{N}).$ The sequence $\{g_n\}_{n=0}^\infty$ is a tight frame if and only if it is linearly dense and $G$ corresponds to a projection on $\ell^2(\mathbb{N}).$ \end{lem} Let $\{e_n\}_{n=0}^\infty$ be a sequence of unit vectors in a Hilbert space $\mathcal H$ and let $\{g_n\}_{n=0}^\infty$ be the corresponding normalized Bessel sequence. Let $M$ be the strictly lower triangular part of the Gram matrix of the sequence $\{e_n\}_{n=0}^\infty$ and let $U$ be the strictly lower triangular matrix defined by $$(I+U)(I+M)=I.$$ By \cite{HS} the matrix $U$ is a contraction on the Hilbert space $\ell^2(\mathbb{N}).$ \begin{lem}\label{if} For any $i,j$ we have $$\langle g_i,g_j\rangle =\langle (I-UU^*)\delta_j,\delta_i\rangle_{\ell^2(\mathbb{N})}.$$ \end{lem} \begin{proof} Let $$M=\begin{pmatrix} 0 & 0 & 0&0 & 0&\ldots \\ m_{10} & 0 &0& 0&0 & \ldots \\ m_{20} & m_{21} & 0&0 &0&\ldots\\ m_{30}&m_{31}&m_{32}& 0&0&\ldots\\ \vdots&\vdots & \vdots &\vdots&\ddots&\ddots \end{pmatrix},\quad U=\begin{pmatrix} 0 & 0 & 0&0 & 0&\ldots \\ c_{10} & 0 &0& 0&0 & \ldots \\ c_{20} & c_{21} & 0&0 &0&\ldots\\ c_{30}&c_{31}&c_{32}& 0&0&\ldots\\ \vdots&\vdots & \vdots &\vdots&\ddots&\ddots \end{pmatrix}.$$ By \cite[(6)]{HS} we have \begin{equation}\label{equiv} g_i=e_i+\sum_{k=0}^{i-1}c_{ik}e_k. \end{equation} Set $c_{nn}=m_{nn}=1.$ By taking the inner product with $g_j$ in (\ref{equiv}) we get \begin{multline*} \langle g_i,g_j\rangle = \sum_{k=0}^ic_{ik}\sum_{l=0}^j\overline{c}_{jl}\langle e_k,e_l\rangle \\=\langle (I+U)(I+M+M^*)(I+U^*)\delta_j,\delta_i\rangle _{\ell^2(\mathbb{N})}. \end{multline*} Taking into account the relations between the matrices $M$ and $U$ gives \begin{equation}\label{MU}(I+U)(I+M+M^*)(I+U^*)=I-UU^*. 
\end{equation} The product of these matrices is well defined since $U^*$ leaves the space $F(\mathbb{N})={\rm span}\, \{\delta_n\mid n\ge 0\}$ invariant. Therefore $$\langle g_i,g_j\rangle=\langle (I-UU^*)\delta_j,\delta_i\rangle_{\ell^2(\mathbb{N})}.$$ \end{proof} {\bf Remark.} Lemma \ref{if} can be used to give a shorter and simpler proof of Theorem 1 in \cite{HS}. Indeed, by Lemma \ref{iff}, a linearly dense sequence $\{g_n\}_{n=0}^\infty$ constitutes a normalized tight frame if and only if the matrix $G=\{\langle g_i,g_j\rangle\}_{i,j=0}^\infty $ is a projection. In view of Lemma \ref{if} the latter is equivalent to $U$ being a partial isometry. Moreover in this case we have $$\dim{\mathcal H}=\sum_{n=0}^\infty \|g_n\|^2={\rm Tr}\, (I-UU^*).$$ We are now ready to give an alternative proof of Theorem 1. Let $\{g_i\}_{i=0}^\infty$ be a normalized Bessel sequence. By Lemma \ref{iff} the matrix $A:=I-G$ is positive definite. Moreover $A(0,i)=A(i,0)=0$ because $\|g_0\|=1$ and $g_0\perp g_i$ for $i\ge 1.$ Let $\tilde{A}$ denote the truncated matrix obtained from $A$ by removing the first row and the first column. Clearly $\tilde{A}$ corresponds to a positive definite contraction on $\ell^2(\mathbb{N}^+).$ The next lemma is probably known; it provides an infinite-dimensional version of the so-called Cholesky decomposition of positive definite matrices. 
\begin{lem}\label{three} For any positive definite matrix $B=\{b(i,j)\}_{i,j=1}^\infty$ there exists a lower triangular matrix $V=\{v(i,j)\}_{i,j=1}^\infty$ such that $B=VV^*.$ \end{lem} \begin{proof} By a well-known fact there exists a Hilbert space $\mathcal M$ and a linearly dense sequence of vectors $\{h_i\}_{i=1}^\infty$ in $\mathcal M$ such that $$b(i,j)=\langle h_i,h_j\rangle.$$ By applying the Gram--Schmidt procedure to this sequence we obtain an orthonormal sequence $\{\eta_i\}_{i=1}^N,$ where $N=\dim \mathcal M,$ such that $h_i\in {\rm span\,}\{\eta_1,\eta_2,\ldots, \eta_i\}$ for $i< N+1.$ In particular there are coefficients $v_{ik}$ for $i\ge k$ and $k\le N$ for which we have $$h_i=\sum_{k=1}^iv_{ik}\eta_k.$$ Set $v_{ik}=0$ for $k>i$ and for $k>N.$ Then $$ b(i,j)=\langle h_i,h_j\rangle = \sum_{k=1}^{\min(i,j)}v_{ik}\overline{v}_{jk}= (VV^*)(i,j). $$ \end{proof} By Lemma \ref{three} there is a lower triangular matrix $V=\{v_{ij}\}_{i,j=1}^\infty$ such that $\widetilde{A}=VV^*.$ Let $U=\{c_{ij}\}_{i,j=0}^\infty$ be the strictly lower triangular matrix obtained from $V$ by adding a zero row and a zero column, i.e. $$c_{ij}=\begin{cases}v_{i-1,j-1}& ij>0,\\ 0 &ij=0. \end{cases}$$ In this way we obtain \begin{equation}\label{I-G}I-G=A=UU^*. \end{equation} Since $G$ corresponds to a contraction on $\ell^2(\mathbb{N}),$ so does $U.$ Let $M=\{m_{ij}\}_{i,j=0}^\infty$ be the strictly lower triangular matrix determined by $(I+M)(I+U)=I.$ Set $m_{ii}=1$ and define (cf. (\ref{gn})) $$e_i=\sum_{k=0}^{i}m_{ik}g_k.$$ We claim that the $e_i$ are unit vectors and moreover $\langle e_i,e_j\rangle =m_{i,j}$ for $i\ge j.$ This will give (\ref{gn}) and thus conclude another proof of Theorem 1. 
By (\ref{I-G}) we have \begin{multline*} \langle e_i,e_j\rangle = \sum_{k=0}^i\sum_{l=0}^jm_{ik}\overline{m}_{jl}\langle g_k,g_l\rangle =\sum_{k=0}^i\sum_{l=0}^jm_{ik}\overline{m}_{jl}(I-UU^*)(k,l)\\= (I+M)(I-UU^*)(I+M^*)(i,j). \end{multline*} On the other hand (\ref{MU}) yields $$(I+M)(I-UU^*)(I+M^*)=I+M+M^*.$$ In particular for $i\ge j$ we obtain $$\langle e_i,e_j\rangle =\begin{cases}m_{i,j}& i>j,\\ 1&i=j.\end{cases} $$ This way of proving Theorem 1 provides an algorithm for constructing a sequence of unit vectors $\{e_n\}_{n=0}^\infty$ for a given normalized Bessel sequence $\{g_n\}_{n=0}^\infty.$ Indeed, it suffices to determine an algorithm for proving Lemma \ref{three}. When $B$ is strictly positive definite the decomposition can be computed by the so-called Cholesky algorithm. When $B$ is not necessarily strictly positive definite this algorithm fails and we have to find a different way of constructing the decomposition. We will construct a sequence of indices $\{n_k\}_{k=1}^N$ in the following way. Let $n_1$ be the smallest number $i$ such that $b_{ii}>0.$ If no such number exists then $B=0.$ Assume $n_1,n_2,\ldots, n_k$ have been constructed in such a way that the determinant $$\Delta_k=\det (b_{n_in_j})_{i,j=1}^k> 0.$$ Then let $n_{k+1}$ be the smallest number greater than $n_k$ such that $$\det (b_{n_in_j})_{i,j=1}^{k+1}> 0.$$ If no such number exists the procedure terminates and $N=k.$ The matrix $B$ gives rise to a positive definite hermitian form on the space $F(\mathbb{N}_+)={\rm span}\,\{\delta_n\,|\,n\ge 1\}$ by the rule $$\langle x,y\rangle =\sum_{i,j=1}^\infty b(i,j)x_i\overline{y}_j.$$ \begin{lem} For any $n$ there exists $i$ with $n_{i}\le n<n_{i+1}$ and numbers $\lambda_{nk}$ for $1\le k\le i,$ such that \begin{equation}\label{equ} \left \langle \delta_n-\sum_{k=1}^i\lambda_{nk}\delta_{n_k},\delta_m\right \rangle =0, \quad m\ge 1. 
\end{equation} \end{lem} \begin{proof} If $n=n_i$ for some $i,$ then the statement follows as $\delta_n=\delta_{n_i}.$ Otherwise we have $n_i<n<n_{i+1}$ for some $i.$ Plugging $m=n_1,n_2,\ldots, n_i$ into (\ref{equ}) we obtain a system of linear equations $$\sum_{k=1}^i\lambda_{nk}b_{n_kn_l}=b_{nn_l},\quad l=1,2,\ldots, i.$$ The main determinant of this system is $\Delta_i.$ Therefore the system has a unique solution $\lambda_{n1},\ldots,\lambda_{ni}.$ By the definition of $n_{i+1}$ we have $$ \begin{vmatrix} b_{n_1n_1}&b_{n_1n_2}&\cdots & b_{n_1n_i}&b_{n_1n}\\ b_{n_2n_1}&b_{n_2n_2}&\cdots & b_{n_2n_i}&b_{n_2n}\\ \ \vdots &\ \vdots &\cdots &\ \vdots&\vdots\\ b_{n_in_1} &b_{n_in_2} & \cdots & b_{n_in_i}& b_{n_in}\\ b_{nn_1}& b_{nn_2} & \cdots & b_{nn_i}& b_{nn} \end{vmatrix} =0. $$ As $\Delta_i>0$ the first $i$ rows of this matrix are linearly independent. Therefore the last row of this matrix is a linear combination of the first $i$ rows. The coefficients must coincide with $\lambda_{n1},\ldots,\lambda_{ni}.$ In particular, considering the last entry of the rows gives $$\sum_{k=1}^i\lambda_{nk}b_{n_kn}=b_{nn}.$$ This is equivalent to (\ref{equ}) with $m=n.$ Since (\ref{equ}) is valid for $m=n,n_1,\ldots, n_i,$ we get $$\left\langle \delta_n-\sum_{k=1}^i\lambda_{nk}\delta_{n_k}, \delta_n-\sum_{k=1}^i\lambda_{nk}\delta_{n_k}\right\rangle =0.$$ By the Schwarz inequality this implies (\ref{equ}) for any $m.$ \end{proof} Define the sequence of vectors $\{\eta_i\}_{i=1}^N$ by the formula $$ \eta_1={1\over \sqrt{\Delta_1}}\delta_{n_1},\qquad\eta_i= {1\over \sqrt{\Delta_{i-1}\Delta_i}} \begin{vmatrix} b_{n_1n_1}&b_{n_1n_2}&\cdots & b_{n_1n_i}\\ b_{n_2n_1}&b_{n_2n_2}&\cdots & b_{n_2n_i}\\ \ \vdots &\ \vdots &\cdots &\ \vdots\\ b_{n_{i-1}n_1}&b_{n_{i-1}n_2}&\cdots &b_{n_{i-1}n_i}\\ \delta_{n_1} &\delta_{n_2} & \cdots & \delta_{n_i} \end{vmatrix}. $$ It can be checked easily that \begin{equation}\label{orth} \langle \eta_i,\eta_j\rangle =\delta_i^j. 
\end{equation} Obviously from the definition we have $$\eta_{i}={\sqrt{\Delta_{i-1}}\over \sqrt{\Delta_{i}}}\delta_{n_i}+ \sum_{k=1}^{i-1}\alpha_{ik}\delta_{n_k}$$ for some explicitly given coefficients $\alpha_{ik}.$ Therefore \begin{equation}\label{orthinv} \delta_{n_i}=\sum_{k=1}^i\beta_{ik}\eta_k, \end{equation} for some coefficients $\beta_{ik}.$ By (\ref{equ}) and (\ref{orthinv}) we get that for any $n$ there exist $i$ with $n_{i}\le n<n_{i+1}$ and numbers $v_{nk}$ for $1\le k\le i,$ such that $$ \left \langle \delta_n-\sum_{k=1}^iv_{nk}\eta_{k},\delta_m\right \rangle =0,\quad m\ge 1. $$ Setting $v_{nk}=0$ for $i<k\le n$ gives \begin{equation}\label{equ1} \left \langle \delta_n-\sum_{k=1}^nv_{nk}\eta_{k},\delta_m\right \rangle =0,\quad m\ge 1. \end{equation} Therefore by (\ref{equ1}) and (\ref{orth}) we have \begin{multline*} 0=\left\langle \sum_{k=1}^nv_{nk}\eta_k\,,\,\delta_m-\sum_{k=1}^nv_{mk}\eta_{k}\right\rangle= \langle \delta_n,\delta_m\rangle - \sum_{k=1}^{\min(n,m)}v_{nk}\overline{v}_{mk}\\ =b_{nm}-\sum_{k=1}^{\min(n,m)}v_{nk}\overline{v}_{mk}. \end{multline*} Therefore $B=VV^*$ where $$ V=\begin{pmatrix} v_{11} & 0 & 0&0 & 0&\ldots \\ v_{21} & v_{22} &0& 0&0 & \ldots \\ v_{31} & v_{32} & v_{33}&0 &0&\ldots\\ v_{41}&v_{42}&v_{43}& v_{44}&0&\ldots\\ \vdots&\vdots & \vdots &\vdots&\ddots&\ddots \end{pmatrix}. $$ By analyzing the entire construction we may conclude that the coefficients $v_{nk}$ can be computed in an algorithmic way. 
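For finite sections, the decomposition of Lemma \ref{three} can also be computed by a variant of the classical Cholesky algorithm that simply skips vanishing pivots: for a positive semidefinite matrix, a zero pivot forces the corresponding column of the Schur complement to vanish, so that column of $V$ may be left zero. The NumPy sketch below is our own and serves only as a finite-dimensional illustration; it is not the determinant-based construction given above.

```python
import numpy as np

def psd_cholesky(B, tol=1e-10):
    """Lower triangular V with B = V V* for a positive semidefinite matrix B."""
    B = np.asarray(B, dtype=complex)
    n = B.shape[0]
    V = np.zeros((n, n), dtype=complex)
    for j in range(n):
        d = (B[j, j] - np.vdot(V[j, :j], V[j, :j])).real   # remaining diagonal mass
        if d > tol:
            V[j, j] = np.sqrt(d)
            for i in range(j + 1, n):
                V[i, j] = (B[i, j] - V[i, :j] @ V[j, :j].conj()) / V[j, j]
        # else: the pivot vanishes and column j of V is left zero
    return V
```

For a strictly positive definite $B$ this reduces to the usual Cholesky factorization.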
\section{Equivalent sequences} Any sequence of unit vectors $\{e_n\}_{n=0}^\infty$ leads via the Kaczmarz algorithm to a normalized Bessel sequence $\{g_n\}_{n=0}^\infty.$ \begin{defi} Two normalized Bessel sequences $\{g_n\}_{n=0}^\infty$ and $\{g'_n\}_{n=0}^\infty$ will be called equivalent if there is a unitary operator $V$ such that $g'_n=Vg_n$ for $n\ge 0.$ Similarly two sequences $\{e_n\}_{n=0}^\infty$ and $\{e_n'\}_{n=0}^\infty$ of unit vectors will be called equivalent if there is a unitary operator $V$ such that $e'_n=Ve_n$ for $n\ge 0.$ \end{defi} It is easy to see that if the sequences $\{e_n\}_{n=0}^\infty$ and $\{e_n'\}_{n=0}^\infty$ are equivalent, so are the corresponding normalized Bessel sequences $\{g_n\}_{n=0}^\infty$ and $\{g_n'\}_{n=0}^\infty,$ with the same unitary operator $V.$ The converse is not true, as normalized Bessel sequences do not correspond to sequences of unit vectors in a one-to-one fashion. Nevertheless, by Lemma \ref{if}, the equivalence relation between normalized Bessel sequences can be described in terms of the Gram matrices of the corresponding sequences of unit vectors. Assume sequences of unit vectors $\{e_n\}_{n=0}^\infty$ and $\{e_n'\}_{n=0}^\infty$ are associated with normalized Bessel sequences $\{g_n\}_{n=0}^\infty$ and $\{g_n'\}_{n=0}^\infty,$ respectively. Let $M$ and $M'$ be the strictly lower triangular parts of the Gram matrices of the sequences $\{e_n\}_{n=0}^\infty$ and $\{e_n'\}_{n=0}^\infty,$ respectively. 
Let $U$ and $U'$ be the strictly lower triangular matrices defined by $$(I+U)(I+M)=(I+U')(I+M')=I.$$ By \cite{HS} the matrices $U$ and $U'$ are contractions on the Hilbert space $\ell^2(\mathbb{N}).$ \begin{cor} The sequences $\{g_n\}_{n=0}^\infty$ and $\{g_n'\}_{n=0}^\infty$ are equivalent if and only if $UU^*=U'U'^*.$ \end{cor} \begin{proof} By Lemma \ref{if} we get that $UU^*=U'U'^*$ if and only if $\langle g_i,g_j\rangle =\langle g'_i,g'_j\rangle$ for any $i,j\ge 0.$ Obviously the latter, along with the linear density of the vectors $\{g_n\}_{n=0}^\infty $ and $\{g'_n\}_{n=0}^\infty ,$ is equivalent to the existence of a unitary operator $V$ such that $g'_i=Vg_i.$ \end{proof} {\bf Remark.} It would be of interest to determine when two sequences of unit vectors $\{e_n\}_{n=0}^\infty$ and $\{e_n'\}_{n=0}^\infty$ lead to the same normalized Bessel sequence $\{g_n\}_{n=0}^\infty .$
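Lemma \ref{if} and the corollary above are easy to verify numerically on finite families: build the strictly lower triangular part $M$ of the Gram matrix of the $e_n$, invert $I+M$ to obtain $U$, and compare the Gram matrix of the $g_n$ with $I-UU^*$. The NumPy sketch below (our own, purely illustrative) does exactly that for a truncated family.

```python
import numpy as np

def kaczmarz_frame(E):
    """g_0 = e_0 and g_n = e_n - sum_{i<n} <e_n, e_i> g_i."""
    G = [np.array(E[0])]
    for n in range(1, len(E)):
        G.append(E[n] - sum(np.vdot(E[i], E[n]) * G[i] for i in range(n)))
    return G

def lemma_if_matrices(E):
    """Return (Gram matrix of the g_n, I - U U*) for the finite family E."""
    n = len(E)
    M = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(i):
            M[i, j] = np.vdot(E[j], E[i])            # <e_i, e_j>, strictly lower part
    U = np.linalg.inv(np.eye(n) + M) - np.eye(n)     # (I + U)(I + M) = I
    G = kaczmarz_frame(E)
    gram = np.array([[np.vdot(G[j], G[i]) for j in range(n)] for i in range(n)])
    return gram, np.eye(n) - U @ U.conj().T
```

On random unit vectors the two returned matrices coincide, which is the content of Lemma \ref{if} for finite truncations.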
\section{Introduction} The identification of \emph{time-varying} linear systems is a fundamental problem in many engineering applications. Concrete examples include radar and the identification of dispersive communication channels. In this paper, we study the problem of identifying a system $H$ whose response $y = Hx$ to the probing signal $x$ can be described by finitely many delays and Doppler shifts: \begin{align} y(t) = \sum_{j=1}^\S b_j \prsig(t-\bar \tau_j) e^{i2\pi \bar \nu_j t}. \label{eq:iorelintro} \end{align} Here, $b_j$ is the attenuation factor corresponding to the delay-Doppler pair $(\bar \tau_j, \bar \nu_j)$. In radar imaging, for example, this input-output relation corresponds to a scene consisting of $\S$ moving targets modeled by point scatterers, where the input $x$ is the probing signal transmitted by the radar, and the output $y$ is the superposition of the reflections of the probing signal by the point scatterers. The relative distances and velocities of the targets can be obtained from the delay-Doppler pairs $(\bar \tau_j, \bar \nu_j)$. In order to identify the system $H$ (e.g.,~to locate the targets in radar) we need to estimate the continuous time-frequency shifts $(\bar \tau_j, \bar \nu_j)$ and the corresponding attenuation factors $b_j$ from a single input-output measurement, i.e., from the response $y$ to some known and suitably selected probing signal $x$. There are, however, important constraints on the type of input-output measurements that can be performed in practice: The probing signal $x$ must be band-limited and approximately time-limited. Also, the response $y$ can be observed only over a finite time interval. For concreteness, we assume that we observe the response $y$ over an interval of length $T$ and that $x$ has bandwidth $B$ and is approximately supported on a time interval of length proportional to $T$. 
This time- and band-limitation determines the ``natural'' resolution of the system, i.e., the accuracy up to which the delay-Doppler pairs can be identified is $1/B$ and $1/\Tint$ in $\tau$- and $\nu$-directions, respectively. This resolution is achieved by a standard pulse-Doppler radar that performs digital matched filtering in order to detect the delay-Doppler pairs. From \eqref{eq:iorelintro}, it is evident that band- and approximate time-limitation of $x$ implies that $y$ is band- and approximately time-limited as well---provided that the delay-Doppler pairs are compactly supported. For example in radar, due to path loss and the finite velocity of the targets or objects in the scene, this is indeed the case \cite{strohmer_pseudodifferential_2006}. Throughout, we will therefore assume that $ (\bar \tau_j, \bar \nu_j) \in [-T/2,T/2]\times[-B/2,B/2] $. This is not a restrictive assumption, as the region in the $(\tau,\nu)$-plane where the delay-Doppler pairs are located can have area $BT \gg 1$, which is very large. In fact, for certain applications, it is reasonable to assume that the system is \emph{underspread}, i.e., that the delay-Doppler pairs lie in a region of area $\ll 1$ \cite{taubock_compressive_2010,bajwa_learning_2008,bajwa_identification_2011}. We do not need the underspread assumption in this paper. Since $y$ is band-limited and approximately time-limited, by the $2WT$-Theorem \cite{slepian_bandwidth_1976,durisi_sensitivity_2012}, it is essentially characterized by on the order of $BT$ coefficients. We therefore sample $y$ in the interval $[-T/2, T/2]$ at rate $1/B$, so as to collect $\L \defeq BT$ samples\footnote{For simplicity we assume throughout that $\L = BT$ is an odd integer.}. Furthermore, we choose $x$ partially periodic by taking its samples $x_\ell = x(\ell/B)$ to be $\L$-periodic for $3\L$ samples, and zero otherwise, so that $x$ is essentially supported on an interval of length $3T$. 
For readers familiar with wireless communication, we point out that the partial periodization of $x$ serves a similar purpose as the cyclic prefix used in OFDM systems. As detailed in Section \ref{sec:probform}, the corresponding samples $y_p \defeq y(p/B)$ in the interval $ p/B \in [-T/2, T/2]$ are given by \begin{align} y_p &= \sum_{j=1}^{\S} b_j [\mc F_{\nu_j} \mc T_{\tau_j} \vx ]_p , \quad p = -N,...,N, \quad N \defeq \frac{L-1}{2}, \label{eq:periorel} \end{align} where \begin{align} [\mc T_{\tau} \vx ]_p \defeq \frac{1}{L} \sum_{k=-N}^{N} \left[ \left( \sum_{\ell=-N}^{N} x_{\ell} e^{- i2\pi \frac{\ell k}{L} } \right) e^{-i2\pi k \tau } \right] e^{i2\pi \frac{p k}{L} } \quad \text{and} \quad [\mc F_{\nu} \vx ]_p \defeq x_p e^{i2\pi p \nu }. \label{eq:deftimefreqshifts} \end{align} Here, we defined\footnote{ To avoid ambiguity, from here onwards we refer to $(\bar \tau_j, \bar \nu_j)$ as delay-Doppler pair and to $(\tau_j, \nu_j)$ as time-frequency shift. } the time-shifts $\tau_j \defeq \bar \tau_j/T$ and frequency-shifts $\nu_j \defeq \bar \nu_j/B$. Since $(\bar \tau_j, \bar \nu_j) \in [-T/2,\allowbreak T/2] \allowbreak \times[-B/2,B/2]$ we have $(\tau_j, \nu_j) \in [-1/2,1/2]^2$. Since $\mc T_{\tau}\vx$ and $\mc F_{\nu}\vx$ are $1$-periodic in $\tau$ and $\nu$, we can assume in the remainder of the paper that $(\tau_j, \nu_j) \in [0,1]^2$. The operators $\mc T_{\tau}$ and $\mc F_{\nu}$ can be interpreted as fractional time and frequency shift operators in $\complexset^\L$. If the $(\tau_j,\nu_j)$ lie on a $(1/L,1/L)$ grid, the operators $\mc F_{\nu}$ and $\mc T_{\tau}$ reduce to the ``natural'' time-frequency shift operators in $\complexset^\L$, i.e., $[\mc T_{\tau} \vx ]_p = x_{p - \tau \L}$ and $[\mc F_{\nu} \vx ]_p = x_p e^{i2\pi p \frac{\nu\L}{\L} }$. 
The definition of a time shift in \eqref{eq:deftimefreqshifts}---taking the Fourier transform, modulating the frequency, and taking the inverse Fourier transform---is a very natural way to define a \emph{continuous} time-shift $\tau_j \in [0,1]$ of a \emph{discrete} vector $\vx = \transp{[x_0,...,x_{\L-1}]}$. Finally note that to obtain \eqref{eq:periorel} (see Section \ref{sec:probform}) from \eqref{eq:iorelintro}, we approximate a periodic sinc function with a finite sum of sinc functions (this is where the partial periodization of $x$ becomes relevant). Thus \eqref{eq:periorel} does not hold exactly if we take the probing signal to be essentially time-limited. However, in Section \ref{sec:probform} we show that the incurred relative error decays as $1/\sqrt{\L}$ and is therefore negligible for large $\L$. The numerical results in Section \ref{sec:numres} indeed confirm that this error is negligible. If we took $x$ to be $T$-periodic on $\reals$, \eqref{eq:periorel} would become exact, but at the cost of $x$ not being time-limited. The problem of identifying the system $H$ with input-output relation \eqref{eq:iorelintro} under the constraints that the probing signal $x$ is band-limited and the response to the probing signal $y=Hx$ is observed on a finite time interval now reduces to the estimation of the triplets $(b_j, \tau_j, \nu_j)$ from the samples in \eqref{eq:periorel}. Motivated by this connection to the continuous system model, in this paper, we consider the problem of recovering the attenuation factors $b_j$ and the corresponding time-frequency shifts $(\tau_j, \nu_j )\in [0,1]^2, j=1,...,\S$, from the samples $y_p, p=-N,...,N$, in~\eqref{eq:periorel}. We call this the super-resolution radar problem, as recovering the exact time-frequency shifts $(\tau_j,\nu_j)$ ``breaks'' the natural resolution limit of $(1/B,1/T)$ achieved by standard pulse-Doppler radar. 
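For concreteness, the fractional shift operators in \eqref{eq:deftimefreqshifts} can be implemented directly from their definitions; the NumPy sketch below (ours, for illustration only) evaluates $\mc T_\tau$ and $\mc F_\nu$ on a vector of samples indexed by $p=-N,\ldots,N$. For $\tau=m/L$ with integer $m$, $\mc T_\tau$ reduces to a circular shift, and both operators preserve the $\ell^2$ norm.

```python
import numpy as np

def frac_time_shift(x, tau):
    """[T_tau x]_p: x holds samples x_p for p = -N, ..., N, with L = 2N + 1 odd."""
    L = len(x)
    N = (L - 1) // 2
    idx = np.arange(-N, N + 1)
    # DFT on symmetric indices: X_k = sum_l x_l e^{-i 2 pi l k / L}
    X = np.array([np.sum(x * np.exp(-2j * np.pi * idx * k / L)) for k in idx])
    # modulate by e^{-i 2 pi k tau} and invert the DFT
    return np.array([np.sum(X * np.exp(-2j * np.pi * idx * tau)
                            * np.exp(2j * np.pi * p * idx / L)) for p in idx]) / L

def frac_freq_shift(x, nu):
    """[F_nu x]_p = x_p e^{i 2 pi p nu}."""
    L = len(x)
    N = (L - 1) // 2
    return x * np.exp(2j * np.pi * np.arange(-N, N + 1) * nu)
```

With these helpers, synthesizing the samples $y_p$ of \eqref{eq:periorel} for a given set of triplets $(b_j,\tau_j,\nu_j)$ is a one-line superposition.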
Alternatively, one can view the super-resolution radar problem as that of recovering a signal that is $\S$-sparse in the continuous dictionary of time-frequency shifts of an $\L$-periodic sequence $x_\ell$. In order to see this, and to better understand the super-resolution radar problem, it is instructive to consider two special cases. \subsection{Time-frequency shifts on a grid \label{sec:ongrid}} If the delay-Doppler pairs $(\bar \tau_j, \bar \nu_j)$ lie on a $(\frac{1}{B}, \frac{1}{T})$ grid or equivalently if the time-frequency shifts $(\tau_j, \nu_j)$ lie on a $(\frac{1}{L},\frac{1}{L})$ grid, the super-resolution radar problem reduces to a sparse signal recovery problem with a Gabor measurement matrix. To see this, note that in this case $\tau_j \L$ and $\nu_j\L$ are integers in $\{0,...,\L-1\}$, and \eqref{eq:periorel} reduces to \begin{align} y_p &= \sum_{j=1}^{\S} b_j x_{p - \tau_j \L} e^{i2\pi \frac{ (\nu_j \L) p}{\L} }, \quad p = -N,...,N.\label{eq:periorel_gabor} \end{align} Equation \eqref{eq:periorel_gabor} can be written in matrix-vector form \[ \vy = \vect{G} \vs. \] Here, $[\vy]_p \defeq y_p$, $\vect{G} \in \complexset^{\L \times \L^2}$ is the Gabor matrix with window $\vx$, where the entry in the $p$th row and $(\tau_j\L, \nu_j\L)$-th column is $x_{p - \tau_j \L} e^{i2\pi \frac{(\nu_j \L) p}{\L} }$, and $\vs \in \complexset^{\L^2}$ is a sparse vector where the $j$-th non-zero entry is given by $b_j$ and is indexed by $(\tau_j\L, \nu_j\L)$. Thus, the recovery of the triplets $(b_j,\tau_j,\nu_j)$ amounts to recovering the $\S$-sparse vector $\vs \in \complexset^{\L^2}$ from the measurement vector $\vy \in \complexset^\L$. This is a sparse signal recovery problem with a Gabor measurement matrix. A---by now standard---recovery approach is to solve a simple convex $\ell_1$-norm-minimization program. 
From \cite[Thm.~5.1]{krahmer_suprema_2014} we know that, provided the $x_\ell$ are i.i.d.~sub-Gaussian random variables, and provided that $S\le c L/(\log L)^4$ for a sufficiently small numerical constant $c$, with high probability, all $\S$-sparse vectors $\vs$ can be recovered from $\vy$ via $\ell_1$-minimization. Note that the result \cite[Thm.~5.1]{krahmer_suprema_2014} only applies to the Gabor matrix $\vect{G}$ and therefore does not apply to the super-resolution problem where the ``columns'' $\mc F_{\nu} \mc T_{\tau} \vx$ are highly correlated. \subsection{Only time or only frequency shifts \label{sec:redsupres}} Next, we consider the case of only time or only frequency shifts, and show that in both cases recovery of the $(b_j,\tau_j)$ and the $(b_j,\nu_j)$ is equivalent to the recovery of a weighted superposition of spikes from low-frequency samples. Specifically, if $\tau_j = 0$ for all $j$, \eqref{eq:periorel} reduces to \begin{align} y_p = x_p \sum_{j=1}^{\S} b_j e^{i2\pi p \nu_j }, \quad p = -N,...,N. \label{eq:supres} \end{align} The $y_p$ above are samples of a mixture of $\S$ complex sinusoids, and estimation of the $(b_j,\nu_j)$ corresponds to determining the magnitudes and the frequency components of these sinusoids. Estimation of the $(b_j,\nu_j)$ is known as a line spectral estimation problem, and can be solved using approaches such as Prony's method \cite[Ch.~2]{gershman_space-time_2005}. Recently, an alternative approach for solving this problem has been proposed: specifically, in~\cite{candes_towards_2014} it is shown that exact recovery of the $(b_j,\nu_j)$ is possible by solving a convex total-variation norm minimization problem. This result holds provided that the minimum separation between any two $\nu_j$ is larger than $2/N$. An analogous situation arises when there are only time shifts ($\nu_j = 0$ for all $j$), as taking the discrete Fourier transform of $y_p$ yields a relation exactly of the form \eqref{eq:supres}. 
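Prony's method, mentioned above, can be sketched in a few lines: from noiseless samples $c_p=\sum_{j=1}^\S b_j e^{i2\pi p\nu_j}$ one solves a linear-prediction system and reads off the frequencies as the angles of the roots of the prediction polynomial. The toy implementation below is our own (it indexes samples by $p=0,\ldots,P-1$ rather than $p=-N,\ldots,N$, which only changes the $b_j$ by unimodular factors) and makes no attempt at noise robustness.

```python
import numpy as np

def prony_freqs(c, S):
    """Frequencies nu_j in [0, 1) from c_p = sum_{j=1}^S b_j e^{i 2 pi p nu_j}."""
    c = np.asarray(c, dtype=complex)
    P = len(c)
    # linear prediction: c_p = -(a_1 c_{p-1} + ... + a_S c_{p-S}) for p >= S
    rows = np.array([c[p - S:p][::-1] for p in range(S, P)])
    a, *_ = np.linalg.lstsq(rows, -c[S:], rcond=None)
    # the z_j = e^{i 2 pi nu_j} are the roots of z^S + a_1 z^{S-1} + ... + a_S
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.sort(np.angle(roots) / (2 * np.pi) % 1.0)
```

Once the frequencies are known, the amplitudes $b_j$ follow from a Vandermonde least-squares fit.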
\subsection{Main contribution} In this paper, we consider a random probing signal by taking the $x_\ell$ in \eqref{eq:periorel} to be i.i.d. Gaussian (or sub-Gaussian) random variables. We show that with high probability, the triplets $(b_j,\tau_j,\nu_j)$ can be recovered perfectly from the $L$ samples $y_p$ by essentially solving a convex program. This holds provided that two conditions are satisfied: \begin{itemize} \item \emph{Minimum separation condition:} We assume the time-frequency shifts $(\tau_j,\nu_j) \in [0,1]^2, j = 1,...,\S$, satisfy the minimum separation condition \begin{align} \max(|\tau_j - \tau_{j'}|, |\nu_j - \nu_{j'}| ) \geq \frac{2.38}{N} , \text{ for all } j\neq j', \label{eq:minsepcond} \end{align} where $|\tau_j - \tau_{j'}|$ is the wrap-around distance on the unit circle. For example, $|3/4-1/2|=1/4$, while $|5/6-1/6|=1/3$ rather than $2/3$. Note that the time-frequency shifts need not be separated in both time \emph{and} frequency, e.g., \eqref{eq:minsepcond} can hold even when $\tau_j = \tau_{j'}$ for some $j\neq j'$. \item \emph{Sparsity:} We also assume that the number of time-frequency shifts $S$ obeys \begin{align*} S\le c\frac{L}{(\log L)^3}, \end{align*} where $c$ is a numerical constant. \end{itemize} This result is essentially optimal in terms of the allowed sparsity level, as the number $\S$ of unknowns can be linear---up to a log-factor---in the number of observations $L$. Even when we are given the time-frequency shifts $(\tau_j,\nu_j)$, we can only hope to recover the corresponding attenuation factors $b_j, j=1,...,\S$, by solving \eqref{eq:periorel}, provided that $\S \leq \L$. We note that some form of separation between the time-frequency shifts is necessary for stable recovery. To be specific, we consider the simpler problem of line spectral estimation (cf.~Section \ref{sec:redsupres}) that is obtained from our setup by setting $\tau_j=0$ for all $j$. 
Clearly, any condition necessary for the line spectral estimation problem is also necessary for the super-resolution radar problem. Consider an interval of the $\nu$-axis of length $\frac{2\S'}{\L}$. If there are more than $\S'$ frequencies $\nu_j$ in this interval, then the problem of recovering the $(b_j,\nu_j)$ becomes extremely ill-posed when $\S'$ is large \cite[Thm.~1.1]{donoho1992superresolution}, \cite{morgenshtern2014stable}, \cite[Sec.~1.7]{candes_towards_2014}, \cite{beurling1989collected}. Hence, in the presence of even a tiny amount of noise, stable recovery is not possible. The condition in~\eqref{eq:minsepcond} allows us to have $0.42\,\S'$ time-frequency shifts in an interval of length $\frac{2\S'}{\L}$, an optimal number of frequencies up to the constant $0.42$. We emphasize that while some sort of separation between the time-frequency shifts is necessary, the exact form of separation required in \eqref{eq:minsepcond} may not be necessary for stable recovery and less restrictive conditions may suffice. Indeed, in the simpler problem of line spectral estimation (i.e., $\tau_j=0$ for all $j$), Donoho \cite{donoho1992superresolution} showed that stable super-resolution is possible via an exhaustive search algorithm even when condition~\eqref{eq:minsepcond} is violated locally, as long as every interval of the $\nu$-axis of length $\frac{2\S'}{\L}$ contains fewer than $\S'/2$ frequencies $\nu_j$ and $\S'$ is small (in practice, think of $\S'\lesssim 10$). The exhaustive search algorithm is infeasible in practice, and an important open question in the theory of line spectral estimation is to develop a feasible algorithm that achieves the stability promised in \cite{donoho1992superresolution}.
In the special case when the $b_j$ are real and positive, stable recovery can be achieved by convex optimization \cite{morgenshtern2014stable,Caratheodory_ueber_1911,fuchs_sparsity_2005}; see also \cite{schiebinger_superresolution_2015} for recent results on more general positive combinations of waveforms. Translated to the continuous setup, our result implies that with high probability we can identify the triplets $(b_j, \bar \tau_j,\bar \nu_j)$ perfectly provided that \begin{equation} \label{eq:mindist} |\bar \tau_j -\bar \tau_{j'}| \geq \frac{4.77}{B} \,\text{ or } \, |\bar \nu_j - \bar \nu_{j'}| \geq \frac{4.77}{T}, \quad \text{ for all } j\neq j' \end{equation} and $S\le c\frac{BT}{\left(\log (BT)\right)^3}$. Since we can exactly identify the delay-Doppler pairs $(\bar \tau_j,\bar \nu_j)$, our result offers a significant improvement in resolution over conventional radar techniques. Specifically, with a standard pulse-Doppler radar that samples the received signal and performs digital matched filtering in order to detect the targets, the delay-Doppler shifts $(\bar \tau_j,\bar \nu_j)$ can only be estimated up to an uncertainty of about $(1/B,1/T)$. We hasten to add that in the radar literature, the term super-resolution is often used for the ability to resolve very close targets, specifically even closer than the Rayleigh resolution limit \cite{quinquis_radar_2004} that is proportional to $1/B$ and $1/T$ for delay and Doppler resolution, respectively. Our work permits identification of \emph{each} target with precision much higher than $1/B$ and $1/T$ as long as other targets are not too close, specifically other targets should be separated by a constant multiple of the Rayleigh resolution limit (cf. \eqref{eq:mindist}).
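Both \eqref{eq:minsepcond} and \eqref{eq:mindist} are phrased in terms of the wrap-around distance, and checking them for a given set of shifts is straightforward; a minimal Python sketch (helper names ours):

```python
def wrap_dist(a, b):
    """Wrap-around distance on the unit circle between scalars a and b."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def min_sep_holds(shifts, N):
    """Check the minimum separation condition: for every pair of distinct
    time-frequency shifts (tau, nu), the larger of the two wrap-around
    distances must be at least 2.38/N."""
    return all(
        max(wrap_dist(t1, t2), wrap_dist(n1, n2)) >= 2.38 / N
        for i, (t1, n1) in enumerate(shifts)
        for (t2, n2) in shifts[i + 1:]
    )

assert wrap_dist(3/4, 1/2) == 1/4
assert abs(wrap_dist(5/6, 1/6) - 1/3) < 1e-12
# Shifts may share a tau as long as the nu's are separated:
assert min_sep_holds([(0.2, 0.1), (0.2, 0.5)], N=100)
assert not min_sep_holds([(0.2, 0.1), (0.2, 0.11)], N=100)
```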
Finally, recall that $(\tau_j, \nu_j) \in [0,1]^2$ translates to $(\bar \tau_j,\bar \nu_j) \in [-T/2, T/2] \times [-B/2,B/2]$, i.e., the $(\bar \tau_j,\bar \nu_j)$ can lie in a rectangle of area $\L=BT \gg 1$; in other words, the system $H$ does not need to be underspread\footnote{A system is called underspread if its spreading function is supported on a rectangle of area much less than one.}. The ability to handle systems that are \emph{overspread} is important in radar applications. Here, we might need to resolve targets with large relative distances and relative velocities, resulting in delay-Doppler pairs $(\bar \tau_j,\bar \nu_j)$ that lie in a region of area larger than $1$ in the time-frequency plane. \subsection{Notation} We use lowercase boldface letters to denote (column) vectors and uppercase boldface letters to designate matrices. The superscripts $\transp{}$ and $\herm{}$ stand for transposition and Hermitian transposition, respectively. For the vector $\vx$, $x_q$ and $[\vx]_q$ denote its $q$-th entry, $\norm[2]{\vx}$ its $\ell_2$-norm and $\norm[\infty]{\vx} = \max_q |x_q|$ its largest entry. For the matrix $\mA$, $[\mA]_{ij}$ designates the entry in its $i$-th row and $j$-th column, $\norm[\opnormss]{\mA} \defeq\;$ $\max_{\norm[2]{\vv} = 1 } \norm[2]{\mA \vv}$ its spectral norm, $\norm[F]{\mA} \defeq (\sum_{i,j} |[\mA]_{ij}|^2 )^{1/2}$ its Frobenius norm, and $\mA \succeq 0$ signifies that $\mA$ is positive semidefinite. The identity matrix is denoted by $\vect{I}$. For convenience, we will frequently use a two-dimensional index for vectors and matrices, e.g., we write $[\vg]_{(k,\ell)}, k,\ell=-N,...,N$ for $ \vg = \transp{[g_{(-N,-N)}, g_{(-N,-N+1)}, ...,g_{(-N,N)},g_{(-N+1,-N)},...,g_{(N,N)}]}. $ For a complex number $b$ with polar decomposition $b = |b|e^{i2\pi \phi}$, $\mathrm{sign}(b) \defeq e^{i2\pi \phi}$. Similarly, for a vector $\vect{b}$, $[\mathrm{sign}(\vect{b})]_k \defeq \mathrm{sign}([\vect{b}]_k)$.
For the set $\T$, $|\T|$ designates its cardinality and $\comp{\T}$ is its complement. The sinc-function is denoted as $\sinc(t) = \frac{\sin(\pi t)}{\pi t}$. For vectors $\vr, \vr' \in [0,1]^2$, $\infdist{\vr - \vr'}=\max(|r_1-r'_1|,|r_2-r'_2|)$. Here, $|x-y|$ is the wrap-around distance on the unit circle between two scalars $x$ and $y$. For example, $|3/4-1/2|=1/4$ but $|5/6-1/6|=1/3\neq 2/3$. Throughout, $\vr$ denotes a 2-dimensional vector with entries $\tau$ and $\nu$, i.e., $\vr = \transp{[\tau,\nu]}$. Moreover $c,\tilde c, c', c_1,c_2,...$ are numerical constants that can take on different values at different occurrences. Finally, $\mathcal N(\mu, \sigma^2)$ is the Gaussian distribution with mean $\mu$ and variance $\sigma^2$. \section{Recovery via convex optimization \label{sec:recovery}} In this section we present our approach to the recovery of the parameters $(b_j, \tau_j,\nu_j)$ from the samples $y_p$ in \eqref{eq:periorel}. Before we proceed we note that \eqref{eq:periorel} can be rewritten as (see Appendix \ref{app:discspfunc} for a detailed derivation) \begin{align} y_p = \sum_{j=1}^{\S} b_j \sum_{k,\ell = -N}^N D_{N} \! \left( \frac{\ell}{\L} - \tau_j \right) D_{N} \! \left( \frac{k}{\L} - \nu_j \right) x_{p- \ell} e^{i2\pi \frac{ pk}{\L}}, \quad p = -N,...,N, \label{eq:iowithdirchkernel1} \end{align} where \begin{align} D_{N}(t) \defeq \frac{1}{L} \sum_{k=-N}^{N} e^{i2\pi t k} \label{eq:defDirichlet} \end{align} is the Dirichlet kernel. \newcommand{\mathcal A}{\mathcal A} We define atoms $\va \in \complexset^{\L^2}$ as \begin{align} [\va(\vr)]_{(k,\ell)} = D_{N} \! \left( \frac{\ell}{\L} - \tau \right) D_{N} \! \left( \frac{k }{\L} - \nu \right), \quad \vr = \transp{[\tau,\nu]},\quad k,\ell = -N,...,N. 
\label{eq:defatoms} \end{align} Rewriting \eqref{eq:iowithdirchkernel1} in matrix-vector form yields \[ \vy = \vect{G} \vect{z}, \quad \vect{z} = \sum_{j=1}^{\S} |b_j| e^{i2\pi \phi_j} \va(\vr_j), \quad \vr_j = \transp{[\tau_j,\nu_j]}, \] where $b_j = |b_j|e^{i2\pi \phi_j}$ is the polar decomposition of $b_j$ and $\vect{G} \in \complexset^{\L \times \L^2}$ is the Gabor matrix defined by \begin{align} [\vect{G}]_{p, (k,\ell)} \defeq x_{p- \ell} e^{i2\pi \frac{k p}{\L}}, \quad k,\ell, p = -N,...,N. \label{eq:defgabormtx} \end{align} The signal $\mathbf{z}$ is a sparse linear combination of time and frequency shifted versions of the atoms $\va(\vr)$. A regularizer that promotes such a sparse linear combination is the atomic norm induced by these signals~\cite{chandrasekaran_convex_2012}. The atoms in the set $\mathcal A \defeq \{ e^{i2\pi \phi} \va(\vr), \vr \in [0,1]^2,\phi \in [0,1]\}$ are the building blocks of the signal~$\vect{z}$. The atomic norm $\norm[\mathcal A]{\cdot}$ is defined as \[ \norm[\mathcal A]{\vect{z}} = \inf \left\{ t > 0\colon \vect{z} \in t\, \mathrm{conv}(\mathcal A) \right\} = \inf_{b_j \in \complexset, \vr_j \in [0,1]^2} \left\{ \sum_j |b_j| \colon \vect{z} = \sum_j b_j \va(\vr_j) \right\}, \] where $\mathrm{conv}(\mathcal A)$ denotes the convex hull of the set $\mathcal A$. The atomic norm can enforce sparsity in $\mathcal A$ because low-dimensional faces of $\mathrm{conv}(\mathcal A)$ correspond to signals involving only a few atoms \cite{chandrasekaran_convex_2012,tang_compressed_2013}. A natural algorithm for estimating $\vect{z}$ is the atomic norm minimization problem \cite{chandrasekaran_convex_2012} \newcommand{\mathrm{AN}}{\mathrm{AN}} \begin{align} \mathrm{AN}(\vy) \colon \;\; \underset{\minlet{\vect{z}} }{\text{minimize}} \, \norm[\mathcal A]{\minlet{\vect{z}} } \; \text{ subject to } \; \vy = \vect{G} \minlet{\vect{z}}. 
\label{eq:primal} \end{align} Once we obtain $\vect{z}$, the recovery of the time-frequency shifts is a 2D line spectral estimation problem that can be solved with standard approaches such as Prony's method, see e.g.~\cite[Ch.~2]{gershman_space-time_2005}. In Section \ref{sec:estfromdual}, we will present a more direct approach for recovering the time-frequency shifts $\mathbf{r}_j$. When the time-frequency shifts $\vr_j$ are identified, the coefficients $b_j$ can be obtained by solving the linear system of equations \[ \vy = \sum_{j=1}^\S b_j \vect{G} \va(\vr_j). \] Computation of the atomic norm involves taking the infimum over infinitely many parameters and may appear to be daunting. For the case of only time or only frequency shifts (cf.~Section \ref{sec:redsupres}), the atomic norm can be characterized in terms of linear matrix inequalities \cite[Prop.~2.1]{tang_compressed_2013}; this allows us to formulate the atomic norm minimization program as a semidefinite program that can be solved efficiently. The characterization \cite[Prop.~2.1]{tang_compressed_2013} relies on a classical Vandermonde decomposition lemma for Toeplitz matrices by Carath\'eodory and Fej\'er. While this lemma generalizes to higher dimensions \cite[Thm.~1]{yang_vandermonde_2015}, this generalization fundamentally comes with a rank constraint on the corresponding Toeplitz matrix. This appears to prohibit a characterization of the atomic norm paralleling that of \cite[Prop.~2.1]{tang_compressed_2013} which explains why no semidefinite programming formulation of the atomic norm minimization problem \eqref{eq:primal} is known, to the best of our knowledge. Nevertheless, based on \cite[Thm.~1]{yang_vandermonde_2015}, one can obtain a semidefinite programming \emph{relaxation} of $\mathrm{AN}(\vy)$. 
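As a numerical sanity check of the objects just defined, one can verify the identity $\vect{G} \va(\vr) = \mc F_\nu \mc T_\tau \vx$ for arbitrary (fractional) $\vr = \transp{[\tau,\nu]}$, with the fractional time-frequency shift written out as in \eqref{eq:perioreldisc}; a small Python sketch (names ours, small $L$ for speed):

```python
import numpy as np

L = 7; N = (L - 1) // 2
rng = np.random.default_rng(0)
x = rng.standard_normal(L) / np.sqrt(L)   # samples x_{-N}, ..., x_N
idx = np.arange(-N, N + 1)                # p, k, l all range over -N, ..., N

def D(t):
    """Dirichlet kernel D_N(t) = (1/L) sum_{k=-N}^{N} exp(2j*pi*t*k), vectorized."""
    return np.exp(2j * np.pi * np.outer(t, idx)).sum(axis=1) / L

def atom(tau, nu):
    """[a(r)]_{(k,l)} = D_N(l/L - tau) * D_N(k/L - nu), index k outer, l inner."""
    return np.kron(D(idx / L - nu), D(idx / L - tau))

# Gabor matrix [G]_{p,(k,l)} = x_{p-l} exp(2j*pi*k*p/L), x extended L-periodically.
G = np.empty((L, L * L), dtype=complex)
for c, (k, l) in enumerate([(k, l) for k in idx for l in idx]):
    G[:, c] = x[(idx - l + N) % L] * np.exp(2j * np.pi * k * idx / L)

def ftshift(tau, nu):
    """Fractional time-frequency shift of x: [F_nu T_tau x]_p =
    exp(2j*pi*p*nu) * (1/L) * sum_{l,k} exp(-2j*pi*k*tau) exp(2j*pi*(p-l)*k/L) x_l."""
    out = np.empty(L, dtype=complex)
    for i, p in enumerate(idx):
        s = sum(np.exp(-2j * np.pi * k * tau)
                * np.exp(2j * np.pi * (p - l) * k / L) * x[l + N]
                for l in idx for k in idx)
        out[i] = np.exp(2j * np.pi * p * nu) * s / L
    return out

tau, nu = 0.31, 0.77
assert np.allclose(G @ atom(tau, nu), ftshift(tau, nu))
```

This identity is what makes each ``column'' $\vect{G}\va(\vr)$ of the continuous dictionary a time-frequency shifted copy of the probing sequence.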
Instead of taking that route, and explicitly stating the corresponding semidefinite program, we show in Section \ref{sec:estfromdual} that the time-frequency shifts $\vr_j$ can be identified directly from the dual solution of the atomic norm minimization problem $\mathrm{AN}(\vy)$ in \eqref{eq:primal}, and propose a semidefinite programming \emph{relaxation} that allows us to find a solution of the dual efficiently. \section{Main result \label{sec:prefres}} Our main result, stated below, provides conditions guaranteeing that the solution to $\mathrm{AN}(\vy)$ in \eqref{eq:primal} is $\vect{z}$ (perfect recovery). As explained in Section \ref{sec:recovery}, from $\vect{z}$ we can obtain the triplets $(b_j, \tau_j,\nu_j)$ easily. \begin{theorem} Assume that the samples of the probing signal $x_\ell, \ell =-N,...,N$, are i.i.d.~$\mathcal N(0,1/\L)$ random variables, $\L=2N+1$. Let $\mathbf{y}\in\mathbb{C}^L$, with $L \geq 1024$, contain the samples of the output signal obeying the input-output relation \eqref{eq:periorel}, i.e., \[ \vy = \vect{G} \vect{z}, \quad \vect{z} = \sum_{\vr_j \in \T} b_j \va(\vr_j), \] where $\mathbf{G}$ is the Gabor matrix of time-frequency shifts of the input sequence $x_\ell$ defined in \eqref{eq:defgabormtx}. Assume that the $\mathrm{sign}(b_j)$ are i.i.d.~uniform on $\{-1,1\}$ and that the set of time-frequency shifts $\T = \{\vr_1, \vr_2,...,\vr_\S \} \allowbreak \subset [0,1]^2$ obeys the minimum separation condition \begin{equation} \max(|\tau_j - \tau_{j'}|, |\nu_j - \nu_{j'}| ) \geq \frac{2.38}{N} \text{ for all } [\tau_j, \nu_j], [\tau_{j'}, \nu_{j'}] \in \T \text{ with } j \neq j'. \label{eq:minsepcond1} \end{equation} Furthermore, choose $\delta > 0$ and assume \begin{align*} S\le c\frac{L}{(\log(L^6/\delta))^3}, \end{align*} where $c$ is a numerical constant. Then, with probability at least $1-\delta$, $\vect{z}$ is the unique minimizer of $\mathrm{AN}(\vy)$ in \eqref{eq:primal}. 
\label{thm:mainres} \end{theorem} Recall that the complex-valued coefficients $b_j$ in \eqref{eq:iorelintro} in the radar model describe the attenuation factors. Therefore, it is natural to assume that the phases of different $b_j$ are independent of each other and are uniformly distributed on the unit circle of the complex plane. Indeed, in standard models in wireless communication and radar~\cite{bello_characterization_1963}, the $b_j$ are assumed to be complex Gaussian. To keep the proof simple, in Theorem~\ref{thm:mainres} we assume that the $b_j$ are real-valued. The assumption that the coefficients $b_j$ have random sign is the real-valued analogue of the random phase assumption discussed above. Theorem~\ref{thm:mainres} continues to hold for complex-valued $b_j$ (only the constant $2.38$ in \eqref{eq:minsepcond1} changes slightly). While this random sign assumption is natural for many applications, we believe that it is not necessary for our result to hold. Finally, we would like to point out that Theorem \ref{thm:mainres} continues to hold for sub-Gaussian sequences $x_\ell$ with trivial modifications to our proof. The proof of Theorem \ref{thm:mainres} is based on analyzing the dual of $\mathrm{AN}(\vy)$. We will prove that the recovery is perfect by constructing an appropriate dual certificate. The existence of this dual certificate guarantees that the solution to $\mathrm{AN}(\vy)$ in \eqref{eq:primal} is $\vect{z}$. This is a standard approach: in the compressed sensing literature, for example, the existence of a related dual certificate guarantees that the solution to $\ell_1$-minimization is exact \cite{candes_robust_2006}.
Specifically, the dual of $\mathrm{AN}(\vy)$ in \eqref{eq:primal} is \cite[Sec.~5.1.16]{boyd_convex_2004} \begin{align} \underset{\vect{q}}{\text{maximize}} \; \Re \innerprod{\vect{q}}{\vy} \text{ subject to } \norm[\mathcal A^\ast]{\herm{\vect{G}} \vect{q}} \leq 1, \label{eq:dual} \end{align} where $\vect{q} = \transp{[ q_{-N}, ..., q_{N}]}$ and \begin{align*} \norm[\mathcal A^\ast]{\vv} = \sup_{\norm[\mathcal A]{\vect{z}} \leq 1} \Re \innerprod{\vv}{\vect{z}} = \sup_{\vr \in [0,1]^2} \left| \innerprod{\vv}{\va(\vr)} \right| \end{align*} is the dual norm. Note that the constraint of the dual \eqref{eq:dual} can be rewritten as: \begin{align} \norm[\mathcal A^\ast]{\herm{\vect{G}} \vect{q}} = \sup_{\vr \in [0,1]^2} \left| \innerprod{ \vect{q}}{\vect{G} \va(\vr)} \right| = \sup_{[\tau,\nu] \in [0,1]^2} \left| \innerprod{\vect{q}}{ \mc F_\nu \mc T_\tau \vx } \right| \leq 1, \label{eq:constrdual} \end{align} where we used $\vect{G} \va(\vr) = \mc F_\nu \mc T_\tau \vx$. By definition of the time and frequency shifts in \eqref{eq:deftimefreqshifts} it is seen that $\innerprod{\vect{q}}{ \mc F_\nu \mc T_\tau \vx }$ is a 2D trigonometric polynomial (in $\tau,\nu$, see \eqref{eq:dualpolyinprop} for its specific form). The constraint in the dual is therefore equivalent to the requirement that the absolute value of a specific 2D trigonometric polynomial is bounded by one. A sufficient condition for the success of atomic norm minimization is given by the existence of a certain dual certificate of the form $\innerprod{\vect{q}}{ \mc F_\nu \mc T_\tau \vx }$. This is formalized by Proposition \ref{prop:dualmin} below and is a consequence of strong duality. Strong duality is implied by Slater's conditions being satisfied \cite[Sec.~5.2.3]{boyd_convex_2004} (the primal problem only has equality constraints). \begin{proposition} Let $\vy=\vect{G}\vect{z}$ with $\vect{z} = \sum_{\vr_j \in \T} b_j \va(\vr_j)$. 
If there exists a dual polynomial $Q(\vr) = \innerprod{\vect{q}}{\mc F_{\nu} \mc T_{\tau} \vx }$ with complex coefficients $\vect{q} = \transp{[q_{-N}, ..., q_{N}]}$ such that \begin{align} Q(\vr_j) = \mathrm{sign}(b_j), \text{ for all } \vr_j \in \T, \text{ and } |Q(\vr)| < 1 \text{ for all } \vr \in [0,1]^2 \setminus \T \label{eq:dualpolyinatmincon} \end{align} then $\vect{z}$ is the unique minimizer of $\mathrm{AN}(\vy)$. Moreover, $\vect{q}$ is a dual optimal solution. \label{prop:dualmin} \end{proposition} The proof of Proposition \ref{prop:dualmin} is standard (see, e.g., \cite[Proof of Prop.~2.4]{tang_compressed_2013}) and is provided in Appendix \ref{sec:proofprop:dualmin} for the convenience of the reader. The proof of Theorem \ref{thm:mainres} consists of constructing a dual polynomial satisfying the conditions of Proposition \ref{prop:dualmin}; see Section \ref{app:proofmainres}. \section{Relationship between the continuous time and discrete time models \label{sec:probform}} In this section we discuss in more detail how the discrete time model \eqref{eq:periorel} follows from the continuous time model \eqref{eq:iorelintro} through band- and time-limitations. This section is geared towards readers with interest in radar and wireless communication applications. Readers not interested in the detailed justification of \eqref{eq:periorel} may wish to skip this section on a first reading. We start by explaining the effect of band- and time-limitation in continuous time. We show that \eqref{eq:periorel} holds \emph{exactly} when the probing signal is $T$-periodic, and holds \emph{approximately} when the probing signal is essentially time-limited to an interval of length $3T$, as discussed in the introduction. Finally, we explicitly quantify the corresponding approximation error.
As mentioned previously, radar systems and wireless communication channels are typically modeled as linear systems whose response is a weighted superposition of delayed and Doppler-shifted versions of the probing signal. In general, the response $y=Hx$ of the system $H$ to the probing signal $\prsig$ is given by \begin{equation} y(t) = \iint \sfunc (\tau,\nu) \prsig(t-\tau) e^{i2\pi \nu t} d\nu d\tau, \label{eq:ltvsys} \end{equation} where $\sfunc(\tau,\nu)$ denotes the spreading function associated with the system. In the channel identification and radar problems, the probing signal $x$ can be controlled by the system engineer and is known. The spreading function depends on the scene and is unknown. We assume that the spreading function consists of $\S$ point scatterers. In radar, these point scatterers correspond to moving targets. Mathematically, this means that the spreading function specializes to \begin{align} \sfunc(\tau,\nu) = \sum_{j=1}^{\S} b_j \delta(\tau-\bar \tau_j) \delta(\nu-\bar \nu_j). \label{eq:origradarspreadfunc} \end{align} Here, $b_j$, $j=1,...,\S$, are (complex-valued) attenuation factors associated with the delay-Doppler pair $(\bar \tau_j,\bar \nu_j)$. With \eqref{eq:origradarspreadfunc}, \eqref{eq:ltvsys} reduces to the input-output relation \eqref{eq:iorelintro} stated in the introduction, i.e., to \[ y(t) = \sum_{j=1}^\S b_j \prsig(t-\bar \tau_j) e^{i2\pi \bar \nu_j t} . \] \subsection{The effect of band- and time-limitations} In practice, the probing signal $x$ has finite bandwidth $B$ and the received signal $y$ can only be observed over a finite time interval of length $T$. We refer to this as time-limitation, even though $y$ is non-zero outside of the time interval of length $T$. As shown next, these band- and time-limitations lead to a discretization of the input-output relation \eqref{eq:ltvsys} and determine the ``natural'' resolution of the system: $1/B$ and $1/\Tint$ in the $\tau$- and $\nu$-directions, respectively.
Specifically, using the fact that $\prsig$ is band-limited to $[-B/2,B/2]$, \eqref{eq:ltvsys} can be rewritten in the form \begin{equation} y(t) = \sum_{k \in \mb Z}\sum_{\ell \in \mb Z} \overline{\sfunc}\! \left(\frac{\ell}{B},\frac{k}{\Tint} \right) \prsig \!\left( t-\frac{\ell}{B} \right) e^{i2\pi \frac{k}{\Tint} t}, \label{eq:sfunc_discrete} \end{equation} with $t \in [-\Tint/2,\Tint/2]$. Here, \begin{align} \overline{\sfunc}(\tau,\nu) \label{eq:btlimiorel} \defeq \iint \! \sfunc(\tau',\nu') \sinc( (\tau \!- \!\tau') B ) \sinc( (\nu \!-\! \nu') \Tint ) d\tau' d\nu' \end{align} is a smeared version of the original spreading function. The relation \eqref{eq:sfunc_discrete} appears in \cite{bello_characterization_1963}; for the sake of completeness, we detail the derivations leading to \eqref{eq:sfunc_discrete} in Appendix \ref{app:iorelproof}. For point scatterers, i.e., for the spreading function in \eqref{eq:origradarspreadfunc}, \eqref{eq:btlimiorel} specializes to \begin{align} \overline{\sfunc}(\tau,\nu) = \sum_{j=1}^{\S} b_j \sinc( (\tau - \bar \tau_j) B ) \sinc( (\nu - \bar \nu_j) \Tint ). \label{eq:specialradar} \end{align} \begin{figure} \centering \includegraphics[width=\textwidth]{f0.pdf} \caption{\label{fig:intersect} Illustration of the spreading function $\sfunc(\tau,\nu)$ and the corresponding smeared spreading function $\overline{\sfunc}(\tau,\nu)$.} \end{figure}% Imagine for a moment that we could measure $\overline{\sfunc}(\tau,\nu)$ directly. We see that $\overline{\sfunc}(\tau,\nu)$ is the 2D low-pass-filtered version of the signal $\sfunc(\tau,\nu)$ in~\eqref{eq:origradarspreadfunc}, where the filter has resolution $1/B$ in $\tau$ direction and resolution $1/T$ in $\nu$ direction; see Figure \ref{fig:intersect} for an illustration. Estimation of the triplets $(b_j,\bar \tau_j,\bar \nu_j),\ j=1,\ldots,S$, from $\overline{\sfunc}(\tau,\nu)$ is the classical 2D line spectral estimation problem. 
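Concretely, for point scatterers $\overline{\sfunc}$ is just a sum of products of sinc kernels and can be evaluated directly; in the sketch below (function name ours), an isolated scatterer is reproduced exactly at its location, while scatterers closer than about $(1/B, 1/T)$ would blur together:

```python
import numpy as np

def smeared_sfunc(tau, nu, b, tau_bar, nu_bar, B, T):
    """Evaluate the smeared spreading function for point scatterers:
    s_bar(tau, nu) = sum_j b_j sinc((tau - tau_bar_j) B) sinc((nu - nu_bar_j) T),
    with sinc(t) = sin(pi t)/(pi t) (numpy's convention matches the paper's)."""
    tau_bar = np.asarray(tau_bar); nu_bar = np.asarray(nu_bar)
    return np.sum(np.asarray(b)
                  * np.sinc((tau - tau_bar) * B)
                  * np.sinc((nu - nu_bar) * T))

# A single point scatterer is reproduced exactly at its location ...
assert np.isclose(smeared_sfunc(0.3, -0.2, [2.0], [0.3], [-0.2], B=10, T=50), 2.0)
# ... and the main lobe has width ~1/B in tau: the first zero sits at distance 1/B.
assert np.isclose(smeared_sfunc(0.3 + 1/10, -0.2, [2.0], [0.3], [-0.2], B=10, T=50), 0.0)
```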
In our setup, the situation is further complicated by the fact that we cannot measure $\overline{\sfunc}(\tau,\nu)$ directly. We only get access to $\overline{\sfunc}(\tau,\nu)$ after the application of the Gabor linear operator in~\eqref{eq:sfunc_discrete}. \subsection{Choice of probing signal} Next, we consider a probing signal that is band-limited and essentially time-limited to an interval of length $3T$. To be concrete, we consider the signal \begin{align} \TL{x}(t) = \sum_{\ell=-\L-N}^{\L+N} x_\ell \sinc(tB - \ell) \label{eq:truncsignal} \end{align} where the coefficients $x_\ell$ are $L$-periodic, with the $x_\ell, \ell = -N,...,N$, i.i.d.~$\mc N(0,1/\L)$. A realization of the random signal $\TL{x}(t)$ along with its Fourier transform is depicted in Figure \ref{fig:prsig}. Since the sinc-kernel above is band-limited to $[-\frac{B}{2},\frac{B}{2}]$, $\TL{x}$ is band-limited to $[-\frac{B}{2},\frac{B}{2}]$ as well. As the sinc-kernel decays relatively fast, $\TL{x}$ is essentially supported on the interval $[-\frac{3T}{2},\frac{3T}{2}]$. We hasten to add that there is nothing fundamental about using the sinc-kernel to construct $\TL{x}$ here; we choose it out of mathematical convenience, and could as well use a kernel that decays faster in time. For readers familiar with wireless communication, we point out that the partial periodization of $\TL{x}$ serves a purpose similar to that of the cyclic prefix used in OFDM systems. \begin{figure} \centering \includegraphics{./f1.pdf} \caption{\label{fig:prsig} The probing signal $\TL{x}(t)$ and the real part of its Fourier transform $\TL{X}(f)$ for $B=1, T = 61, \L = 61$: $\TL{x}(t)$ is essentially time-limited to an interval of length $3T$ and band-limited to $[-B/2,B/2]$. } \end{figure} \subsection{Sampling the output} Recall that the delay-Doppler pairs satisfy, by assumption, $(\bar \tau_j, \bar \nu_j) \in [-\frac{T}{2},\frac{T}{2}]\times [-\frac{B}{2},\frac{B}{2}]$.
Thus, $H\TL{x}$ is band-limited to $[-B,B]$ and approximately time-limited to $[-2T,2T]$. According to the $2WT$-Theorem \cite{slepian_bandwidth_1976,durisi_sensitivity_2012}, $H \TL{x}$ has on the order of $BT$ degrees of freedom so that one can think of $H \TL{x}$ as having effective dimensionality on the order of $BT$. Therefore, by sampling $H\TL{x}$ at rate $1/B$ in the interval $[-T/2,T/2]$, we collect the number of samples that matches the dimensionality of $H\TL{x}$ up to a constant. The samples $\TL{y}_p \defeq (H\TL{x})(p/B)$ are given by \begin{equation} \TL{y}_p = \sum_{k,\ell \in \mb Z} \overline{\sfunc} \left(\frac{\ell}{B},\frac{k}{\Tint} \right) \TL{\prsig} \left(\frac{p-\ell}{B} \right) e^{i2\pi \frac{k p}{B \Tint}},\quad p = -N,...,N, \label{eq:sfunc_discretesample} \end{equation} where $N = (\L-1)/2$ and $\L = BT$. Substituting \eqref{eq:specialradar} into \eqref{eq:sfunc_discretesample} and defining $\TL{x}_{\ell} := \TL{x}(\ell/B)$ yields \begin{align} \TL{y}_p = \sum_{j=1}^{\S} b_j \sum_{k,\ell \in \mb Z} \sinc(\ell - \bar \tau_j B) \sinc(k - \bar \nu_j T) \, \TL{x}_{p-\ell} e^{i2\pi \frac{k p}{B \Tint}}. \label{eq:sfunc_discretesample2} \end{align} Next, we rewrite \eqref{eq:sfunc_discretesample2} in terms of the following equivalent expression of the Dirichlet kernel defined in \eqref{eq:defDirichlet} \begin{align} D_N( t ) = \sum_{k\in \mb Z} \sinc\left( \L \left( t - k \right) \right), \label{eq:dirichletsum} \end{align} and a truncated version \begin{align} \quad \tilde D_N( t ) \defeq \sum_{k=-1}^1 \sinc\left( \L \left( t - k \right) \right), \label{eq:dirichletsum1} \end{align} as \begin{align} \TL{y}_p = \sum_{k,\ell = -N}^N \sum_{j=1}^{\S} b_j \tilde D_{N} \! \left( \frac{p-\ell}{\L} - \tau_j \right) D_{N} \! 
\left( \frac{k }{\L} - \nu_j \right) x_{\ell} e^{i2\pi \frac{k p}{\L}}, \quad p = -N,...,N, \label{eq:iowithdirchkernelapprox} \end{align} where $\tau_j = \bar \tau_j/T$, $\nu_j =\bar \nu_j/B$, as before (see Appendix \ref{eq:proofeq:iowithdirchkernel} for details). For $t \in [-1.5, 1.5]$, $\TL{D}_N( t )$ is well approximated by $D_N( t )$; therefore, \eqref{eq:iowithdirchkernelapprox} is well approximated by \begin{align} y_p = \sum_{k,\ell = -N}^N \sum_{j=1}^{\S} b_j D_{N} \! \left( \frac{\ell}{\L} - \tau_j \right) D_{N} \! \left( \frac{k }{\L} - \nu_j \right) x_{p- \ell} e^{i2\pi \frac{k p}{\L}}, \quad p = -N,...,N. \label{eq:iowithdirchkernel} \end{align} This is equivalent to \eqref{eq:periorel}, as shown in Appendix \ref{app:discspfunc}. Note that \eqref{eq:iowithdirchkernel} can be viewed as the periodic equivalent of \eqref{eq:sfunc_discrete} (with $\overline{\sfunc}(\tau,\nu)$ given by \eqref{eq:specialradar}). We show in~Appendix \ref{eq:proofeq:iowithdirchkernel} that the discretization~\eqref{eq:iowithdirchkernel} can be obtained from \eqref{eq:sfunc_discrete} without any approximation if the probing signal $x(t)$ is chosen to be $T$-periodic with its samples selected as $x(\ell/B) = x_\ell$, for all $\ell$. Recall that the samples of the quasi-periodic probing signal satisfy $\TL{x}(\ell/B) = x_\ell$ for $\ell \in [-N -\L, \L +N]$ and $\TL{x}(\ell/B) = 0$ otherwise. Clearly, the periodic probing signal is not time-limited and, hence, cannot be used in practice. The error we make by approximating $\TL{y}_p$ with $y_p$ is very small, as shown by the following proposition, proven in Appendix \ref{app:errbound}. We also show numerically in Section \ref{sec:numres} that in practice, the error we make by approximating $\TL{y}_p$ with $y_p$ is negligible. \begin{proposition} Let the $x_\ell$ be i.i.d.~$\mc N(0,1/\L)$ and let the sign of $b_j$ be i.i.d.~uniform on $\{-1,1\}$.
For all $\alpha>0$, the difference between $\TL{y}_p$ in \eqref{eq:iowithdirchkernelapprox} and $y_p$ in \eqref{eq:iowithdirchkernel} satisfies \[ \PR{|y_p - \TL{y}_p| \geq c \frac{\alpha}{\L} \norm[2]{\vect{b}} } \leq (4 + 2 \L^2) e^{-\frac{\alpha}{2}}, \] where $\vect{b} = \transp{[b_1,...,b_\S]}$ and $c$ is a numerical constant. \label{prop:errbound} \end{proposition} Proposition \ref{prop:errbound} ensures that with high probability, the $\TL{y}_p$ are very close to the $y_p$. Note that, under the conditions of Theorem \ref{thm:mainres}, $\norm[2]{\vy} \approx \norm[2]{\vect{b}}, \vy = \transp{[y_{-N},...,y_{N}]}$, with high probability (not shown here). Since it follows from Proposition \ref{prop:errbound} that \[ \norm[2]{\vy - \TL{\vy} } \leq \frac{c_1 }{\sqrt{\L}} \norm[2]{\vect{b}} \] holds with high probability, we can also conclude that $\frac{\norm[2]{\vy - \TL{\vy}} }{\norm[2]{\vy} } \leq \frac{c_2}{\sqrt{\L}}, \TL{\vy} = \transp{[\TL{y}_{-N},...,\TL{y}_{N}]}$ holds with high probability. That is, the relative error we make in approximating the $\TL{y}_p$ with the $y_p$ tends to zero in $\ell_2$ norm as $\L$ grows. \newcommand{\dT}{\mathcal S} \section{Super-resolution radar on a grid \label{sec:discretesuperes}} One approach to estimating the triplets $(b_j,\tau_j, \nu_j)$ from the samples $y_p$ in \eqref{eq:periorel} is to suppose the time-frequency shifts lie on a \emph{fine} grid, and solve the problem on that grid. When the time-frequency shifts do not exactly lie on a grid, this leads to an error that becomes small as the grid becomes finer. In Section \ref{sec:numres} below we study numerically the error incurred by this approach; for an analysis of the corresponding error in related problems, see \cite{tang_sparse_2013}. Our results have immediate consequences for the corresponding (discrete) sparse signal recovery problem, as we discuss next.
Suppose we want to recover a sparse discrete signal $s_{m,n} \in \complexset,\; m,n = 0,...,K-1$, $K \geq \L=2N+1$, from samples of the form \begin{align} y_p &= \sum_{m,n=0}^{K-1} s_{m,n} [\mc F_{m/K} \mc T_{n/K} \vx]_p \nonumber \\ &= \sum_{m,n=0}^{K-1} \left( e^{i2\pi p \frac{m}{K} } \frac{1}{L} \sum_{\ell,k=-N}^{N} e^{-i2\pi k \frac{n}{K} } e^{i2\pi (p-\ell) \frac{k}{\L} } x_{\ell} \right) s_{m,n}, \quad p=-N,...,N. \label{eq:perioreldisc} \end{align} To see the connection to the gridless setup in the previous sections, note that the recovery of the $\S$-sparse (discrete) signal $s_{m,n}$ is equivalent to the recovery of the triplets $(b_j, \tau_j,\nu_j)$ from the samples $y_p$ in \eqref{eq:periorel} if we assume that the $(\tau_j,\nu_j)$ lie on a $(1/K,1/K)$ grid (the non-zeros of $s_{m,n}$ correspond to the $b_j$). Writing the relation \eqref{eq:perioreldisc} in matrix-vector form yields \[ \vy = \vect{R} \vs, \] where $[\vy]_{p} \defeq y_p$, $[\vs]_{(m,n)} \defeq s_{m,n}$, and $\vect{R} \in \complexset^{\L \times K^2}, K \geq \L$, is the matrix with $(m,n)$-th column given by $\mc F_{m/K} \mc T_{n/K} \vx$. The matrix $\vect{R}$ contains as columns ``fractional'' time-frequency shifts of the sequence $x_\ell$. If $K = \L$, $\vect{R}$ contains as columns only ``whole'' time-frequency shifts of the sequence $x_\ell$ and $\vect{R}$ is equal to the Gabor matrix $\vect{G}$ defined by \eqref{eq:defgabormtx}. In this sense, $K = \L$ is the natural grid (cf.~Section \ref{sec:ongrid}) and the ratio $\SRF \defeq K/\L$ can be interpreted as a super-resolution factor. The super-resolution factor determines how much finer the $(1/K,1/K)$ grid is than the original $(1/\L,1/\L)$ grid.
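The statement that $\vect{R}$ coincides with the Gabor matrix on the natural grid is easy to confirm numerically; the sketch below (helper names ours, small $L$) builds the columns of $\vect{R}$ from \eqref{eq:perioreldisc} and those of $\vect{G}$ from \eqref{eq:defgabormtx} and matches them up for $K = \L$:

```python
import numpy as np

L = 5; N = (L - 1) // 2; K = L           # K = L: super-resolution factor SRF = 1
rng = np.random.default_rng(0)
x = rng.standard_normal(L) / np.sqrt(L)  # x_{-N}, ..., x_N, extended L-periodically
idx = np.arange(-N, N + 1)               # p, k, l all range over -N, ..., N

def R_col(m, n):
    """Column (m, n) of R, read off the discrete input-output relation:
    [R]_{p,(m,n)} = e^{i2pi p m/K} (1/L) sum_{l,k} e^{-i2pi k n/K} e^{i2pi (p-l) k/L} x_l."""
    out = np.empty(L, dtype=complex)
    for i, p in enumerate(idx):
        s = sum(np.exp(-2j * np.pi * k * n / K)
                * np.exp(2j * np.pi * (p - l) * k / L) * x[l + N]
                for l in idx for k in idx)
        out[i] = np.exp(2j * np.pi * p * m / K) * s / L
    return out

def G_col(k, l):
    """Column (k, l) of the Gabor matrix: [G]_{p,(k,l)} = x_{p-l} e^{i2pi k p/L}."""
    return x[(idx - l + N) % L] * np.exp(2j * np.pi * k * idx / L)

# For K = L every "fractional" shift is a whole shift: column (m, n) of R equals
# column (k, l) of G once m, n are wrapped from {0, ..., L-1} to {-N, ..., N}.
for m in range(K):
    for n in range(K):
        k = (m + N) % L - N
        l = (n + N) % L - N
        assert np.allclose(R_col(m, n), G_col(k, l))
```

The index wrapping accounts for $m, n$ running over $\{0,...,K-1\}$ in \eqref{eq:perioreldisc} while $k, \ell$ run over $\{-N,...,N\}$ in \eqref{eq:defgabormtx}.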
A standard approach to the recovery of the sparse signal $\vs$ from the underdetermined linear system of equations $\vy = \vect{R} \vs$ is to solve the following convex program: \begin{align} \mathrm{L1}(\vy) \colon \;\; \underset{\minlet{\vs}}{\text{minimize}} \; \norm[1]{\minlet{\vs}} \text{ subject to } \vy = \vect{R} \minlet{\vs}. \label{eq:l1minmG} \end{align} The following theorem is our main result for recovery on the fine grid. \begin{theorem} Assume that the samples of the probing signal $x_\ell, \ell = -N,...,N$, in \eqref{eq:perioreldisc} are i.i.d. $\mathcal N(0,1/\L)$ random variables, $L=2N+1$. Let $\mathbf{y}\in\mathbb{C}^\L$, with $L\geq 1024$, be the samples of the output signal obeying the input-output relationship \eqref{eq:perioreldisc}, i.e., $\vy = \mathbf{R}\mathbf{s}$. Let $\mathcal{S} \subseteq \{0,...,K-1\}^2$ be the support of the vector $[\vs]_{(m,n)}$, $m,n=0,...,K-1$, and suppose that it satisfies the minimum separation condition \[ \min_{(m,n), (m', n') \in \mathcal{S} \colon (m,n) \neq ( m', n')} \frac{1}{K} \max(|m- m'|, |n - n'|) \geq \frac{2.38}{N}. \] Moreover, suppose that the non-zeros of $\vs$ have random sign, i.e., $\mathrm{sign}([\vs]_{(m,n)}), (m,n) \in \dT$ are i.i.d.~uniform on $\{-1,1\}$. Choose $\delta>0$ and assume \begin{align*} S\le c\frac{L}{(\log(L^6/\delta))^3}, \end{align*} where $c$ is a numerical constant. Then, with probability at least $1-\delta$, $\mathbf{s}$ is the unique minimizer of $\mathrm{L1}(\vy)$ in \eqref{eq:l1minmG}. \label{cor:discretesuperres} \end{theorem} The proof of Theorem \ref{cor:discretesuperres}, presented in Appendix \ref{sec:proofdiscrete}, is closely linked to that of Theorem \ref{thm:mainres}. As reviewed in Appendix \ref{sec:proofdiscrete}, the existence of a certain dual certificate guarantees that $\vs$ is the unique minimizer of $\mathrm{L1}(\vy)$ in \eqref{eq:l1minmG}.
The dual certificate is obtained directly from the dual polynomial for the continuous case (i.e., from Proposition \ref{prop:dualpolynomial} in Section \ref{app:proofmainres}). \subsection{Implementation details} The matrix $\vect{R}$ has dimension $\L \times K^2$, thus as the grid becomes finer (i.e., $K$ becomes larger) the complexity of solving \eqref{eq:l1minmG} increases. The complexity of solving \eqref{eq:l1minmG} can be managed as follows. First, the complexity of first-order convex optimization algorithms (such as TFOCS \cite{becker_templates_2011}) for solving \eqref{eq:l1minmG} is dominated by multiplications with the matrices $\vect{R}$ and $\herm{\vect{R}}$. Due to the structure of $\vect{R}$, those multiplications can be done very efficiently utilizing the fast Fourier transform. Second, in practice we have $ (\bar \tau_j, \bar \nu_j) \in [0,\taum]\times [0, \num] $, which means that \begin{align} (\tau_j, \nu_j) \in \left[0, \frac{\taum }{ T } \right] \times \left[0, \frac{\num }{ B} \right] . \label{eq:restaunun} \end{align} It is therefore sufficient to consider the restriction of $\vect{R}$ to the $\frac{\taum \num K^2}{BT} = \taum \num \L \cdot \SRF^2$ many columns corresponding to the $(\tau_j, \nu_j)$ satisfying \eqref{eq:restaunun}. Since typically $\taum \num \ll BT=\L$, this results in a significant reduction of the problem size. \subsection{Numerical results} \label{sec:numres} We next evaluate numerically the resolution obtained by our approach. We consider identification of the time-frequency shifts from the response to the (essentially) time-limited signal in \eqref{eq:truncsignal}, which corresponds to identification from the samples $\TL{y}_p, p=-N,...,N$ given by \eqref{eq:iowithdirchkernelapprox}, without and with additive Gaussian noise.
We also consider identification from the response to a signal with $\L$-periodic samples $x(\ell/B) = x_\ell$, which corresponds to identification from the samples $y_p, p=-N,...,N$ in \eqref{eq:iowithdirchkernel}. To account for additive noise, we solve the following modification of $\mathrm{L1}(\vy)$ in \eqref{eq:l1minmG} \begin{align} \text{L1-ERR}\colon \underset{\minlet{\vs}}{\text{minimize}} \; \norm[1]{\minlet{\vs}} \text{ subject to } \norm[2]{\vy - \vect{R} \minlet{\vs} }^2 \leq \delta, \label{eq:BDDN} \end{align} with $\delta$ chosen on the order of the noise variance. We choose $\L = 201$, and for each problem instance, we draw $S=10$ time-frequency shifts $(\tau_j,\nu_j)$ uniformly at random from $[0,2/\sqrt{201}]^2$, which amounts to drawing the corresponding delay-Doppler pairs $(\bar \tau_j,\bar \nu_j)$ from $[0,2]\times [0,2]$. We use SPGL1 \cite{BergFriedlander:2008} to solve \text{L1-ERR}. The attenuation factors $b_j$ corresponding to the time-frequency shifts $(\tau_j,\nu_j)$ are drawn uniformly at random from the complex unit disc, independently across $j$. In Figure \ref{fig:realrecov} we plot the average resolution error versus the super-resolution factor $\SRF = K/L$. The resolution error is defined as the average over $j=1,...,S$ of $\L \sqrt{ (\hat \tau_j - \tau_j)^2 + (\hat \nu_j - \nu_j)^2}$, where the $(\hat \tau_j, \hat \nu_j)$ are the time-frequency shifts obtained by solving \eqref{eq:BDDN}. There are three error sources incurred by this approach. The first is the gridding error obtained by assuming the points lie on a fine grid with grid constant $(1/K,1/K)$. The second is the model error from approximating the $\TL{y}_p$ in \eqref{eq:iowithdirchkernelapprox}, obtained by sending an essentially time-limited probing signal (cf.~\eqref{eq:truncsignal}), with the $y_p$ in \eqref{eq:iowithdirchkernel}, obtained by sending a truly periodic input signal $x(t)$. The third is the additive noise error. 
Note that the resolution attained at $\SRF=1$ corresponds to the resolution attained by matched filtering and by the compressive sensing radar architecture \cite{herman_high-resolution_2009} discussed in Section \ref{sec:ongrid}. We see that for all $\SRF>1$, the resolution is significantly improved using our super-resolution radar approach. We furthermore observe that for low values of $\SRF$, the gridding error dominates, while for large values of $\SRF$, the additive noise error dominates. By looking at the noiseless case, it is seen that the gridding error decays as $1/\SRF$, e.g., at $\SRF = 20$, the error is about $0.4/20$. This demonstrates empirically that in practice solving the super-resolution radar problem on a fine grid is essentially as good as solving it on the continuum, provided the super-resolution factor is chosen sufficiently large. Finally, we observe that the model error is negligible, even for large signal-to-noise ratios (the curves in Figure \ref{fig:realrecov} corresponding to the noiseless and the periodic case are indistinguishable). \begin{figure} \begin{center} \includegraphics{./f2.pdf} \end{center} \caption{\label{fig:realrecov} Resolution error $\L \sqrt{ (\hat \tau_j - \tau_j)^2 + (\hat \nu_j - \nu_j)^2}$ for the recovery of $\S = 10$ time-frequency shifts from the samples $y_p, p=-N,...,N$ in \eqref{eq:iowithdirchkernel} (periodic input signal $x(t)$), and identification from the samples $\TL{y}_p, p =-N, ...,N$ in \eqref{eq:iowithdirchkernelapprox} (essentially time-limited input signal $\TL{x}(t)$) with and without additive Gaussian noise $n_p$ of a certain signal-to-noise ratio $\text{SNR} \defeq \norm[2]{[\TL{y}_{-N},...,\TL{y}_{N}]}^2/\norm[2]{[n_{-N},...,n_{N}]}^2$, by solving \text{L1-ERR}.
} \end{figure} \section{Identification of time-frequency shifts} \label{sec:sdprel} In this section we show that the time-frequency shifts can be obtained from a solution to the dual problem, and present a semidefinite programming relaxation to obtain a solution of the dual efficiently. \subsection{Semidefinite programming relaxation of the dual} Our relaxation is similar in spirit to related convex programs in \cite[Sec.~3.1]{bhaskar_atomic_2012}, \cite[Sec.~4]{candes_towards_2014}, and \cite[Sec.~2.2]{tang_compressed_2013}. We show that the constraint in the dual is equivalent to the requirement that the absolute value of a specific 2D trigonometric polynomial is bounded by one, and, therefore, this constraint can be formulated as a linear matrix inequality. The dimensions of the corresponding matrices are, however, unspecified. Choosing a certain relaxation degree for those matrices, and substituting the constraint in the dual with the corresponding matrix inequality leads to a semidefinite programming relaxation of the dual. Recall that the constraint of the dual program \eqref{eq:dual} is \[ \norm[\mathcal A^\ast]{\herm{\vect{G}} \vect{q}} = \sup_{\vr \in [0,1]^2} \left| \innerprod{\herm{\vect{G}} \vect{q}}{\va(\vr)} \right| \leq 1. \] By definition of the Dirichlet kernel in \eqref{eq:defDirichlet}, the vector $\va(\vr)$ defined in \eqref{eq:defatoms} can be written as \[ \va(\vr) = \herm{\vect{F}} \vf(\vr) \] where $\herm{\vect{F}}$ is the (inverse) 2D discrete Fourier transform matrix with the entry in the $(k,\ell)$-th row and $(r,q)$-th column given by $[\herm{\vect{F}}]_{(k,\ell), (r,q)} \defeq \frac{1}{\L^2} e^{i2\pi \frac{qk + r\ell}{\L}}$ and the entries of the vector $\vf$ are given by $[\vf(\vr)]_{(r,q)} \defeq e^{-i2\pi (r\tau + q \nu)}$, $k, \ell, q, r = -N, ..., N$, $\vr=[\tau, \nu]^T$. 
With these definitions, \begin{align} \innerprod{\herm{\vect{G}}\vect{q}}{\va(\vr)} = \innerprod{\herm{\vect{G}}\vect{q}}{ \herm{\vect{F}} \vf(\vr) } = \innerprod{\vect{F}\herm{\vect{G}}\vect{q}}{\vf(\vr) } = \sum_{r,q=-N}^N [\vect{F}\herm{\vect{G}}\vect{q}]_{(r,q)} e^{i2\pi (r\tau + q \nu)}. \label{eq:Q2Dtripolyresp} \end{align} Thus, the constraint in the dual \eqref{eq:dual} says that the 2D trigonometric polynomial in \eqref{eq:Q2Dtripolyresp} is bounded in magnitude by $1$ for all $\vr \in [0,1]^2$. The following form of the bounded real lemma allows us to approximate this constraint by a linear matrix inequality. \begin{proposition}[{\cite[Cor.~4.25, p.~127]{dumitrescu_positive_2007}}] Let $P$ be a bivariate trigonometric polynomial in $\vr=[\tau, \nu]^T$ \begin{equation*} P(\vr) = \sum_{k, \ell=-N}^{N} p_{(k,\ell)} e^{i2\pi (k \tau + \ell \nu)}. \end{equation*} If \[ \sup_{\vr \in [0,1]^2} |P(\vr)| < 1 \] then there exists a matrix $\vect{Q}\succeq 0$ such that \begin{equation} \begin{bmatrix} \vect{Q} & \vp \\ \herm{\vp} & 1 \end{bmatrix} \succeq \vect{0} \qquad \text{and} \qquad \forall k,\ell = -N,...,N, \quad \mathrm{trace}( (\boldsymbol{\Theta}_{k} \otimes \boldsymbol{\Theta}_{\ell}) \vect{Q}) = \begin{cases} 1, & (k,\ell) = (0,0) \\ 0, & \text{otherwise} \end{cases} \label{eq:adfol} \end{equation} where $\boldsymbol{\Theta}_{k}$ designates the elementary Toeplitz matrix with ones on the $k$-th diagonal and zeros elsewhere. The vector $\vp$ contains the coefficients $p_{(k,\ell)}$ of $P$ as entries, and is padded with zeros to match the dimension of $\vect{Q}$. Reciprocally, if there exists a matrix $\vect{Q} \succeq 0$ satisfying \eqref{eq:adfol}, then \[ \sup_{\vr \in [0,1]^2} |P(\vr)| \leq 1. 
\] \label{prop:BRL} \end{proposition} In contrast to the corresponding matrix inequality for the 1D case in \cite{bhaskar_atomic_2012,candes_towards_2014,tang_compressed_2013}, where the size of the matrix $\vect{Q}$ is fixed to $\L \times \L$, here, the size of the matrix $\vect{Q}$ is not determined and may, in principle, be significantly larger than the minimum size $\L^2\times \L^2$. This stems from the fact that the sum-of-squares representation of a positive trigonometric polynomial of degree $(\L, \L)$ possibly involves factors of degree larger than $(\L, \L)$ (see, e.g., \cite[Sec.~3.1]{dumitrescu_positive_2007}). Therefore, Proposition \ref{prop:BRL} only gives a sufficient condition and cannot be used to \emph{exactly} characterize the constraint of the dual program \eqref{eq:dual}. Fixing the degree of the matrix $\vect{Q}$ to the minimum size of $\L^2 \times \L^2$ yields a \emph{relaxation} of the constraint of the dual in \eqref{eq:dual} that leads to the following semidefinite programming \emph{relaxation} of the dual program \eqref{eq:dual}: \begin{align} \underset{\vect{q}, \vect{Q} \in \complexset^{\L^2\times \L^2}, \vect{Q} \succeq 0}{\text{maximize}} \, \Re \innerprod{\vect{q}}{\vy} \text{ subject to \eqref{eq:dualsemdefconstraint}} . \label{eq:dualsemdef} \end{align} \begin{align} \begin{bmatrix} \vect{Q} & \vect{F}\herm{\vect{G}}\vect{q} \\ \herm{\vect{q}} \vect{G} \herm{\vect{F}} & 1 \end{bmatrix} \succeq \vect{0}, \quad \mathrm{trace}( (\boldsymbol{\Theta}_{k} \otimes \boldsymbol{\Theta}_{\ell}) \vect{Q}) = \begin{cases} 1, & (k,\ell) = (0,0) \\ 0, & \text{otherwise} \end{cases}. \label{eq:dualsemdefconstraint} \end{align} Note that we could also use a higher relaxation degree than $(\L,\L)$, which would, in general, lead to a better approximation of the original problem.
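The linear trace constraints in \eqref{eq:dualsemdefconstraint} are straightforward to form programmatically. The sketch below (Python/NumPy; the function names are ours, and the feasible point $\vect{Q} = \vect{I}/\L^2$ is used purely as a sanity check, corresponding to the trivial polynomial $P=0$) builds the elementary Toeplitz matrices $\boldsymbol{\Theta}_k$ and verifies the constraints.

```python
import numpy as np

def elementary_toeplitz(L, k):
    """Theta_k: ones on the k-th diagonal of an L x L matrix, zeros elsewhere."""
    return np.eye(L, k=k)

def trace_constraints_hold(Q, N):
    """Check trace((Theta_k kron Theta_l) Q) = 1 for (k,l) = (0,0) and 0
    otherwise, for k, l = -N,...,N, with Q of size L^2 x L^2, L = 2N + 1."""
    L = 2 * N + 1
    for k in range(-N, N + 1):
        for l in range(-N, N + 1):
            T = np.kron(elementary_toeplitz(L, k), elementary_toeplitz(L, l))
            want = 1.0 if (k, l) == (0, 0) else 0.0
            if not np.isclose(np.trace(T @ Q), want):
                return False
    return True
```

For instance, $\vect{Q} = \vect{I}_{\L^2}/\L^2$ satisfies all the trace constraints, since $\mathrm{trace}(\boldsymbol{\Theta}_k \otimes \boldsymbol{\Theta}_\ell) = \mathrm{trace}(\boldsymbol{\Theta}_k)\,\mathrm{trace}(\boldsymbol{\Theta}_\ell)$ vanishes unless $k = \ell = 0$.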
However, relaxations of minimal degree are known to yield optimal solutions in practice, as observed, e.g., in the related problem of 2D FIR filter design~\cite{dumitrescu_positive_2007}. In Section \ref{sec:estfromdual}, we report an example that shows that the relaxation of minimal degree also yields optimal solutions for our problem in practice. \subsection{\label{sec:estfromdual}Estimation of the time-frequency shifts from the dual solution} Proposition \ref{prop:dualmin} suggests that an estimate $\hat \T$ of the set of time-frequency shifts $\T$ can be obtained from a dual solution $\vect{q}$ by identifying the $\vr_j$ for which the dual polynomial $Q(\vr) = \innerprod{\vect{q}}{\mc F_{\nu} \mc T_{\tau} \vx } = \innerprod{\vect{q}}{\vect{G} \va(\vr)}$ has magnitude $1$. In general, the solution $ \vect{q}$ to \eqref{eq:dualsemdef} is not unique, but we can ensure that \[ \T \subseteq \hat \T \defeq \{\vr \colon |\innerprod{ \vect{q}}{\vect{G} \va(\vr)}| = 1 \}. \] To see this, assume that $\T \setminus \hat \T \neq \emptyset$. Then, we have \begin{align*} \Re \innerprod{ \vect{q}}{\vect{G} \vect{z}} &= \Re \innerprod{ \vect{q}}{\vect{G} \sum_{\vr_j \in \T} b_j \va(\vr_j)} \\ &= \sum_{\vr_j \in \T \cap \hat \T } \Re ( \conj{b}_j \innerprod{\vect{q}}{\vect{G} \va(\vr_j)} ) + \sum_{\vr_j \in \T \setminus \hat \T } \Re( \conj{b}_j \innerprod{\vect{q}}{\vect{G} \va(\vr_j)} ) \\ &<\sum_{\vr_j \in \hat \T \cap \T } |b_j| + \sum_{\vr_j \in \T \setminus \hat \T } |b_j| = \norm[\mathcal A]{\vect{z}}, \end{align*} where the strict inequality follows from $\left|\innerprod{\vect{q}}{\vect{G} \va(\vr)}\right| < 1$ for $\vr \in \T \setminus \hat \T$, by definition of the set $\hat \T$. This contradicts strong duality, and thus implies that $\T \setminus \hat \T = \emptyset$, i.e., we must have $\T \subseteq \hat \T$. In general, we might have $\T \neq \hat \T$.
However, in ``most cases'', standard semidefinite programming solvers (e.g., SDPT3) will yield a solution such that $\T = \hat \T$, see \cite[Prop.~2.5]{tang_compressed_2013} and \cite[Sec.~4]{candes_towards_2014} for formal results on related problems. We next provide a numerical example where the time-frequency shifts can be recovered perfectly from a solution to the semidefinite program \eqref{eq:dualsemdef}. We choose $N=8$, consider the case of two time-frequency shifts, specifically $\T = \{ (0.2,0.5), (0.8,0.5)\}$, and let the coefficients $x_\ell, \ell = -N,...,N$ and the $b_j, j=1,2$, be i.i.d.~uniform on the complex unit sphere. In Figure \ref{fig:exdualpoly} we plot the dual polynomial $Q(\vr) = \innerprod{\vect{q}}{\vect{G} \va(\vr)}$ with $\vect{q}$ obtained by solving \eqref{eq:dualsemdef} via CVX \cite{cvx_2014} (CVX calls the SDPT3 solver). It can be seen that the time-frequency shifts can be recovered perfectly, i.e., $\hat \T = \T$. \begin{figure} \centering \includegraphics[width=\textwidth]{./f3.pdf} \caption{\label{fig:exdualpoly} Localization of the time-frequency shifts via the estimated dual polynomial $Q(\tau,\nu)$ obtained by solving \eqref{eq:dualsemdef} with noiseless measurements. The red lines show the actual positions of the time-frequency shifts located at $(0.2,0.5)$ and $(0.8,0.5)$. Note that the estimated dual polynomial satisfies $|Q(\tau,\nu)| = 1$ if $(\tau,\nu) \in \{(0.2,0.5),(0.8,0.5)\}$ and $|Q(\tau,\nu)| < 1$ otherwise, thereby providing accurate identification of the time-frequency shifts.} \end{figure} \subsection{Recovery in the noisy case \label{sec:noisycase}} In practice, the samples $y_p$ in \eqref{eq:periorel} are corrupted by additive noise. In that case, perfect recovery of the $(b_j,\tau_j,\nu_j)$ is in general no longer possible, and we can only hope to identify the time-frequency shifts up to an error. 
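Given a dual vector $\vect{q}$, locating the set $\hat\T$ amounts to scanning the dual polynomial for points where $|Q(\vr)|$ reaches $1$. A minimal, self-contained sketch in Python/NumPy (the helper names are ours; it evaluates $Q(\vr) = \innerprod{\vect{q}}{\mc F_\nu \mc T_\tau \vx}$ directly from the probing samples, using the symmetric index convention, and thresholds $|Q|$ on a coarse grid):

```python
import numpy as np

def tf_atom(x, tau, nu):
    """The vector F_nu T_tau x for probing samples x_ell, ell = -N,...,N."""
    L = len(x)
    N = (L - 1) // 2
    k = np.arange(-N, N + 1)
    F = np.exp(-2j * np.pi * np.outer(k, k) / L)
    x_t = np.conj(F) @ (np.exp(-2j * np.pi * k * tau) * (F @ x)) / L
    return np.exp(2j * np.pi * k * nu) * x_t

def dual_poly(q, x, tau, nu):
    """Q(tau, nu) = <q, F_nu T_tau x>, conjugate-linear in the first argument."""
    return np.vdot(q, tf_atom(x, tau, nu))

def locate_shifts(q, x, grid=64, tol=1e-3):
    """Grid points where |Q| is within tol of 1: candidate time-frequency shifts."""
    pts = np.arange(grid) / grid
    return [(t, v) for t in pts for v in pts
            if abs(abs(dual_poly(q, x, t, v)) - 1.0) < tol]
```

In practice, the grid search only provides initial guesses, which can then be refined by a local search over $\vr$.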
In the noisy case, we solve the following convex program: \begin{align} \underset{\minlet\vect{z}}{\text{minimize}} \, \norm[\mathcal A]{\minlet{\vect{z}}} \; \text{ subject to } \; \norm[2]{\vy - \vect{G} \minlet{\vect{z}}} \leq \delta. \label{eq:lasso} \end{align} The semidefinite programming formulation of the dual of \eqref{eq:lasso} takes the form \begin{align} \underset{\vect{q}, \vect{Q}}{\text{maximize}} \, \Re \innerprod{\vect{q}}{\vy} - \delta \norm[2]{\vect{q}} \text{ subject to \eqref{eq:dualsemdefconstraint}} \label{eq:lassodualsemdef} \end{align} and we again estimate the time-frequency shifts $\vr_j$ as the $\vr$ for which the dual polynomial $Q(\vr) = \innerprod{\vect{q}}{\vect{G} \va(\vr)}$ achieves magnitude $1$. We leave theoretical analysis of this approach to future work, and only provide a numerical example demonstrating that this approach is stable. We choose $N=5$, consider the case of one time-frequency shift at $(\tau_1,\nu_1) = (0.5,0.8)$ (so that the dual polynomial can be visualized in 3D) and let the coefficients $x_\ell, \ell = -N,...,N$ and $b_1$ be random variables that are i.i.d.~uniform on the complex unit sphere but normalized to have variance $1/\L$. We add i.i.d.~complex Gaussian noise $n_p$ to the samples $y_p$ in \eqref{eq:periorel}, such that the signal-to-noise ratio (SNR), defined as $\text{SNR} = \norm[2]{[y_{-N},...,y_{N}]}^2/\norm[2]{[n_{-N},...,n_{N}]}^2$, is 10dB (we express the SNR in decibels computed as $10\log_{10}(\text{SNR})$). In Figure \ref{fig:noisypoly} we plot the dual polynomial $Q(\vr) = \innerprod{\vect{q}}{\vect{G} \va(\vr)}$ with $\vect{q}$ obtained by solving \eqref{eq:lassodualsemdef} (with $\delta=0.8$) using CVX. The time-frequency shift for which the dual polynomial achieves magnitude $1$ is $(0.4942,0.7986)$; it is very close to the original time-frequency shift $(0.5,0.8)$.
\begin{figure} \centering \includegraphics[width=\textwidth]{./f4.pdf} \caption{\label{fig:noisypoly} Localization of the time-frequency shifts via the estimated dual polynomial $Q(\tau,\nu)$ obtained by solving \eqref{eq:lassodualsemdef} using noisy measurements (10dB noise). The estimated dual polynomial satisfies $|Q(\tau,\nu)| = 1$ for $(\tau,\nu) = (0.4942,0.7986)$ (marked by $\times$); this is very close to the original time-frequency shift $(0.5,0.8)$ (marked by $\oplus$). } \end{figure} \section{Relation to previous work \label{sec:priorwork}} The general problem of extracting the spreading function $\sfunc (\tau,\nu)$ of a linear time-varying (LTV) system of the form \eqref{eq:ltvsys} is known as system identification. LTV systems with spreading function compactly supported on a known region of area $\Delta$ in the time-frequency plane are identifiable if and only if $\Delta \leq 1$, as shown by Kailath, Bello, Kozek, and Pfander in \cite{kailath_measurements_1962,bello_measurement_1969,kozek_identification_2005,pfander_measurement_2006}. If the spreading function's support region is unknown, a necessary and sufficient condition for identifiability is $\Delta\leq 1/2$ \cite{heckel_identification_2013}. In contrast to our work, the input (probing) signal in \cite{kailath_measurements_1962,bello_measurement_1969,kozek_identification_2005,pfander_measurement_2006,heckel_identification_2013} is not band-limited, and the response to the probing signal is not time-limited. In fact, the probing signal in those works is a (weighted) train of Dirac impulses, a signal that decays neither in time nor in frequency.
If it is necessary to use a band-limited probing signal and observe the response to the probing signal for a finite amount of time (as is the case in practice), it follows from the theory developed in~\cite{kailath_measurements_1962,bello_measurement_1969,kozek_identification_2005,pfander_measurement_2006,heckel_identification_2013} that $\sfunc (\tau,\nu)$ can be identified with precision of $1/B$ and $1/T$ in the $\tau$ and $\nu$ directions, respectively. This is good enough if the function $\sfunc (\tau,\nu)$ is smooth in the sense that it does not vary much at the scale $1/B$ and $1/T$ in the $\tau$ and $\nu$ directions, respectively, and its support contains only a few disjoint regions. The assumptions we make about $\sfunc (\tau,\nu)$ in this paper are very far from smoothness: our $\sfunc (\tau,\nu)$ consists of Dirac delta functions, and hence is peaky, and the elements of its support are separated points. In this sense, the results in this paper are complementary to those in~\cite{kailath_measurements_1962,bello_measurement_1969,kozek_identification_2005,pfander_measurement_2006,heckel_identification_2013}. Taub\"{o}ck et al.~\cite{taubock_compressive_2010} and Bajwa et al.~\cite{bajwa_learning_2008} considered the identification of LTV systems with spreading function compactly supported in a rectangle of area $\Delta \leq 1$. In \cite{taubock_compressive_2010,bajwa_learning_2008}, the time-frequency shifts lie on a \emph{coarse} grid (i.e., the grid in Section \ref{sec:ongrid}). In our setup, the time-frequency shifts need not lie on a grid and may in principle lie in a rectangle of area $\L = BT$ that is considerably larger than $1$. In a related direction, Herman and Strohmer \cite{herman_high-resolution_2009} (in the context of compressed sensing radar) and Pfander et al.~\cite{pfander_identification_2008} considered the case where the time-frequency shifts lie on a coarse grid (i.e., the grid in Section \ref{sec:ongrid}).
Bajwa et al.~\cite{bajwa_identification_2011} also considered the identification of an LTV system of the form \eqref{eq:iorelintro}. The approach in \cite{bajwa_identification_2011} requires that the time-frequency shifts $(\bar \tau_j,\bar \nu_j)$ lie in a rectangle of area much smaller than $1$ (this is required for a certain approximation in \cite[Eq.~5]{bajwa_identification_2011} to be valid) and, in the worst case, that $(BT)^2\geq c \S$. Neither assumption is required here. In \cite{candes_towards_2014}, Cand\`es and Fernandez-Granda study the recovery of the frequency components of a mixture of $\S$ complex sinusoids from $\L$ equally spaced samples as in \eqref{eq:supres}. As mentioned previously, this corresponds to the case of only time or only frequency shifts. Recently, Fernandez-Granda \cite{fernandez-granda_super-resolution_2015} improved the main result of \cite{candes_towards_2014} by providing a tighter constant in the minimum separation condition. Tang et al.~\cite{tang_compressed_2013} study a related problem, namely the recovery of the frequency components from a random subset of the $\L$ equally spaced samples. Both \cite{candes_towards_2014,tang_compressed_2013} study convex algorithms analogous to the algorithm studied here, and the proof techniques developed in these papers inspired the analysis presented here. In \cite{MSthesis} the author improved the results of \cite{candes_towards_2014} with simpler proofs by building approximate dual polynomials. We believe that one can utilize this result to simplify our proofs and/or remove the random sign assumption. We leave this to future work. Finally, we would like to mention a few recent papers that use algorithms not based on convex optimization for the super-resolution problem \cite{demanet2014recoverability, demanet2013super, fannjiang2011music, liao2014music, moitra2014threshold}.
We should note that some of these approaches can handle a smaller separation compared to convex optimization based approaches, but the stability of these approaches to noise is not well understood, they do not as straightforwardly generalize to higher dimensions, and they often need the model order (e.g., the number of frequencies) as an input parameter. To the best of our knowledge, the approaches in \cite{demanet2014recoverability, demanet2013super, fannjiang2011music, liao2014music, moitra2014threshold} have never been generalized to the more general radar problem as in \eqref{eq:periorel}. \section{Construction of the dual polynomial \label{app:proofmainres}} In this section we prove Theorem \ref{thm:mainres} by constructing a dual polynomial that satisfies the conditions of Proposition \ref{prop:dualmin}. Existence of the dual polynomial is guaranteed by the following proposition, which is the main technical result of this paper. \begin{proposition} Assume that the samples of the probing signal $x_\ell, \ell =-N,...,N$, are i.i.d.~$\mathcal N(0,1/\L)$ random variables and $\L = 2N+1 \geq 1024$. Let $\T = \{\vr_1, \vr_2,...,\vr_\S\} \subset [0,1]^2$ be an arbitrary set of points obeying the minimum separation condition \begin{align} \max(|\tau_j - \tau_{j'}|, |\nu_j - \nu_{j'}| ) \geq \frac{2.38}{N} \text{ for all } [\tau_j, \nu_j], [\tau_{j'}, \nu_{j'}] \in \T \text{ with } j \neq j'. \label{eq:mindistcond} \end{align} Let $\vu \in \{-1,1\}^\S$ be a random vector, whose entries are i.i.d. and uniform on $\{-1,1\}$. Choose $\delta>0$ and assume \[ S \le c\frac{L}{\log^3\left(\frac{L^6}{\delta} \right)} \] where $c$ is a numerical constant.
Then, with probability at least $1-\delta$, there exists a trigonometric polynomial $Q(\vr)$, $\vr = \transp{[\tau,\nu]}$, of the form \begin{align} Q(\vr) = \innerprod{\vect{q}}{\mc F_{\nu} \mc T_{\tau} \vx } = \sum_{p=-N}^{N} \conj{[\mc F_{\nu} \mc T_{\tau} \vx]}_p q_p = \sum_{k,\ell = -N}^{N} \underbrace{ \left( \frac{1}{\L} \conj{x}_\ell \sum_{p=-N}^{N} e^{i2\pi (p-\ell) \frac{k}{\L} } q_p \right) }_{q_{k,\ell} \defeq } e^{-i2\pi (k \tau + \ell \nu)} \label{eq:dualpolyinprop} \end{align} with complex coefficients $\vect{q} = \transp{[ q_{-N}, ..., q_{N}]}$ such that \begin{align} Q(\vr_j) = u_j, \text{ for all } \vr_j \in \T, \text{ and } |Q(\vr)| < 1 \text{ for all } \vr \in [0,1]^2 \setminus \T. \label{eq:intboundcondpro} \end{align} \label{prop:dualpolynomial} \end{proposition} We provide a proof of Proposition \ref{prop:dualpolynomial} by constructing $Q(\vr)$ explicitly. Our construction of the polynomial $Q(\vr)$ is inspired by that in \cite{candes_towards_2014,tang_compressed_2013}. To build $Q(\vr)$ we need to construct a 2D trigonometric polynomial that satisfies \eqref{eq:intboundcondpro}, and whose coefficients $q_{k,\ell}$ are constrained to be of the form \eqref{eq:dualpolyinprop}. It is instructive to first consider the construction of a 2D trigonometric polynomial \begin{align} \bar Q(\vr) = \sum_{k,\ell = -N}^N \bar q_{k,\ell} e^{-i2\pi (k \tau + \ell \nu)} \label{eq:Qtrigonpoly} \end{align} satisfying \eqref{eq:intboundcondpro} without any constraint on the coefficients $\bar q_{k,\ell}$. To this end, we next review the construction in \cite{candes_towards_2014} establishing that there exists a 2D trigonometric polynomial $\bar Q$ satisfying simultaneously the interpolation condition $\bar Q(\vr_j) = u_j, \text{ for all } \vr_j \in \T$, and the boundedness condition $\abs{\bar Q(\vr)} < 1$, for all $\vr \notin \T$, provided the minimum separation condition \eqref{eq:mindistcond} holds.
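The minimum separation condition \eqref{eq:mindistcond} is simple to verify for a given set of points. A small sketch (Python; the function name is ours, and we treat the differences $|\tau_j - \tau_{j'}|$, $|\nu_j - \nu_{j'}|$ as wrap-around distances on $[0,1)$, an assumption on our part that is natural for points on the torus):

```python
import itertools

def satisfies_min_separation(points, N, const=2.38):
    """Check min over pairs of max(|dtau|, |dnu|) >= const/N for points
    in [0,1)^2, with differences taken modulo 1 (wrap-around)."""
    def wrap(a, b):
        d = abs(a - b) % 1.0
        return min(d, 1.0 - d)
    return all(
        max(wrap(t1, t2), wrap(v1, v2)) >= const / N
        for (t1, v1), (t2, v2) in itertools.combinations(points, 2)
    )
```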
In order to construct $\bar Q$, Cand\`es and Fernandez-Granda \cite{candes_towards_2014} interpolate the points $u_j$ with a fast-decaying kernel $\bar G$ and its partial derivatives according to \begin{align} \bar Q(\vr) = \sum_{k=1}^\S \bar \alpha_k \bar G(\vr-\vr_k) + \bar \beta_{1k} \bar G^{(1,0)}(\vr - \vr_k) + \bar\beta_{2k} \bar G^{(0,1)}(\vr - \vr_k). \label{eq:detintpolC} \end{align} Here, $\bar G^{(m,n)}(\vr) \defeq \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} \bar G(\vr)$ and $ \bar G(\vr) \defeq \FK(\tau) \FK(\nu) $ where $\FK(t)$ is the squared Fej\'er kernel defined as \[ \FK(t) \defeq \left( \frac{\sin\left( M \pi t\right)}{M \sin(\pi t)} \right)^4, \quad M \defeq \frac{N}{2}+1. \] For $N$ even, the Fej\'er kernel is a trigonometric polynomial of degree $N/2$. It follows that $\FK(t)$ is a trigonometric polynomial of degree $N$ and can be written as a trigonometric polynomial with coefficients $g_k$ according to \begin{align} \FK(t) = \frac{1}{M} \sum_{k=-N}^{N} g_k e^{i2\pi t k}. \label{eq:def:FejK} \end{align} Since shifted versions of $\FK(t)$ and the derivatives of $\FK(t)$ are also 1D trigonometric polynomials of degree $N$, it follows that the kernel $\bar G$, its partial derivatives, and shifted versions thereof, are 2D trigonometric polynomials of the form \eqref{eq:Qtrigonpoly}. Since $\FK(t)$ decays rapidly around the origin $t=0$ ($\FK(0)=1$), $\bar G(\vr)$ decays rapidly around the origin $\vr = \vect{0}$ as well.
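The coefficients $g_k$ in \eqref{eq:def:FejK} can be obtained numerically by sampling $\FK$ at the $\L$ points $t = j/\L$ and taking a DFT, since a trigonometric polynomial of degree $N$ is determined by $\L = 2N+1$ samples. A sketch in Python/NumPy (the function names are ours):

```python
import numpy as np

def squared_fejer(t, N):
    """F(t) = (sin(M pi t) / (M sin(pi t)))^4 with M = N/2 + 1 (N even)."""
    M = N // 2 + 1
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.ones_like(t)                      # limit value 1 at integer t
    m = ~np.isclose(np.sin(np.pi * t), 0.0)
    out[m] = (np.sin(M * np.pi * t[m]) / (M * np.sin(np.pi * t[m]))) ** 4
    return out

def fejer_coeffs(N):
    """Coefficients g_k, k = -N..N, with F(t) = (1/M) sum_k g_k e^{i 2 pi t k}."""
    L, M = 2 * N + 1, N // 2 + 1
    samples = squared_fejer(np.arange(L) / L, N)
    g = np.real(M / L * np.fft.fft(samples))   # fft bin j holds g_{j mod L}
    return np.roll(g, N)                       # reorder to k = -N, ..., N
```

This avoids working out the closed form of the $g_k$ (a convolution of triangle sequences) by hand.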
To ensure that $\bar Q(\vr)$ reaches local maxima, which is necessary for the interpolation and boundedness conditions to hold simultaneously, the coefficients $\bar \alpha_k, \bar \beta_{1k}$ and $\bar \beta_{2k}$ are chosen in a specific way guaranteeing that \begin{align} \bar Q(\vr_k) = u_k, \quad \bar Q^{(1,0)}(\vr_k) = 0, \text{ and } \bar Q^{(0,1)}(\vr_k) = 0, \text{ for all } \vr_k \in \T, \label{eq:condQinterpolv} \end{align} where $\bar Q^{(m,n)}(\vr) \defeq \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} \bar Q(\vr)$. The idea of this construction is to interpolate the $u_j$ with the functions $\bar G(\vr - \vr_j)$ (the $\bar \alpha_j$ are close to the $u_j$) and to slightly adapt this interpolation near the $\vr_j$ with the functions $\bar G^{(1,0)}(\vr - \vr_j)$, $\bar G^{(0,1)}(\vr - \vr_j)$ to ensure that local maxima are achieved at the $\vr_j$ (the $\bar \beta_{1j}, \bar \beta_{2j}$ are in fact very small). The key properties of the interpolating functions used in this construction are that $\bar G(\vr - \vr_j)$ decays fast around $\vr_j$, as this enables a good constant in the minimum separation condition \eqref{eq:mindistcond}, and that the ``correction'' functions $\bar G^{(1,0)}(\vr - \vr_j)$, $\bar G^{(0,1)}(\vr - \vr_j)$ are small at $\vr_j$, but sufficiently large in a small region relatively close to $\vr_j$, and decay fast far from $\vr_j$ as well. A first difficulty with generalizing this idea to the case where the coefficients $q_{k,\ell}$ of the 2D trigonometric polynomial have the special form \eqref{eq:dualpolyinprop} is this: Since $\vx$ is a random vector, our interpolation and correction functions are naturally non-deterministic, and thus showing the analogue of \eqref{eq:condQinterpolv}, namely \eqref{eq:interpcondQ}, requires a probabilistic analysis. Specifically, we will use concentration of measure results.
A second difficulty is that interpolating the points $u_j$ with shifted versions of a \emph{single} function will not work, as shifted versions of a function of the special form \eqref{eq:dualpolyinprop}, playing the role of $\bar G$, are in general not of the form \eqref{eq:dualpolyinprop} (the time and frequency shift operators $\mc T_\tau$ and $\mc F_\nu$ do not commute). As a result, we have to work with different interpolating functions for different $\vr_j$'s. These functions reach their maxima at or close to the $\vr_j$'s. A third difficulty is that we cannot simply use the derivatives of our interpolation functions as ``correction'' functions, because the derivatives of a polynomial of the form \eqref{eq:dualpolyinprop} are in general not of the form \eqref{eq:dualpolyinprop}. We will construct the polynomial $Q(\vr)$ by interpolating the points $(\vr_k, u_k)$ with functions $G_{(m,n)}(\vr,\vr_k),\allowbreak m,n =0,1$ that have the form \eqref{eq:dualpolyinprop}: \begin{align} Q(\vr) = \sum_{k=1}^\S \alpha_k G_{(0,0)}(\vr,\vr_k) + \beta_{1k} G_{(1,0)}(\vr,\vr_k) + \beta_{2k} G_{(0,1)}(\vr,\vr_k). \label{eq:dualpolyorig} \end{align} Choosing the functions $G_{(m,n)}(\vr,\vr_k)$ in \eqref{eq:dualpolyorig} to be of the form \eqref{eq:dualpolyinprop} ensures that $Q(\vr)$ itself is of the form \eqref{eq:dualpolyinprop}. We will show that, with high probability, there exists a choice of coefficients $\alpha_k, \beta_{1k}, \beta_{2k}$ such that \begin{align} Q(\vr_k) = u_k, \quad Q^{(1,0)}(\vr_k) = 0, \text{ and } Q^{(0,1)}(\vr_k) = 0, \text{ for all } \vr_k \in \T, \quad \label{eq:interpcondQ} \end{align} where $Q^{(m,n)}(\vr) \defeq \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} Q(\vr)$. This ensures that $Q(\vr)$ reaches local maxima at the $\vr_k$. We will then show that with this choice of coefficients, the resulting polynomial satisfies $|Q(\vr)| < 1$ for all $\vr \notin \T$.
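The interpolation step can be illustrated in the simpler deterministic setting of \eqref{eq:detintpolC}: the conditions \eqref{eq:condQinterpolv} form a $3\S \times 3\S$ linear system in the coefficients $\bar\alpha_k, \bar\beta_{1k}, \bar\beta_{2k}$. A Python/NumPy toy sketch (our own illustration; it uses central finite differences for the partial derivatives rather than the exact derivative formulas, and only verifies the interpolation conditions, not the global bound):

```python
import numpy as np

def fk(t, N):
    """Squared Fejer kernel with M = N/2 + 1 (N even)."""
    M = N // 2 + 1
    s = np.sin(np.pi * np.asarray(t, dtype=float))
    safe = np.where(np.isclose(s, 0.0), 1.0, s)
    return np.where(np.isclose(s, 0.0), 1.0,
                    (np.sin(M * np.pi * np.asarray(t, dtype=float)) / (M * safe)) ** 4)

def build_barQ(points, u, N, h=1e-4):
    """Solve the conditions barQ(r_k) = u_k, vanishing first partials, for the
    deterministic kernel barG(r) = F(tau) F(nu); returns barQ as a callable."""
    pts = np.asarray(points, dtype=float)

    def basis(r):
        # values at r of barG(. - r_k) and its two first partials, for all k
        vals = []
        for (tk, vk) in pts:
            vals.append(float(fk(r[0] - tk, N) * fk(r[1] - vk, N)))
        for (tk, vk) in pts:
            vals.append(float((fk(r[0] - tk + h, N) - fk(r[0] - tk - h, N))
                              / (2 * h) * fk(r[1] - vk, N)))
        for (tk, vk) in pts:
            vals.append(float(fk(r[0] - tk, N)
                              * (fk(r[1] - vk + h, N) - fk(r[1] - vk - h, N)) / (2 * h)))
        return np.array(vals)

    A, b = [], []
    for i, r in enumerate(pts):          # interpolation: barQ(r_k) = u_k
        A.append(basis(r)); b.append(u[i])
    e1, e2 = np.array([h, 0.0]), np.array([0.0, h])
    for r in pts:                        # stationarity: both partials vanish
        A.append((basis(r + e1) - basis(r - e1)) / (2 * h)); b.append(0.0)
        A.append((basis(r + e2) - basis(r - e2)) / (2 * h)); b.append(0.0)
    c = np.linalg.solve(np.array(A), np.array(b))
    return lambda r: float(basis(np.asarray(r, dtype=float)) @ c)
```

The coefficient vector returned by the solve is dominated by the $\bar\alpha_k$ entries, consistent with the remark above that the $\bar\beta$'s are very small for well-separated points.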
Now that we have stated our general strategy, we turn our attention to the construction of the interpolating and correction functions $G_{(m,n)}(\vr,\vr_k)$. We will choose $G_{(0,0)}(\vr,\vr_k)$ such that its peak is close to $\vr_k$ and it decays fast around $\vr_k$. For $G_{(0,0)}(\vr,\vr_k)$ to have the form \eqref{eq:dualpolyinprop} it must be random (due to $\vx$); we will show that the properties just mentioned are satisfied with high probability. We start by recalling (cf.~Section \ref{sec:sdprel}) \[ \mc F_\nu \mc T_\tau \vx = \vect{G} \herm{\vect{F}} \vf(\vr), \] where $\vect{G}$ is the Gabor matrix (cf.~\eqref{eq:defgabormtx}). Here, $\herm{\vect{F}}$ is the inverse 2D discrete Fourier transform matrix with the entry in the $(k,\ell)$-th row and $(r,q)$-th column given by $[\herm{\vect{F}}]_{(k,\ell), (r,q)} \defeq \frac{1}{\L^2} e^{i2\pi \frac{qk + r\ell}{\L}}$ and $[\vf(\vr)]_{(r,q)} = e^{-i2\pi (r\tau + q \nu)}$ with $k, \ell, q, r = -N, ..., N$. Next, define the vector $\vg_{(m,n)}(\vr_j) \in \complexset^{\L^2}$ as \[ [\vg_{(m,n)}(\vr_j)]_{(r,q)} = g_r g_q e^{-i2\pi(\tau_j r + \nu_j q)} (i2\pi r)^m (i2\pi q )^n, \quad r,q=-N,...,N,\quad \vr_j = \transp{[\tau_j,\nu_j]}. \] Here, the $g_r$ are the coefficients of the squared Fej\'er kernel in \eqref{eq:def:FejK}. With this notation, we define \begin{align} G_{(m,n)}(\vr, \vr_j) &\defeq \frac{\L^2}{M^2} \innerprod{\vect{G} \herm{\vect{F}} \vg_{(m,n)}(\vr_j) }{\mc F_\nu \mc T_\tau \vx } \nonumber \\ &= \frac{\L^2}{M^2} \herm{\vf}(\vr) \vect{F} \herm{\vect{G}} \vect{G} \herm{\vect{F}} \vg_{(m,n)}(\vr_j). \label{eq:defkernelG} \end{align} By identifying $\vect{q}$ in \eqref{eq:dualpolyinprop} with $\vect{G} \herm{\vect{F}} \vg_{(m,n)}(\vr_j)$, we immediately see that $G_{(m,n)}(\vr, \vr_j)$, and in turn $Q(\vr)$, has the form \eqref{eq:dualpolyinprop}, as desired.
The particular choice of $G_{(m,n)}(\vr, \vr_j)$ is motivated by the fact---made precise later---that $G_{(m,n)}(\vr, \vr_j)$ concentrates around the deterministic function $\bar G^{(m,n)}(\vr - \vr_j)$ of \eqref{eq:detintpolC}. Thus, $G_{(0,0)}(\vr,\vr_j)$ decays rapidly around $\vr_j$ with high probability. To demonstrate this empirically, in Figure \ref{fig:GbarG} we plot $\bar G(\vr)$ and $G_{(0,0)}(\vr,\vect{0})/ G_{(0,0)}(\vect{0},\vect{0})$ for $N = 60$ and $N=300$. Note that close to $\vr= \vect{0}$, the random function $G_{(0,0)}(\vr,\vect{0})$ and the deterministic kernel $\bar G(\vr)$ are very close. A simple calculation shows that the expected value of $G_{(m,n)}(\vr, \vr_j)$ with respect to $\mathbf{x}$ is equal to $\bar G^{(m,n)}(\vr - \vr_j)$. Specifically, as shown later in Section \ref{sec:concstep1}, $\EX{\herm{\vect{G}} \vect{G}} = \vect{I}$. This immediately implies that \begin{align} \EX{G_{(m,n)}(\vr,\vr_j)} &= \frac{1}{M^2} \herm{\vf}(\vr) \vg_{(m,n)}(\vr_j) \nonumber \\ &= \frac{1}{M^2} \sum_{r,q=-N}^N e^{i2\pi (r\tau + q \nu )} g_r g_q e^{-i2\pi(\tau_j r + \nu_j q)} (i2\pi r)^m (i2\pi q)^n \nonumber \\ &= \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} \frac{1}{M^2} \sum_{r,q=-N}^N e^{i2\pi( r (\tau - \tau_j) + q (\nu - \nu_j) )} g_r g_q \nonumber \\ &= \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} \FK(\tau-\tau_j) \FK(\nu - \nu_j) \nonumber \\ &= \bar G^{(m,n)}(\vr-\vr_j). \label{eq:expGmn} \end{align} \begin{figure} \centering \includegraphics{./f5.pdf} \caption{\label{fig:GbarG} Plots of the random kernel $G_{(0,0)}(\vr,\vect{0}) / G_{(0,0)}(\vect{0},\vect{0})$ along with the deterministic kernel $\bar G(\vr)$.} \end{figure} The remainder of the proof is organized as follows.
\begin{description} \item[Step 1:] \label{it:step1} We will show that for every $\vr \in [0,1]^2$ the function $G_{(0,0)}(\vr,\vr_j)$ is close to $\bar G(\vr-\vr_j)$ with high probability, i.e., $|G_{(0,0)}(\vr,\vr_j) - \bar G(\vr-\vr_j)|$ is small. \item[Step 2:]\label{it:step2} We will then show that for a randomly chosen $\vx$, with high probability, there exists a specific choice of coefficients $\alpha_k, \beta_{1k}, \beta_{2k}$ guaranteeing that \eqref{eq:interpcondQ} is satisfied. \item[Step 3:]\label{it:step3} We conclude the proof by showing that with the coefficients chosen as in Step 2, $\abs{Q(\vr)} < 1$ with high probability uniformly for all $\vr \notin \T$. This is accomplished using an $\epsilon$-net argument. \begin{description} \item[Step 3a:]\label{it:step3a} Let $\Omega \subset [0,1]^2$ be a (finite) set of grid points. For every $\vr\in \Omega$, we show that $Q(\vr)$ is ``close'' to $\bar Q(\vr)$ with high probability. \item[Step 3b:]\label{it:step3b} We use Bernstein's polynomial inequality to conclude that this result holds with high probability uniformly for all $\vr \in [0,1]^2$. \item[Step 3c:]\label{it:step3c} Finally, we combine this result with a result in \cite{candes_towards_2014} that shows that $\abs{\bar Q(\vr)} < 1$ for all $\vr \notin \T$ to conclude that $\abs{Q(\vr)} < 1$ holds with high probability uniformly for all $\vr \notin \T$. \end{description} \end{description} \subsection{Step 1: Concentration of $G_{(0,0)}(\vr,\vr_j)$ around $\bar G(\vr-\vr_j)$} \label{sec:concstep1} In this subsection we establish the following result. \begin{lemma} Let $G_{(m',n')}^{(m,n)}(\vr, \vr_j) = \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} G_{(m',n')}(\vr, \vr_j) $ and fix $\vr, \vr_j \in [0,1]^2$. 
For all $\alpha \geq 0$, and for all nonnegative integers $m, m',n,n'$ with $m + m'+n + n' \leq 4$, \begin{align} &\PR{ \frac{1}{\kappa^{m+m'+n+n'}} | G^{(m,n)}_{(m',n')}(\vr, \vr_j) - \bar G^{(m + m',n + n')}(\vr - \vr_j) | > c_1 12^{\frac{m + m'+n+ n'}{2}} \frac{\alpha}{\sqrt{\L}} } \nonumber \\ &\hspace{8cm}\leq 2 \exp\left( - c \min\left( \frac{ \alpha^2}{c_2^4 }, \frac{ \alpha}{ c_2^2 }\right) \right), \label{eq:polydev} \end{align} where $\kappa \defeq \sqrt{|\FK''(0)|}$ and $c,c_1,c_2$ are numerical constants. \label{lem:polybo} \end{lemma} To this aim, first note that, by the definition of $G_{(m',n')}(\vr, \vr_j)$ in \eqref{eq:defkernelG} we have \[ G_{(m',n')}^{(m,n)}(\vr, \vr_j) = \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} G_{(m',n')}(\vr, \vr_j) = \frac{\L^2}{M^2} \herm{(\vf^{(m,n)}(\vr))} \vect{F} \herm{\vect{G}} \vect{G} \herm{\vect{F}} \vg_{(m',n')}(\vr_j), \] where for $r, q= -N, ..., N$ we define $[\vf^{(m,n)}(\vr)]_{(r,q)} \defeq (-i2\pi r)^m (-i2\pi q)^n e^{-i2\pi (r\tau + q \nu)}$. Lemma~\ref{lem:polybo} is now proven in two steps. First, we show that $\EX{\herm{\vect{G}} \vect{G}} = \vect{I}$. From this, following calculations similar to \eqref{eq:expGmn}, we obtain \begin{align} \EX{ G^{(m,n)}_{(m',n')}(\vr, \vr_j)} = \bar G^{(m + m',n + n')}(\vr - \vr_j). \label{eq:expGmnGen} \end{align} Second, we express $G^{(m,n)}_{(m',n')}(\vr,\vr_j)$ as a quadratic form in $\mathbf{x} \defeq \transp{[x_{-N},...,x_{N}]}$, and utilize the Hanson-Wright inequality stated in the lemma below to show that $G^{(m,n)}_{(m',n')}(\vr,\vr_j)$ does not deviate too much from its expected value $\bar G^{(m + m',n + n')}(\vr - \vr_j)$.
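As an aside, the concentration phenomenon exploited here is easy to observe numerically. The following hedged Python sketch uses an arbitrary dense matrix in place of the structured matrix $\vect{V}^{(m,n)}_{(m',n')}$ from the proof, and checks that a quadratic form $\transp{\vx}\vect{V}\vx$ in a vector with i.i.d.\ $\mathcal N(0,1/L)$ entries clusters around its mean $\operatorname{tr}(\vect{V})/L$.

```python
import numpy as np

# Empirical look at the concentration of a quadratic form x^T V x around its
# mean -- the mechanism that the Hanson-Wright inequality quantifies.  V here
# is an arbitrary stand-in, not the structured matrix from the proof.
rng = np.random.default_rng(2)
L, T = 101, 2000
V = rng.standard_normal((L, L))
x = rng.standard_normal((T, L)) / np.sqrt(L)   # rows: i.i.d. N(0, 1/L) vectors
q = np.sum((x @ V) * x, axis=1)                # x^T V x, one value per trial
assert abs(q.mean() - np.trace(V) / L) < 0.5   # empirical mean matches tr(V)/L
```

The loose tolerance is deliberate: the point is only that the empirical mean sits near $\operatorname{tr}(\vect{V})/L$ while individual trials fluctuate on a much larger scale.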
\begin{lemma}[Hanson-Wright inequality {\cite[Thm.~1.1]{rudelson_hanson-wright_2013}}] Let $\mathbf{x} \in \mathbb R^L$ be a random vector with independent zero-mean $K$-sub-Gaussian entries (i.e., the entries obey $\sup_{p\geq 1} p^{-1/2} (\EX{|x_\ell|^p})^{1/p} \leq K$), and let $\vect{V}$ be an $L\times L$ matrix. Then, for all $t\geq 0$, \[ \PR{ | \transp{\mathbf{x}} \vect{V} \mathbf{x} - \EX{\transp{\mathbf{x}} \vect{V} \mathbf{x}} | > t } \leq 2 \exp\left( - c \min\left( \frac{t^2}{K^4 \norm[F]{\vect{V}}^2 }, \frac{t}{K^2 \norm[\opnormss]{\vect{V}} }\right) \right) \] where $c$ is a numerical constant. \label{thm:hanswright} \end{lemma} We first establish $\EX{\herm{\vect{G}} \vect{G}} = \vect{I}$. By definition of the Gabor matrix in \eqref{eq:defgabormtx}, the entry in the $(k,\ell)$-th row and $(k',\ell')$-th column of $\herm{\vect{G}} \vect{G}$ is given by \[ [\herm{\vect{G}} \vect{G}]_{(k,\ell), (k',\ell')} = \sum_{p=-N}^N \conj{x}_{p-\ell} x_{p-\ell'} e^{-i2\pi \frac{kp}{\L}} e^{i2\pi \frac{k'p}{\L}}. \] Noting that $\EX{x_\ell} = 0$, we conclude that $\EX{[\herm{\vect{G}} \vect{G}]_{(k,\ell), (k',\ell')}} = 0$ for $\ell \neq \ell'$. For $\ell = \ell'$, using the fact that $\EX{\conj{x}_{p-\ell} x_{p-\ell}} = 1/\L$, we arrive at \[ \EX{ [\herm{\vect{G}} \vect{G}]_{(k,\ell), (k',\ell')} } = \frac{1}{\L} \sum_{p=-N}^N e^{i2\pi \frac{(k' - k )p}{\L}}. \] The latter is equal to $1$ for $k = k'$ and $0$ otherwise. This concludes the proof of $\EX{\herm{\vect{G}} \vect{G}} = \vect{I}$. We now turn our attention to the concentration part of the argument, where we express $G^{(m,n)}_{(m',n')}(\vr,\vr_j)$ as a quadratic form in $\mathbf{x}$ and apply Lemma~\ref{thm:hanswright}.
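The identity $\EX{\herm{\vect{G}} \vect{G}} = \vect{I}$ just established can also be checked by simulation. The Python sketch below is illustrative only: the indexing convention $[\vect{G}]_{p,(k,\ell)} = x_{p-\ell}\, e^{i2\pi kp/\L}$ (read off from the display above) and the real Gaussian model for $\vx$ are assumptions of the illustration.

```python
import numpy as np

# Monte Carlo check of E[G^H G] = I for a Gabor-type matrix built from an
# i.i.d. N(0, 1/L) sequence x.  Assumed indexing convention:
#   [G]_{p,(k,l)} = x_{p-l} * exp(i 2 pi k p / L),  all indices mod L.
rng = np.random.default_rng(3)
N = 3
L = 2 * N + 1
idx = np.arange(-N, N + 1)
p, k, l = np.meshgrid(idx, idx, idx, indexing="ij")   # axes: (p, k, l)
phase = np.exp(2j * np.pi * k * p / L)
shift = (p - l) % L                                   # positions of x_{p-l}

T = 10_000
acc = np.zeros((L * L, L * L), dtype=complex)
for _ in range(T):
    x = rng.standard_normal(L) / np.sqrt(L)           # one draw of the sequence
    G = (x[shift] * phase).reshape(L, L * L)          # rows p, columns (k, l)
    acc += G.conj().T @ G
avg = acc / T
assert np.max(np.abs(avg - np.eye(L * L))) < 0.05     # averaged Gram close to I
```

With $L = 7$ and $10^4$ trials the entrywise Monte Carlo error is a few times $10^{-3}$, so the averaged Gram matrix is visibly close to the identity.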
To this end, first note that \begin{align} [L \vect{G} \herm{\vect{F}} \vg_{(m',n')}(\vr_j) ]_p &= \frac{1}{\L} \sum_{k,\ell=-N}^N \left( \sum_{r,q=-N}^N g_r g_q e^{-i2\pi(\tau_j r + \nu_j q)} (i2\pi r)^{m'} (i2\pi q)^{n'} e^{i2\pi \frac{q k + r\ell }{\L}} \right) x_{p-\ell} e^{i2\pi \frac{kp}{\L}} \nonumber \\ &= \sum_{\ell=-N}^N x_\ell \sum_{r=-N}^N e^{i2\pi \frac{r(p-\ell)}{\L}} g_r g_p e^{-i2\pi(\tau_j r - \nu_j p)} (i2\pi r)^{m'} (-i2\pi p)^{n'} \label{eq:GFHgs}, \end{align} where we used that $\frac{1}{\L} \sum_{k=-N}^N e^{i2\pi \frac{k (p+q)}{\L}}$ is equal to $1$ if $p= -q$ and equal to $0$ otherwise, together with the fact that $x_\ell$ is $\L$-periodic. We next write \eqref{eq:GFHgs} in matrix-vector form. For ease of presentation, we define the matrix $\mA( \vg_{(m',n')}(\vr_j)) \in \complexset^{\L\times \L}$ (note that $\mA$ is a function of $\vg_{(m',n')}(\vr_j)$) by \[ [\mA( \vg_{(m',n')}(\vr_j) )]_{p,\ell} \defeq \sum_{k=-N}^{N} e^{i2\pi\frac{k(p-\ell)}{L}} g_k g_p e^{-i2\pi(\tau_j k - \nu_j p)} (i2\pi k)^{m'} (-i2\pi p)^{n'}. \] Utilizing this shorthand, writing \eqref{eq:GFHgs} in matrix-vector form yields \[ L \vect{G} \herm{\vect{F}} \vg_{(m',n')}(\vr_j) = \mA( \vg_{(m',n')}(\vr_j) ) \vx, \] where $\vx = \transp{[x_{-N}, ..., x_{N}]}$. Analogously to \eqref{eq:GFHgs}, we have \begin{align} [L \vect{G} \herm{\vect{F}} \vf^{(m,n)}(\vr)]_{p} = \sum_{\ell=-N}^{N}x_{\ell} \sum_{k=-N}^{N} e^{i2\pi\frac{k(p-\ell)}{L}} e^{-i2\pi(k \tau - p \nu)} (-i2\pi k)^{m} (i2\pi p)^{n} .
\label{eq:GFhfrm} \end{align} Defining the matrix $\herm{\mA}(\vf^{(m,n)}(\vr))\in \complexset^{L\times L}$ by \[ [\herm{\mA}(\vf^{(m,n)}(\vr)) ]_{\tilde \ell, p} = \sum_{\tilde k=-N}^{N} e^{-i2\pi\frac{\tilde k ( p - \tilde \ell)}{L}} e^{i2\pi(\tilde k \tau - p \nu)} (i2\pi \tilde k)^{m} (-i2\pi p)^{n} \] allows us to express \eqref{eq:GFhfrm} in matrix-vector form according to \[ \L\,\herm{(\vf^{(m,n)}(\vr))} \vect{F} \herm{\vect{G}} = \herm{\mathbf{x}} \herm{\mA}(\vf^{(m,n)}(\vr)). \] This allows us to represent $G^{(m,n)}_{(m',n')}(\vr, \vr_j)$ in the desired quadratic form \begin{align} G^{(m,n)}_{(m',n')}(\vr, \vr_j) = \frac{\L^2}{M^2} \herm{(\vf^{(m,n)}(\vr))} \vect{F} \herm{\vect{G}} \vect{G} \herm{\vect{F}} \vg_{(m',n')}(\vr_j) = \herm{\mathbf{x}} \underbrace{\frac{1}{M^2} \herm{\mA}(\vf^{(m,n)}(\vr)) \mA(\vg_{(m',n')}(\vr_j)) }_{\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j) \defeq} \mathbf{x}, \label{eq:gmnqform} \end{align} where \begin{align*} &[\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)]_{\tilde \ell, \ell} \\ &\hspace{0.7cm} =\frac{1}{M^2} \sum_{p,k,\tilde k=-N}^{N} e^{i2\pi\frac{(p - \ell )k}{L}} e^{-i2\pi\frac{(p - \tilde \ell)\tilde k}{L}} e^{i2\pi(\tilde k \tau - p (\nu - \nu_j) - k \tau_j )} g_p g_k (i2\pi \tilde k)^m (i2\pi k)^{m'} (-i2\pi p)^{n+n'} . \end{align*} In order to evaluate the RHS of the Hanson-Wright inequality, we will need the following upper bound on $\norm[F]{\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)}$. We defer the proof to Section \ref{seclem:boundonVFnorm}. \begin{lemma} For all $\vr$ and $\vr_j$, and for all non-negative $m, m',n,n'$ with $m + m'+n+n' \leq 4$, \begin{equation} \norm[F]{\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)} \leq c_1 (2\pi N)^{m + m' + n +n'} \sqrt{L}. \label{eq:boundonVFnorm} \end{equation} \label{lem:boundonVFnorm} \end{lemma} We are now ready to establish Lemma \ref{lem:polybo} by applying the Hanson-Wright inequality. 
To this end note that using $\kappa=\sqrt{|\FK''(0)|} = \sqrt{\frac{\pi^2}{3}(N^2+4N)}$ and utilizing \cite[Eq.~2.23]{candes_towards_2014} we have \[ \frac{(2\pi N)^{m}}{\kappa^{m}} = \frac{(2\pi N)^{m}}{( \frac{\pi^2}{3} (N^2 + 4 N))^{m/2}} \leq 12^{\frac{m}{2}} . \] Setting $\vect{V} \defeq \vect{V}^{(m,n)}_{(m',n')}(\vr, \vr_j)$ for ease of presentation, we have \begin{align} &\hspace{-1cm} \PR{ \frac{1}{\kappa^{m+m'+n+n'}} | G^{(m,n)}_{(m',n')}(\vr, \vr_j) - \bar G^{(m + m',n + n')}(\vr - \vr_j) | > c_1 12^{\frac{m + m'+n+ n'}{2}} \frac{\alpha}{\sqrt{\L}} } \nonumber \\ &\leq \PR{ | G^{(m,n)}_{(m',n')}(\vr, \vr_j) - \bar G^{(m + m',n + n')}(\vr - \vr_j) | > c_1 (2\pi N)^{m + m'+n +n'} \frac{\alpha}{\sqrt{\L}} } \nonumber \\ &\leq \PR{ | \transp{\mathbf{x}} \vect{V} \mathbf{x} - \EX{\transp{\mathbf{x}} \vect{V} \mathbf{x}} | > \norm[F]{ \vect{V}} \frac{\alpha}{\L} } \label{eq:uselemboundvfnor} \\ &\leq 2 \exp\left( - c \min\left( \frac{\norm[F]{\vect{V} }^2 \alpha^2}{\L^2 K^4 \norm[F]{\vect{V} }^2 }, \frac{\norm[F]{\vect{V}} \alpha}{ \L K^2 \norm[\opnormss]{\vect{V}} }\right) \right) \label{eq:usehansonwr} \\ &\leq 2 \exp\left( - c \min\left( \frac{ \alpha^2}{c_2^4 }, \frac{ \alpha}{ c_2^2 }\right) \right). \label{eq:simphanswrer} \end{align} Here, \eqref{eq:uselemboundvfnor} follows from~\eqref{eq:gmnqform} and~\eqref{eq:boundonVFnorm}, together with the fact that $\EX{\herm{\mathbf{x}} \vect{V} \mathbf{x}} =\EX{G^{(m,n)}_{(m',n')}(\vr, \vr_j)} = \bar G^{(m+m',n+n')}(\vr - \vr_j)$ (cf.~\eqref{eq:expGmnGen}). To obtain \eqref{eq:usehansonwr}, we used Lemma \ref{thm:hanswright} with $t=\norm[F]{\vect{V}} \frac{\alpha}{\L}$. Finally, \eqref{eq:simphanswrer} holds because the sub-Gaussian parameter $K$ of the random variable $[\mathbf{x}]_\ell \sim \mathcal N(0,1/\L)$ is given by $K = c_2/\sqrt{\L}$ (e.g., \cite[Ex.~5.8]{vershynin_introduction_2012}) and $\norm[F]{\vect{V} }/\norm[\opnormss]{\vect{V} } \geq 1$.
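For completeness, the elementary bound $\frac{(2\pi N)^m}{\kappa^m} \leq 12^{m/2}$ invoked at the beginning of this argument can be verified directly; only the value $\kappa^2 = \frac{\pi^2}{3}(N^2+4N)$ is used, and no new ingredients are assumed:
\[
\frac{(2\pi N)^2}{\kappa^2} = \frac{4\pi^2 N^2}{\frac{\pi^2}{3}(N^2 + 4N)} = \frac{12 N^2}{N^2 + 4N} = \frac{12 N}{N + 4} \leq 12,
\]
so $\frac{2\pi N}{\kappa} \leq \sqrt{12}$, and raising both sides to the $m$-th power yields the claim.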
\subsubsection{Proof of Lemma \ref{lem:boundonVFnorm}:} \label{seclem:boundonVFnorm} We start by upper-bounding $|[\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)]_{\tilde \ell, \ell}|$. By definition of $\FK(t)$ (cf.~\eqref{eq:def:FejK}) \begin{align*} &[\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)]_{\tilde \ell, \ell} \nonumber \\ &= \sum_{p=-N}^{N} \left(\frac{1}{M} \sum_{k=-N}^{N} g_k (i2\pi k)^{m'} e^{i2\pi\left( \frac{p - \ell }{L} - \tau_j \right)k} \right) \left( \frac{1}{M} \sum_{\tilde k=-N}^{N} (i2\pi \tilde k)^{m} e^{-i2\pi \left( \frac{p - \tilde \ell}{L} - \tau \right)\tilde k } \right) \cdot \nonumber \\ &\hspace{9.5cm} \cdot g_p (-i2\pi p)^{n+n'} e^{-i2\pi (\nu - \nu_j)p} \nonumber \\ &= \sum_{p=-N}^{N} \FK^{(m')} \left( \frac{p-\ell}{L} - \tau_j \right) \left( \frac{1}{M} \sum_{\tilde k=-N}^{N} (i2\pi \tilde k)^{m} e^{-i2\pi \left( \frac{p - \tilde \ell}{L} - \tau \right)\tilde k } \right) g_p (-i2\pi p)^{n+n'} e^{-i2\pi (\nu - \nu_j)p},\nonumber \\ \end{align*} where $\FK^{(m)}(t) \defeq \frac{d^m }{ dt^m} \FK(t)$. Since $|g_p| \leq 1$ holds for all $p$, we obtain \begin{align} |[\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)]_{\tilde \ell, \ell}| &\leq (2\pi N)^{n+n'} \sum_{p=-N}^{N} \left| \FK^{(m')} \left( \frac{p-\ell}{L} - \tau_j \right) \right| \left| \frac{1}{M} \sum_{\tilde k=-N}^{N} (-i2\pi \tilde k)^m e^{i2\pi\left(\frac{p - \tilde \ell}{L} - \tau \right) \tilde k } \right| \nonumber \\ &= (2\pi N)^{n+n'} \sum_{p=-N}^{N} \left| \FK^{(m')} \left( \frac{p}{L} +s/\L - \tau_j \right) \right| \left| \frac{1}{M} \sum_{\tilde k=-N}^{N} (-i2\pi \tilde k)^m e^{i2\pi\left(\frac{p + \ell + s - \tilde \ell}{L} - \tau \right) \tilde k } \right| , \label{mahlabel} \end{align} where we choose $s$ as the integer minimizing $|s/\L - \tau_j|$ and used the fact that the absolute values in the sum above are $L$-periodic in $p$ (recall that $\FK(t)$ is $1$-periodic). We proceed by upper-bounding $|\FK^{(m)}(t)|$.
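Before carrying out the bound, a quick numerical sanity check of the basic decay estimate is possible. The Python sketch below assumes the closed form $\FK(t) = \big(\sin(M\pi t)/(M\sin \pi t)\big)^4$ of the squared Fej\'er kernel together with the relation $2M = N+2$ (both are assumptions of this illustration), and verifies the $m=0$ case $\FK(t) \leq \min\big(1, (2Mt)^{-4}\big)$ on $(0,1/2]$.

```python
import numpy as np

# Check the m = 0 decay bound F(t) <= min(1, 1/(2Mt)^4) for the squared Fejer
# kernel F(t) = (sin(M*pi*t) / (M*sin(pi*t)))^4 on (0, 1/2].  The closed form
# and the relation 2M = N + 2 are assumptions of this illustration.
N = 32                    # even, so that M = N/2 + 1 is an integer
M = N // 2 + 1

t = np.linspace(1e-4, 0.5, 10_000)
F = (np.sin(M * np.pi * t) / (M * np.sin(np.pi * t))) ** 4
bound = np.minimum(1.0, 1.0 / (2 * M * t) ** 4)
assert np.all(F <= bound + 1e-12)   # decay bound holds on the whole grid
assert F[0] > 0.99                  # and F(t) -> 1 as t -> 0
```

The bound here holds with constant $1$ because $|\sin(M\pi t)| \leq M|\sin(\pi t)|$ and $\sin(\pi t) \geq 2t$ on $[0,1/2]$; the derivative bounds used next carry additional numerical constants.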
To this aim, we use Bernstein's polynomial inequality (cf.~Proposition \ref{prop:bernstein} below) to conclude that \begin{align} \sup_{t} \left| \FK^{(m)}(t) \right| \leq (2 \pi N)^{m} \sup_{t} |\FK(t)| = (2 \pi N)^{m}. \label{eq:Fmtb1} \end{align} Also note that, from \cite[Lem.~2.6]{candes_towards_2014} we know that for $|t| \in [1/(2N), 1/2]$, there exists a numerical constant $\tilde c$ such that \begin{align} |\FK^{(m)}(t)| \leq \tilde c (2\pi N)^m \frac{1}{(2Mt)^4}. \label{eq:Fmtb2} \end{align} Combining \eqref{eq:Fmtb1} and \eqref{eq:Fmtb2} we arrive at \begin{align} |\FK^{(m)}(t)| \leq H^{(m)}(t) \defeq \bar c (2\pi N)^{m} \min \left( 1, \frac{1}{(2M t)^4 } \right). \label{eq:boundderrFej} \end{align} Utilizing the latter inequality we have \begin{align} \left| \FK^{(m)} \left( \frac{p}{L} + s/\L - \tau_j \right) \right| &\leq H^{(m)}\left( \frac{p}{L} + s/\L - \tau_j \right) \nonumber \\ &\leq c' H^{(m)}\left( \frac{p}{L}\right) \label{eq:usesdef} \\ &\leq c' \bar c (2\pi N)^{m} \min \left( 1, \frac{1}{(2Mp /L )^4 } \right) \label{eq:uaseeq:boundderrFej}\\ &\leq c' \bar c (2\pi N)^{m} \min \left( 1, \frac{16}{p^4 } \right) \leq 16 c' \bar c (2\pi N)^{m} \min \left( 1, \frac{1}{p^4 } \right). \label{eq:stdnqwml} \end{align} Here, \eqref{eq:usesdef} holds because when $s$ is the integer minimizing $|s/\L - \tau_j|$, we have that $|s/\L - \tau_j|\leq 1/(2L)$. Therefore, for all $p$ with $|p|>0$, $2M ( p/\L + s/\L - \tau_j )$ is within a constant factor of $2M p/L$, which proves that \eqref{eq:usesdef} holds for a numerical constant $c'$. To obtain \eqref{eq:uaseeq:boundderrFej} we used \eqref{eq:boundderrFej}. Finally, \eqref{eq:stdnqwml} follows from $\frac{\L}{2M} = \frac{2N+1}{N+2} < 2$. Plugging \eqref{eq:stdnqwml} into \eqref{mahlabel} we obtain \begin{align} &|[\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)]_{\tilde \ell, \ell}| \nonumber \\ &\hspace{0.7cm}\leq (2\pi N)^{m+m'+n+n'} \underbrace{ \hat c (2\pi N)^{-m} \!\!
\sum_{p=-N}^{N} \min\left(1, \frac{1}{p^4} \right) \left| \frac{1}{M} \sum_{\tilde k=-N}^{N} (-i2\pi \tilde k)^m e^{i2\pi\left(\frac{ p + s+ \ell - \tilde \ell}{L} - \tau \right) \tilde k } \right|}_{ U\left( \tau - \frac{s + \ell - \tilde \ell}{L} \right) \defeq} \label{mah2}, \end{align} where $\hat c = 16 c' \bar c$. We show in Appendix \ref{sec:boundU} that $U(t)$ is $1$-periodic and satisfies $U(t) \leq c \min(1, \frac{1}{L |t|})$ for $|t|\leq 1/2$. Using this bound together with \eqref{mah2} we conclude that \begin{align} \norm[F]{\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)}^2 =& \sum_{\ell, \tilde \ell=-N}^{N} \left|[\vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j)]_{\tilde \ell, \ell} \right|^2\nonumber\\ \le& (2\pi N)^{2(m +m'+ n + n')} \sum_{\ell, \tilde \ell = -N}^N U^2\!\left( \tau - \frac{s + \ell - \tilde \ell }{L} \right)\nonumber\\ \le& (2\pi N)^{2(m +m'+ n + n')} \sum_{\tilde{\ell}=-N}^N\sum_{\ell=-N}^N \left( c \min\left(1, \frac{1}{L |\ell/L|}\right) \right)^2\nonumber\\ \le&c^2(2\pi N)^{2(m +m'+ n + n')}\sum_{\tilde{\ell}=-N}^N \left( 1 + 2 \sum_{\ell\geq 1} \frac{1}{\ell^2} \right) \nonumber\\ =&c^2L(2\pi N)^{2(m +m'+ n + n')}\left( 1 + \frac{\pi^2}{3} \right) \nonumber. \end{align} Here, the third inequality follows from the bound on $U$ and the fact that, for fixed $\tilde \ell$, the points $\tau - \frac{s + \ell - \tilde \ell}{L}$, $\ell = -N, \ldots, N$, form (modulo $1$) a grid of spacing $1/L$. The proof is now complete by setting $c_1=c\sqrt{1+\pi^2/3}$. \subsection{Step 2: Choice of the coefficients $\alpha_k, \beta_{1k}, \beta_{2k}$} We next show that, with high probability, it is possible to select the coefficients $\alpha_k, \beta_{1k}, \beta_{2k}$ such that $Q(\vr)$ satisfies \eqref{eq:interpcondQ}. To this end, we first review the result in \cite{candes_towards_2014} that ensures that there exists a set of coefficients $\bar \alpha_k, \bar \beta_{1k}, \bar \beta_{2k}$ such that \eqref{eq:condQinterpolv} is satisfied.
Specifically, writing \eqref{eq:condQinterpolv} in matrix form yields \begin{align} \underbrace{ \begin{bmatrix} \bar \vect{D}^{(0,0)} & \frac{1}{\kappa} \bar \vect{D}^{(1,0)} & \frac{1}{\kappa} \bar \vect{D}^{(0,1)} \\ -\frac{1}{\kappa} \bar \vect{D}^{(1,0)} & -\frac{1}{\kappa^2} \bar \vect{D}^{(2,0)} & -\frac{1}{\kappa^2} \bar \vect{D}^{(1,1)} \\ -\frac{1}{\kappa} \bar \vect{D}^{(0,1)} & -\frac{1}{\kappa^2} \bar \vect{D}^{(1,1)} & -\frac{1}{\kappa^2}\bar \vect{D}^{(0,2)} \end{bmatrix} }_{\bar \vect{D} } \begin{bmatrix} \bar {\bm \alpha} \\ \kappa \bar {\bm \beta} _1 \\ \kappa \bar {\bm \beta} _2 \end{bmatrix} &= \begin{bmatrix} \vu \\ \vect{0} \\ \vect{0} \end{bmatrix} \end{align} where $\bar \vect{D}^{(m,n)}_{j,k} \defeq \bar G^{(m,n)}(\vr_j - \vr_k)$, $[\bar {\bm \alpha} ]_k \defeq \bar \alpha_k$, $[\bar {\bm \beta} _1]_k \defeq \bar \beta_{1k}$ and $[\bar {\bm \beta} _2]_k \defeq \bar \beta_{2k}$. Here, we have scaled the entries of $\bar \vect{D}$ such that its diagonal entries are $1$ ($\FK(0)=1$, $\kappa^2 = |\FK''(0)|$, and $\FK''(0)$ is negative). Since $\bar \vect{D}^{(0,0)},\bar \vect{D}^{(1,1)},\bar \vect{D}^{(2,0)},\bar \vect{D}^{(0,2)}$ are symmetric and $\bar \vect{D}^{(1,0)},\bar \vect{D}^{(0,1)}$ are antisymmetric, $\bar \vect{D}$ is symmetric. The following result, which directly follows from \cite[Eq.~C6, C7, C8, C9]{candes_towards_2014}, ensures that $\bar \vect{D}$ is invertible and thus the coefficients $\bar \alpha_k, \bar \beta_{1k}, \bar \beta_{2k}$ can be obtained according to \begin{align} \begin{bmatrix} \bar {\bm \alpha} \\ \kappa \bar {\bm \beta} _1 \\ \kappa \bar {\bm \beta} _2 \end{bmatrix} = \inv{\bar \vect{D}} \begin{bmatrix} \vu \\ \vect{0} \\ \vect{0} \end{bmatrix} = \bar \vect{L} \vu, \label{eq:barLu} \end{align} where $\bar \vect{L}$ is the $3\S \times \S$ submatrix of $\inv{\bar \vect{D}}$ corresponding to the first $\S$ columns of $\inv{\bar \vect{D}}$.
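The bookkeeping in \eqref{eq:barLu} is just the observation that multiplying $\inv{\bar \vect{D}}$ by a vector whose last $2\S$ entries vanish only uses the first $\S$ columns of $\inv{\bar \vect{D}}$. A minimal Python sketch (with a random well-conditioned stand-in for $\bar \vect{D}$ and arbitrary sizes, purely for illustration):

```python
import numpy as np

# Illustration of (eq:barLu): the coefficient vector solving
#   Dbar [alpha; kappa*beta1; kappa*beta2] = [u; 0; 0]
# equals Lbar @ u, where Lbar consists of the first S columns of inv(Dbar).
# Dbar below is a random well-conditioned stand-in, not the actual matrix.
rng = np.random.default_rng(4)
S = 5
Dbar = np.eye(3 * S) + 0.05 * rng.standard_normal((3 * S, 3 * S))
u = rng.standard_normal(S)
rhs = np.concatenate([u, np.zeros(2 * S)])   # right-hand side [u; 0; 0]
coef = np.linalg.solve(Dbar, rhs)
Lbar = np.linalg.inv(Dbar)[:, :S]            # first S columns of inv(Dbar)
assert np.allclose(coef, Lbar @ u)
```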
\begin{proposition} $\bar \vect{D}$ is invertible and \begin{align} \norm[\opnormss]{\vect{I} - \bar \vect{D}} &\leq 0.19808 \\ \norm[\opnormss]{\bar \vect{D}} &\leq 1.19808 \\ \norm[\opnormss]{ \inv{\bar \vect{D}}} &\leq 1.24700. \label{eq:boundinvbard} \end{align} \end{proposition} \begin{proof} The proof of this proposition is an immediate consequence of \cite[Eq.~C6, C7, C8, C9]{candes_towards_2014}. Since $\bar \vect{D}$ is real and symmetric, it is normal, and thus its singular values are equal to the absolute values of its eigenvalues. Using that the diagonal entries of $\bar \vect{D}$ are $1$, by Gershgorin's circle theorem \cite[Thm.~6.1.1]{horn_matrix_2012}, the eigenvalues of $\bar \vect{D}$ are in the interval $[1-\norm[\infty]{\vect{I} - \bar \vect{D}}, 1+ \norm[\infty]{\vect{I} - \bar \vect{D}}]$, where $\norm[\infty]{\mA} \defeq \max_i \sum_j |[\mA]_{i,j}|$. Using that $\norm[\infty]{\vect{I} - \bar \vect{D}} \leq 0.19808$ (shown below), it follows that $\bar \vect{D}$ is invertible and \begin{align*} \norm[\opnormss]{\bar \vect{D}} &\leq 1+ \norm[\infty]{\vect{I} - \bar \vect{D}} \leq 1.19808 \nonumber \\ \norm[\opnormss]{ \inv{\bar \vect{D}}} &\leq \frac{1}{1- \norm[\infty]{\vect{I} - \bar \vect{D}}} \leq 1.2470. \end{align*} The proof is concluded by noting that \begin{align} \norm[\infty]{\vect{I} - \bar \vect{D}} &= \max \left\{ \norm[\infty]{\vect{I} - \bar \vect{D}^{(0,0)}} \!+\! 2\norm[\infty]{ \frac{1}{\kappa} \bar \vect{D}^{(1,0)}}, \norm[\infty]{ \frac{1}{\kappa} \bar \vect{D}^{(1,0)}} \!+\! \norm[\infty]{\vect{I} - \frac{1}{\kappa^2}\bar \vect{D}^{(2,0)}} \!+\!
\norm[\infty]{ \frac{1}{\kappa^2} \bar \vect{D}^{(1,1)}} \right\} \nonumber \\ &\leq 0.19808, \nonumber \end{align} where we used \cite[Eq.~C6, C7, C8, C9]{candes_towards_2014}: \begin{align*} \norm[\infty]{\vect{I} - \bar \vect{D}^{(0,0)}} &\leq 0.04854 \\ \norm[\infty]{ \frac{1}{\kappa} \bar \vect{D}^{(1,0)}} = \norm[\infty]{ \frac{1}{\kappa} \bar \vect{D}^{(0,1)}} &\leq 0.04258 \\ \norm[\infty]{ \frac{1}{\kappa^2} \bar \vect{D}^{(1,1)}} &\leq 0.04791 \\ \norm[\infty]{\vect{I} - \frac{1}{\kappa^2}\bar \vect{D}^{(0,2)}} = \norm[\infty]{\vect{I} - \frac{1}{\kappa^2}\bar \vect{D}^{(2,0)}} &\leq 0.1076. \end{align*} \end{proof} We next select the coefficients $\alpha_k, \beta_{1k}, \beta_{2k}$ such that $Q(\vr)$ satisfies the interpolation conditions \eqref{eq:interpcondQ}. To this end, we write \eqref{eq:interpcondQ} in matrix form: \begin{align} \underbrace{ \begin{bmatrix} \vect{D}_{(0,0)}^{(0,0)} & \frac{1}{\kappa} \vect{D}_{(1,0)}^{(0,0)} & \frac{1}{\kappa} \vect{D}_{(0,1)}^{(0,0)} \\ -\frac{1}{\kappa} \vect{D}^{(1,0)}_{(0,0)} & -\frac{1}{\kappa^2} \vect{D}^{(1,0)}_{(1,0)} & -\frac{1}{\kappa^2} \vect{D}^{(1,0)}_{(0,1)} \\ -\frac{1}{\kappa} \vect{D}^{(0,1)}_{(0,0)} & -\frac{1}{\kappa^2} \vect{D}^{(0,1)}_{(1,0)} & -\frac{1}{\kappa^2} \vect{D}^{(0,1)}_{(0,1)} \end{bmatrix} }_{\vect{D}} \begin{bmatrix} {\bm \alpha} \\ \kappa {\bm \beta} _1 \\ \kappa {\bm \beta} _2 \end{bmatrix} = \begin{bmatrix} \vu \\ \vect{0} \\ \vect{0} \end{bmatrix}, \label{eq:syseqorig} \end{align} where $[\vect{D}^{(m,n)}_{(m',n')}]_{j,k} \defeq G^{(m,n)}_{(m',n')}(\vr_j, \vr_k)$, $[ {\bm \alpha} ]_k \defeq \alpha_k$, $[ {\bm \beta} _1]_k \defeq \beta_{1k}$, and $[ {\bm \beta} _2]_k \defeq \beta_{2k}$. To show that the system of equations \eqref{eq:syseqorig} has a solution, and in turn \eqref{eq:interpcondQ} can be satisfied, we will show that, with high probability, $\vect{D}$ is invertible.
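As an aside, the proposition above is purely a diagonal-dominance computation, and its mechanism is easy to check numerically. In the Python sketch below the matrix is a random symmetric stand-in with unit diagonal (not the actual $\bar \vect{D}$); the assertions reproduce the Gershgorin eigenvalue localization and the resulting bound on the inverse.

```python
import numpy as np

# Gershgorin sanity check: for real symmetric A with unit diagonal and
# rho = ||I - A||_inf < 1, all eigenvalues lie in [1 - rho, 1 + rho], hence
# ||A|| <= 1 + rho and ||A^{-1}|| <= 1/(1 - rho).  A is a random stand-in.
rng = np.random.default_rng(1)
n = 30
E = rng.uniform(-1, 1, (n, n))
E = (E + E.T) / 2
np.fill_diagonal(E, 0.0)
E *= 0.19 / np.abs(E).sum(axis=1).max()   # scale so that ||I - A||_inf = 0.19
A = np.eye(n) + E
rho = np.abs(E).sum(axis=1).max()
eig = np.linalg.eigvalsh(A)
assert eig.min() >= 1 - rho - 1e-12 and eig.max() <= 1 + rho + 1e-12
assert np.linalg.norm(np.linalg.inv(A), 2) <= 1 / (1 - rho) + 1e-9
```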
To this end, we show that the probability of the event \[ \mc E_\xi = \{ \norm[\opnormss]{ \vect{D} - \bar \vect{D} } \leq \xi\} \] is high, and $\vect{D}$ is invertible on $\mathcal E_\xi$ for all $\xi \in (0,1/4]$. The fact that $\vect{D}$ is invertible on $\mathcal E_\xi$ for all $\xi \in (0,1/4]$ follows from the following set of inequalities: \[ \norm[\opnormss]{\vect{I} - \vect{D}} \leq \norm[\opnormss]{\vect{D} - \bar \vect{D}} + \norm[\opnormss]{\bar \vect{D} - \vect{I}} \leq \xi + 0.19808 \leq 0.44808 < 1, \] which guarantees invertibility of $\vect{D}$ via a Neumann series argument. Since $\vect{D}$ is invertible, the coefficients $\alpha_k, \beta_{1k}, \beta_{2k}$ can be selected as \begin{align} \begin{bmatrix} {\bm \alpha} \\ \kappa {\bm \beta} _1 \\ \kappa {\bm \beta} _2 \end{bmatrix} = \inv{\vect{D}} \begin{bmatrix} \vu \\ \vect{0} \\ \vect{0} \end{bmatrix} = \vect{L} \vu , \label{eq:alphabeta} \end{align} where $\vect{L}$ is the $3\S \times \S$ submatrix of $\inv{\vect{D}}$ corresponding to the first $\S$ columns of $\inv{\vect{D}}$. We record two useful inequalities about $\mathbf{L}$ and its deviation from $\bar{\mathbf{L}}$ on the event $\mc E_\xi$ in the lemma below. \begin{lemma} On the event $\mc E_\xi$ with $\xi \in (0,1/4]$ the following inequalities hold: \begin{align} \norm[\opnormss]{\vect{L}} \leq& 2.5 \label{eq:normmLb},\\ \norm[\opnormss]{\vect{L}-\bar{\vect{L}}}\leq& 2.5 \xi. \label{eq:normLmbL} \end{align} \end{lemma} \begin{proof} We will make use of the following lemma. \begin{lemma}[{\cite[Proof of Cor.~4.5]{tang_compressed_2013}}] Suppose that $\mC$ is invertible and $ \norm{\mB -\mC} \norm{\inv{\mC}} \leq 1/2 $. Then i) $\norm{\inv{\mB}} \leq 2 \norm{\inv{\mC}}$ and ii) $\norm{\inv{\mB} - \inv{\mC}} \leq 2 \norm{\inv{\mC}}^2 \norm{\mB - \mC}$. \label{lem:breadqy} \end{lemma} First note that since $\norm[]{\vect{D} - \bar \vect{D}} \leq 1/4$ and $\norm[]{\inv{\bar \vect{D}}} \leq 1.247$ (cf.~\eqref{eq:boundinvbard}) the conditions of Lemma \ref{lem:breadqy} with $\mB = \vect{D}$ and $\mC = \bar \vect{D}$ are satisfied.
Equation \eqref{eq:normmLb} now readily follows: \begin{align} \norm[\opnormss]{\vect{L}} \leq \norm[\opnormss]{\inv{\vect{D}}} \leq 2 \norm[\opnormss]{\inv{\bar \vect{D}}} \leq 2.5 \label{eq:normmLb2}. \end{align} Specifically, for the first inequality, we used that $\vect{L}$ is a submatrix of $\inv{\vect{D}}$, the second inequality follows by application of part $i)$ of Lemma \ref{lem:breadqy}, and the last inequality follows from \eqref{eq:boundinvbard}. To prove \eqref{eq:normLmbL}, we use part $ii)$ of the lemma above together with \eqref{eq:boundinvbard} to conclude that \begin{align} \norm[\opnormss]{\vect{L} - \bar \vect{L}} \leq \norm[\opnormss]{\inv{\vect{D}} - \inv{\bar \vect{D}}} \leq 2 \norm[\opnormss]{\inv{\bar \vect{D}}}^2 \norm[\opnormss]{\vect{D} - \bar \vect{D}} \leq 2.5 \xi \label{eq:normLmbL2} \end{align} holds on $\mc E_\xi$ with $\xi \in (0,1/4]$. \end{proof} It only remains to prove that the event $\mc E_\xi$ does indeed have high probability. \begin{lemma} For all $\xi>0$ \[ \PR{ \mc E_\xi } \geq 1 - \delta \] provided that \begin{align} \L \geq \S \frac{c_4}{\xi^2} \log^2(18\S^2/\delta), \label{eq:condlemaaf} \end{align} where $c_4$ is a numerical constant. \label{lem:preptaubound} \end{lemma} \begin{proof} We will upper-bound $\norm[\opnormss]{\vect{D} - \bar \vect{D}}$ by upper-bounding the largest entry of $\vect{D} - \bar \vect{D}$. To this end, first note that the entries of $\vect{D} - \bar \vect{D}$ are given by \[ \frac{1}{\kappa^{m + m'+n + n'}} [\vect{D}^{(m,n)}_{(m',n')} -\bar \vect{D}^{(m+m',n+n')}]_{j,k} = \frac{1}{\kappa^{m+m'+n+n'}} (G^{(m,n)}_{(m',n')}(\vr_j, \vr_k) - \bar G^{(m+m',n+n')}(\vr_j - \vr_k)) \] for $m,m',n,n' \leq 1$ and $m+m'+n+n' \leq 2$ and for $j,k=1,...,\S$.
We now have \begin{align} \PR{ \norm[\opnormss]{\vect{D} - \bar \vect{D}} \geq \xi } &\leq \PR{ \sqrt{3\S} \max_{j,k,m,m',n,n'} \frac{1}{\kappa^{m+m'+n+n'}} |[\vect{D}^{(m,n)}_{(m',n')} -\bar \vect{D}^{(m+m',n+n')}]_{j,k}| \geq \xi } \label{eq:ubmaxopnormin} \\ &\hspace{-0.5cm}\leq \sum_{j,k, m,m',n,n'} \PR{ \frac{1}{\kappa^{m+m'+n+n'}} |[\vect{D}^{(m,n)}_{(m',n')} -\bar \vect{D}^{(m+m',n+n')}]_{j,k}| \geq \frac{\xi}{\sqrt{3\S}}} \label{eq:ubmaxdmnbdm} \\ &\hspace{-0.5cm}= \sum_{j,k, m,m',n,n'} \PR{ \frac{1}{\kappa^{m+m'+n+n'}} |[\vect{D}^{(m,n)}_{(m',n')} -\bar \vect{D}^{(m+m',n+n')}]_{j,k}| \geq 12 c_1 \frac{\alpha}{\sqrt{L}} } \label{eq:chooseal} \\ &\hspace{-0.5cm}\leq \sum_{j,k, m,m',n,n'} \PR{ \frac{1}{\kappa^{m+m'+n+n'}} |[\vect{D}^{(m,n)}_{(m',n')} -\bar \vect{D}^{(m+m',n+n')}]_{j,k}| \geq 12^{\frac{m+m'+n+n'}{2}} c_1 \frac{\alpha}{\sqrt{L}} } \label{eq:usemnleq2} \\ &\hspace{-0.5cm}\leq 2(3\S)^2 \exp\left( - c \min\left( \frac{ \xi^2 L}{c_2^4 c_3^2 3\S }, \frac{ \xi \sqrt{L}}{ c_2^2 c_3 \sqrt{3\S} }\right) \right). \label{eq:applempolybo} \end{align} Here, \eqref{eq:ubmaxopnormin} follows from the fact that $\vect{D}$ and $\bar \vect{D}$ are $3\S\times 3\S$ matrices; \eqref{eq:ubmaxdmnbdm} follows from the union bound; \eqref{eq:chooseal} follows by setting $\alpha \defeq \frac{\xi \sqrt{L}}{\sqrt{3\S} 12 c_1}$, where $c_1$ is the constant in Lemma \ref{lem:polybo}; \eqref{eq:usemnleq2} follows from the fact that $12^{\frac{m+m'+n+n'}{2}} \leq 12$ holds for $m+m'+n+n'\leq 2$; and \eqref{eq:applempolybo} follows from Lemma \ref{lem:polybo} (here we set $c_3 \defeq 12 c_1$). To show that the RHS of \eqref{eq:applempolybo} is smaller than $\delta$, as desired, it suffices to show \[ \log(18\S^2/\delta) \leq c \min\left( \frac{ \xi^2 L}{c_2^4 c_3^2 3 \S }, \frac{ \xi \sqrt{L}}{ c_2^2 c_3 \sqrt{3\S} }\right), \] which is a consequence of \eqref{eq:condlemaaf} with $c_4 = 3c_2^4 c_3^2 \max(1/c^2, 1/c)$. 
\end{proof} \subsection{ Step 3a: $Q(\vr)$ and $\bar Q(\vr)$ are close on a grid } The goal of this section is to prove Lemma \ref{lem:diffqmbarqmongrid} below, which shows that $Q(\vr)$ and $\bar Q(\vr)$ and their partial derivatives are close on a set of (grid) points. \begin{lemma} \label{lem6} Let $\Omega \subset [0,1]^2$ be a finite set of points. Fix $0<\epsilon \leq 1$ and $\delta > 0$. Suppose that \[ L \geq \frac{\S}{\epsilon^2} \max \left( c_5 \log^2\left(\frac{12 \S |\Omega|}{\delta}\right) \log\left(\frac{8 |\Omega|}{\delta}\right), c \log \left( \frac{4 |\Omega|}{\delta} \right) \log\left( \frac{18\S^2}{\delta}\right) \right). \] Then, for all nonnegative integers $m,n$ with $m+n \leq 2$, \[ \PR{ \max_{\vr \in \Omega} \frac{1}{\kappa^{n+m}} \left| Q^{(m,n)}(\vr) - \bar Q^{(m,n)}(\vr) \right| \leq \epsilon } \geq 1 - 4 \delta. \] \label{lem:diffqmbarqmongrid} \end{lemma} In order to prove Lemma \ref{lem:diffqmbarqmongrid}, first note that the $(m,n)$-th partial derivative of $Q(\vr)$ (defined by \eqref{eq:dualpolyorig}) after normalization with $1/\kappa^{m+n}$ is given by \begin{align} \frac{1}{\kappa^{m+n}} Q^{(m,n)}(\vr) &= \frac{1}{\kappa^{m+n}} \sum_{k=1}^\S \Big( \alpha_k G^{(m,n)}_{(0,0)}(\vr, \vr_k) + \kappa \beta_{1k} \frac{1}{\kappa} G^{(m,n)}_{(1,0)}(\vr, \vr_k) + \kappa \beta_{2k} \frac{1}{\kappa} G^{(m,n)}_{(0,1)}(\vr, \vr_k) \Big) \nonumber \\ &= \herm{(\vv^{(m,n)}(\vr) )} \vect{L} \vu. \label{eq:Qmninnprodform} \end{align} Here, we used \eqref{eq:alphabeta} and the shorthand $\vv^{(m,n)} (\vr)$ defined by \begin{align*} \herm{(\vv^{(m,n)})}(\vr) \defeq \frac{1}{\kappa^{m+n}} \bigg[ &G^{(m,n)}_{(0,0)}(\vr,\vr_1), ..., G^{(m,n)}_{(0,0)}(\vr,\vr_\S), \; \frac{1}{\kappa} G^{(m,n)}_{(1,0)}(\vr,\vr_1), ..., \frac{1}{\kappa} G^{(m,n)}_{(1,0)}(\vr, \vr_\S), \\ & \frac{1}{\kappa} G^{(m,n)}_{(0,1)}(\vr, \vr_1),..., \frac{1}{\kappa} G^{(m,n)}_{(0,1)}(\vr, \vr_\S) \bigg].
\end{align*} Since $ \EX{ G^{(m,n)}_{(m',n')}(\vr, \vr_j)} = \bar G^{(m + m',n + n')}(\vr - \vr_j) $ (cf.~\eqref{eq:expGmnGen}), we have \[ \EX{\vv^{(m,n)} (\vr) } = \bar \vv^{(m,n)}(\vr), \] where \begin{align*} \herm{(\bar\vv^{(m,n)})}(\vr) \defeq \frac{1}{\kappa^{m+n}} \bigg[ &\bar G^{(m,n)}(\vr-\vr_1), ..., \bar G^{(m,n)}(\vr-\vr_\S), \; \frac{1}{\kappa} \bar G^{(m+1,n)}(\vr - \vr_1) ,..., \frac{1}{\kappa} \bar G^{(m+1,n)}(\vr - \vr_\S), \\ & \frac{1}{\kappa} \bar G^{(m,n+1)}(\vr - \vr_1) ,..., \frac{1}{\kappa} \bar G^{(m,n+1)}(\vr - \vr_\S) \bigg]. \end{align*} Next, we decompose the derivatives of $Q(\vr)$ according to \begin{align} \frac{1}{\kappa^{m+n}} &Q^{(m,n)}(\vr) = \innerprod{\vu}{\herm{\vect{L}} \vv^{(m,n)}(\vr) } \nonumber \\ &= \innerprod{\vu}{ \herm{\bar \vect{L}} \bar \vv^{(m,n)}(\vr) } + \underbrace{\innerprod{\vu}{\herm{\vect{L}} (\vv^{(m,n)}(\vr) - \bar \vv^{(m,n)}(\vr)) } }_{I^{(m,n)}_1(\vr)} + \underbrace{\innerprod{\vu}{\herm{(\vect{L} - \bar \vect{L})} \bar \vv^{(m,n)}(\vr) } }_{I^{(m,n)}_2(\vr)} \nonumber \\ &= \frac{1}{\kappa^{m+n}} \bar Q^{(m,n)}(\vr) + I_1^{(m,n)}(\vr) + I_2^{(m,n)}(\vr), \label{eq:pertI1I2} \end{align} where $\bar \vect{L}$ was defined below \eqref{eq:barLu}. The following two results establish that the perturbations $I_1^{(m,n)}(\vr)$ and $I_2^{(m,n)}(\vr)$ are small on a set of (grid) points $\Omega$ with high probability. \begin{lemma} Let $\Omega \subset [0,1]^2$ be a finite set of points and suppose that $m+n\leq 2$. Then, for all $0<\epsilon\le 1$ and for all $\delta > 0$, \[ \PR{\max_{\vr \in \Omega} |I^{(m,n)}_1(\vr)| \geq \epsilon} \leq \delta +\PR{\comp{ \mc E}_{1/4}} \] provided that \[ \L \geq \frac{c_5}{\epsilon^2} \S \log^2\left(\frac{12 \S |\Omega|}{\delta}\right) \log\left(\frac{8 |\Omega|}{\delta}\right) . \] \label{lem:uboundI1} \end{lemma} \begin{lemma} Let $\Omega \subset [0,1]^2$ be a finite set of points. Suppose that $m+n\leq 2$. 
Then, for all $\epsilon,\delta > 0$, and for all $\xi>0$ with \begin{align} \xi \leq \frac{\epsilon c_6}{\sqrt{\log\left( \frac{4|\Omega|}{\delta} \right)}}, \label{eq:ubontaune} \end{align} where $c_6\leq 1/4$ is a numerical constant, it follows that \[ \PR{\max_{\vr \in \Omega} |I^{(m,n)}_2(\vr)| \geq \epsilon \Big| \mc E_\xi } \leq \delta. \] \label{lem:uboundI2} \end{lemma} Lemmas \ref{lem:uboundI1} and \ref{lem:uboundI2} are proven in Sections \ref{sec:proflemuboundl1} and \ref{sec:prooflemuboundl2}, respectively. We are now ready to complete the proof of Lemma \ref{lem6}. From \eqref{eq:pertI1I2}, we obtain for all $\xi>0$ satisfying~\eqref{eq:ubontaune} \begin{align} \PR{\max_{\vr \in \Omega} \frac{1}{\kappa^{n+m}} \left| Q^{(m,n)}(\vr) - \bar Q^{(m,n)}(\vr) \right| \geq 2 \epsilon } &= \PR{ \max_{\vr \in \Omega} \left| I_1^{(m,n)}(\vr) + I_2^{(m,n)}(\vr) \right| \geq 2 \epsilon } \nonumber \\ &\hspace{-4cm}\leq \PR{ \max_{\vr \in \Omega} \left| I_1^{(m,n)}(\vr) \right| \geq \epsilon } + \PR{ \comp{\mc E}_\xi } + \PR{ \max_{\vr \in \Omega} \left| I_2^{(m,n)}(\vr) \right| \geq \epsilon | \mc E_\xi } \label{eq:uppbpri1I2}\\ &\hspace{-4cm}\leq 4 \delta, \nonumber \end{align} where \eqref{eq:uppbpri1I2} follows from the union bound and the fact that $\PR{A} = \PR{A \cap \comp{B}} + \PR{A \cap B} \leq \PR{\comp{B}} + \PR{A | B}$ by setting $B = \mc E_\xi$ and $A = \left\{\max_{\vr \in \Omega} \left| I_2^{(m,n)}(\vr) \right| \geq \epsilon \right\}$, and the last inequality follows from Lemmas \ref{lem:preptaubound}, \ref{lem:uboundI1} and \ref{lem:uboundI2}, respectively. In more detail, we choose $\xi = \epsilon c_6 \log^{-1/2}\left( \frac{4|\Omega|}{\delta} \right)$. It then follows from Lemma \ref{lem:uboundI2} that the third probability in \eqref{eq:uppbpri1I2} is smaller than $\delta$.
With this choice of $\xi$, the condition in Lemma \ref{lem:preptaubound} becomes $L \geq \S \frac{c_4}{\epsilon^2 c_6^2} \log\left(\frac{4 |\Omega|}{\delta} \right) \log \left( \frac{18\S^2}{\delta} \right)$, which is satisfied by choosing $c = \frac{c_4}{c_6^2}$. Moreover, $\xi \leq 1/4$ since $\epsilon\leq 1$ and $c_6\leq 1/4$. Thus, Lemma \ref{lem:preptaubound} yields $\PR{ \comp{\mc E}_\xi } \leq \delta$ and $\PR{ \comp{\mc E}_{1/4} } \leq \delta$. Finally, observe that the conditions of Lemma \ref{lem:uboundI1} are satisfied, thus the first probability in \eqref{eq:uppbpri1I2} can be upper-bounded by \[ \PR{ \max_{\vr \in \Omega} \left| I_1^{(m,n)}(\vr) \right| \geq \epsilon } \leq \delta +\PR{\comp{ \mc E}_{1/4}} \leq 2 \delta. \] This concludes the proof. \subsubsection{Proof of Lemma \ref{lem:uboundI1} \label{sec:proflemuboundl1}} Set $\Delta \vv^{(m,n)} \defeq \vv^{(m,n)} (\vr) - \bar \vv^{(m,n)} (\vr)$ for notational convenience. By the union bound, we have for all $a,b\geq 0$, \begin{align} \PR{\max_{\vr \in \Omega} |I^{(m,n)}_1(\vr)| \geq 2.5 a b} \hspace{-3cm}&\hspace{2.5cm}= \PR{\max_{\vr \in \Omega} \left|\innerprod{\vu}{\herm{\vect{L}} \Delta \vv^{(m,n)} }\right| \geq 2.5 a b} \nonumber \\ &\leq \PR{\bigcup_{\vr \in \Omega } \left\{ \left|\innerprod{\vu}{\herm{\vect{L}} \Delta \vv^{(m,n)} }\right| \geq \norm[2]{\herm{\vect{L}} \Delta \vv^{(m,n)}} b \right\} \cup \left\{ \norm[2]{\herm{\vect{L}} \Delta \vv^{(m,n)}}\geq 2.5 a \right\} } \nonumber \\ &\leq \PR{\bigcup_{\vr \in \Omega } \left\{ \left|\innerprod{\vu}{\herm{\vect{L}} \Delta \vv^{(m,n)} }\right| \geq \norm[2]{\herm{\vect{L}} \Delta \vv^{(m,n)}} b \right\} \cup \left\{ \norm[2]{\Delta \vv^{(m,n)}}\geq a \right\} \cup \left\{ \norm[\opnormss]{\vect{L}} \geq 2.5 \right\} } \nonumber \\ &\leq \PR{\norm[\opnormss]{\vect{L}} \geq 2.5 } +\sum_{\vr \in \Omega} \left( \PR{\left|\innerprod{\vu}{\herm{\vect{L}} \Delta \vv^{(m,n)} }\right| \geq \norm[2]{\herm{\vect{L}} \Delta \vv^{(m,n)}} b} 
+\PR{\norm[2]{\Delta \vv^{(m,n)}}\geq a} \right) \nonumber \\ &\leq \PR{\comp{ \mc E}_{1/4}} + |\Omega| 4 e^{- \frac{b^2}{4}} + \sum_{\vr \in \Omega} \PR{\norm[2]{\Delta \vv^{(m,n)}}\geq a} \label{eq:useHoeffanddf} \\ &\leq \PR{\comp{ \mc E}_{1/4}} + \frac{\delta}{2} + \sum_{\vr \in \Omega} \PR{\norm[2]{\Delta \vv^{(m,n)}}\geq a}, \label{eq:inposltdel} \end{align} where \eqref{eq:useHoeffanddf} follows from application of Hoeffding's inequality (stated below) and from $\{\norm[\opnormss]{\vect{L}} \geq 2.5\} \subseteq \comp{ \mc E}_{1/4}$ according to \eqref{eq:normmLb}. For \eqref{eq:inposltdel}, we used $|\Omega| 4 e^{- \frac{b^2}{4}} \leq \frac{\delta}{2}$ ensured by choosing $b = 2 \sqrt{\log(8|\Omega|/\delta)}$. \begin{lemma}[Hoeffding's inequality] Suppose the entries of $\vu \in \mathbb R^{\S}$ are i.i.d.~with $\PR{u_i = -1} = \PR{u_i =1} = 1/2$. Then, for all $t\geq 0$, and for all $\vv \in \complexset^\S$ \[ \PR{ \left|\innerprod{\vu}{\vv}\right| \geq \norm[2]{\vv} t } \leq 4 e^{- \frac{t^2}{4}}. \] \label{thm:hoeff} \end{lemma} We next upper-bound $\PR{\norm[2]{\Delta \vv^{(m,n)}}\geq a}$ in \eqref{eq:inposltdel}. 
For all $\alpha \geq 0$, using that $12^{\frac{n+m+1}{2}} \leq 12^{\frac{3}{2}}$, we have \begin{align} \PR{\norm[2]{\Delta \vv^{(m,n)} } \geq \frac{\sqrt{3\S}}{\sqrt{L}} 12^{\frac{3}{2}} c_1 \alpha } &\leq \PR{\norm[2]{\Delta \vv^{(m,n)} } \geq \frac{\sqrt{3\S}}{\sqrt{L}} 12^{\frac{n+m+1}{2}} c_1 \alpha } \nonumber \\ &= \PR{\norm[2]{\Delta \vv^{(m,n)}}^2 \geq \frac{3\S}{L} 12^{n+m+1} c_1^2 \alpha^2 } \nonumber \\ &\leq \sum_{k=1}^{3\S} \PR{|[ \Delta \vv^{(m,n)} ]_k|^2\geq \frac{1}{L} 12^{n+m+1} c_1^2 \alpha^2 } \label{eq:ubagao} \\ &= \sum_{k=1}^{3\S} \PR{|[\Delta \vv^{(m,n)} ]_k| \geq \frac{1}{\sqrt{L}} 12^{\frac{n+m+1}{2}} c_1 \alpha} \nonumber \\ &\leq 3\S \cdot 2 \exp\left( - c \min\left( \frac{ \alpha^2}{c_2^4 }, \frac{ \alpha}{ c_2^2 }\right) \right) \label{eq:lobDvgead} \\ &\leq \frac{\delta}{2 |\Omega|} ,\label{eq:choosealphac2sq} \end{align} where \eqref{eq:ubagao} follows from the union bound, \eqref{eq:lobDvgead} follows from Lemma \ref{lem:polybo}. Finally, \eqref{eq:choosealphac2sq} follows by choosing $\alpha = \frac{c_2^2}{ c} \log\left( \frac{12 \S |\Omega| }{\delta} \right)$ and using the fact that for $\alpha \geq c_2^2$ (since $c\ge 1$) we have $\min\left( \frac{ \alpha^2}{c_2^4 }, \frac{ \alpha}{ c_2^2 }\right) = \frac{ \alpha}{ c_2^2 }$. We have established that $\PR{\norm[2]{\Delta \vv^{(m,n)} } \geq a } \leq \frac{\delta}{2|\Omega|}$ with $a = \frac{\sqrt{3\S}}{\sqrt{L}} 12^{\frac{3}{2}} c_1 \frac{c_2^2}{ c} \log\left( \frac{12\S |\Omega| }{\delta} \right)$. Substituting \eqref{eq:choosealphac2sq} into \eqref{eq:inposltdel} we get \[ \PR{\max_{\vr \in \Omega} |I^{(m,n)}_1(\vr)| \geq \sqrt{c_5} \frac{\sqrt{\S}}{\sqrt{L}} \log\left( \frac{12\S |\Omega| }{\delta} \right) \sqrt{\log\left( \frac{8|\Omega|}{\delta} \right)} } \leq \delta + \PR{\comp{ \mc E}_{1/4}}, \] where $c_5 = (5 \sqrt{3} \,12^{\frac{3}{2}} c_1 \frac{c_2^2}{\sqrt{c}})^2$ is a numerical constant. This concludes the proof. 
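The Rademacher tail bound of Lemma \ref{thm:hoeff}, used in \eqref{eq:useHoeffanddf} above, can be sanity-checked by a quick Monte Carlo experiment. The sketch below is purely illustrative and not part of the argument; the dimension, test vector, threshold, and trial count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 64                                # dimension of the Rademacher vector (arbitrary)
v = rng.standard_normal(S) + 1j * rng.standard_normal(S)  # arbitrary fixed complex vector
t = 3.0                               # tail threshold
trials = 20000

u = rng.choice([-1.0, 1.0], size=(trials, S))      # i.i.d. +/-1 entries, P = 1/2 each
inner = np.abs(u @ v)                              # |<u, v>| per trial
emp = np.mean(inner >= np.linalg.norm(v) * t)      # empirical tail probability
bound = 4.0 * np.exp(-t ** 2 / 4.0)                # tail bound from the lemma

assert emp <= bound                                # the (loose) bound holds in simulation
```

At $t=3$ the bound evaluates to roughly $0.42$, while the empirical tail is far smaller, consistent with the factor-of-$4$ slack incurred by splitting $\innerprod{\vu}{\vv}$ into real and imaginary parts.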
\subsubsection{\label{sec:prooflemuboundl2}Proof of Lemma \ref{lem:uboundI2}} By the union bound \begin{align} \PR{\max_{\vr \in \Omega} |I^{(m,n)}_2(\vr)| \geq \epsilon \Big| \mc E_\xi } &\leq \sum_{\vr \in \Omega} \PR{ \left|\innerprod{ \vu }{\herm{(\vect{L} - \bar \vect{L})} \bar \vv^{(m,n)}(\vr) }\right| \geq \epsilon \Big| \mc E_\xi } \nonumber \\ &\leq \sum_{\vr \in \Omega} \PR{\left|\innerprod{\vu}{\herm{(\vect{L} - \bar \vect{L})} \bar \vv^{(m,n)}(\vr) }\right| \geq \norm[2]{\herm{(\vect{L} - \bar \vect{L})} \bar \vv^{(m,n)}(\vr) } \frac{\epsilon}{c_5\xi} } \label{eq:useeq:ubltlvr}\\ &\leq |\Omega| 4 e^{- \frac{(\epsilon/(c_5\xi))^2}{4}} \label{eq:useHoeffanddf2} \\ &\leq \delta \label{eq:iqledeluc} \end{align} where \eqref{eq:useeq:ubltlvr} follows from \eqref{eq:ubltlvr} below, \eqref{eq:useHoeffanddf2} follows by Hoeffding's inequality (cf.~Lemma \ref{thm:hoeff}), and to obtain \eqref{eq:iqledeluc} we used the assumption \eqref{eq:ubontaune} with $c_6=1/(2c_5)$. To complete the proof, note that by \eqref{eq:normLmbL} we have $\norm[\opnormss]{\vect{L} - \bar \vect{L}} \leq 2.5 \xi$ on $\mc E_\xi$. Thus, conditioned on $\mc E_\xi$, \begin{align} \norm[2]{\herm{(\vect{L} - \bar \vect{L})} \bar \vv^{(m,n)}(\vr) } \leq \norm[\opnormss]{\vect{L} - \bar \vect{L}} \norm[2]{ \bar \vv^{(m,n)}(\vr) } \leq 2.5 \xi \norm[1]{ \bar \vv^{(m,n)}(\vr) } \leq c_5 \xi, \label{eq:ubltlvr} \end{align} where we used $\norm[2]{\cdot} \leq \norm[1]{\cdot}$, and the last inequality follows from the fact that, for all $\vr$, \begin{align*} \norm[1]{ \bar \vv^{(m,n)}(\vr) } &= \frac{1}{\kappa^{m+n}} \sum_{k=1}^\S \left( \left|\bar G^{(m,n)}(\vr-\vr_k)\right| + \left| \frac{1}{\kappa} \bar G^{(m+1,n)}(\vr - \vr_k) \right| + \left|\frac{1}{\kappa} \bar G^{(m,n+1)}(\vr - \vr_k) \right| \right) \leq \frac{c_5}{2.5}. \end{align*} Here, $c_5$ is a numerical constant, and we used \cite[C.12, Table 6]{candes_towards_2014} and $N/ \kappa \leq 0.5514$. 
\subsection{Step 3b: $Q(\vr)$ and $\bar Q(\vr)$ are close for all $\vr$} We next use an $\epsilon$-net argument together with Lemma \ref{lem:diffqmbarqmongrid} to establish that $Q^{(m,n)}(\vr)$ is close to $\bar Q^{(m,n)}(\vr)$ with high probability uniformly for all $\vr \in [0,1]^2$. \begin{lemma} Let $\epsilon, \delta > 0$. If \begin{align} L \geq \S \frac{c}{\epsilon^2} \log^3\left(\frac{c' L^6}{\delta \epsilon^2} \right) \label{eq:condLleex} \end{align} then, with probability at least $1-\delta$, \begin{align} \max_{\vr \in [0,1]^2, (m,n)\colon m+n \leq 2} \frac{1}{\kappa^{n+m}} \left| Q^{(m,n)}(\vr) - \bar Q^{(m,n)}(\vr) \right| \leq \epsilon. \label{eq:ledffbaere} \end{align} \label{lem:lemdffqbarqevry} \end{lemma} \begin{proof} We start by choosing a set of points $\Omega$ (i.e., the $\epsilon$-net) that is sufficiently dense in the $\infty$-norm. Specifically, we choose the points in $\Omega$ on a rectangular grid such that \begin{align} \max_{\vr \in [0,1]^2} \min_{\vr_g \in \Omega} \infdist{ \vr - \vr_g} \leq \frac{\epsilon}{3 \tilde c L^{5/2}}. \label{eq:gridmaxdist} \end{align} The cardinality of the set $\Omega$ is \begin{align} |\Omega| = \left(\frac{3\tilde c L^{5/2}}{\epsilon}\right)^2 = c' L^5/\epsilon^2. \end{align} First, we use Lemma \ref{lem:diffqmbarqmongrid} to show that $\left| Q^{(m,n)}(\vr_g) - \bar Q^{(m,n)}(\vr_g) \right|$ is small for all points $\vr_g \in \Omega$. Note that the condition of Lemma \ref{lem:diffqmbarqmongrid} is satisfied by assumption \eqref{eq:condLleex}. Using the union bound over all $6$ pairs $(m,n)$ obeying $m+n\leq 2$, it now follows from Lemma \ref{lem:diffqmbarqmongrid}, that \begin{align} \left\{ \max_{\vr_g \in \Omega, m+n\leq 2} \frac{1}{\kappa^{m+n}} \left| Q^{(m,n)}(\vr_g) - \bar Q^{(m,n)}(\vr_g) \right| \leq \frac{\epsilon}{3} \right\} \label{eq:QrmgbarQrg} \end{align} holds with probability at least $1- 6\delta' = 1- \frac{\delta}{2}$. 
Here, $\delta'$ is the original $\delta$ in Lemma \ref{lem:diffqmbarqmongrid}. Next, we will prove that this result continues to hold uniformly for all $\vr \in [0,1]^2$. It follows from the derivation in Section \ref{sec:techres2} below that to prove the uniform result, it is sufficient to demonstrate that the event \begin{align} \left\{ \max_{\vr \in [0,1]^2, m+n\leq 2}\frac{1}{\kappa^{m+n}} \left | Q^{(m,n)}(\vr) \right| \leq \frac{\tilde c}{2} L^{3/2} \right\} \label{eq:QrmQrg} \end{align} holds with probability at least $1-\frac{\delta}{2}$. By the union bound, the events in \eqref{eq:QrmgbarQrg} and \eqref{eq:QrmQrg} hold simultaneously with probability at least $1-\delta$. In Section \ref{sec:techres2}, it is shown that \eqref{eq:QrmgbarQrg} and \eqref{eq:QrmQrg} imply \eqref{eq:ledffbaere}. \subsubsection{\label{sec:techres1} Proof of the fact that \eqref{eq:QrmQrg} holds with probability at least $1-\frac{\delta}{2}$:} In order to show that \eqref{eq:QrmQrg} holds with probability at least $1-\frac{\delta}{2}$, we first upper-bound $|Q^{(m,n)}(\vr)|$. By \eqref{eq:Qmninnprodform}, \begin{align} \frac{1}{\kappa^{m+n}} \left | Q^{(m,n)}(\vr) \right| &= \left| \innerprod{ \vect{L} \vu }{ \vv^{(m,n)}(\vr)} \right| \nonumber \\ &\leq \norm[\opnormss]{{\vect{L}}} \norm[2]{ \vu } \norm[2]{ \vv^{(m,n)} (\vr)} \nonumber \\ &\leq \norm[\opnormss]{{\vect{L}}} \sqrt{\S} \norm[2]{ \vv^{(m,n)} (\vr)} \nonumber \\ &\leq \norm[\opnormss]{{\vect{L}}} \sqrt{\S} \sqrt{3\S}\norm[\infty]{ \vv^{(m,n)} (\vr)} \nonumber \\ &= \norm[\opnormss]{{\vect{L}}} \sqrt{3} \, \S \max_{j, (m', n') \in \{ (0,0), (1,0), (0,1) \}} \frac{1}{\kappa^{m+m'+n+n'}} \left | G^{(m,n)}_{(m',n')}(\vr,\vr_j) \right|, \label{eq:ubqmnarr} \end{align} where we used $\norm[2]{\vu} = \sqrt{\S}$, since the entries of $\vu$ are $\pm 1$.
Next, note that, for all $\vr$ and all $\vr_j$, we have, by~\eqref{eq:gmnqform}, \begin{align} \frac{1}{\kappa^{m+m'+n+n'}} \left | G^{(m,n)}_{(m',n')}(\vr,\vr_j) \right| &= \frac{1}{\kappa^{m+m'+n+n'}} \left| \herm{\vx} \vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j) \vx \right| \leq \frac{1}{\kappa^{m+m'+n+n'}} \norm[2]{\vx}^2 \norm[\opnormss]{ \vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j) } \nonumber \\ &\leq c_1\frac{(2\pi N)^{m + m' + n + n'} }{\kappa^{m + m'+n+n'}} \sqrt{\L} \norm[2]{\vx}^2 \leq c_1 12^{\frac{m + m'+n+n'}{2}} \sqrt{\L} \norm[2]{\vx}^2 \nonumber \\ &\leq c_1 12^{\frac{3}{2}} \sqrt{\L} \norm[2]{\vx}^2, \label{eq:fbongmnra} \end{align} where we used Lemma \ref{lem:boundonVFnorm} to conclude $\norm[\opnormss]{ \vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j) } \leq \norm[F]{ \vect{V}^{(m,n)}_{(m',n')}(\vr,\vr_j) } \leq c_1 (2\pi N)^{m + m'+ n + n'} \sqrt{L}$ and \eqref{eq:fbongmnra} follows from $m+m'+n+n'\leq 3$ (recall that $m+n \leq 2$). Substituting \eqref{eq:fbongmnra} into \eqref{eq:ubqmnarr} and using that $\S \leq \L$ (by assumption \eqref{eq:condLleex}) yields \[ \frac{1}{\kappa^{m+n}} \left | Q^{(m,n)}(\vr) \right| \leq \sqrt{3} \, 12^{\frac{3}{2}} c_1 L^{3/2} \norm[\opnormss]{{\vect{L}}} \norm[2]{\vx}^2. \] It follows that (with $\frac{\tilde c}{2} = 2.5 \cdot 3 \cdot \sqrt{3} \, 12^{\frac{3}{2}} c_1$) \begin{align} \PR{\max_{\vr \in [0,1]^2, m+n\leq 2}\frac{1}{\kappa^{m+n}} \left | Q^{(m,n)}(\vr) \right| \geq \frac{\tilde c}{2} L^{3/2} } &\leq \PR{\norm[\opnormss]{{\vect{L}}} \norm[2]{\vx}^2 \geq 2.5 \cdot 3} \nonumber \\ &\leq \PR{\norm[\opnormss]{{\vect{L}}} \geq 2.5 } + \PR{ \norm[2]{\vx}^2 \geq 3} \label{eq:ublapr} \\ &\leq \frac{\delta}{2} \label{eq:funbmaxqrgl} \end{align} as desired.
Here, \eqref{eq:ublapr} follows from the union bound and \eqref{eq:funbmaxqrgl} follows from $ \PR{\norm[\opnormss]{{\vect{L}}} \geq 2.5} \leq \PR{\comp{\mathcal E}_{1/4} } \leq \frac{\delta}{4} $ (by \eqref{eq:normmLb} and application of Lemma \ref{lem:preptaubound}; note that the condition of Lemma \ref{lem:preptaubound} is satisfied by \eqref{eq:condLleex}) and $\PR{ \norm[2]{\vx}^2 \geq 3} \leq \frac{\delta}{4}$, shown below. Using that $4\log(4/\delta) \leq \L$ (by \eqref{eq:condLleex}), we obtain \begin{align} \PR{\norm[2]{\vx}^2 \geq 3 } &\leq \PR{\norm[2]{\vx}^2 \geq 2\left(1 + \frac{2 \log(4/\delta) }{L} \right) } \nonumber \\ &\leq \PR{\norm[2]{\vx} \geq \left(1 + \frac{\sqrt{2 \log(4/\delta)}}{\sqrt{L}} \right) } \leq e^{- \frac{2 \log(4/\delta) }{2}} = \frac{\delta}{4}, \end{align} where we used $\sqrt{2(1+\beta^2)}\geq (1+\beta)$, for all $\beta$, and a standard concentration inequality for the norm of a Gaussian random vector, e.g., \cite[Eq. 1.6]{ledoux_probability_1991}. This concludes the proof of \eqref{eq:QrmQrg} holding with probability at least $1-\frac{\delta}{2}$. \subsubsection{\label{sec:techres2} Proof of the fact that \eqref{eq:QrmgbarQrg} and \eqref{eq:QrmQrg} imply \eqref{eq:ledffbaere}:} Consider a point $\vr \in [0,1]^2$ and let $\vr_g$ be the point in $\Omega$ closest to $\vr$ in $\infty$-distance. By the triangle inequality, \begin{align} &\frac{1}{\kappa^{n+m}} \left| Q^{(m,n)}(\vr) - \bar Q^{(m,n)}(\vr) \right| \leq \nonumber \\ &\hspace{0.5cm}\frac{1}{\kappa^{n+m}} \left[ \left| Q^{(m,n)}(\vr) - Q^{(m,n)}(\vr_g) \right| + \left| Q^{(m,n)}(\vr_g) - \bar Q^{(m,n)}(\vr_g) \right| + \left| \bar Q^{(m,n)}(\vr_g) - \bar Q^{(m,n)}(\vr) \right| \right]. \label{eq:Qmqgrid} \end{align} We next upper-bound the terms in \eqref{eq:Qmqgrid} separately. With a slight abuse of notation, we write $Q^{(m,n)}(\tau,\nu) = Q^{(m,n)}(\transp{[\tau,\nu]}) \allowbreak = Q^{(m,n)}(\vr)$.
The first absolute value in \eqref{eq:Qmqgrid} can be upper-bounded according to \begin{align} \left| Q^{(m,n)}(\vr) - Q^{(m,n)}(\vr_g) \right| &= \left| Q^{(m,n)}(\tau,\nu) - Q^{(m,n)}(\tau,\nu_g) + Q^{(m,n)}(\tau,\nu_g) - Q^{(m,n)}(\tau_g,\nu_g) \right| \nonumber \\ &\leq \left| Q^{(m,n)}(\tau,\nu) - Q^{(m,n)}(\tau,\nu_g)\right| + \left| Q^{(m,n)}(\tau,\nu_g) - Q^{(m,n)}(\tau_g,\nu_g) \right| \nonumber \\ &\leq |\nu - \nu_g| \sup_{z} \left|Q^{(m,n+1)}(\tau,z)\right| + |\tau - \tau_g| \sup_{z} \left|Q^{(m+1,n)}(z,\nu_g)\right| \nonumber \\ &\leq |\nu- \nu_g| 2 \pi N \sup_{z} \left|Q^{(m,n)}(\tau,z)\right| + |\tau - \tau_g| 2 \pi N \sup_{z} \left|Q^{(m,n)}(z,\nu_g)\right|, \label{eq:ubqrmqrg} \end{align} where \eqref{eq:ubqrmqrg} follows from Bernstein's polynomial inequality, stated below (note that $Q^{(m,n)}(\tau,\nu)$ is a trigonometric polynomial of degree $N$ in both $\tau$ and $\nu$). \begin{proposition}[Bernstein's polynomial inequality {\cite[Cor.~8]{harris_bernstein_1996}}] Let $p(\theta)$ be a trigonometric polynomial of degree $N$ with complex coefficients $p_k$, i.e., $p(\theta) = \sum_{k=-N}^{N} p_k e^{i2\pi \theta k}$. Then \[ \sup_{\theta} \left| \frac{d}{d\theta} p(\theta) \right| \leq 2 \pi N \sup_{\theta} |p(\theta)|. \] \label{prop:bernstein} \end{proposition} Substituting \eqref{eq:QrmQrg} into \eqref{eq:ubqrmqrg} yields \begin{align} \frac{1}{\kappa^{m+n}} \left| Q^{(m,n)}(\vr) - Q^{(m,n)}(\vr_g) \right| \leq \frac{\tilde c}{2} L^{5/2} ( |\tau- \tau_g| + |\nu- \nu_g|) \leq \tilde c L^{5/2} \infdist{\vr - \vr_g} \leq \frac{\epsilon}{3}, \label{eq:diffQrQrg} \end{align} where the last inequality follows from \eqref{eq:gridmaxdist}. We next upper-bound the third absolute value in \eqref{eq:Qmqgrid}. Using steps analogous to those leading to \eqref{eq:diffQrQrg}, we obtain \begin{align} \frac{1}{\kappa^{m+n}} \left| \bar Q^{(m,n)}(\vr_g) - \bar Q^{(m,n)}(\vr) \right| \leq \frac{\epsilon}{3}.
\label{eq:barQrmbarQrg} \end{align} Substituting \eqref{eq:QrmgbarQrg}, \eqref{eq:diffQrQrg}, and \eqref{eq:barQrmbarQrg} into \eqref{eq:Qmqgrid} yields that \[ \frac{1}{\kappa^{n+m}} \left| Q^{(m,n)}(\vr) - \bar Q^{(m,n)}(\vr) \right| \leq \epsilon, \text{ for all } (m,n)\colon m+n \leq 2 \text{ and for all } \vr \in [0,1]^2. \] This concludes the proof of Lemma \ref{lem:lemdffqbarqevry}. \end{proof} \subsection{Step 3c: Ensuring that $\abs{Q(\vr)} < 1$ for all $\vr \notin \T$} \begin{lemma} Suppose that \[ L \geq \S c \log^3\left(\frac{c' L^6}{\delta} \right). \] Then with probability at least $1 - \delta$ the following statements hold: \begin{enumerate} \item \label{it:stat1} For all $\vr$ that satisfy $\min_{\vr_j \in \T} \infdist{\vr - \vr_j } \geq 0.2447/N$ we have that $ \abs{Q(\vr)} < 0.9963. $ \item \label{it:stat2} For all $\vr \notin \T$ that satisfy $0 < \infdist{\vr - \vr_j} \leq 0.2447/N$ for some $\vr_j \in \T$, we have that $\abs{Q(\vr)} < 1$. \end{enumerate} \end{lemma} \begin{proof} Choose $\epsilon =0.0005$. It follows from Lemma \ref{lem:lemdffqbarqevry} that \begin{align} \frac{1}{\kappa^{n+m}} \left| Q^{(m,n)}(\vr) - \bar Q^{(m,n)}(\vr) \right| \leq 0.0005 \label{eq:conddiffQbarQinfipr} \end{align} for all $(m,n)\colon m+n \leq 2$, and for all $\vr$ with probability at least $1 - \delta$. To prove the lemma we will show that statements \ref{it:stat1} and \ref{it:stat2} follow from \eqref{eq:conddiffQbarQinfipr} and certain properties of $\bar Q^{(m,n)}(\vr)$ established in \cite{candes_towards_2014}. Statement \ref{it:stat1} follows directly by combining \eqref{eq:conddiffQbarQinfipr} with the following result via the triangle inequality. \begin{proposition}[{\cite[Lem.~C.4]{candes_towards_2014}}] For all $\vr$ that satisfy $\min_{\vr_j \in \T} \infdist{\vr - \vr_j } \geq 0.2447/N$ we have that $ \abs{\bar Q(\vr)} < 0.9958.
$ \end{proposition} In order to prove statement \ref{it:stat2}, assume without loss of generality that $\vect{0} \in \T$, and consider $\vr$ with $|\vr| \leq 0.2447/N$. Statement \ref{it:stat2} is established by showing that the Hessian matrix of $\tilde Q(\vr) \defeq |Q(\vr)|$, i.e., \[ \vect{H} = \begin{bmatrix} \tilde Q^{(2,0)}(\vr) & \tilde Q^{(1,1)}(\vr) \\ \tilde Q^{(1,1)}(\vr) & \tilde Q^{(0,2)}(\vr) \end{bmatrix}, \quad \tilde Q^{(m,n)}(\vr) \defeq \frac{\partial^m }{ \partial \tau^m} \frac{\partial^n }{ \partial \nu^n} \tilde Q(\vr) \] is negative definite. This is done by showing that \begin{align} \mathrm{trace}(\vect{H}) = \tilde Q^{(2,0)} + \tilde Q^{(0,2)} < 0 \label{eq:traceHleq0} \\ \mathrm{det}(\vect{H}) = \tilde Q^{(2,0)} \tilde Q^{(0,2)} - (\tilde Q^{(1,1)})^2 > 0, \label{eq:detHgeq0} \end{align} which implies that both eigenvalues of $\vect{H}$ are strictly negative. To prove \eqref{eq:traceHleq0} and \eqref{eq:detHgeq0}, we will need the following result. \begin{proposition}[{\cite[Sec.~C.2]{candes_towards_2014}}] For $|\vr| \leq 0.2447/N$ and for $N \geq 512$, \begin{align} 1\geq \bar Q(\vr) \geq 0.6447 \\ \frac{1}{\kappa^2}\bar Q^{(2,0)}(\vr) \leq -0.3550 \\ \frac{1}{\kappa^2}|\bar Q^{(1,1)}(\vr)| \leq 0.3251 \\ \frac{1}{\kappa}|\bar Q^{(1,0)}(\vr)| \leq 0.3344. \end{align} \label{propcanc2} \end{proposition} Define $Q_R^{(m,n)} = \frac{1}{\kappa^{m+n}} \mathrm{Re}(Q^{(m,n)})$ and $Q_I^{(m,n)} = \frac{1}{\kappa^{m+n}} \mathrm{Im}(Q^{(m,n)})$.
We have that \[ \frac{1}{\kappa}\tilde Q^{(1,0)} = \frac{Q_R^{(1,0)}Q_R + Q_I^{(1,0)}Q_I }{|Q|} \] and therefore \begin{align} \frac{1}{\kappa^2} \tilde Q^{(2,0)} &= -\frac{(Q_R Q_R^{(1,0)} + Q_I Q_I^{(1,0)})^2 }{|Q|^3} + \frac{|Q^{(1,0)}|^2 + Q_R Q_R^{(2,0)} + Q_I Q_I^{(2,0)} }{|Q|} \nonumber \\ &=-\frac{Q_R^2 {Q_R^{(1,0)}}^2 + 2Q_R Q_R^{(1,0)} Q_I Q_I^{(1,0)} + Q_I^2 {Q_I^{(1,0)}}^2 }{|Q|^3} + \frac{ {Q_R^{(1,0)}}^2 + {Q_I^{(1,0)}}^2 + Q_R Q_R^{(2,0)} + Q_I Q_I^{(2,0)} }{|Q|} \nonumber \\ &= \left(1-\frac{Q_R^2}{|Q|^2}\right) \frac{{Q_R^{(1,0)}}^2}{|Q|} -\frac{2Q_R Q_R^{(1,0)} Q_I Q_I^{(1,0)} + Q_I^2 {Q_I^{(1,0)}}^2 }{|Q|^3} + \frac{{Q_I^{(1,0)}}^2 + Q_I Q_I^{(2,0)} }{|Q|}+ \frac{Q_R}{|Q|} Q_R^{(2,0)}. \label{eq:lhsoftildq20} \end{align} By Proposition \ref{propcanc2}, using the triangle inequality, and using the fact that $\bar Q^{(m,n)}(\vr)$ is real, the following bounds are in force: \begin{align*} Q_R(\vr) &\leq \bar Q(\vr) + \epsilon \leq 1+\epsilon \\ Q_R(\vr) &\geq \bar Q(\vr) - \epsilon \geq 0.6447 - \epsilon \\ |Q_I^{(m,n)}| &\leq \epsilon \\ Q_R^{(2,0)}(\vr) &\leq \frac{1}{\kappa^2}\bar Q^{(2,0)}(\vr) + \epsilon \leq -0.3550 +\epsilon \\ |Q_R^{(1,1)}| &\leq \frac{1}{\kappa^2}|\bar Q^{(1,1)}(\vr)| + \epsilon \leq 0.3251 + \epsilon \\ |Q^{(1,0)}_R(\vr)| &\leq \frac{1}{\kappa} |\bar Q^{(1,0)}(\vr)| + \epsilon \leq 0.3344 +\epsilon. \end{align*} Using these bounds in \eqref{eq:lhsoftildq20} with $\epsilon = 0.0005$ we obtain $\frac{1}{\kappa^2}\tilde Q^{(2,0)} < -0.3539$, which implies that \eqref{eq:traceHleq0} is satisfied. It remains to verify \eqref{eq:detHgeq0}.
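As a crude numerical cross-check of the bound $\frac{1}{\kappa^2}\tilde Q^{(2,0)} < -0.3539$ (a sanity check only, not part of the proof), one can upper-bound each of the four terms of \eqref{eq:lhsoftildq20} separately by its worst-case value under the bounds listed above:

```python
import math

eps = 0.0005                           # uniform bound on |Q^{(m,n)} - bar Q^{(m,n)}| / kappa^{m+n}
qR_lo, qR_hi = 0.6447 - eps, 1.0 + eps # range of Q_R from the Proposition
d10 = 0.3344 + eps                     # worst-case |Q_R^{(1,0)}|
d20 = -0.3550 + eps                    # worst-case (least negative) Q_R^{(2,0)}
absQ_lo = qR_lo                        # |Q| >= Q_R >= qR_lo

# upper bounds on the four terms of the expression for (1/kappa^2) tilde Q^{(2,0)}
t1 = (eps / absQ_lo) ** 2 * d10 ** 2 / absQ_lo            # (1 - Q_R^2/|Q|^2) Q_R'^2 / |Q|
t2 = (2 * qR_hi * d10 * eps * eps + eps ** 4) / absQ_lo ** 3
t3 = (eps ** 2 + eps * eps) / absQ_lo                     # (Q_I'^2 + Q_I Q_I'') / |Q|
ratio_lo = qR_lo / math.sqrt(qR_lo ** 2 + eps ** 2)       # lower bound on Q_R / |Q|
t4 = ratio_lo * d20                                       # dominant negative term

upper = t1 + t2 + t3 + t4
assert upper < -0.3539                                    # matches the bound used in the proof
```

The last term dominates; the first three contribute only at the $10^{-6}$ level.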
First note that \begin{align} &\frac{1}{\kappa^2} \tilde Q^{(1,1)} \nonumber \\ &= \frac{Q_R^{(1,1)}Q_R + Q_R^{(1,0)}Q_R^{(0,1)} + Q_I^{(1,1)}Q_I + Q_I^{(1,0)}Q_I^{(0,1)} }{|Q|} - \frac{ (Q_R^{(0,1)}Q_R + Q_I^{(0,1)}Q_I ) (Q_R^{(1,0)}Q_R + Q_I^{(1,0)}Q_I) }{|Q|^3} \nonumber \\ &= Q_R^{(1,1)} \frac{Q_R}{|Q|} + \frac{Q_R^{(1,0)}Q_R^{(0,1)}}{|Q|} \left(1- \frac{Q_R^2}{|Q|^2} \right) + \frac{ Q_I^{(1,1)}Q_I + Q_I^{(1,0)}Q_I^{(0,1)} }{|Q|} \nonumber \\ &- \frac{Q_R^{(0,1)}Q_R Q_I^{(1,0)}Q_I + Q_I^{(0,1)}Q_I (Q_R^{(1,0)}Q_R + Q_I^{(1,0)}Q_I) }{|Q|^3}. \label{eq:tilQ11} \end{align} Using the bounds above in \eqref{eq:tilQ11} yields, with $\epsilon = 0.0005$, that $\frac{1}{\kappa^2}|\tilde Q^{(1,1)}| \leq 0.3267$. With $\frac{1}{\kappa^2}\tilde Q^{(2,0)} < -0.3539$ and, by the analogous computation, $\frac{1}{\kappa^2}\tilde Q^{(0,2)} < -0.3539$, it follows that the determinant in \eqref{eq:detHgeq0} can be lower-bounded by \[ \frac{1}{\kappa^4}\,\mathrm{det}(\vect{H}) \geq 0.3539^2 - 0.3267^2 > 0.0185 > 0, \] i.e., \eqref{eq:detHgeq0} holds. This concludes the proof of Statement \ref{it:stat2}. \end{proof} \section*{Funding} VM was supported by the Swiss National Science Foundation fellowship for advanced researchers under grant PA00P2\_139678. \section*{Acknowledgments} RH would like to thank Helmut B\"{o}lcskei, C\'{e}line Aubel, Nora Loose, and Emmanuel Cand\`{e}s for helpful discussions. RH would also like to thank Emmanuel Cand\`{e}s for his hospitality during a visit to the Statistics Department at Stanford, and Helmut B\"{o}lcskei for his support and for initiating this visit. We would also like to thank the referees for helpful comments and suggestions, which greatly improved the manuscript. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} After a young star and its circumstellar disk have formed through the gravitational collapse of the protostellar cloud, accretion of matter from the star's nearest surroundings can continue in the form of clumpy accretion. \citet{1992PASP..104..479G} was probably the first to use this term; he invoked it to explain the strong extinction events observed in some young variables. Later, this type of accretion was considered by \citet{1996ARA&A..34..207H} as an explanation of the FUor phenomenon. This idea remains quite popular \citep{2010ApJ...713.1143Z, 2013ApJ...764..141B, 2018MNRAS.474...88H}. The formation of chondrules as a result of clumpy accretion was discussed by \citet{1998Icar..134..137T}. Obviously, the fall of a clump onto the disk should cause disturbances at the place of the fall. It is interesting to trace how such a disturbance develops and what structures in the disk it can produce. The detection of various structures in images of protoplanetary disks is one of the most interesting results obtained with the ALMA interferometer \citep[see, e.g.,][]{2018A&A...619A.161C, 2018ApJ...869L..43H, 2018ApJ...869L..42H, 2018ApJ...869L..50P}. Ring and spiral structures are observed most often. Less commonly, structures resembling highly elongated vortices are seen. A number of papers are devoted to theoretical studies of the formation of such structures.
Their formation has been associated with perturbations in the disks caused by the motion of forming planets \citep[e.g.,][]{2013A&A...549A..97R, 2015A&A...579A.106V, 2015ApJ...809...93D, 2016MNRAS.463L..22D, 2016ApJ...818...76J, 2018ApJ...866..110D}, with the development of various kinds of instabilities in the disks~\citep[e.g.,][]{2015ApJ...815L..15B, 2015ApJ...806L...7Z, 2015ApJ...813L..14B, 2016ApJ...821...82O,2009ApJ...697.1269J, 2014ApJ...796...31B, 2014ApJ...794...55T, 2018A&A...609A..50D}, with a large-scale vertical magnetic field \citep{2018MNRAS.477.1239S}, or with the destruction of large bodies in collisions \citep{2019ApJ...887L..15D,2020MNRAS.495..285N}. In all these papers, the source of the disturbance is in the disk itself. In this paper, we discuss an alternative scenario for the formation of the observed structures: we investigate, for the first time, the dynamical response of a circumstellar disk to the perturbation associated with a clumpy accretion event. Using hydrodynamic simulations, we calculate the disk images at $1$ mm and discuss the results in the context of interferometric observations of protoplanetary disks with ALMA. \section{Initial condition} The model consists of a young star of solar mass ($ M_{\ast} = M_{\odot} $) embedded in a gas disk of total mass $ M_{disk} = 0.01M_{\odot} $. At the beginning of the simulation, the disk matter was distributed in an azimuthally symmetric way between the radii $ r_{in} = 0.5$ and $r_{out} = 50$ AU. The initial density distribution of the disk is \begin{equation} \rho(r,z,0)=\frac{\Sigma_0}{\sqrt{2\pi}H(r)}\frac{r_{in}}{r}e^{-\frac{z^2}{2H^2(r)}}, \end{equation} where $\Sigma_0$ is an arbitrary scale parameter determined by the disk mass.
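The normalization $\Sigma_0$ follows from the disk mass: integrating the density over $z$ gives the surface density $\Sigma(r)=\Sigma_0\,r_{in}/r$, so $M_{disk}=2\pi\Sigma_0 r_{in}(r_{out}-r_{in})$. A minimal numerical sketch (in units of $M_{\odot}$ and AU, illustrative only):

```python
import numpy as np

M_disk = 0.01            # disk mass, M_sun
r_in, r_out = 0.5, 50.0  # inner and outer disk radii, AU

# Sigma(r) = Sigma_0 * r_in / r  =>  M_disk = 2 pi Sigma_0 r_in (r_out - r_in)
sigma0 = M_disk / (2.0 * np.pi * r_in * (r_out - r_in))   # ~6.4e-5 M_sun / AU^2

# cross-check by trapezoidal integration of 2 pi r Sigma(r)
r = np.linspace(r_in, r_out, 100001)
f = 2.0 * np.pi * r * sigma0 * r_in / r
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
assert abs(mass - M_disk) / M_disk < 1e-9
```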
Hydrostatic scale height is $H(r)=\sqrt{\frac{\kappa T_{mid}(r) r^3}{GM_{\ast} \mu m_H}}$, where $\kappa$, $G$ and $m_H$ are the Boltzmann constant, the gravitational constant and the mass of a hydrogen atom and $\mu=2.35$ is the mean molecular weight \citep{1994A&A...286..149D}. Following~\citet{1997ApJ...490..368C} we determine the law of midplane temperature distribution $T_{mid}(r)=\sqrt[4]{\frac{\gamma}{4}}\sqrt{\frac{R_{\ast}}{r}}T_{\ast}$, where $\gamma=0.05$ \citep{2004A&A...421.1075D}. The calculations were performed in the local thermodynamic equilibrium approximation $P(r,z,t)=c^2(r)\rho(r,z,t)$, where $P$ is local pressure at the moment $t$ and $c$ is a sound speed. It was assumed that in the vertical direction along $z$ the disk is isothermal. The temperature of the star was assumed to be $T_{\ast} = 5780$ K and star radius is $R_{\ast}=R_{\odot}$. The disk relaxed during $600$ years, and then the remnant of the fallen gas clump was added into it. \subsection{Impulse approximation to the clump-disk collision} When a clump falls onto a disk, part of its kinetic energy is converted into thermal energy. An infrared spot should appear on the image of the disk where the clump fell. The thermal relaxation time in protoplanetary disks at the distance $\geq20$ AU from the star is much less than the local orbital period \citep{2017A&A...605A..30M}. Therefore, the remnant of the fallen clump quickly comes to thermodynamic equilibrium with the matter of the disk. It participates in the Keplerian motion of the matter, while maintaining the residual velocity component orthogonal to the plane of the disk. At the initial moment of time, we assume that the remnant of the clump has already reached thermodynamic equilibrium with the matter of the disk. 
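For orientation, the disk temperature and hydrostatic scale height implied by the adopted laws can be evaluated at, e.g., $R=20$ AU, the fall radius used in most of our models. The sketch below uses standard cgs constants and is illustrative only (values are approximate).

```python
import math

# physical constants (cgs)
k_B = 1.380649e-16      # Boltzmann constant, erg/K
G = 6.674e-8            # gravitational constant
m_H = 1.6735e-24        # hydrogen atom mass, g
M_sun = 1.989e33        # g
R_sun = 6.957e10        # cm
AU = 1.496e13           # cm

mu, gamma = 2.35, 0.05
T_star, R_star, M_star = 5780.0, R_sun, M_sun

def T_mid(r_au):
    """Midplane temperature law adopted in the text (Chiang & Goldreich form)."""
    r = r_au * AU
    return (gamma / 4.0) ** 0.25 * math.sqrt(R_star / r) * T_star

def H(r_au):
    """Hydrostatic scale height, returned in AU."""
    r = r_au * AU
    h = math.sqrt(k_B * T_mid(r_au) * r ** 3 / (G * M_star * mu * m_H))
    return h / AU

print(T_mid(20.0), H(20.0))   # roughly 30 K and about 1 AU at 20 AU
```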
The remnant was generated as a density perturbation of the disk in the form of an annular segment bounded by the radii $R_0$ and $R_0+dR$ and distributed over the azimuthal angle $\phi$, with the axis of symmetry along the negative part of the $x$-axis ($\phi=30^{\circ}$ for all models). The density of matter in the disturbance exceeded the local density of the disk by a factor of $\displaystyle K=\frac{\Sigma_{cl}}{\Sigma_d}$, where $\Sigma_{cl}$ and $\Sigma_d$ are the local surface densities of the remnant and the disk, respectively. The remnant moves prograde. The particle velocity of the remnant is $\displaystyle V(R)=L\cdot V_k(R)$, where $V_k(R)$ is the Keplerian velocity at a given distance from the star and $L$ is a parameter of the problem. The velocity vector had a residual inclination to the disk plane, $\displaystyle \sin I=\frac{V_z(R)}{V(R)}$ (Fig.~\ref{fig:disk}). The residual inclination angle of the remnant depends on the initial angle of the fall and on the amount of kinetic energy that is spent on heating the disk in the region of the fall. We considered a number of possible options for the magnitude and inclination of the velocity vector. The parameters of all calculated models are given in Table~\ref{tab:models}. \begin{figure}[ht!] \plotone{disk.eps} \caption{\normalsize Particle distribution at the initial moment of the motion of the clump remnant. Top: the projection onto the plane of the disk; bottom: a cross-section along the $y$ axis. \label{fig:disk}} \end{figure} \begin{figure*}[ht!] \plotone{Fig2.eps} \caption{\normalsize The average value of the $z$ coordinate of the particles in the cells of $R$, $\phi$. The model parameters are $K=3$, $I=30^\circ$ and $L=0.8$. The time in years is in the upper right corner of the pictures.
\label{fig:08inclImg}} \end{figure*} \begin{deluxetable*}{ccccccccc} \tablenum{1} \tablecaption{The model parameters \label{tab:models}} \tablewidth{0pt} \tablehead{ \colhead{L} & \colhead{I} & \colhead{K} & \colhead{R} & \colhead{dR} & \colhead{$\phi$} & \colhead{Remnant mass} & \nocolhead{} & \colhead{Lifetime}\\ \colhead{Float} & \colhead{Degrees} & \colhead{Number} & \colhead{AU} & \colhead{AU} & \colhead{Degrees} & \colhead{Jupiter mass} & \colhead{Structures} & \colhead{yrs} } \decimalcolnumbers \startdata 0.8 & 5 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Horseshoe & $> 600$ \\ 0.8 & 10 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Horseshoe & $> 600$\\ 0.8 & 20 & 3 & 20 & 5 & 30 & 0.11 & Arc, Faint two-arm spiral, Horseshoe & $> 600$\\ 0.8 & 30 & 3 & 20 & 5 & 30 & 0.11 & Arc, Faint two-arm spiral, Horseshoe & $> 600$\\ 0.8 & 10 & 5 & 20 & 5 & 30 & 0.19 & Arc, One-arm spiral, Horseshoe & $> 600$\\ \hline 1 & 5 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Multi Rings, Ring & $\sim 1000$ \\ 1 & 10 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Ring & $> 600$\\ 1 & 20 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Faint two-arm spiral & $> 600$\\ 1 & 30 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Faint two-arm spiral & $> 600$\\ 1 & 5 & 5 & 20 & 5 & 30 & 0.19 & Arc, One-arm spiral, Multi Rings, Ring & $> 600$\\ 1 & 5 & 10 & 20 & 5 & 30 & 0.38 & Arc, One-arm spiral, Multi Rings, Ring & $> 600$\\ \hline 1.2 & 5 & 3 & 20 & 5 & 30 & 0.11 & Arc, One-arm spiral, Multi Rings, Ring & $> 600$ \\ 1.2 & 10 & 3 & 20 & 5 & 30 & 0.11 & Arc, Bright two-arm spiral & $> 600$\\ 1.2 & 20 & 3 & 20 & 5 & 30 & 0.11 & Arc, Bright two-arm spiral & $> 600$\\ 1.2 & 30 & 3 & 20 & 5 & 30 & 0.11 & Arc, Bright two-arm spiral, Asymmetric ring & $\sim 2000$\\ 1.2 & 30 & 1 & 20 & 5 & 30 & 0.04 & Arc, Bright two-arm spiral & $> 600$\\ \hline 0.8 & 30 & 3 & 10 & 2 & 30 & 0.04 & Arc & $\sim 100$\\ 1 & 30 & 3 & 10 & 2 & 30 & 0.04 & Arc & $\sim 100$\\ 1.2 & 30 & 3 & 10 & 2 & 30 &
0.04 & Arc, Faint two-arm spiral, Multi Rings & $\sim 450$\\ \enddata \tablecomments{The ``Structures'' column lists the types of observed asymmetries in the order of their appearance in the disk images. The ``Lifetime'' column gives the lifetime of the long-lived structures. } \end{deluxetable*} \begin{figure*}[ht!] \plotone{Fig1.eps} \caption{\normalsize The surface density multiplied by the distance from the center of mass ($\Sigma R$). The model parameters are $K=3$, $I=10^\circ$ and $L=0.8$. The time in years is in the upper right corner of the pictures. \label{fig:08sig}} \end{figure*} \begin{figure}[ht!] \plotone{Fig3.eps} \caption{\normalsize The average value of the $z$ coordinate of the particles in the cells of $R$, $\phi$ along the $x$ (left) and $y$ (right) axes after $600$ years. The model parameters are $K=3$, $L=0.8$ and the angles are in the upper right corner of the pictures. \label{fig:08inclxy}} \end{figure} \begin{figure*}[h] \plotone{Fig4.eps} \caption{\normalsize The images at a wavelength of 1 mm. The color shows the flux of radiation ($F_\nu$) multiplied by $R^2$ in conventional units. The model parameters are $K=3$, $I=10^\circ$ and $L=0.8$. The time in years is in the upper right corner of the pictures. \label{fig:08img}} \end{figure*} \section{Methods} The evolution of the remnant in the gas disk was simulated by the SPH (smoothed particle hydrodynamics) method. The calculations were performed using the code Gadget-2~\citep{2001NewA....6...79S, 2005MNRAS.364.1105S} modified by us \citep{2016Ap.....59..449D}. In total, from $5\cdot 10^5$ to $2.5\cdot 10^6$ particles of the gas disk and from $5\cdot 10^3$ to $2\cdot 10^5$ particles of the perturbation were involved in the simulations. The calculations took into account the self-gravity of the disk. The simulated region was divided into $200\times30\times90$ cells in spherical coordinates ($R,\theta,\phi$), in which the average values of the SPH particle density were determined. 
We assume that dust particles with sizes of 1, 10, and 100 microns and 1 mm are well mixed with the gas and distributed according to the law $\frac{dn(s)}{ds}\propto s^{-3.5}$, where $n$ is the number density and $s$ is the size of the dust grain \citep{1969JGR....74.2531D}. The dust-to-gas mass ratio in the disk was $0.01$, the average value for the interstellar medium. The dust opacity was calculated using Mie theory for magnesium-iron silicates~\citep{1995A&A...300..503D}. The RADMC-3D code~\citep{2012ascl.soft02015D} was used for the 3-D radiative transfer calculations. \begin{figure*}[ht!] \plotone{Fig5.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08sig} for model parameters $K=3$, $I=10^\circ$ and $L=1$. \label{fig:1sig}} \end{figure*} \begin{figure*}[ht!] \plotone{Fig6.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08img} for model parameters $K=3$, $I=10^\circ$ and $L=1$. \label{fig:1img}} \end{figure*} \begin{figure*}[ht!] \plotone{Fig7.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08img} for model parameters $K=3$, $L=1$ after $600$ years. The angle $I$ is in the upper right corner of the pictures. \label{fig:img600}} \end{figure*} \begin{figure*}[ht!] \plotone{Fig8.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08sig} for model parameters $K=3$, $I=10^\circ$ and $L=1.2$. \label{fig:1.2sig}} \end{figure*} \begin{figure}[ht!] \plotone{Fig9.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08inclxy} for model parameters $K=3$, $L=1.2$.\label{fig:12inclxy}} \end{figure} \begin{figure*}[ht!] \plotone{Fig10.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08img} for model parameters $K=3$, $L=1.2$ after $600$ years. The angle $I$ is in the upper right corner of the pictures. \label{fig:img1_600}} \end{figure*} \begin{figure}[ht!] \plotone{Fig11.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08img} for model parameters $K=1$, $I=30^\circ$, $L=1.2$ after $600$ years. \label{fig:K1I30}} \end{figure} \begin{figure*}[ht!] 
\plotone{Fig12.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08img} for two models with $2.5\cdot 10^6$ particles. The model parameters are $K=3$, $I=5^\circ$, $L=1$ (left) and $K=3$, $I=30^\circ$, $L=1.2$ (right) at the time $1000$ years. \label{fig:longImg}} \end{figure*} \begin{figure}[ht!] \plotone{Fig13.eps} \caption{\normalsize The same as in Fig.~\ref{fig:longImg} for the model with parameters $K=3$, $I=30^\circ$, $L=1.2$ at the time $2000$ years. \label{fig:im2000}} \end{figure} \begin{figure}[ht!] \plotone{Fig14.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08inclxy} for models with parameters $K=3$, $I=5^\circ$, $L=1$ (top) and $K=3$, $I=30^\circ$, $L=1.2$ (bottom). The time in years is in the upper right corner of the pictures. \label{fig:longIncl}} \end{figure} \begin{figure*}[ht!] \plotone{Fig15.eps} \caption{\normalsize The same as in Fig.~\ref{fig:08img} for models with parameters $K=3$, $I=30^\circ$, $R_0=10$ AU and $dR=2$ AU after $36.5$ years. The value of $L$ is in the upper right corner of the pictures. \label{fig:closeImg}} \end{figure*} \begin{figure}[ht!] \plotone{Fig16.eps} \caption{\normalsize The same as in Fig.~\ref{fig:closeImg} for the model with parameters $K=3$, $I=30^\circ$, $L=1.2$ at the time $400$ years. \label{fig:close400}} \end{figure} \begin{figure}[ht!] \plotone{Fig17.eps} \caption{\normalsize The average value of the $z$ coordinate along the $y$ axis after $365$ (left) and $1460$ (right) years. The model parameters are $K=3$, $I=30^\circ$. The values of $L$ are in the upper right corner of the pictures. \label{fig:yincl}} \end{figure} \section{Results} The emergence of the remnant of the clump of matter in the disk leads to the propagation of density waves in the horizontal and vertical directions. The strongest perturbations arise at large values of the parameters $K$, $L$, and $I$, as expected. Due to the differential rotation of the Keplerian disk, the remnant stretches and transforms over time. 
A local increase in the surface density leads to the appearance of corresponding large-scale inhomogeneities in the disk images. \subsection{Perturbations at large radii} For the models discussed here, the initial position of the disturbing remnant of the clump was set equal to $R_0 = 20$ AU with a step of $dR = 5$ AU. Calculations have shown that the parameter $L$, which characterizes the kinetic energy of the falling clump, has the greatest influence on the type of disturbance in the disk. Therefore, we will sequentially discuss the three energy regimes considered in our models. \subsubsection{Sub-Keplerian perturbations} In this case, when the parameter is $L = 0.8$, due to the differential rotation of the Keplerian disk the remnant stretches and transforms into an arc segment resembling a cyclonic vortex, and then turns into a spiral during one revolution around the star ($\sim 125$ years). Since the disk matter is involved in its motion, spiral dips and density thickenings are visible in the disk (Fig.~\ref{fig:08sig}). In the central part of the disk, the spiral splits into two branches. The spiral structure quickly ($\sim 300$ years) twists into an asymmetric ring, which, dispersing in the disk, retains the asymmetry until the end of the calculations ($600$ years). This asymmetry rotates with the disk. A wave also passes along the disk in the vertical direction, propagating both inward and to the edge of the disk. The perturbation twists the central plane of the disk. The maximum distortion of the disk plane occurs near the $y$ axis, but does not coincide with its position. The inner parts of the disk incline relative to the periphery. Over time, the radius of the outer boundary of the inclined area increases, reaching 30 AU by the time of $600$ years (Fig.~\ref{fig:08inclImg}). 
In this case, the tilt of the disk plane relative to the initial position is approximately $0.2^\circ$ at $I = 5^\circ$ and $0.9^\circ$ in the case of $I = 30^\circ$. An increase in the angle $I$ does not affect the speed of propagation of the global perturbation to the edge of the disk (Fig.~\ref{fig:08inclxy}). Increasing the initial density of the clump (parameter $K$) increases the vertical distortion of the disk. The perturbations described above manifest themselves in the images of protoplanetary disks. The form of the asymmetric structures in the images corresponds to the perturbations of the density of the disk matter. However, on the periphery of the disk a shadow from the matter above the disk plane is also visible. The asymmetric ring-shaped structure in the images has a horseshoe-shaped form (Fig.~\ref{fig:08img}). Calculations have shown that the minimum value of $K$ at which this structure is visible in the images is $3$ (about $0.1$ of the Jupiter mass). An increase in the parameter $K$ to $5$ increases the brightness of the structure and its lifetime, but its horseshoe form is preserved. \subsubsection{Keplerian perturbations} For this class of models, the phase of disintegration of the clump remnant during the first orbit is similar to the previous one (Fig.~\ref{fig:1sig}). The arc segment is converted into a one-arm spiral, which twists into a symmetric ring-shaped structure during the next few revolutions ($\sim 500$ years). The radius of distortion of the central plane of the disk relative to the initial position grows faster than in the previous case, reaching $40$ AU by the time of $600$ years. The maximum inclination angle is $0.2^\circ$ in the case $I=5^\circ$ and $0.8^\circ$ at $I = 30^\circ$. The direction of maximum distortion in the vertical direction is also close to the $y$ axis, but does not coincide with it. The asymmetries visible in the images of the disk likewise correspond to the surface density (Fig.~\ref{fig:1img}). 
In this case, the propagation of the density wave gives a multi-lane image of the protoplanetary disk at a certain point in time (right image of Fig.~\ref{fig:1img}). An increase in the angle $I$ affects the image of the ring-shaped structure. Two weakly pronounced symmetric spirals are visible in the images instead of the ring if $I\geq20^\circ$ (Fig.~\ref{fig:img600}). \subsubsection{Super-Keplerian perturbations} In this case, the motion of the clump matter in the protoplanetary disk causes severe density perturbations and significantly distorts the disk in the vertical direction. As in the previous cases, during one revolution of the clump the vortex-like structure is stretched into a spiral, which is converted into two spirals during the next revolution (Fig.~\ref{fig:1.2sig}). Each of the spirals is logarithmic, and they are shifted in phase by $180^\circ$ relative to each other. The form of the spirals depends weakly on the inclination angle $I$. Disk distortion in the vertical direction in this case differs from the models described above: the periphery of the disk is distorted, while the inner parts of the disk deform more weakly. The waves are still propagating along the disk in the vertical direction at the time of 600 years, as seen from Fig.~\ref{fig:12inclxy}. In this case, the distortion of the inner region depends on the inclination angle and has the opposite character for $I<20^\circ$ and $I\geq 20^\circ$. With an increase in the inclination angle $I$, a noticeable asymmetry of the spirals on the periphery of the disk becomes visible in the images (Fig.~\ref{fig:img1_600}). For this class of models, the case $K = 1$ (corresponding to a remnant mass of $\sim 12 M_{\oplus}$) was also considered. In this case, the perturbation is weaker; however, the two-arm spiral can still be identified in the disk images. It is more pronounced at a larger inclination angle $I$ (Fig.~\ref{fig:K1I30}). 
\subsubsection{Long-term dynamics} For the models described above, the number of SPH particles was $5\cdot 10^5$. These models have a lower resolution compared to the models whose calculations involved $2.5\cdot 10^6$ particles. However, the calculations showed that at the initial phases of clump destruction, the images of disks obtained from models with a small number of particles show the same structures as the more accurate models. Nevertheless, to study the long-term dynamics of the fallen clump remnant, a higher resolution is required. We have calculated two limiting cases that correspond to the parameters causing the minimum ($K=3,I=5,L=1$) and maximum ($K=3,I=30,L=1.2$) disturbance in the disk. Over time, the density waves scatter and all structures settle down to the plane of the disk. The ring of the first model stretches along the radius and loses brightness, mixing with the matter of the protoplanetary disk over time. It is faintly noticeable $1000$ years after the fall of the clump (Fig.~\ref{fig:longImg} left); after that it is no longer visible against the background of the disk matter. For the second model, spiral waves are still noticeable after $1000$ years (Fig.~\ref{fig:longImg} right), but after $\sim 2000$ years they completely disappear. An asymmetric ring structure can be seen in the disk by the time of 2000 years (Fig.~\ref{fig:im2000}). The characteristic time of the dynamic relaxation of the disk after the fall of the clump also depends on the place of its fall. For example, with $R_0 = 30$ AU and the same clump parameters as in the previous model, the lifetime of the spirals and ring structures generated by its fall increases to $4 \times 10^3$ years. At $R_0 = 50$ AU, the characteristic relaxation time of disturbances in the disk is even longer: $\sim 10^4$ years. Fig.~\ref{fig:longIncl} shows that the plane of the disk settles to its original position over time. 
However, even 2000 years after the fall of the clump, a slight inclination of the disk plane near the $y$ axis remains. For the first model it is $\sim 0.14^\circ$, and for the second $\sim 0.72^\circ$. \subsection{Perturbations at small radii} In this class of models, the perturbation was located near the star at the distance $R_0 = 10$ AU with a step of $dR = 2$ AU. The clump had parameters $\phi=30^\circ$, $K=3$ and $I=30^\circ$; the parameter $L$ was varied. The mass of the clump was about $13 M_{\oplus}$. Fig.~\ref{fig:closeImg} shows images of the disk for three models with $L$ equal to $0.8$, $1$ and $1.2$ after one period ($36.5$ years). Bright dense structures and areas of shadow, caused by matter rising above the plane of the disk, are visible in the disk. However, all structures are scattered after the next period for the models with $L \leq 1$. For the case $L=1.2$, waves propagate along the disk, which between 250 and 500 years can be seen in the images as a multi-lane structure (Fig.~\ref{fig:close400}). In the vertical direction, the disk is distorted in all considered cases. The maximum distortion is reached near the $y$ axis, as for the models farther from the star. Fig.~\ref{fig:yincl} shows the average values of $z$ along the $y$ axis for two points in time, $365$ and $1460$ years. One can see that in all cases the perturbation propagates outward from the inner part of the disk, tilting its central plane. With an increase in $L$, the final inclination of the disk increases, but remains within $0.5^\circ$. \section{Conclusion} Calculations have shown that at the initial stages of the disintegration of the clump remnant, the structures that are visible in the images of the protoplanetary disk are similar for the entire set of models. However, the shape of the final long-lived structure primarily depends on the kinetic energy of the falling clump. 
During the first revolution of the center of the remnant (at the initial moment of time), an arc-like structure resembling a vortex is visible in the disk image. Similar structures are observed, for example, in the objects HD 135344B \citep{2018A&A...619A.161C} and HD 143006 \citep{2018ApJ...869L..50P}. At the next stage of evolution, the image shows a tightly wound spiral, as, for example, in the case of the object HD 163296 \citep{2018ApJ...869L..42H}. In the case when the residual velocity of the remnant does not exceed the Keplerian velocity, the long-lived structure is a ring, which can also be asymmetric. In this case, at a certain moment in time, the passage of a wave over the disk can give a multi-lane structure in the image. A ring-shaped structure is visible in the images of a number of objects, for example, HD 169142 \citep{2017A&A...600A..72F}, HD 97048 \citep{2017A&A...597A..32V}, RU Lup, Elias 24, AS 209, GW Lup \citep{2018ApJ...869L..42H}. In the case of a high kinetic energy of the clump, a two-arm spiral appears in the disk image, which, after several thousand years, transforms into an asymmetric ring. Two-arm spirals were obtained in images of the objects Elias 27, IM Lup, WaOph 6 \citep{2018ApJ...869L..43H}. The median age of these sources is $\sim 1$ Myr~\citep{2018ApJ...869L..42H}, and the youngest sources have estimated ages of a few tenths of a Myr~\citep{2018ApJ...869L..43H}. Since the velocity vector of the remnant has a residual inclination relative to the plane of the disk, the disk is distorted. However, over time, the inclination of the plane of the disk relative to its original position decreases. The tilt angle at the end of the calculations does not exceed $1^\circ$. Probably, a larger mass of the clump remnant is required for a more noticeable change in the inclination of the disk. In this work, we were looking for the minimum mass of the remnant that develops into a structure visible in the image of the protoplanetary disk. 
It turned out that in the case of a high-energy fall, the minimum mass of the remnant of the clump is $\sim 10 M _{\oplus}$. If the residual velocity of the remnant does not exceed the Keplerian one, then the minimum mass is $\sim 0.1M_J$. Ring-shaped structures formed from the material of the remnant of the fallen clump and the protoplanetary disk are long-lived and can exist for more than $3000$ years. Thus, with a sufficiently large mass of the falling clump, the evolution of the disturbance can lead to the formation of a planet on an inclined orbit. It should be noted that the Toomre parameter in the disks under consideration is $\sim 40$ at a distance of $20$ AU. Consequently, in a more massive disk, the fall of a clump several times denser than the material of the disk can trigger the process of gravitational collapse and the formation of a planet. The fall of a clump near the star can cause not only a FUOR flare, but also a strong increase in circumstellar extinction, leading to a deep and prolonged weakening of the optical brightness of the star and an increase in its infrared radiation. Such events were observed in three young objects: V1184 Tau~\citep{2009AstL...35..114G}, RW Aur~\citep{2015IBVS.6143....1S,2019A&A...625A..49K} and AA Tau~\citep{2021AJ....161...61C}. Let us make a rough estimate of the additional mass of circumstellar matter that can be added to the mass of the disk during the lifetime of a star due to episodic falls of clumps. Suppose that the average age of stars with protoplanetary disks is $10^6$ years~\citep{2018ApJ...869L..42H} and the average lifetime of one disturbance is $3 \times 10^3$ years. This gives a probability of observing one event of $P_1 \sim 3\times 10^{-3}$. A probability close to unity is obtained when $\sim 3\times 10^2$ clumps fall during the disk lifetime ($\sim 1$~Myr). 
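The arithmetic behind this estimate, including the added disk mass discussed next, can be checked in a few lines (a sketch with our own variable names, taking $1\,M_J \approx 9.55\times10^{-4}\,M_\odot$):

```python
# Back-of-the-envelope check of the clumpy-accretion estimate.
# Input numbers are the ones quoted in the text; the names are ours.
DISK_AGE_YR = 1.0e6           # average age of stars with protoplanetary disks
EVENT_LIFETIME_YR = 3.0e3     # average lifetime of one disturbance
M_CLUMP_MSUN = 0.1 * 9.55e-4  # clump mass: 0.1 Jupiter masses in solar masses

p_one_event = EVENT_LIFETIME_YR / DISK_AGE_YR  # chance of catching one event
n_falls = 1.0 / p_one_event                    # falls for near-unity coverage
added_mass = n_falls * M_CLUMP_MSUN            # extra disk mass over lifetime
mean_rate = added_mass / DISK_AGE_YR           # equivalent rate, M_sun per yr
```

With these inputs the script reproduces the figures quoted in the text: $P_1 \approx 3\times10^{-3}$, an added mass of $\approx 0.03\,M_\odot$, and a mean rate of $\approx 3\times10^{-8}\,M_\odot\,\mathrm{yr}^{-1}$.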
If a clump mass of $0.1 M_J$ is necessary to create a strong disturbance (see above), the disk mass will increase during this time by $0.03 M_\odot$, corresponding to an average accretion rate onto the disk of $3\times 10^{-8}\,M_\odot\,\mathrm{yr}^{-1}$. Such an increase in mass is not critical for a typical disk mass of $0.01-0.2 M_\odot$ and an accretion rate onto a young star of $\sim 10^{-7}-10^{-8}\,M_\odot\,\mathrm{yr}^{-1}$. Thus, the mechanism of clumpy accretion in protoplanetary disks can explain the formation of the main types of structures identified in images of protoplanetary disks. In addition, a single clump falling at an angle onto a protoplanetary disk can produce multi-lane bright ring-shaped structures. So far, we have demonstrated the fundamental possibility of obtaining the observed structures in protoplanetary disks within the model of clumpy accretion. Given the complexity of this process, a more detailed consideration is needed, taking into account the thermal regime in the perturbed region of the disk. {\bf Acknowledgments.} It is a pleasure to thank the referee for valuable and useful remarks. The authors acknowledge the support of the Ministry of Science and Higher Education of the Russian Federation under the grant 075-15-2020-780 (N13.1902.21.0039). \software{Gadget-2~\citep{2001NewA....6...79S, 2005MNRAS.364.1105S}, RADMC-3D~\citep{2012ascl.soft02015D}}
\section{Introduction} \label{sec:intro} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{motivation.pdf} \caption{\textbf{A comparison between Mod-Squad\xspace and MoE ViT.} Our key motivation is that experts should leverage commonalities in some tasks (cooperation) but focus on a subset of tasks that require specific features and do not interfere with each other (specialization). } \label{fig:motivation} \end{figure} Computer vision involves a great number of tasks including recognition, depth estimation, edge detection, etc. Some of them have a clear and strong relationship: {they are likely to benefit from shared features. An example would be a task to classify cars and pedestrians and a task to segment the same classes. Other tasks appear to be less related: it is not clear what features they would share. An example could be tumor detection in medical images and face recognition.} Multi-task learning (MTL) aims to model the relationships among tasks and build a unified model for a diverse set of tasks. On the one hand, tasks often benefit by sharing parameters, i.e., {\bf cooperation}. On the other hand, some tasks may require specialized expertise that only benefits that single task, i.e., {\bf specialization}. A good MTL system should be flexible to optimize experts for the dual purposes of cooperation and specialization. There are two well-known challenges in MTL: (1)~gradient conflicts across tasks~\cite{chen2020just, yu2020gradient}; and (2)~how to design architectures that have both high accuracy and computational efficiency. Previous efforts include manually designing architectures \cite{caruana1997multitask} or conducting neural architecture search \cite{ahn2019deep} to induce cooperation and specialization in different parts of the model. However, these methods either require heavy manual customization, reducing generality and limiting applicability, or require very long training times. 
To address these challenges, we introduce {\bf Mod-Squad}, a new model that constructs a Mixture of Experts (MoE)~\cite{shazeer2017} to be {\bf mod}ularized multi-task learners (a {\bf squad}). { Our design allows experts to cooperate on tasks {\bf when it is helpful}, rather than penalizing experts that do not participate in {\em every} task. At the same time, some experts naturally develop a deep specialization in particular tasks, improving performance.} {The left figure in Fig.~\ref{fig:motivation} shows an example of the specialization and cooperation of experts in Mod-Squad\xspace.} A further and important side benefit, discussed below, is that this sparsification of experts allows our model to be decomposed into {much smaller single-task models that perform extremely well}. We achieve these goals by first integrating mixture of experts (MoE) layers into our vision transformer~\cite{dosovitskiy2021an} backbone network. The motivation is to divide the model into groups of experts, and for each expert to construct a minimum part of the model that can be shared among tasks or be specialized for one task. The experts can have any network structure (e.g., MLP or attention network~\cite{zhang2022mixture}) so that we can incorporate advanced model designs. Our modular design allows cooperation and specialization via the distribution of tasks to experts and also experts to tasks. Below, we formalize this idea mathematically by analyzing the probability distribution over tasks and experts, and using a novel loss function to induce a specific structure on this distribution. Many previous MoE works~\cite{shazeer2017, riquelme2021scaling, zhang2022mixture} use a load-balancing loss that encourages the frequency of expert usage (across all tasks and batches) to be highly similar. Some MoE methods~\cite{liang2022m, mustafa2022multimodal} directly apply this loss after the forward pass of each task on the multi-task scenario so that each task evenly uses all experts. 
However, this approach may force an expert to serve conflicting tasks whose learning gradients counteract each other. In other words, while an expert may benefit from being shared among certain pairs of tasks, it may be harmed by being forced to share among other pairs of tasks. This explains the difficulty of training multi-task models under such an expert-balancing loss. In comparison, we contend that experts should leverage commonalities in some tasks (cooperation) but also create a subset of experts that learn specific features (as needed by some tasks) and do not interfere with each other (specialization). Such an assignment of tasks to experts can be represented via \textbf{a sparse but strong dependence between experts and tasks}. Fig.~\ref{fig:motivation} illustrates this key difference between our model and previous MoE work, showing how our model induces a sparser structure in the assignment of experts to tasks. To implement this idea, we add a loss term to maximize the mutual information between experts and tasks. This induces a strong dependency between experts and tasks, with each task heavily related to a small set of experts and vice versa. Interestingly, we find that our model converges to a state in which, after training, most experts are never or rarely used for many tasks (evidence of specialization), but the experts are still balanced in their activation frequency. This property enables us to extract a compact sub-network from the giant model for each task. The small networks extracted in this fashion work independently as standalone models for individual tasks with {\em no performance drop}. This property enables us to train a giant, sparse model in a scaled-up multi-task learning scenario and later get compact sub-networks for each task with high performance. 
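For reference (our notation, not a verbatim formula from the paper), the mutual information between the task variable $T$ and the expert variable $E$ of a layer, which the added loss term maximizes, takes the standard form

```latex
I(T;E) \;=\; \sum_{i,j} P(T_j)\, P(E_i \mid T_j)\,
             \log \frac{P(E_i \mid T_j)}{P(E_i)},
\qquad
P(E_i) \;=\; \sum_{j} P(T_j)\, P(E_i \mid T_j),
```

so a sparse but strong task--expert dependence (each task concentrating its routing mass on a few experts) drives $I(T;E)$ up, while task-agnostic, uniform expert usage drives it toward zero.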
\noindent Our main contributions can be summarized as follows: \vspace{-2mm} \begin{itemize}[align=right,itemindent=0em,labelsep=2pt,labelwidth=1em,leftmargin=*,itemsep=0em] \item \textbf{Modular multi-task learner.} We propose a new modular backbone model, {Mod-Squad\xspace}, that is composed of a large group of attention and feed-forward experts. The experts can be flexibly assigned a subset of tasks to achieve specialization and cooperation. \item \textbf{Optimizing the joint distribution over tasks and experts}. Mod-Squad\xspace includes a new loss term that encourages a sparse but strong dependence between experts and tasks. This is done by measuring and maximizing the mutual information between tasks and experts. \item \textbf{Effective and efficient multi-task learners at scale.} Experimental results show that Mod-Squad\xspace achieves state-of-the-art performance on two major multi-task datasets while maintaining its computational efficiency. \item \textbf{Extracting small sets of experts as standalone models with no performance drop.} We further show that Mod-Squad\xspace can be effectively pruned for a designated task without sacrificing performance. \end{itemize} \section{Related Work} \noindent \textbf{Multi-task Learning.} Multi-task learning jointly learns multiple tasks by sharing parameters among tasks. One common approach is to manually design the architecture, sharing the bottom layers of a model across tasks \cite{caruana1997multitask, kokkinos2017ubernet, bragman2019stochastic}. Some works~\cite{vandenhende2019branched} design the architecture according to task affinity. Others~\cite{ahn2019deep, bragman2019stochastic, sun2020adashare} leverage Neural Architecture Search or a routing network~\cite{rosenbaum2018routing} to learn sharing patterns across tasks and automatically learn the architecture. Recently, transformer-based MTL architectures~\cite{xumtformer} have been explored and have shown advantages over CNN-based models. 
In comparison, we customize MoE layers into vision transformers; each MoE module constructs a minimum part of the model that can be distributed to a subset of tasks instead of all of them. As a result, our model is flexible in its creation of cooperation and specialization. \noindent \textbf{Mixture of Experts (MoE). } The MoE was first proposed by Jacobs et al.~\cite{jacobs1991adaptive} as a technique to combine a series of sub-models and perform conditional computation. Recent work~\cite{shazeer2017} in NLP proposes sparse MoE to reduce computation cost, and some works~\cite{lepikhin2021gshard, JMLR:v23:21-0998} train gigantic models with trillions of parameters based on the sparse model. Some have used the MoE technique to train huge models in vision~\cite{riquelme2021scaling, wu2022residual} or multi-modal applications~\cite{mustafa2022multimodal}. These works typically focused on combining the Feed-Forward Network layer with the MoE or on developing a better routing strategy~\cite{lewis2021base, nie2021dense}. MoA~\cite{zhang2022mixture} proposes a new module that combines the attention network with the MoE while having a low computational cost and the same parameter budget as a regular attention network. More recently, M$^3$ViT~\cite{liang2022m} uses MoE techniques to design a multi-task learning model that is computationally efficient during training. Compared to these previous methods, we demonstrate a MoE model that is not only computationally efficient, but also flexible as a modularized multi-task learner that can easily induce both cooperation and specialization. Although M$^3$ViT~\cite{liang2022m} also uses MoE in its approach, the experts in that model are shared among all tasks and cannot be specialized for particular tasks. 
\noindent \textbf{Pruning.} {Pruning refers to the process of removing components of a larger model to produce a smaller model for inference, with the goal of maintaining as much accuracy as possible while improving runtime computation efficiency.} Generally, pruning is categorized into \textit{unstructured pruning}~\cite{han2015deep_compression}, which removes individual weights that have a minimal contribution to accuracy, and \textit{structured pruning}~\cite{he2018soft, li2019learning}, which ranks filters or blocks and prunes these based on some criterion. Usually, extra fine-tuning is conducted on the pruned network to help maintain the performance~\cite{Renda2020Comparing, liu2018rethinking, yu2018slimmable}. Most pruning methods target a single task, and very few consider the multi-task setting. In this work, our proposed model has the unique property that a series of small sub-networks, one per task, can be extracted from it with no performance drop and no additional fine-tuning. This is similar in spirit to pruning, but is better viewed as an advantage of our model than as a new way of pruning. \begin{figure*} \centering \includegraphics[width=17cm]{pipeline.pdf} \caption{\textbf{The pipeline of our multi-task foundation model.} Each transformer block in Mod-Squad\xspace consists of a MoE attention network (MoE attn.) and a MoE MLP network. The multi-task model Mod-Squad\xspace is trained with our proposed mutual information loss. Mod-Squad\xspace develops a strong dependence between experts and tasks. Then we can extract a small sub-network from Mod-Squad\xspace for each task with no performance drop. } \label{fig:pipeline} \end{figure*} \section{Method} We start with the definition of multi-task learning. Suppose we have $M$ tasks $T_1, T_2, ...,T_M$ and $Q$ images $I_1, I_2, ..., I_Q$. We define a task $T$ as a function that maps image $I_q$ to $T(I_q)$. Our dataset $D$ contains for each task $T_i$ a set of training pairs $(I_q; T_i(I_q))$, e.g. 
(image; depthMap). Here, for simplicity, we assume that every task contains a training pair for every one of the $Q$ images, but note that our approach can be extended to the case in which every task contains a different subset of images in its training pairs. \subsection{Preliminaries} \noindent\textbf{Mixture of Experts.} A Mixture of Experts (MoE) layer typically contains a set of expert networks $E_1, E_2, ..., E_N$ along with a routing network $G$. The output of a MoE layer is the weighted sum of the outputs $E_k(x)$ of all experts. The routing network $G$ calculates the weight $G^k$ for each expert given input $x$. Formally, the output of a MoE layer is \begin{align} \label{eqn:moe_output} y =&\sum_{k=1}^{N} G^k(x) E_{k}({x}). \end{align} The routing network $G$ is a Noisy Top-$K$ Routing network \cite{shazeer2017} with parameters $W_g$ and $W_{noise}$. It models $P(E_k|x)$ as the probability of using expert $E_k$ and selects the Top-$K$ experts to contribute to the final output. The whole process is as follows: \begin{align} \label{eqn:moe_routing} G(x) =& \operatorname{TopK}(\operatorname{Softmax} (xW_g \nonumber \\ & + \mathcal{N}(0,1) \operatorname{Softplus}(xW_{noise}))), \end{align} where $\operatorname{TopK}(\cdot, K)$ sets all elements in the vector to zero except the elements with the largest $K$ values, and $\operatorname{Softplus}$ is the smooth approximation to the ReLU function: \begin{align} \operatorname{Softplus}(x) =& \log \left( 1+\exp \left( x \right) \right). \end{align} \subsection{Mod-Squad\xspace} Mod-Squad\xspace is a multi-task model with a vision transformer as the backbone network and several parallel task-specific heads. As shown in Fig.~\ref{fig:pipeline}, a key design in our model is customizing MoE into the vision transformer so that each expert can constitute a minimal part of the model that is either shared between tasks or specialized for particular tasks.
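The MoE layer output and the Noisy Top-$K$ routing above can be sketched numerically as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names and tensor shapes are assumptions for a single token vector.

```python
import numpy as np

def softplus(z):
    # Softplus(z) = log(1 + exp(z)), computed stably; smooth approximation to ReLU
    return np.logaddexp(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_topk_gate(x, W_g, W_noise, K, rng):
    # G(x) = TopK(Softmax(x W_g + N(0,1) * Softplus(x W_noise)))
    noise = rng.standard_normal(W_g.shape[1]) * softplus(x @ W_noise)
    g = softmax(x @ W_g + noise)
    g[np.argsort(g)[:-K]] = 0.0   # zero out all but the K largest weights
    return g

def moe_layer(x, experts, gate):
    # y = sum_k G^k(x) E_k(x); experts with zero gate are never evaluated,
    # which is the source of the sparse-computation savings
    return sum(gk * E(x) for gk, E in zip(gate, experts) if gk > 0.0)
```

Since only the $K$ selected experts are evaluated, the computation cost stays roughly constant as the number of experts $N$ grows.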
Specifically, we customize the MoE attention block (MoA)~\cite{zhang2022mixture} and the MoE MLP block~\cite{shazeer2017} into the transformer layer. Each MoE block consists of $N$ experts $E_1, E_2, ..., E_N$, each of which can be either an attention head or an MLP layer, along with $M$ \textbf{task-specific routing networks} $G_1, G_2, ..., G_M$ that select experts conditioned on input tokens. Note that each routing network $G_i$ has its own parameters $\left( W_g^i, W_{noise}^i \right)$. We also add a learnable task embedding to the hidden input state so that each expert is aware of the target task. Thus, in Mod-Squad\xspace, the output of each MoE layer is \begin{equation} y = \sum^N_{k=1} G^k_i(x) \cdot E_k \left( x + e_i \right), \end{equation} where $i$ is the task index and $e_i$ is the respective task embedding. \subsection{A joint probability model over tasks and experts} In order to model cooperation and specialization, we define a probability model over tasks $T$ and experts $E$. We assume that when our trained network is deployed, it will be assigned a random task $T$ according to a global distribution over tasks $P(T)$. (Typically we assume this distribution to be uniform over tasks.) Subsequently, it will be given a random image $X$ according to $P(X|T)$. For a given MoE layer, we model the probability $P(E_i|T_j)$ of using expert $E_i$ with task $T_j$ as the frequency with which $E_i$ is assigned to task $T_j$ by the routing network. For example, for 100 images in task $T_j$, if the routing network assigns 30 of them to expert $E_i$, then $P(E_i|T_j)=0.3$. Since the routing network does not make hard assignments of experts to tasks, but rather assigns weights resulting from a softmax function to each expert, we average these soft weights to measure the frequency: $$ P(E_i|T_j)=\frac{1}{Q_{T_j}}\sum_{k=1}^{Q_{T_j}} G_{T_j}^{E_i}(x_k), $$ where $G_{T_j}^{E_i}$ gives the weight for expert $E_i$ for task $T_j$ on the input $x_k$ from image $I_k$.
$Q_{T_j}$ is the number of images for task $T_j$. Given this definition of conditional probability, the joint probability is $P(E,T)=P(E|T)P(T)$, and of course, we can obtain $P(E)=\sum_T P(E,T)$. A key intuition in our work is that {\bf experts should be dependent on tasks}, that is, experts should specialize in specific tasks, at least to some extent. This notion can be captured by measuring the {\em mutual information (MI)} between tasks and experts, using the probability model defined above: \begin{align} \label{eqn:MI} I(T;E) =& \sum_{i=1}^M \sum_{j=1}^N P(T_i,E_j) \log \frac{P(T_i,E_j)}{P(T_i)P(E_j)}. \end{align} If experts are assigned with equal frequency to all tasks, then the mutual information will be 0. If each expert is assigned to exactly one task (possible when $M=N$), then the dependence (and hence the mutual information) is maximized. \subsection{Maximize mutual information between experts and tasks} To understand what the mutual information does, we decompose Eq.~\ref{eqn:MI} as follows: \begin{align} \label{eqn:MI_split} I(T;E) =& \sum_{i=1}^M \sum_{j=1}^N P(T_i,E_j) \log P(T_i,E_j) \nonumber \\ & - \sum_{i=1}^M P(T_i) \log P(T_i) \nonumber \\ & - \sum_{j=1}^N P(E_j) \log P(E_j). \end{align} In Eq.~\ref{eqn:MI_split}, the first term is the negative entropy of $P(T_i,E_j)=P(E_j|T_i) P(T_i)$. Maximizing this term encourages sharp conditional distributions $P(E_j|T_i)$, since $P(T_i)$ is a constant determined by the data distribution and is not affected by the model parameters. The second term is the entropy of $P(T_i)$ which, again, is a constant and can be ignored. The third term is the entropy of $P(E_j)$. Maximizing this term encourages a high-entropy, i.e., flat, distribution of $P(E_j)$, so that the experts are evenly used across the entire dataset. In practice, we add $-I(T;E_Y)$ to our total loss for each MoE layer $Y$ with a weight parameter $w_{MI}$, where $E_Y$ represents all the experts in $Y$.
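The mutual information above can be computed directly from the accumulated routing weights of one MoE layer. The sketch below is illustrative (the helper name and input layout are assumptions); it assumes a uniform task prior $P(T_i)=1/M$, as stated in the text.

```python
import numpy as np

def mutual_information(expert_freq):
    """expert_freq[i, j] estimates P(E_j | T_i): the average routing weight
    that task T_i's router assigns to expert E_j over that task's images.
    Assumes a uniform task prior P(T_i) = 1/M."""
    M, _ = expert_freq.shape
    p_t = np.full(M, 1.0 / M)                   # P(T_i)
    p_te = expert_freq * p_t[:, None]           # P(T_i, E_j) = P(E_j|T_i) P(T_i)
    p_e = p_te.sum(axis=0)                      # P(E_j) = sum_i P(T_i, E_j)
    mask = p_te > 0                             # 0 log 0 = 0 convention
    denom = (p_t[:, None] * p_e[None, :])[mask]
    return float((p_te[mask] * np.log(p_te[mask] / denom)).sum())
```

A one-to-one assignment of experts to tasks (an identity frequency matrix with $M=N$) yields the maximal value $\log M$, while a uniform assignment yields $0$; training adds $-w_{MI}\,I(T;E_Y)$ per layer, pushing toward the sharp, evenly-loaded regime.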
We follow \cite{kendall2018multi} to learn an auto-balancing weight $w_T$ for each task $T$ and add the task-specific losses $L_T$ of all tasks. The total loss is thus \begin{equation} \mathtt{L} = \sum^M_{i=1} w_{T_i}L_{T_i} - w_{MI} \sum_{\forall MoE \ \mathtt{layers}\ Y} I(T;E_Y). \end{equation} \subsection{Train Once and Get All} \label{sec:pruning} Previous MoE works~\cite{liang2022m, mustafa2022multimodal} use a subset of the experts for one input image but all the experts for each task. In comparison, Mod-Squad\xspace activates a subset of the experts whether forwarding a single image or multiple images from the same task. Further, all the experts are evenly used in Mod-Squad\xspace when forwarding the whole multi-task dataset. This guarantees that the capacity of Mod-Squad\xspace is fully utilized and not wasted. A typical relation between tasks and experts is demonstrated in \cref{relation}. Benefiting from the constant sparsity of Mod-Squad\xspace at both the image level and the task level, unused or rarely used experts can be removed from each MoE module for single-task inference. This is done by counting the usage frequency of each expert for the task and removing those experts whose frequency falls below a threshold $\theta$. Note that, in a given MoE layer, some tasks could use more experts and others fewer. For example, a low-level task may require more experts in the first few layers of the network, and a high-level task may require more experts in the last few layers. Mod-Squad\xspace is capable of dynamically self-organizing its architecture, selecting experts according to the requirements of each task, which provides some degree of freedom in the architecture and extra flexibility in allocating model capacity. After removing experts, our pruned model can be directly deployed for the respective task.
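The per-task expert-removal step can be sketched as follows. This is a simplification under assumed inputs (a per-module list of usage frequencies), not the authors' code.

```python
def prune_experts_for_task(usage_freq, theta, top_k):
    """Keep the experts of one MoE module whose usage frequency for the
    target task exceeds theta.  If fewer than top_k experts survive, the
    module's top_k is shrunk to the number of kept experts."""
    kept = [j for j, f in enumerate(usage_freq) if f > theta]
    return kept, min(top_k, len(kept))
```

Applying this to every MoE module yields the task-specific sub-network; no retraining is involved, since the discarded experts were (almost) never routed to for that task.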
Since the removed experts are never or rarely used, the pruned model achieves the same level of performance as the original model but with a much smaller number of parameters and without any fine-tuning. In the case where we set $\theta=0$ and keep all the experts that have ever been used, we observe no drop in performance {while still effectively pruning a large portion of the model}. This expert-removal process resembles pruning, but we merely adopt a simple threshold-then-remove strategy, and no additional training is needed as in some pruning works~\cite{cai2020once}. Once trained, a series of small sub-networks can be extracted, one for each task. This property enables us to build a very large model that benefits from all tasks while requiring only a fraction of the model capacity for single-task inference or fine-tuning. \section{Experiment} \begin{table*}[t] \begin{center} \footnotesize \tabcolsep=0.07cm \begin{tabular}{c|c|c|c|cc|ccc|c|c|c|c|c|c} \toprule \multirow{3}{*}{Model} & \multicolumn{1}{c|}{Obj. Cls.} & \multicolumn{1}{c|}{Scene Cls.} & \multicolumn{6}{c|}{Depth Euc.} & \multicolumn{1}{c|}{Normal} & \multicolumn{1}{c|}{Curvature} & \multicolumn{1}{c|}{Reshading} & \multicolumn{1}{c|}{Edge3D} & \multicolumn{1}{c|}{Keyp.2D} & \multicolumn{1}{c}{Segm.2D}\\ \cline{2-15} & \multirow{2}{*}{$Acc(\%) \uparrow$} & \multirow{2}{*}{$Acc(\%) \uparrow$} & \multirow{2}{*}{$RMSE \downarrow$} & \multicolumn{2}{c|}{Error $\downarrow$} & \multicolumn{3}{c|}{$\delta$, within $\uparrow$} & \multirow{2}{*}{L1 dis. $\downarrow$} & \multirow{2}{*}{L2 dis. $\downarrow$} & \multirow{2}{*}{L1 dis. $\downarrow$} & \multirow{2}{*}{L1 dis. $\downarrow$} & \multirow{2}{*}{L1 dis. $\downarrow$} & \multirow{2}{*}{L1 dis. $\downarrow$} \\ \cline{5-9} & & & & Abs. & Rel.
& 1.25 & $1.25^2$ & $1.25^3$ & & & & & & \\ \midrule STL & 56.5 & 60.0 & 6.94 & 0.089 & 1.77 & 92.8 & 96.9 & 98.7 & 0.403 & 1.12 & 0.184 & \best{0.119} & 0.0312 & 0.171\\ MTL & 57.3 & 64.9 & 6.75 & 0.084 & 1.26 & 93.0 & 97.0& 98.9 & 0.386 & 1.06 & 0.170 & 0.127 & 0.0284 & 0.166\\ $M^3ViT$\cite{liang2022m} & 58.0 & 65.6 & 6.69 & 0.083 & 1.26 & 93.2 & \best{97.2} & 98.9 & 0.383 & 1.05 & 0.174 & 0.126 & 0.0289 & 0.164 \\ \hline Mod-Squad\xspace & \best{59.0} & \best{66.8} & \best{6.59} & \best{0.082} & \best{1.25} & \best{93.3} & \best{97.2} & \best{99.0} & \best{0.374} & \best{1.02} & \best{0.167} & {0.123} & \best{0.0275} & \best{0.161} \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Metric for each task on the taskonomy dataset.} For each task, we use different metrics to evaluate its performance. More results on other tasks can be found in the supplementary.} \label{tab:metric} \end{table*} \begin{table}[] \centering \small \setlength{\tabcolsep}{1mm}{ \begin{tabular}{l|c|cc|cccc} \hline Method & STL & MTL & M$^{3}$ViT & MLP & Attn & Ours & Pruning \tabularnewline \hline Params(M) & \best{86.4} & 90.0 & 176.4 & 176.4 & 105.6 & 201.3 & 116.9 \tabularnewline FLOPs(G) & \best{17.7} & 18.5 & 19.7 & 19.7 & 19.7 & 19.7 & 18.4 \tabularnewline \hline \hline Object Cls.& 0.0 & +1.4 & +2.6 & +3.0 & +3.0 & \best{+4.4} & \best{+4.4} \tabularnewline Scene Cls. & 0.0 & +8.1 & +9.3 & +10.0& +9.6 & \best{+11.3}& \best{+11.3}\tabularnewline Depth Euc. & 0.0 & +2.7 & +3.6 & +3.9 & +4.4 & \best{+5.0} & \best{+5.0} \tabularnewline Depth Zbu. 
& 0.0 & +2.1 & +2.4 & +2.6 & +2.4 & \best{+2.8} & \best{+2.8}\tabularnewline Normal & 0.0 & +3.5 & +4.2 & +4.5 & +4.5 & \best{+6.5} & \best{+6.5}\tabularnewline Curvature & 0.0 & +5.3 & +6.2 & +7.1 & +6.2 & \best{+8.9} & \best{+8.9} \tabularnewline Reshading & 0.0 & +7.6 & +5.4 & +5.9 & +8.1 & \best{+9.2} & \best{+9.2} \tabularnewline Edge2D & 0.0 & +0.6 & +2.0 & +1.8 & +1.2 & \best{+3.6} & \best{+3.6} \tabularnewline Edge3D & \best{0.0} & -6.7 & -5.8 & -4.2 & -5.8 & -3.3 & -3.3\tabularnewline Keyp.2D & 0.0 & +5.3 & +3.6 & +3.6 & +6.3 & \best{+8.3} & \best{+8.3}\tabularnewline Keyp.3D & 0.0 & +1.3 & +2.7 & +4.1 & +2.7 & \best{+5.5} & \best{+5.5}\tabularnewline Segm. 2D. & 0.0 & +2.9 & +4.0 & +5.2 & +3.5 & \best{+5.8} & \best{+5.8} \tabularnewline Segm. 2.5D & 0.0 & +1.9 & +3.2 & +3.8 & +3.2 & \best{+5.1} & \best{+5.1} \tabularnewline \hline \rowcolor{mygray} Mean & 0.0 & +2.8 & +3.3 & +3.9 & +3.8 & \best{+5.6} & \best{+5.6} \tabularnewline \end{tabular} } \caption{ \textbf{Comparison of $\Delta_t$ between MTL methods on the Taskonomy.} We report their average drop for each task with respect to the vanilla single-task model. MLP and Attn represent using only MoE MLP and MoE attention network in the backbone respectively. } \label{tab:MTL} \end{table} \subsection{Experiments Settings} \noindent \textbf{Datasets and Tasks.} We evaluate on two multi-task datasets: \textbf{PASCAL-Context}\cite{mottaghi2014role} and \textbf{Taskonomy}\cite{zamir2018taskonomy}. The PASCAL-Context includes 10,103 training images and 9,637 testing images with the five task annotation of edge detection (Edge), semantic segmentation (Seg.), human parts segmentation (H.Parts), surface normals (Norm.), and saliency detection (Sal.). The Taskonomy benchmark includes 3,793k training images and 600k testing images with 16 types of annotation. 
We use 13 annotations among them\footnote{Due to corrupt annotation for some samples, we discard three types of annotation (points, nonfixated matches, and semantic segmentation).} as our multi-task targets: object classification, scene classification, depth estimation with Euclidean depth, depth estimation with z-buffer depth, surface normals, curvature estimation, reshading, edge detection in 2D and 3D, keypoint detection in 2D and 3D, and unsupervised segmentation in 2D and 2.5D. Details of these tasks can be found in \cite{zamir2018taskonomy}. \noindent \textbf{Loss Functions and Evaluation Metrics.} Classification tasks and semantic segmentation use cross-entropy loss and pixel-wise cross-entropy loss respectively. Surface normal estimation uses the inverse cosine similarity between the $\ell_2$-normalized prediction and the ground truth. Curvature estimation uses L2 loss. All other tasks use L1 loss. We follow previous work \cite{maninis2019attentive} in using $\Delta t_i$ to evaluate an MTL model $m$ as the relative drop on task $T_i$ with respect to the baseline model $b$: $\Delta t_i = (-1)^{s_i}(M_{m,i}-M_{b, i})/M_{b, i}$, where $M_{m,i}$ and $M_{b,i}$ are the metrics of task $T_i$ for the models $m$ and $b$ respectively, and $s_i$ is $1$ if the metric is the lower the better and $0$ otherwise. We also report $\Delta t$ as the average of $\Delta t_i$ over all tasks. Here, the baseline model $b$ is the vanilla single-task learning model. On the Taskonomy, for depth estimation, we also report root mean square error (rmse), absolute and relative errors between the prediction and the ground truth, as well as the percentage of pixels whose prediction is within the thresholds of $1.25, 1.25^2, 1.25^3$ of the ground truth, following \cite{eigen2014depth}. We also report accuracy (Acc) for classification, L2 distance for curvature estimation, and L1 distance for all other tasks. These metrics are used to calculate $\Delta t_i$; note that depth estimation uses rmse only.
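The $\Delta t_i$ metric above reduces to a short computation; the sketch below uses illustrative function names (not from the paper).

```python
def delta_t(m_model, m_base, lower_is_better):
    # (-1)^{s_i} (M_{m,i} - M_{b,i}) / M_{b,i}, with s_i = 1 if lower is better,
    # so a positive value always means an improvement over the baseline
    sign = -1.0 if lower_is_better else 1.0
    return sign * (m_model - m_base) / m_base

def mean_delta_t(models, bases, lower_flags):
    # average of the per-task relative drops
    vals = [delta_t(m, b, lo) for m, b, lo in zip(models, bases, lower_flags)]
    return sum(vals) / len(vals)
```

For example, a higher accuracy than the baseline and a lower rmse than the baseline both yield positive $\Delta t_i$, which makes the per-task values comparable before averaging.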
On the PASCAL-Context, we follow \cite{liang2022m} and report mean intersection over union (mIoU) for semantic segmentation, human parts segmentation, and saliency; mean error (mErr) for normals estimation; root mean square error (rmse) for depth estimation; and optimal dataset F-measure (odsF) for edge detection. \noindent \textbf{Baselines and Competitors.} We compare with the following baselines. \textbf{STL}: the vanilla single-task learning baseline that trains its own model on each task independently. \textbf{MTL}: the vanilla multi-task learning baseline in which all tasks share the backbone model but have separate prediction heads. For our proposed model, we also have MLP and Attn (in Table.~\ref{tab:MTL}), variants in which only the MoE MLP or only the MoE attention network is customized into the transformer layer, respectively. Mod-Squad\xspace w/ pruning (or pruning in Table.~\ref{tab:MTL}) is Mod-Squad\xspace with experts removed for each specific task; we report the maximum FLOPs and Params over all tasks. We also compare with $M^3ViT$\cite{liang2022m} and several state-of-the-art MTL models: MTAN\cite{liu2019end}, Cross-Stitch \cite{misra2016cross} and NDDR-CNN\cite{gao2019nddr}. Further, we compare with \textbf{modified-MoE}: it has the same architecture as Mod-Squad\xspace but without our mutual information loss. It applies the standard balanced loss~\cite{zhang2022mixture} after forward propagation of all tasks for each image instead of a single task. As a result, experts are evenly used over all tasks jointly rather than within every task. \noindent \textbf{Implementation.} We use ViT-base\cite{dosovitskiy2021an} and ViT-small as the backbone networks on the Taskonomy and the PASCAL-Context respectively. We introduce MoA and MoE MLP into ViT every two layers. For MoA, we follow \cite{zhang2022mixture} to design the block and use 15 experts with top-k as 6 for ViT-small and 24 experts with top-k as 12 for ViT-base. For MoE MLP, we use 16 experts with top-k as 4.
The task-specific heads are single linear layers on the Taskonomy and multi-layer networks, the same as in \cite{liang2022m}, on the PASCAL-Context. We set $w_{MI}=0.001$ and the removal threshold $\theta=1.0\%$. On the PASCAL-Context, the hyperparameters are the same as in $M^3ViT$\cite{liang2022m}. On the Taskonomy, we set the base learning rate to $2\times10^{-4}$ with a batch size of $1,440$ and AdamW\cite{loshchilov2019decoupled} as the optimizer. The weight decay is $0.05$. We use 10 warmup epochs with 100 total training epochs, and the model converges in 80 hours on 240 NVIDIA V100 GPUs. Cosine decay\cite{loshchilov2017sgdr} is used for the learning rate schedule. \begin{table*}[] \centering \small \setlength{\tabcolsep}{1mm}{ \begin{tabular}{l|c|ccccc>{\columncolor{mygray}}c|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & Seg. & Norm. & H. Parts & Sal. & Edge & $\Delta _t$ & FLOPs & Params \tabularnewline & & mIoU$\uparrow$ & mErr$\downarrow$ & mIoU$\uparrow$ & mIoU$\uparrow$ & odsF$\uparrow$ & (\%)$\uparrow$ & (G)$\downarrow$ & (M)$\downarrow$ \tabularnewline \hline STL & ResNet-18 & 66.2 & 13.9 & 59.9 & 66.3 & 68.8 & 0.00 & \best{1.8} & \best{11} \tabularnewline \hline MTL & ResNet-18 & 63.8 & 14.9 & 58.6 & 65.1 & 69.2 & $-$2.86 & \best{1.8} & \best{11} \tabularnewline MTAN\cite{liu2019end} & ResNet-18 & 63.7 & 14.8 & 58.9 & 65.4 & 69.6 & $-$2.39 & \best{1.8} & \best{11} \tabularnewline Cross-Stitch \cite{misra2016cross} & ResNet-18 & 66.1 & 13.9 & 60.6 & 66.8 & 69.9 & +0.60 & \best{1.8} & \best{11} \tabularnewline NDDR-CNN\cite{gao2019nddr} & ResNet-18 & 65.4 & 13.9 & 60.5 & 66.8 & 69.8 & +0.39 & \best{1.8} & \best{11} \tabularnewline \hline MTL & ViT-small & 70.7 & 15.5 & 58.7 & 64.9 & 68.8 & $-$1.77 & 4.6 & 21 \tabularnewline $M^3ViT$\cite{liang2022m} & MoE ViT-small & 72.8 & 14.5 & 62.1 & 66.3 & 71.7 & +2.71 & 5.2 & 42 \tabularnewline \hline Mod-Squad\xspace & MoE ViT-small & \best{74.1} & \best{13.7} & \best{62.7} & \best{66.9} & \best{72.0} &
\best{+4.72} & 5.2 & 50 \tabularnewline Mod-Squad\xspace w/ Pruning & MoE ViT-small & \best{74.1} & \best{13.7} & 62.6 & \best{66.9} & 71.9 & +4.65 & 5.2 & 22 \tabularnewline \hline \end{tabular} } \caption{\textbf{Quantitative Results on the PASCAL-Context.} Mod-Squad\xspace consistently outperforms other MTL methods on all tasks.} \label{tab:pascal} \end{table*} \subsection{Results on MTL} \begin{figure} \centering \includegraphics[width=8.5cm]{prune.pdf} \caption{\textbf{Ablation study on pruning. } We explore two ways of pruning: (1) threshold-then-remove with $\theta$; (2) keep the top $H\%$ experts that have the highest usage frequency in each MoE module. For the first way of pruning, we report results with $\theta$ as 90\%, 50\%, 20\%, 5\%, 0.1\%, and 0.0\% (no pruning). For the second way of pruning, we report results with $H\%$ as 30\%, 40\%, 60\%, 80\%, and 100\% (no pruning). We also compare our pruning with applying the same pruning strategy to modified-MoE (m-MoE). } \label{fig:prune} \end{figure} \noindent \textbf{Efficacy.} We demonstrate the efficacy of our model in performance, computation cost, and model capacity. The results on the Taskonomy and the PASCAL-Context are shown in Table.~\ref{tab:MTL} and Table.~\ref{tab:pascal} respectively. Specific metrics for each task on the Taskonomy are shown in Table.~\ref{tab:metric}. In terms of performance, our method significantly outperforms other baselines and competitors on both datasets: we beat MTL and M$^3$ViT by over 2 points in mean $\Delta_t$ on the two datasets. On the Taskonomy, we outperform MTL on all tasks, which shows that the improvement is consistent. In terms of computation cost and model capacity, our model with the ViT-Base backbone has a very low computation cost (19.7G FLOPs) while benefiting from a huge model capacity (201.3M). In comparison, the MTL baseline with ViT-Base uses 18.5G FLOPs with 86.4M parameters.
Furthermore, our standalone pruned model keeps the same performance as Mod-Squad\xspace on each individual task while having the same level of computation cost and model capacity as STL: 18.4G FLOPs vs. 17.7G FLOPs and 116.9M vs. 86.4M parameters. The extra computation cost comes mainly from the lightweight routing networks, and the extra parameters can be further removed with a higher $\theta$, as will be shown later. \begin{figure} \centering \includegraphics[width=8.5cm]{few-shot.pdf} \caption{\textbf{Router fine-tuning can quickly learn new tasks by selecting proper experts. } We train our model on the other 11 tasks from the Taskonomy and transfer to cls. object and cls. scene with few training samples. We compare the few-shot classification accuracy with the following three baselines. (1) Fine-tuning: we fine-tune the whole model on the few training samples. (2) Task: we freeze the backbone model and only train the new task-specific head. (3) LR: the state-of-the-art few-shot learning method \cite{tian2020rethinking} based on logistic regression. We report the test accuracy when training with 0.5\%, 1\%, 2\%, and 5\% of the training set. } \label{fig:fewshot} \end{figure} \noindent \textbf{Ablation study on MoE MLP and MoE Attention.} In Table.~\ref{tab:MTL}, we report results (MLP and Attn) where MoE is introduced only into the MLP or only into the attention networks. Both ways of adding experts improve $\Delta_t$ by more than $1.0\%$ compared to MTL. By combining them, Mod-Squad\xspace obtains the best result, a further boost of 2 points in $\Delta_t$. This demonstrates that introducing MoE and increasing model capacity in both the attention and MLP networks increases performance. \subsection{Experts, Tasks, and Pruning} \label{relation} \noindent \textbf{Relation between experts and tasks.} As shown in Fig.~\ref{fig:expert_task}, we visualize the frequency with which experts are selected for each task. The x-axis and y-axis represent experts and tasks respectively.
Experiments are conducted on the Taskonomy with all 13 tasks using MoE ViT-Small as the backbone. The visualization is for the MoE attention module in the 6th transformer block. We also compare with modified-MoE and normal MoE, which have different MoE losses but exactly the same model architecture. From the figure, we observe that our expert activation map is sharper and sparser than the two comparisons, which aligns with our key motivation: a sparse but strong dependence between experts and tasks helps MTL. \noindent \textbf{Extracting a sub-network for an individual task. } As introduced in \cref{sec:pruning}, we extract a small sub-network from Mod-Squad\xspace for an individual task. Specifically, we explore the two following ways of removing experts. (1) Threshold and remove: we simply remove all experts that have a usage frequency lower than $\theta$ for the specific task. Note that some MoE modules could be left with fewer than Top-K experts after removal if most of their experts have a low usage frequency. In that case, we reduce the top-k of that MoE module to the number of experts it keeps. (2) Keep the top: we keep the top $H\%$ of experts in each MoE module that have the highest usage frequency. The results are shown in Fig. \ref{fig:prune}. For the first way of removing experts, we try $\theta$ as 90\%, 50\%, 20\%, 5\%, 0.1\%, and 0\% (no removing). For the second way, we try $H\%$ as 30\%, 40\%, 60\%, 80\%, and 100\% (no removing). For both removing strategies, we compare with STL, MTL, and M$^3$ViT. From the figure, we notice several interesting observations: (1) Mod-Squad\xspace can remove the majority of its extra experts relative to a normal ViT-Base (116.9M vs. 90.0M model parameters) with a tiny performance loss ($<0.3\%$ in $\Delta_t$) while still beating the competitors. (2) Keeping only the top 40\% of experts still gives nearly the same performance (5.5\% in $\Delta_t$ while the best is 5.6\%).
(3) The performance of modified-MoE drops significantly when removing more experts, which proves the effectiveness of our mutual information loss. \noindent \textbf{Fine-tuning the router network. } Another interesting property of Mod-Squad\xspace is that we can quickly adapt it to new tasks by tuning only the lightweight routing networks and the task-specific head, with all other parameters frozen. We refer to this technique as router fine-tuning. Router fine-tuning can be generalized to any MoE network requiring lightweight tuning under a limited budget in dataset size, computation cost, or training time. As shown in Fig. \ref{fig:fewshot}, we explore router fine-tuning. We first pre-train our model on the 11 Taskonomy tasks other than cls. object and cls. scene, which serve as the new target tasks. We compare different ways of fine-tuning with limited training examples. We report performance using 0.5\%, 1\%, 2\%, and 5\% of the dataset to learn the new tasks. The router fine-tuning strategy is compared with the following baselines. (1) Fine-tuning: fine-tune the whole model and learn the new task-specific head. (2) Task: freeze the backbone model and only learn the new task heads. (3) We follow the state-of-the-art few-shot learning method \cite{tian2020rethinking} based on logistic regression to fine-tune the model. From the figure, we find that the router fine-tuning strategy consistently surpasses the other baselines on both tasks across different proportions of the training set. These results show that Mod-Squad\xspace can be quickly adapted for various purposes with router fine-tuning. \begin{figure} \centering \includegraphics[width=8.5cm]{compare_expert_task.pdf} \caption{\textbf{Visualization of the frequency with which experts are selected for each task. } We visualize the activation frequency of a MoE attention module in the 6th transformer block with 15 experts and top-k as 6. The y-axis represents the tasks and the x-axis represents the 15 experts.
We compare the visualization of Mod-Squad\xspace to m-MoE and normal MoE. All three methods have the exact same MoE module but different MoE losses. Our frequency map is much \textbf{sharper} and \textbf{sparser} than those of the other methods. } \label{fig:expert_task} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm]{my_task_relation_all.pdf} \caption{\textbf{Task relation from Mod-Squad\xspace.} We evaluate the similarity between tasks as the mean percentage of experts that they share given the same input. } \label{fig:task} \vspace{-0.05in} \end{figure} \noindent \textbf{Task Relation. } Mod-Squad\xspace can not only model the task relation implicitly like other multi-task models but also visualize it explicitly. We define the similarity between tasks as the mean percentage of experts that they share given the same input. If two tasks share more experts than other pairs of tasks, they are considered to be more related. This definition may not be perfectly accurate but is based on one simple rule: related tasks are more likely to share experts than unrelated tasks. As shown in Fig.~\ref{fig:task}, Mod-Squad\xspace visualizes task relations in a correlation matrix with our new definition of task similarity. We notice some interesting structure among the tasks: the 3D tasks, including Normal, Reshading, the two depth tasks, Edge3D, Keyp. 3D, and Curvature, are grouped together; a close relation exists between the two segmentation tasks and between the two depth tasks; Edge2D and Edge3D are not close in the visualization. This demonstrates that Mod-Squad\xspace can also be used as a visualization tool to explore the structure among tasks. \section{Conclusion} In this work, we propose Mod-Squad\xspace, a modular multi-task learner based on mixture-of-experts and a novel loss to address the gradient conflicts among tasks.
We demonstrate its potential to scale up in both model capacity and number of target tasks while keeping the computation cost low. Notably, Mod-Squad\xspace can be scaled down in model size with no performance drop for specific purposes. Future work could extend Mod-Squad\xspace to a large variety of tasks and scenes not only in the vision domain but also in other modalities (e.g., text and audio). We hope Mod-Squad\xspace will become an important building block of future efficient and modular foundation models. {\small \bibliographystyle{ieee_fullname}
\section*{1. Introduction} Quantum tunneling from a metastable state through a potential barrier is a fundamental nonlinear phenomenon occurring in many branches of the physical sciences \cite{landau,schulman,kramers,melnikov,hanggi,barone,larkvar,okuda,bao}. The standard semiclassical approach to metastability was formulated long ago by Langer \cite{langer} and Coleman \cite{coleman} in the fields of statistical and nuclear physics, respectively. The main idea underlying this approach consists in selecting a classical background which solves the Euler-Lagrange equation and constitutes a saddle point of the action. Around the background, the quantum fluctuations are treated in quadratic approximation and their spectrum is obtained by solving a Schr\"{o}dinger-like stability equation whose potential is given by the second spatial derivative of the metastable potential. The semiclassical method finds a concise and powerful description in the Euclidean path integral formalism \cite{feyn,fehi}, in which the time for the bounce to perform a full excursion (inside the classically allowed region) is a measure of the inverse temperature of the system. In the standard treatments of metastability \cite{langer,coleman}, it is assumed that this time is {\it infinite}, and therefore the decay rate formula holds, strictly speaking, only at $T=\,0$. For applications to specific systems, however, precise knowledge of the decay rate at finite $T$ (but within the quantum tunneling regime) may be of great interest. For this purpose one has to build the {\it finite}-time theory of metastability for specific nonlinear potentials, setting the crossover temperature between the (low-$T$) quantum and (high-$T$) activated regimes, and find the shape of the decay rate when the crossover is approached from below.
Focusing on a widely investigated model in nonlinear science, a particle in the one dimensional cubic potential, I present in Section 2 the finite time solution of the Euler-Lagrange equation in terms of the powerful formalism of Jacobian elliptic functions \cite{wang}: this generalizes the well known {\it infinite} time bounce, which is recovered asymptotically. I emphasize that the system is taken as non-dissipative; therefore the {\it temperature} should not be viewed here as a property of the heat bath \cite{grabwei,rise,antunes}, but rather as a measure of the system size along the time axis. Section 3 is devoted to the computation of the classical action. Section 4 describes the method of the semiclassical path integral and presents the calculation of the overall quantum fluctuation contribution through the theory of functional determinants. Section 5 solves the stability equation for the periodic potential defined by the classical background. This permits one to obtain analytically the lowest quantum fluctuation eigenvalues as a function of the finite time/temperature. It is shown in Section 6 that the softening of such eigenvalues close to the crossover largely determines the peculiar shape of the decay rate and its deviation from the prediction of the standard zero-$T$ theory. The conclusions are drawn in Section 7. \section*{2. Cubic Potential Model} To begin, consider a particle of mass $M$ in the one dimensional cubic anharmonic potential: \begin{eqnarray} V(x)=\, {{M\omega^2} \over 2}x^2 - {{\gamma} \over 3}x^3 \,\, , \label{eq:55} \end{eqnarray} plotted in Fig.~\ref{fig:1}(a) for $\hbar\omega=\,20meV$ and $M=\,10^3m_e$, $m_e$ being the electron mass. Let $a$ denote the position of the top of the barrier, whose height is $V(a)=\,\gamma a^3/6$ with $\gamma=\,M\omega^2/a$. Throughout the paper I take $a=\,1\AA$. At $x=\,0$ the particle sits in a local minimum from which it cannot escape classically.
Thus, in the real time formalism, the classical equation of motion admits only the trivial solution $x_{cl}=\,0$. Physically, however, such local minimum is metastable, as quantum fluctuations allow the particle to explore the abyss at $x \geq 3a/2$. In fact, a non trivial classical solution can be found in the Euclidean space. Performing a Wick rotation from the real to the imaginary time, $t \rightarrow -i\tau$, the equation of motion reads: \begin{eqnarray} M\ddot{x}_{cl}(\tau)=\,V'(x_{cl}(\tau)) \,\, , \label{eq:1} \end{eqnarray} where $V'$ denotes the derivative with respect to $x_{cl}$. In the spirit of the semiclassical method, the particle path $x(\tau)$ has been split into the sum of a classical and a quantum component, $x(\tau)=\,x_{cl}(\tau) + \eta(\tau)$. The Wick rotation is equivalent to turning the potential upside down with respect to the real time, as shown in Fig.~\ref{fig:1}(b). Now it is clear that the classical motion can take place in the reversed potential: precisely, the particle moves back and forth between the turning points $x_1$ and $x_2$, at which the particle velocity vanishes. Integrating Eq.~(\ref{eq:1}), one gets: \begin{eqnarray} {M \over 2}\dot{x}_{cl}^2(\tau) - V(x_{cl}(\tau))=\, E \,\, , \label{eq:2} \end{eqnarray} with the constant $E$ representing the classical energy. Defining: \begin{eqnarray} & &\chi_{cl}(\tau)=\, {2 \over {3}}{{x_{cl}(\tau)} \over a} \, \nonumber \\ & &\kappa=\,{{4E}\over {27 V(a)}} \,\, , \label{eq:56} \end{eqnarray} Eq.~(\ref{eq:2}) is easily integrated to yield: \begin{eqnarray} \tau - \tau_0 =\,\pm {{1 \over \omega}} \int_{\chi_{cl}(\tau_0)}^{\chi_{cl}(\tau)} {{d\chi} \over {\sqrt{-\chi^3 + \chi^2 + \kappa }}} \,\, , \label{eq:57} \end{eqnarray} where $\tau_0$ is the center of motion between the turning points.
The boundary conditions on the classical motion define a physical picture in which the particle starts from $x_2$ at the time $\tau=\,-L/2$, reaches $x_1$ at $\tau=\,\tau_0$ and returns to the initial position at $\tau=\,L/2$. Then, Eq.~(\ref{eq:57}) has a time reversal invariant solution whose period $L$ is finite and dependent on $E$. Looking at Fig.~\ref{fig:1}(b), one sees that the amplitude $x_1 - x_2$ attains the largest value for the $E=\,0$ motion while $x_2$ and $x_3$ coincide. For $E < \,0$ motions, $x_3$ is negative. The turning points $x_1$, $x_2$, $x_3$ are given by the zeros of the equation $-\chi^3 + \chi^2 + \kappa=\,0$ ($\chi\equiv \, 2x/(3a)$) which admits three real solutions for $\kappa \in [-4/27, 0]$, that is for $E \in [-V(a),0]$. After some algebra I find: \begin{eqnarray} & &\chi_1=\, {1 \over {3}} + {2 \over {3}}\cos(\vartheta) \, \nonumber \\ & & \chi_2=\, {1 \over {3}} + {2 \over {3}}\cos(\vartheta - 2\pi/3) \, \nonumber \\ & & \chi_3=\, {1 \over {3}} + {2 \over {3}}\cos(\vartheta - 4\pi/3) \, \nonumber \\ & & \vartheta=\,{1 \over {3}}\arccos\bigl({{27 \kappa} \over {2}} + {1}\bigr) \,\, . \label{eq:58} \end{eqnarray} At the bounds of the energy range, Eq.~(\ref{eq:58}) yields: \begin{eqnarray} & &{\bf E=\,0} \Rightarrow \chi_1=\,1 ;\,{}\, \chi_2=\, \chi_3=\,0 \, \nonumber \\ & & {\bf E=\,-V(a)} \Rightarrow \chi_1=\,\chi_2=\,2/3 ;\,{}\, \chi_3=\,-1/3 \, \,\, . \nonumber \\ \label{eq:59} \end{eqnarray} Thus, at the sphaleron energy $E_{sph}=\,|E|=\,V(a)$, the amplitude of the finite time solution has to shrink into a point. 
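As a cross-check of Eq.~(\ref{eq:58}), the turning points can be computed numerically from $\kappa$ alone; a minimal Python sketch (the function name and the clamping of floating-point rounding noise are mine):

```python
import math

def turning_points(kappa):
    """Roots chi_1 >= chi_2 >= chi_3 of -chi^3 + chi^2 + kappa = 0
    via the trigonometric formulas of Eq. (58), for kappa in [-4/27, 0]."""
    # clamp rounding noise so acos never leaves its domain at the endpoints
    arg = max(-1.0, min(1.0, 27.0 * kappa / 2.0 + 1.0))
    theta = math.acos(arg) / 3.0
    return tuple(1.0 / 3.0 + 2.0 / 3.0 * math.cos(theta - k * 2.0 * math.pi / 3.0)
                 for k in range(3))
```

At $\kappa=\,0$ this returns $(1, 0, 0)$ and at $\kappa=\,-4/27$ it returns $(2/3, 2/3, -1/3)$, reproducing the limits of Eq.~(\ref{eq:59}).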
Let's find the general solution of Eq.~(\ref{eq:57}) by pinning the center of motion at $\chi_{cl}(\tau_0)=\,\chi_1$ and using the result \cite{grad}: \begin{eqnarray} & &\int_{\chi_{cl}(\tau)}^{\chi_{1}} {{d\chi} \over {\sqrt{(\chi_1 - \chi)(\chi - \chi_2)(\chi - \chi_3) }}}=\,{{2 F(\lambda,p)} \over {\sqrt{\chi_1 - \chi_3}}} \, \nonumber \\ & &\lambda=\,\arcsin\Biggl(\sqrt{{\chi_1 - \chi_{cl}(\tau)}\over {\chi_1 - \chi_2}}\Biggr)\, \nonumber \\ & &p=\, \sqrt{{\chi_1 - \chi_{2}}\over {\chi_1 - \chi_3}} \,\, , \label{eq:61} \end{eqnarray} where $F(\lambda,p)$ is the elliptic integral of the first kind with amplitude $\lambda$ and modulus $p$. Then, through Eqs.~(\ref{eq:56}),~(\ref{eq:57}),~(\ref{eq:61}), I derive the bounce solution of the {\it finite time} theory: \begin{eqnarray} & &x_{cl}(\tau)=\,{{3a} \over 2}\bigl[\chi_1 cn^2(\varpi,p) + \chi_2 sn^2(\varpi,p)\bigr] \, \nonumber \\ & &\varpi=\, \sqrt{{\chi_1 - \chi_3}}\,{\omega \over 2}(\tau - \tau_0)\, \,\, , \nonumber \\ \label{eq:62} \end{eqnarray} $sn(\varpi,p)$ and $cn(\varpi,p)$ are the {\it sine-} and {\it cosine-} amplitudes respectively \cite{wang}. The modulus $p$ keeps track of the classical mechanics through the second of Eq.~(\ref{eq:56}) and Eq.~(\ref{eq:58}). At {\bf E =\,0, p=\,1}, the bounce of the {\it infinite time} theory is recovered: \begin{eqnarray} & &x_{cl}(\tau)=\,{{3a} \over 2}cn^2(\varpi,1)=\,{{3a} \over 2} sech^2\bigl({\omega \over 2}(\tau - \tau_0)\bigr) \, \,\, . \nonumber \\ \label{eq:63} \end{eqnarray} At ${\bf E_{sph}, p=\,0}$, the bounce solution is (as expected) a point-like object set at the bottom of the valley in the reversed potential: $x_{cl}(\tau)=\,a$. Thus, Eq.~(\ref{eq:62}) defines the transition state which is a saddle for the action below the sphaleron. Computation of Eq.~(\ref{eq:62}) shows that the bounce amplitude contracts as the {\it energy over potential height} ratio increases (in absolute value).
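Eq.~(\ref{eq:63}) can be verified directly: in the dimensionless units $M=\,a=\,\omega=\,1$ (so that $\gamma=\,1$), the $E=\,0$ bounce must satisfy the Euclidean equation of motion $\ddot{x}=\,V'(x)=\,x - x^2$. A finite-difference sketch (the unit choice is mine):

```python
import math

def bounce(tau):
    """E = 0 bounce of Eq. (63) with M = a = omega = 1 and tau_0 = 0."""
    return 1.5 / math.cosh(tau / 2.0) ** 2

def eom_residual(tau, h=1e-4):
    """|d^2 x / d tau^2 - V'(x)| with V'(x) = x - x^2 (gamma = M omega^2 / a = 1),
    the second derivative approximated by a central finite difference."""
    x = bounce(tau)
    xdd = (bounce(tau + h) - 2.0 * x + bounce(tau - h)) / h ** 2
    return abs(xdd - (x - x * x))
```

The residual stays at the level of the finite-difference error for any $\tau$, confirming that the $sech^2$ profile solves Eq.~(\ref{eq:1}) at zero energy.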
As the bounce is a combination of squared Jacobi elliptic functions, its period is $2K(p)$ with $K(p)=\,F(\pi/2,p)$ being the complete elliptic integral of the first kind \cite{wang}. Hence, from Eq.~(\ref{eq:62}), I get: \begin{eqnarray} \sqrt{{\chi_1 - \chi_3}}\,{\omega \over 4}L=\,K(p) \,\, , \label{eq:64} \end{eqnarray} which establishes the relation between the oscillation period and the classical energy embedded in the turning points. As stated in the Introduction, one can map the imaginary time onto the temperature axis, $L=\,\hbar /(K_BT^*)$, where $T^*$ is the temperature at which the particle makes the excursion to and from the edge of the abyss for a given $E$. Then, only periodic bounces whose period is proportional to the inverse temperature determine the decay rate and the {\it finite} time theory can be viewed as a finite $T^*$ theory. From Eq.~(\ref{eq:64}) I get: \begin{eqnarray} K_BT^*=\,{{\hbar \omega} \over 4} {\sqrt{{\chi_1 - \chi_3}} \over { K(p)}} \,\, . \label{eq:64a} \end{eqnarray} Eq.~(\ref{eq:64a}) is plotted in Fig.~\ref{fig:2} on a linear scale. Approaching $E=\,0$, $T^*$ consistently drops to zero while the value at $E_{sph}$ defines the transition temperature $T^*_c$ between quantum and activated regimes. Analytically, at the sphaleron, Eq.~(\ref{eq:64a}) yields \begin{eqnarray} K_BT_{c}^*=\,{{\hbar \omega} \over {2\pi}} \,\, , \label{eq:64b} \end{eqnarray} which represents the upper bound for the occurrence of quantum tunneling and precisely sets the Goldanskii criterion \cite{gold,larkin} for a cubic anharmonic potential. Taking $\hbar \omega=\,20meV$, I get $T_{c}^*=\,36.938K$. The following calculations are carried out in the low temperature range up to $T_{c}^*$. \section*{3.
Classical Action} The classical action $A[x_{cl}]$ for the bounce in the finite time theory can be computed in terms of the path velocity $\dot{x}_{cl}(\tau)$ by the relations: \begin{eqnarray} & &A[x_{cl}]=\, M N^{-2} - E\cdot L(E) \, \nonumber \\ & &{N}^{-2}=\, 2\int_{0}^{L/2}d\tau [\dot{x}_{cl}(\tau)]^2 \, \nonumber \\ & &\dot{x}_{cl}(\tau)=\,{{3a} \over 2}\mathcal{F}\cdot sn(\varpi,p)cn(\varpi,p)dn(\varpi,p)\, \nonumber \\ & &\mathcal{F}=\,-\omega (\chi_1 - \chi_2) \sqrt{{\chi_1 - \chi_3}} \,\, , \label{eq:65} \end{eqnarray} where $dn(\varpi,p)$ is the {\it delta-} amplitude \cite{wang}. Computation of Eq.~(\ref{eq:65}) requires knowledge of $L(E)$ through Eqs.~(\ref{eq:56}),~(\ref{eq:58}) and ~(\ref{eq:64}). In the $E\rightarrow 0$ limit, from Eq.~(\ref{eq:65}), I get the result \begin{eqnarray} {{A[x_{cl}]} \over \hbar} \rightarrow \, {{6 M^3 \omega^5} \over {5\hbar \gamma^2}} \,\, , \label{eq:66} \end{eqnarray} which serves as a benchmark for the computational method. The dependence of the classical action on $1 / \gamma^2$ reflects the well known fact that metastable systems are non perturbative and provides the fundamental motivation for the semiclassical treatment. Eq.~(\ref{eq:66}) permits setting the potential parameters such that the condition ${A[x_{cl}] > \hbar}$ holds and the semiclassical method is thus justified. As $M$ and $a$ have been taken constant, ${{A[x_{cl}]}}\propto \omega$ in the $E\rightarrow 0$ limit. The classical action and the squared norm of the path velocity (times $M$) are displayed in Fig.~\ref{fig:3}(a) and Fig.~\ref{fig:3}(b) respectively. While, at low $T^*$, the two plots are essentially identical, the role of the term $E \cdot L(E)$ in Eq.~(\ref{eq:65}) becomes more significant at increasing $T^*$. At $T_{c}^*$, $N^{-2}$ vanishes whereas ${A[x_{cl}]}$ is finite.
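Some of the quoted numbers can be reproduced at once; a Python sketch with CODATA constants (the function names are mine; with $\gamma=\,M\omega^2/a$, Eq.~(\ref{eq:66}) simplifies to $A/\hbar=\,6M\omega a^2/(5\hbar)$, and Eq.~(\ref{eq:64}) at $p=\,0$, where $\chi_1 - \chi_3=\,1$, gives the sphaleron period $L=\,2\pi/\omega$):

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
EV = 1.602176634e-19     # J
ME = 9.1093837015e-31    # kg

def crossover_temperature_K(hbar_omega_meV):
    """T_c^* = hbar omega / (2 pi k_B), Eq. (64b)."""
    return hbar_omega_meV * 1e-3 * EV / (2.0 * math.pi * KB)

def action_over_hbar(hbar_omega_meV, mass=1e3 * ME, a=1e-10):
    """A[x_cl]/hbar in the E -> 0 limit, Eq. (66), with gamma = M omega^2 / a."""
    omega = hbar_omega_meV * 1e-3 * EV / HBAR
    gamma = mass * omega ** 2 / a
    return 6.0 * mass ** 3 * omega ** 5 / (5.0 * HBAR * gamma ** 2)

def sphaleron_period_over_hbar(hbar_omega_meV):
    """L(E_sph)/hbar = 2 pi / (hbar omega), in 1/meV, from Eq. (64) at p = 0."""
    return 2.0 * math.pi / hbar_omega_meV
```

For $\hbar\omega=\,20meV$, $M=\,10^3m_e$ and $a=\,1\AA$ this gives $T_c^* \simeq 36.94K$ and $A[x_{cl}]/\hbar \simeq 3.15 > 1$, so the semiclassical treatment is justified; doubling $\omega$ doubles $A[x_{cl}]/\hbar$, as stated above.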
Note that ${A[x_{cl}]}$ decreases smoothly versus $T^*$, confirming that the transition to the activated regime at $T_{c}^*$ is of second order as suggested long ago \cite{larkin,affl}. In general, the criterion to establish the order of the transitions in periodic tunneling systems has been formulated by Chudnovsky \cite{chudno} through the behavior of the oscillation period $L(E)$: {\it i)} a monotonic $L(E)$ below the sphaleron implies $A[x_{cl}] < A_0$ for $T^* < T_c^*$, with $A_0$ being the thermal action given by $A_0=\,\hbar V(a)/K_B T^*$. At $T^* =\,T_c^*$, both conditions $A[x_{cl}]=\, A_0$ and $dA[x_{cl}]/dT=\, dA_0/dT$ are fulfilled, hence the crossover from the quantum to the thermal regime is expected to be smooth; {\it ii)} on the other hand, a nonmonotonic behavior of $L(E)$ would indicate a sharp transition. As can be deduced from Fig.~\ref{fig:2}, $L(E)$ increases versus $E \in [- V(a), 0]$, thus the case {\it i)} applies to the cubic potential in Eq.~(\ref{eq:55}) and, consistently, the action is convex upwards versus $T^*$. At the sphaleron, I find numerically: $L(E_{sph})/\hbar=\,0.314meV^{-1}$. Note, from Eq.~(\ref{eq:64}), that such value corresponds to $2\pi/\hbar\omega$ (since $K(p=\,0)=\,\pi/2$) and this proves the correctness of the computation. In fact, at $E_{sph}$ the bounce is a point, that is, a static solution of Eq.~(\ref{eq:2}), but, near $E_{sph}$, the periodic path is the sum of the sphaleron and a fluctuation with negative eigenvalue $\varepsilon_{-1}$ whose period tends to $L(E_{sph})=\,2\pi/\sqrt{|\varepsilon_{-1}|}$ \cite{park,blatter}. Then one infers that, for $|E| \rightarrow E_{sph}$, the ground state eigenvalue has to behave as: $\varepsilon_{-1}\rightarrow -\omega^2$. This key point will be further investigated in Section 5. \section*{4. Semiclassical Euclidean Path Integral} This Section presents the calculation of the space-time Euclidean path integral between the positions $x_i$ and $x_f$ connected in the time $L$.
In the semiclassical model and treating the quantum fluctuations in quadratic approximation, the path integral reads: \begin{eqnarray} & &<x_f|x_i>_L=\,\exp\biggl[- {{A[x_{cl}]} \over {\hbar}} \biggr] \cdot \int D\eta \exp\biggl[- {{A_f[\eta]} \over {\hbar}} \biggr] \, \nonumber \\ & &A_f[\eta]=\,\int_{-L/2}^{L/2} d\tau \biggl({M \over 2} \dot{\eta}^2(\tau) + {1 \over 2}V''(x_{cl}(\tau))\eta^2(\tau) \biggr) \, \nonumber \\ & &{{V''(x_{cl}(\tau))}}=\,{M\omega^2} \Bigl(1 - {2 \over a} x_{cl}(\tau) \Bigr) \,\, . \label{eq:66+++} \end{eqnarray} Thus, to get the quantum fluctuation action $A_f[\eta]$, one has to solve a second order differential problem which, after partial integration in the first term, is formulated as follows: \begin{eqnarray} & &\hat{O} \eta_n(\tau)=\,\varepsilon_n \eta_n(\tau) \, \nonumber \\ & &\hat{O}\equiv -\partial_{\tau}^2 + {{ V''(x_{cl}(\tau))}/ M}\,\nonumber \\ & &\eta(\tau)=\,\sum_{n=\,-1}^{\infty} \varsigma_n \eta_n(\tau) \,\, , \label{eq:66++} \end{eqnarray} where the $\varepsilon_n$ are the quantum fluctuation eigenvalues while the coefficients $\varsigma_n$ of the series expansion in orthonormal components $\eta_n(\tau)$ define the measure of the fluctuation paths integration in Eq.~(\ref{eq:66+++}): \begin{eqnarray} \int D\eta=\,\aleph \prod_{n=\,-1}^{\infty} \int_{-\infty}^{\infty} {{d\varsigma_n}\over {\sqrt{2\pi\hbar/M}}}\,\, , \label{eq:66aa} \end{eqnarray} $\aleph$ depends only on the functional integral measure. First, observe from Eq.~(\ref{eq:1}) that ${\dot{x}_{cl}(\tau)}$ satisfies the homogeneous equation associated with the second order Schr\"{o}dinger-like differential operator, $\hat{O} \eta_n(\tau)=\,0$. This is a general consequence of the $\tau$-translational invariance of the system. Hence, ${\dot{x}_{cl}(\tau)}$ is proportional to the orthonormal eigenmode $\eta_0(\tau)$, ($\eta_0(\tau) \equiv \,N {\dot{x}_{cl}(\tau)}$) with $\varepsilon_0=\,0$.
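This zero-mode property is easy to verify numerically: in the dimensionless units $M=\,a=\,\omega=\,1$, the velocity of the $E=\,0$ bounce must annihilate the operator $\hat{O}$, i.e. $-\ddot{\eta}_0 + (1 - 2x_{cl})\eta_0=\,0$. A finite-difference sketch (the unit choice and names are mine):

```python
import math

def bounce_x(tau):
    """E = 0 bounce, Eq. (63), with M = a = omega = 1 and tau_0 = 0."""
    return 1.5 / math.cosh(tau / 2.0) ** 2

def zero_mode(tau):
    """Unnormalized eta_0, proportional to the path velocity dx_cl/dtau."""
    u = tau / 2.0
    return -1.5 * math.tanh(u) / math.cosh(u) ** 2

def stability_residual(tau, h=1e-4):
    """|(-eta_0'' + V''(x_cl)/M * eta_0)| with V''(x_cl)/M = 1 - 2 x_cl,
    the second derivative approximated by a central finite difference."""
    eta = zero_mode(tau)
    eta_dd = (zero_mode(tau + h) - 2.0 * eta + zero_mode(tau - h)) / h ** 2
    return abs(-eta_dd + (1.0 - 2.0 * bounce_x(tau)) * eta)
```

The residual vanishes to finite-difference accuracy for any $\tau$, as required by $\tau$-translational invariance.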
The zero mode, however, cannot be the ground state, as the bounce solution (Eq.~(\ref{eq:62})) is non monotonic and ${\dot{x}_{cl}(\tau)}$ has one node along the time axis within the period $L$. This implies that the quantum fluctuation spectrum has one negative eigenvalue corresponding to the ground state \cite{i1}. Here lies the origin of metastability. Second, from Eq.~(\ref{eq:65}), note that for any two points $\varpi_1, \varpi_2$ such that $\varpi_2=\,\varpi_1 \pm 2K(p)$, $\dot{x}_{cl}(\varpi_2)=\,\dot{x}_{cl}(\varpi_1)$. The important consequence is that the fluctuation eigenmodes obey periodic boundary conditions (PBC). As $x_i$ and $x_f$ defined in Eq.~(\ref{eq:66+++}) coincide for the periodic bounce, Eq.~(\ref{eq:66+++}) represents the single bounce contribution $Z_1$ to the total partition function $Z_T$. In fact, the latter also contains the effects of all multiple (non interacting) excursions to and from the abyss, which is equivalent to summing over an infinite number of single bounce contributions like $Z_1$. Moreover, the static solution of Eq.~(\ref{eq:1}), $x_{cl}=\,0$, also contributes to $Z_T$ by the harmonic partition function $Z_h$ which can be easily determined using the same measure in Eq.~(\ref{eq:66aa}). Summing up, $Z_T$ is given by \begin{eqnarray} & & Z_T=\,Z_h \exp(Z_1/Z_h)\, \nonumber \\ & &Z_h=\,\aleph |Det[\hat{h}]|^{-1/2} \,\, , \label{eq:66a+++} \end{eqnarray} $Det[\hat{h}]$ ($\hat{h}\equiv \, -\partial^2_{\tau} + \omega^2$) is the harmonic fluctuation determinant. Since, through the Feynman-Kac formula \cite{schulman}, the decay rate $\Gamma$ is proportional to the imaginary part of the exponential argument, there is no need to determine $\aleph$: it cancels out in the ratio $Z_1/Z_h$. Once Eq.~(\ref{eq:66++}) has been solved, the quantum fluctuation term in Eq.~(\ref{eq:66+++}) can be worked out by carrying out Gaussian path integrals.
Formally one gets: \begin{eqnarray} & &\int D\eta \exp\biggl[- {{A_f[\eta]} \over {\hbar}} \biggr]= \, \aleph \cdot Det\Bigl[\hat{O}\Bigr]^{-1/2} \, \nonumber \\ & & Det[\hat{O}]\equiv \, \prod_{n=\,-1}^{\infty}\varepsilon_n \,\, . \label{eq:66++++} \end{eqnarray} The evaluation of Eq.~(\ref{eq:66++++}) is carried out through the two following steps. \subsection*{A. Zero Mode} The Gaussian approximation leading to Eq.~(\ref{eq:66++++}) is broken by the Goldstone mode arising from the fact that $\tau_0$, the center of the bounce, can be located arbitrarily inside $L$. The technique to overcome the obstacle is well known \cite{larkin}: the divergent integral over the coordinate $d\varsigma_0$ associated with the zero mode in the measure $D\eta$ is transformed into a $d\tau_0$ integral. Accordingly the eigenvalue $\varepsilon_0=\,0$ is extracted from $Det[\hat{O}]$ and its contribution to Eq.~(\ref{eq:66++++}) is replaced as follows \begin{eqnarray} (\varepsilon_0)^{-1/2} \rightarrow \sqrt{{{M {N}^{-2}}\over {2\pi\hbar}}}L\, \,\, . \nonumber \\ \label{eq:66+++++} \end{eqnarray} To be rigorous, this replacement holds in the approximation of quadratic fluctuations \cite{kleinert}, while higher order terms may be significant around the crossover. It is also worth noticing that Eq.~(\ref{eq:66+++++}) is often encountered in the form $(\varepsilon_0)^{-1/2} \rightarrow \sqrt{{{A[x_{cl}]}\over {2\pi\hbar}}}L$. However the latter is correct only in the low $T$ limit where $A[x_{cl}]$ equals $M {N}^{-2}$ (as made clear by Fig.~\ref{fig:3}) while, approaching $T^*_c$, the difference between the two objects gets large. This fact is crucial in establishing the behavior of the decay rate at the crossover as shown in Section 6. Thus, having handled the zero mode, I turn to the evaluation of the regularized determinant $Det^R[\hat{O}]$ defined by $Det[\hat{O}]=\,\varepsilon_0 \cdot Det^R[\hat{O}]$. \subsection*{B.
Regularized Fluctuation Determinant} The calculation of $Det^R[\hat{O}]$ is based on the theory of functional determinants for second order differential operators which was first developed for Dirichlet boundary conditions \cite{gelfand} and then extended to general operators and boundary conditions in several ways \cite{forman,tarlie,kirsten1}. As a fundamental feature, to evaluate $Det^R[\hat{O}]$ one has to know only the classical path which makes the action stationary. As shown above the path velocity obeys PBC for any two points $\varpi_1, \varpi_2$ separated by the period $2K(p)$. The latter corresponds to the oscillation period $L$ along the $\tau$-axis. It can be easily checked that also the path acceleration fulfills the PBC. Then, the regularized determinant is given by \cite{tarlie}: \begin{eqnarray} Det^R[\hat{O}]=\,{{<f_0 |f_0> \bigl(f_1(\varpi_2) - f_1(\varpi_1)\bigr)} \over {{f_0(\varpi_1) W(f_0, f_1)}}}\,\, , \label{eq:67} \end{eqnarray} where $f_0, f_1$ are two independent solutions of the homogeneous equation: $\hat{O} \eta_n(\tau)=\,0$. $f_0$ is obviously $\dot{x}_{cl}$ while $f_1$ can be taken as: \begin{eqnarray} f_1=\,{{\partial {x}_{cl}} \over {\partial q}}\, ;{}\, q \equiv \,p^2 \,\, , \label{eq:67+} \end{eqnarray} $W(f_0, f_1)$ is their Wronskian and $<f_0 |f_0> \equiv N^{-2}$ is given by Eq.~(\ref{eq:65}). The Wronskian, being constant along $\tau$, can be calculated in any convenient point. Let's take $\tau_0$ as $f_0(\tau_0)=\,0$. Then: \begin{eqnarray} & &W(f_0, f_1)\Bigr|_{\tau_0}=\,- \dot{f}_0(\tau_0)f_1(\tau_0) \, \nonumber \\ & &=\,{9 \over 8}a^2\omega^2 (\chi_1 - \chi_2){(\chi_1 - \chi_3)} {{\partial \chi_1} \over {\partial q}} \, \,\, . 
\nonumber \\ \label{eq:69} \end{eqnarray} Working out the calculation, ${Det^R[\hat{O}]}$ in Eq.~(\ref{eq:67}) transforms into: \begin{eqnarray} {Det^R[\hat{O}]} =\,& &{{2} \over {\omega \sqrt{\chi_1 - \chi_3} \bar{p}^2}} \Biggl[ {{E(\pi/2,p) - \bar{p}^2K(p)} \over {p^2}} \Biggr] \cdot {{<f_0 |f_0>} \over {W(f_0, f_1)}}\, \nonumber \\ \bar{p}^2=\, & & 1 - p^2 \,\, , \label{eq:70} \end{eqnarray} which can be directly computed using Eqs.~(\ref{eq:65}),~(\ref{eq:69}). $E(\pi/2,p)$ is the complete elliptic integral of the second kind \cite{wang}. It is however known in the theory of functional determinants \cite{gelfand,kleinert} that only ratios of determinants are meaningful in value and sign, such ratios arising naturally in the path integral method as it has been pointed out above. In fact, $Det^R[\hat{O}]$ would diverge in the $T^* \rightarrow 0$ limit due to the fact that the determinant is the product over an infinite number of eigenvalues with magnitude greater than one. Consistently with Eq.~(\ref{eq:66a+++}), $Det^R[\hat{O}]$ has to be normalized over $Det[\hat{h}]$ which, in the case of PBC, is: ${Det[\hat{h}]}=\,-4\sinh^2(\omega L/2)$ \cite{tarlie}. The normalization cancels the exponential divergence and makes the ratio finite. Then, observing that for $E \rightarrow 0$ ( $T^* \rightarrow 0$): \begin{eqnarray} & &W(f_0, f_1)\Bigr|_{\tau_0}\rightarrow {9 \over 8}a^2\omega^2 (1 - p^2)\, \nonumber \\ & &<f_0 |f_0>\, \rightarrow {6 \over 5}a^2\omega \, \nonumber \\ & &K(p) \rightarrow \ln(4/\sqrt{1 - p^2}) \,\, , \label{eq:71} \end{eqnarray} from Eq.~(\ref{eq:70}), I finally get the finite ratio \begin{eqnarray} {{Det^R[\hat{O}]} \over {Det[\hat{h}]}} \rightarrow -{{1} \over {60\omega^2}} \,\, . \label{eq:71+} \end{eqnarray} The dimensionality $[\omega^{-2}]$ correctly accounts for the fact that one eigenvalue has been extracted from $Det^R[\hat{O}]$. 
From the computation of Eq.~(\ref{eq:70}), the following information can be extracted: {\bf 1)} the $T^* \rightarrow 0$ limit given by Eq.~(\ref{eq:71+}) is in fact an excellent estimate up to $T^* \sim T^*_c/2$ whereas a strong deviation is found at larger $T^*$ up to $\sim T^*_{c}$. {\bf 2)} The ratio $Det^R[\hat{O}]/Det[\hat{h}]$ is negative for any $T^*$ and this sign has physical meaning as it is due precisely to the negative ground state eigenvalue $\varepsilon_{-1}$ of the fluctuation spectrum. As $Z_1/Z_h$ contributes to the partition function by the square root of the fluctuation (inverse) determinants ratio it follows that such contribution is purely imaginary. Moreover, close to the sphaleron, $T^* \sim T^*_{c}$, ${<f_0 | f_0>} \propto p^4$ and $W(f_0, f_1) \propto p^2$, thus $Det^R[\hat{O}]$ tends to zero as $Det^R[\hat{O}] \propto p^2$. The fact that $Det^R[\hat{O}]$ vanishes at the crossover causes the divergence of the inverse ratio. This divergence may look surprising since the zero mode had been extracted from the determinant: physically one realizes that there must be a quantum fluctuation mode ($\varepsilon_{1}$) which softens as $T^*$ increases and ultimately vanishes at the sphaleron. To understand in detail the key effects of the low lying fluctuation eigenvalues $\varepsilon_{1}$ and $\varepsilon_{-1}$, one has to determine them analytically by solving the stability equation in Eq.~(\ref{eq:66++}). This is done in the next Section. \section*{5. Lam\'{e} Equation} Take Eq.~(\ref{eq:66++}) with the second derivative of the potential given in Eq.~(\ref{eq:66+++}).
Using Eq.~(\ref{eq:62}) and working out the algebra, I get the stability equation which governs the fluctuation spectrum around the classical background: \begin{eqnarray} & &{{d^2} \over {d\varpi^2}} \eta_n(\tau) =\,\bigl[ l(l + 1)p^2 sn^2(\varpi,p) + \mathcal{A}_n \bigr] \eta_n(\tau) \, \nonumber \\ & &\mathcal{A}_n=\,{{4(1 - 3\chi_1)} \over {\chi_1 - \chi_3}} - {{4\varepsilon_n} \over {\omega^2(\chi_1 - \chi_3)}} \, \nonumber \\ & &l(l + 1)\equiv 12 \,\, . \label{eq:73} \end{eqnarray} This is the Lam\'{e} equation in the Jacobian form for the case $l=\,3$ \cite{whittaker}. For a given $l$ and $p$, Eq.~(\ref{eq:73}) yields periodic solutions (which can be expanded in infinite series) for an infinite sequence of characteristic $\mathcal{A}_n$ values. The continuum of the fluctuation spectrum stems from this sequence. However, since $l$ is a positive integer, the first $2l + 1$ solutions of Eq.~(\ref{eq:73}) are not infinite series but polynomials in the Jacobi elliptic functions with real period $2K(p)$ or $4K(p)$. $2K(p)$, being the period of the potential, plays the role of a lattice constant. Then, Eq.~(\ref{eq:73}) admits seven polynomial solutions with eigenvalues $\mathcal{A}_n, n\in [-l,l]$, from which the corresponding $\varepsilon_n$ are derived. However not all the $\varepsilon_n$ are good fluctuation eigenvalues. In fact, four out of seven have to be discarded, as their eigenfunctions do not fulfill the PBC required for the fluctuation components: $\eta_n(\varpi_1)=\,\eta_n(\varpi_1 \mp 2K(p))$.
Thus, the three good eigenmodes and relative eigenvalues in polynomial form are: \begin{eqnarray} & &\eta_{0}\propto \,sn(\varpi,p)cn(\varpi,p)dn(\varpi,p) \nonumber \\ & &\varepsilon_0=\,0 \, \nonumber \\ & &\eta_1 \propto \,(sn^2\varpi - p^{-2})^{1/2}\biggl[sn^2\varpi + {2 \over {p^2 + \mathcal{A}_1}}\biggr] \nonumber \\ & & \varepsilon_1=\, \omega^2\Bigl(\alpha_1 -\alpha_2 \mathcal{A}_1 \Bigr) \nonumber \\ & &\eta_{-1}\propto \,(sn^2\varpi - p^{-2})^{1/2}\biggl[sn^2\varpi + {2 \over {p^2 + \mathcal{A}_{-1}}}\biggr] \nonumber \\ & & \varepsilon_{-1}=\, \omega^2\Bigl(\alpha_1 - \alpha_2\mathcal{A}_{-1} \Bigr)\, \nonumber \\ & &\mathcal{A}_1= -(2 + 5p^2) - 2\sqrt{4p^4 - p^2 + 1}\, \nonumber \\ & &\mathcal{A}_{-1}= -(2 + 5p^2) + 2\sqrt{4p^4 - p^2 + 1}\, \nonumber \\ & &\alpha_1\equiv\, 1 - 3\chi_1 \, \nonumber \\ & &\alpha_2\equiv\, {{\chi_1 - \chi_3} \over 4} \,\, . \label{eq:75} \end{eqnarray} The plots of $\varepsilon_1$ and $\varepsilon_{-1}$ versus the {\it energy over potential height} ratio are reported on Fig.~\ref{fig:4}(a) and Fig.~\ref{fig:4}(b) respectively. Note that: {\bf i)} $\varepsilon_0=\,0$ is the zero mode eigenvalue correctly recovered through the stability equation. {\bf ii)} $\varepsilon_1$ lies in the continuum and, as it can be easily deduced from Eq.~(\ref{eq:75}), it drops to zero close to the sphaleron as $\varepsilon_1 \propto p^2$: this is precisely the behavior previously envisaged for $Det^R[\hat{O}]$. Hence, $\varepsilon_1$ is the soft mode driving the enhancement in the decay rate below the sphaleron which is discussed in the next Section. Observe that, for $p \rightarrow 0$, $K(p) \simeq \pi/2 + \pi p^2/8$. Then, by Eq.~(\ref{eq:64a}), close to the crossover: $\varepsilon_1 \propto T^*_c - T^*$. At $T_{c}^*$, $\varepsilon_1$ and $\varepsilon_0$ merge consistently with the double degeneracy of the corresponding eigenmodes above the crossover. {\bf iii)} $\varepsilon_{-1}$ is the negative eigenvalue responsible for metastability. 
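As a numerical check of Eq.~(\ref{eq:75}), the two eigenvalues can be evaluated from $\kappa$ alone through Eqs.~(\ref{eq:56}),~(\ref{eq:58}) and ~(\ref{eq:61}); a Python sketch in units $\omega=\,1$ (the function name and the clamping of rounding noise are mine):

```python
import math

def lame_eigenvalues(kappa):
    """epsilon_1 and epsilon_-1 of Eq. (75), in units of omega^2,
    for kappa = 4E/(27 V(a)) in [-4/27, 0]."""
    # turning points from Eq. (58), with acos argument clamped against rounding
    arg = max(-1.0, min(1.0, 27.0 * kappa / 2.0 + 1.0))
    theta = math.acos(arg) / 3.0
    chi1 = 1.0 / 3.0 + 2.0 / 3.0 * math.cos(theta)
    chi2 = 1.0 / 3.0 + 2.0 / 3.0 * math.cos(theta - 2.0 * math.pi / 3.0)
    chi3 = 1.0 / 3.0 + 2.0 / 3.0 * math.cos(theta - 4.0 * math.pi / 3.0)
    p2 = (chi1 - chi2) / (chi1 - chi3)           # squared modulus, Eq. (61)
    root = 2.0 * math.sqrt(4.0 * p2 ** 2 - p2 + 1.0)
    a_plus = -(2.0 + 5.0 * p2) - root            # A_1
    a_minus = -(2.0 + 5.0 * p2) + root           # A_-1
    alpha1 = 1.0 - 3.0 * chi1
    alpha2 = (chi1 - chi3) / 4.0
    return alpha1 - alpha2 * a_plus, alpha1 - alpha2 * a_minus
```

At $\kappa=\,0$ this reproduces $\varepsilon_{-1}=\,-5\omega^2/4$, while at the sphaleron $\kappa=\,-4/27$ it gives $\varepsilon_1=\,0$ and $\varepsilon_{-1}=\,-\omega^2$, consistently with the limits discussed in the text.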
$\varepsilon_{-1}$ also softens (in absolute value) with respect to the value $\varepsilon_{-1}=\, -5\omega^2/4$ found at $E=\,0$. Interestingly, along the temperature scale, the substantial reduction starts up at $T^* \sim T_{c}^*/2$, that is in the same range at which the classical properties deviate from the predictions of the infinite time theory. Finally, at the sphaleron, from Eq.~(\ref{eq:75}) I get $\varepsilon_{-1}=\,-\omega^2$ thus confirming the prediction made at the end of Section 3. This completes the analysis of the soft eigenvalues which ultimately govern the quantum fluctuation spectrum. \section*{6. Decay Rate} The decay rate $\Gamma$ of a metastable state is given in semiclassical theory by \begin{eqnarray} \Gamma=\,A\exp(-B/\hbar)[1 + O(\hbar)] \,\, , \label{eq:100} \end{eqnarray} where $A$ and $B$ depend on the specific shape of the potential. The investigation carried out so far allows us to identify the coefficients $A$ and $B$ in Eq.~(\ref{eq:100}) with $\hbar \sqrt{\bigl|Det[\hat{h}]/ Det[\hat{O}]\bigr|}/L$ and $A[x_{cl}]$ respectively. Then, the general expression for the finite time/temperature $\Gamma(T^*)$ is: \begin{eqnarray} & & \Gamma(T^*)=\,\hbar \sqrt{{{M {N}^{-2}}\over {2\pi\hbar}}} \sqrt{\Biggl|{{Det[\hat{h}]} \over {Det^{R} [\hat{O}] }}\Biggr|} \exp\biggl[-{{A[x_{cl}]} \over {\hbar}} \biggr] \, \,\, . \nonumber \\ \label{eq:76} \end{eqnarray} Eq.~(\ref{eq:76}) is plotted in Fig.~\ref{fig:5} against temperature up to $T^*_c$ for three oscillator energies. While at low $T^*$, $\Gamma(T^*)$ merges with the constant decay rate of the {\it infinite time} theory, an increase of $\Gamma(T^*)$ is found in all plots above $T^* \sim T^*_c/2$ where the combined effects of quantum fluctuations and classical action softening become evident. Approaching the crossover, $\Gamma(T^*)$ deviates from the $T=\,0$ result and reaches a peak value $\Gamma(T_P^*)$ which is larger for lower $\omega$. 
This effect is mainly ascribable to the soft eigenvalue $\varepsilon_1$. Note, however, that for $\hbar \omega=\,10meV$, $\Gamma(T_P^*)/\hbar\omega \sim 1$, signalling that the application of the semiclassical method itself becomes questionable. In fact, as noted below Eq.~(\ref{eq:66}), the latter works when ${{A[x_{cl}]} > {\hbar}}$ and this condition starts to be well fulfilled in the case $\hbar \omega=\,20meV$, as shown in Fig.~\ref{fig:3}. Clearly the $\omega$ values making the semiclassical method feasible also depend on $M$ which has been assumed light in the present discussion. Heavier particle masses favor the condition $\Gamma(T_P^*)/\hbar\omega < 1$ and sustain the applicability of the semiclassical method over a broader range of $\omega$. Above $T_P^*$ the decay rate smoothly merges with the classical Arrhenius factor as ${{A[x_{cl}]}/ {\hbar}} \rightarrow V(a)/K_BT^*_c$. The temperatures $T^*_A$, corresponding to the symbols in Fig.~\ref{fig:5}, mark the effective values at which quantum and thermal decay rates overlap. Beyond $T_A^*$ and approaching $T^*_c$, the decay rate falls to zero as $\Gamma(T^*) \propto (T^*_c - T^*)^{1/2}$ and the quantum tunneling ceases to exist. The latter power law dependence is driven by $\sqrt{N^{-2}/Det^R[\hat{O}]} \propto p$. The increase found for the quantum decay rate up to $T_P^*$ and the subsequent sharp drop is interesting also in view of a comparison with activated systems described by classical Ginzburg-Landau finite size models \cite{faris,tu} in which spatio-temporal noise induces transitions between locally stable states of a nonlinear potential \cite{chaud}. The changes in radius and the stability conditions of metastable metallic nanowires are an example of current interest \cite{yanson,stafford}. In classical systems of finite size $L$ a power-law divergence in the escape rate (with critical exponent $ 1/2$) is predicted once a critical length scale $L_c$ is approached at fixed $T$ \cite{stein1}.
Instead, the quantum decay rate of Eq.~(\ref{eq:76}) cannot be divergent as the small parameter is $\hbar$ which, unlike the noise in classical systems, cannot be varied as a function of $L$ at fixed $T$ (or vice-versa) \cite{stein2}. Accordingly the quantum tunneling decay rate is small and continuous. Finally, it is worth pointing out that the decay rate may be computed {\it independently} of the squared norm $N^{-2}$ as the latter cancels out in Eq.~(\ref{eq:76}) by explicitly inserting Eq.~(\ref{eq:67}). For this reason the behavior of the decay rate essentially depends on $Det^R[\hat{O}]/N^{-2}$ consistently with the quadratic approximation for the quantum fluctuations which enters the calculation at two stages: {\it a)} it determines the form of the quantum action in Eq.~(\ref{eq:66+++}) and accordingly leads to Eq.~(\ref{eq:66++++}); {\it b)} it allows us to replace the inverse zero mode eigenvalue by the squared norm of the bounce velocity, via Eq.~(\ref{eq:66+++++}). \section*{7. Conclusion} I have developed the finite time (temperature) semiclassical theory for the quantum decay rate of a particle in the metastable state of a cubic potential model. In the Euclidean path integral formalism, the optimal escape trajectory emerges as the solution of the Euler-Lagrange equation in terms of Jacobian elliptic functions. Such solution is a time dependent bounce whose periodicity naturally leads to relate the temperature $T^*$ to the energy of the classical motion. Consistently one defines the crossover temperature $T^*_c$ between quantum and activated regimes which depends only on the fundamental oscillator frequency $\omega$. As the path integral has been solved treating the quantum fluctuations in quadratic approximation, the calculations are confined to the low $\omega$ range, that is to the low temperature regime. 
In the numerical analysis I have considered a light particle mass and established, for this case, the lowest bound of $\omega$ values which make the semiclassical method reliable. The stumbling block in the calculation of the quantum decay rate is the estimate of the quantum fluctuation effect in the {\it finite} time theory. In particular, I have {\it i)} derived a compact expression for the overall fluctuation contribution to the path integral in terms of the complete elliptic integrals and {\it ii)} solved the periodic stability equation which yields the low lying fluctuation eigenmodes and eigenvalues in polynomial form. The latter point permits quantifying the softening of the lowest positive and of the ground state (in absolute value) eigenvalues as $T^*_c$ is approached. The softening of the lowest positive eigenvalue is mainly responsible for the enhancement in the quantum decay rate above the prediction of the infinite time (zero temperature) theory. The behavior of the decay rate has been studied in detail below $T^*_c$. At $T^* \sim T^*_c$, the thermal activation sets in while the quantum decay rate drops to zero according to the power law $\Gamma(T^*) \propto (T^*_c - T^*)^{1/2}$. Similar conclusions may be drawn from the analysis of a quartic metastable potential, although the decay rate of the latter is smaller than in a cubic potential having the same structural parameters.
\section{Introduction} Up to 20\% of patients undergoing abdominal surgery develop chronic pain due to post-operative adhesions \cite{van_der_wal_adhesion_2011}. Recently, cine-MRI has been introduced as an effective method to detect these adhesions non-invasively \cite{lang_cine-mri_2008}. Non-invasive detection plays a key role in patient management, as it prevents both unnecessary surgeries and severe complications during surgery \cite{van_den_beukel_shared_2018}. Radiological interpretation of cine-MRI, however, is time-consuming and strongly depends on expertise. Adhesions are very thin tissue structures, which are themselves invisible on a single time frame on MRI or other imaging modalities. During the scan, patients are instructed to perform the Valsalva maneuver repeatedly, thereby inducing motion in the entire abdomen. The radiologist detects an adhesion by its property of connecting different structures, which appears as a local absence of sliding motion over the entire cine-MRI time-series. In this work, we approach adhesion detection as a classification problem and model the spatio-temporal data efficiently using a hybrid architecture. A feed-forward CNN (ResNet-18) extracts low-dimensional spatial features. These feature maps allow temporal aggregation with a lightweight recurrent neural network, ConvGRU, which models spatial information through time. We show that this approach works for adhesion detection and expect that it applies equally well to any medical imaging task with a temporal dimension. \begin{figure} \floatconts {fig:example2} {\caption{(a) A schematic overview of ResNet-18-ConvGRU. Each consecutive frame pair is fed to a ConvGRU model, through a ResNet-18 encoder. A final fully connected layer outputs a probability score.
(b) ROC curves of both models, with 95\% confidence intervals estimated with bootstrapping.}} {% \subfigure{% \label{fig:architecture}% \includegraphics[width=0.6\linewidth]{architecture_colored.png} }\qquad \subfigure{% \label{fig:auc}% \includegraphics[width=0.3\linewidth]{roc_two_models.png} } } \end{figure} \section{Methods} All cine-MRI series used in this work are sagittal abdominal series acquired at a single center in the Netherlands and were annotated by an experienced radiologist. Patients were scanned because of clinical suspicion of adhesions. The total number of series is 104, taken from the scans of 63 patients. Each series has a dimensionality of $30\times256\times192$ ($T\times H\times W$), with a time between each frame of 0.4 seconds. The baseline architecture, referred to as ResNet-18-inspexp, is a ResNet-18 that receives a single frame pair (two time points) as 2-channel input \cite{he_deep_2016}. These frames are pre-selected such that the difference in abdominal position is largest. In this model, temporal information is used by choosing the two most relevant time points of the series. We also experimented with taking consecutive frame pairs, but that was inferior to the method described above. These results are excluded here for brevity. The proposed architecture (ResNet-18-ConvGRU) draws inspiration from recent work on video processing with recurrent networks, using a GRU-based architecture with convolutional instead of fully connected layers \cite{ballas_delving_2016, zhu_faster_2019}. The ResNet-18-inspexp model is used as a pre-trained encoder, stripping away its fully connected layers. The resulting low-dimensional activation maps are fed to a ConvGRU model, a recurrent neural network that can efficiently model spatio-temporal data, as illustrated in \figureref{fig:architecture}. This allows ResNet-18-ConvGRU to model full temporal data, as opposed to the two-time-point ResNet baseline.
Model performance is evaluated using 5-fold cross-validation, with unique patients in each fold. The performance metric AUROC is obtained over the full dataset, by aggregating the validation predictions of each fold. 95\% confidence intervals are estimated with bootstrapping. P-values for the difference in AUROC between models are estimated using a permutation test. \section{Results} ResNet-18-ConvGRU performs significantly better ($p=0.002$) than ResNet-18-inspexp (see \figureref{fig:auc}), with an AUROC (95\% CI) of 0.83 (0.70-0.93), as opposed to 0.74 (0.60-0.87). \section{Discussion} A lightweight architecture based on a ConvGRU model outperforms a handcrafted two-time-point temporal classification approach. By adding only about 5\% of the weights of the baseline classifier, it can exploit the full temporal dimension, as shown by a substantial increase in performance. Based on a currently running observer study (results to be published), the observed model performance (AUROC 0.83) is comparable to that of a radiologist with moderate experience. The method seems promising to aid radiologists with the detection of adhesions on cine-MRI. Other work using ConvGRU in a similar manner, e.g. \cite{zhu_faster_2019}, typically uses high-resolution video input and larger 3D models as encoders. We show that a small ResNet-18 encoder suffices in the case of low-resolution input, possibly allowing for computationally tractable end-to-end learning. With enough compute, this approach may also be a viable way to process high-resolution 4D medical data, using 3D encoders. Generating saliency maps may give more insight into the adhesion localization capacity of the model. It may also be possible to convert the model to a detection model, by regressing the ConvGRU output on bounding box parameters instead of a binary label.
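To make the evaluation protocol concrete, the following sketch computes the AUROC via the Mann-Whitney rank statistic together with a percentile-bootstrap 95\% confidence interval. The labels and scores below are hypothetical toy values, not the study data, and the helper names are our own.

```python
import random

def auroc(labels, scores):
    """AUROC via the Mann-Whitney rank statistic (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auroc_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUROC, resampling cases with replacement."""
    rng = random.Random(seed)
    n, stats = len(labels), []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # a resample must contain both classes
            stats.append(auroc(ys, [scores[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical per-series predictions (1 = adhesion present):
labels = [0, 0, 0, 1, 1, 1]
scores = [0.2, 0.4, 0.6, 0.5, 0.7, 0.9]
point = auroc(labels, scores)
lo, hi = bootstrap_auroc_ci(labels, scores)
```

The rank-based formulation avoids explicit ROC-curve construction and handles tied scores; with the tiny toy sample above, the bootstrap interval is naturally very wide.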
\section{From Proofs of Paths to Proofs of Programs} \label{sec:alg} In this section, we describe how \textsc{SplInter}\xspace constructs a proof of correctness (i.e., unreachability of the error location $v_{\rm e}$) of a program $\mathcal{P} = \tuple{V,E,v_{\rm i},v_{\rm e}}$ from proofs of individual paths. We note that our algorithm is an extension of \textsc{Impact}\xspace~\cite{McMillan2006} to \textsf{RSep}\xspace proofs; we refer the reader to~\cite{McMillan2006} for optimizations and heuristics. The main difference is the procedure \textsf{ProvePath}, which constructs an \textsf{RSep}\xspace proof for a given program path. \begin{figure}[t] \centering \begin{algorithmic}[1] \Function{SplInter}{$\mathcal{P}$} \State $S \gets \emptyset$ \Loop \label{l:loop} \State $\pi \gets \textsf{IsProof}(S)$ \If{$\pi$ is empty} // \emph{$S$ is a proof} \State \Return found proof $S$ \Else ~~ // \emph{$\pi = v_{\rm i},\ldots,v_{\rm e}$ in $\mathcal{P}$ does not appear in $S$} \State $\kappa \gets \textsf{ProvePath}(\pi)$ \If{$\kappa$ is empty} ~~ // \emph{No proof computed for $\pi$} \State \Return found erroneous execution $\pi$ \EndIf \State $S' \gets \emptyset$ \For {each $\kappa' \in S$} \State $(\kappa,\kappa') \gets \textsf{Conj}\xspace(\kappa,\kappa')$ \State $S' \gets S' \cup \{ \kappa' \}$ \EndFor \State $S \gets S' \cup \{\kappa\}$ \EndIf \EndLoop \EndFunction \end{algorithmic} \caption{Main Algorithm of \textsc{SplInter}\xspace.} \label{alg:main} \end{figure} \paragraph{The Main Algorithm} Figure~\ref{alg:main} shows the main algorithm of \textsc{SplInter}\xspace. Elements of the set $S$ are program paths from $v_{\rm i}$ to $v_{\rm e}$, annotated with $\textsf{RSep}\xspace$ formulas. 
For example, $$(a_1,v_1),(a_2,v_2),\ldots,(a_n,v_n)$$ is an annotated path where (1) $\{a_j\}_j$ are $\textsf{RSep}\xspace$ formulas; (2) $v_1 = v_{\rm i}$ and $v_n = v_{\rm e}$; (3) for $j \in [1,n-1]$, $(v_j,v_{j+1}) \in E$; (4) for each edge $e = (v_j,v_{j+1})$, $\{a_j\}\;e^{\rm c}\;\{a_{j+1}\}$ is valid; and (5) $a_n$ is $\emph{false}$ (since we are interested in proving unreachability of $v_{\rm e}$). \textsc{SplInter}\xspace uses the procedure $\textsf{IsProof}$ to check if the set of annotated paths $S$ represents a proof of correctness of the whole program. If not, $\textsf{IsProof}$ samples a new program path ($\pi$) that does not appear in $S$. Using the procedure $\textsf{ProvePath}$, it tries to construct an annotation/proof (using spatial($\mathcal{T}$) interpolants) of $\pi$. If no proof is constructed, \textsc{SplInter}\xspace concludes that $\pi$ is a feasible execution to $v_{\rm e}$ (or it performs unsafe memory operations). Otherwise, it uses the procedure $\textsf{Conj}\xspace$ to strengthen the proofs of all program paths in $S$, adds the annotated path to $S$, and restarts the loop. Note that the annotated paths in $S$ represent an \emph{Abstract Reachability Tree} (ART), as used in~\cite{McMillan2006} as well as other software model checking algorithms. The tree in this case is rooted at $v_{\rm i}$, and branching represents when two paths diverge. We will now describe \textsc{SplInter}\xspace's components in more detail. \paragraph{\textsf{IsProof}: Checking Proofs} Given a set of annotated paths $S$, we use the procedure $\textsf{IsProof}(S)$ to check if the annotations represent a proof of the whole program. Specifically, for each $v \in V$, we use $I(v)$ to denote the formula $\bigvee\{a_j \mid (a_j,v) \in \kappa, \kappa \in S\}$. In other words, for each location $v$ in $\mathcal{P}$, we \emph{hypothesize} an inductive invariant $I(v)$ from our current annotations of paths passing through $v$. 
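To make the construction of the hypothesized invariant $I(v)$ concrete, here is a small Python sketch of the disjunction step performed by $\textsf{IsProof}$. Formulas are opaque strings here, whereas \textsc{SplInter}\xspace manipulates \textsf{RSep}\xspace formulas; the representation is purely illustrative.

```python
# Toy sketch: each annotated path is a list of (formula, location) pairs,
# as in the annotated-path example above. Formulas are opaque strings.
def hypothesize_invariants(S):
    """For each location v, hypothesize I(v) as the disjunction of all
    annotations a_j with (a_j, v) appearing on some annotated path in S."""
    by_loc = {}
    for kappa in S:
        for a, v in kappa:
            by_loc.setdefault(v, set()).add(a)
    return {v: r" \/ ".join(sorted(fs)) for v, fs in by_loc.items()}

# Two annotated paths from v_i to v_e (both annotated with false at v_e):
S = [
    [("x = null", "vi"), ("false", "ve")],
    [("i > 0", "vi"), ("false", "ve")],
]
inv = hypothesize_invariants(S)
```

If the hypothesized $I$ is inductive, the program is safe, since $I(v_{\rm e})$ is the disjunction of \emph{false} annotations and is therefore \emph{false}.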
We can then check if our hypotheses indeed form an inductive invariant of the program. If so, the program is safe: since $I(v_{\rm e})$ is always $\emph{false}$ (by definition), the fact that $I$ is an invariant implies that the post-state of any execution which reaches $v_{\rm e}$ must satisfy $\emph{false}$, and therefore the error location $v_{\rm e}$ is unreachable. Otherwise, $\textsf{IsProof}$ returns a new program path on which our hypothesized invariant does not hold, which we use to refine our hypotheses. In practice, one can perform $\textsf{IsProof}$ implicitly and lazily by maintaining a \emph{covering} (entailment) relation~\cite{McMillan2006} over the nodes of the ART. \begin{figure}[t] \centering \begin{algorithmic}[1] \Function{\sf{ProvePath}}{$\pi = v_1,\ldots,v_n$} \Statex // \emph{Check if path is feasible (can reach error location $v_{\rm e}$)}. \State let $k$ be largest integer s.t. $\textsf{exec}(v_1,\ldots,v_k)$ is defined. \label{line:err1} \State $\emph{symbHeap} \gets \textsf{exec}(v_1,\ldots,v_k)$ \Statex \State // \emph{If $\pi$ is memory-infeasible,} \State // \emph{only compute spatial interpolants,} \State // \emph{as theory interpolants are not needed} \If {$S_k \models \emph{false}$, where $\emph{symbHeap} = \ldots,(S_k,v_k)$} \label{line:mem} \State let $k' \leq k$ be the largest integer s.t. $S_{k'} \not\models \emph{false}$ \State $\emph{spInt} \gets \textsf{Spatial}((v_1,\ldots,v_{k'}), S_{k'})$ \State\Return $\emph{spInt},(\emph{false},v_{k'+1}),\ldots,(\emph{false},v_n)$ \EndIf \label{line:mem2} \State $\emph{refSymbHeap} \gets \textsf{Refine}(\emph{symbHeap},\emph{false})$ \If {$k = 0$ or \emph{refSymbHeap} is undefined} \State\Return no proof found \label{line:err2} \EndIf \\ \Statex // \emph{Path infeasible -- construct proof}. 
\State $\emph{spInt} \gets \textsf{Spatial}((v_1,\ldots,v_k),\emph{true}:\textsf{true})$ \label{line:pf1} \State $\emph{spTheoryInt} \gets \textsf{Refine}(\emph{spInt}, \emph{false})$ \If {\emph{spTheoryInt} is undefined} \State\Return $\emph{refSymbHeap},(\emph{false},v_{k+1}),\ldots,(\emph{false},v_n)$ \label{line:sym} \EndIf \State\Return $\emph{spTheoryInt},(\emph{false},v_{k+1}),\ldots,(\emph{false},v_n)$ \label{line:pf2} \EndFunction \end{algorithmic} \caption{Pseudocode for \textsf{ProvePath}.} \label{alg:rpath} \end{figure} \paragraph{\textsf{ProvePath}: Constructing a Proof of a Path} Figure~\ref{alg:rpath} shows an algorithm for computing a proof of a path $\pi$. First, in lines~\ref{line:err1}-\ref{line:err2}, we check if $\pi$ is feasible (i.e., whether $\pi$ corresponds to a real program execution). We do this by computing the strongest postconditions along $\pi$ (using $\textsf{exec}$) and then attempting to strengthen the annotation (using \textsf{Refine}) with theory interpolants to prove that \textit{false} is a postcondition of $\pi$. If no such strengthening is found, we know that $\pi$ is feasible (by Theorem~\ref{thm:refine_complete}). Note that if $\pi$ is memory-infeasible (lines~\ref{line:mem}-\ref{line:mem2}), then we only compute spatial interpolants along the memory-feasible prefix of the path and return the result. This is because when the path is memory-infeasible, we do not need data refinements along the path to prove it cannot reach $v_{\rm e}$. The function $\textsf{Spatial}((v_1,\ldots,v_n),P)$ takes a program path $\pi = v_1,\ldots,v_n$ and a \textsf{Sep}\xspace formula $P$ and returns the path annotated with spatial interpolants w.r.t the postcondition $P$. 
The function $\textsf{Refine}(\kappa,\varphi)$ takes an annotated program path $\kappa$ with $\textsf{Sep}\xspace$ formulas (from spatial interpolants) and a $\varphi \in \textsf{DFormula}$ and returns a refined annotation of $\kappa$ that proves the postcondition $\varphi$ (using theory interpolants). If the path $\pi$ is infeasible, we proceed by constructing spatial($\mathcal{T}$) interpolants for it (lines~\ref{line:pf1}-\ref{line:pf2}). We use the function \textsf{Spatial} (Section~\ref{sec:spint}) to construct spatial interpolants, which we then refine with theory interpolants using the function \textsf{Refine} (Section~\ref{sec:snint}). Spatial path interpolants are computed with respect to the postcondition $\emph{true} : \textsf{true}$, indicating that we are looking for a memory safety proof. Note that we might not be able to find theory interpolants if the spatial interpolants computed are too weak and \emph{hide important data elements} (in which case, on line~\ref{line:sym}, we use the result of $\textsf{exec}$ as the spatial interpolants -- the strongest possible spatial interpolants). To illustrate how this could happen, consider Figure~\ref{code:ref}: a modification to our illustrative example from Figure~\ref{code:ex}. \begin{figure} \centering \begin{minipage}{4.9cm} \begin{lstlisting} node* x = null; int i = 2; while (i > 0) node* tmp = malloc(node); tmp->N = x; tmp->D = i; x = tmp; i--; i = 1; P: while (x != null) if (isOdd(i)) assert(isOdd(x->D)) x = x->N; i++; \end{lstlisting} \end{minipage} \caption{Refinement Example.} \vspace{0.1in} \label{code:ref} \end{figure} Here, a list of length 2 is constructed. The second loop checks that nodes at odd positions in the linked list have odd data elements. Suppose \textsc{SplInter}\xspace samples the path that goes through the first loop twice and enters the second loop arriving at the assertion. 
Our spatial interpolation procedure will introduce a list segment $\textsf{ls}(x,\textsf{null})$ at location \texttt{P}. As a result, we cannot find a refinement that proves the assertion, since using the list segment predicate definition from Section~\ref{sec:prelims} we cannot specify that the first element is odd and the second is even. This is because refinements must apply to all elements of $\textsf{ls}$, and cannot mention specific positions. In this case, we use the symbolic heaps as our spatial interpolants. That is, we annotate location \texttt{P} with $x\mapsto [d_1',e'] * e' \mapsto [d_2',\textsf{null}]$. Consequently, theory interpolants are able to refine this by specifying that $d_1'$ is odd. \paragraph{\textsf{Conj}\xspace: Conjoining Proofs of Paths} When a new annotated path $\kappa$ is computed, we strengthen the proofs of all annotated paths $\kappa'$ in $S$ that share a prefix with $\kappa$ using an operation $\textsf{Conj}\xspace(\kappa,\kappa')$, defined in the following. (This is analogous to strengthening annotations of a path in an ART -- all other paths sharing a prefix with the strengthened path also get strengthened.) Let $\kappa = (a_1,v_1),\ldots,(a_n,v_n)$ and $\kappa' = (a_1',v_1'),\ldots,(a_{m}',v_{m}')$. Let $k$ be the largest integer such that for all $j \leq k$, $v_j = v_j'$ (i.e., $k$ represents the longest shared prefix of $\kappa$ and $\kappa'$). 
$\textsf{Conj}\xspace$ returns a pair $(\ol{\kappa},\ol{\kappa}')$ consisting of the strengthened annotations: \begin{align*} \ol{\kappa} &\gets (a_1 \land a_1', v_1),\ldots,(a_k \land a_k', v_k), (a_{k+1},v_{k+1}),\ldots,(a_n,v_n)\\ \ol{\kappa}' &\gets (a_1 \land a_1', v_1),\ldots,(a_k \land a_k', v_k), (a_{k+1}',v_{k+1}'),\ldots,(a_{m}',v_{m}') \end{align*} The issue here is that $\textsf{RSep}\xspace$ is not closed under logical conjunction ($\land$), since we do not allow logical conjunction of spatial conjunctions, e.g., $(\textsf{ls}(x,y)*\textsf{true}) \land (\textsf{ls}(y,z)*\textsf{true})$. In practice, we heuristically under-approximate the logical conjunction, using the strongest postcondition of the shared prefix to guide the under-approximation. Any under-approximation which over-approximates the strongest postcondition (including the strongest postcondition itself) is sound, but overly strong annotations may not generalize to a proof of the whole program. Note that the above transformation maintains the invariant that all paths in $S$ are annotated with valid Hoare triples. \section{Proofs} In this section, we present proof sketches for the theorems appearing in this paper. \subsection{Proof of Theorem~\ref{thm:spint_sound}} \begin{theorem} Let $S$ and $I'$ be \textsf{RSep}\xspace formulas and let $c$ be a command such that $\textsf{exec}(c,S) \models I'$. Then \begin{enumerate} \item[(I)] $S \models \interp{S}{c}{I'}$ \item[(II)] $\hoare{\interp{S}{c}{I'}}{c}{I'}$ \end{enumerate} \end{theorem} \begin{proof} We prove one case to give intuition on why the theorem holds. Suppose \texttt{c} is an allocation statement \texttt{x := new($n$,$m$)}. Recall that we defined \[ \interp{S}{c}{I'} = \qex{x}{A} \] where \[\abduce{\textsf{exec}(c,S)}{\emptyset}{\qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}}{I'}\] Define \[ S' = \textsf{exec}(c,S) = S[x'/x] * x \mapsto [\vec{a},\vec{z}] \] for $x',\vec{a},\vec{z}$ fresh. First we show (I). 
By the properties of bounded abduction, we have \[ S' = S[x'/x] * x \mapsto [\vec{a},\vec{z}] \models A * (\qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}) \] from which we can see that \[ S[x'/x] \models A \models \qex{x}{A} \] and thus \[ S \models \qex{x}{A} = \interp{S}{c}{I'} \] Next we show (II). We compute \begin{align*} \textsf{exec}(c,\qex{x}{A}) &= \qex{x',\vec{a},\vec{z}}{(\qex{x}{A})[x'/x] * x \mapsto [\vec{a},\vec{z}]}\\ & \hspace*{10pt}\hfill \emph{where $x',\vec{a},\vec{z}$ are fresh.}\\ & \equiv \qex{x',\vec{a},\vec{z}}{(\qex{x}{A}) * x \mapsto [\vec{a},\vec{z}]}\\ & \equiv (\qex{x}{A}) * (\qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}) \end{align*} Since $S'$ is of the form $S[x'/x] * x \mapsto [\vec{a},\vec{z}]$ and $S' \models A * \textsf{true}$, the only place where $x$ may appear in $A$ is in a disequality with some other allocated variable. It follows that \[ (\qex{x}{A}) * (\qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}) \equiv A * (\qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}) \] By the properties of bounded abduction, we have \[ A * (\qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}) \models I' \] and thus \[ \textsf{exec}(c,\qex{x}{A}) \models I' \] and finally \[ \hoare{\interp{S}{c}{I'}}{c}{I'} \] \end{proof} \subsection{Proof of Theorem~\ref{thm:refine_sound}} \begin{theorem}[Soundness] Suppose that $\pi$ is a path and that $\zeta$ is a proof of the judgement $\cjudge{\hoare{\Pi:\Sigma}{$\pi$}{\Pi':\Sigma'}}{\mathcal{C}}$, and that $\sigma$ is a solution to $\mathcal{C}$. Then $\zeta^\sigma$, the proof obtained by applying the substitution $\sigma$ to $\zeta$, is a (refined) separation logic proof of \[ \hoare{(\Pi:\Sigma)^\sigma}{$\pi$}{(\Pi':\Sigma')^\sigma}\ .\] \end{theorem} \begin{proof} The proof proceeds by induction on $\zeta$. We will give an illustrative example using an entailment judgement.
Suppose that $\zeta$ is an entailment proof consisting of a single application of the \textsc{Predicate} rule: \begin{mathpar} \inferrule*[lab=Predicate]{ \Pi \models \Pi' \\ }{ \Phi' \gets \Phi; \Psi_1' \gets \Psi_1 \land \Phi; \dotsi; \Psi_{|\vec{\tau}|}' \gets \Psi_{|\vec{\tau}|} \land \Phi \triangleright\\ \Pi \land \Phi : Z(\vec{\tau},\vec{E}) \vdash \Pi' \land \Phi' : Z(\vec{\tau'},\vec{E}) } \end{mathpar} (where $\tau_i = \lda{\vec{a}_i. \Psi_i}$ and $\tau_i' = \lda{\vec{a}_i. \Psi_i'}$). Suppose that $\sigma$ is a solution to the constraint system \[ \mathcal{C} = \Phi' \gets \Phi; \Psi_1' \gets \Psi_1 \land \Phi; \dotsi; \Psi_{|\vec{\tau}|}' \gets \Psi_{|\vec{\tau}|} \land \Phi \] Since $\sigma$ is a solution to $\mathcal{C}$, we have that \[ \Phi'^\sigma \Leftarrow \Phi^\sigma \text{ and for all $i$, } \Psi_i'^\sigma \Leftarrow \Psi_i^\sigma \land \Phi^\sigma \] and thus (noting that $\Pi \models \Pi'$) \[ \Pi \land \Phi^\sigma \models \Pi' \land \Phi'^\sigma \text{ and for all $i$, } \Psi_i^\sigma \land \Pi \land \Phi^\sigma \models \Psi_i'^\sigma\] It follows that $\zeta^\sigma$, given below, is a valid derivation: \begin{mathpar} \inferrule*[lab=Predicate]{ \Pi \land \Phi^\sigma \models \Pi' \land \Phi'^\sigma \\ \Psi_1^\sigma \land \Pi \land \Phi^\sigma \models \Psi_1'^\sigma\\ \dotsi\\ \Psi_n^\sigma \land \Pi \land \Phi^\sigma \models \Psi_n'^\sigma }{ \Pi \land \Phi^\sigma : Z(\vec{\tau}^\sigma,\vec{E}) \vdash \Pi' \land \Phi'^\sigma : Z(\vec{\tau}'^\sigma,\vec{E}) } \end{mathpar} \end{proof} \subsection{Proof of Theorem~\ref{thm:refine_complete}} \begin{theorem}[Completeness] Suppose that $\pi$ is a memory-safe path and $\zeta$ is the proof of the judgement \[\cjudge{\hoare{R_0(\vec{v}):\textsf{emp}}{$\pi$}{R_1(\vec{v}) : \textsf{true}}}{\mathcal{C}}\] obtained by symbolic execution.
If $\phi$ is a data formula such that $\hoare{\textit{true}:\textsf{emp}}{$\pi$}{\phi : \textsf{true}}$ holds, then there is a solution $\sigma$ to $\mathcal{C}$ such that $R_1^\sigma(\vec{v}) \Rightarrow \phi$. \end{theorem} \begin{proof} Consider that for each formula $\qs{X}{\Pi}{\Sigma}$ in a symbolic execution sequence, $\Sigma$ is a *-conjunction of (finitely many) points-to predicates. The constraints we generate in this situation are the same as the ones that would be generated for a program which does not access the heap (but which has additional variables corresponding to data-typed heap fields). \end{proof} \section{Formalization of \textsf{RSep}\xspace and \textsf{Sep}\xspace} \label{sec:logic} In this section we present the full proof systems for \textsf{RSep}\xspace and \textsf{Sep}\xspace, as well as the full set of constraint generation rules which we described in Section~\ref{sec:snint}. The syntax and semantics of \textsf{RSep}\xspace formulas, in terms of stacks and heaps, are shown in Figures~\ref{fig:logic-full-syntax} and~\ref{fig:logic-full-semantics}. \begin{figure*}[t] \begin{minipage}[b]{0.45\linewidth} \textbf{\emph{Syntax}} \begin{align*} x,y \in \textsf{HVar}& \hspace*{60pt}\text{Heap variables} \\ a,b \in \textsf{DVar}& \hspace*{60pt}\text{Data variables} \\ A \in \textsf{DTerm} & \hspace*{60pt} \text{First-order term that evaluates to value in $\mathds{D}$}\\ \phi \in \textsf{DFormula} & \hspace*{60pt} \text{Data formulas}\\ Z \in \textsf{RPred}\xspace & \hspace*{60pt} \text{Recursive predicates}\\ \theta \in \textsf{Refinement} &::= \lambda \vec{a}. 
\phi\\ X \subseteq \textsf{Var}&::= x \mid a\\ E,F \in \textsf{HTerm}&::= \textsf{null} \mid x\\ \AE &::= A \ \mid E\\ \Pi \in \textsf{Pure}&::= \emph{true} \mid E = E \mid E \neq E \mid \varphi \ \mid \Pi \land \Pi\\ H \in \textsf{Heaplet}&::= \textsf{true} \mid \textsf{emp} \mid E \mapsto [\vec{A},\vec{E}] \mid Z(\vec{\theta},\vec{E})\\ \Sigma \in \textsf{Spatial}&::= H \mid H * \Sigma\\ P \in \textsf{RSep}\xspace&::= \qs{X}{\Pi}{\Sigma} \end{align*} \end{minipage} \caption{Syntax of \textsf{RSep}\xspace formulas.} \label{fig:logic-full-syntax} \end{figure*} \begin{figure*}[t] \begin{minipage}[b]{0.45\linewidth} \textbf{\emph{Semantic Domains}} \begin{align*} \textsf{Var} &= \textsf{HVar} + \textsf{DVar}\\ \textsf{Val} &= \textsf{Loc} + \mathds{D}\\ \textsf{Stack} &= \textsf{Var} \rightarrow \textsf{Val}\\ \textsf{Heap} &= \textsf{Loc} \rightharpoonup_{\textsf{fin}} \textsf{Rec}\\ \textsf{Rec} &= \tuple{\mathds{N} \rightharpoonup_{\textsf{fin}} \mathds{D}, \mathds{N} \rightharpoonup_{\textsf{fin}} \textsf{Loc}}\\ \textsf{State} &= \textsf{Stack} \times \textsf{Heap} \end{align*} \end{minipage} \begin{minipage}[b]{0.45\linewidth} ~\\ \textbf{\emph{Satisfaction Semantics}} \begin{align*} s,h \models E = F &\iff \sem{E}(s) = \sem{F}(s)\\ s,h \models E \neq F &\iff \sem{E}(s) \neq \sem{F}(s)\\ s,h \models \varphi &\iff \sem{\varphi}(s)\\ s,h \models \Pi_1 \land \Pi_2 &\iff (s,h \models \Pi_1) \text{ and } (s,h \models \Pi_2)\\ s,h \models Z(\vec{\tau},\vec{E}) &\iff \exists P \in \mathit{cases}(Z(\vec{R},\vec{x})). s,h \models P[\vec{\tau}/\vec{R},\vec{E}/\vec{x}]\\ s,h \models \textsf{emp} &\iff \text{dom}(h) = \emptyset\\ s,h \models E \mapsto [\vec{A}, \vec{F}] & \iff \text{dom}(h) = \{\sem{E}(s)\} \\ &\hspace*{1cm}\text{and } h(\sem{E}(s)) = &\\ &\hspace*{1.25cm} \tuple{\{i \mapsto \sem{A_i}(s) | i \in [1,|\vec{A}|]\}, \{i \mapsto \sem{F_i}(s) | i \in [1,|\vec{F}|]\}}\\ s,h \models \Sigma * \Sigma' &\iff \text{there exists } h_0, h_1 \text{ s.t. 
} h_0 \uplus h_1 = h \\ &\hspace*{1cm} \text{and } (s,h_0 \models \Sigma) \text{ and } (s,h_1 \models \Sigma')\\ s,h \models \Pi \colon \Sigma &\iff (s,h \models \Pi) \text{ and } (s,h \models \Sigma)\\ s,h \models \qs{X}{\Pi}{\Sigma} &\iff \text{there exists } \overline{s} : X \rightarrow \textsf{Val} \text{ s.t. } s \oplus \overline{s},h \models \Pi:\Sigma \end{align*} \end{minipage} ~\\ Note that we model records (\textsf{Rec}) as two finite maps representing data fields and heap fields.\\ We use $\uplus$ to denote union of functions with disjoint domains, and $\oplus$ to denote overriding union of functions. \caption{Stack/heap semantics of \textsf{RSep}\xspace formulas.} \label{fig:logic-full-semantics} \end{figure*} \begin{figure*}[t] \footnotesize \figsep{Entailment rules} \begin{mathpar} \inferrule*[lab=Empty]{\Pi \models \Pi'}{ \Pi : \textsf{emp} \vdash \Pi' : \textsf{emp}} \inferrule[$\exists$-left]{ P[x'/x] \vdash Q }{ \qex{x}{P} \vdash Q } \inferrule[$\exists$-right]{ P \vdash Q[\mbox{\AE}/x] }{ P \vdash \qex{x}{Q} } \inferrule*[lab=Predicate,right={\rm\begin{minipage}{2.85cm} Where $\tau_i = \lda{\vec{a}_i. \phi_i}$\\ and $\tau_i' = \lda{\vec{a}_i.
\phi_i'}$ \end{minipage}}]{ \Pi \models \Pi' \\ \phi_1 \land \Pi \models \phi_1'\\ \dotsi\\ \phi_n \land \Pi \models \phi_n' }{ \Pi : Z(\vec{\tau},\vec{E}) \vdash \Pi' : Z(\vec{\tau}',\vec{E}) } \inferrule*[lab=True]{\Pi \models \Pi'}{ \Pi : \Sigma \vdash \Pi' : \textsf{true} } \inferrule[Points-to]{\Pi \models \Pi'}{ \Pi : E \mapsto [\vec{A},\vec{F}] \vdash \Pi' : E \mapsto [\vec{A},\vec{F}] } \inferrule[Star]{ \Pi : \Sigma_0 \vdash \Pi' : \Sigma_0' \\ \Pi : \Sigma_1 \vdash \Pi' : \Sigma_1' }{ \Pi : \Sigma_0 * \Sigma_1 \vdash \Pi' : \Sigma_0' * \Sigma_1' } \inferrule[Substitution]{ \Pi[E/x] : \Sigma[E/x] \vdash \Pi'[E/x] : \Sigma'[E/x] \\ \Pi \models x = E }{ \Pi : \Sigma \vdash \Pi' : \Sigma' } \inferrule[\textsf{null}-not-Lval]{ \Pi \land E \neq \textsf{null} : \Sigma * E \mapsto [\vec{A},\vec{F}] \vdash \Pi' : \Sigma' }{ \Pi : \Sigma * E \mapsto [\vec{A},\vec{F}] \vdash \Pi' : \Sigma' } \inferrule[*-Partial]{ \Pi \land E \neq F : \Sigma * E \mapsto [\vec{A},\vec{E}] * F \mapsto [\vec{B},\vec{F}] \vdash \Pi' : \Sigma' }{ \Pi : \Sigma * E \mapsto [\vec{A},\vec{E}] * F \mapsto [\vec{B},\vec{F}] \vdash \Pi' : \Sigma' } \inferrule*[lab=Fold,right={\rm $P \in \mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * P[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] }{ \Pi : \Sigma \vdash \Pi' : \Sigma' * Z(\vec{\tau},\vec{E}) } \inferrule*[lab=Unfold,right={\rm $\{P_1,\dotsc,P_n\} = \mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \Pi : \Sigma * P_1[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] \vdash \Pi' : \Sigma'\\ \dotsi\\ \Pi : \Sigma * P_n[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] \vdash \Pi' : \Sigma' }{ \Pi : \Sigma * Z(\vec{\tau},\vec{E}) \vdash \Pi' : \Sigma' } \end{mathpar} \caption{\textsf{RSep}\xspace Proof System} \label{fig:qsep-ent} \end{figure*} \begin{figure*}[t] \figsep{Execution rules} \begin{mathpar} \inferrule[Assign]{ }{ \hoare{\Pi \colon \Sigma} {x := \AE} {\qex{x'}{\Pi[x'/x] \land x = \mbox{\AE}[x'/x] : \Sigma[x'/x]}} } \inferrule[Assume]{ }{ \hoare{\Pi \colon 
\Sigma}{assume($\Pi'$)}{\Pi \land \Pi' : \Sigma} } \inferrule[Sequence]{ \hoare{P}{$\pi_0$}{O}\\ \hoare{O}{$\pi_1$}{Q} }{ \hoare{P}{$\pi_0;\pi_1$}{Q} } \inferrule[Data-Store]{ \Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}] }{ \hoare{\Pi \colon \Sigma} {x->D$_i$ := A} {\Pi' : \Sigma' * x \mapsto [\vec{A}[A/A_i],\vec{E}]} } \inferrule[Heap-Store]{ \Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}] }{ \hoare{\Pi \colon \Sigma}{x->N$_i$ := E}{\Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}[E/E_i]]} } \inferrule[Data-Load]{ \Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}] }{ \hoare{\Pi \colon \Sigma} {y := x->D$_i$} {\qs{y'} {\Pi'[y'/y] \land y = A_i[y'/y]} { (\Sigma' * x \mapsto [\vec{A},\vec{E}])[y'/y]}} } \inferrule[Free]{ \Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}] }{ \hoare{\Pi \colon \Sigma}{free(x)}{\Pi' : \Sigma'} } \inferrule[Heap-Load]{ \Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}] }{ \hoare{\Pi \colon \Sigma} {y := x->N$_i$} {\qs{y'} {\Pi'[y'/y] \land y = E_i[y'/y]} { (\Sigma' * x \mapsto [\vec{A},\vec{E}])[y'/y]}} } \inferrule[Consequence]{ P' \vdash P\\ \hoare{P}{c}{Q}\\ Q \vdash Q' }{ \hoare{P'}{c}{Q'} } \inferrule[Alloc]{ }{ \hoare{\Pi \colon \Sigma} {x := new(n,m)} {\qs{x',\vec{a},\vec{y}}{\Pi[x'/x] }{ \Sigma[x'/x] * x \mapsto [\vec{a},\vec{y}]}} } \inferrule*[lab=Exists,right={\rm $x \notin \textsf{Var}(Q) \cup \textsf{Var}(c)$}]{ \hoare{P}{c}{Q} }{ \hoare{\qex{x}{P}}{c}{Q} } \end{mathpar} \caption{\textsf{RSep}\xspace proof system.} \label{fig:qsep-exec} \end{figure*} \begin{figure*}[t] \begin{mathpar} \small \inferrule*[lab=Predicate]{ \Pi \models \Pi' }{ \Pi : Z(\vec{E}) \vdash \Pi' : Z(\vec{E}) } \inferrule*[lab=Fold,right={\rm $P \in \mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * P[\vec{E}/\vec{x}] }{ \Pi : \Sigma \vdash \Pi' : \Sigma' * Z(\vec{E}) } \inferrule*[lab=Unfold,right={\rm $\{P_1,\dotsc,P_n\} = 
\mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \Pi : \Sigma * P_1[\vec{E}/\vec{x}] \vdash \Pi' : \Sigma'\\\\ \dotsi\\\\ \Pi : \Sigma * P_n[\vec{E}/\vec{x}] \vdash \Pi' : \Sigma' }{ \Pi : \Sigma * Z(\vec{E}) \vdash \Pi' : \Sigma' } \end{mathpar} \caption{\textsf{Sep}\xspace proof system. All other entailment and execution rules are as in Figure~\ref{fig:qsep-ent}.} \label{fig:symb} \end{figure*} \begin{figure*}[t] \figsep{Entailment rules} \vspace{-.15in} \begin{mathpar} \footnotesize \inferrule[Empty]{ \Pi \models \Pi' }{ \cjudge{\Pi \land \Phi : \textsf{emp} \vdash \Pi' \land \Phi' : \textsf{emp}} {\Phi' \gets \Phi} } \inferrule[True]{\Pi \models \Pi'}{ \cjudge{\Pi \land \Phi : \Sigma \vdash \Pi' \land \Phi' : \textsf{true}} {\Phi' \gets \Phi} } \inferrule[Inconsistent]{ \Pi \models \textit{false} }{ \cjudge{\Pi \land \Phi : \Sigma \vdash \Pi' \land \Phi' : \Sigma'} { [] } } \inferrule[$\exists$-left]{ \cjudge{P[x'/x] \vdash Q}{\mathcal{C}} }{ \cjudge{\qex{x}{P} \vdash Q}{\mathcal{C}} } \inferrule[Substitution]{ \cjudge{\Pi[E/x] \land \Phi : \Sigma[E/x] \vdash \Pi'[E/x] \land \Phi' : \Sigma'[E/x]} {\mathcal{C}}\\ \Pi \models x = E }{ \cjudge{\Pi \land \Phi : \Sigma \vdash \Pi' \land \Phi' : \Sigma'} {\mathcal{C}} } \inferrule[\textsf{null}-not-Lval]{ \cjudge{\Pi \land \Phi \land E \neq \textsf{null} : \Sigma * E \mapsto [\vec{A},\vec{F}] \vdash \Pi' \land \Phi' : \Sigma'} {\mathcal{C}} }{ \cjudge{\Pi \land \Phi : \Sigma * E \mapsto [\vec{A},\vec{F}] \vdash \Pi' \land \Phi' : \Sigma'} {\mathcal{C}} } \inferrule[$\exists$-right]{ \cjudge{P \vdash Q[\mbox{\AE}/x]}{\mathcal{C}} }{ \cjudge{P \vdash \qex{x}{Q}}{\mathcal{C}} } \inferrule[*-Partial]{ \cjudge{\Pi \land E \neq F \land \Phi : \Sigma * E \mapsto [\vec{A},\vec{E}] * F \mapsto [\vec{B},\vec{F}] \vdash \Pi' \land \Phi' : \Sigma'} {\mathcal{C}} }{ \cjudge{\Pi \land \Phi : \Sigma * E \mapsto [\vec{A},\vec{E}] * F \mapsto [\vec{B},\vec{F}] \vdash \Pi' \land \Phi' : \Sigma'} {\mathcal{C}} } \inferrule[Star]{ \cjudge{\Pi 
\land \Phi : \Sigma_0 \vdash \Pi' \land \Phi' : \Sigma_0'} { \mathcal{C}_0} \\ \cjudge{\Pi \land \Phi : \Sigma_1 \vdash \Pi' \land \Phi' : \Sigma_1'} { \mathcal{C}_1 } }{ \cjudge{\Pi \land \Phi : \Sigma_0 * \Sigma_1 \vdash \Pi' \land \Phi' : \Sigma_0' * \Sigma_1'} {\mathcal{C}_0;\mathcal{C}_1} } \inferrule[Points-to]{ \Pi \models \Pi' } { \cjudge{\Pi \land \Phi : E \mapsto [\vec{A},\vec{F}] \vdash \Pi' \land \Phi' : E \mapsto [\vec{A},\vec{F}]} { \Phi' \gets \Phi } } \inferrule*[lab=Fold,right={\rm $P \in \mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \cjudge{\Pi : \Sigma \vdash \Pi' : \Sigma' * P[\vec{\tau}/\vec{R},\vec{E}/\vec{x}]} {\mathcal{C}} }{ \cjudge{\Pi : \Sigma \vdash \Pi' : \Sigma' * Z(\vec{\tau},\vec{E}) } {\mathcal{C}} } \inferrule*[lab=Unfold,right={\rm $\{P_1,\dotsc,P_n\} = \mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \cjudge{\Pi : \Sigma * P_1[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] \vdash \Pi' : \Sigma' } {\mathcal{C}_1}\\ \dotsi\\ \cjudge{\Pi : \Sigma * P_n[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] \vdash \Pi' : \Sigma'} {\mathcal{C}_n} }{ \cjudge{\Pi : \Sigma * Z(\vec{\tau},\vec{E}) \vdash \Pi' : \Sigma'} {\mathcal{C}_1; \dotsc{;}\, \mathcal{C}_n} } \inferrule*[lab=Predicate,right={\rm\begin{minipage}{2.85cm} Where $\tau_i = \lda{\vec{a}_i. \Psi_i}$\\ and $\tau_i' = \lda{\vec{a}_i. 
\Psi_i'}$ \end{minipage}}]{ \Pi \models \Pi' \\ }{ \Phi' \gets \Phi; \Psi_1' \gets \Psi_1 \land \Phi; \dotsc{;}\, \Psi_{|\vec{\tau}|}' \gets \Psi_{|\vec{\tau}|} \land \Phi \triangleright \Pi \land \Phi : Z(\vec{\tau},\vec{E}) \vdash \Pi' \land \Phi' : Z(\vec{\tau'},\vec{E}) } \end{mathpar} \caption{Constraint Generation: Entailment Rules} \label{fig:constraint-ent} \end{figure*} \begin{figure*}[t] \figsep{Execution rules} \vspace{-.15in} \begin{mathpar} \footnotesize \inferrule[Data-Assume]{ \cjudge{P \land \phi \vdash Q} {\mathcal{C}} }{ \cjudge{\hoare{P} {assume($\phi$)} {Q}} {\mathcal{C}} } \inferrule[Free]{ \cjudge{P \vdash \Pi \land \Phi : \Sigma * x \mapsto [\vec{A},\vec{E}]} {\mathcal{C}} }{ \cjudge{\hoare{P}{free(x)}{\Pi \land \Phi : \Sigma}} {\mathcal{C}} } \inferrule[Sequence]{ \cjudge{\hoare{P} {$\pi_0$} {\widehat{O}}} {\mathcal{C}_0}\\ \cjudge{\hoare{\widehat{O}} {$\pi_1$} {Q}} {\mathcal{C}_1} }{ \cjudge{\hoare{P} {$\pi_0;\pi_1$} {Q}} {\mathcal{C}_0;\mathcal{C}_1} } \inferrule[Data-Load]{ \cjudge{P \vdash \qex{X}{\Pi \land \widehat{\Phi} : \widehat{\Sigma} * x \mapsto [\vec{A},\vec{E}]}} {\mathcal{C}_0}\\\\ \cjudge{\qex{X,a'}{\Pi[a'/a] \land \widehat{\Phi}[a'/a] \land a = A_i[a'/a] \colon (\widehat{\Sigma} * x \mapsto [\vec{A},\vec{E}])[a'/a]} \vdash Q} {\mathcal{C}_1} }{ \cjudge{\hoare{P} {a := x->D$_i$} {Q}} {\mathcal{C}_0; \mathcal{C}_1 } } \inferrule[Data-Assign]{ \cjudge{\qex{a'}{\Pi \land \Phi[a'/a] \land a = A[a'/a] : \Sigma[a'/a]} \vdash Q} {\mathcal{C}} }{ \cjudge{\hoare{\Pi \land \Phi \colon \Sigma}{a := A}{Q}} { \mathcal{C} } } \inferrule[Data-Store]{ \cjudge{P \vdash \qex{X}{\Pi \land \widehat{\Phi} : \widehat{\Sigma} * x \mapsto [\vec{A},\vec{E}]}} {\mathcal{C}_0}\\\\ \cjudge{\qex{X,a'}{\Pi \land \widehat{\Phi} \land a' = A : \widehat{\Sigma} * x \mapsto [\vec{A}[a'/A_i],\vec{E}]} \vdash Q} {\mathcal{C}_1} }{ \cjudge{\hoare{P}{x->D$_i$ := A}{Q}} {\mathcal{C}_0; \mathcal{C}_1} } \inferrule[Alloc]{ \cjudge{\qex{x',\vec{a},\vec{x}}{\Pi[x'/x]
\land \Phi : \Sigma[x'/x] * x \mapsto [\vec{a},\vec{x}]} \vdash Q} {\mathcal{C}} }{ \cjudge{\hoare{\Pi \land \Phi \colon \Sigma} {x := new($n$,$m$)} {Q}} {\mathcal{C}} } \inferrule[Heap-Store]{ \cjudge{\Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}]} {\mathcal{C}} }{ \cjudge{\hoare{\Pi \colon \Sigma}{x->N$_i$ := E}{\Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}[E/E_i]]}} {\mathcal{C}} } \inferrule[Consequence]{ \cjudge{P' \vdash \widehat{P}}{\mathcal{C}_1}\\ \cjudge{\hoare{\widehat{P}}{c}{\widehat{Q}}}{\mathcal{C}_2}\\ \cjudge{\widehat{Q} \vdash Q'}{\mathcal{C}_3} }{ \cjudge{\hoare{P'}{c}{Q'}}{\mathcal{C}_1;\mathcal{C}_2;\mathcal{C}_3} } \inferrule[Heap-Load]{ \cjudge{\Pi \colon \Sigma \vdash \Pi' : \Sigma' * x \mapsto [\vec{A},\vec{E}]} {\mathcal{C}} }{ \cjudge{\hoare{\Pi \colon \Sigma} {y := x->N$_i$} {\qs{y'} {\Pi'[y'/y] \land y = E_i[y'/y]} { (\Sigma' * x \mapsto [\vec{A},\vec{E}])[y'/y]}}} {\mathcal{C}} } \end{mathpar} \caption{Constraint Generation: Execution Rules} \label{fig:constraint-exec} \end{figure*} \section{Spatial interpolation for assumptions} \label{sec:assum} In the spatial interpolation rules for \texttt{assume} presented in Section~\ref{sec:spint}, we encountered the following problem: we have an equality or disequality assertion $\Pi$, a symbolic heap $S$, and a formula $M$ such that $S \land \Pi \vdash M$, and we need to compute a formula $M'$ such that $S \vdash M'$ and $M' \land \Pi \vdash M$. Moreover, we wish $M'$ to be as weak as possible (i.e., $M'$ should be ``close to $M$'' rather than ``close to $S$''). In this section, we will define a recursive procedure $\textsf{pitp}(S,\Pi,M)$ which takes as input a symbolic heap $S$, an equality or disequality formula $\Pi$, and a $\textsf{Sep}\xspace$ formula $M$ such that $S \land \Pi \vdash M$ and computes a formula $M' = \textsf{pitp}(S,\Pi,M)$ such that $S \vdash M'$ and $M' \land \Pi \vdash M$. 
We will assume that $S = \Pi_S : \Sigma_S$ is saturated in the sense that for any assertion $\Pi_0$, if $S \vdash \Pi_0$ then $\Pi_S \vdash \Pi_0$, and that $S \land \Pi$ is satisfiable. We observe that if $M$ has existentially quantified variables, they can be instantiated using the proof of $S \land \Pi \vdash M$. Thus we may assume that $M$ is quantifier-free, and write $M$ as \[ M = \Pi_M : H_1 * \dotsi * H_n \] The proof of $S \land \Pi \vdash M$ also induces an $n$-colouring on $S$ which colours each points-to predicate in $S$ with the index $i$ of the corresponding heaplet $H_i$ (cf. step 1 of the bounded abduction procedure presented in Section~\ref{sec:abduction}). We may thus write $S$ as follows: \[ S = \Pi_S : \coloured{\Sigma_1}{1} * \dotsi * \coloured{\Sigma_n}{n} \] (such that for each $i$, $\Pi_S \land \Pi : \Sigma_i \vdash \Pi_M : H_i$). We will compute a pure formula $\Pi_M'$ such that $\Pi_S \vdash \Pi_M'$ and $\Pi_M' \land \Pi \vdash \Pi_M$, and for each $i$ we will compute a formula $P_i$ such that $\Pi_S : \Sigma_i \vdash P_i$ and $P_i \land \Pi \vdash H_i$. We then take $\textsf{pitp}(S,\Pi,M)$ to be $\Pi_M' : P_1 * \dotsi * P_n$. First, we show how to compute $\Pi_M'$. Note that since $S$ is saturated, the fact that $S \land \Pi \vdash M$ implies $\Pi_S \land \Pi \vdash \Pi_M$. We will assume that $\Pi_M$ consists of a single equality or disequality: the procedure can be extended to an arbitrary conjunction by applying it separately for each conjunct and conjoining the results. If $\Pi_S \vdash \Pi_M$ then we simply take $\Pi_M'$ to be $\Pi_M$. Otherwise, assume $w,x,y,z$ are such that $\Pi$ is an equality or disequality $w = x$ / $w \neq x$ and $\Pi_M$ is an equality or disequality $y = z$ / $y \neq z$. Since $\Pi_S \land \Pi \vdash \Pi_M$ and $\Pi_S \not\vdash \Pi_M$, there is some $y',z' \in \{w,x\}$ such that $\Pi_S \vdash y = y' \land z = z'$ (to see why, consider each of the four cases for $\Pi$ and $\Pi_M$).
We take $\Pi_M'$ to be $y = y' \land z = z'$. Finally, we show how to compute $P_i$ (for each $i \in [1,n]$). If $\Pi_S : \Sigma_i \vdash H_i$, then we simply take $P_i$ to be $H_i$. Otherwise, suppose that $H_i$ is $Z(\vec{E})$ for some predicate $Z$ and vector of heap terms $\vec{E}$ (the case that $H_i$ is a points-to predicate is essentially a special case). First, we attempt to find a vector of heap terms $\vec{E'}$ such that $\Pi_S : \Sigma_i \vdash Z(\vec{E'})$ and $\Pi_S \land \Pi \vdash E_j = E_j'$ for each $j$ (noting that there are finitely many such $\vec{E'}$ to choose from). If we succeed, we may take $P_i$ to be $\Pi' : Z(\vec{E'})$, where $\Pi' = \textsf{pitp}(S, \Pi, \bigwedge_j E_j = E_j')$. If we fail, then since $\Pi_S \land \Pi : \Sigma_i \vdash Z(\vec{E})$, there is some $Q \in \mathit{cases}(Z(\vec{R},\vec{x}))$ such that $\Pi_S \land \Pi : \Sigma_i \vdash \ul{Q}[\vec{E}/\vec{x}]$. We may take $P_i$ to be $\textsf{pitp}(\Pi_S : \Sigma_i, \Pi, Q[\vec{E}/\vec{x}])$. 
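The pure-formula step above can be made concrete with a small executable sketch. The model below is our own simplification for illustration (not the paper's implementation): a saturated $\Pi_S$ is represented as a set of equalities over variable names closed by a union-find structure, $\Pi$ and $\Pi_M$ are single equalities, and entailment between equalities is a representative lookup.

```python
# Sketch of the pure interpolant computation for the equality/equality
# case: given saturated equalities eqs_S, a single equality pi = (w, x),
# and a goal pi_M = (y, z) with eqs_S /\ pi |- pi_M, return a conjunction
# Pi_M' (a list of equalities) with eqs_S |- Pi_M' and Pi_M' /\ pi |- pi_M.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def entails_eq(uf, a, b):
    """Pi_S |- a = b, with Pi_S represented by the union-find uf."""
    return uf.find(a) == uf.find(b)

def pure_interpolant(eqs_S, pi, pi_M):
    uf = UnionFind()
    for a, b in eqs_S:
        uf.union(a, b)
    (w, x), (y, z) = pi, pi_M
    if entails_eq(uf, y, z):
        return [pi_M]              # Pi_S alone already proves Pi_M
    # Otherwise (by the argument above) some y', z' in {w, x} satisfy
    # Pi_S |- y = y' and Pi_S |- z = z'; we return their conjunction.
    y1 = w if entails_eq(uf, y, w) else x
    z1 = w if entails_eq(uf, z, w) else x
    return [(y, y1), (z, z1)]
```

For instance, with $\Pi_S = (y = w \land z = x)$ and $\Pi$ the equality $w = x$, the goal $y = z$ yields $\Pi_M' = (y = w \land z = x)$: each conjunct is entailed by $\Pi_S$ alone, and together with $w = x$ they entail $y = z$.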
\section{Bounded Abduction} \label{sec:abduction} \newcommand{\coloured}[2]{% \ifthenelse{\equal{#2}{r}}{% {\color{BrickRed}{\ensuremath[#1]^{\rm #2}}} }{\ifthenelse{\equal{#2}{b}}{% {\color{RoyalBlue}\ensuremath[#1]^{\rm #2}} }{% \ensuremath[#1]^{#2} } } } \newcommand{\hiding}[2]{\ensuremath\langle#1 \unlhd #2\rangle} \begin{figure*}[t] \setlength{\abovecaptionskip}{-4pt} \setlength{\belowcaptionskip}{4pt} \centering \begin{mathpar} \scriptsize \inferrule[Empty]{\Pi \models \Pi'}{ \Pi : \coloured{\textsf{emp}}{c} \vdash \Pi' : \hiding{\coloured{\textsf{emp}}{c}}{\textsf{emp}} } \inferrule[Star]{ \Pi : \Sigma_0 \vdash \Pi' : \Sigma_0' \\ \Pi : \Sigma_1 \vdash \Pi' : \Sigma_1' }{ \Pi : \Sigma_0 * \Sigma_1 \vdash \Pi' : \Sigma_0' * \Sigma_1' } \inferrule[Points-to]{\Pi \models \Pi'}{ \Pi : \coloured{E \mapsto [a,F]}{c} \vdash \Pi' : \hiding{\coloured{E \mapsto [a, F]}{c}}{E \mapsto [a, F]} } \inferrule[True]{\Pi \models \Pi'}{ \Pi : \Sigma \vdash \Pi' : \hiding{\coloured{\textsf{true}}{c}}{\textsf{true}} } \inferrule[Substitution]{ \Pi[E/x] : \Sigma[E/x] \vdash \Pi'[E/x] : \Sigma'[E/x] \\ \Pi \models x = E }{ \Pi : \Sigma \vdash \Pi' : \Sigma' } \inferrule[$\exists$-right]{ P \vdash Q[\AE/x] }{ P \vdash \qex{x}{Q} } \end{mathpar} \caption{Coloured strengthening. All primed variables are chosen fresh.} \label{fig:coloured_strengthening} \end{figure*} In this section, we discuss our algorithm for bounded abduction. Given a bounded abduction problem \[L \vdash \qex{X}{M * [\ ]} \vdash R\] we would like to find a formula $A$ such that $L \vdash \qex{X}{M * A} \vdash R$. Our algorithm is sound but not complete: it is possible that there exists a solution to the bounded abduction problem, but our procedure cannot find it. In fact, there is in general no complete procedure for bounded abduction, as a consequence of the fact that we do not pre-suppose that our proof system for entailment is complete, or even that entailment is decidable. 
\paragraph{High level description} Our algorithm proceeds in three steps: \begin{compactenum} \item Find a \emph{colouring} of $L$. This is an assignment of a colour, either \emph{red} or \emph{blue}, to each heaplet appearing in $L$. Intuitively, red heaplets are used to satisfy $M$, and blue heaplets are left over. This colouring can be computed by recursion on a proof of $L \vdash \qex{X}{M * \textsf{true}}$. \item Find a \emph{coloured strengthening} $\Pi : \coloured{M'}{r} * \coloured{A}{b}$ of $R$. (We use the notation $\coloured{\Sigma}{r}$ or $\coloured{\Sigma}{b}$ to denote a spatial formula $\Sigma$ of red or blue colour, respectively.) Intuitively, this is a formula that (1) entails $R$ and (2) is coloured in such a way that the red heaplets correspond to the red heaplets of $L$, and the blue heaplets correspond to the blue heaplets of $L$. This coloured strengthening can be computed by recursion on a proof of $L \vdash R$ using the colouring of $L$ computed in step 1. \item Check $\Pi' : M * A \models R$, where $\Pi'$ is the strongest pure formula implied by $L$. This step is necessary because $M$ may be weaker than $M'$. If the entailment check fails, then our algorithm fails to compute a solution to the bounded abduction problem. If the entailment check succeeds, then $\Pi'' : A$ is a solution, where $\Pi''$ is the set of all equalities and disequalities in $\Pi'$ which were actually used in the proof of the entailment $\Pi' : M * A \models R$ (roughly, all those equalities and disequalities which appear in the leaves of the proof tree, plus the equalities that were used in some instance of the {\sc Substitution} rule). 
\shortenpar \end{compactenum} First, we give an example to illustrate these high-level steps: \begin{example} Suppose we want to solve the following bounded abduction problem: \[ L \vdash \textsf{ls}(x,y) * [\ ] \vdash R \] where $L = x \mapsto [a,y] * y \mapsto [b,\textsf{null}]$ and $R = \qex{z}{x\mapsto [a,z] * \textsf{ls}(y,\textsf{null})}$. Our algorithm operates as follows: \begin{compactenum} \item Colour $L$: $\coloured{x \mapsto [a,y]}{r} * \coloured{y \mapsto [b,\textsf{null}]}{b}$ \item Colour $R$: $\qex{z}{\coloured{x \mapsto [a,z]}{r} * \coloured{\textsf{ls}(y,\textsf{null})}{b}}$ \item Prove the entailment \[ x \neq \textsf{null} \land y \neq \textsf{null} \land x \neq y : \textsf{ls}(x,y) * \textsf{ls}(y,\textsf{null}) \models R\] This proof succeeds, and uses the pure assertion $x \neq y$. \end{compactenum} Our algorithm computes $x \neq y : \textsf{ls}(y,\textsf{null})$ as the solution to the bounded abduction problem. \eoe\end{example} We now elaborate our bounded abduction algorithm. We assume that $L$ is quantifier-free (without loss of generality, since quantified variables can be Skolemized) and \emph{saturated} in the sense that for any pure formula $\Pi'$, if $L \vdash \Pi'$, where $L = \Pi : \Sigma$, then $\Pi \vdash \Pi'$. \paragraph{Step 1} The first step of the algorithm is straightforward. If we suppose that there exists a solution, $A$, to the bounded abduction problem, then by definition we must have $L \models \qex{X}{M * A}$. Since $\qex{X}{M * A} \models \qex{X}{M * \textsf{true}}$, we must also have $L \models \qex{X}{M * \textsf{true}}$. We begin step 1 by computing a proof of $L \vdash \qex{X}{M * \textsf{true}}$. If we fail, then we abort the procedure and report that we cannot find a solution to the abduction problem.
If we succeed, then we can colour the heaplets of $L$ as follows: for each heaplet $E \mapsto [\vec{A},\vec{F}]$ in $L$, either $E \mapsto [\vec{A},\vec{F}]$ was used in an application of the {\sc Points-to} axiom in the proof of $L \vdash \qex{X}{M * \textsf{true}}$ or not. If yes, we colour $E \mapsto [\vec{A},\vec{F}]$ red; otherwise, we colour it blue. We denote a heaplet $H$ coloured by a colour $c$ by $\coloured{H}{c}$. \shortenpar \paragraph{Step 2} The second step is to find a coloured strengthening of $R$. Again, supposing that there is some solution $A$ to the bounded abduction problem, we must have $L \models \qex{X}{M * A} \models R$, and therefore $L \models R$. We begin step 2 by computing a proof of $L \vdash R$. If we fail, then we abort. If we succeed, then we define a coloured strengthening of $R$ by recursion on the proof of $L \vdash R$. Intuitively, this algorithm operates by inducing a colouring on points-to predicates in the leaves of the proof tree from the colouring of $L$ (via the {\sc Points-to} rule in Fig.~\ref{fig:coloured_strengthening}) and then only folding recursive predicates when all the folded heaplets have the same colour. \shortenpar More formally, for each formula $P$ appearing as the consequent of some sequent in a proof tree, our algorithm produces a mapping from heaplets in $P$ to coloured spatial formulas. The mapping is represented using the notation $\hiding{\Sigma}{H}$, which denotes that the heaplet $H$ is mapped to the coloured spatial formula $\Sigma$. 
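As a concrete illustration of the Step 1 colouring of $L$, consider the special case where $M$ is a single $\textsf{ls}(x,y)$ predicate and the spatial part of $L$ is a $*$-conjunction of points-to heaplets. The sketch below is our own simplification for illustration (the algorithm itself recurses over the proof of $L \vdash \qex{X}{M * \textsf{true}}$): it colours red exactly the cells on the next-pointer path from $x$ to $y$, and blue everything else.

```python
# Sketch of the Step 1 colouring for M = ls(src, dst): heaplets whose
# addresses lie on the next-pointer path from src to dst are coloured
# red (they satisfy M); the remaining heaplets are coloured blue.
# L's spatial part is modelled as a dict: address -> (data, next).

def colour_for_ls(heap, src, dst):
    red = set()
    cur = src
    while cur != dst:       # walk the list segment consumed by ls(src, dst)
        red.add(cur)
        cur = heap[cur][1]
    return {addr: ("r" if addr in red else "b") for addr in heap}
```

On the example above, $L = x \mapsto [a,y] * y \mapsto [b,\textsf{null}]$ with $M = \textsf{ls}(x,y)$ produces the colouring $\coloured{x \mapsto [a,y]}{r} * \coloured{y \mapsto [b,\textsf{null}]}{b}$.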
For each recursive predicate $Z$ and each $\qex{X}{\Pi : H_1 * \dotsi * H_n} \in \mathit{cases}(Z(\vec{R},\vec{x}))$, we define two versions of the fold rule, corresponding to when $H_1,\ldots,H_n$ are coloured homogeneously ({\sc Fold1}) and heterogeneously ({\sc Fold2}): \begin{mathpar} \scriptsize \inferrule[Fold1]{ (\Pi : \Sigma \vdash \Pi' : \Sigma' * \hiding{\coloured{H_1}{c}}{H_1} * \dotsi * \hiding{\coloured{H_n}{c}}{H_n})[\vec{E}/\vec{x}] }{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \hiding{\coloured{Z(\vec{E})}{c}}{Z(\vec{E})} } \inferrule[Fold2]{ (\Pi : \Sigma \vdash \Pi' : \Sigma' * \hiding{\Sigma_1'}{H_1} * \dotsi * \hiding{\Sigma_n'}{H_n})[\vec{E}/\vec{x}] }{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \hiding{\Sigma_1'* \dotsi * \Sigma_n'}{Z(\vec{E})} } \end{mathpar} The remaining rules for our algorithm are presented formally in Fig.~\ref{fig:coloured_strengthening}.\footnote{Note that some of the inference rules are missing. This is because these rules are inapplicable (in the case of {\sc Unfold} and {\sc Inconsistent}) or unnecessary (in the case of {\sc \textsf{null}-not-Lval} and {\sc*-Partial}), given our assumptions on the antecedent.} To illustrate how this algorithm works, consider the {\sc Fold1} and {\sc Fold2} rules. If a given (sub-)proof finishes with an instance of {\sc Fold} that folds $H_1 *\cdots* H_n$ into $Z(\vec{E})$, we begin by colouring the sub-proof of \[\Pi : \Sigma \vdash \Pi' : \Sigma' * H_1 * \dotsi * H_n \] This colouring process produces a coloured spatial formula $\Sigma_i'$ for each $H_i$. If there is some colour $c$ such that each $\Sigma_i'$ is $\coloured{H_i}{c}$, then we apply {\sc Fold1} and $Z(\vec{E})$ gets mapped to $\coloured{Z(\vec{E})}{c}$. Otherwise (if there is some $i$ such that $\Sigma_i'$ is not $H_i$, or there are some $i,j$ such that $\Sigma_i'$ and $\Sigma_j'$ have different colours), we apply {\sc Fold2}, and map $Z(\vec{E})$ to $\Sigma_1'* \dotsi *\Sigma_n'$.
After colouring a proof, we define $A$ to be the blue part of $R$. That is, if the colouring process ends with a judgement of\\ $\Pi : \coloured{\Sigma_1}{r} * \coloured{\Sigma_2}{b} \vdash \Pi' : \hiding{\coloured{\Sigma_{11}}{r}*\coloured{\Sigma_{12}}{b}}{H_1} * \cdots * \hiding{\coloured{\Sigma_{n1}}{r}*\coloured{\Sigma_{n2}}{b}}{H_n}$\\ (where for any coloured spatial formula $\Sigma$, its partition into red and blue heaplets is denoted by $\coloured{\Sigma_1}{r}*\coloured{\Sigma_2}{b}$), we define $A$ to be $\Pi' : \Sigma_{12} * \dotsi * \Sigma_{n2}$. This choice is justified by the following lemma: \begin{lemma} Suppose that\\ $\Pi : \coloured{\Sigma_1}{r} * \coloured{\Sigma_2}{b} \vdash \Pi' : \hiding{\coloured{\Sigma_{11}}{r}*\coloured{\Sigma_{12}}{b}}{H_1} * \cdots * \hiding{\coloured{\Sigma_{n1}}{r}*\coloured{\Sigma_{n2}}{b}}{H_n}$\\ is derivable using the rules of Fig.~\ref{fig:coloured_strengthening}, and that the antecedent is saturated. Then the following hold: \begin{compactitem} \item $\Pi' : \Sigma_{11}*\Sigma_{12}*\dotsi*\Sigma_{n2} \models \Pi' : H_1 * \dotsi * H_n$; \item $\Pi : \Sigma_1 \models \Pi' : \Sigma_{11} * \dotsi * \Sigma_{n1}$; and \item $\Pi : \Sigma_2 \models \Pi' : \Sigma_{12} * \dotsi * \Sigma_{n2}$. \end{compactitem} \end{lemma} \paragraph{Step 3} The third step of our algorithm is to check the entailment $\Pi : M * A \models R$. To illustrate why this is necessary, consider the following example: \begin{example} Suppose we want to solve the following bounded abduction problem: \[x \neq y : x \mapsto [a,y] \vdash \textsf{ls}(x,y)*[\ ] \vdash x \mapsto [a,y]\ .\] In Step 1, we compute the colouring $x \neq y : \coloured{x \mapsto [a,y]}{r}*\coloured{\textsf{emp}}{b}$ of the left hand side. In Step 2, we compute the colouring $\coloured{x \mapsto [a,y]}{r}*\coloured{\textsf{emp}}{b}$ of the right hand side. However, $\textsf{emp}$ is not a solution to the bounded abduction problem.
In fact, there is no solution to the bounded abduction problem. Intuitively, this is because $M$ is too weak to entail the red part of the right hand side. \eoe\end{example} \section{Discussion} We have presented \textsc{SplInter}\xspace, a new technique for proving safety properties of programs requiring heap and data reasoning. \textsc{SplInter}\xspace combines a new path-based separation logic analysis with first-order interpolation techniques for inferring intricate invariants. The path-based refinement approach has proven extremely effective for numerical and control-sensitive property verification; by bringing its advantages to the domain of combined heap and data verification, we believe \textsc{SplInter}\xspace is an important step towards precise and generic automatic heap/data analyses. In this work we compute spatial($\mathcal{T}$) interpolants in a two-tiered manner: we compute spatial interpolants first, followed by theory interpolants. This approach suffers from the problem that the computed spatial interpolants might not have theory interpolants (even though refinable spatial interpolants may exist), in which case our algorithm reverts to the strongest spatial interpolants. In the future, we would like to systematically search the space of spatial interpolants until a set of spatial interpolants that can be refined with theory interpolants is found. We believe this can be performed in a counterexample-guided manner, where an unsolvable set of Horn clauses can inform us how to modify our spatial interpolants. \section{Putting it All Together} \label{sec:alg} \begin{itemize} \item From the interpolants computed for individual paths, we obtain a full program proof in the style of \textsc{Impact}. \item When a whole path does not admit a proof of correctness, we instead take prefixes of the path. \item Annotations obtained from different paths may be combined by conjunction.
\end{itemize} \section{Implementation and Evaluation} \label{sec:impl} Our primary goal is to study the feasibility of our proposed algorithm. To that end, we implemented an instantiation of our generic algorithm with the linked list recursive predicate $\textsf{ls}$ (as defined in Sec.~\ref{sec:prelims}) and refinements in the theory of linear arithmetic (QF\_LRA). The following describes our implementation and evaluation of \textsc{SplInter}\xspace in detail. \paragraph{Implementation} We implemented \textsc{SplInter}\xspace in the T2 safety and termination verifier~\cite{t2}. Specifically, we extended T2's front-end to handle heap-manipulating programs, and used its safety checking component (which implements McMillan's \textsc{Impact}\xspace algorithm) as a basis for our implementation of \textsc{SplInter}\xspace. To enable reasoning in separation logic, we implemented an entailment checker for \textsf{RSep}\xspace along with a bounded abduction procedure. We implemented a constraint-based solver using the linear rational arithmetic interpolation techniques of Rybalchenko and Stokkermans~\cite{Rybalchenko07} to solve the non-recursive Horn clauses generated by \textsc{SplInter}\xspace. Although many off-the-shelf tools for interpolation exist (e.g.,~\cite{McMillan2011}) we implemented our own solver for experimentation and evaluation purposes to allow us more flexibility in controlling the forms of interpolants we are looking for. We expect that \textsc{SplInter}\xspace would perform even better using these highly tuned interpolation engines. \shortenpar Our main goal is to evaluate the feasibility of our proposed extension of interpolation-based verification to heap and data reasoning, and not necessarily to demonstrate performance improvements against other tools. 
Nonetheless, we note that there are two tools that target similar programs: (1) \textsc{Thor}~\cite{Magill2010}, which computes a memory safety proof and uses off-the-shelf numerical verifiers to strengthen it, and (2) \textsc{Xisa}~\cite{Chang2008}, which combines shape and data abstract domains in an abstract interpretation framework. \textsc{Thor} cannot compute arbitrary refinements of recursive predicates (like the ones demonstrated here and required in our benchmarks) unless they are manually supplied with the required theory predicates. Instantiated with the right abstract data domains, \textsc{Xisa} can in principle handle most programs we target in our evaluation. (\textsc{Xisa} was unavailable for comparison~\cite{chang}.) Sec.~\ref{sec:rel} provides a detailed comparison with related work. \paragraph{Benchmarks} To evaluate \textsc{SplInter}\xspace, we used a number of linked list benchmarks that require heap and data reasoning. First, we used a number of simple benchmarks: \texttt{listdata} is similar to Fig.~\ref{code:ex}, where a linked list is constructed and its data elements are later checked; \texttt{twolists} requires an invariant comparing data elements of two lists (all elements in list $A$ are greater than those in list $B$); \texttt{ptloop} tests our spatial interpolation technique, where the head of the list must not be folded in order to ensure its data element is accessible; and \texttt{refCount} is a reference counting program, where our goal is to prove memory safety (no double free). For our second set of benchmarks, we used a cut-down version of BinChunker (\url{http://he.fi/bchunk/}), a Linux utility for converting between different audio CD formats. BinChunker maintains linked lists and uses their data elements for traversing an array. Our property of interest is thus ensuring that all array accesses are within bounds. 
To test our approach, we used a number of modifications of BinChunker, \texttt{bchunk\_a} to \texttt{bchunk\_f}, where \texttt{a} is the simplest benchmark and \texttt{f} is the most complex one. \paragraph{Heuristics} We employed a number of heuristics to improve our implementation. First, given a program path to prove correct, we attempt to find a similar proof to previously proven paths that traverse the same control flow locations. This is similar to the \emph{forced covering} heuristic of~\cite{McMillan2006} to force path interpolants to generalize to inductive invariants. Second, our Horn clause solver uses Farkas' lemma to compute linear arithmetic interpolants. We found that minimizing the number of non-zero \emph{Farkas coefficients} results in more generalizable refinements. A similar heuristic is employed by~\cite{Albarghouthi2013}. \begin{table}[t] \centering \scalebox{0.8}{ \begin{tabular}{c|c|c|c|c} Benchmark & \#\textsf{ProvePath} & Time (s) & $\mathcal{T}$ Time & Sp. Time \\ \hline \hline \texttt{listdata} & 5 & 1.37 & 0.45 & 0.2 \\ \texttt{twolists} & 5 & 3.12 & 2.06 & 0.27 \\ \texttt{ptloop} & 3 & 1.03 & 0.28 & 0.15\\ \texttt{refCount} & 14 & 1.6 & 0.59 & 0.14\\ \hline \texttt{bchunk\_a} & 6 & 1.56 & 0.51 & 0.25\\ \texttt{bchunk\_b} & 18 & 4.78 & 1.7 & 0.2 \\ \texttt{bchunk\_c} & 69 & 31.6 & 14.3 & 0.26\\ \texttt{bchunk\_d} & 23 & 9.3 & 4.42 & 0.27 \\ \texttt{bchunk\_e} & 52 & 30.1 & 12.2 & 0.25 \\ \texttt{bchunk\_f} & 57 & 22.4 & 12.0 & 0.25 \\ \end{tabular}} \caption{Results of running \textsc{SplInter}\xspace on our benchmark set. } \label{tbl:res} \end{table} \paragraph{Results} Table~\ref{tbl:res} shows the results of running \textsc{SplInter}\xspace on our benchmark suite. 
Each row shows the number of calls to $\textsf{ProvePath}$ (number of paths proved), the total time taken by \textsc{SplInter}\xspace in seconds, the time taken to generate Horn clauses and compute theory interpolants ($\mathcal{T}$ Time), and the time taken to compute spatial interpolants (Sp. Time). \textsc{SplInter}\xspace proves all benchmarks correct w.r.t. their respective properties. As expected, on simpler examples, the number of paths sampled by \textsc{SplInter}\xspace is relatively small (3 to 14). In the \texttt{bchunk\_*} examples, \textsc{SplInter}\xspace examines up to 69 paths (\texttt{bchunk\_c}). It is important to note that, in all benchmarks, almost half of the total time is spent in theory interpolation. We expect this can be drastically cut with the use of a more efficient interpolation engine. The time taken by spatial interpolation is very small in comparison, and becomes negligible in larger examples. The rest of the time is spent in checking entailment of \textsf{RSep}\xspace formulas and other miscellaneous operations. Our results highlight the utility of our proposed approach. Using our prototype implementation of \textsc{SplInter}\xspace, we were able to verify a set of realistic programs that require non-trivial combinations of heap and data reasoning. We expect the performance of our prototype implementation of \textsc{SplInter}\xspace can greatly improve with the help of state-of-the-art Horn clause solvers, and more efficient entailment checkers for separation logic. \section{Overview} \label{sec:ex} In this section, we demonstrate the operation of \textsc{SplInter}\xspace (Fig.~\ref{fig:arch}) on the simple linked list example shown in Fig.~\ref{code:ex}. 
We assume that integers are unbounded (i.e., integer values are drawn from $\mathbb{Z}$ rather than machine integers) and% \setlength{\intextsep}{0pt}% \setlength{\columnsep}{5pt}% \begin{wrapfigure}{r}{4.6cm} \hfill \begin{minipage}{4.6cm} \begin{lstlisting} 1: int i = nondet(); node* x = null; 2: while (i != 0) node* tmp = malloc(node); tmp->N = x; tmp->D = i; x = tmp; i--; 3: while (x != null) 4: assert(x->D >= 0); x = x->N; \end{lstlisting} \end{minipage} \caption{Illustrative Example \label{code:ex}} \end{wrapfigure}% that there is a \texttt{struct} called \texttt{node} denoting a linked list node, with a next pointer \texttt{N} and an integer (data) element \texttt{D}. The function \texttt{nondet()} returns a nondeterministic integer value. This program starts by building a linked list in the loop on location \texttt{2}. The loop terminates if the initial value of \texttt{i} is $\geq 0$, in which case a linked list of size \texttt{i} is constructed, where data elements \texttt{D} of list nodes range from \texttt{1} to \texttt{i}. Then, the loop at location \texttt{3} iterates through the linked list asserting that the data element of each node in the list is $\geq 0$. Our goal is to prove that the assertion at location \texttt{4} is never violated. \paragraph{Sample a Program Path} To start, we need a path $\pi$ through the program to the assertion at location \texttt{4}. Suppose we start by sampling the path \texttt{1,2,2,3,4}, that is, the path that goes through the first loop once, and enters the second loop arriving at the assertion. This path is illustrated in Fig.~\ref{fig:ex} (where \texttt{2a} indicates the second occurrence of location \texttt{2}). Our goal is to construct a Hoare-style proof of this path: an annotation of each location along the path with a formula describing reachable states, such that location \texttt{4} is annotated with a formula implying that \texttt{x->D >= 0}. This goal is accomplished in two phases. 
First, we use \emph{spatial interpolation} to compute a memory safety proof for the path $\pi$ (Fig.~\ref{fig:ex}(b)). Second, we use \emph{theory refinement} to strengthen the memory safety proof and establish that the path satisfies the post-condition \texttt{x->D >= 0} (Fig.~\ref{fig:ex}(c)). \paragraph{Compute Spatial Interpolants} The first step in constructing the proof is to find \emph{spatial interpolants}: a sequence of separation logic formulas \emph{approximating} the shape of the heap at each program location, and forming a Hoare-style memory safety proof of the path. Our spatial interpolation procedure is a two step process that first symbolically executes the path in a forward pass and then derives a weaker proof using a backward pass. The backward pass can be thought of as an under-approximate weakest precondition computation, which uses the symbolic heap from the forward pass to guide the under-approximation. We start by showing the \emph{symbolic heaps} in Fig.~\ref{fig:ex}(a), which are the result of the forward pass obtained by symbolically executing \emph{only} heap statements along this program path (i.e., the strongest postcondition along the path). The separation logic annotations in Fig.~\ref{fig:ex} follow standard notation (e.g.,~\cite{Distefano2006}), where a formula is of the form $\Pi:\Sigma$, where $\Pi$ is a Boolean first-order formula over heap variables (pointers) as well as data variables (e.g., $x = \textsf{null}$ or $i > 0$), and $\Sigma$ is a \emph{spatial conjunction} of \emph{heaplets} (e.g., \textsf{emp}, denoting the empty heap, or $Z(x,y)$, a recursive predicate, e.g., that denotes a linked list between $x$ and $y$). For the purposes of this example, we assume a recursive predicate $\textsf{ls}(x,y)$ that describes linked lists. 
In our example, the symbolic heap at location \texttt{2a} is $true : x\mapsto[d',\textsf{null}]$, where the heap consists of a node, pointed to by variable $x$, with $\textsf{null}$ in the \texttt{N} field and the (implicitly existentially quantified) variable $d'$ in the \texttt{D} field (since so far we are only interested in heap shape and not data). \shortenpar The symbolic heaps determine a memory safety proof of the path, but this proof is too strong and would likely not generalize to other paths. The goal of spatial interpolation is to find a sequence of annotations that are weaker than the symbolic heaps, but that still prove memory safety of the path. A sequence of spatial interpolants is shown in Fig.~\ref{fig:ex}(b). Note that all spatial interpolants are implicitly spatially conjoined with $\textsf{true}$; for clarity, we avoid explicitly conjoining formulas with $\textsf{true}$ in the figure. For example, location \texttt{2} is annotated with $true : \textsf{ls}(x,\textsf{null}) * \textsf{true}$, indicating that there is a list on the heap, as well as other potential objects not required to show memory safety. We compute spatial interpolants by going backwards along the path and asking questions of the form: \emph{how much can we weaken the symbolic heap while still maintaining memory safety?} We will describe how to answer such questions in Section~\ref{sec:spint}. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{figs/ex.pdf} \caption{ Path through program in Fig.~\ref{code:ex}, annotated with (a) results of forward symbolic execution, (b) spatial interpolants, and (c) spatial($\mathcal{T}$) interpolants, where $\mathcal{T}$ is linear integer arithmetic. Arrows $\Rightarrow$ indicate implication (entailment) direction. } \label{fig:ex} \end{figure} \paragraph{Refine with Theory Interpolants} Spatial interpolants give us a memory safety proof in the form of an approximate heap shape at each location.
Our goal now is to strengthen these heap shapes with data refinements, in order to prove that the assertion at the end of the path is not violated. To do so, we generate a system of Horn clause constraints from the path in some first-order theory admitting interpolation (e.g., linear arithmetic). These Horn clauses carefully encode the path's data manipulation along with the spatial interpolants, which tell us the heap shape at each location along the path. This constraint system can be solved using off-the-shelf interpolant generation techniques (e.g.,~\cite{McMillan2011,Rybalchenko07}); a solution is a \emph{refinement} (strengthening) of the memory safety proof. \shortenpar In this example, we encode program operations over integers in the theory of linear integer arithmetic, and use Craig interpolants to solve the system of constraints. A solution of this system is a set of linear arithmetic formulas that refine our spatial interpolants and, as a result, imply that the assertion we want to prove holds. One possible solution is shown in Fig.~\ref{fig:ex}(c). For example, location \texttt{2a} is now labeled with $true \color{black} : \mathsf{ls}(\color{OliveGreen} \lda{\nu. \nu \geq i} \color{black} ,x,\mathsf{null}),$ where the {\color{OliveGreen}green} parts of the formula are those added by refinement. Specifically, after refinement, we know that \emph{all} elements in the list from $x$ to $\textsf{null}$ after the first loop have data values greater than or equal to $i$, as indicated by the predicate $\color{OliveGreen} \lda{\nu. \nu \geq i}$. (In Section~\ref{sec:prelims}, we formalize recursive predicates with data refinements.) Location \texttt{4} is now annotated with $\color{OliveGreen}d' \geq 0 \color{black} : x \mapsto [d',n'] * \textsf{true}$, which implies that \texttt{x->D >= 0}, thus proving that the path satisfies the assertion.
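To illustrate the role interpolation plays here, the following toy Python sketch is entirely our own (a real implementation would use an SMT-based interpolating prover such as those cited above): it brute-forces, over a small template set and a bounded integer domain, a formula over the shared variable $d$ that is implied by the path condition and inconsistent with the negated assertion.

```python
# Toy "interpolant" search, for illustration only. A(i, d) stands for the
# path condition up to the assertion; B(d) for the negated postcondition
# x->D >= 0. An interpolant phi mentions only the shared variable d,
# with A |= phi and phi /\ B unsatisfiable (checked over a small domain).

DOMAIN = range(-5, 6)

def A(i, d):            # path condition: d was set to i, and i >= 0 held
    return d == i and i >= 0

def B(d):               # negated postcondition: x->D >= 0 fails
    return d < 0

templates = {
    "d >= 0":  lambda d: d >= 0,
    "d > 0":   lambda d: d > 0,
    "d <= 0":  lambda d: d <= 0,
}

def is_interpolant(phi):
    a_implies = all(phi(d) for i in DOMAIN for d in DOMAIN if A(i, d))
    inconsistent = not any(phi(d) and B(d) for d in DOMAIN)
    return a_implies and inconsistent

found = [name for name, phi in templates.items() if is_interpolant(phi)]
print(found)   # 'd >= 0' qualifies; 'd > 0' fails because A allows d = 0
```

The surviving template, $d \geq 0$, is exactly the data refinement attached to location \texttt{4} in Fig.~\ref{fig:ex}(c).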
\paragraph{From Proofs of Paths to Proofs of Programs} We go from proofs of paths to whole program proofs implicitly by building an \emph{abstract reachability tree} as in \textsc{Impact}~\cite{McMillan2006}. To give a flavour of how this works, consider that the assertions at \texttt{2} and \texttt{2a} are identical: this implies that the assertion is an inductive invariant at line \texttt{2}. Since this assertion also happens to be strong enough to prove safety of the program, we need not sample any longer unrollings of the first loop. However, since we have not established the inductiveness of the assertion at \texttt{3}, the proof is not yet complete and more traces need to be explored (in fact, exploring one more trace will do: consider the trace that unrolls the second loop once, which shows that the second visit to \texttt{3} can also be labeled with $\color{black} true \color{black} : \mathsf{ls}(\color{OliveGreen} \lda{\nu. \nu \geq 0} \color{black} ,x,\mathsf{null})$). \shortenpar Since our high-level algorithm is virtually the same as \textsc{Impact}~\cite{McMillan2006}, we will not describe it further in the paper. For the remainder of this paper, we will concentrate on the novel contribution of our algorithm: computing spatial interpolants with theory refinements for program paths. \section{Introduction} \label{sec:intro} Since the problem of determining whether a program satisfies a given property is undecidable, every verification algorithm must make some compromise. There are two classical schools of program verification, which differ in the compromise they make: the \emph{static analysis} school gives up refutation soundness (i.e., may report \emph{false positives}); and the \emph{software model checking} school gives up the guarantee of termination. In the world of integer program verification, both schools are well explored and enjoy cross-fertilization of ideas: each has its own strengths and uses in different contexts.
In the world of heap-manipulating programs, the static analysis school is well-attended \cite{Sagiv99,Distefano2006,Chang2008,Calcagno09}, while the software model checking school has remained essentially vacant. This paper initiates a program to rectify this situation, by proposing one of the first path-based software model checking algorithms for proving combined shape-and-data properties. The algorithm we propose, \textsc{SplInter}\xspace, marries two celebrated program verification ideas: McMillan's \emph{lazy abstraction with interpolants} (\textsc{Impact}\xspace) algorithm for software model checking \cite{McMillan2006}, and \emph{separation logic}, a program logic for reasoning about shape properties \cite{Reynolds2002}. \textsc{SplInter}\xspace (like \textsc{Impact}\xspace) is based on a path-sampling methodology: given a program $P$ and safety property $\varphi$, \textsc{SplInter}\xspace constructs a proof that $P$ is memory safe and satisfies $\varphi$ by sampling a finite number of paths through the control-flow graph of $P$, proving them safe, and then assembling proofs for each sample path into a proof for the whole program. The key technical advance which enables \textsc{SplInter}\xspace is an algorithm for \emph{spatial interpolation}, which is used to construct proofs in \emph{separation logic} for the sample traces (serving the same function as \emph{Craig interpolation} for first-order logic in \textsc{Impact}\xspace). \shortenpar \textsc{SplInter}\xspace is able to prove properties requiring integrated heap and data (e.g., integer) reasoning by strengthening separation logic proofs with \emph{data refinements} produced by classical Craig interpolation, using a technique we call \emph{spatial interpolation modulo theories}. Data refinements are \emph{not tied to a specific logical theory}, giving us a rather generic algorithm and freedom to choose an appropriate theory to encode a program's data. 
Fig.~\ref{fig:arch} summarizes the high-level operation of our algorithm. Given a program with no heap manipulation, \textsc{SplInter}\xspace only computes theory interpolants and behaves exactly like \textsc{Impact}\xspace, and thus one can view \textsc{SplInter}\xspace as a proper extension of \textsc{Impact}\xspace to heap-manipulating programs. At the other extreme, given a program with no data manipulation, \textsc{SplInter}\xspace is a new shape analysis that uses path-based relaxation to construct memory safety proofs in separation logic. \shortenpar There is a great deal of work in the static analysis school on shape analysis and on combined shape-and-data analysis, which we will discuss further in Sec.~\ref{sec:rel}. We do not claim superiority over these techniques (which have had the benefit of 20 years of active development). \textsc{SplInter}\xspace, as the first member of the software model checking school, is not \emph{better}; however, it \emph{is} fundamentally \emph{different}. Nonetheless, we will mention two of the features of \textsc{SplInter}\xspace (not enjoyed by any previous verification algorithm for shape-and-data properties) that make our approach worthy of exploration: path-based refinement and property-direction. \begin{compactitem} \item \emph{Path-based refinement}: This supports a progress guarantee by tightly correlating program exploration with refinement, and by avoiding imprecision due to lossy join and widening operations employed by abstract domains. \textsc{SplInter}\xspace does not report false positives, and produces counterexamples for violated properties. This comes, as usual, at the price of potential divergence. \item \emph{Property-direction}: Rather than seeking the strongest invariant possible, we compute one that is \emph{just strong enough} to prove that a desired property holds.
Property direction enables scalable reasoning in rich program logics like the one described in this paper, which combines separation logic with first-order data refinements. \end{compactitem} We have implemented an instantiation of our generic technique in the \textsc{T2} verification tool~\cite{t2}, and used it to prove correctness of a number of programs, partly drawn from open source software, requiring combined data and heap invariants. Our results indicate the usability and promise of our approach. \begin{figure}[t] \centering \includegraphics[scale=0.32]{figs/arch.pdf} \caption{Overview of \textsc{SplInter}\xspace verification algorithm.} \label{fig:arch} \end{figure} \paragraph{Contributions} We summarize our contributions as follows: \begin{enumerate} \item A generic property-directed algorithm for verifying and falsifying safety of programs with heap and data manipulation. \item A precise and expressive separation logic analysis for computing memory safety proofs of program paths using a novel technique we term \emph{spatial interpolation}. \item A novel interpolation-based technique for strengthening separation logic proofs with data refinements. \item An implementation and an evaluation of our technique for a fragment of separation logic with linked lists enriched with linear arithmetic refinements. \end{enumerate} \section{Introduction} \label{sec:intro} The past decade has witnessed significant advances in the quest for efficient automated software verification techniques. For example, advances such as predicate abstraction~\cite{Graf97,Ball01,Henzinger2002}, interpolation~\cite{McMillan2006}, and SMT solving~\cite{Barrett09} have facilitated efficient verification of control-sensitive and numerical properties. Within the same arena of numerical properties, techniques employing numerical abstract domains~\cite{Cousot77} have been successfully applied to large safety-critical systems~\cite{Blanchet03}. 
On the other hand, in the heap world, scalable techniques based on separation logic~\cite{Reynolds2002} have been applied to proving memory safety of large low-level software~\cite{Calcagno09}. However, combined heap and data reasoning is comparatively under-explored. For instance, memory safety analyses generally either try to prove array accesses are within bounds assuming pointers to arrays are valid, or to prove accesses through pointers are valid assuming accesses to valid arrays are within bounds. This paper is concerned with verifying safety properties that require heap as well as data reasoning. Consider, for example, a program that stores array indices in a linked list and then uses them to access the array, or a program that uses reference counting to manage memory. In order to prove memory safety of such programs, we need to be able to infer intricate invariants that involve both data and heap shape. The aforementioned techniques are unable to prove safety of such programs: software model checking techniques track only very shallow heap information (e.g., using traditional pointer analysis), and most separation logic and shape analysis techniques do not track data invariants of heap data structures. In this paper, we propose \textsc{SplInter}\xspace, a novel safety verification algorithm that is able to prove properties requiring integrated heap and data reasoning. \textsc{SplInter}\xspace marries (1) a new form of rich separation logic reasoning, for \emph{lazily} inferring heap invariants, with (2) interpolation-based reasoning, for strengthening heap invariants with \emph{data refinements}. 
\begin{figure}[t] \centering \includegraphics[scale=0.32]{figs/arch.pdf} \caption{Overview of \textsc{SplInter}\xspace verification algorithm.} \label{fig:arch} \end{figure} Given a program $P$ and safety property $\varphi$, \textsc{SplInter}\xspace constructs a proof that $P$ is memory safe and satisfies $\varphi$ by \emph{sampling paths} through the control-flow graph of $P$ and proving them safe. That is, by proving safety of a finite number of paths (samples), we construct a proof of the whole program. This is inspired by McMillan's \emph{lazy abstraction with interpolants} (\textsc{Impact}\xspace) verification technique~\cite{McMillan2006}, where \emph{Craig interpolants} are used to prove individual paths. The \emph{path sampling} methodology (e.g., as in~\cite{McMillan2006,Beyer07,Heizmann10}) enables the following features: \vspace{-.05in} \begin{enumerate} \item \emph{Path-based refinement}: This supports a progress guarantee by tightly correlating program exploration with refinement, and by avoiding imprecision due to lossy join and widening operations employed by abstract domains. \textsc{SplInter}\xspace does not report false positives, and produces counterexamples for violated properties. This comes, as usual, at the price of potential divergence. \item \emph{Property-direction}: Rather than seeking the strongest invariant possible, we compute one that is \emph{just strong enough} to prove that a desired property holds. Property direction enables scalable reasoning in rich program logics (like the one described in this paper), which is necessary for proving properties combining data and the heap. \end{enumerate} Figure~\ref{fig:arch} summarizes the high-level operation of our algorithm. Notice that, at one extreme, given a program with no heap manipulation, \textsc{SplInter}\xspace only computes theory interpolants and behaves exactly like \textsc{Impact}\xspace. 
One can thus view \textsc{SplInter}\xspace as a proper extension of \textsc{Impact}\xspace to heap-manipulating programs. At the other extreme, given a program with no data manipulation, our algorithm is a new shape analysis that uses path-based relaxation to construct memory safety proofs in separation logic. Thus, \textsc{SplInter}\xspace is a step towards fully automatic verification: since it integrates both numerical and heap reasoning, it obviates the need to choose which category of verification tool to apply to a program of interest. Moreover, it can be used to prove properties that lie beyond the scope of either category. There is some prior work on combined heap and data analysis, discussed in Section~\ref{sec:rel}, but none enjoys \textsc{SplInter}\xspace's two key features: path-based refinement and property-direction. At the core of \textsc{SplInter}\xspace is the idea of \emph{spatial interpolants modulo theories}, which are used to construct a proof of correctness of a given program path. \paragraph{Spatial Interpolants} To prove a program path $\pi$ safe, we first construct a Hoare-style proof of memory safety of $\pi$ using a new separation logic--based technique. We call the annotations along the path \emph{spatial path interpolants}, since, in the style of interpolation-based verification, their logical strength is \emph{in between} that of the sequences of strongest postconditions and of weakest preconditions. Effectively, spatial interpolants tell us which parts of the heap, and their structure, we should remember at each point along the path in order to ensure safe execution thereafter. Consequently, our technique does not suffer from the imprecision incurred by forward-running shape analyses, which might \emph{abstract too much} (e.g., by widening) and report unsafe memory operations in a safe program.
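The "in between" relationship can be illustrated on a purely arithmetic toy example (our own; no heap involved). For the path \texttt{x := 0; x := x + 1} with postcondition $x > 0$, a valid annotation after the first statement must be implied by the strongest postcondition ($x = 0$) and must imply the weakest precondition of the remaining statement ($x \geq 0$); the sketch checks candidates over a bounded domain.

```python
# Toy illustration of annotation strength: any annotation phi placed after
# x := 0 (with the path continuing as x := x + 1; assert x > 0) must satisfy
# sp |= phi and phi |= wp, checked here over a small integer domain.

DOMAIN = range(-3, 4)
sp = lambda x: x == 0           # strongest postcondition of x := 0
wp = lambda x: x + 1 > 0        # weakest precondition of x := x+1; x > 0

def is_valid_annotation(phi):
    implied_by_sp = all((not sp(x)) or phi(x) for x in DOMAIN)
    implies_wp    = all((not phi(x)) or wp(x) for x in DOMAIN)
    return implied_by_sp and implies_wp

candidates = {"x == 0": lambda x: x == 0,   # the strongest valid choice
              "x >= 0": lambda x: x >= 0,   # weaker, still valid
              "true":   lambda x: True}     # too weak: loses the property
print({name: is_valid_annotation(phi) for name, phi in candidates.items()})
# {'x == 0': True, 'x >= 0': True, 'true': False}
```

Spatial path interpolants play the role of the weaker valid annotation $x \geq 0$: weak enough to generalize, but still strong enough to carry the proof forward.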
Our spatial interpolation procedure is a two-step process that first symbolically executes the path in a forward pass and then derives a weaker proof using a backward pass. The backward pass can be thought of as an under-approximate weakest precondition computation, which uses the symbolic heap from the forward pass to guide the under-approximation. This second pass is the principal point at which heuristic approximation is involved. The approximation strives toward the weakest preconditions while, in the limit, ``unsuccessful'' approximation yields the overly-strong proof from the first pass. \paragraph{Spatial Interpolants Modulo Theories} Given a memory safety proof of a path $\pi$ (spatial interpolants), we aim to strengthen it with data invariants such that the proof establishes that the path satisfies the safety property $\varphi$. To do so, we generate a system of Horn clause constraints from the path in some first-order theory admitting interpolation (e.g., linear arithmetic). These Horn clauses carefully encode the path's data manipulation along with the spatial interpolants, which tell us the heap shape at each location along the path. This constraint system can be solved using off-the-shelf first-order interpolant generation techniques (e.g.,~\cite{McMillan2011,Rybalchenko07}); its solution is a \emph{refinement} (strengthening) of the memory safety proof. For example, we might transform an assertion $\textsf{ls}(x,\textsf{null})$ -- there is a linked list from $x$ to $\textsf{null}$ -- into the stronger formula $\textsf{ls}(\lda{a.a > 0}, x, \textsf{null})$ -- all elements of the list from $x$ to $\textsf{null}$ are greater than $0$. We call the resulting annotations \emph{spatial interpolants modulo theories} (or \emph{spatial($\mathcal{T}$)} interpolants for short).
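The meaning of a refined predicate such as $\textsf{ls}(\lda{a. a > 0}, x, \textsf{null})$ can be made concrete with a small checker. The sketch below uses our own encoding (heaps as Python dicts, address $0$ standing for \textsf{null}); it decides whether a concrete heap is exactly a $\phi$-refined list segment, following the standard unfolding of \textsf{ls}.

```python
# Hedged sketch: checking a concrete heap against a refined list predicate
# ls(phi, e, f), via the unfolding ls(phi,E,F) = (E = F and emp) or
# (E |-> [k, E'] * ls(phi, E', F), with phi holding of the data value k).

def models_ls(heap, phi, e, f):
    """heap: dict addr -> (data, next). True iff the whole heap is exactly
    a phi-refined list segment from e to f."""
    if e == f:
        return heap == {}                      # base case: E = F and emp
    if e not in heap:
        return False                           # dangling head cell
    d, n = heap[e]
    rest = {a: v for a, v in heap.items() if a != e}
    return phi(d) and models_ls(rest, phi, n, f)

h = {1: (5, 2), 2: (3, 0)}                     # two-cell list; 0 is "null"
print(models_ls(h, lambda d: d >= 3, 1, 0))    # True: both 5 and 3 are >= 3
print(models_ls(h, lambda d: d >= 4, 1, 0))    # False: the cell storing 3 fails
```

The refinement step can thus be read as shrinking the set of concrete heaps a spatial interpolant admits, without changing its shape.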
Note that our data refinements are \emph{not tied to a specific logical theory} $\mathcal{T}$, giving us a rather generic algorithm and freedom to choose an appropriate theory to encode a program's data. Heap shapes are expressed using a fragment of separation logic with general recursive predicates. The (second-order) recursive predicates are parameterized by data refinement predicates that at each unfolding are able to constrain a finite history or window of the data stored in memory described by the recursive predicate. The recursive predicate definitions contribute to the Horn constraints, and first-order interpolants are used to refine the data refinement predicate parameters of recursive predicates. We have implemented an instantiation of our generic technique in the \textsc{T2} safety and termination verification tool~\cite{t2}, and used it to prove correctness of a number of programs, partly drawn from open source software, requiring combined data and heap invariants. Our results indicate the usability and promise of our approach. \paragraph{Contributions} We summarize our contributions as follows: \begin{compactitem} \item A generic property-directed algorithm for verifying and falsifying safety of programs with heap and data manipulation. \item A precise and expressive separation logic analysis for computing memory safety proofs of program paths using a novel technique we term \emph{spatial interpolation}. \item A novel interpolation-based technique for strengthening separation logic proofs with data refinements. \item An implementation and an evaluation of our technique for a fragment of separation logic with linked lists enriched with linear arithmetic refinements. \end{compactitem} \emph{The \textbf{appendix} contains (A) a complete description of \textsc{SplInter}\xspace, (B) proofs, (C) the complete semantics and proof system of our program logic, and (D) other extended clarifications.
} \section{Preliminaries} We deal with list-manipulating programs, where each list node contains a data field. Our separation logic fragment is the standard one with list segments, points-to predicates, and existential quantifiers. \section{Memory Safety Proof Relaxation} Let $\pi = p_1, \ldots, p_n$ be a program path and $C = c_0,\ldots, c_n$ be the result of symbolically executing this path (without applying abstraction rules), where $c_0 \equiv true : \textsf{emp}$. We assume that symbolic execution does not perform an unsafe memory operation. Given this, we would like to find a weaker proof $W = w_0,\ldots,w_n$ that still preserves memory safety. Each symbolic state $c_i$ is of the form $\Pi_i:\Sigma_i$, where $\Sigma_i$ is treated as a set of points-to predicates (since no abstraction is performed). An empty set $\Sigma_i$ denotes $\textsf{emp}$. Computing $w_0, \ldots, w_n$ is performed backwards starting from $w_n$. We describe the algorithm as follows: \textbf{[Note: The following assumes that the only non-determinism in the program is in the control flow. That is, symbolic execution without abstraction entails either the condition of the assume statement or its negation.]} \subsection{Initialization ($w_n$)} $w_n = true:\textsf{true}$. We also create a function $f_n$ that maps each points-to predicate in $\Sigma_n$ to the predicate $\textsf{true}$ in $w_n$. Effectively, $f_n$ specifies a fine-grained entailment relation between heaplets in $c_n$ and $w_n$. \subsection{Computing $w_i$ (without abstraction)} We now show how to compute $w_i$, where $i < n$. There are a number of cases, depending on the command $p_{i+1}$ along the path. \paragraph{Assignment} When $p_{i+1}$ is an assignment of the form $\texttt{x := y}$, then $$w_{i} = \Pi[y/x]:\Sigma[y/x],$$ where $w_{i+1} = \Pi:\Sigma$. The map $f_i$ from $c_i$ to $w_i$ is the same as $f_{i+1}$, since the list and points-to predicates have not changed.
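As an illustration, the Assignment rule amounts to a syntactic substitution of $y$ for $x$ in both parts of $w_{i+1}$. The sketch below uses our own tuple encoding of formulas (atoms as tuples of symbols); the helper names are assumptions of this illustration.

```python
# Sketch of the backward Assignment rule: for x := y, the precondition is
# w_{i+1} with y substituted for x in both the pure and the spatial part.

def subst(term, old, new):
    """Replace the variable `old` by `new` in a single symbol."""
    return new if term == old else term

def pre_assign(w, x, y):
    """w = (pure, spatial); atoms are tuples like ("eq","x","null") or
    ("ls","x","null"). Returns w[y/x]."""
    pure, spatial = w
    pure2 = [tuple(subst(t, x, y) for t in atom) for atom in pure]
    spatial2 = [(kind,) + tuple(subst(t, x, y) for t in args)
                for (kind, *args) in spatial]
    return (pure2, spatial2)

w_next = ([("eq", "x", "null")], [("ls", "x", "null")])
print(pre_assign(w_next, "x", "y"))
# ([('eq', 'y', 'null')], [('ls', 'y', 'null')])
```

The map $f_i$ is unchanged in this case, exactly as stated above: substitution renames variables but neither introduces nor removes heaplets.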
\paragraph{Data-field assignment} When $p_{i+1}$ is an assignment of the form $\texttt{d := y->D}$ or $\texttt{y->D := d}$, then we have to enforce that $w_i$ entails the existence of a cell $y \mapsto [\_]$, thus ensuring a safe memory operation. Let $X = x \mapsto [d,n]$ be a points-to predicate such that $X \in \Sigma_{i+1}$ and $\Pi_{i+1} \vdash x = y$. Then, $$w_i = \Pi \land (x = y) : (\Sigma - f_{i+1}(X)) * x \mapsto [d,n] * \textsf{sub}(f_{i+1}(X), x \mapsto [d,n]),$$ where $w_{i+1} = \Pi:\Sigma$, and $\textsf{sub}(Pred, x \mapsto [d,n])$ is defined as follows: \begin{itemize} \item If $Pred = \textsf{true}$, then the result is $\textsf{true}$. \item If $Pred = ls(z,w)$, then the result is $ls(z,x) * ls(n,w)$, where $n$ is a fresh (existentially quantified) variable. \item If $Pred = y \mapsto [d,n]$, then the result is $\textsf{emp}$. \end{itemize} $f_{i}$ is set to $f_{i+1}$, except that $X$ now maps to the points-to predicate $x \mapsto [d,n]$, and all other predicates that mapped to $f_{i+1}(X)$ now map to the result of $\textsf{sub}$. Note that when $\textsf{sub}$ returns two lists (case 2), we have to split the predicates mapping to the result of $\textsf{sub}$ into those that can reach $X$ and those that $X$ can reach; in $f_i$, the former map to $ls(z,x)$ and the latter to $ls(n,w)$. \paragraph{Allocation} When $p_{i+1}$ is an allocation of the form $\texttt{alloc(y)}$, let $X = y \mapsto [d,n]$ be a points-to predicate such that $X \in \Sigma_{i+1}$. Then, $$w_i = (\Pi:(\Sigma - f_{i+1}(X)) * \textsf{sub}(f_{i+1}(X), y \mapsto [d,n]))[y'/y],$$ where $w_{i+1} = \Pi : \Sigma$. $f_i$ is the same as $f_{i+1}$, except that $X$ is removed from the domain. \paragraph{De-allocation} When $p_{i+1}$ is a de-allocation statement of the form $\texttt{free(y)}$, then there exists a predicate $X = x \mapsto [d,n]$ such that $X \in \Sigma_{i}$ and $\Pi_i \vdash x = y$.
Thus, $$w_i = \Pi \land (x = y) : x \mapsto [d,n] * \Sigma,$$ where $w_{i+1} = \Pi : \Sigma$. $f_i$ is the same as $f_{i+1}$, with the difference that $X$ maps to the predicate $x \mapsto [d,n]$ in $f_i$. \paragraph{Next-field assignment} When $p_{i+1}$ is of the form $\texttt{z := y->n}$, let $X = x \mapsto [d,n]$ be a points-to predicate such that $X \in \Sigma_{i+1}$ and $\Pi_{i+1} \vdash x = y$. Then, $$w_i = \Pi[n/z] \land (x = y):$$ $$(\Sigma[n/z] - f_{i+1}(X)) * x \mapsto [d,n] * \textsf{sub}(f_{i+1}(X), x \mapsto [d,n])[n/z].$$ Similarly, if $p_{i+1}$ is of the form $\texttt{y->n := z}$, then $$w_i = \Pi[z/n] \land (x = y):$$ $$(\Sigma[z/n] - f_{i+1}(X)) * x \mapsto [d,n] * \textsf{sub}(f_{i+1}(X), x \mapsto [d,n])[z/n].$$ $f_i$ is set as described for data-field assignment statements. \paragraph{Assumptions} When $p_{i+1}$ is of the form $\texttt{assume(X)}$, where $\texttt{X}$ is $\texttt{x = y}$ or $\texttt{x != y}$, then $$w_i = (X \Rightarrow \Pi):\Sigma,$$ where $w_{i+1} = \Pi:\Sigma$. Note that the implication introduces a disjunction here. \subsection{Transforming $w_i$ (list introduction)} In the previous rules, we only exposed the heaplets required for ensuring memory safety. In order to get a proof $W$ that is more likely to be inductive, we need to be more aggressive: not only introducing heaplets, but also folding sequences of heaplets into lists. The following transformation rules are used to introduce list segments in some $w_i$. Note that given a $w_i$, applying the following rules does not necessarily produce a new $w_i'$ that is weaker than $w_i$. The only guarantee is that the resulting $w_i'$ satisfies $\{w_i'\}p_{i+1}\{w_{i+1}\}$. The following assumes $w_i$ is of the form $\Pi:\Sigma$. \paragraph{Carving a list out of $\textsf{true}$} Let $x\mapsto[d,n]$ be a predicate in $w_i$ produced by the aforementioned rules.
Let the set of predicates $$S = \{e_0 \mapsto[e_1,\_], e_1' \mapsto[e_2,\_], e_2' \mapsto [e_3,\_], \ldots, e_{n-1}' \mapsto [e_n,\_]\} \subseteq \Sigma_i$$ such that for all $1 \leq j \leq n-1$, $\Pi_{i}:\Sigma_i \vdash e_j = e_j'$, and $\Pi_i:\Sigma_i \vdash x = e_0$ and $e_n$ does not equal any of the other $e_i$ or $e'_i$ variables (i.e., the sequence of predicates forms an acyclic list). Assuming that all predicates $e_i' \mapsto [e_{i+1},\_]$ map to $\textsf{true}$ in $f_i$, then $$w_i' = \Pi \land x \neq e' : (\Sigma - x\mapsto[d,n]) * ls(x,e'),$$ where $e'$ is a fresh existentially quantified variable. \emph{Note: If $\Pi_i:\Sigma_i \vdash e_n = null$, then we can replace $e'$ with $null$ in $w_i'$, which is often desirable when dealing with null-terminated lists.} \zak{Making the decision to always introduce an existentially quantified variable as a list endpoint is probably OK - I think it just shifts the burden of choosing how to instantiate that quantifier to later (e.g., we can probably delay the choice to the cutpoint, and we will have collected a bunch of disequalities in the antecedent of $\Sigma$ by passing through the assumptions between the cutpoint and the error location).} \paragraph{Carving lists through other lists} Now assume the above predicates in $S$ are not all mapped to $\textsf{true}$ in $f_i$, but some of them map to list segments in $w_i$. If we make the additional assumption that for all $X \in S$, if $f_i(X) = ls(e',y)$, then $f_i^{-1}(ls(e',y)) \subset S$, no program variable $v$ aliases $e'$ in $w_i$, and $ls(e',y)$ could be empty in $w_{i+1}$, then $$w_i' = \Pi \land x \neq e' : (\Sigma - lists) * ls(x,e'),$$ where $lists = \{f_i(X) \mid X \in S \text{ and } f_i(X) \text{ is a list segment}\}$. As before, $e'$ can be replaced by $null$ or a program variable $v$ that aliases $e_n$. Of course, after these transformations, all predicates in the set $S$ now map to the newly introduced list segment $ls(x,e')$ in $f_i$.
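The chain detection underlying this transformation can be sketched as follows (our own encoding; the real rule also consults the map $f_i$ and the pure part $\Pi_i$, which we elide here): starting from $x$, follow \texttt{N} fields to collect the maximal acyclic chain of points-to facts, and summarize it as a single list segment whose endpoint plays the role of $e'$.

```python
# Sketch of "carving a list": fold a chain of points-to predicates reachable
# from x into one ls(x, e') segment. spatial: dict var -> (data, next).

def carve_list(spatial, x):
    """Returns (remaining, segment): the predicates not consumed by the fold,
    and the list segment summarising the chain (None if x has no cell)."""
    chain, cur, seen = [], x, set()
    while cur in spatial and cur not in seen:   # stop at a cycle or a dangling end
        seen.add(cur)
        chain.append(cur)
        cur = spatial[cur][1]                   # follow the N field
    if not chain:
        return spatial, None
    remaining = {v: pn for v, pn in spatial.items() if v not in seen}
    return remaining, ("ls", x, cur)            # cur plays the role of e'

heap = {"x": ("d0", "a"), "a": ("d1", "b"), "b": ("d2", "null")}
rest, seg = carve_list(heap, "x")
print(rest, seg)    # {} ('ls', 'x', 'null')
```

When the chain ends in \texttt{null}, the endpoint can be fixed to \texttt{null} directly, matching the note above about null-terminated lists; otherwise a fresh existential endpoint would be introduced.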
\section{Formalism} \subsection{Syntax} \begin{align*} x,y \in \textsf{HVar}&& \text{Heap variables} \\ a,b \in \textsf{AVar}&& \text{Arithmetic variables} \\ X \subseteq \textsf{Var}&::= x | a\\ E,F \in \textsf{HTerm}&::= \textsf{null} \mid x\\ t,u \in \textsf{ATerm}&::= k \mid a \mid t + t \mid t - t \mid t \cdot t\\ \Pi \in \textsf{Pure}&::= \textsf{true} \mid E = E \mid E \neq E \mid t \leq t \mid \Pi \land \Pi\\ \Sigma \in \textsf{Spatial}&::= \textsf{true} \mid \textsf{emp} \mid E \mapsto [t,E] \mid \textsf{ls}(\Pi,E,E) \mid \Sigma * \Sigma\\ P \in \textsf{Formula}&::= (\exists X)(\Pi : \Sigma) \end{align*} \subsection{Semantics} We use $+$ to denote disjoint union of sets and $\oplus$ to denote disjoint union of functions. \begin{align*} \textsf{Var} &= \textsf{HVar} + \textsf{AVar}\\ \textsf{Val} &= \textsf{Loc} + \mathbb{Z}\\ \textsf{Stack} &= \textsf{Var} \rightarrow \textsf{Val}\\ \textsf{Heap} &= \textsf{Loc} \rightharpoonup_{\textsf{fin}} \mathbb{Z} \times \textsf{Loc}\\ \textsf{State} &= \textsf{Stack} \times \textsf{Heap} \end{align*} \begin{align*} s,h \models E = F &\iff \sem{E}(s) = \sem{F}(s)\\ s,h \models E \neq F &\iff \sem{E}(s) \neq \sem{F}(s)\\ s,h \models t \leq u &\iff \sem{t}(s) \leq \sem{u}(s)\\ s,h \models \phi \land \psi &\iff (s,h \models \phi) \land (s,h \models \psi)\\ s,h \models \textsf{emp} &\iff \text{dom}(h) = \emptyset\\ s,h \models E \mapsto [t, F] & \iff \text{dom}(h) = \{\sem{E}(s)\} \land h(\sem{E}(s)) = \tuple{\sem{t}(s), \sem{F}(s)}\\ s,h \models \textsf{ls}(\phi,E,F) & \iff (s,h \models E = F \land \textsf{emp}) \\&\hspace*{1cm}\lor (\exists k,E'. s,h \models E \neq F \land \phi[k/\nu] : E \mapsto [k,E'] * \textsf{ls}(\phi,E',F))\\ s,h \models \Sigma * \Sigma' &\iff \exists h_0,h_1.
h_0 \oplus h_1 = h \land (s,h_0 \models \Sigma) \land (s,h_1 \models \Sigma')\\ s,h \models \Pi : \Sigma &\iff (s,h \models \Pi) \land (s,h \models \Sigma)\\ s,h \models (\exists X)(P) &\iff \exists \overline{s} : X \rightarrow \textsf{Val} \text{ such that } s\oplus \overline{s},h \models P\\ \end{align*} \subsection{Predicate transformer} \begin{definition}[Witness] \label{def:witness} Let $\Sigma = \Sigma_1 * \dotsi * \Sigma_N$ be a spatial formula such that each $\Sigma_i$ is an atom, and let $s \in \textsf{Stack}$ and $h \in \textsf{Heap}$. A \emph{witness} for the state $\tuple{s,h}$ and the formula $(\exists X)(\Pi : \Sigma)$ is a pair $\omega = \tuple{\rho, \overline{s}}$ consisting of a map $\rho : \text{dom}(h) \rightarrow [1,N]$ and a stack $\overline{s} : X \rightarrow \textsf{Val}$ such that \begin{enumerate} \item $s \oplus \overline{s},h \models \Pi$ \item For all $i \in [1,N]$, $s \oplus \overline{s},h_i \models \Sigma_i$\\ where $h_i = h|_{\{ x \in \text{dom}(h) : \rho(x) = i\}}$. \end{enumerate} If $\omega$ is a witness for $\tuple{s,h}$ and $(\exists X)(\Pi : \Sigma)$, we write \[s,h \models_\omega (\exists X)(\Pi : \Sigma)\] \end{definition} \begin{lemma}[Partition] Let $s,h$ be a state and $(\exists X)(\Pi:\Sigma)$ be a formula. Then $s,h \models (\exists X)(\Pi : \Sigma)$ iff there exists a witness $\omega = \tuple{\rho, \overline{s}}$ such that \[s,h \models_\omega (\exists X)(\Pi : \Sigma)\] \end{lemma} Let $Q = \Pi : \Sigma_1 * \dotsi * \Sigma_N$, $c \in \textsf{Cmd}$, $s,s'$ be stores, $h,h'$ be heaps such that $\tuple{s,h}\sem{c}\tuple{s',h'}$, and $\omega' = \tuple{\rho', \overline{s}'}$ be such that $s',h' \models_{\omega'} Q$.
We define the \emph{precondition} of $Q$ along $c$ as follows: \begin{itemize} \item Case: $c$ is \texttt{x := y->next} \begin{itemize} \item Case: $\Sigma_{\rho'(\sem{\texttt{y}}(s'))} = \texttt{z} \mapsto [d,n]$ \[\textsf{pre}(c,Q,\omega',s,h,s',h') = (\exists X)(\texttt{y} = \texttt{z} \land \Pi[\texttt{n}/\texttt{x}]: \Sigma_1' * \dotsi * \Sigma_N') \] where for each $i$ \[\Sigma_i' = \Sigma_i[\texttt{n}/\texttt{x}] \] We define $\omega = \omega'$. \item Case: $\Sigma_{\rho'(\sem{\texttt{y}}(s'))} = \textsf{ls}(\phi,E,F)$ Let $\tuple{d, n} = h(s(\texttt{y}))$, and let $\texttt{d},\texttt{n}$ be fresh variable symbols. \[\textsf{pre}(c,Q,\omega',s,h,s',h') = (\exists X \cup \{\texttt{d},\texttt{n}\})(\Pi[\texttt{n}/\texttt{x}]: \Sigma_1' * \dotsi * \Sigma_N') \] where for each $i$ \[\Sigma_i' = \begin{cases} (\textsf{ls}(\phi,E,\texttt{y}) * \texttt{y} \mapsto [\texttt{d},\texttt{n}] * \textsf{ls}(\phi,\texttt{n},F))[\texttt{n}/\texttt{x}] & \text{if } i=\rho'(\sem{\texttt{y}}(s'))\\ \Sigma_i[\texttt{n}/\texttt{x}] & \text{otherwise} \end{cases}\] We define $\omega = \tuple{\rho, \overline{s}}$ by \[ \rho(\ell) = \begin{cases} \rho'(\ell) & \text{if } \rho'(\ell) < \rho'(\sem{\texttt{y}}(s'))\\ \rho'(\sem{\texttt{y}}(s')) & \text{if } \rho'(\ell) = \rho'(\sem{\texttt{y}}(s')) \land \ell \in Between(s',h',\sem{E}(s'),\sem{\texttt{y}}(s')) \\ \rho'(\sem{\texttt{y}}(s')) + 1 & \text{if } \ell = \sem{\texttt{y}}(s')\\ \rho'(\ell) + 2 & \text{otherwise} \end{cases}\] \[ \overline{s} = \overline{s}'[\texttt{d} \gets d, \texttt{n} \gets n] \] \item Case: $\Sigma_{\rho'(\sem{\texttt{y}}(s'))} = \textsf{true}$ Let $\tuple{d, n} = h(s(\texttt{y}))$, and let $\texttt{d},\texttt{n}$ be fresh variable symbols.
\[\textsf{pre}(c,Q,\omega',s,h,s',h') = (\exists X \cup \{\texttt{d},\texttt{n}\})(\Pi[\texttt{n}/\texttt{x}] : \Sigma_1' * \dotsi * \Sigma_N') \] where for each $i$ \[\Sigma_i' = \begin{cases} \texttt{y} \mapsto [\texttt{d},\texttt{n}] * \textsf{true} & \text{if } i=\rho'(\sem{\texttt{y}}(s'))\\ \Sigma_i[\texttt{n}/\texttt{x}] & \text{otherwise} \end{cases}\] We define $\omega = \tuple{\rho, \overline{s}}$ by \[ \rho(\ell) = \begin{cases} \rho'(\ell) & \text{if } \rho'(\ell) < \rho'(\sem{\texttt{y}}(s'))\\ \rho'(\sem{\texttt{y}}(s')) & \text{if } \ell = \sem{\texttt{y}}(s')\\ \rho'(\sem{\texttt{y}}(s')) & \text{if } \ell \neq \sem{\texttt{y}}(s') \land \rho'(\ell) = \rho'(\sem{\texttt{y}}(s'))\\ \rho'(\ell) + 1 & \text{otherwise} \end{cases}\] \[ \overline{s} = \overline{s}'[\texttt{d} \gets d, \texttt{n} \gets n] \] \end{itemize} \item ... \end{itemize} \begin{lemma} Let $Q = \Pi : \Sigma_1 * \dotsi * \Sigma_N$, $c \in \textsf{Cmd}$, $s,s'$ be stores, $h,h'$ be heaps such that $\tuple{s,h} \sem{c} \tuple{s',h'}$, and $\omega'$ be a witness such that $s',h' \models_{\omega'} Q$. Let $\omega,P$ be such that $\textsf{pre}(c,Q,\omega',s,h,s',h') = \tuple{\omega,P}$. Then the following hold: \begin{enumerate} \item $s,h \models_\omega P$ \item $\hoare{P}{c}{Q}$ \end{enumerate} \end{lemma} \begin{proposition} Let $\tau = \tuple{s_0,h_0}\texttt{c}_0\tuple{s_1,h_1}\texttt{c}_1 \dotsi \tuple{s_n,h_n}$ be a program path, and let $\Pi,\Sigma,\omega$ be such that $s_n,h_n \models_\omega \Pi:\Sigma$. Define a sequence of predicates $\{P_i\}$ by \begin{itemize} \item $P_n = \Pi:\Sigma$, $\omega_n = \omega$ \item $\tuple{P_i,\omega_i} = \textsf{pre}(\texttt{c}_{i},P_{i+1},s_i,h_i,s_{i+1},h_{i+1},\omega_{i+1})$. \end{itemize} Then $\{ P_0 \} \texttt{c}_0 \{ P_1 \} \dotsi \{ P_{n-1} \} \texttt{c}_{n-1} \{ P_n \}$ is a valid Hoare proof.
\end{proposition} \subsection{Proof system} \begin{figure*} \begin{mathpar} \inferrule[Arith-Weak]{ \Pi \vDash \Pi' }{ \Pi : \Sigma \vdash \Pi' : \Sigma } \inferrule[Refine-Weak]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,y) \\ \Pi' \land \phi \vDash \psi }{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\psi,x,y) } \inferrule[Rearrangement]{ \Pi : \Sigma \vdash \Pi' : \Sigma_0 * \Sigma_1}{ \Pi : \Sigma \vdash \Pi' : \Sigma_1 * \Sigma_0 } \inferrule[Fold/Base]{ \Pi : \Sigma \vdash \Pi' : \Sigma'}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,x) } \inferrule[Fold/Rec]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,y] * \textsf{ls}(\phi,y,z) \\ \Pi' \land \nu = d \vDash \phi}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,z) } \inferrule[Fold/Seg-Null]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,y) * \textsf{ls}(\phi,y,\textsf{null})}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,\textsf{null}) } \inferrule[Fold/Seg-Pt]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,y) * \textsf{ls}(\phi,y,z) * z \mapsto [d,n]}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,z) * z \mapsto [d,n] } \inferrule*[lab=Unfold/Rec,right={\rm $d,n$ fresh}]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,y) \\ \Pi' \vDash x \neq y }{ \Pi : \Sigma \vdash \Pi' \land \phi[d/\nu] : \Sigma' * x \mapsto [d,n] * \textsf{ls}(\phi,n,y) } \inferrule[Unfold/Base]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,y) \\ \Pi' \vDash x = y}{ \Pi : \Sigma \vdash \Pi' : \Sigma' } \inferrule*[lab=Drop/Pt,right={\rm $d$ not free in $\Pi':\Sigma'$}]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n]}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{true} } \inferrule[Drop/Ls]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(\phi,x,y)}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{true} } \end{mathpar} \caption{Entailment rules} \end{figure*} \begin{figure*} \begin{mathpar} \inferrule[Assign]{\Pi : \Sigma \vdash
\Pi' : \Sigma'}{\hoare{\Pi : \Sigma}{x := E}{\Pi'[x'/x] \land x = E[x'/x] : \Sigma'[x'/x]}} \inferrule[Assume]{\Pi : \Sigma \vdash \Pi' : \Sigma'}{\hoare{\Pi : \Sigma}{assume($\phi$)}{\Pi' \land \phi : \Sigma'}} \inferrule[Arith-Store]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n]}{\hoare{\Pi : \Sigma}{x->D := E}{\Pi'[d'/d] \land d=E[d'/d] : \Sigma'[d'/d]}} \inferrule[Arith-Load]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n]}{\hoare{\Pi : \Sigma}{y := x->D}{\Pi'[y'/y] \land y = d : \Sigma'[y'/y]}} \inferrule[Heap-Store]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n]}{\hoare{\Pi : \Sigma}{x->N := E}{\Pi'[n'/n] \land n=E[n'/n] : \Sigma'[n'/n]}} \inferrule[Heap-Load]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n]}{\hoare{\Pi : \Sigma}{y := x->N}{\Pi'[y'/y] \land y = n : \Sigma'[y'/y]}} \inferrule*[lab=Alloc,right={\rm $d,n$ fresh}]{\Pi : \Sigma \vdash \Pi' : \Sigma'}{\hoare{\Pi : \Sigma}{x := new list}{\Pi'[x'/x] : \Sigma'[x'/x] * x \mapsto [d,n]}} \inferrule[Free]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n]}{\hoare{\Pi : \Sigma}{free(x)}{\Pi' : \Sigma'}} \end{mathpar} \caption{Execution rules} \end{figure*} \newpage \begin{figure*} \begin{mathpar} \inferrule[Arith-Weak]{\Pi \vDash \Pi'}{ \Pi \land R(\vec{x}) : \Sigma \vdash \Pi' \land R'(\vec{x}) : \Sigma \triangleright R'(\vec{x}) \leftarrow R(\vec{x}) } \inferrule[Rearrangement]{ \Pi : \Sigma \vdash \Pi' : \Sigma_0 * \Sigma_1 \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma_1 * \Sigma_0 \triangleright \mathcal{C} } \inferrule[Fold/Base]{ \Pi : \Sigma \vdash \Pi' : \Sigma' \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R,x,x) \triangleright \mathcal{C} } \inferrule[Fold/Rec]{ \Pi : \Sigma \vdash \Pi' \land P(\vec{x}) : \Sigma' * x \mapsto [d,y] * \textsf{ls}(R,y,z) \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R,x,z) \triangleright \mathcal{C}; R(\nu, \vec{x}) \leftarrow P(\vec{x})
\land \nu = d } \inferrule[Fold/Seg-Null]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R_0,x,y) * \textsf{ls}(R_1,y,\textsf{null}) \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R,x,\textsf{null})\triangleright \mathcal{C}; R(\nu,\vec{x}) \gets R_0(\nu,\vec{x}) \lor R_1(\nu,\vec{x})} \inferrule[Fold/Seg-Pt]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R_0,x,y) * \textsf{ls}(R_1,y,z) * z \mapsto [d,n] \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R,x,z) * z \mapsto [d,n]\triangleright \mathcal{C}; R(\nu,\vec{x}) \gets R_0(\nu,\vec{x}) \lor R_1(\nu,\vec{x}) } \inferrule*[lab=Unfold/Rec,right={\rm $d,n$ fresh}]{ \Pi : \Sigma \vdash \Pi' \land R_p(\vec{x}): \Sigma' * \textsf{ls}(R_{ls},x,y) \triangleright \mathcal{C} \\ \Pi' \vDash x \neq y }{ \Pi : \Sigma \vdash \Pi' \land R_p'(\vec{x},d) : \Sigma' * x \mapsto [d,n] * \textsf{ls}(R_{ls},n,y)\triangleright \mathcal{C}; R_p'(\vec{x},d) \leftarrow R_p(\vec{x}) \land R_{ls}(d,\vec{x}) } \inferrule[Unfold/Base]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R,x,y) \triangleright \mathcal{C} \\ \Pi' \vDash x = y}{ \Pi : \Sigma \vdash \Pi' : \Sigma' \triangleright \mathcal{C} } \inferrule*[lab=Drop/Pt,right={\rm $d$ not free in $\Pi':\Sigma'$}]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n] \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{true} \triangleright \mathcal{C}} \inferrule[Drop/Ls]{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{ls}(R,x,y) \triangleright \mathcal{C}}{ \Pi : \Sigma \vdash \Pi' : \Sigma' * \textsf{true} \triangleright \mathcal{C} } \end{mathpar} \caption{Constraint generation for entailment rules} \end{figure*} \begin{figure*} \begin{mathpar} \inferrule[Assign]{\Pi : \Sigma \vdash \Pi' \land R(\vec{x}) : \Sigma' \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{x := E}{\Pi' : \Sigma'[x'/x]} \triangleright \mathcal{C}; R'(\vec{x}[x'/x]) \leftarrow R(\vec{x}) \land x' = E} \inferrule[Assume]{\Pi : \Sigma
\vdash \Pi' \land R(\vec{x}) : \Sigma' \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{assume($\phi$)}{\Pi' \land R'(\vec{x}) : \Sigma'} \triangleright \mathcal{C} ; R'(\vec{x}) \leftarrow R(\vec{x}) \land \phi} \inferrule[Arith-Store]{\Pi : \Sigma \vdash \Pi' \land R(\vec{x}) : \Sigma' * x \mapsto [d,n]\triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{x->D := E}{\Pi' \land R'(\vec{x}) : \Sigma'[d'/d]} \triangleright \mathcal{C}; R'(\vec{x}[d'/d]) \leftarrow R(\vec{x}) \land d' = E} \inferrule[Arith-Load]{\Pi : \Sigma \vdash \Pi' \land R(\vec{x}) : \Sigma' * x \mapsto [d,n] \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{y := x->D}{\Pi' \land R'(\vec{x}) : \Sigma'[y'/y]} \triangleright \mathcal{C}; R'(\vec{x}[d/y]) \leftarrow R(\vec{x})} \inferrule[Heap-Store]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n] \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{x->N := E}{\Pi'[n'/n] \land n=E[n'/n] : \Sigma'[n'/n]} \triangleright \mathcal{C}} \inferrule[Heap-Load]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n] \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{y := x->N}{\Pi'[y'/y] \land y = n : \Sigma'[y'/y]} \triangleright \mathcal{C}} \inferrule*[lab=Alloc,right={\rm $d,n$ fresh}]{\Pi : \Sigma \vdash \Pi' : \Sigma' \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{x := new list}{\Pi'[x'/x] : \Sigma'[x'/x] * x \mapsto [d,n]} \triangleright \mathcal{C}} \inferrule[Free]{\Pi : \Sigma \vdash \Pi' : \Sigma' * x \mapsto [d,n] \triangleright \mathcal{C}}{\hoare{\Pi : \Sigma}{free(x)}{\Pi' : \Sigma'} \triangleright \mathcal{C}} \end{mathpar} \caption{Constraint generation for execution rules} \end{figure*} \section{Characterization of the precondition operation} \begin{definition} Let $(s,h)$ be a state. The \emph{underlying graph} of $(s,h)$ is a triple $G_{s,h} = (V,E,\lambda)$ where $V = \text{dom}(h) \cup \{ \ell' : \exists \ell \in \text{dom}(h). \exists d \in \mathbb{Z}.
h(\ell) = [d,\ell']\}$ is a set of vertices, $E \subseteq V \times V$ is a set of edges defined by \[ (v,v') \in E \text{ iff there exists some } d \text{ such that } h(v) = [d,v'] \] and $\lambda : \textsf{HVar} \rightarrow V$ is defined by \[ \lambda(x) = s(x) \] \end{definition} \begin{definition} Let $(\exists X)(\Pi : \Sigma)$ be a separation logic formula with $\Sigma = \Sigma_1 * \dotsb * \Sigma_n$, and such that each $\Sigma_i$ is a points-to formula. The \emph{underlying graph} of $(\exists X)(\Pi : \Sigma)$ is a triple $G_{(\exists X)(\Pi : \Sigma)} = (V,E,\lambda)$ where the set of vertices $V$ is the set of variables appearing to the left or right of a points-to predicate (quotiented by the equivalence induced by the equalities in $\Pi$), and $E \subseteq V \times V$ is a set of edges defined by \[ (v,v') \in E \text{ iff there exists some } i,d \text{ such that } \Sigma_i = v \mapsto [d,v'] \] and $\lambda : \textsf{HVar} \rightarrow V$ is defined by setting $\lambda(x) = y$, where $y$ is the representative of the equivalence class of $x$. \end{definition} \begin{definition}[Subdivision] Let $G = (V,E,\lambda)$. For an edge $(v,v') \in E$, the \emph{$(v,v')$-subdivision} of $G$ is $G^{(v,v')} = (V \cup \{u\}, (E \setminus \{(v,v')\}) \cup \{(v,u), (u,v')\}, \lambda)$. A graph $G'$ is a \emph{subdivision} of $G$ if it is the result of a sequence of edge subdivisions. \end{definition} \begin{definition}[Homeomorphism] Let $G = (V,E,\lambda)$ and $G' = (V',E',\lambda')$. $G$ and $G'$ are \emph{homeomorphic} if there exists a subdivision $\overline{G}$ of $G$ and $\overline{G'}$ of $G'$ such that $\overline{G}$ and $\overline{G'}$ are isomorphic. \end{definition} \begin{definition}[Topological entailment] Let $S$ be a separation logic formula without list-segment predicates, and let $P$, $P'$ be (arbitrary) separation logic formulae.
We write $P \vDash_S P'$ if for all $(s,h)$ such that $(s,h) \models P$ and the underlying graph of $(s,h)$ is homeomorphic to the underlying graph of $S$, we have $(s,h) \models P'$. \end{definition} \begin{proposition} Let $P$, $P'$ be separation logic formulae. If $P \vDash P'$, then $P \vDash_S P'$ for any $S$. \end{proposition} \begin{conjecture} For any $S,S',I',c$, the precondition rules compute a formula $I$ such that \begin{enumerate} \item $S \models I$ \item $\hoare{I}{c}{I'}$ \end{enumerate} and for any $J$ such that the above two conditions hold, we have \[ J \vDash_S I \] \end{conjecture} \section{Preliminaries} \label{sec:prelims} \subsection{Separation Logic} We define \textsf{RSep}\xspace, a fragment of separation logic formulas featuring points-to predicates and general recursive predicates refined by theory propositions. \begin{figure}[t] \centering \fontsize{8.4}{9}\selectfont $ \begin{array}{lll||lrl} x,y \in \textsf{HVar}& \hspace*{10pt}\text{(Heap variables)} & && E,F \in \textsf{HTerm}&::= \textsf{null} \mid x\\ a,b \in \textsf{DVar}& \hspace*{10pt}\text{(Data variables)} & && \AE &::= A \ \mid E\\ A \in \textsf{DTerm} & \hspace*{10pt} \text{(Data terms)} & && \Pi \in \textsf{Pure}&::= \emph{true} \mid E = E \mid E \neq E \mid\\ \phi \in \textsf{DFormula} & \hspace*{10pt} \text{(Data formulas)} && &&~~~~~\phi \ \mid \Pi \land \Pi\\ Z \in \textsf{RPred}\xspace & \hspace*{10pt} \text{(Rec. predicates)} & && H \in \textsf{Heaplet}&::= \textsf{true} \mid \textsf{emp} \mid E \mapsto [\vec{A},\vec{E}] \mid Z(\vec{\theta},\vec{E})\\ \theta \in \textsf{Refinement} &::= \lambda \vec{a}. \phi & && \Sigma \in \textsf{Spatial}&::= H \mid H * \Sigma\\ X \subseteq \textsf{Var}&::= x \mid a & && P \in \textsf{RSep}\xspace&::= \qs{X}{\Pi}{\Sigma} \end{array} $ \caption{Syntax of \textsf{RSep}\xspace formulas.} \label{fig:logic-syntax} \end{figure} Fig.~\ref{fig:logic-syntax} defines the syntax of \textsf{RSep}\xspace formulas.
In comparison with the standard list fragment used in separation logic analyses (e.g.,~\cite{Berdine2005,Cook2011,Perez2011}), the differentiating features of \textsf{RSep}\xspace are: (1) General \emph{recursive predicates}, for describing unbounded pointer structures like lists, trees, etc. (2) Recursive predicates are augmented with a vector of \emph{refinements}, which are used to constrain the data values appearing on the data structure defined by the predicate, detailed below. (3) Each heap cell (points-to predicate), $E \mapsto [\vec{A},\vec{E}]$, is a \emph{record} consisting of \emph{data} fields (a vector $\vec{A}$ of \textsf{DTerm}\xspace) followed by \emph{heap} fields (a vector $\vec{E}$ of \textsf{HTerm}\xspace). (Notationally, we will use $d_i$ to refer to the $i$th element of the vector $\vec{d}$, and $\vec{d}[t/d_i]$ to refer to the vector $\vec{d}$ with the $i$th element modified to $t$.) (4) \textsf{Pure} formulas contain heap and first-order data constraints. Our definition is (implicitly) parameterized by a first-order theory $\mathcal{T}$. $\textsf{DVar}$ denotes the set of theory variables, which we assume to be disjoint from $\textsf{HVar}$ (the set of heap variables). $\textsf{DTerm}$ and $\textsf{DFormula}$ denote the sets of theory terms and formulas, and we assume that heap variables do not appear in theory terms. For an \textsf{RSep}\xspace formula $P$, $\textsf{Var}(P)$ denotes its free (data and heap) variables. We treat a $\textsf{Spatial}$ formula $\Sigma$ as a multiset of heaplets, and consider formulas to be equal when they are equal as multisets. 
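The multiset treatment of spatial formulas can be made concrete with a small sketch. This is not part of the paper's development; it is a minimal Python illustration (with a hypothetical tuple encoding of heaplets) of why treating $\Sigma$ as a multiset makes $*$ commutative and associative by construction, while still distinguishing repeated heaplets:

```python
from collections import Counter

def spatial(*heaplets):
    """A Spatial formula as a multiset (Counter) of hashable heaplets."""
    return Counter(heaplets)

def star(sigma1, sigma2):
    """Separating conjunction *: multiset union of the heaplets."""
    return sigma1 + sigma2

# Hypothetical encoding: x |-> [d, n] as ('pt', 'x', ('d',), ('n',)),
# a recursive predicate Z(theta, n, null) as ('Z', 'theta', 'n', 'null').
pt = ('pt', 'x', ('d',), ('n',))
z = ('Z', 'theta', 'n', 'null')

a = star(spatial(pt), spatial(z))
b = star(spatial(z), spatial(pt))
assert a == b                          # * is commutative under multiset equality
assert spatial(pt, pt) != spatial(pt)  # multiset, not set: duplicate heaplets are kept
```

The last assertion shows why a multiset (rather than a set) is the right notion: a formula with two copies of the same points-to heaplet is a different (indeed unsatisfiable) formula from one with a single copy.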
For \textsf{RSep}\xspace formulas $P = \qs{X_P}{\Pi_P}{\Sigma_P}$ and $Q = \qs{X_Q}{\Pi_Q}{\Sigma_Q}$, we write $P*Q$ to denote the \textsf{RSep}\xspace formula \shortenpar \[ P * Q = \qs{X_P \cup X_Q}{\Pi_P \land \Pi_Q}{\Sigma_P * \Sigma_Q} \] assuming that $X_P$ is disjoint from $\textsf{Var}(Q)$ and $X_Q$ is disjoint from $\textsf{Var}(P)$ (if not, then $X_P$ and $X_Q$ are first suitably renamed). For a set of variables $X$, we write $\qex{X}{P}$ to denote the \textsf{RSep}\xspace formula \[ \qex{X}{P} = \qs{X \cup X_P}{\Pi_P}{\Sigma_P} \] \paragraph{Recursive predicates} Each recursive predicate $Z \in \textsf{RPred}\xspace$ is associated with a definition that describes how the predicate is unfolded. Before we formalize these definitions, we will give some examples. \shortenpar The definition of the list segment predicate from Sec.~\ref{sec:ex} is: \[\begin{split} \textsf{ls}(R,x,y) \equiv {}& (x = y : \textsf{emp}) \lor {}\\ & \qs{d,n'} {x \neq y \land R(d)} {x \mapsto [d,n'] * \textsf{ls}(R,n',y)} \end{split}\] In the above, $R$ is a \emph{refinement variable}, which may be instantiated to a concrete refinement $\theta \in \textsf{Refinement}$. For example, $\textsf{ls}(\lda{a . a \geq 0},x,y)$ indicates that there is a list from $x$ to $y$ where every element of the list is at least $0$. A refined binary tree predicate is a more complicated example: \begin{align*} \textsf{bt}(Q,L,R,x) =&\; (x = \textsf{null} : \textsf{emp})\\ \lor &\; \qex{d,l,r}{Q(d) : x \mapsto [d,l,r] \\ &\hspace*{0.5cm} * \textsf{bt}((\lambda a. Q(a) \land L(d,a)), L, R, l) \\ &\hspace*{0.5cm} * \textsf{bt}((\lambda a. 
Q(a) \land R(d,a)), L, R, r)} \end{align*} This predicate has three refinement variables: a unary refinement $Q$ (which must be satisfied by every node in the tree), a binary refinement $L$ (which is a relation that must hold between every node and its descendants to the left), and a binary refinement $R$ (which is a relation that must hold between every node and its descendants to the right). For example, \[ \textsf{bt}((\lambda a. \emph{true}), (\lambda a, b. a \geq b), (\lambda a, b. a \leq b), x) \] indicates that $x$ is the root of a \emph{binary search tree}, and \[ \textsf{bt}((\lambda a. a \geq 0), (\lambda a, b. a \leq b), (\lambda a, b. a \leq b), x) \] indicates that $x$ is the root of a \emph{binary min-heap} with non-negative elements. To formalize these definitions, we first define \emph{refinement terms} and \emph{refined formulas}: a refinement term $\tau$ is either (1) a refinement variable $R$ or (2) an abstraction $(\lambda a_1,\dotsc,a_n. \Phi)$, where $\Phi$ is a refined formula. A \emph{refined formula} is a conjunction where each conjunct is either a data formula (\textsf{DFormula}) or the application $\tau(\vec{A})$ of a refinement term to a vector of data terms (\textsf{DTerm}). A \emph{predicate definition} has the form \[Z(\vec{R}, \vec{x}) \equiv \qex{X_1}{\Pi_1 \land \Phi_1 : \Sigma_1} \lor \dotsb \lor \qex{X_n}{\Pi_n \land \Phi_n : \Sigma_n}\] where $\vec{R}$ is a vector of refinement variables, $\vec{x}$ is a vector of heap variables, and where refinement terms may appear as refinements in the spatial formulas $\Sigma_i$. We refer to the disjuncts of the above formula as the \emph{cases} for $Z$, and define $\mathit{cases}(Z(\vec{R},\vec{x}))$ to be the set of cases of $Z$. 
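The role of the refinement variables in the $\textsf{bt}$ definition can be read off executably. The following is a sketch only (Python, with trees encoded as `None` or nested `(d, left, right)` tuples, an encoding introduced here purely for illustration); wrapping a refinement in a new lambda plays the role of the substitute-and-$\beta$-reduce step of the semantics:

```python
def bt(Q, L, R, t):
    """Check that tree t satisfies bt(Q, L, R, x): each unfolding strengthens
    the unary refinement on the left/right subtrees with L(d, .) / R(d, .)."""
    if t is None:              # case x = null : emp
        return True
    d, left, right = t         # case x |-> [d, l, r] * bt(...) * bt(...)
    return (Q(d)
            and bt(lambda a: Q(a) and L(d, a), L, R, left)
            and bt(lambda a: Q(a) and R(d, a), L, R, right))

# The two instantiations from the text:
bst = lambda t: bt(lambda a: True, lambda a, b: a >= b, lambda a, b: a <= b, t)
min_heap = lambda t: bt(lambda a: a >= 0, lambda a, b: a <= b, lambda a, b: a <= b, t)

assert bst((5, (3, None, None), (8, None, None)))
assert not bst((5, (7, None, None), (8, None, None)))
assert min_heap((1, (2, None, None), (3, None, None)))
```

Note how the binary refinements never apply directly at the root: they are folded into the unary refinement passed down to the subtrees, exactly mirroring the recursive calls in the definition above.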
$\vec{R}$ and $\vec{x}$ are bound in $\mathit{cases}(Z(\vec{R},\vec{x}))$, and we will assume that predicate definitions are closed, that is, for each case of $Z$, the free refinement variables belong to $\vec{R}$, the free heap variables belong to $\vec{x}$, and there are no free data variables. We also assume that they are well-typed in the sense that each refinement term $\tau$ is associated with an arity, and whenever $\tau(\vec{A})$ appears in a definition, the length of $\vec{A}$ is the arity of $\tau$. \shortenpar \paragraph{Semantics} The semantics of our logic, defined by a satisfaction relation $s,h \models Q$, is essentially standard. Each predicate $Z \in \textsf{RPred}\xspace$ is defined to be the least solution\footnote{Our definition does not preclude ill-founded predicates; such predicates are simply unsatisfiable, and do not affect the technical development in the rest of the paper.} to the following equivalence: \[ s,h \models Z(\vec{\theta},\vec{E}) \iff \exists P \in \mathit{cases}(Z(\vec{R},\vec{x})).\ s,h \models P[\vec{\theta}/\vec{R} , \vec{E}/\vec{x} ] \] Note that when substituting a $\lambda$-abstraction for a refinement variable, we implicitly $\beta$-reduce resulting applications. For example, $R(b)[(\lambda a. a \geq 0)/R] = b \geq 0$. Semantic entailment is denoted by $P \models Q$, and provable entailment by $P \vdash Q$. When referring to a proof that $P \vdash Q$, we will mean a sequent calculus proof. \subsection{Programs} A program $\mathcal{P}$ is a tuple $\tuple{V,E,v_{\rm i},v_{\rm e}}$, where \begin{compactitem} \item $V$ is a set of control locations, with a distinguished \emph{entry} node $v_{\rm i} \in V$ and \emph{error} (exit) node $v_{\rm e} \in V$, and \item $E \subseteq V \times V$ is a set of directed edges, where each $e \in E$ is associated with a program command $e^{\rm c}$. 
\end{compactitem} We impose the restriction that all nodes $V \setminus \{v_{\rm i}\}$ are reachable from $v_{\rm i}$ via $E$, and all nodes can reach $v_{\rm e}$. The syntax for program commands appears below. Note that the allocation command creates a record with $n$ data fields, $D_1,\ldots,D_n$, and $m$ heap fields, $N_1,\ldots,N_m$. To access the $i$th data field of a record pointed to by \texttt{x}, we use \texttt{x->D$_i$} (and similarly for heap fields). We assume that programs are well-typed, but not necessarily memory safe. ~\\~\\ \noindent {\small \begin{minipage}{0.3\linewidth} \textbf{Assignment}: \texttt{x := \AE}\\ \textbf{Heap store}: \texttt{x->N$_i$ := E}\\ \textbf{Heap load}: \texttt{y := x->N$_i$} \end{minipage} \hfill \begin{minipage}{0.3\linewidth} \textbf{Assumption}: \texttt{assume($\Pi$)}\\ \textbf{Data store}: \texttt{x->D$_i$ := A}\\ \textbf{Data load}: \texttt{y := x->D$_i$} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \textbf{Allocation}: \texttt{x := new($n,m$)}\\ \textbf{Disposal}: \texttt{free(x)}\\ \end{minipage}} \mbox{}\\\noindent As is standard, we compile assert commands to reachability of $v_{\rm e}$. \section{Related Work} \label{sec:rel} \paragraph{Abstraction Refinement for the Heap} To the best of our knowledge, the work of Botincan et al.~\cite{Botincan13} is the only separation logic shape analysis that employs a form of abstraction refinement. It starts with a family of separation logic domains of increasing precision, and uses spurious counterexample traces (reported by forward fixed-point computation) to pick a more precise domain to restart the analysis and (possibly) eliminate the counterexample. Limitations of this technique include: (1) The precision of the analysis is contingent on the set of abstract domains it is started with. (2) The refinement strategy (in contrast to \textsc{SplInter}\xspace) does not guarantee progress (it may explore the same path repeatedly), and may report false positives. 
On the other hand, given a program path, \textsc{SplInter}\xspace is guaranteed to find a proof for the path or correctly declare it an unsafe execution. (3) Finally, it is unclear whether refinement with a powerful theory like linear arithmetic can be encoded in such a framework, e.g., as a set of domains with increasingly more arithmetic predicates. Podelski and Wies~\cite{Podelski10} propose an abstraction refinement algorithm for a shape-analysis domain with a logic-based view of three-valued shape analysis (specifically, first-order logic plus transitive closure). Spurious counterexamples are used to either refine the set of predicates used in the analysis, or refine an imprecise abstract transformer. The approach is used to verify specifications given by the user as first-order logic formulas. A limitation of the approach is that refinement is syntactic, and if an important recursive predicate (e.g., there is a list from $x$ to $\textsf{null}$) is not explicitly supplied in the specification, it cannot be inferred automatically. Furthermore, abstract post computation can be expensive, as the abstract domain uses quantified predicates. Additionally, the analysis assumes a memory-safe program to start with, whereas, in \textsc{SplInter}\xspace, we construct a memory safety proof as part of the invariant, enabling us to detect unsafe memory operations that lead to undefined program behavior. Beyer et al.~\cite{Beyer06} propose using shape analysis information on demand to augment numerical predicate abstraction. They use shape analysis as a backup analysis when failing to prove a given path safe without tracking the heap, and incrementally refine TVLA's~\cite{Bogudlov07} three-valued shape analysis~\cite{Sagiv99} to track more heap information as required. As with~\cite{Podelski10},~\cite{Beyer06} makes an \emph{a priori} assumption of memory safety and requires an expensive abstract post operator.
Finally, Manevich et al.~\cite{Manevich06} give a theoretical treatment of counterexample-driven refinement in power set (e.g., shape) abstract domains. \paragraph{Combined Shape and Data Analyses} The work of Magill et al.~\cite{Magill2010} infers shape and numerical invariants, and is the most closely related to ours. First, a separation logic analysis is used to construct a memory safety proof of the whole program. This proof is then \emph{instrumented} by adding additional user-defined integer parameters to the recursive predicates appearing in the proof (with corresponding user-defined interpretations). A numerical program is generated from this instrumented proof and checked using an off-the-shelf verification tool, which need not reason about the heap. Our technique and \cite{Magill2010}'s are similar in that we both decorate separation logic proofs with additional information: in \cite{Magill2010}, the extra information is instrumentation variables; in this paper, the extra information is refinement predicates. Neither of these techniques properly subsumes the other, and we believe that they may be profitably combined. An important difference is that we synthesize data refinements automatically from program paths, whereas \cite{Magill2010} uses a fixed (though user-definable) abstraction. A number of papers have proposed abstract domains for shape and data invariants. Chang and Rival~\cite{Chang2008} propose a separation logic--based abstract domain that is parameterized by programmer-supplied \emph{invariant checkers} (recursive predicates) and a data domain for reasoning about contents of these structures. McCloskey et al.~\cite{DBLP:conf/sas/McCloskeyRS10} also proposed a combination of heap and numeric abstract domains, this time using 3-valued structures for the heap. 
While the approaches to combining shape and data information are significantly different, an advantage of our method is that it does not lose precision due to limitations in the abstract domain, widening, and join. Bouajjani et al.~\cite{Bouajjani10,Bouajjani12} propose an abstract domain for list manipulating programs that is parameterized by a data domain. They show that by varying the data domain, one can infer invariants about list sizes, sum of elements, etc. Quantified data automata (QDA)~\cite{Garg13} have been proposed as an abstract domain for representing list invariants where the data in a list is described by a regular language. In~\cite{Garg13b}, invariants over QDA have been synthesized using language learning techniques from concrete program executions. Expressive logics have also been proposed for reasoning about heap and data~\cite{Qiu13}, but have thus far been only used for invariant checking, not invariant synthesis. A number of decision procedures for combinations of the singly-linked-list fragment of separation logic with SMT theories have recently been proposed~\cite{Piskac13,Navarro13}. \paragraph{Path-based Verification} A number of works proposed path-based algorithms for verification. Our work builds on McMillan's \textsc{Impact}\xspace technique~\cite{McMillan2006} and extends it to heap/data reasoning. Earlier work~\cite{Henzinger04} used interpolants to compute predicates from spurious paths in a CEGAR loop. Beyer et al.~\cite{Beyer07} proposed \emph{path invariants}, where infeasible paths induce program slices that are proved correct, and from which predicates are mined for full program verification. Heizmann et al.~\cite{Heizmann10} presented a technique that uses interpolants to compute path proofs and generalize a path into a visibly push-down language of correct paths. In comparison with \textsc{SplInter}\xspace, all of these techniques are restricted to first-order invariants. 
\shortenpar Our work is similar to that of Itzhaky et al.~\cite{DBLP:conf/cav/ItzhakyBRST14}, in the sense that we both generalize from bounded unrollings of the program to compute ingredients of a proof. However, they compute proofs in a fragment of first-order logic that can only express linked lists and has not yet been extended to combined heap and data properties. \section{Spatial Interpolation Modulo Theories} \label{sec:snint} \begin{figure}[htp] \centering \figsep{Entailment rules} \vspace{-.15in} \begin{mathpar} \inferrule[Star]{ \cjudge{\Pi \land \Phi : \Sigma_0 \vdash \Pi' \land \Phi' : \Sigma_0'} { \mathcal{C}_0} \\ \cjudge{\Pi \land \Phi : \Sigma_1 \vdash \Pi' \land \Phi' : \Sigma_1'} { \mathcal{C}_1 } }{ \cjudge{\Pi \land \Phi : \Sigma_0 * \Sigma_1 \vdash \Pi' \land \Phi' : \Sigma_0' * \Sigma_1'} {\mathcal{C}_0;\mathcal{C}_1} } \inferrule[Points-to]{ \Pi \models \Pi' } { \cjudge{\Pi \land \Phi : E \mapsto [\vec{A},\vec{F}] \vdash \Pi' \land \Phi' : E \mapsto [\vec{A},\vec{F}]} { \Phi' \gets \Phi } } \inferrule*[lab=Fold,right={\rm $P \in \mathit{cases}(Z(\vec{R},\vec{x}))$}]{ \cjudge{\Pi : \Sigma \vdash \Pi' : \Sigma' * P[\vec{\tau}/\vec{R},\vec{E}/\vec{x}]} {\mathcal{C}} }{ \cjudge{\Pi : \Sigma \vdash \Pi' : \Sigma' * Z(\vec{\tau},\vec{E}) } {\mathcal{C}} } \inferrule*[lab=Unfold,right={\rm\begin{minipage}{3cm}$\{P_1,\dotsc,P_n\} =$\\$\mathit{cases}(Z(\vec{R},\vec{x}))$\end{minipage}}]{ \cjudge{\Pi : \Sigma * P_1[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] \vdash \Pi' : \Sigma' } {\mathcal{C}_1}\\ \dotsi\\\\ \cjudge{\Pi : \Sigma * P_n[\vec{\tau}/\vec{R},\vec{E}/\vec{x}] \vdash \Pi' : \Sigma'} {\mathcal{C}_n} }{ \cjudge{\Pi : \Sigma * Z(\vec{\tau},\vec{E}) \vdash \Pi' : \Sigma'} {\mathcal{C}_1; \dotsc{;}\, \mathcal{C}_n} } \inferrule*[lab=Predicate,right={\rm\begin{minipage}{3cm} Where $\tau_i = \lda{\vec{a}_i. \Psi_i}$\\ and $\tau_i' = \lda{\vec{a}_i. 
\Psi_i'}$ \end{minipage}}]{ \Pi \models \Pi' \\ }{ \Phi' \gets \Phi; \Psi_1' \gets \Psi_1 \land \Phi; \dotsc{;}\, \Psi_{|\vec{\tau}|}' \gets \Psi_{|\vec{\tau}|} \land \Phi \triangleright\\ \Pi \land \Phi : Z(\vec{\tau},\vec{E}) \vdash \Pi' \land \Phi' : Z(\vec{\tau'},\vec{E}) } \end{mathpar} \vspace{-.05in} \figsep{Execution rules} \vspace{-.15in} \begin{mathpar} \inferrule[Data-Assume]{ \cjudge{P \land \phi \vdash Q} {\mathcal{C}} }{ \cjudge{\hoare{P} {assume($\phi$)} {Q}} {\mathcal{C}} } \inferrule[Free]{ \cjudge{P \vdash \Pi \land \Phi : \Sigma * x \mapsto [\vec{A},\vec{E}]} {\mathcal{C}} }{ \cjudge{\hoare{P}{free(x)}{\Pi \land \Phi : \Sigma}} {\mathcal{C}} } \inferrule[Sequence]{ \cjudge{\hoare{P} {$\pi_0$} {\widehat{O}}} {\mathcal{C}_0}\\ \cjudge{\hoare{\widehat{O}} {$\pi_1$} {Q}} {\mathcal{C}_1} }{ \cjudge{\hoare{P} {$\pi_0;\pi_1$} {Q}} {\mathcal{C}_0;\mathcal{C}_1} } \inferrule[Data-Load]{ \cjudge{P \vdash \qex{X}{\Pi \land \widehat{\Phi} : \widehat{\Sigma} * x \mapsto [\vec{A},\vec{E}]}} {\mathcal{C}_0}\\\\ \cjudge{\qex{X,a'}{\Pi[a'/a] \land \widehat{\Phi}[a'/a] \land a = A_i[a'/a] \colon (\widehat{\Sigma} * x \mapsto [\vec{A},\vec{E}])[a'/a]} \vdash Q} {\mathcal{C}_1} }{ \cjudge{\hoare{P} {a := x->D$_i$} {Q}} {\mathcal{C}_0; \mathcal{C}_1 } } \inferrule[Data-Assign]{ \cjudge{\qex{a'}{\Pi \land \Phi[a'/a] \land a = A[a'/a] : \Sigma[a'/a] \vdash Q}} {\mathcal{C}} }{ \cjudge{\hoare{\Pi \land \Phi \colon \Sigma}{a := A}{Q}} { \mathcal{C} } } \inferrule[Data-Store]{ \cjudge{P \vdash \qex{X}{\Pi \land \widehat{\Phi} : \widehat{\Sigma} * x \mapsto [\vec{A},\vec{E}]}} {\mathcal{C}_0}\\\\ \cjudge{\qex{X,a'}{\Pi \land \widehat{\Phi} \land a' = A : \widehat{\Sigma} * x \mapsto [\vec{A}[a'/A_i],\vec{E}]} \vdash Q} {\mathcal{C}_1} }{ \cjudge{\hoare{P}{x->D$_i$ := A}{Q}} {\mathcal{C}_0; \mathcal{C}_1} } \inferrule[Alloc]{ \cjudge{\qex{x',\vec{a},\vec{x}}{\Pi[x'/x] \land \Phi : \Sigma[x'/x] * x \mapsto [\vec{a},\vec{x}]} \vdash Q} {\mathcal{C}} }{ \cjudge{\hoare{\Pi 
\land \Phi \colon \Sigma} {x := new($n$,$m$)} {Q}} {\mathcal{C}} \vspace{-19pt} } \end{mathpar} \caption{Constraint generation.} \label{fig:constraints} \end{figure} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{figure} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{4pt} \small \scalebox{0.8}{ \begin{minipage}[t]{5.5cm} \textbf{Refined memory safety proof $\zeta'$ }\\ $\color{dgray} \{R_0(i):\textsf{true}\}$\\ \texttt{i = nondet(); x = null}\\ $\color{dgray} \{R_1(i):\textsf{ls}(\lda{\nu. R_{\textsf{ls} 1}(\nu,i)},x,\textsf{null}) * \textsf{true}\}$\\ \texttt{assume(i != 0); \ldots ; i-\!-; }\\ $\color{dgray} \{R_2(i):\textsf{ls}(\lda{\nu. R_{\textsf{ls} 2}(\nu,i)},x,\textsf{null}) * \textsf{true}\}$\\ \texttt{assume(i == 0)}\\ $\color{dgray} \{R_3(i):\textsf{ls}(\lda{\nu.
R_{\textsf{ls} 3}(\nu,i)},x,\textsf{null}) * \textsf{true}\}$\\ \texttt{assume(x != null)}\\ $\color{dgray} \{\qex{d',y}{R_4(i,d'): x \mapsto [d',y] * \textsf{true}}\}$\\ \end{minipage}} % \scalebox{0.8}{ \begin{minipage}[t]{6.7cm} \textbf{Constraint system $\mathcal{C}$}\\ $R_0(i') \gets \textit{true}$\\ $R_1(i') \gets R_0(i)$\\ $R_2(i') \gets R_1(i) \land i \neq 0 \land i' = i - 1$\\ $R_3(i) \gets R_2(i) \land i = 0$\\ $R_4(i,d') \gets R_3(i) \land R_{\textsf{ls} 3}(d',i)$\\ $R_{\textsf{ls} 2}(\nu,i') \gets R_1(i) \land R_{\textsf{ls} 1}(\nu,i) \land i \neq 0 \land i' = i - 1$\\ $R_{\textsf{ls} 2}(\nu,i') \gets R_1(i) \land \nu = i \land i \neq 0 \land i' = i - 1$\\ $R_{\textsf{ls} 3}(\nu,i) \gets R_2(i) \land R_{\textsf{ls} 2}(\nu,i) \land i = 0$\\ $d' \geq 0 \gets R_4(i,d')$ \end{minipage}} % \scalebox{0.8}{ \begin{minipage}[t]{2.5cm} \textbf{Solution $\sigma$}\\ $R_0(i): \textit{true}$\\ $R_1(i): \textit{true}$\\ $R_2(i): \textit{true}$\\ $R_3(i): \textit{true}$\\ $R_4(i,d'): d' \geq 0$\\ $R_{\textsf{ls} 1}(\nu,i): \nu \geq i$\\ $R_{\textsf{ls} 2}(\nu,i): \nu \geq i$\\ $R_{\textsf{ls} 3}(\nu,i): \nu \geq 0$ \end{minipage}} \caption{Example constraints. \label{fig:ex_refine}} \end{figure} We now consider the problem of \emph{refining} (or \emph{strengthening}) a given separation logic proof of memory safety with information about (non-spatial) data. This refinement procedure results in a proof of a conclusion stronger than can be proved by reasoning about the heap alone. In view of our example from Fig.~\ref{fig:ex}, this section addresses how to derive the third sequence (Spatial Interpolants Modulo Theories) from the second (Spatial Interpolants). The input to our spatial interpolation modulo theories procedure is a path $\pi$, a separation logic (\textsf{Sep}\xspace) proof $\zeta$ of the triple $\hoare{\textit{true}:\textsf{emp}}{$\pi$}{\textit{true}:\textsf{true}}$ (i.e., a memory safety proof for $\pi$), and a postcondition $\phi$.
The goal is to transform $\zeta$ into an \textsf{RSep}\xspace proof of the triple $\hoare{\textit{true}:\textsf{emp}}{$\pi$}{\phi:\textsf{true}}$. The high-level operation of our procedure is as follows. First, we traverse the memory safety proof $\zeta$ and build (1) a corresponding \emph{refined} proof $\zeta'$ where refinements may contain second-order variables, and (2) a constraint system $\mathcal{C}$ which encodes logical dependencies between the second-order variables. We then attempt to find a solution to $\mathcal{C}$, which is an assignment of data formulas to the second-order variables such that all constraints are satisfied. If we are successful, we use the solution to instantiate the second-order variables in $\zeta'$, which yields a valid \textsf{RSep}\xspace proof of the triple $\hoare{\textit{true}:\textsf{emp}}{$\pi$}{\phi:\textsf{true}}$. \shortenpar \paragraph{Horn Clauses} The constraint system produced by our procedure is a recursion-free set of Horn clauses, which can be solved efficiently using existing first-order interpolation techniques (see~\cite{Rummer13} for a detailed survey). Following \cite{Gupta2011}, we define a \emph{query} to be an application $Q(\vec{a})$ of a second-order variable $Q$ to a vector of (data) variables, and define an \emph{atom} to be either a data formula $\phi \in \textsf{DFormula}$ or a query $Q(\vec{a})$. A \emph{Horn clause} is of the form \iflong\[\else $\fi h \gets b_1 \land \cdots \land b_N \iflong\]\else$ \fi where each of $h,b_1,\dotsc,b_N$ is an atom. 
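The Horn-clause format above lends itself to direct mechanical checking. The following Python sketch encodes a representative subset of the example system from Fig.~\ref{fig:ex_refine} together with the solution $\sigma$ shown there, and validates the implications by brute force over small integer valuations. This is only an under-approximation of the universally quantified check a real solver performs, and all names are illustrative:

```python
from itertools import product

# Clauses and solution follow Fig. ex_refine (illustrative names; only a
# representative subset of the clauses is encoded).  sigma assigns each
# second-order variable a Python predicate standing in for a data formula.
sigma = {
    'R2':   lambda i: True,
    'R3':   lambda i: True,
    'R4':   lambda i, d: d >= 0,
    'Rls2': lambda v, i: v >= i,
    'Rls3': lambda v, i: v >= 0,
}

# Each entry is (head, body, arity): the clause "head <- body" over `arity`
# integer variables, with second-order variables already replaced via sigma.
clauses = [
    # R3(i) <- R2(i) /\ i = 0
    (lambda i: sigma['R3'](i),
     lambda i: sigma['R2'](i) and i == 0, 1),
    # Rls3(v,i) <- R2(i) /\ Rls2(v,i) /\ i = 0
    (lambda v, i: sigma['Rls3'](v, i),
     lambda v, i: sigma['R2'](i) and sigma['Rls2'](v, i) and i == 0, 2),
    # R4(i,d') <- R3(i) /\ Rls3(d',i)
    (lambda i, d: sigma['R4'](i, d),
     lambda i, d: sigma['R3'](i) and sigma['Rls3'](d, i), 2),
    # d' >= 0 <- R4(i,d')   (the postcondition clause)
    (lambda i, d: d >= 0,
     lambda i, d: sigma['R4'](i, d), 2),
]

def holds_on_samples(head, body, arity, lo=-3, hi=4):
    """Check body => head on every small-integer valuation.
    For booleans, b <= h is exactly the implication b => h."""
    return all(body(*vs) <= head(*vs)
               for vs in product(range(lo, hi), repeat=arity))

assert all(holds_on_samples(h, b, n) for h, b, n in clauses)
```

A genuine solver discharges these implications symbolically, with a first-order interpolating theorem prover, rather than by sampling.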
In our constraint generation rules, it will be convenient to use a more general form which can be translated to Horn clauses: we will allow constraints of the form \iflong\[\else $\fi h_1 \land \cdots \land h_M \gets b_1 \land \cdots \land b_N \iflong\]\else$ \fi (shorthand for the set of Horn clauses $\{ h_i \gets b_1 \land \cdots \land b_N \}_{1 \leq i \leq M}$) and we will allow queries to be of the form $Q(\vec{A})$ (i.e., take arbitrary data terms as arguments rather than variables). If $\mathcal{C}$ and $\mathcal{C}'$ are sets of constraints, we will use $\mathcal{C};\mathcal{C}'$ to denote their union. A \emph{solution} to a system of Horn clauses $\mathcal{C}$ is a map $\sigma$ that assigns each second-order variable $Q$ of arity $k$ a $\textsf{DFormula}$ $Q^\sigma$ with free variables drawn from $\vec{\nu} = \tuple{\nu_1,\dotsc,\nu_k}$ such that for each clause \iflong\[\else $\fi h \gets b_1 \land \cdots \land b_N \iflong\]\else$ \fi in $\mathcal{C}$ the implication \iflong\[\else $\fi \forall A. (h^\sigma \Leftarrow (\exists B. b_1^\sigma \land \cdots \land b_N^\sigma)) \iflong\]\else$ \fi holds, where $A$ is the set of free variables in $h$ and $B$ is the set of variables free in some $b_i$ but not in $h$. In the above, for any data formula $\phi$, $\phi^\sigma$ is defined to be $\phi$, and for any query $Q(\vec{a})$, $Q(\vec{a})^\sigma$ is defined to be $Q^\sigma[a_1/\nu_1,\dotsc,a_k/\nu_k]$ (where $k$ is the arity of $Q$). \shortenpar \paragraph{Constraint Generation Calculus} We will present our algorithm for spatial interpolation modulo theories as a calculus whose inference rules mirror the ones of separation logic. The calculus makes use of the same syntax used in recursive predicate definitions in Sec.~\ref{sec:prelims}. We use $\tau$ to denote a \emph{refinement term} and $\Phi$ to denote a \emph{refined formula}. The calculus has two types of judgements. 
An \emph{entailment judgement} is of the form \[ \cjudge{\qex{X}{\Pi \land \Phi : \Sigma} \vdash \qex{X'}{\Pi' \land \Phi' : \Sigma'}}{\mathcal{C}} \] where $\Pi,\Pi'$ are equational pure assertions over heap terms, $\Sigma,\Sigma'$ are refined spatial assertions, $\Phi$, $\Phi'$ are refined formulas, and $\mathcal{C}$ is a recursion-free set of Horn clauses. Such an entailment judgement should be read as ``for any solution $\sigma$ to the set of constraints $\mathcal{C}$, $\qex{X}{\Pi \land \Phi^\sigma : \Sigma^\sigma}$ entails $\qex{X'}{\Pi' \land \Phi'^\sigma : \Sigma'^\sigma}$,'' where $\Phi^\sigma$ is $\Phi$ with all second order variables replaced by their data formula assignments in $\sigma$ (and similarly for $\Sigma^\sigma$). Similarly, an \emph{execution judgement} is of the form \[ \cjudge{\hoare{\qex{X}{\Pi \land \Phi : \Sigma}}{$\pi$}{\qex{X'}{\Pi' \land \Phi' : \Sigma'}}}{\mathcal{C}} \] where $\pi$ is a path and $X,X',\Pi,\Pi',\Phi,\Phi',\Sigma,\Sigma'$, and $\mathcal{C}$ are as above. Such an execution judgement should be read as ``for any solution $\sigma$ to the set of constraints $\mathcal{C}$,\shortenpar \[\hoare{\qex{X}{\Pi \land \Phi^\sigma : \Sigma^\sigma}}{\ensuremath \pi}{\qex{X'}{\Pi' \land \Phi'^\sigma : \Sigma'^\sigma}}\] is a valid triple.'' Let $\pi$ be a path, let $\zeta$ be a separation logic proof of the triple $\hoare{\textit{true}:\textsf{emp}}{$\pi$}{\textit{true}:\textsf{true}}$ (i.e., a memory safety proof for $\pi$), and let $\phi \in \textsf{DFormula}$ be a postcondition. Given these inputs, our algorithm operates as follows. We use $\vec{v}$ to denote a vector of all data-typed program variables. The triple is \emph{rewritten with refinements} by letting $R$ and $R'$ be fresh second-order variables of arity $|\vec{v}|$ and conjoining $R(\vec{v})$ and $R'(\vec{v})$ to the pre and post. 
By recursing on $\zeta$, at each step applying the appropriate rule from our calculus in Fig.~\ref{fig:constraints}, we derive a judgement\shortenpar \begin{mathpar} \text{\footnotesize{\( \inferrule{ \zeta' }{ \cjudge{\hoare{\textit{true} \land R(\vec{v}):\textsf{true}}{$\pi$}{\textit{true} \land R'(\vec{v}):\textsf{true}}}{\mathcal{C}} } \)}} \end{mathpar} and then compute a solution $\sigma$ to the constraint system \[\mathcal{C};\hspace*{0.25cm} R(\vec{v}) \gets \textit{true};\hspace*{0.25cm} \phi \gets R'(\vec{v})\] (if one exists). The algorithm then returns $\zeta'^\sigma$, the proof obtained by applying the substitution $\sigma$ to $\zeta'$. Intuitively, our algorithm operates by recursing on a separation logic proof, introducing refinements into formulas on the way down, and building a system of constraints on the way up. Each inference rule in the calculus encodes both the downwards and upwards step of this algorithm. For example, consider the {\sc Fold} rule of our calculus: we will illustrate the intended reading of this rule with a concrete example. Suppose that the input to the algorithm is a derivation of the following form: \begin{mathpar} \text{\footnotesize{\( \inferrule*[right=Fold]{ \inferrule*{ \zeta_0 }{ x \mapsto [a,\textsf{null}] \vdash \qex{b,y}{x \mapsto [b,y] * \textsf{ls}(y,\textsf{null})} } }{ Q(i) : x \mapsto [a,\textsf{null}] \vdash R(i) : \textsf{ls}(\lda{a. S(x,a)}, x,\textsf{null}) } \)}} \end{mathpar} (i.e., a derivation where the last inference rule is an application of {\sc Fold}, and the conclusion has already been rewritten with refinements). We introduce refinements in the premise and recurse on the following derivation: \begin{mathpar} \text{\footnotesize{\( \inferrule*{ \zeta_0 }{ Q(i) : x \mapsto [a,\textsf{null}] \vdash \\\qex{b,y}{R(i) \land S(i,b) : x \mapsto [b,y] * \textsf{ls}(\lda{a. 
S(x,a)},y,\textsf{null})} } \)}} \end{mathpar} The result of this recursive call is a refined derivation $\zeta_0'$ as well as a constraint system $\mathcal{C}$. We then return both (1) the refined derivation obtained by concatenating the conclusion of the {\sc Fold} rule onto $\zeta_0'$ and (2) the constraint system $\mathcal{C}$. A crucial point of our algorithm is hidden inside the hat notation in Fig.~\ref{fig:constraints} (e.g., $\widehat{O}$ in {\sc Sequence}): this notation is used to denote the introduction of fresh second-order variables. For many of the inference rules (such as {\sc Fold}), the refinements which appear in the premises follow fairly directly from the refinements which appear in the conclusion. However, in some rules entirely new formulas appear in the premises which do not appear in the conclusion (e.g., in the {\sc Sequence} rule in Fig.~\ref{fig:constraints}, the intermediate assertion $\widehat{O}$ is an arbitrary formula which has no obvious relationship to the precondition $P$ or the postcondition $Q$). We refine such a formula $O$ by introducing a fresh second-order variable for the pure assertion and for each refinement term that appears in $O$. The following offers a concrete example. \begin{example} Consider the trace $\pi$ in Fig.~\ref{fig:ex}. Suppose that we are given a memory safety proof for $\pi$ which ends in an application of the {\sc Sequence} rule: \begin{mathpar} \text{\footnotesize{\( \inferrule*[right=Sequence]{ \hoare{\textit{true} : \textsf{emp}}{$\pi_0$}{\textit{true} : \textsf{ls}(x,\textsf{null})}\\ \hoare{\textit{true} : \textsf{ls}(x,\textsf{null})}{$\pi_1$}{\qex{b,y}{\textit{true} : x \mapsto [b,y]}} }{ \hoare{Q(i) : \textsf{emp}}{$\pi_0;\pi_1$}{\qex{b,y}{R(i,b) : x \mapsto [b,y]}} } \)}} \end{mathpar} where $\pi$ is decomposed as $\pi_0;\pi_1$, $\pi_0$ is the path from 1 to 3, and $\pi_1$ is the path from 3 to 4.
Let $O = \textit{true} : \textsf{ls}(x,\textsf{null})$ denote the intermediate assertion which appears in this proof. To derive $\widehat{O}$, we introduce two fresh second-order variables, $S$ (with arity 1) and $T$ (with arity 2), and define $\widehat{O} = {S(i) : \textsf{ls}(\lda{a. T(i,a)},x,\textsf{null})}$. The resulting inference is as follows: \begin{mathpar} \text{\footnotesize{\( \inferrule*{ \hoare{Q(i) : \textsf{emp}}{$\pi_0$}{S(i) : \textsf{ls}(\lda{a.T(i,a)},x,\textsf{null})}\\ \hoare{S(i) : \textsf{ls}(\lda{a.T(i,a)},x,\textsf{null})}{$\pi_1$}{\qex{b,y}{R(i,b) : x \mapsto [b,y]}} }{ \qquad \hoare{Q(i) : \textsf{emp}}{$\pi_0;\pi_1$}{\qex{b,y}{R(i,b) : x \mapsto [b,y]}} \qquad\lrcorner } \)}} \end{mathpar} \end{example} The following example provides a simple demonstration of our constraint generation procedure: \begin{example} Recall the example in Fig.~\ref{fig:ex} of Sec.~\ref{sec:ex}. The row of spatial interpolants in Fig.~\ref{fig:ex} is a memory safety proof $\zeta$ of the program path. Fig.~\ref{fig:ex_refine} shows the refined proof $\zeta'$, which is the proof $\zeta$ with second-order variables that act as placeholders for data formulas. \emph{For the sake of illustration, we have simplified the constraints by skipping a number of intermediate annotations in the Hoare-style proof.} The constraint system $\mathcal{C}$ specifies the logical dependencies between the introduced second-order variables in $\zeta'$. For instance, the relation between $R_{2}$ and $R_{3}$ is specified by the Horn clause $R_{3}(i) \gets R_{2}(i) \land i = 0$, which takes into account the constraint imposed by $\texttt{assume (i == 0)}$ in the path. The Horn clause $d' \geq 0 \gets R_4(i,d')$ specifies the postcondition defined by the assertion $\texttt{assert(x->D >= 0)}$, which states that the value of the data field of the node $x$ should be $\geq 0$.
\shortenpar Replacing second-order variables in $\zeta'$ with their respective solutions in $\sigma$ produces a proof that the assertion at the end of the path holds (last row of Fig.~\ref{fig:ex}). \eoe\end{example} \paragraph{Soundness and Completeness} The key result regarding the constraint systems produced by these judgements is that any solution to the constraints yields a valid refined proof. The formalization of the result is the following theorem. \begin{theorem}[Soundness] \label{thm:refine_sound} Suppose that $\pi$ is a path, $\zeta$ is a derivation of the judgement $\cjudge{\hoare{P}{$\pi$}{Q}}{\mathcal{C}}$, and that $\sigma$ is a solution to $\mathcal{C}$. Then $\zeta^\sigma$, the proof obtained by applying the substitution $\sigma$ to $\zeta$, is a (refined) separation logic proof of $\hoare{P^\sigma}{$\pi$}{Q^\sigma}$. \end{theorem} Another crucial result for our counterexample generation strategy is a kind of completeness theorem, which effectively states that the strongest memory safety proof always admits a refinement. \begin{theorem}[Completeness] \label{thm:refine_complete} Suppose that $\pi$ is a memory-feasible path and $\zeta$ is a derivation of the judgement $\cjudge{\hoare{R_0(\vec{v}):\textsf{emp}}{$\pi$}{R_1(\vec{v}) : \textsf{true}}}{\mathcal{C}}$ obtained by symbolic execution. If $\phi$ is a data formula such that $\hoare{\textit{true}:\textsf{emp}}{$\pi$}{\phi : \textsf{true}}$ holds, then there is a solution $\sigma$ to $\mathcal{C}$ such that $R_1^\sigma(\vec{v}) \Rightarrow \phi$. \end{theorem} \section{Spatial Interpolants} \label{sec:spint} In this section, we first define the notion of spatial path interpolants, which serve as memory safety proofs of program paths. We then describe a technique for computing spatial path interpolants. 
This algorithm has two phases: the first is a (forwards) \emph{symbolic execution} phase, which computes the strongest memory safety proof for a path; the second is a (backwards) \emph{interpolation} phase, which weakens the proof so that it is more likely to generalize. Spatial path interpolants are bounded from below by the strongest memory safety proof, and (implicitly) from above by the weakest memory safety proof. Before considering the generation of inductive invariants using spatial path interpolants, consider what could be done with only one of these bounds, whether in a path-based approach or an iterative fixed-point computation. Without the upper bound, an interpolant or invariant could be computed using a standard forward transformer and widening. But this suffers from the usual problem of potentially widening too aggressively to prove the remainder of the path, necessitating the design of analyses which widen conservatively at the price of computing unnecessarily strong proofs. The upper bound neatly captures the information that must be preserved for the future execution to be proved safe. On the other hand, without the lower bound, an interpolant or invariant could be computed using a backward transformer (and lower widening). But this suffers from the usual problem that backward transformers in shape analysis explode, due to issues such as not knowing the aliasing relationship in the pre-state. The lower bound neatly captures such information, heavily reducing the potential for explosion. These advantages come at the price of operating over full paths from entry to error. Compared to a forwards iterative analysis, operating over full paths has the advantage of having information about the execution's past and future when weakening at each point along the path. A forwards iterative analysis, on the other hand, trades the information about the future for information about many past executions through the use of join or widening operations.
The development in this section is purely spatial: we do not make use of data variables or refinements in recursive predicates. Our algorithm is thus of independent interest, outside of its context in this paper. We use \textsf{Sep}\xspace to refer to the fragment of \textsf{RSep}\xspace in which the only data formula (appearing in pure assertions and in refinements) is $\textit{true}$ (this fragment is equivalent to classical separation logic). An \textsf{RSep}\xspace formula $P$, in particular including those in recursive predicate definitions, determines a \textsf{Sep}\xspace formula $\ul{P}$ obtained by replacing all refinements (both variables and $\lambda$-abstractions) with $\lda{\vec{a}.\emph{true}}$ and all $\textsf{DFormula}$s in the pure part of $P$ with $\emph{true}$. Since recursive predicates, refinements, and \textsf{DFormula}s appear only positively, $\ul{P}$ is no stronger than any refinement of $P$. Since all refinements in $\textsf{Sep}\xspace$ are trivial, we will omit them from the syntax (e.g., we will write $Z(\vec{E})$ rather than $Z((\lambda \vec{a}. \emph{true}), \vec{E})$). \subsection{Definition} \label{sec:spint_def} \begin{figure}[t] {\footnotesize $\textsf{exec}(\texttt{x := new($k,l$)},\ \qs{X}{\Pi}{\Sigma}) =$ $ \qex{X \cup \{x',\vec{d},\vec{n}\}}{(\Pi : \Sigma)[x'/x] * x \mapsto [\vec{d},\vec{n}]}$\\ \hspace*{0pt}\hfill where $x',\vec{d},\vec{n}$ are fresh, $\vec{d} = (d_1,\ldots,d_k)$, and $\vec{n} = (n_1,\ldots,n_l)$. $\textsf{exec}(\texttt{free(x)},\ \qs{X}{\Pi}{\Sigma*z \mapsto [\vec{d},\vec{n}]}) = \qs{X}{\Pi \land \Pi^{\neq}}{\Sigma}$\\ \hspace*{0pt}\hfill where $\Pi : \Sigma * z \mapsto [\vec{d},\vec{n}] \vdash x = z$ and $\Pi^{\neq}$ is the \\ \hspace*{0pt}\hfill conjunction of all disequalities $x \neq y$ s.t.\ $y \mapsto [\_, \_] \in \Sigma$.\vspace*{3pt} $\textsf{exec}(\texttt{x := E},\ \qs{X}{\Pi}{\Sigma}) =$ $\qex{X\cup\{x'\}}{(x = E[x'/x]) * (\Pi : \Sigma)[x'/x]}$\\ \hspace*{0pt}\hfill where $x'$ is fresh.
\vspace*{3pt} $\textsf{exec}(\texttt{assume}(\Pi'),\ \qs{X}{\Pi}{\Sigma}) = \qs{X}{\Pi \land \ul{\Pi'}}{\Sigma}$ . \vspace*{3pt} $\textsf{exec}(\texttt{x->N$_i$ := E},\ \qs{X}{\Pi}{\Sigma * z \mapsto [\vec{d},\vec{n}]}) =$ $\qs{X}{\Pi}{\Sigma * x \mapsto [\vec{d},\vec{n}[E/n_i]]}$\\ \hspace*{0pt}\hfill where $i \leq |\vec{n}|$ and $\Pi:\Sigma * z \mapsto [\vec{d},\vec{n}] \vdash x = z$ . \vspace*{3pt} $\textsf{exec}(\texttt{y := x->N$_i$},\ \qs{X}{\Pi}{\Sigma * z \mapsto [\vec{d},\vec{n}]}) =$\\ \hspace*{0pt}\hfill$\qex{X\cup\{y'\}}{(y = n_i[y'/y]) * (\Pi : \Sigma * z \mapsto [\vec{d},\vec{n}])[y'/y]}$\\ \hspace*{0pt}\hfill where $i \leq |\vec{n}|$ and $\Pi:\Sigma * z \mapsto [\vec{d},\vec{n}] \vdash x = z$, and $y'$ is fresh.} \caption{Symbolic execution for heap statements. Data statements are treated as skips. } \label{fig:post} \end{figure} We define a \emph{symbolic heap} to be a \textsf{Sep}\xspace formula where the spatial part is a *-conjunction of points-to heaplets and the pure part is a conjunction of pointer (dis)equalities. Given a command $\texttt{c}$ and a symbolic heap $S$, we use $\textsf{exec}(\texttt{c}, S)$ to denote the symbolic heap that results from symbolically executing $\texttt{c}$ starting in $S$ (the definition of $\textsf{exec}$ is essentially standard \cite{Berdine2005}, and is shown in Fig.~\ref{fig:post}). Given a program path $\pi = e_1,\ldots,e_n$, we obtain its strongest memory safety proof by symbolically executing $\pi$ starting from the empty heap \textsf{emp}. We call this sequence of symbolic heaps the symbolic execution sequence of $\pi$, and say that a path $\pi$ is \emph{memory-feasible} if every formula in its symbolic execution sequence is consistent. The following proposition justifies calling this sequence the strongest memory safety proof. \begin{proposition} \label{prop:sp} For a path $\pi$, if the symbolic execution sequence for $\pi$ is defined, then $\pi$ is memory safe. 
If $\pi$ is memory safe and memory-feasible, then its symbolic execution sequence is defined. \end{proposition} Recall that our strategy for proving program correctness is based on sampling and proving the correctness of several program paths (\emph{\`{a} la} {\sc Impact}~\cite{McMillan2006}). The problem with \emph{strongest} memory safety proofs is that they do not generalize well (i.e., do not generate inductive invariants). One solution to this problem is to take advantage of property direction. Given a desired postcondition $P$ and a (memory-safe and -feasible) path $\pi$, the goal is to come up with a proof that is weaker than $\pi$'s symbolic execution sequence, but still strong enough to show that $P$ holds after executing $\pi$. Coming up with such ``weak'' proofs is how traditional path interpolation is used in {\sc Impact}. In light of this, we define \emph{spatial path interpolants} as follows: \begin{definition}[Spatial path interpolant] Let $\pi = e_1,\ldots, e_n$ be a program path with symbolic execution sequence $S_0,\ldots,S_n$, and let $P$ be a \textsf{Sep}\xspace formula (such that $S_n \models P$). A \emph{spatial path interpolant} for $\pi$ is a sequence $I_0,\ldots,I_n$ of \textsf{Sep}\xspace formulas such that \shortenpar \begin{compactitem} \item for each $i \in [0,n]$, $S_i \models I_i$; \item for each $i \in [1,n]$, $\{I_{i-1}\}\;e^{\rm c}_i\;\{I_i\}$ is a valid triple in separation logic; and \item $I_n \models P$. \end{compactitem} \end{definition} Our algorithm for computing spatial path interpolants is a backwards propagation algorithm that employs a \emph{spatial interpolation} procedure at each backwards step.
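The shape of this backwards propagation can be sketched independently of the per-command rules. In the following Python outline (a sketch, not the paper's implementation), \texttt{exec\_fn} and \texttt{itp} stand for the strongest-post transformer of Fig.~\ref{fig:post} and the per-command interpolation procedure defined below; formulas are left entirely abstract:

```python
def spatial_path_interpolants(path, exec_fn, itp, post, emp):
    """Compute a spatial path interpolant I_0, ..., I_n for `path`.

    exec_fn(c, S) -> strongest post of command c from symbolic heap S
    itp(S, c, I') -> a formula I with S |= I and {I} c {I'} valid
    post          -> the target postcondition P (with S_n |= P assumed)
    emp           -> the empty symbolic heap
    """
    # Forward phase: S_0 = emp and S_k = exec(e_k, S_{k-1}).
    S = [emp]
    for c in path:
        S.append(exec_fn(c, S[-1]))
    # Backward phase: I_n = P and I_k = itp(S_k, e_{k+1}, I_{k+1}).
    I = [None] * (len(path) + 1)
    I[-1] = post
    for k in range(len(path) - 1, -1, -1):
        I[k] = itp(S[k], path[k], I[k + 1])
    return I
```

Nothing in the driver itself is specific to separation logic; instantiating \texttt{exec\_fn} and \texttt{itp} with the operators of this section yields the sequence $I_0,\ldots,I_n$ required by the definition below.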
Spatial interpolants for a single command are defined as: \begin{definition}[Spatial interpolant] \label{def:spint} Given \textsf{Sep}\xspace formulas $S$ and $I'$ and a command $\texttt{c}$ such that $\textsf{exec}(\texttt{c},S) \models I'$, a \emph{spatial interpolant} (for $S$, $\texttt{c}$, and $I'$) is a \textsf{Sep}\xspace formula $I$ such that $S \models I$ and $\hoare{I}{c}{I'}$ is valid. \end{definition} Before describing the spatial interpolation algorithm, we briefly describe how spatial interpolation is used to compute path interpolants. Let us use $\interp{S}{\texttt{c}}{I}$ to denote a spatial interpolant for $S,\texttt{c},I$, as defined above. Let $\pi = e_1,\ldots,e_n$ be a program path and let $P$ be a \textsf{Sep}\xspace formula. First, symbolically execute $\pi$ to compute a sequence $S_0,\ldots,S_n$. Suppose that $S_n \vdash P$. Then we compute a sequence $I_0,\ldots,I_n$ by taking $I_n = P$ and (for $k < n$) $I_{k} = \interp{S_{k}}{e_{k+1}^{\rm c}}{I_{k+1}}$. The sequence $I_0,\ldots,I_n$ is clearly a spatial path interpolant. \shortenpar \subsection{Bounded Abduction} Our algorithm for spatial interpolation is based on an abduction procedure. Abduction refers to the inference of explanatory hypotheses from observations (in contrast to deduction, which derives conclusions from given hypotheses). The variant of abduction we employ in this paper, which we call \emph{bounded abduction}, is simultaneously a form of abductive and deductive reasoning. Seen as a variant of abduction, bounded abduction adds a constraint that the abduced hypothesis be at least weak enough to be derivable from a given hypothesis. Seen as a variant of deduction, bounded abduction adds a constraint that the deduced conclusion be at least strong enough to imply some desired conclusion. Formally, we define bounded abduction as follows: \begin{definition}[Bounded abduction] Let $L,M,R$ be \textsf{Sep}\xspace formulas, and let $X$ be a set of variables. 
A solution to the \emph{bounded abduction problem} \[ L \vdash \qex{X}{M * [\ ]} \vdash R \] is a \textsf{Sep}\xspace formula $A$ such that $ L \models \qex{X}{M * A} \models R$. \end{definition} Note how, in contrast to bi-abduction~\cite{Calcagno09} where a solution is a pair of formulas, one constrained from above and one from below, a solution to bounded abduction problems is a single formula that is simultaneously constrained from above and below. The fixed lower and upper bounds in our formulation of abduction give considerable guidance to solvers, in contrast to bi-abduction, where the bounds are part of the solution. Sec.~\ref{sec:abduction} presents our bounded abduction algorithm. For the remainder of this section, we will treat bounded abduction as a black box, and use $L \vdash \qex{X}{M * [A]} \vdash R$ to denote that $A$ is a solution to the bounded abduction problem. \subsection{Computing Spatial Interpolants} We now proceed to describe our algorithm for spatial interpolation. Given a command $\texttt{c}$ and \textsf{Sep}\xspace formulas $S$ and $I'$ such that $\textsf{exec}(\texttt{c},S) \vdash I'$, this algorithm must compute a \textsf{Sep}\xspace formula $\interp{S}{\texttt{c}}{I'}$ that satisfies the conditions of Definition~\ref{def:spint}. Several examples illustrating this procedure are given in Fig.~\ref{fig:ex}. This algorithm is defined by cases based on the command $\texttt{c}$. We present the cases for the spatial commands; the corresponding data commands are similar. \paragraph{Allocate} Suppose \texttt{c} is \texttt{x := new($n,m$)}. We take $ \interp{S}{\texttt{c}}{I'} = \qex{x}{A}$, where $A$ is obtained as a solution to $\abduce{\textsf{exec}(\texttt{c},S)}{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}]}{I'}$, and $\vec{a}$ and $\vec{z}$ are vectors of fresh variables of length $n$ and $m$, respectively. \paragraph{Deallocate} Suppose \texttt{c} is \texttt{free(x)}. 
We take $\interp{S}{\texttt{c}}{I'} = \qex{\vec{a},\vec{z}}{I' * x \mapsto [\vec{a},\vec{z}]}$, where $\vec{a}$ and $\vec{z}$ are vectors of fresh variables whose lengths are determined by the unique heap cell which is allocated to $x$ in $S$. \paragraph{Assignment} Suppose \texttt{c} is \texttt{x := E}. We take $\interp{S}{\texttt{c}}{I'} = I'[E/x]$. \paragraph{Store} Suppose \texttt{c} is \texttt{x->N$_i$ := E}. We take $\interp{S}{\texttt{c}}{I'} = \qex{\vec{a},\vec{z}}{A * x \mapsto [\vec{a},\vec{z}]}$, where $A$ is obtained as a solution to $\textsf{exec}(\texttt{c},S) \vdash \qex{\vec{a},\vec{z}}{x \mapsto [\vec{a},\vec{z}[E/z_i]] * [A]} \vdash I'$ and where $\vec{a}$ and $\vec{z}$ are vectors of fresh variables whose lengths are determined by the unique heap cell which is allocated to $x$ in $S$. \begin{example} Suppose that $S$ is $t \mapsto [4,y,\textsf{null}] * x \mapsto [2,\textsf{null},\textsf{null}]$ where the cells have one data and two pointer fields, \texttt{c} is \texttt{t->N$_0$ := x}, and $I'$ is $\textsf{bt}(t)$. Then we can compute $\textsf{exec}(\texttt{c},S) = t \mapsto [4,x,\textsf{null}] * x \mapsto [2,\textsf{null},\textsf{null}]$, and then solve the bounded abduction problem \[ \textsf{exec}(\texttt{c},S) \vdash \qex{a,z_1}{t \mapsto [a,x,z_1] * [\ ]} \vdash I'\ .\] One possible solution is $A = \textsf{bt}(x) * \textsf{bt}(z_1)$, which yields \[ \interp{S}{\texttt{c}}{I'} = \qex{a,z_0,z_1}{t \mapsto [a,z_0,z_1] * \textsf{bt}(z_1) * \textsf{bt}(x)} \ .\tag*{\ensuremath{\lrcorner}} \] \end{example} \paragraph{Load} Suppose \texttt{c} is \texttt{y := x->N$_i$}. Suppose that $\vec{a}$ and $\vec{z}$ are vectors of fresh variables of lengths $|\vec{A}|$ and $|\vec{E}|$ where $S$ is of the form $\Pi : \Sigma * w \mapsto [\vec{A},\vec{E}]$ and $\Pi:\Sigma * w \mapsto [\vec{A},\vec{E}] \vdash x = w$ (this is the condition under which $\textsf{exec}(\texttt{c}, S)$ is defined, see Fig.~\ref{fig:post}). 
Let $y'$ be a fresh variable, and define $\ol{S} = (y = z_i[y'/y]) * (\Pi : \Sigma * w \mapsto [\vec{a},\vec{z}])[y'/y]$. Note that $\ol{S} \vdash \qex{y'}{\ol{S}} \equiv \textsf{exec}(\texttt{c},S) \vdash I'$. We take $\interp{S}{\texttt{c}}{I'} = \qex{\vec{a},\vec{z}}{A[z_i/y,y/y'] * x \mapsto [\vec{a},\vec{z}]}$ where $A$ is obtained as a solution to $\ol{S} \vdash \qex{\vec{a},\vec{z}}{x[y'/y] \mapsto [\vec{a},\vec{z}] * [A]} \vdash I'$. \begin{example} Suppose that $S$ is $y = t : y \mapsto [1,\textsf{null},x] * x \mapsto [5,\textsf{null},\textsf{null}]$, \texttt{c} is \texttt{y := y->N$_1$}, and $I'$ is $y \neq \textsf{null} : \textsf{bt}(t)$. Then $\ol{S}$ is \[ y = x \land y' = t : y' \mapsto [1,\textsf{null},x] * x \mapsto [5,\textsf{null},\textsf{null}] \] We can then solve the bounded abduction problem \[ \overline{S} \vdash \qex{a,z_0,z_1}{y' \mapsto [a,z_0,z_1] * [\ ]} \vdash I'\] A possible solution is $y \neq \textsf{null} \land y' = t : \textsf{bt}(z_0) * \textsf{bt}(z_1)$, yielding\\ \hspace*{0.5cm}$\interp{S}{\texttt{c}}{I'} = (\exists a,z_0,z_1. z_1 \neq \textsf{null} \land y = t :\textsf{bt}(z_0) * \textsf{bt}(z_1) * y \mapsto[a,z_0,z_1])\ .$\hspace*{0.5cm}\null \ensuremath{\lrcorner} \end{example} \paragraph{Assumptions} The interpolation rules defined up to this point cannot introduce recursive predicates, in the sense that if $I'$ is a \mbox{*-conjunction} of points-to predicates then so is $\interp{S}{\texttt{c}}{I'}$.\footnote{But if $I'$ \emph{does} contain recursive predicates, then $\interp{S}{\texttt{c}}{I'}$ may also.} A \mbox{*-conjunction} of points-to predicates is \emph{exact} in the sense that it gives the full layout of some part of the heap. The power of recursive predicates lies in their ability to be \emph{abstract} rather than exact, and describe only the shape of the heap rather than its exact layout. 
It is a special circumstance that $\hoare{P}{\texttt{c}}{I'}$ holds when $I'$ is exact in this sense and $P$ is not: intuitively, it means that by executing \texttt{c} we somehow gain information about the program state, which is precisely the case for \texttt{assume} commands. For an example of how spatial interpolation can introduce a recursive predicate at an \texttt{assume} command, consider the problem of computing an interpolant \[\interp{S}{\texttt{assume(x $\neq \textsf{null}$)}}{(\exists a,z.\; x\mapsto[a,z] * \textsf{true})}\] where $S \equiv x \mapsto [d,y] * y \mapsto [d',\textsf{null}]$: a desirable interpolant may be $\textsf{ls}(x,\textsf{null}) * \textsf{true}$. The disequality introduced by the assumption ensures that one of the $\mathit{cases}$ of the recursive predicate $\textsf{ls}(x,\textsf{null})$ (where the list from $x$ to $\textsf{null}$ is empty) is impossible, which implies that the other case (where $x$ is allocated) must hold. Towards this end, we now define an auxiliary function $\textsf{intro}$ which we will use to introduce recursive predicates for the \texttt{assume} interpolation rules. Let $P,Q$ be \textsf{Sep}\xspace formulas such that $P \vdash Q$, let $Z$ be a recursive predicate and $\vec{E}$ be a vector of heap terms. We define $\textsf{intro}(Z,\vec{E},P,Q)$ as follows: if $\abduce{P}{\emptyset}{Z(\vec{E})}{Q}$ has a solution $A$ such that $A \nvdash Q$, define $\textsf{intro}(Z,\vec{E},P,Q) = Z(\vec{E}) * A$. Otherwise, define $\textsf{intro}(Z,\vec{E},P,Q) = Q$. Intuitively, the abduction problem has a solution when $P$ implies $Z(\vec{E})$ and $Z(\vec{E})$ can be \emph{excised} from $Q$. The condition $A \nvdash Q$ is used to ensure that the excision from $Q$ is non-trivial (i.e., the part of the heap that satisfies $Z(\vec{E})$ ``consumes'' some heaplet of $Q$). To define the interpolation rule for assumptions, suppose \texttt{c} is \texttt{assume(E $\neq$ F)} (the case of equality assumptions is similar).
Letting $\{\tuple{Z_i,\vec{E}_i}\}_{i\leq n}$ be an enumeration of the (finitely many) possible choices of $Z$ and $\vec{E}$, we define a formula $M$ to be the result of applying $\textsf{intro}$ to $I'$ over all possible choices of $Z$ and $\vec{E}$: \shortenpar \[M = \textsf{intro}(Z_1,\vec{E}_1,S\land E \neq F,\textsf{intro}(Z_2,\vec{E}_2,S \land E \neq F,\dotsc))\] where the innermost occurrence of $\textsf{intro}$ in this definition is $\textsf{intro}(Z_n,\vec{E}_n,S\land E \neq F, I')$. Since \textsf{intro} preserves entailment (in the sense that if $P \vdash Q$ then $P \vdash \textsf{intro}(Z,\vec{E},P,Q)$), we have that $S \land E \neq F \vdash M$. From a proof of $S \land E \neq F \vdash M$, we can construct a formula $M'$ which is entailed by $S$ and differs from $M$ only in that it renames variables and exposes additional equalities and disequalities implied by $S$, and take $\interp{S}{\texttt{c}}{I'}$ to be this $M'$. The construction of $M'$ from $M$ is straightforward but tedious. \emph{The procedure is detailed in \iflong Appendix~\ref{sec:assum} \else the extended version\fi; here, we will just give an example to convey the intuition for why it is necessary.} Suppose that $S$ is $x = w : y \mapsto z$ and $I'$ is $\textsf{ls}(w,z)$, and \texttt{c} is \texttt{assume($x$ = $y$)}. Since there is no opportunity to introduce new recursive predicates in $I'$, $M$ is simply $\textsf{ls}(w,z)$. However, $M$ is not a valid interpolant since $S \not\models M$, so we must expose the equality $x=w$ and rename $w$ to $y$ in the list segment in $M' \equiv x = w: \textsf{ls}(y,z)$. In practice, it is undesirable to enumerate all possible choices of $Z$ and $\vec{E}$ when constructing $M$ (considering that if there are $k$ in-scope data terms, a recursive predicate of arity $n$ requires enumerating $k^n$ choices for $\vec{E}$).
A reasonable heuristic is to let $\Pi$ be the strongest pure formula implied by $S$, and enumerate only those combinations of $Z$ and $\vec{E}$ such that there is some $\Pi' :\Sigma' \in \mathit{cases}(Z(\vec{R},\vec{x}))$ such that $\ul{\Pi'}[\vec{E}/\vec{x}] \land \Pi \land x \neq y$ is unsatisfiable. For example, for \texttt{assume(x $\neq$ y)}, this heuristic means that we enumerate only $\tuple{x,y}$ and $\tuple{y,x}$ (i.e., we attempt to introduce a list segment from $x$ to $y$ and from $y$ to $x$). \shortenpar We conclude this section with a theorem stating the correctness of our spatial interpolation procedure. \begin{theorem} \label{thm:spint_sound} Let $S$ and $I'$ be \textsf{Sep}\xspace formulas and let \texttt{c} be a command such that $\textsf{exec}(\texttt{c},S) \vdash I'$. Then $\interp{S}{\texttt{c}}{I'}$ is a spatial interpolant for $S$, \texttt{c}, and $I'$. \end{theorem}
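As an operational summary of the \texttt{assume} case, the following Python sketch folds $\textsf{intro}$ over the heuristically filtered candidate pairs. Here \texttt{abduce} and \texttt{entails} are assumed black boxes for bounded abduction and entailment, formulas are opaque tagged tuples, and all names are illustrative:

```python
from functools import reduce

def intro(Z, E, P, Q, abduce, entails):
    """intro(Z, E, P, Q): if P |- Z(E) * [A] |- Q has a solution A that does
    not by itself entail Q, excise Z(E) from Q; otherwise leave Q unchanged."""
    A = abduce(P, Z, E, Q)  # a solution to the bounded abduction problem, or None
    if A is not None and not entails(A, Q):
        return ('star', ('pred', Z, E), A)
    return Q

def assume_interpolant(S, guard, I_prime, candidates, abduce, entails):
    """M = intro(Z_1, E_1, P, intro(Z_2, E_2, P, ..., intro(Z_n, E_n, P, I'))),
    where P is S /\\ guard and candidates = [(Z_1, E_1), ..., (Z_n, E_n)] is the
    heuristically filtered list.  The renaming step producing M' is elided."""
    P = ('and', S, guard)
    # Fold from the innermost candidate (Z_n) outwards to Z_1.
    return reduce(lambda Q, ZE: intro(ZE[0], ZE[1], P, Q, abduce, entails),
                  reversed(candidates), I_prime)
```

With an empty candidate list the fold degenerates to returning $I'$ unchanged, matching the $M = I'$ case of the text.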
\section{Introduction} $E_6$ inspired models are well motivated extensions of the Standard Model (SM). Indeed, supersymmetric (SUSY) models based on the $E_6$ gauge symmetry or its subgroup can originate from the ten--dimensional heterotic superstring theory \cite{1}. Within this framework gauge and gravitational anomaly cancellation was found to occur for the gauge groups $SO(32)$ or $E_8\times E'_8$. However only $E_8\times E'_8$ can contain the SM since it allows for chiral fermions while $SO(32)$ does not. Compactification of the extra dimensions results in the breakdown of $E_8$ down to $E_6$ or one of its subgroups in the observable sector \cite{2}. The remaining $E'_8$ couples to the usual matter representations of the $E_6$ only by virtue of gravitational interactions and comprises a hidden sector that is thought to be responsible for the spontaneous breakdown of local SUSY (supergravity). At low energies the hidden sector decouples from the observable sector of quarks and leptons, the gauge and Higgs bosons and their superpartners. Its only manifest effect is a set of soft SUSY breaking terms which spoil the degeneracy between bosons and fermions within one supermultiplet \cite{3}. The scale of the soft SUSY breaking terms is set by the gravitino mass, $m_{3/2}$. In the simplest SUSY extensions of the SM these terms also determine the electroweak (EW) scale. A large mass hierarchy between $m_{3/2}$ and the Planck scale can be caused by non--perturbative effects in the hidden sector that may trigger the breakdown of supergravity (SUGRA) \cite{4}. Since $E_6$ is a rank--6 group the breakdown of the $E_6$ symmetry may result in low energy models based on rank--5 or rank--6 gauge groups, with one or two additional $U(1)$ gauge group factors in comparison to the SM. Indeed, $E_6$ contains the maximal subgroup $SO(10)\times U(1)_{\psi}$, while $SO(10)$ can be decomposed in terms of the $SU(5)\times U(1)_{\chi}$ subgroup \cite{5}--\cite{Langacker:2008yv}. 
By means of the Hosotani mechanism \cite{6} $E_6$ can be broken directly to $$ E_6\to SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi} $$ which has rank--6. This rank--6 model may be reduced further to an effective rank--5 model with only one extra gauge symmetry $U(1)'$ which is a linear combination of $U(1)_{\chi}$ and $U(1)_{\psi}$: \begin{equation} U(1)'=U(1)_{\chi}\cos\theta+U(1)_{\psi}\sin\theta\,. \label{0} \end{equation} In the models based on rank--6 or rank--5 subgroups of $E_6$ the anomalies are automatically cancelled if the low energy particle spectrum consists of complete representations of $E_6$. Consequently, in $E_6$ inspired SUSY models one is forced to augment the minimal particle spectrum by a number of exotics which, together with ordinary quarks and leptons, form complete fundamental $27$ representations of $E_6$. Thus we will assume that the particle content of these models includes at least three fundamental representations of $E_6$ at low energies. These multiplets decompose under the $SU(5)\times U(1)_{\psi}\times U(1)_{\chi}$ subgroup of $E_6$ as follows: \begin{equation} \begin{array}{rcl} 27_i &\to & \displaystyle\left(10,\,\dfrac{1}{\sqrt{24}},\,-\dfrac{1}{\sqrt{40}}\right)_i +\left(5^{*},\,\dfrac{1}{\sqrt{24}},\,\dfrac{3}{\sqrt{40}}\right)_i +\left(5^{*},\,-\dfrac{2}{\sqrt{24}},\,-\dfrac{2}{\sqrt{40}}\right)_i \\[3mm] & + & \displaystyle\left(5,\,-\dfrac{2}{\sqrt{24}},\,\dfrac{2}{\sqrt{40}}\right)_i +\left(1,\,\dfrac{4}{\sqrt{24}},\,0\right)_i +\left(1,\,\dfrac{1}{\sqrt{24}},\,-\dfrac{5}{\sqrt{40}}\right)_i\,. \end{array} \label{1} \end{equation} The first, second and third quantities in brackets are the $SU(5)$ representation and the extra $U(1)_{\psi}$ and $U(1)_{\chi}$ charges respectively, while $i$ is a family index that runs from 1 to 3. 
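As a quick consistency check of the charge assignments in Eq.~(\ref{1}), one can verify numerically that the $U(1)_{\psi}$ and $U(1)_{\chi}$ charges, weighted by the dimension of each $SU(5)$ representation, sum to zero over a complete $27$-plet, as anomaly cancellation generation by generation requires (a Python sketch; the numbers are exactly those quoted above):

```python
import math

# (SU(5) dimension, U(1)_psi charge, U(1)_chi charge) for each term of Eq. (1)
s24, s40 = math.sqrt(24), math.sqrt(40)
multiplets = [
    (10,  1 / s24, -1 / s40),
    (5,   1 / s24,  3 / s40),
    (5,  -2 / s24, -2 / s40),
    (5,  -2 / s24,  2 / s40),
    (1,   4 / s24,  0.0),
    (1,   1 / s24, -5 / s40),
]

# dimension-weighted traces of both U(1) charges over a full 27-plet
trace_psi = sum(d * qpsi for d, qpsi, _ in multiplets)
trace_chi = sum(d * qchi for d, _, qchi in multiplets)
assert abs(trace_psi) < 1e-12 and abs(trace_chi) < 1e-12
```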
An ordinary SM family, which contains the doublets of left--handed quarks $Q_i$ and leptons $L_i$, right-handed up-- and down--quarks ($u^c_i$ and $d^c_i$) as well as right--handed charged leptons $(e^c_i)$, is assigned to $\left(10,\,\dfrac{1}{\sqrt{24}},\,-\dfrac{1}{\sqrt{40}}\right)_i$ + $\left(5^{*},\,\dfrac{1}{\sqrt{24}},\,\dfrac{3}{\sqrt{40}}\right)_i$. Right-handed neutrinos $N^c_i$ are associated with the last term in Eq.~(\ref{1}), $\left(1,\,\dfrac{1}{\sqrt{24}},\,-\dfrac{5}{\sqrt{40}}\right)_i$. The next-to-last term, $\left(1,\,\dfrac{4}{\sqrt{24}},\,0\right)_i$, represents new SM-singlet fields $S_i$, which carry non-zero $U(1)_{\psi}$ charges and therefore survive down to the EW scale. The pair of $SU(2)_W$--doublets ($H^d_{i}$ and $H^u_{i}$) that are contained in $\left(5^{*},\,-\dfrac{2}{\sqrt{24}},\,-\dfrac{2}{\sqrt{40}}\right)_i$ and $\displaystyle\left(5,\,-\dfrac{2}{\sqrt{24}},\,\dfrac{2}{\sqrt{40}}\right)_i$ have the quantum numbers of Higgs doublets. They form either Higgs or Inert Higgs $SU(2)_W$ multiplets.~\footnote{We use the terminology ``Inert Higgs'' to denote Higgs--like doublets that do not develop VEVs.} Other components of these $SU(5)$ multiplets form colour triplets of exotic quarks $\overline{D}_i$ and $D_i$ with electric charges $+ 1/3$ and $-1/3$ respectively. These exotic quark states carry a $B-L$ charge $\left(\pm\dfrac{2}{3}\right)$ twice as large as that of the ordinary quarks. In phenomenologically viable $E_6$ inspired models they can be either diquarks or leptoquarks. The presence of the $Z'$ bosons associated with extra $U(1)$ gauge symmetries and of exotic matter in the low-energy spectrum has stimulated extensive studies of $E_6$ inspired SUSY models over the years \cite{5},~\cite{7}. Recently, the latest Tevatron and early LHC $Z'$ mass limits in these models have been discussed in \cite{Accomando:2010fz} while different aspects of the phenomenology of exotic quarks and squarks have been considered in \cite{Kang:2007ib}. 
Also the implications of the $E_6$ inspired SUSY models have been studied for EW symmetry breaking (EWSB) \cite{Langacker:1998tc}--\cite{Daikoku:2000ep}, neutrino physics \cite{Kang:2004ix}--\cite{Ma:1995xk}, leptogenesis \cite{Hambye:2000bn}--\cite{King:2008qb}, EW baryogenesis \cite{baryogen}, the muon anomalous magnetic moment \cite{g-2}, the electric dipole moments of the electron \cite{Suematsu:1997tv} and tau lepton \cite{GutierrezRodriguez:2006hb}, lepton flavour violating processes like $\mu\to e\gamma$ \cite{Suematsu:1997qt} and CP-violation in the Higgs sector \cite{Ham:2008fx}. The neutralino sector in $E_6$ inspired SUSY models was analysed previously in \cite{Keith:1997zb}, \cite{Suematsu:1997tv}--\cite{Suematsu:1997qt}, \cite{Suematsu:1997au}--\cite{E6neutralino-higgs}. Such models have also been proposed as a solution to the tachyon problems of anomaly mediated SUSY breaking, via $U(1)^\prime$ D-term contributions \cite{Asano:2008ju}, and used in combination with a generation symmetry to construct a model explaining fermion mass hierarchy and mixing \cite{Stech:2008wd}. An important feature of $E_6$ inspired SUSY models is that the mass of the lightest Higgs particle can be substantially larger in these models than in the minimal supersymmetric standard model (MSSM) and next-to-minimal supersymmetric standard model (NMSSM) \cite{Daikoku:2000ep}, \cite{King:2005jy}--\cite{Accomando:2006ga}. The Higgs sector in these models was examined recently in \cite{E6neutralino-higgs}, \cite{King:2005jy}, \cite{E6-higgs}. Within the class of rank--5 $E_6$ inspired SUSY models, there is a unique choice of Abelian $U(1)_{N}$ gauge symmetry that allows zero charges for right-handed neutrinos and thus a high scale see-saw mechanism. This corresponds to $\theta=\arctan\sqrt{15}$. 
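The statement that $\theta=\arctan\sqrt{15}$ leaves the right--handed neutrinos with zero $U(1)_{N}$ charge can be checked directly by combining Eq.~(\ref{0}) with the charges quoted in Eq.~(\ref{1}) (a short Python sketch):

```python
import math

# U(1)' = cos(theta) U(1)_chi + sin(theta) U(1)_psi, Eq. (0), with
# theta = arctan(sqrt(15)); N^c carries charges (1/sqrt(24), -5/sqrt(40))
# under U(1)_psi x U(1)_chi, the last term of Eq. (1).
theta = math.atan(math.sqrt(15))
q_psi, q_chi = 1 / math.sqrt(24), -5 / math.sqrt(40)

q_N = math.cos(theta) * q_chi + math.sin(theta) * q_psi
assert abs(q_N) < 1e-12  # N^c is neutral under this U(1)

# the mixing coefficients are cos(theta) = 1/4, sin(theta) = sqrt(15)/4
assert abs(math.cos(theta) - 1 / 4) < 1e-12
assert abs(math.sin(theta) - math.sqrt(15) / 4) < 1e-12
```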
Only in this Exceptional Supersymmetric Standard Model (E$_6$SSM) \cite{King:2005jy}--\cite{King:2005my} may right--handed neutrinos be superheavy, shedding light on the origin of the mass hierarchy in the lepton sector and providing a mechanism for the generation of the baryon asymmetry in the Universe via leptogenesis \cite{Hambye:2000bn}--\cite{King:2008qb}. Indeed, the heavy Majorana right-handed neutrinos may decay into final states with lepton number $L=\pm 1$, thereby creating a lepton asymmetry in the early universe. Since in the E$_6$SSM the Yukawa couplings of the new exotic particles are not constrained by neutrino oscillation data, substantial values of the CP--asymmetries can be induced even for a relatively small mass of the lightest right--handed neutrino ($M_1 \sim 10^6\,\mbox{GeV}$) so that successful thermal leptogenesis may be achieved without encountering a gravitino problem \cite{King:2008qb}. Supersymmetric models with an additional $U(1)_{N}$ gauge symmetry have been studied in \cite{Ma:1995xk} in the context of non--standard neutrino models with extra singlets, in \cite{Suematsu:1997au} from the point of view of $Z-Z'$ mixing, in \cite{Keith:1997zb} and \cite{Suematsu:1997au}--\cite{Keith:1996fv} where the neutralino sector was explored, in \cite{Keith:1997zb}, \cite{King:2007uj} where the renormalisation group (RG) flow of couplings was examined and in \cite{Suematsu:1994qm}--\cite{Daikoku:2000ep} where EWSB was studied. The presence of a $Z'$ boson and of exotic quarks predicted by the Exceptional SUSY model provides spectacular new physics signals at the LHC which were analysed in \cite{King:2005jy}--\cite{Accomando:2006ga}, \cite{Howl:2007zi}. The presence of light exotic particles in the E$_6$SSM spectrum also leads to nonstandard decays of the SM--like Higgs boson that were discussed in detail in \cite{Hall:2010ix}. 
Recently the particle spectrum and the collider signatures associated with it were studied within the constrained version of the E$_6$SSM \cite{8}. Although the presence of TeV scale exotic matter in $E_6$ inspired SUSY models gives rise to spectacular collider signatures, it also causes some serious problems. In particular, light exotic states generically lead to non--diagonal flavour transitions and rapid proton decay. To suppress flavour changing processes as well as baryon and lepton number violating operators one can impose a set of discrete symmetries. For example, one can impose an approximate $Z^{H}_2$ symmetry, under which all superfields except one pair of $H^{d}_{i}$ and $H^{u}_{i}$ (say $H_d\equiv H^{d}_{3}$ and $H_u\equiv H^{u}_{3}$) and one SM-type singlet field ($S\equiv S_3$) are odd \cite{King:2005jy}--\cite{King:2005my}. When all $Z^{H}_2$ symmetry violating couplings are small, this discrete symmetry makes it possible to suppress flavour changing processes. If the Lagrangian of the $E_6$ inspired SUSY models is invariant with respect to either a $Z_2^L$ symmetry, under which all superfields except leptons are even (Model I), or a $Z_2^B$ discrete symmetry that implies that exotic quark and lepton superfields are odd whereas the others remain even (Model II), then the most dangerous baryon and lepton number violating operators are forbidden and the proton is sufficiently long--lived \cite{King:2005jy}--\cite{King:2005my}. The symmetries $Z^{H}_2$, $Z_2^L$ and $Z_2^B$ obviously do not commute with $E_6$ because different components of the fundamental representations of $E_6$ transform differently under these symmetries. The necessity of introducing multiple discrete symmetries to ameliorate phenomenological problems that generically arise due to the presence of low mass exotics is an undesirable feature of these models. 
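The selection rules implied by $Z_2^L$ and $Z_2^B$ can be captured by simple parity bookkeeping over the operators of $W_1$ and $W_2$ (an illustrative Python sketch; a term survives precisely when it contains an even number of odd superfields):

```python
# field content of the W1 and W2 operators (schematically, per Eq. (3))
W1 = [("D", "Q", "Q"), ("Dbar", "d^c", "u^c")]
W2 = [("N^c", "D", "d^c"), ("e^c", "D", "u^c"), ("Q", "L", "Dbar")]

leptons = {"L", "e^c", "N^c"}
exotic_quarks = {"D", "Dbar"}

def allowed(term, odd_fields):
    # a term is invariant iff it contains an even number of odd superfields
    return sum(f in odd_fields for f in term) % 2 == 0

# Model I: only leptons are odd -> W2 is forbidden, W1 survives (diquarks)
assert all(allowed(t, leptons) for t in W1)
assert not any(allowed(t, leptons) for t in W2)

# Model II: exotic quarks and leptons odd -> W1 forbidden, W2 survives
odd_II = leptons | exotic_quarks
assert not any(allowed(t, odd_II) for t in W1)
assert all(allowed(t, odd_II) for t in W2)
```

Either way, $W_1$ and $W_2$ never survive together, which is what protects the proton.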
In this paper we consider rank--6 $E_6$ inspired SUSY models in which a {\em single} discrete $\tilde{Z}^{H}_2$ symmetry serves to simultaneously forbid tree--level flavor--changing transitions and the most dangerous baryon and lepton number violating operators. We consider models where the $U(1)_{\psi}$ and $U(1)_{\chi}$ gauge symmetries are spontaneously broken at some intermediate scale so that the matter parity, \begin{equation} Z_{2}^{M}=(-1)^{3(B-L)}\;, \label{2} \end{equation} is preserved. As a consequence the low-energy spectrum of the models will include {\em two} stable weakly interacting particles that potentially contribute to the dark matter density of our Universe. The invariance of the Lagrangian with respect to the $Z_{2}^{M}$ and $\tilde{Z}^{H}_2$ symmetries leads to unusual collider signatures associated with exotic states that originate from $27$--plets. These signatures have not been studied in detail before. In addition to the exotic matter multiplets that stem from the fundamental $27$ representations of $E_6$, the considered models predict the existence of a set of vector-like supermultiplets. In particular the low-energy spectrum of the models involves either a doublet of vector-like leptons or a triplet of vector-like down type quarks. If these extra states are relatively light, they will manifest themselves at the LHC in the near future. The layout of this paper is as follows. In Section 2 we specify the rank--6 $E_6$ inspired SUSY models with exact custodial symmetry. In Section 3 we present five--dimensional ($5D$) and six--dimensional ($6D$) orbifold Grand Unified theories (GUTs) that lead to the rank--6 $E_6$ inspired SUSY models that we propose. In Sections 4 and 5 the RG flow of gauge couplings and implications for collider phenomenology and cosmology are discussed. Our results are summarized in Section 6. 
\section{$E_6$ inspired SUSY models with exact custodial $\tilde{Z}^{H}_2$ symmetry} In our analysis we concentrate on the rank--6 $E_6$ inspired SUSY models with two extra $U(1)$ gauge symmetries --- $U(1)_{\chi}$ and $U(1)_{\psi}$. In other words we assume that near the GUT or string scale $E_6$ or its subgroup is broken down to $SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$. In the next section we argue that this breakdown can be achieved within orbifold GUT models. We also allow three copies of 27-plets to survive to low energies so that anomalies get cancelled generation by generation within each complete $27_i$ representation of $E_6$. In $E_6$ models the renormalisable part of the superpotential comes from the $27\times 27\times 27$ decomposition of the $E_6$ fundamental representation and can be written as \begin{equation} \begin{array}{rcl} W_{E_6}&=&W_0+W_1+W_2,\\[3mm] W_0&=&\lambda_{ijk}S_i(H^d_{j}H^u_{k})+\kappa_{ijk}S_i(D_j\overline{D}_k)+ h^N_{ijk} N_i^c (H^u_{j} L_k)+ h^U_{ijk} u^c_{i} (H^u_{j} Q_k)+\\[2mm] &&+h^D_{ijk} d^c_i (H^d_{j} Q_k) + h^E_{ijk} e^c_{i} (H^d_{j} L_k)\,,\\[3mm] W_1&=& g^Q_{ijk} D_{i} (Q_j Q_k)+g^{q}_{ijk}\overline{D}_i d^c_j u^c_k\,,\\[3mm] W_2&=& g^N_{ijk}N_i^c D_j d^c_k+g^E_{ijk} e^c_i D_j u^c_k+g^D_{ijk} (Q_i L_j) \overline{D}_k\,. \end{array} \label{3} \end{equation} Here the summation over repeated family indices ($i,j,k=1,2,3$) is implied. In the considered models $B-L$ number is conserved automatically since the corresponding global symmetry $U(1)_{B-L}$ is a linear superposition of $U(1)_Y$ and $U(1)_{\chi}$. At the same time, if the terms in $W_1$ and $W_2$ are simultaneously present in the superpotential then baryon and lepton numbers are violated. In other words one cannot define the baryon and lepton numbers of the exotic quarks $D_i$ and $\overline{D}_i$ so that the complete Lagrangian is invariant separately under $U(1)_{B}$ and $U(1)_{L}$ global symmetries. 
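The impossibility of assigning baryon and lepton numbers to $D_i$ and $\overline{D}_i$ so that every term of $W_1$ and $W_2$ conserves $B$ and $L$ can be verified by brute force (a Python sketch; the standard $B$ and $L$ assignments for the ordinary left-handed superfields are assumed):

```python
from fractions import Fraction as F
from itertools import product

# (B, L) of the ordinary superfields; D and Dbar carry opposite unknown
# charges, as enforced by the kappa_ijk S_i (D_j Dbar_k) term in W_0.
BL = {"Q": (F(1, 3), F(0)), "u^c": (F(-1, 3), F(0)), "d^c": (F(-1, 3), F(0)),
      "L": (F(0), F(1)), "e^c": (F(0), F(-1)), "N^c": (F(0), F(-1))}

W1 = [("D", "Q", "Q"), ("Dbar", "d^c", "u^c")]
W2 = [("N^c", "D", "d^c"), ("e^c", "D", "u^c"), ("Q", "L", "Dbar")]

def conserves(term, BD, LD):
    charges = dict(BL, D=(BD, LD), Dbar=(-BD, -LD))
    return (sum(charges[f][0] for f in term) == 0
            and sum(charges[f][1] for f in term) == 0)

grid = [F(n, 3) for n in range(-6, 7)]  # scan charges in steps of 1/3
ok = [(BD, LD) for BD, LD in product(grid, grid)
      if all(conserves(t, BD, LD) for t in W1 + W2)]
assert ok == []  # no assignment makes W1 and W2 conserve B and L together

# W1 alone makes D a diquark, W2 alone a leptoquark:
assert all(conserves(t, F(-2, 3), F(0)) for t in W1)
assert all(conserves(t, F(1, 3), F(1)) for t in W2)
```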
In this case the Yukawa interactions in $W_1$ and $W_2$ give rise to rapid proton decay. Another problem is associated with the presence of three families of $H^u_{i}$ and $H^d_{i}$. All these Higgs--like doublets can couple to ordinary quarks and charged leptons of different generations, resulting in phenomenologically unwanted flavor changing transitions. For example, non--diagonal flavor interactions contribute to the amplitude of $K^0-\overline{K}^0$ oscillations and give rise to new channels of muon decay like $\mu\to e^{-}e^{+}e^{-}$. In order to avoid the appearance of flavor changing neutral currents (FCNCs) at the tree level and forbid the most dangerous baryon and lepton number violating operators one can try to impose a single $\tilde{Z}^{H}_2$ discrete symmetry. One should note that the imposition of an additional discrete symmetry to stabilize the proton is a generic feature of many phenomenologically viable SUSY models. In our model building strategy we use the $SU(5)$ SUSY GUT as a guideline. Indeed, the low--energy spectrum of the MSSM, in addition to the complete $SU(5)$ multiplets, contains an extra pair of doublets from the $5$ and $\overline{5}$ fundamental representations, which play the role of the Higgs fields that break EW symmetry. In the MSSM the potentially dangerous operators, which lead to rapid proton decay, are forbidden by the matter parity $Z_{2}^{M}$ under which Higgs doublets are even while all matter superfields, which fill in complete $SU(5)$ representations, are odd. Following this inspirational example we augment the three 27-plets of $E_6$ by a number of components $M_{l}$ and $\overline{M}_l$ from extra $27'_l$ and $\overline{27'}_l$ below the GUT scale. Because additional pairs of multiplets $M_{l}$ and $\overline{M}_l$ have opposite $U(1)_{Y}$, $U(1)_{\psi}$ and $U(1)_{\chi}$ charges, their contributions to the anomalies cancel identically. 
As in the case of the MSSM we allow the set of multiplets $M_{l}$ to be used for the breakdown of gauge symmetry. If the corresponding set includes $H^u\equiv H_u$, $H^d\equiv H_d$, $S$ and $N^c\equiv N^c_H$ then the $SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$ symmetry can be broken down to the $U(1)_{em}$ associated with electromagnetism. The VEVs of $S$ and $N^c$ break $U(1)_{\psi}$ and $U(1)_{\chi}$ entirely while the $SU(2)_W\times U(1)_Y$ symmetry remains intact. When the neutral components of $H_u$ and $H_d$ acquire non--zero VEVs, the $SU(2)_W\times U(1)_Y$ symmetry gets broken to $U(1)_{em}$ and the masses of all quarks and charged leptons are generated. As in the case of the MSSM we assume that all multiplets $M_{l}$ are even under the $\tilde{Z}^{H}_2$ symmetry while the three copies of the complete fundamental representations of $E_6$ are odd. This forbids couplings in the superpotential that come from $27_i \times 27_j \times 27_k$. On the other hand the $\tilde{Z}^{H}_2$ symmetry allows the Yukawa interactions that stem from $27'_l \times 27'_m \times 27'_n$ and $27'_l \times 27_i \times 27_k$. The multiplets $M_{l}$ have to be even under the $\tilde{Z}^{H}_2$ symmetry because some of them are expected to acquire VEVs. Otherwise the VEVs of the corresponding fields would lead to the breakdown of the discrete $\tilde{Z}^{H}_2$ symmetry, giving rise to baryon and lepton number violating operators in general. If the set of multiplets $M_{l}$ includes only one pair of doublets $H_d$ and $H_u$, the $\tilde{Z}^{H}_2$ symmetry defined above makes it possible to suppress unwanted FCNC processes at the tree level, since down-type quarks and charged leptons couple to just one Higgs doublet $H_d$, whereas the up-type quarks couple to $H_u$ only. The superfields $\overline{M}_l$ can be either odd or even under this $\tilde{Z}^{H}_2$ symmetry. 
Depending on whether these fields are even or odd under $\tilde{Z}^{H}_2$, a subset of the terms in the most general renormalizable superpotential can be written as \begin{equation} \begin{array}{c} W_{\rm{total}}=Y'_{lmn} 27'_l 27'_m 27'_n + Y_{lij} 27'_l 27_i 27_j + \tilde{Y}_{lmn} \overline{27'}_l \overline{27'}_m \overline{27'}_n +\\[2mm] + \mu'_{il} 27_i \overline{27'}_l + \tilde{\mu}'_{ml} 27'_m \overline{27'}_l...\,, \end{array} \label{4} \end{equation} where $Y'_{lmn}$ and $Y_{lij}$ are Yukawa couplings and $\mu'_{il}$ and $\tilde{\mu}'_{ml}$ are mass parameters. Also one should keep in mind that only the $M_{l}$ and $\overline{M}_l$ components of $27'_l$ and $\overline{27'}_l$ appear below the GUT scale. If $\overline{M}_l$ is odd under the $\tilde{Z}^{H}_2$ symmetry then the terms $\tilde{\mu}'_{ml} 27'_m \overline{27'}_l$ and $\tilde{Y}_{lmn}\overline{27'}_l \overline{27'}_m \overline{27'}_n $ are forbidden while $\mu'_{il}$ can have non-zero values. When $\overline{M}_l$ is even, $\mu'_{il}$ vanish whereas $\tilde{\mu}'_{ml} 27'_m \overline{27'}_l$ and $\tilde{Y}_{lmn}\overline{27'}_l \overline{27'}_m \overline{27'}_n $ are allowed by the $\tilde{Z}^{H}_2$ symmetry. In general the mass parameters $\mu'_{il}$ and $\tilde{\mu}'_{ml}$ are expected to be of the order of the GUT scale. In order to allow some of the $\overline{M}_l$ multiplets to survive to low energies we assume that the corresponding mass terms are forbidden at high energies and get induced at some intermediate scale which is much lower than $M_X$. The VEVs of the superfields $N^c_H$ and $\overline{N}_H^c$ (that originate from $27'_N$ and $\overline{27'}_N$) can be used not only for the breakdown of the $U(1)_{\psi}$ and $U(1)_{\chi}$ gauge symmetries, but also to generate Majorana masses for the right--handed neutrinos that can be induced through the interactions \begin{equation} \Delta W_{N}=\displaystyle\frac{\varkappa_{ij}}{M_{Pl}}(27_i\,\overline{27'}_{N})(27_j\,\overline{27'}_{N})\,. 
\label{8} \end{equation} The non--renormalizable operators (\ref{8}) give rise to right--handed neutrino masses which are substantially lower than the VEVs of $N^c_H$ and $\overline{N}_H^c$. Because the observed pattern of the left--handed neutrino masses and mixings can be naturally reproduced by means of the seesaw mechanism if the right--handed neutrinos are superheavy, the $N^c_H$ and $\overline{N}_H^c$ are expected to acquire VEVs $<N_H^c>\simeq<\overline{N}_H^c>\lesssim M_X$. This implies that the $U(1)_{\psi}\times U(1)_{\chi}$ symmetry is broken down to $U(1)_{N}$ near the GUT scale, where the $U(1)_{N}$ symmetry is a linear superposition of $U(1)_{\psi}$ and $U(1)_{\chi}$, i.e. \begin{equation} U(1)_N=\dfrac{1}{4} U(1)_{\chi}+\dfrac{\sqrt{15}}{4} U(1)_{\psi}\,, \label{7} \end{equation} under which right-handed neutrinos have zero charges. Since $N^c_H$ and $\overline{N}_H^c$ acquire VEVs, both supermultiplets must be even under the $\tilde{Z}^{H}_2$ symmetry. At the same time the VEVs of $N^c_H$ and $\overline{N}_H^c$ may break the $U(1)_{B-L}$ symmetry. In particular, as follows from Eq.~(\ref{3}), the VEV of $N^c_H$ can induce the bilinear terms $M^{L}_{ij} (H^u_{i} L_{j})$ and $M^{B}_{ij} (D_i d^c_j)$ in the superpotential. Although such a breakdown of gauge symmetry might be possible, the extra particles tend to be rather heavy in this case and thus irrelevant for collider phenomenology. Therefore we shall assume further that the couplings of $N^c_H$ to $27_i$ are forbidden. This, for example, can be achieved by imposing an extra discrete symmetry $Z_n$. Although this symmetry can forbid the interactions of $N^c_H$ with the three complete $27_i$ representations of $E_6$, it should allow the non--renormalizable interactions (\ref{8}) that induce the large Majorana masses for the right-handed neutrinos. These requirements are fulfilled if the Lagrangian is invariant under the $Z_2$ symmetry transformations $N^c_H\to -N^c_H$ and $\overline{N}_H^c\to -\overline{N}_H^c$. 
Alternatively, one can impose a $Z_n$ symmetry ($n>2$) under which only $N^c_H$ transforms. The invariance of the Lagrangian with respect to this symmetry implies that the mass term $\mu_H N^c_H \overline{N}_H^c$ in the superpotential (\ref{4}) is forbidden. On the other hand this symmetry allows a non--renormalizable term in the superpotential \begin{equation} \Delta W_{N^c_{H}} = \varkappa\, \displaystyle\frac{(N^c_H \overline{N}_H^c)^n}{M^{2n-3}_{Pl}}\,. \label{5} \end{equation} In this case $N^c_H$ and $\overline{N}_H^c$ can develop VEVs along the $D$--flat direction so that \begin{equation} <N_H^c>\simeq<\overline{N}_H^c>\sim M_{Pl} \cdot \biggl[ \displaystyle\frac{1}{\varkappa}\frac{M_S}{M_{Pl}}\biggr]^{\displaystyle\frac{1}{2n-2}}\,, \label{6} \end{equation} where $M_S$ is the low--energy supersymmetry breaking scale. This mechanism permits one to generate $<N_H^c>\, \gtrsim 10^{14}\,\mbox{GeV}$, resulting in right-handed neutrino masses of order of $$ \varkappa_{ij} M_{Pl} \cdot \biggl[ \displaystyle\frac{1}{\varkappa}\frac{M_S}{M_{Pl}}\biggr]^{\displaystyle\frac{1}{n-1}} \gtrsim 10^{11}\,\mbox{GeV}\,. 
$$ \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & $27_i$ & $27_i$ &$27'_{H_u}$&$27'_{S}$& $\overline{27'}_{H_u}$&$\overline{27'}_{S}$&$27'_N$&$27'_{L}$&$27'_{d}$\\ & & &$(27'_{H_d})$& &$(\overline{27'}_{H_d})$& &$(\overline{27'}_N)$&$(\overline{27'}_L)$&$(\overline{27'}_{d})$\\ \hline &$Q_i,u^c_i,d^c_i,$&$\overline{D}_i,D_i,$ & $H_u$ & $S$ & $\overline{H}_u$&$\overline{S}$&$N^c_H$&$L_4$&$d^c_4$\\ &$L_i,e^c_i,N^c_i$ & $H^d_{i},H^u_{i},S_i$& $(H_d)$ & & $(\overline{H}_d)$&&$(\overline{N}_H^c)$&$(\overline{L}_4)$&$(\overline{d^c}_4)$\\ \hline $\tilde{Z}^{H}_2$ & $-$ & $-$ & $+$ & $+$ & $-$&$\pm$&$+$&$+$&$+$\\ \hline $Z_{2}^{M}$ & $-$ & $+$ & $+$ & $+$ & $+$&$+$&$-$&$-$&$-$\\ \hline $Z_{2}^{E}$ & $+$ & $-$ & $+$ & $+$ & $-$&$\pm$&$-$&$-$&$-$\\ \hline \end{tabular} \caption{Transformation properties of different components of $E_6$ multiplets under $\tilde{Z}^H_2$, $Z_{2}^{M}$ and $Z_{2}^{E}$ discrete symmetries.} \label{tab1} \end{table} The mechanism of the gauge symmetry breaking discussed above ensures that the low--energy effective Lagrangian is automatically invariant under the matter parity $Z_{2}^{M}$. Such spontaneous breakdown of the $U(1)_{\psi}\times U(1)_{\chi}$ gauge symmetry can occur because $Z_{2}^{M}$ is a discrete subgroup of $U(1)_{\psi}$ and $U(1)_{\chi}$. This follows from the $U(1)_{\psi}$ and $U(1)_{\chi}$ charge assignments presented in Eq.~(\ref{1}). Thus in the considered case the VEVs of $N^c_H$ and $\overline{N}_H^c$ break $U(1)_{\psi}\times U(1)_{\chi}$ gauge symmetry down to $U(1)_{N}\times Z_{2}^{M}$. As a consequence the low--energy effective Lagrangian is invariant under both $Z_{2}^{M}$ and $\tilde{Z}^{H}_2$ discrete symmetries. Moreover the $\tilde{Z}^{H}_2$ symmetry is a product of \begin{equation} \tilde{Z}^{H}_2 = Z_{2}^{M}\times Z_{2}^{E}\,, \label{9} \end{equation} where $Z_{2}^{E}$ is associated with most of the exotic states. 
In other words all exotic quarks and squarks, Inert Higgs and Higgsino multiplets as well as SM singlet and singlino states that do not acquire VEVs are odd under the $Z_{2}^{E}$ symmetry. The transformation properties of the different components of the $27_i$, $27'_l$ and $\overline{27'}_l$ multiplets under the $\tilde{Z}^{H}_2$, $Z_{2}^{M}$ and $Z_{2}^{E}$ symmetries are summarized in Table~\ref{tab1}. Since the Lagrangian of the considered $E_6$ inspired models is invariant under the $Z_{2}^{M}$ and $\tilde{Z}^{H}_2$ symmetries, it is also invariant under the transformations of the $Z_{2}^{E}$ symmetry. Because $Z_{2}^{E}$ is conserved, the lightest exotic state, which is odd under this symmetry, is absolutely stable and contributes to the relic density of dark matter. It is also well known that in SUSY models the lightest supersymmetric particle (LSP), i.e. the lightest $R$--parity odd particle ($Z_{2}^{R}=(-1)^{3(B-L)+2s}$), must be stable. If in the considered models the lightest exotic state (i.e. the state with $Z_{2}^{E}=-1$) has even $R$--parity then the lightest $R$--parity odd state cannot decay, as usual. When the lightest exotic state is an $R$--parity odd particle, either the lightest $R$--parity even exotic state or the next-to-lightest $R$--parity odd state with $Z_{2}^{E}=+1$ must be absolutely stable. Thus the considered $E_6$ inspired SUSY models contain at least two dark-matter candidates. The residual extra $U(1)_{N}$ gauge symmetry gets broken by the VEV of the SM--singlet superfield $S$ (and possibly $\overline{S}$). The VEV of the field $S$ induces the mass of the $Z'$ associated with the $U(1)_{N}$ symmetry as well as the masses of all exotic quarks and inert Higgsinos. If $S$ acquires a VEV of order $10-100\,\mbox{TeV}$ (or even lower), the lightest exotic particles can be produced at the LHC. This is the most interesting scenario and the one that we focus on here. 
In some cases the superfield $\overline{S}$ may also acquire a non--zero VEV breaking the $U(1)_{N}$ symmetry, as we will discuss later. If this is the case then $\overline{S}$ should be even under the $\tilde{Z}^{H}_2$ symmetry. Otherwise the superfield $\overline{S}$ can be $\tilde{Z}^{H}_2$ odd. The above considerations indicate that the set of multiplets $M_{l}$ has to contain at least $H_u$, $H_d$, $S$ and $N^c_H$ in order to guarantee the appropriate breakdown of the gauge symmetry in the rank--6 $E_6$ inspired SUSY models. However if the set of $\tilde{Z}^{H}_2$ even supermultiplets $M_{l}$ involves only $H_u$, $H_d$, $S$ and $N^c_H$ then the lightest exotic quarks are extremely long--lived particles. Indeed, in the considered case the $\tilde{Z}^{H}_2$ symmetry forbids all Yukawa interactions in $W_1$ and $W_2$ that allow the lightest exotic quarks to decay. Moreover the Lagrangian of such a model is invariant not only with respect to $U(1)_L$ and $U(1)_B$ but also under the $U(1)_D$ symmetry transformations \begin{equation} D\to e^{i\alpha} D\,,\qquad\qquad \overline{D}\to e^{-i\alpha}\overline{D}\,. \label{10} \end{equation} The $U(1)_D$ invariance ensures that the lightest exotic quark is very long--lived. The $U(1)_L$, $U(1)_B$ and $U(1)_D$ global symmetries are expected to be broken by a set of non--renormalizable operators which are suppressed by inverse powers of the GUT scale $M_X$ or $M_{Pl}$. These operators give rise to the decays of the exotic quarks but do not lead to rapid proton decay. Since the extended gauge symmetry in the considered rank--6 $E_6$ inspired SUSY models forbids any dimension five operators that break the $U(1)_D$ global symmetry, the lifetime of the lightest exotic quarks is expected to be of order of \begin{equation} \tau_D\gtrsim M_X^4/\mu_D^5\,, \label{11} \end{equation} where $\mu_D$ is the mass of the lightest exotic quark. 
When $\mu_D\simeq \mbox{TeV}$ the lifetime of the lightest exotic quarks is $\tau_D\gtrsim 10^{49}\,\mbox{GeV}^{-1}\sim 10^{17}\,\mbox{years}$, i.e. considerably longer than the age of the Universe. The long--lived exotic quarks would have been copiously produced during the very early epochs of the Big Bang. Those lightest exotic quarks which survive annihilation would subsequently have been confined in heavy hadrons which would annihilate further. The remaining heavy hadrons originating from the Big Bang should be present in terrestrial matter. There are very strong upper limits on the abundances of nuclear isotopes which contain such stable relics in the mass range from $1\,\mbox{GeV}$ to $10\,\mbox{TeV}$. Different experiments set limits on their relative concentrations from $10^{-15}$ to $10^{-30}$ per nucleon \cite{42}. At the same time various theoretical estimates \cite{43} show that if such remnant particles existed in nature today their concentration would be expected to be at the level of $10^{-10}$ per nucleon. Therefore $E_6$ inspired models with very long--lived exotic quarks are ruled out. To ensure that the lightest exotic quarks decay within a reasonable time, the set of $\tilde{Z}^{H}_2$ even supermultiplets $M_{l}$ needs to be supplemented by some components of a $27$-plet that carry $SU(3)_C$ color or lepton number. In this context we consider two scenarios that lead to different collider signatures associated with the exotic quarks. In the simplest case ({\bf scenario A}) the set of $\tilde{Z}^{H}_2$ even supermultiplets $M_{l}$ involves the lepton superfields $L_4$ and/or $e^c_4$ that survive to low energies. This implies that $\overline{D}_i$ and $D_i$ can interact with leptons and quarks only, while the couplings of these exotic quarks to a pair of quarks are forbidden by the postulated $\tilde{Z}^{H}_2$ symmetry. Then baryon number is conserved and the exotic quarks are leptoquarks. 
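The lifetime quoted above can be reproduced from Eq.~(\ref{11}) with illustrative inputs (a Python sketch; the GUT scale value $M_X\sim 2\times 10^{16}\,\mbox{GeV}$ is an assumption for the purpose of the estimate, not fixed by the text):

```python
# Sanity check of tau_D ~ M_X^4 / mu_D^5, Eq. (11), and of the
# conversion 1e49 GeV^-1 ~ 1e17 years quoted in the text.
M_X, mu_D = 2.0e16, 1.0e3          # GeV (M_X is an assumed GUT-scale value)
hbar_in_GeV_s = 6.582e-25          # 1 GeV^-1 = 6.582e-25 s
seconds_per_year = 3.156e7

tau_in_inverse_GeV = M_X**4 / mu_D**5
tau_in_years = tau_in_inverse_GeV * hbar_in_GeV_s / seconds_per_year

assert tau_in_inverse_GeV > 1e49   # matches tau_D >~ 1e49 GeV^-1
assert tau_in_years > 1e17         # far longer than the ~1.4e10 yr age
```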
In this paper we restrict our consideration to the $E_6$ inspired SUSY models that lead to the approximate unification of the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ gauge couplings at some high energy scale $M_X$. This requirement implies that in the one--loop approximation the gauge coupling unification is expected to be almost exact. On the other hand it is well known that the one--loop gauge coupling unification in SUSY models remains intact if the MSSM particle content is supplemented by complete representations of $SU(5)$ (see for example \cite{Hempfling:1995rb}). Thus we require that the extra matter beyond the MSSM fill in complete $SU(5)$ representations. In {\bf scenario A} this requirement can be fulfilled if $\overline{H}_u$ and $\overline{H}_d$ are odd under the $\tilde{Z}^{H}_2$ symmetry while $\overline{L}_4$ is a $\tilde{Z}^{H}_2$ even supermultiplet. Then $\overline{H}_u$ and $\overline{H}_d$ from the $\overline{27'}_l$ can get combined with the superposition of the corresponding components from $27_i$ so that the resulting vectorlike states gain masses of order of $M_X$. The supermultiplets $L_4$ and $\overline{L}_4$ are also expected to form vectorlike states. However these states are required to be light enough to ensure that the lightest exotic quarks decay sufficiently fast\footnote{Note that the superfields $e^c_4$ and $\overline{e^c}_4$ are not allowed to survive to low energies because they spoil the one--loop gauge coupling unification.}. The appropriate mass term $\mu_L L_4\overline{L}_4$ in the superpotential can be induced within SUGRA models just after the breakdown of local SUSY if the K\"ahler potential contains an extra term $(Z_L (L_4\overline{L}_4)+h.c.)$ \cite{45}. The presence of the bosonic and fermionic components of $\overline{S}$ at low energies is not constrained by the unification of the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ gauge couplings since $\overline{S}$ is an SM singlet superfield. 
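The claim that complete $SU(5)$ multiplets leave the one--loop unification intact can be illustrated with standard Dynkin-index bookkeeping: a $5+\overline{5}$ pair shifts all three one--loop beta-function coefficients by the same amount (a Python sketch; GUT-normalised hypercharge, $\Delta b_1 = \frac{3}{5}\sum Y^2$):

```python
# Each entry: (SU(3) Dynkin index, SU(2) Dynkin index, sum of Y^2 over
# components) for one chiral superfield; Y in SM normalization.
# A 5-plet of SU(5) = color triplet (Y = -1/3) + weak doublet (Y = 1/2);
# the 5bar contributes identically at one loop.
five = [(0.5, 0.0, 3 * (1 / 3) ** 2),   # color triplet
        (0.0, 0.5, 2 * (1 / 2) ** 2)]   # weak doublet
pair = five + five                      # 5 + 5bar

db3 = sum(t3 for t3, _, _ in pair)
db2 = sum(t2 for _, t2, _ in pair)
db1 = 0.6 * sum(y2 for _, _, y2 in pair)

# equal shifts of b3, b2, b1: the one-loop unification point is unchanged
assert abs(db3 - 1.0) < 1e-12
assert abs(db2 - 1.0) < 1e-12
assert abs(db1 - 1.0) < 1e-12
```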
If $\overline{S}$ is odd under the $\tilde{Z}^{H}_2$ symmetry then it can get combined with the superposition of the appropriate components of $27_i$. The corresponding vectorlike states may be either superheavy ($\sim M_X$) or gain TeV scale masses. When $\overline{S}$ is $\tilde{Z}^{H}_2$ even superfield then its scalar component is expected to acquire a non-zero VEV breaking $U(1)_N$ gauge symmetry. Thus {\bf scenario A} implies that in the simplest case the low energy matter content of the considered $E_6$ inspired SUSY models involves: \begin{equation} \begin{array}{c} 3\left[(Q_i,\,u^c_i,\,d^c_i,\,L_i,\,e^c_i,\,N_i^c)\right] +3(D_i,\,\bar{D}_i)+2(S_{\alpha})+2(H^u_{\alpha})+2(H^d_{\alpha})\\[2mm] +L_4+\overline{L}_4+N_H^c+\overline{N}_H^c+S+H_u+H_d\,, \end{array} \label{12} \end{equation} where the right--handed neutrinos $N^c_i$ are expected to gain masses at some intermediate scale, while the remaining matter survives down to the EW scale. In Eq.~(\ref{12}) $\alpha=1,2$ and $i=1,2,3$. Integrating out $N^c_i$, $N^c_H$ and $\overline{N}_H^c$ as well as neglecting all suppressed non-renormalisable interactions one gets an explicit expression for the superpotential in the considered case \begin{equation} \begin{array}{c} W_{A} = \lambda S (H_u H_d) + \lambda_{\alpha\beta} S (H^d_{\alpha} H^u_{\beta}) + \kappa_{ij} S (D_{i} \overline{D}_{j}) + \tilde{f}_{\alpha\beta} S_{\alpha} (H^d_{\beta} H_u) + f_{\alpha\beta} S_{\alpha} (H_d H^u_{\beta}) \\[2mm] + g^D_{ij} (Q_i L_4) \overline{D}_j + h^E_{i\alpha} e^c_{i} (H^d_{\alpha} L_4) + \mu_L L_4\overline{L}_4 + W_{MSSM}(\mu=0)\,. \end{array} \label{13} \end{equation} A second scenario, that allows the lightest exotic quarks to decay within a reasonable time and prevents rapid proton decay, is realized when the set of multiplets $M_{l}$ together with $H_u$, $H_d$, $S$ and $N^c_H$ contains an extra $d^c_4$ superfield (instead of $L_4$) from $27'_{d}$. 
If the $\tilde{Z}^{H}_2$ even supermultiplet $d^c_4$ survives to low energies then the exotic quarks are allowed to have non-zero Yukawa couplings to a pair of quarks, which permit their decays. They can also interact with $d^c_4$ and right-handed neutrinos. However, if the Majorana right-handed neutrinos are very heavy ($\sim M_X$) then the interactions of exotic quarks with leptons are extremely suppressed. As a consequence, in this {\bf scenario B} $\overline{D}_i$ and $D_i$ manifest themselves in the Yukawa interactions as superfields with baryon number $\left(\pm\dfrac{2}{3}\right)$. Although in {\bf scenario B} the baryon and lepton number violating operators are expected to be suppressed by inverse powers of the masses of the right--handed neutrinos they can still lead to rapid proton decay. The Yukawa interactions of the $\tilde{Z}^{H}_2$ even superfield $d^c_4$ with other supermultiplets of ordinary and exotic matter can be written in the following form \begin{equation} \Delta W_{d^c_4} = h^D_{i k} d^c_4 (H^d_{i} Q_k) + g^{q}_{ij}\overline{D}_i d^c_4 u^c_j+ g^N_{ij} N_i^c D_j d^c_4\,. \label{14} \end{equation} Integrating out the Majorana right-handed neutrinos one obtains in the leading approximation \begin{equation} \Delta W_{d^c_4} \to h^D_{i k} d^c_4 (H^d_{i} Q_k) + g^{q}_{ij}\overline{D}_i d^c_4 u^c_j+ \frac{\tilde{\varkappa}_{ij}}{M_N} (L_i H_u) (D_j d^c_4)\,, \label{15} \end{equation} where $M_N$ is an effective seesaw scale which is determined by the masses and couplings of $N^c_i$ and $\tilde{\varkappa}_{ij}\sim g^N_{ij}$. In the considered case baryon and lepton number violation takes place only when all three terms in Eqs.~(\ref{14})--(\ref{15}) are present in the superpotential. If $g^N_{ij}=0$ ($\tilde{\varkappa}_{ij}=0$) or $g^{q}_{ij}=0$ then baryon and lepton number conservation requires the exotic quarks to be either diquarks or leptoquarks, respectively.
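The structure of the last term in Eq.~(\ref{15}) can be understood schematically. Assuming, for illustration, a Dirac neutrino Yukawa coupling $h^{\nu}_{ij}\,(L_i H_u) N^c_j$ in addition to the last term in Eq.~(\ref{14}), the equation of motion for the heavy superfields $N^c_i$ gives
\begin{equation}
N^c_i \simeq -\frac{1}{M_N}\left[h^{\nu}_{ji}\,(L_j H_u) + g^N_{ij}\, D_j d^c_4\right]\,,
\end{equation}
and substituting this back generates (up to a sign) the operator $\dfrac{\tilde{\varkappa}_{ij}}{M_N} (L_i H_u)(D_j d^c_4)$ with $\tilde{\varkappa}_{ij}\sim h^{\nu}\, g^N_{ij}$, consistent with $\tilde{\varkappa}_{ij}\sim g^N_{ij}$ for $h^{\nu}=O(1)$.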
When the $h^D_{i k}$ vanish the conservation of the baryon and lepton numbers implies that the superfields $D_i$, $\overline{D}_i$ and $d^c_4$ have the following $U(1)_L$ and $U(1)_B$ charges: $B_D=-B_{\overline{D}}=-B_{d^c_4}=-1/6$ and $L_D=-L_{\overline{D}}=L_{d^c_4}=-1/2$. This consideration indicates that when all three terms are present in Eqs.~(\ref{14})--(\ref{15}) the $U(1)_L$ and $U(1)_B$ global symmetries cannot be preserved. It means that in the leading approximation the proton decay rate involves all three types of the corresponding Yukawa couplings and has to vanish when the couplings of at least one type go to zero. In practice, the proton lifetime is determined by the one--loop box diagram that leads to the dimension seven operator \begin{equation} \mathcal{L}_{p}\simeq \left(\frac{c_{ijkl}}{M_S^2}\right) \left(\frac{\langle H_u \rangle}{M_N}\right) \Biggl[\epsilon_{\alpha\beta\gamma}\overline{u^c}_{\alpha i} d_{\beta j} \overline{\nu}_{k} d_{\gamma l}\Biggr]\,, \label{16} \end{equation} where $\langle H_u \rangle = v_2/\sqrt{2}$ and $c_{ijkl}\propto \tilde{\varkappa}\, g^{q}\, (h^D)^2$. In Eq.~(\ref{16}) Greek indices denote the color degrees of freedom while $SU(2)$ indices are suppressed. Here we assume that all particles propagating in the loop have masses of the order of $M_S$. For $M_N\gtrsim 10^{11}\,\mbox{GeV}$ and $h^D_{i k} \sim g^{q}_{ij} \sim g^N_{ij}$ the appropriate suppression of the proton decay rate can be achieved if the corresponding Yukawa couplings are less than $10^{-5}$. Once again, the requirement of the approximate unification of the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ gauge couplings constrains the low energy matter content in {\bf scenario B}. The concept of gauge coupling unification implies that perturbation theory provides an adequate description of the RG flow of the gauge couplings at least up to the GUT scale $M_X$.
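A crude numerical illustration of this bound can be obtained by taking $M_S\sim 10^{3}\,\mbox{GeV}$, $\langle H_u \rangle\sim 10^{2}\,\mbox{GeV}$, $M_N\sim 10^{11}\,\mbox{GeV}$ and $c_{ijkl}\sim \tilde{\varkappa}\, g^{q}\, (h^D)^2\sim 10^{-20}$ for couplings of order $10^{-5}$, ignoring loop factors and numerical coefficients. Then the coefficient of the operator (\ref{16}) is of order
\begin{equation}
\frac{c_{ijkl}\,\langle H_u \rangle}{M_S^2\,M_N}\sim \frac{10^{-20}\times 10^{2}\,\mbox{GeV}}{(10^{3}\,\mbox{GeV})^2\times 10^{11}\,\mbox{GeV}} \sim 10^{-35}\,\mbox{GeV}^{-2}\sim \frac{1}{(3\cdot 10^{17}\,\mbox{GeV})^2}\,,
\end{equation}
so that the effective suppression scale lies above the GUT scale, indicating why Yukawa couplings below $10^{-5}$ are sufficient.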
The requirement of the validity of perturbation theory up to the scale $M_X$ sets a stringent constraint on the number of extra $SU(2)_W$ and $SU(3)_C$ supermultiplets that can survive to low energies in addition to three complete fundamental representations of $E_6$. For example, the applicability of perturbation theory up to high energies permits only one extra pair of $SU(3)_C$ triplet superfields to have masses of the order of the TeV scale. The same requirement limits the number of pairs of $SU(2)_W$ doublets to two. Because in {\bf scenario B} the $\tilde{Z}^{H}_2$ even supermultiplets $d^c_4$ and $\overline{d^c}_4$ are expected to form vectorlike states with TeV scale masses, the limit imposed by the validity of perturbation theory up to the scale $M_X$ is saturated. Then, in order to ensure that the extra matter beyond the MSSM fills in complete $SU(5)$ representations, $\overline{H}_u$ and $\overline{H}_d$ should survive to the TeV scale as well. As before we assume that these supermultiplets are odd under the $\tilde{Z}^{H}_2$ symmetry so that they can get combined with the superposition of the corresponding components from $27_i$ at low energies forming vectorlike states. Again the superfield $\overline{S}$ may or may not survive to the TeV scale. It can be either even or odd under the $\tilde{Z}^{H}_2$ symmetry. If $\overline{S}$ is $\tilde{Z}^{H}_2$ even, it should survive to low energies and its scalar component is expected to get a VEV. Following the above discussion the low energy matter content in the simplest case of {\bf scenario B} may be summarized as: \begin{equation} \begin{array}{c} 3\left[(Q_i,\,u^c_i,\,d^c_i,\,L_i,\,e^c_i,\,N_i^c)\right] +3(D_i,\,\bar{D}_i)+3(H^u_{i})+3(H^d_{i})+2(S_{\alpha})\\[2mm] +d^c_4+\overline{d^c}_4+N_H^c+\overline{N}_H^c+H_u+\overline{H}_u +H_d+\overline{H}_d+S\,. \end{array} \label{17} \end{equation} All states in Eq.~(\ref{17}) are expected to be considerably lighter than the GUT scale $M_X$.
Assuming that $N^c_i$, $N^c_H$ and $\overline{N}_H^c$ gain intermediate scale masses the renormalizable part of the TeV scale superpotential associated with the {\bf scenario B} can be written as \begin{equation} \begin{array}{c} W_{B} = \lambda S (H_u H_d) + \lambda_{ij} S (H^d_{i} H^u_{j}) + \kappa_{ij} S (D_{i} \overline{D}_{j}) + \tilde{f}_{\alpha i} S_{\alpha} (H^d_{i} H_u) + f_{\alpha i} S_{\alpha} (H_d H^u_{i}) \\[2mm] + g^{q}_{ij}\overline{D}_i d^c_4 u^c_j + h^D_{ij} d^c_4 (H^d_{i} Q_j) + \mu_d d^c_4\overline{d^c}_4 + \mu^u_{i} H^u_{i} \overline{H}_u + \mu^d_{i} H^d_{i} \overline{H}_d + W_{MSSM}(\mu=0)\,. \end{array} \label{18} \end{equation} The superpotential (\ref{18}) contains a set of the TeV scale mass parameters, i.e. $\mu_d$, $\mu^u_{i}$, $\mu^d_{i}$. These are introduced to avoid massless fermionic states associated with $d^c_4$, $\overline{d^c}_4$, $\overline{H}_u$ and $\overline{H}_d$ supermultiplets and can be induced after the breakdown of local SUSY as it has been discussed earlier. On the other hand the superpotential (\ref{18}) also contains the Yukawa couplings $g^{q}_{ij}$ and $h^D_{ij}$ which are expected to be small in order to avoid rapid proton decay. The appropriate suppression of the corresponding Yukawa couplings and mass parameters $\mu_d$, $\mu^u_{i}$ and $\mu^d_{i}$ can be achieved if the Lagrangian of the $E_6$ inspired model is invariant under the discrete $Z_k$ symmetry which gets broken spontaneously at the intermediate scale. As an example one can consider the model with extra SM singlet superfield $\Phi$ which transforms under the discrete $Z_k$ symmetry. 
For concreteness here we assume that at high energies the Lagrangian of the model is invariant under the $Z_6$ symmetry transformations \begin{equation} \Phi\to \omega\,\Phi\,,\qquad d^c_4\to \omega^5\, d^c_4\,,\qquad \overline{d^c}_4\to \omega^3\, \overline{d^c}_4\,,\qquad \overline{H}_u\to \omega^2\,\overline{H}_u\,,\qquad \overline{H}_d\to \omega^2\,\overline{H}_d\,, \label{19} \end{equation} where $\omega=e^{i\pi/3}$. Then the part of the superpotential that depends on the $d^c_4$, $\overline{d^c}_4$, $\overline{H}_u$, $\overline{H}_d$ and $\Phi$ takes the form \begin{equation} \begin{array}{rcl} \Delta W_{Z_6} &=& \displaystyle\frac{\Phi}{M_{Pl}}\biggl[\sigma_{ij} d^c_4 (H^d_{i} Q_j) + \tilde{\sigma}_{ij} \overline{D}_i d^c_4 u^c_j+ \hat{\sigma}_{ij} N_i^c D_j d^c_4\biggr]\\[3mm] &+&\displaystyle\frac{\Phi^4}{M_{Pl}^{3}}\biggl[\eta_d d^c_4\overline{d^c}_4 + \eta^u_{i} H^u_{i} \overline{H}_u + \eta^d_{i} H^d_{i} \overline{H}_d \biggr] + \sigma\displaystyle\frac{\Phi^6}{M_{Pl}^{3}} + ...\,. \end{array} \label{20} \end{equation} At the intermediate scale the imposed $Z_6$ symmetry may be broken spontaneously by the VEV of the superfield $\Phi$ \begin{equation} <\Phi>\sim \biggl[\displaystyle\frac{M_S}{M_{Pl}}\biggr]^{1/4} M_{Pl}\simeq 10^{14}\,\mbox{GeV} \label{21} \end{equation} inducing bilinear mass terms in the superpotential and small Yukawa couplings of the $d^c_4$ supermultiplet to other superfields. The corresponding Yukawa couplings and mass parameters are given by \footnote{The same mechanism can be used for the generation of the mass term $\mu_L L_4\overline{L}_4$ in the scenario A.} \begin{equation} \mu_d \sim \mu^u_{i} \sim \mu^d_{i} \sim \dfrac{<\Phi^4>}{M_{Pl}^3}\simeq M_S\,, \qquad h^D_{i k} \sim g^{q}_{ij} \sim g^N_{ij} \lesssim \dfrac{<\Phi>}{M_{Pl}}\sim 10^{-4}\,. 
\label{22} \end{equation} Although {\bf scenarios A} and {\bf B} discussed in this section allow us to suppress baryon and lepton number violating operators and non-diagonal flavor transitions they have at least one drawback. Both scenarios imply that a number of incomplete $E_6$ multiplets survive below the scale $M_X$. In fact, the number of incomplete $E_6$ multiplets tends to be larger than the number of generations. Therefore the origin and mechanism resulting in the incomplete $E_6$ representations requires further justification. The splitting of GUT multiplets can be naturally achieved in the framework of orbifold GUTs. In the next section we present $5D$ and $6D$ orbifold GUT models that can lead to the {\bf scenarios A} and {\bf B} just below the GUT scale. \section{5D and 6D orbifold GUT models} The structure of the $E_6$ inspired SUSY models discussed in the previous section, its gauge group and field content, points towards an underlying GUT model based on the $E_6$ or its subgroup. The breaking of these GUT groups down to the $SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$ is in general rather involved and requires often large Higgs representations. In particular, the splitting of GUT multiplets (like doublet-triplet splitting within SU(5) GUT) requires either fine--tuning of parameters or additional, sophisticated mechanisms \cite{Masiero:1982fe}--\cite{Altarelli:2000fu}. Higher--dimensional theories offer new possibilities to describe gauge symmetry breaking. A simple and elegant scheme is provided by orbifold compactifications which have been considered for SUSY GUT models in five dimensions \cite{Kawamura:2000ev}--\cite{Braam:2010sy} and six dimensions \cite{5d+6d-susy-ogut}--\cite{Buchmuller:2004eg}. These models apply ideas that first appeared in string--motivated work \cite{Candelas:1985en}: the gauge symmetry is broken by identifications imposed on the gauge fields under the spacetime symmetries of an orbifold. 
In these models many good properties of GUTs, such as gauge coupling unification and charge quantization, are maintained while some unsatisfactory properties of the conventional breaking mechanism, like doublet-triplet splitting, are avoided. Recently, orbifold compactifications of the heterotic string have been constructed which can account for the SM in four dimensions and which have five--dimensional or six--dimensional GUT structures as an intermediate step, very similar to orbifold GUT models \cite{Buchmuller:2005jr}. Hence, orbifold compactifications provide an attractive starting point for attempts to embed the SM into higher dimensional string theories. \subsection{$SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ model in five dimensions} The simplest GUT group which unifies the gauge interactions of the SM is $SU(5)$ \cite{Georgi:1974sy}. Therefore we first analyze the higher dimensional SUSY GUT model based on the $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ gauge group which is a rank--6 subgroup of $E_6$. For simplicity we consider a single compact extra dimension $S^1$, $y (=x_5)$, and assume a fixed radius with size given by the GUT scale ($R\sim 1/M_X$). The orbifold $S^1/Z_2$ is obtained by dividing the circle $S^1$ by a $Z_2$ transformation which acts on $S^1$ according to $y\to -y$. The components of the $SU(5)$ supermultiplets that propagate in 5 dimensions transform under the specified $Z_2$ action as $\Phi(x_{\mu}, -y) = P \Phi(x_{\mu}, y)$, where $P$ acts on each component of the $SU(5)$ representation $\Phi$, making some components positive and some components negative, i.e. $P=(+,+,...-, -,...)$. The Lagrangian should be invariant under the $Z_2$ transformations\footnote{It is worth pointing out that the $Z_2$ invariance of the Lagrangian does not require that $P=\pm I$, where $I$ is the unit matrix. In general, the matrix $P$ only needs to satisfy the condition $P^2=I$.}.
The $Z_2$ transformation can be regarded as an equivalence relation that allows one to reduce the circle $S^1$ to the interval $y\in [0, \pi R]$. Here we consider a 5-dimensional space-time factorized into a product of the ordinary $4D$ Minkowski space time $M^4$ and the orbifold $S^1/(Z_2\times Z'_2)$. The orbifold $S^1/(Z_2\times Z'_2)$ is obtained by dividing $S^1/Z_2$ by another $Z_2$ transformation, denoted by $Z'_2$, which acts as $y'\to -y'$, with $y'\equiv y - \pi R/2$. Each reflection symmetry, $y\to -y$ and $y'\to -y'$, has its own orbifold parity, $P$ and $P'$, which are defined by \begin{equation} \begin{array}{c} \Phi(x,y)\to \Phi(x, -y) = P \Phi(x, y)\,,\\[2mm] \Phi(x,y')\to \Phi(x, -y') = P' \Phi(x, y') \end{array} \label{23} \end{equation} where $\Phi(x,y)$ is an $SU(5)$ multiplet field living in the $5D$ bulk, while $P$ and $P'$ are matrix representations of the two $Z_2$ operator actions which have eigenvalues $\pm 1$. All interactions must be invariant under the $Z_2\times Z'_2$ symmetry. Each reflection also introduces special points, $O$ and $O'$, located at $y=0$ and $y=\pi R/2\equiv \ell$, which are fixed points of the transformations. The equivalences associated with the two reflection symmetries allow one to work with the theory obtained by truncating to the physically irreducible interval $y\in [0, \ell]$ with the two $4D$ walls (branes) placed at the fixed points $y=0$ and $y=\ell$. There are only two inequivalent branes (the branes at $y=\pi R$ and $y=-\pi R/2$ are identified with those at $y=0$ and $y=\pi R/2$, respectively). Thus the physical space reduces to the interval $[0, \ell]$ with a length of $\pi R/2$.
Denoting the $5D$ bulk field with $(P,\,P')=(\pm 1, \pm 1)$ by $\phi_{\pm\pm}$ one obtains the following Fourier expansions \cite{Kawamura:2000ev}--\cite{Hebecker:2001wq}: \begin{eqnarray} \phi_{++}(x,y) =\sum_{n=0}^{\infty} \frac{1}{\sqrt{2^{\delta_{n,0}}}} \sqrt{\frac{4}{\pi R}} \phi^{(2n)}_{++}(x)\cos\frac{2ny}{R}\,, \label{24} \end{eqnarray} \begin{eqnarray} \phi_{+-}(x,y) =\sum_{n=0}^{\infty} \sqrt{\frac{4}{\pi R}} \phi^{(2n+1)}_{+-}(x)\cos\frac{(2n+1)y}{R}\,, \label{25} \end{eqnarray} \begin{eqnarray} \phi_{-+}(x,y) =\sum_{n=0}^{\infty} \sqrt{\frac{4}{\pi R}} \phi^{(2n+1)}_{-+}(x)\sin\frac{(2n+1)y}{R}\,, \label{26} \end{eqnarray} \begin{eqnarray} \phi_{--}(x,y) =\sum_{n=0}^{\infty} \sqrt{\frac{4}{\pi R}} \phi^{(2n+2)}_{--}(x)\sin\frac{(2n+2)y}{R}\,, \label{27} \end{eqnarray} where $n$ is a non--negative integer. From the $4D$ perspective the Fourier component fields $\phi^{(2n)}_{++}(x)$, $\phi^{(2n+1)}_{+-}(x)$, $\phi^{(2n+1)}_{-+}(x)$ and $\phi^{(2n+2)}_{--}(x)$ acquire masses $2n/R$, $(2n+1)/R$, $(2n+1)/R$ and $(2n+2)/R$ upon compactification. Note that only $\phi_{++}(x,y)$ and $\phi_{+-}(x,y)$ can exist on the $y=0$ brane. The fields $\phi_{++}(x,y)$ and $\phi_{-+}(x,y)$ are non--vanishing on the $y=\pi R/2$ brane, whereas the field $\phi_{--}(x,y)$ vanishes on both branes. Only $\phi_{++}(x,y)$ fields have zero--modes. Since full $SU(5)$ $5D$ multiplets $\Phi_i(x,y)$ can, in general, contain components with even and odd parities, $P$ and $P'$, the matter content of the massless sector can be smaller than that of the full $5D$ multiplet. Unless all components of $\Phi(x,y)$ have common parities, the gauge symmetry reduction occurs upon compactification. As in the case of the simplest orbifold GUT scenarios \cite{Kawamura:2000ev}--\cite{Hebecker:2001wq} we start from the model with the minimal SUSY in $5D$ (with $8$ real supercharges, corresponding to $N=2$ in 4D). 
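The parity assignments in Eqs.~(\ref{24})--(\ref{27}) can be checked directly from the mode functions. For example, for $\phi_{+-}$ one has, using $y=y'+\pi R/2$,
\begin{equation}
\cos\frac{(2n+1)y}{R}=\cos\left(\frac{(2n+1)y'}{R}+\frac{(2n+1)\pi}{2}\right) =(-1)^{n+1}\,\sin\frac{(2n+1)y'}{R}\,,
\end{equation}
which is even under $y\to -y$ but odd under $y'\to -y'$, confirming the parity $(+,-)$ and the absence of a zero mode in Eq.~(\ref{25}).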
We assume that the vector supermultiplets associated with the $SU(5)$, $U(1)_{\chi}$ and $U(1)_{\psi}$ interactions exist in the bulk $M^4\times S^1/(Z_2\times Z'_2)$. The $5D$ gauge supermultiplets contain vector bosons $A_{M}$ ($M=0,1,2,3,5$) and gauginos. The $5D$ gaugino is composed of two $4D$ Weyl fermions with opposite $4D$ chirality, $\lambda$ and $\lambda'$. In addition $5D$ vector supermultiplets have to involve real scalars $\sigma$ to ensure that the numbers of bosonic and fermionic degrees of freedom are equal. Thus $5D$ gauge supermultiplets can be decomposed into vector supermultiplets $V$ with physical components $(A_{\mu}, \lambda)$ and chiral multiplets $\Sigma$ with components $\biggl((\sigma+i A_5)/\sqrt{2},\lambda'\biggr)$ under $N=1$ supersymmetry in $4D$. These two $N=1$ supermultiplets also form $N=2$ vector supermultiplet in $4D$. In addition to the $5D$ vector supermultiplets we assume the presence of other $SU(5)$ representations as well as $SU(5)$ singlet superfields that carry non--zero $U(1)_{\chi}$ and $U(1)_{\psi}$ charges in the $5D$ bulk. The corresponding representations also contain $5D$ fermions. Since each $5D$ fermion state is composed of two $4D$ Weyl fermions, $\psi$ and $\psi^c$, SUSY implies that each $5D$ supermultiplet includes two complex scalars $\phi$ and $\phi^c$ as well. The states $\phi$, $\psi$, $\phi^c$ and $\psi^c$ form one $4D$ $N=2$ hypermultiplet that consists of two $4D$ $N=1$ chiral multiplets, $\hat{\Phi}\equiv (\phi,\,\psi)$ and $\hat{\Phi}^c \equiv (\phi^c,\,\psi^c)$, transforming as conjugate representations with each other under the gauge group. 
Taking into account that the derivative $\partial_5$ is odd under the reflection $Z_2$ one can show that the 5D SUSY Lagrangian is invariant under the following transformations \cite{Kawamura:2000ev} \begin{equation} \begin{array}{rcl} A_{\mu}(x,y) & \to &A_{\mu}(x, -y) = P A_{\mu}(x,y) P^{-1}\,,\\[1mm] A_{5}(x,y) & \to &A_{5}(x, -y) = - P A_{5}(x,y) P^{-1}\,,\\[1mm] \sigma(x,y) & \to &\sigma(x, -y) = - P \sigma(x,y) P^{-1}\,,\\[1mm] \lambda(x,y) & \to &\lambda(x, -y) = P \lambda(x,y) P^{-1}\,,\\[1mm] \lambda'(x,y)& \to &\lambda'(x, -y) = - P \lambda'(x,y) P^{-1}\,,\\[1mm] \phi_i(x,y) & \to &\phi_i(x, -y) = P \phi_i(x, y) \,,\\[1mm] \psi_i(x,y) & \to &\psi_i(x, -y) = P \psi_i(x, y) \,,\\[1mm] \phi_i^c(x,y)& \to &\phi_i^c(x, -y) = -P \phi_i^c(x, y) \,,\\[1mm] \psi_i^c(x,y)& \to &\psi_i^c(x, -y) = -P \psi_i^c(x, y) \,, \end{array} \label{28} \end{equation} where index $i$ represents different $SU(5)$ supermultiplets that exist in the bulk $M^4\times S^1/(Z_2\times Z'_2)$. In the case of $SU(5)$ the components of the corresponding $N=2$ vector supermultiplet in Eq.~(\ref{28}) are given by $V(x, y)=V^{A}(x, y) T^{A}$ and $\Sigma(x, y)=\Sigma^A(x, y) T^A$, where $T^A$ is the set of the $SU(5)$ generators ($A=1,2,...,24$). The transformations in Eq.~(\ref{28}) are associated with the $Z_2$ reflection symmetry. By replacing $y$ and $P$ by $y'$ and $P'$ in Eq.~(\ref{28}) one obtains $Z'_2$ transformations. Note that mass terms for $\phi_i$, $\psi_i$, $\phi^c_i$ and $\psi^c_i$ are allowed by $N=2$ SUSY but these terms are not compatible with the $P$ and $P'$ parity assignments as follows from Eq.~(\ref{28}). Therefore the zero--modes of these fields do not receive a bulk mass contribution. It is convenient to choose the matrix representation of the parity assignment $P$, expressed in the fundamental representation of $SU(5)$, to be $P=\mbox{diag}(+1, +1, +1, +1, +1)$ so that $V^{A}(x, -y) T^{A}=V^{A}(x, y) T^{A}$. 
This boundary condition does not break $SU(5)$ on the $O$ brane at $y=0$. However $4D$ $N=2$ supersymmetry gets broken by this parity assignment to $4D$ $N=1$ SUSY. This can be seen explicitly by examining the masses of the Kaluza--Klein (KK) towers of the fields. Indeed, according to the parity assignment $P$ only $A_{\mu}$, $\lambda$, $\phi$ and $\psi$ are allowed to have zero--modes whereas other components of the $N=2$ vector supermultiplet ($\sigma, \lambda'$) and $N=2$ hypermultiplets $(\phi^c_i,\,\psi^c_i)$ with odd parity P do not possess massless modes. For the $SU(5)$ gauge symmetry to provide an understanding of the quark and lepton quantum numbers, the three families of $27_i$ representations of $E_6$ should reside on the $O$ brane where the $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ gauge symmetry and $N=1$ SUSY remains intact. Then at low energies all ordinary quarks and leptons have to fill in complete $SU(5)$ multiplets. The $5D$ $SU(5)$ gauge symmetry is reduced to $4D$ $SU(3)_C\times SU(2)_W\times U(1)_Y$ gauge symmetry by choosing $P'=\mbox{diag}(-1, -1, -1, +1, +1)$ acting on the fundamental representation of $SU(5)$. This boundary condition breaks not only $SU(5)$ but also $4D$ $N=2$ SUSY to $4D$ $N=1$ SUSY on the $O'$ brane at $y=\ell$. The parity assignment associated with the $Z'_2$ reflection symmetry leads to the two types of the $SU(5)$ gauge generators $T^a$ and $T^{\hat{a}}$. All generators of the SM gauge group satisfy the condition \begin{equation} P'\,T^a\, P' = T^a\,. \label{29} \end{equation} Therefore the corresponding gauge fields $A^{a}_{\mu}(x,y)$ and gauginos $\lambda^a(x,y)$ are even under the reflections $Z_2$ and $Z'_2$ whereas $\sigma^a(x, y)$ and $\lambda^{'a}(x,y)$ are odd. As a consequence the KK expansions of vector bosons $A^{a}_{\mu}(x,y)$ and gauginos $\lambda^a(x,y)$ contain massless zero modes $A^{a(0)}_{\mu}(x)$ and $\lambda^{a(0)}(x)$ corresponding to the unbroken gauge symmetry of the SM. 
These zero modes form $4D$ $N=1$ vector supermultiplets. The KK modes $A^{a(2n)}_{5}(x)$ are swallowed by $A^{a(2n)}_{\mu}(x)$ resulting in the formation of a vector boson state with mass $2n/R$. The KK gaugino modes $\lambda^{a(2n)}(x)$ and $\lambda^{'a(2n)}(x)$ form a $4D$ fermion state with mass $2n/R$. The KK scalar mode $\sigma^{a(2n)}(x)$ also gains mass $2n/R$. The other gauge generators $T^{\hat{a}}$ of $SU(5)$ obey the relationship \begin{equation} P'\,T^{\hat{a}}\, P' = - T^{\hat{a}}\,, \label{30} \end{equation} which implies that $A^{\hat{a}}_{\mu}(x,y)$ and $\lambda^{\hat a}(x,y)$ are odd under the $Z'_2$ symmetry while $\sigma^{\hat{a}}(x, y)$ and $\lambda^{'\hat{a}}(x,y)$ are even. This means that all components of the $5D$ vector supermultiplet associated with the broken $SU(5)$ generators $T^{\hat{a}}$ are odd under either the reflection $Z_2$ or $Z'_2$, so that their KK expansions do not possess massless modes. The $Z_2$ and $Z'_2$ parity assignments for all components of the $5D$ bulk vector supermultiplets are shown in Table~\ref{tab2}. The KK modes $A^{\hat{a}(2n+1)}_{\mu}(x)$, $A^{\hat{a}(2n+1)}_{5}(x)$, $\sigma^{\hat{a}(2n+1)}(x)$, $\lambda^{\hat{a}(2n+1)}(x)$ and $\lambda^{'\hat{a}(2n+1)}(x)$ form vector boson, scalar and fermion states with masses $(2n+1)/R$.
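The split of the $SU(5)$ generators into $T^a$ and $T^{\hat{a}}$ in Eqs.~(\ref{29})--(\ref{30}) can be made explicit by writing a generic generator in $3+2$ block form with respect to $P'=\mbox{diag}(-1,-1,-1,+1,+1)$:
\begin{equation}
T=\left(\begin{array}{cc} A & B\\ B^{\dagger} & C \end{array}\right)\,,\qquad P'\,T\,P'=\left(\begin{array}{cc} A & -B\\ -B^{\dagger} & C \end{array}\right)\,,
\end{equation}
where $A$ and $C$ are $3\times 3$ and $2\times 2$ blocks. The generators of $SU(3)_C\times SU(2)_W\times U(1)_Y$ are block--diagonal and therefore commute with $P'$, while the off--diagonal generators transforming as $(3,2)+(\bar{3},2)$ anticommute with it.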
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
$5D$ fields &$SU(3)_C\times SU(2)_W$ &$Z_2\times Z'_2$& Mass\\
& quantum numbers & parity &\\
\hline
$A^a_{\mu},\,\lambda^a$ & $(8,1)+(1,3)+(1,1)$ & $(+,+)$ & $2n/R$\\
\hline
$A^{\hat{a}}_{\mu},\,\lambda^{\hat{a}}$ & $(3,2)+(\bar{3},2)$ & $(+,-)$ & $(2n+1)/R$\\
\hline
$A^a_{5},\,\sigma^a,\,\lambda^{'a}$ & $(8,1)+(1,3)+(1,1)$ & $(-,-)$ & $(2n+2)/R$\\
\hline
$A^{\hat{a}}_{5},\,\sigma^{\hat{a}},\,\lambda^{'\hat{a}}$ & $(3,2)+(\bar{3},2)$ & $(-,+)$ & $(2n+1)/R$\\
\hline
$A^{\chi}_{\mu},\,\lambda_{\chi}$ & $(1,1)$ & $(+,+)$ & $2n/R$\\
\hline
$A^{\chi}_{5},\,\sigma_{\chi},\,\lambda'_{\chi}$ & $(1,1)$ & $(-,-)$ & $(2n+2)/R$\\
\hline
$A^{\psi}_{\mu},\,\lambda_{\psi}$ & $(1,1)$ & $(+,+)$ & $2n/R$\\
\hline
$A^{\psi}_{5},\,\sigma_{\psi},\,\lambda'_{\psi}$ & $(1,1)$ & $(-,-)$ & $(2n+2)/R$\\
\hline
\end{tabular}
\caption{Parity assignments and KK masses of fields in the $5D$ bulk vector supermultiplets associated with the $SU(5)$, $U(1)_{\psi}$ and $U(1)_{\chi}$ gauge interactions.}
\label{tab2}
\end{table}
At the fixed point $O'$ the gauge transformations generated by $T^{\hat{a}}$ as well as the corresponding components of the $5D$ $SU(5)$ vector supermultiplet vanish. At the same time at an arbitrary point in the bulk all generators of the $SU(5)$ gauge group are operative. Thus the orbifold procedure leads to a local explicit breaking of $SU(5)$ at the fixed point $O'$ due to the non--trivial orbifold quantum numbers of the gauge parameters.
The $Z_2$ and $Z'_2$ parity assignments for the components of the $U(1)_{\psi}$ and $U(1)_{\chi}$ bulk vector supermultiplets are such that the KK expansions of vector bosons $A^{\chi}_{\mu}(x,y)$ and $A^{\psi}_{\mu}(x,y)$ as well as the corresponding gaugino states $\lambda_{\chi}(x,y)$ and $\lambda_{\psi}(x,y)$ contain massless zero modes $A^{\chi (0)}_{\mu}(x)$, $A^{\psi (0)}_{\mu}(x)$, $\lambda^{(0)}_{\chi}(x)$ and $\lambda^{(0)}_{\psi}(x)$ associated with the unbroken $U(1)_{\psi}$ and $U(1)_{\chi}$ gauge symmetries (see Table~\ref{tab2}). Other KK modes form vector boson, scalar and fermion states with masses $(2n+2)/R$ similar to the ones that appear in the case of unbroken generators $T^{a}$ of $SU(5)$. As in the simplest orbifold GUT scenarios \cite{Kawamura:2000ev}--\cite{Altarelli:2001qj} we assume that all incomplete $SU(5)$ supermultiplets which are even under the custodial symmetry (the matter parity $Z_{2}^{M}$ in the case of the MSSM and the $\tilde{Z}^{H}_2$ symmetry in the case of the E$_6$SSM) originate from the $5D$ bulk supermultiplets. In order to ensure that $H_u$ and $\overline{H}_u$ as well as $H_d$ and $\overline{H}_d$ survive below the scale $M_X\sim 1/R$ we include two pairs of the $5D$ $SU(5)$ bulk supermultiplets $\Phi_{H_u}+\Phi_{\overline{H}_u}$ and $\Phi_{H_d}+\Phi_{\overline{H}_d}$ that decompose as follows \begin{equation} \Phi_{H_u} = \Phi_{\overline{H}_u}= \displaystyle\left(5,\,-\dfrac{2}{\sqrt{24}},\,\dfrac{2}{\sqrt{40}}\right),\qquad \Phi_{H_d} = \Phi_{\overline{H}_d}= \displaystyle\left(5,\,\dfrac{2}{\sqrt{24}},\,\dfrac{2}{\sqrt{40}}\right), \label{31} \end{equation} where first, second and third quantities in brackets are the $SU(5)$ representation, extra $U(1)_{\psi}$ and $U(1)_{\chi}$ charges respectively. The multiplets $\Phi_{H_u}$ and $\Phi_{\overline{H}_u}$ as well as $\Phi_{H_d}$ and $\Phi_{\overline{H}_d}$ transform differently under $Z_2$ and $Z'_2$ (see Table~\ref{tab3}). 
Since $P'$ does not commute with $SU(5)$, each $5D$ 5--plet is divided into four pieces associated with different $N=1$ chiral supermultiplets: \begin{equation} 5=(3,1,-1/3)+(1,2,1/2)+(\bar{3},1,1/3)+(1,2,-1/2)\,. \label{311} \end{equation} In Eq.~(\ref{311}) the first and second quantities in brackets are the $SU(3)_C$ and $SU(2)_W$ quantum numbers whereas the third quantity is the $U(1)_Y$ charge. As one can see from Table~\ref{tab3}, the chiral supermultiplets in Eq.~(\ref{311}) have different $P$ and $P'$ parity assignments that result in different KK mode structures. These parity assignments are such that the orbifold projection accomplishes doublet--triplet splitting, in the sense that only one doublet superfield in Eq.~(\ref{311}) has a zero mode while the KK expansions of the other doublet, triplet and antitriplet superfields do not possess massless modes. Thus only $H_u$, $\overline{H}_u$, $H_d$ and $\overline{H}_d$ may survive to low energies. The $4D$ superfields $N^c_H$, $\overline{N}_H^c$, $S$ and $\overline{S}$ can stem from the $5D$ SM singlet superfields that carry $U(1)_{\psi}$ and $U(1)_{\chi}$ charges \begin{equation} \Phi_{S}= \Phi_{\overline{S}}= \displaystyle\left(1,\,\dfrac{4}{\sqrt{24}},\,0\right),\qquad \Phi_{N^c_H}= \Phi_{\overline{N}_H^c}= \displaystyle\left(1,\,\dfrac{1}{\sqrt{24}},\,-\dfrac{5}{\sqrt{40}}\right)\,. \label{32} \end{equation} According to Eq.~(\ref{28}) only either $\phi_i$ and $\psi_i$ or $\phi^c_i$ and $\psi^c_i$ can have massless modes. Different parity assignments of $\Phi_{S}$ and $\Phi_{\overline{S}}$ as well as $\Phi_{N^c_H}$ and $\Phi_{\overline{N}_H^c}$ allow one to project out different components of these superfields so that only the $4D$ superfields $N^c_H$, $\overline{N}_H^c$, $S$ and $\overline{S}$ may be light (see Table~\ref{tab3}).
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
$5D$ fields &$SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$&$Z_2\times Z'_2$& Mass\\
& quantum numbers & parity &\\
\hline
& $(3,1,-1/3,-2,2)+(\bar{3},1,1/3,2,-2)$& $(+,-)$ & $(2n+1)/R$\\
$\Phi_{H_u}+\Phi_{\overline{H}_u}$ & $(1,2,1/2,-2,2)+(1,2,-1/2,2,-2)$ & $(+,+)$ & $2n/R$\\
& $(\bar{3},1,1/3,2,-2)+(3,1,-1/3,-2,2)$& $(-,+)$ & $(2n+1)/R$\\
& $(1,2,-1/2,2,-2)+(1,2,1/2,-2,2)$ & $(-,-)$ & $(2n+2)/R$\\
\hline
& $(3,1,-1/3,2,2)+(\bar{3},1,1/3,-2,-2)$& $(-,+)$ & $(2n+1)/R$\\
$\Phi_{H_d}+\Phi_{\overline{H}_d}$ & $(1,2,1/2,2,2)+(1,2,-1/2,-2,-2)$ & $(-,-)$ & $(2n+2)/R$\\
& $(\bar{3},1,1/3,-2,-2)+(3,1,-1/3,2,2)$& $(+,-)$ & $(2n+1)/R$\\
& $(1,2,-1/2,-2,-2)+(1,2,1/2,2,2)$ & $(+,+)$ & $2n/R$\\
\hline
$\Phi_{S}+\Phi_{\overline{S}}$ & $(1,1,0,4,0)+(1,1,0,-4,0)$ & $(+,+)$ & $2n/R$\\
& $(1,1,0,-4,0)+(1,1,0,4,0)$ & $(-,-)$ & $(2n+2)/R$\\
\hline
$\Phi_{N^c_H}+\Phi_{\overline{N}_H^c}$& $(1,1,0,1,-5)+(1,1,0,-1,5)$ & $(+,+)$ & $2n/R$\\
& $(1,1,0,-1,5)+(1,1,0,1,-5)$ & $(-,-)$ & $(2n+2)/R$\\
\hline
& $(3,1,-1/3,-1,-3)+(\bar{3},1,1/3,1,3)$& $(-,+)$ & $(2n+1)/R$\\
$\Phi_{L_4}+\Phi_{\overline{L}_4}$ & $(1,2,1/2,-1,-3)+(1,2,-1/2,1,3)$ & $(-,-)$ & $(2n+2)/R$\\
& $(\bar{3},1,1/3,1,3)+(3,1,-1/3,-1,-3)$& $(+,-)$ & $(2n+1)/R$\\
& $(1,2,-1/2,1,3)+(1,2,1/2,-1,-3)$ & $(+,+)$ & $2n/R$\\
\hline
& $(3,1,-1/3,-1,-3)+(\bar{3},1,1/3,1,3)$& $(-,-)$ & $(2n+2)/R$\\
$\Phi_{d^c_4}+\Phi_{\overline{d^c}_4}$& $(1,2,1/2,-1,-3)+(1,2,-1/2,1,3)$ & $(-,+)$ & $(2n+1)/R$\\
& $(\bar{3},1,1/3,1,3)+(3,1,-1/3,-1,-3)$& $(+,+)$ & $2n/R$\\
& $(1,2,-1/2,1,3)+(1,2,1/2,-1,-3)$ & $(+,-)$ & $(2n+1)/R$\\
\hline
\end{tabular}
\caption{Parity assignments and KK masses of fields in the $4D$ chiral supermultiplets resulting from the $5D$ bulk supermultiplets $\Phi_{H_u}$, $\Phi_{\overline{H}_u}$, $\Phi_{H_d}$, $\Phi_{\overline{H}_d}$, $\Phi_{S}$, $\Phi_{\overline{S}}$, $\Phi_{N^c_H}$, $\Phi_{\overline{N}_H^c}$, $\Phi_{L_4}$,
$\Phi_{\overline{L}_4}$, $\Phi_{d^c_4}$ and $\Phi_{\overline{d^c}_4}$.} \label{tab3} \end{table} Finally, the particle spectrum below the scale $M_X$ should be supplemented by either $L_4$ and $\overline{L}_4$ or $d^c_4$ and $\overline{d^c}_4$ (but not both) to allow the lightest exotic quarks to decay. These $4D$ $N=1$ chiral superfields can come from either $\Phi_{L_4}$ and $\Phi_{\overline{L}_4}$ or $\Phi_{d^c_4}$ and $\Phi_{\overline{d^c}_4}$, which are $5D$ $SU(5)$ bulk supermultiplets with quantum numbers \begin{equation} \Phi_{L_4}=\Phi_{\overline{L}_4}=\Phi_{d^c_4}=\Phi_{\overline{d^c}_4}= \displaystyle\left(5,\,-\dfrac{1}{\sqrt{24}},\,-\dfrac{3}{\sqrt{40}}\right)\,. \label{33} \end{equation} Again, the parity assignments guarantee that only the two $4D$ doublet superfields $L_4$ and $\overline{L}_4$ from $\Phi_{L_4}$ and $\Phi_{\overline{L}_4}$ can survive to low energies whereas the other $SU(2)_W$ doublet, color triplet and antitriplet partners do not have zero modes. Using the freedom to flip the overall action of the $P'$ parity on the $SU(5)$ multiplets by a sign relative to $\Phi_{L_4}+\Phi_{\overline{L}_4}$, one can obtain a KK spectrum in which only the triplet or antitriplet components of the $SU(5)$ fundamental supermultiplets possess massless modes. From Table~\ref{tab3} one can see that this freedom is used in the case of the $\Phi_{d^c_4}$ and $\Phi_{\overline{d^c}_4}$ supermultiplets. Due to the different structure of the KK spectrum, only the $4D$ triplet and antitriplet superfields, $\overline{d^c}_4$ and $d^c_4$, from $\Phi_{\overline{d^c}_4}$ and $\Phi_{d^c_4}$ are allowed to be light. Since the three families of $27_i$ representations of $E_6$ are located on the $O$ brane, where the $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ gauge symmetry remains intact, the Yukawa interactions of quarks and leptons are necessarily $SU(5)$ symmetric.
In general, the $SU(5)$ invariance yields the prediction $m_s/m_d=m_{\mu}/m_{e}$ for the first and second generation fermion mass ratios, which is in conflict with the data. In $4D$ GUTs acceptable mass relations can be obtained using higher dimensional operators and relatively large representations which acquire VEVs breaking $SU(5)$ or $SO(10)$ \cite{Altarelli:2000fu}, \cite{Ellis:1979fg}. In the case of the simplest $5D$ orbifold GUTs there are no $SU(5)$ breaking VEVs. Nevertheless, in this case one can introduce two additional $5D$ bulk supermultiplets with quantum numbers given by Eq.~(\ref{33}) that transform under $Z_2$ and $Z'_2$ as either $\Phi_{L_4}$ and $\Phi_{\overline{L}_4}$ or $\Phi_{d^c_4}$ and $\Phi_{\overline{d^c}_4}$. Furthermore, we assume that these bulk supermultiplets are odd under the $\tilde{Z}^{H}_2$ symmetry, which is defined on the $O$ brane. Hence the zero modes of these extra $5D$ supermultiplets, which are either weak doublets ($L_5$ and $\overline{L}_5$) or an $SU(3)_C$ triplet and antitriplet ($\overline{d^c}_5$ and $d^c_5$), can mix with the quark or lepton superfields from $27_i$, spoiling the $SU(5)$ relations between the down type quark and charged lepton masses. Indeed, suppose that the zero modes are the weak doublet superfields $L_5$ and $\overline{L}_5$. Then $\overline{L}_5$ can get combined with a superposition of the lepton doublet superfields from $27_i$ so that the resulting vectorlike states gain masses slightly below $M_X$. The remaining three families of lepton doublets, which survive to low energies, are superpositions of the corresponding components from $27_i$ and $L_5$, while the three generations of down type quarks stem entirely from $27_i$. As a consequence, the $SU(5)$ relations between the down type quark and charged lepton masses may get spoiled entirely if the Yukawa couplings of $L_5$ to the Higgs doublet $H_d$ are relatively large ($\sim 0.01-0.1$).
\begin{table}[ht] \centering \begin{tabular}{|l|l|l|l|l|} \hline $5D$ fields &$SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$&$Z_2\times Z'_2$& Mass\\ & quantum numbers & parity &\\ \hline & $(\bar{3},1,-2/3,1,-1)+(3,1,2/3,-1,1)$& $(+,+)$ & $2n/R$\\ & $(3,2,1/6,1,-1)+(\bar{3},2,-1/6,-1,1)$& $(+,-)$ & $(2n+1)/R$\\ $\Phi_{e^c}+\Phi_{\overline{e^c}}$ & $(1,1,1,1,-1)+(1,1,-1,-1,1)$ & $(+,+)$ & $2n/R$\\ & $(3,1,2/3,-1,1)+(\bar{3},1,-2/3,1,-1)$& $(-,-)$ & $(2n+2)/R$\\ & $(\bar{3},2,-1/6,-1,1)+(3,2,1/6,1,-1)$& $(-,+)$ & $(2n+1)/R$\\ & $(1,1,-1,-1,1)+(1,1,1,1,-1)$ & $(-,-)$ & $(2n+2)/R$\\ \hline \end{tabular} \caption{The $(Z_2,Z'_2)$ transformation properties and KK masses of $4D$ chiral supermultiplets that stem from $SU(5)$ bulk supermultiplets $\Phi_{e^c}$ and $\Phi_{\overline{e^c}}$.} \label{tab4} \end{table} Although the specific realization of this mechanism discussed above, which makes it possible to obtain a realistic pattern of fermion masses, is the simplest one, it is worth considering another very attractive possibility. Instead of two additional $5D$ $SU(5)$ fundamental supermultiplets one can include two larger representations of $SU(5)$ that decompose under $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ as follows: \begin{equation} \Phi_{e^c}=\Phi_{\overline{e^c}}=\displaystyle\left(10,\,\dfrac{1}{\sqrt{24}},\,-\dfrac{1}{\sqrt{40}}\right)\,. \label{34} \end{equation} As before, we assume that the $\Phi_{e^c}$ and $\Phi_{\overline{e^c}}$ supermultiplets are odd under the $\tilde{Z}^{H}_2$ symmetry. Due to the $P$ and $P'$ parity assignments, each $SU(5)$ bulk decuplet is divided into six pieces associated with different $N=1$ chiral supermultiplets: \begin{equation} 10=(\bar{3},1,-2/3)+(3,2,1/6)+(1,1,1)+(3,1,2/3)+(\bar{3},2,-1/6)+(1,1,-1)\,, \label{35} \end{equation} where the quantities in brackets are the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ quantum numbers.
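Since each $5D$ bulk hypermultiplet contributes an $N=1$ chiral multiplet together with its conjugate, the pieces listed in Eqs.~(\ref{311}) and (\ref{35}) should total twice the dimension of the corresponding $SU(5)$ representation. A quick dimension count (illustrative Python):

```python
# Dimensions of the SU(3)_C x SU(2)_W components appearing in the
# decompositions.  Each 5D hypermultiplet yields an N=1 chiral multiplet
# plus its conjugate, so the pieces must total twice the dimension of
# the SU(5) representation.

def dim(su3, su2):
    return su3 * su2

# Eq. (311): 5-plet pieces (3,1) + (1,2) + (3bar,1) + (1,2)
five_plet = [dim(3, 1), dim(1, 2), dim(3, 1), dim(1, 2)]

# Eq. (35): 10-plet pieces
# (3bar,1) + (3,2) + (1,1) + (3,1) + (3bar,2) + (1,1)
ten_plet = [dim(3, 1), dim(3, 2), dim(1, 1), dim(3, 1), dim(3, 2), dim(1, 1)]

print(sum(five_plet))  # 10 = 2 x 5
print(sum(ten_plet))   # 20 = 2 x 10
```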
The $Z_2$ and $Z'_2$ parity assignments and mass spectrum for all components of the $5D$ decuplets are given in Table~\ref{tab4}. These parity assignments guarantee that only two $4D$ $SU(2)_W$ singlet superfields ($e^c_5$ and $\overline{e^c}_5$) as well as $4D$ triplet and antitriplet supermultiplets ($u^c_5$ and $\overline{u^c}_5$) from $\Phi_{e^c}$ and $\Phi_{\overline{e^c}}$ can survive below the scale $M_X\sim 1/R$. Again, $\overline{e^c}_5$ and $\overline{u^c}_5$ can get combined with a superposition of the appropriate components of $27_i$, forming vectorlike states which may have masses slightly below $M_X$. At the same time $e^c_5$ can mix with the corresponding components of $27_i$, spoiling the $SU(5)$ relations between the masses of the down type quarks and charged leptons. It is worth noting that the bulk supermultiplets (\ref{31}), (\ref{32}), (\ref{33}) and (\ref{34}) together form two complete $27$ representations of $E_6$. This simplifies the structure of the bulk supermultiplets, making the considered $5D$ orbifold GUT model more elegant. For the consistency of the considered model it is crucial that all anomalies get cancelled. In $5D$ theories no bulk anomalies exist. Nevertheless, orbifold compactification may lead to anomalies at the orbifold fixpoints \cite{ArkaniHamed:2001is}--\cite{Asaka:2002my}. At a fixed point the brane anomaly reduces to the anomaly of the unbroken subgroup of the original group, i.e. $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ on the $O$ brane and $SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\chi}\times U(1)_{\psi}$ on the $O'$ brane. It was also shown that the sum of the contributions to the $4D$ anomalies at a fixpoint equals the sum of the contributions of the zero modes localized at the corresponding brane \cite{ArkaniHamed:2001is}--\cite{Asaka:2002my}.
In this context it is worth emphasizing that the contributions of the three families of $27_i$ representations of $E_6$, which reside on the $O$ brane, to the anomalies associated with this fixpoint get cancelled automatically. Moreover, from Tables \ref{tab3} and \ref{tab4} one can see that the $P$ and $P'$ parity assignments are chosen so that the zero modes of the bulk fields localized at the $O$ and $O'$ branes always form pairs of $N=1$ supermultiplets with opposite quantum numbers. Such a choice of parity assignments guarantees that the contributions of the zero modes of the bulk superfields to the brane anomalies are cancelled as well. Another important issue for any GUT model is proton stability, which was discussed in the context of $5D$ orbifold GUT models in \cite{Hall:2001pg}, \cite{5d-susy-ogut-proton}--\cite{5d-susy-ogut-proton-unif}. In orbifold GUT models the dimension five operators, which are caused by an exchange of the color triplet Higgsino multiplets and give rise to proton decay in ordinary GUTs, do not get induced. Indeed, in the considered class of models colored Higgsinos acquire mass via the KK mode expansion of the operators $\psi_i \partial_{5} \psi_i^c$, which leads to Dirac mass terms of the form $\psi^{(2n+1)}_i \psi_i^{c(2n+1)}$. Since the $\psi_i^{c(2n+1)}$ do not couple directly to the quarks (squarks) and sleptons (leptons), the dimension five operators are not generated. It turns out that the absence of tree--level amplitudes caused by the colored Higgsino exchange that result in proton decay is deeply entangled with the orbifold construction and the continuous global $U(1)_R$ symmetry that the $5D$ bulk Lagrangian possesses \cite{Hall:2001pg}. Although the dimension five operators discussed above do not get induced within orbifold GUT models, one must also suppress the brane interactions $[QQQL]_F$ and $[u^c u^c d^c e^c]_F$ that may already be present on the $O$ brane as non--renormalizable interactions.
Such operators can give a substantial contribution to the proton decay rate if the fundamental scale of gravity is close to the GUT scale. In the $5D$ orbifold GUT model considered here these dangerous operators are forbidden by the $U(1)_{\chi}$ and $U(1)_{\psi}$ gauge symmetries. Nevertheless, proton decay is still mediated by dimension six operators induced by the leptoquark gauge bosons \cite{Ellis:1979hy}. Finally, one should mention that in $5D$ orbifold GUT models the gauge couplings of the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ interactions do not exactly unify at the scale $M_X\sim 1/R$ where the $SU(5)$ gauge symmetry gets broken. The reason for this is that the symmetry of the model on the GUT--breaking brane $O'$ remains limited to the SM gauge group. In particular, on this brane there are brane--localized $4D$ kinetic terms for the SM gauge fields with $SU(5)$--violating coefficients $1/g_{O'i}^2$. The part of the $5D$ effective SUSY Lagrangian that contains the kinetic terms for the SM gauge fields can be written as follows \begin{equation} \mathcal{L}_{eff}=\int d^2\theta \biggl(\frac{1}{g_5^2}+\frac{1}{2g_{O}^2}\biggl\{\delta(y)+\delta(y-\pi R)\biggr\}\biggr) \mbox{Tr}\,\mathcal{W}^{\alpha} \mathcal{W}_{\alpha}\qquad\qquad \label{36} \end{equation} $$ \qquad\qquad + \sum_i \int d^2\theta \frac{1}{2g_{O'i}^2}\biggl\{\delta(y-\frac{\pi}{2}R)+\delta(y+\frac{\pi}{2}R)\biggr\} \mbox{Tr}\,\mathcal{W}^{\alpha}_i \mathcal{W}^i_{\alpha}+\mbox{h.c.}, $$ where $\mathcal{W}^i_{\alpha}$ $(i=1,2,3)$ are the supersymmetric gauge field strengths of the $U(1)_Y$, $SU(2)_W$ and $SU(3)_C$ gauge interactions on the $O'$ brane, and $\mathcal{W}_{\alpha}$ is the $SU(5)$ gauge field strength on the $O$ brane and in the bulk \footnote{Note that the $O'$ brane contributions vanish for the components of $\mathcal{W}_{\alpha}$ associated with the leptoquark gauge bosons, which are odd under $Z'_2$.}.
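To see how the $SU(5)$--symmetric bulk term in Eq.~(\ref{36}) can dominate the $SU(5)$--violating brane terms, one can evaluate the resulting zero--mode couplings numerically. The sketch below (illustrative Python; all input values are hypothetical, chosen only to show the scaling) adds the bulk and brane contributions to $1/g_i^2$ and tracks the relative splitting between the three couplings:

```python
import math

# Zero-mode gauge couplings obtained after integrating Eq. (36) over y:
#   1/g_i^2 = 2*pi*R/g_5^2 + 1/g_O^2 + 1/g_O'i^2
# The brane coefficients 1/g_O'i^2 violate SU(5); the bulk piece does not.
# All numbers below are hypothetical, chosen only to illustrate the scaling.

def g_squared(bulk_term, inv_gO_sq, inv_gOprime_sq):
    return 1.0 / (bulk_term + inv_gO_sq + inv_gOprime_sq)

# Strongly coupled brane terms, g_O'i^2 ~ 4*pi, with an SU(5)-violating spread
inv_brane = [1.0 / (4 * math.pi), 1.2 / (4 * math.pi), 0.8 / (4 * math.pi)]

for bulk in (1.0, 10.0, 100.0):  # growing linear extent of the 5th dimension
    gs = [g_squared(bulk, 0.0, b) for b in inv_brane]
    spread = (max(gs) - min(gs)) / min(gs)
    print(f"bulk term {bulk:6.1f}: relative SU(5) violation {spread:.2e}")
```

As the bulk contribution grows, the relative splitting between the three couplings shrinks, which is the bulk-dominance argument in numbers.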
Integrating over $y$ one obtains the zero--mode $4D$ SM gauge couplings at the scale $M_X\sim 1/R$ \begin{equation} \frac{1}{g^2_i(M_X)}=\frac{2\pi R}{g_5^2}+\frac{1}{g_{O}^2}+\frac{1}{g_{O'i}^2}\,. \label{37} \end{equation} Since the $SU(5)$--violating coefficients $1/g_{O'i}^2$ may differ from each other substantially, the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ gauge couplings $g^2_i(M_X)$ are not identical. However, if the bulk and brane gauge couplings in the $5D$ model have comparable strength, then after the integration over $y$ the zero--mode gauge couplings are dominated by the bulk contributions because of the spread of the wavefunction of the zero--mode gauge bosons. In other words, the $SU(5)$--violating brane kinetic terms are dominated by the bulk contributions when the linear extent of the $5$th dimension is sufficiently large. Because the bulk contributions to the gauge couplings (\ref{37}) are necessarily $SU(5)$ symmetric, a $4D$ observer sees an approximate unification of the SM gauge couplings. Gauge coupling unification within $5D$ orbifold GUT models was discussed in \cite{5d-susy-ogut-proton-unif}--\cite{5d-susy-ogut-unif}. As one can see from Eqs.~(\ref{36})--(\ref{37}), the discrepancy between the $g^2_i(M_X)$ is determined by the $SU(5)$--violating gauge kinetic terms on the $O'$ brane. This discrepancy is small when the $g^2_i(M_X)$ are relatively small whereas the $g_{O'i}^2$ are large ($g_{O'i}^2 \sim 4\pi $). On the other hand, one can expect that the relative contribution of the $SU(5)$--violating brane corrections to $g^2_i(M_X)$ becomes more sizable when the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ gauge couplings are large at the scale $M_X$. \subsection{$E_6$ orbifold GUT model in six dimensions} Having discussed in detail the simplest $5D$ orbifold GUT model that may lead at low energies to the gauge group and field content of the $E_6$ inspired SUSY model specified in section 2, we next study $E_6$ gauge theory in $6D$ with $N=1$ supersymmetry.
We consider the compactification on a torus $T^2$ with two fixed radii $R_5$ and $R_6$ so that the two extra dimensions $y (=x_5)$ and $z (=x_6)$ are compact, i.e. $y\in (-\pi R_5, \pi R_5]$ and $z\in (-\pi R_6, \pi R_6]$. The physical region associated with the compactification on the orbifold $T^2/Z_2$ is a pillow with the four fixed points of the $Z_2$ transformations ($y\to -y$, $z\to -z$) as corners. The orbifold $T^2/Z_2$ has the fixpoints $(0,0)$, $(\pi R_5,0)$, $(0,\pi R_6)$ and $(\pi R_5,\pi R_6)$. Here we discuss $E_6$ gauge theory in $6D$ compactified on the orbifold $T^2/(Z_2 \times Z^{I}_2 \times Z^{II}_2)$. The $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ symmetries are reflections. The $Z_2$ transformations are defined as before, i.e. $y\to -y$, $z\to -z$. The $Z^{I}_2$ reflection symmetry transformations act as $y'\to -y'$, $z\to -z$ with $y' = y - \pi R_5/2$. The reflection $Z^{II}_2$ corresponds to $y\to -y$, $z'\to -z'$ where $z' = z - \pi R_6/2$. The $Z^{I}_2$ and $Z^{II}_2$ reflection symmetries introduce additional fixed points. As in the case of the $5D$ orbifold GUT models, the extra reflection symmetries reduce the physical region, which is again bounded by the appropriate fixed points. The $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ reflection symmetries make it possible to work with the theory obtained by truncating to the physically irreducible space in which $y\in [0, \pi R_5/2]$ and $z\in [0, \pi R_6/2]$, with the four $4D$ walls (branes) located at its corners. Again, we assume that the considered orbifold GUT model contains a set of $E_6$ bulk supermultiplets and another set of $N=1$ superfields which are confined on one of the branes. The set of superfields that propagate in the bulk $M^4\times T^2/(Z_2 \times Z^{I}_2 \times Z^{II}_2)$ includes the $E_6$ gauge supermultiplet and a few 27--plets. As before, all quark and lepton superfields are expected to be confined on one brane.
The $E_6$ gauge supermultiplet that exists in the bulk involves vector bosons $A_{M}$ ($M=0,1,2,3,5,6$) and $6D$ Weyl fermions (gauginos) which are composed of two $4D$ Weyl fermions, $\lambda$ and $\lambda'$. These fields can be conveniently grouped into vector and chiral multiplets of the $N=1$ supersymmetry in $4D$, i.e. \begin{equation} V=(A_{\mu}, \lambda)\,,\qquad\qquad \Sigma=\biggl((A_5+i A_6)/\sqrt{2},\lambda'\biggr)\,, \label{38} \end{equation} where $V$, $A_{M}$, $\lambda$ and $\lambda'$ are matrices in the adjoint representation of $E_6$. The two $N=1$ supermultiplets (\ref{38}) form an $N=2$ vector supermultiplet in $4D$. The bulk $27'$ supermultiplets also include $6D$ Weyl fermion states (which involve two $4D$ Weyl fermions, $\psi_i$ and $\psi^c_i$) together with two complex scalars $\phi_i$ and $\phi^c_i$. The fields $\psi_i, \psi^c_i, \phi_i$ and $\phi^c_i$ compose a $4D$ $N=2$ hypermultiplet containing two $4D$ $N=1$ chiral superfields: $\hat{\Phi}_i=(\phi_i,\,\psi_i)$ and its conjugate $\hat{\Phi}^c_i = (\phi^c_i,\,\psi^c_i)$ with opposite quantum numbers. Thus each bulk $27'$ supermultiplet involves two $4D$ $N=1$ supermultiplets, $27'$ and $\overline{27'}$. To ensure the consistency of the construction, the Lagrangian of the considered orbifold GUT model has to be invariant under the $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ symmetries. As in the case of the $5D$ orbifold GUT models, each reflection symmetry, $Z_2$, $Z^{I}_2$ and $Z^{II}_2$, has its own orbifold parity, $P$, $P_{I}$ and $P_{II}$.
The components $\hat{\Phi}$ and $\hat{\Phi}^c$ of the bulk $27'$ supermultiplet $\Phi$ transform under $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ as follows \begin{equation} \begin{array}{ll} \hat{\Phi}(x, -y, -z) = P \hat{\Phi}(x, y, z)\,,&\qquad \hat{\Phi}^c(x, -y, -z) = -P \hat{\Phi}^c (x, y, z)\,,\\ \hat{\Phi}(x, -y', -z) = P_{I} \hat{\Phi}(x, y', z)\,,&\qquad \hat{\Phi}^c(x, -y', -z) = -P_{I} \hat{\Phi}^c (x, y', z)\,,\\ \hat{\Phi}(x, -y, -z') = P_{II} \hat{\Phi}(x, y, z')\,,&\qquad \hat{\Phi}^c(x, -y, -z') = -P_{II} \hat{\Phi}^c (x, y, z')\,, \end{array} \label{39} \end{equation} where $P$, $P_{I}$ and $P_{II}$ are diagonal matrices with eigenvalues $\pm 1$ that act on each component of the fundamental representation of $E_6$. It is convenient to specify the matrix representation of the orbifold parity assignments in terms of the $E_6$ weights $\alpha_{j}$ and gauge shifts, $\Delta$, $\Delta_{I}$ and $\Delta_{II}$, associated with $Z_2$, $Z^{I}_2$ and $Z^{II}_2$. The diagonal elements of the matrices $P$, $P_{I}$ and $P_{II}$ can be presented in the following form \cite{Braam:2010sy} \begin{equation} \begin{array}{c} (P)_{jj}=\sigma\exp\{2\pi i \Delta \alpha_j\}\,,\qquad\qquad (P_{I})_{jj}=\sigma_{I}\exp\{2\pi i \Delta_{I} \alpha_j\}\,,\\ (P_{II})_{jj}=\sigma_{II}\exp\{2\pi i \Delta_{II} \alpha_j\}\,, \end{array} \label{40} \end{equation} where $\sigma$, $\sigma_{I}$ and $\sigma_{II}$ are parities of the bulk $27'$ supermultiplet, i.e. $\sigma, \sigma_{I}, \sigma_{II} \in \{+,-\}$. The particle assignments of the weights in the fundamental representation of $E_6$ are well known (see, for example \cite{Braam:2010sy}). 
Here we choose the following gauge shifts \begin{equation} \begin{array}{c} \Delta=\biggl(\dfrac{1}{2},\,\dfrac{1}{2},\,0,\,\dfrac{1}{2},\,\dfrac{1}{2},\,0\biggr)\,,\qquad \Delta_{I}=\biggl(\dfrac{1}{2},\,\dfrac{1}{2},\,\dfrac{1}{2},\,\dfrac{1}{2},\,\dfrac{1}{2},\,0\biggr)\,,\\ \Delta_{II}=\biggl(\dfrac{1}{2},\,\dfrac{1}{2},\,0,\,0,\,\dfrac{1}{2},\,0\biggr)\,, \end{array} \label{41} \end{equation} that correspond to the orbifold parity assignments shown in Table~\ref{tab5}. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & $Q$ & $u^c$ & $e^c$ & $L$ & $d^c$ & $N^c$ & $S$ & $H^u$ & $D$ & $H^d$ & $\overline{D}$ \\ \hline $Z_2$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $+$ & $+$ & $+$ & $+$ & $+$ \\ \hline $Z_2^{I}$ & $-$ & $+$ & $+$ & $-$ & $+$ & $+$ & $+$ & $-$ & $+$ & $-$ & $+$ \\ \hline $Z_2^{II}$& $-$ & $-$ & $-$ & $+$ & $+$ & $+$ & $-$ & $+$ & $+$ & $-$ & $-$ \\ \hline \end{tabular} \caption{Orbifold parity assignments in the bulk $27'$ supermultiplet with $\sigma=\sigma_{I}=\sigma_{II}=+1$.} \label{tab5} \end{table} The components $V$ and $\Sigma$ of the $E_6$ gauge supermultiplet transform under $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ as follows \begin{equation} \begin{array}{ll} V(x, -y, -z) = P V(x, y, z) P^{-1}\,,&\qquad \Sigma(x, -y, -z) = -P \Sigma (x, y, z) P^{-1}\,,\\ V(x, -y', -z) = P_{I} V(x, y', z) P^{-1}_{I}\,,&\qquad \Sigma(x, -y', -z) = -P_{I} \Sigma (x, y', z) P^{-1}_{I}\,,\\ V(x, -y, -z') = P_{II} V(x, y, z') P^{-1}_{II}\,,&\qquad \Sigma(x, -y, -z') = -P_{II} \Sigma (x, y, z') P^{-1}_{II}\,, \end{array} \label{42} \end{equation} where $V(x, y, z)=V^{A}(x, y, z) T^{A}$ and $\Sigma(x, y, z)=\Sigma^A(x, y, z) T^A$ while $T^A$ is the set of generators of the $E_6$ group. The boundary conditions given by Eqs.~(\ref{39}) and (\ref{42}) break $4D$ $N=2$ supersymmetry because different components of the $N=2$ supermultiplets transform differently under $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ reflection symmetries. 
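For half--integer gauge shifts such as those in Eq.~(\ref{41}), the diagonal elements (\ref{40}) reduce to signs, $(P)_{jj}=\sigma\,(-1)^{2\Delta\cdot\alpha_j}$, whenever the weight components are integers. A minimal sketch (illustrative Python; the weight vectors below are hypothetical placeholders, not the actual $E_6$ weights, which can be found in the reference cited in the text):

```python
import cmath

# Diagonal parity element from Eq. (40):
#   (P)_jj = sigma * exp(2*pi*i * Delta . alpha_j)
# For shifts with entries 0 or 1/2 and integer weight components
# the phase is exactly +1 or -1.

def parity_element(sigma, delta, alpha):
    phase = cmath.exp(2j * cmath.pi * sum(d * a for d, a in zip(delta, alpha)))
    return sigma * round(phase.real)  # phase is +-1 for half-integer shifts

delta = (0.5, 0.5, 0.0, 0.5, 0.5, 0.0)  # the shift Delta of Eq. (41)

# Hypothetical integer weight vectors, for illustration only.
weights = [(1, 0, 0, 0, 0, 0), (1, 1, 0, 1, 1, 0), (0, 0, 1, 0, 0, 1)]

for alpha in weights:
    print(alpha, parity_element(+1, delta, alpha))
```

Running each weight through all three shifts of Eq.~(\ref{41}) with the appropriate $\sigma$'s reproduces a parity table of the kind shown in Table~\ref{tab5}.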
Moreover, since $P$, $P_{I}$ and $P_{II}$ are not unit matrices, the $E_6$ gauge symmetry also gets broken by these parity assignments. The $P$ parity assignment indicates that on the $O$ brane at $y=z=0$, associated with the $Z_2$ reflection symmetry, the $E_6$ gauge group is broken down to the $SO(10)\times U(1)_{\psi}$ subgroup. Indeed, according to Table~\ref{tab5} the $SO(10)$ representations that compose the bulk $27'$ supermultiplet ($27\to 16+10+1$) transform differently under the $Z_2$ symmetry, i.e. $16\to -16$, $10\to 10$ and $1\to 1$. Since the considered symmetry breaking mechanism preserves the rank of the group, the unbroken subgroup at the fixed point $O$ should be $SO(10)\times U(1)_{\psi}$. On the brane $O_{I}$, located at the fixed point $y=\pi R_5/2$, $z=0$ and associated with the $Z_2^{I}$ symmetry, the $E_6$ gauge symmetry is broken to $SU(6)\times SU(2)_W$. Again, this follows from the $P_{I}$ parity assignment in the bulk $27'$ supermultiplet. The fundamental representation of $E_6$ decomposes under $SU(6)\times SU(2)_W$ as follows: $$ 27\to (\overline{15},\, 1) + (6,\,2)\,, $$ where the first and second quantities in brackets are the $SU(6)$ and $SU(2)_W$ representations respectively. The multiplet $(6,\,2)$ is formed by all $SU(2)_W$ doublets which are contained in the $27$--plet. From Table~\ref{tab5} one can see that all $SU(2)_W$ doublet components of the $27'$ supermultiplet transform differently under the $Z^{I}_2$ reflection symmetry as compared with the other components of this supermultiplet, which form $(\overline{15},\, 1)$. The $E_6$ gauge symmetry is also broken on the brane $O_{II}$ placed at the fixed point $y=0$, $z=\pi R_6/2$ of the $Z_2^{II}$ symmetry transformations. The $P_{II}$ parity assignment is such that the $16$ components of the $27'$ are odd whereas the $10+1$ components are even, or vice versa. This implies that the $E_6$ group gets broken down to its $SO(10)'\times U(1)'$ subgroup.
It is worth emphasizing here that $SO(10)$ and $SO(10)'$ are not the same $SO(10)$ subgroup of $E_6$. In particular, from Table~\ref{tab5} one can see that the 16--plets of $SO(10)$ and $SO(10)'$ are formed by different components of the fundamental representation of $E_6$. The $U(1)_{\psi}$ and $U(1)'$ charge assignments should also be different. In addition to the three branes mentioned above there is a fourth brane located at the corner $O_{III}=(\pi R_5/2, \pi R_6/2)$ of the physically irreducible space. The $Z^{III}_2$ reflection symmetry associated with this brane is obtained by combining the three symmetries $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ defined above. As a consequence, the corresponding parity assignment is $P_{III}=P\, P_{I}\, P_{II}$. Combining the three parity assignments $P$, $P_{I}$ and $P_{II}$, it is easy to see that on the brane $O_{III}$ the unbroken subgroup is $SO(10)''\times \tilde{U}(1)$. The unbroken gauge group of the effective $4D$ theory is given by the intersection of the $E_6$ subgroups at the fixed points. Since $P$ and $P_{II}$ commute with $SU(5)$, the intersection of the $E_6$ subgroups $SO(10)\times U(1)_{\psi}$ and $SO(10)'\times U(1)'$ is $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$. The intersection of $SU(6)\times SU(2)_W$ and $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ gives the SM gauge group with two additional $U(1)$ factors, $U(1)_{\psi}$ and $U(1)_{\chi}$.
The mode expansion for the $6D$ bulk fields $\phi(x,y,z)$ with any combinations of parities reads \cite{Asaka:2001eh}: \begin{eqnarray} \phi_{+++}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{2^{\delta_{n,0}\delta_{m,0}}\pi \sqrt{R_5 R_6}} \phi^{(2n,2m)}_{+++}(x)\cos\biggl(\frac{2ny}{R_5}+\frac{2mz}{R_6}\biggr)\,, \label{43} \end{eqnarray} \begin{eqnarray} \phi_{+-+}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n+1,2m)}_{+-+}(x)\cos\biggl(\frac{(2n+1)y}{R_5}+\frac{2mz}{R_6}\biggr)\,, \label{44} \end{eqnarray} \begin{eqnarray} \phi_{++-}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n,2m+1)}_{++-}(x)\cos\biggl(\frac{2ny}{R_5}+\frac{(2m+1)z}{R_6}\biggr)\,, \label{45} \end{eqnarray} \begin{eqnarray} \phi_{+--}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n+1,2m+1)}_{+--}(x)\cos\biggl(\frac{(2n+1)y}{R_5}+\frac{(2m+1)z}{R_6}\biggr)\,, \label{46} \end{eqnarray} \begin{eqnarray} \phi_{-++}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n+1,2m+1)}_{-++}(x)\sin\biggl(\frac{(2n+1)y}{R_5}+\frac{(2m+1)z}{R_6}\biggr)\,, \label{47} \end{eqnarray} \begin{eqnarray} \phi_{--+}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n,2m+1)}_{--+}(x)\sin\biggl(\frac{2ny}{R_5}+\frac{(2m+1)z}{R_6}\biggr)\,, \label{48} \end{eqnarray} \begin{eqnarray} \phi_{-+-}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n+1,2m)}_{-+-}(x)\sin\biggl(\frac{(2n+1)y}{R_5}+\frac{2mz}{R_6}\biggr)\,, \label{49} \end{eqnarray} \begin{eqnarray} \phi_{---}(x,y,z) = \sum_{n,m}^{\infty} \frac{1}{\pi \sqrt{R_5 R_6}} \phi^{(2n,2m)}_{---}(x)\sin\biggl(\frac{2ny}{R_5}+\frac{2mz}{R_6}\biggr)\,, \label{50} \end{eqnarray} where $n$ and $m$ are non--negative integers.
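The parity labels in Eqs.~(\ref{43})--(\ref{50}) can be spot--checked numerically: each reflection maps the argument of the cosine or sine in such a way that the mode function picks up the corresponding sign. For instance, for $\phi_{+-+}$ of Eq.~(\ref{44}) one finds (illustrative Python, with $R_5=R_6=1$):

```python
import math

# Spot check of the (Z2, Z2^I, Z2^II) parities of the phi_{+-+} mode
# function cos((2n+1)y/R5 + 2m z/R6), cf. Eq. (44).
# Here R5 = R6 = 1, n = 0, m = 1.
R5 = R6 = 1.0
n, m = 0, 1

def f(y, z):
    return math.cos((2 * n + 1) * y / R5 + 2 * m * z / R6)

def parity(transform):
    """Return '+' or '-' by comparing f on a few sample points."""
    pts = [(0.3, 0.7), (1.1, 0.2), (0.05, 1.4)]
    signs = {round(f(*transform(y, z)) / f(y, z)) for y, z in pts}
    assert len(signs) == 1  # the sign must be the same at every point
    return '+' if signs == {1} else '-'

z2   = lambda y, z: (-y, -z)                # (y, z) -> (-y, -z)
z2I  = lambda y, z: (math.pi * R5 - y, -z)  # y' -> -y', z -> -z
z2II = lambda y, z: (-y, math.pi * R6 - z)  # y -> -y,  z' -> -z'

print(parity(z2), parity(z2I), parity(z2II))  # expect + - +
```

The other seven mode functions can be checked in exactly the same way by swapping in the corresponding cosine or sine.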
As follows from Eqs.~(\ref{43})--(\ref{50}), each bosonic and fermionic KK mode $\phi^{(k,\ell)}(x)$ is characterized by two integer numbers and from the $4D$ perspective acquires mass $\sqrt{\left(\dfrac{k}{R_5}\right)^2+\left(\dfrac{\ell}{R_6}\right)^2}$ upon compactification. Only fields for which all parities are positive have zero modes, i.e. modes with $k=0$ and $\ell=0$. Such modes form the $4D$ $N=1$ massless vector multiplet of the unbroken $SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$ subgroup of $E_6$. The corresponding $6D$ bulk fields are non--vanishing on all branes. All other KK modes of the bulk gauge fields combine into massive states. In particular, one linear combination of $A^{a(k,\ell)}_{5}(x)$ and $A^{a(k,\ell)}_{6}(x)$ plays the role of a Nambu--Goldstone boson, i.e. it is swallowed by $A^{a(k,\ell)}_{\mu}(x)$, leading to the formation of a $4D$ vector boson state with mass $\sqrt{\left(\dfrac{k}{R_5}\right)^2+\left(\dfrac{\ell}{R_6}\right)^2}$. Thus the mass generation of the vector boson states is analogous to the Higgs mechanism. The orthogonal superposition of $A^{a(k,\ell)}_{5}(x)$ and $A^{a(k,\ell)}_{6}(x)$ composes a scalar state with the same mass. The KK gaugino modes $\lambda^{a(k,\ell)}(x)$ and $\lambda^{'a(k,\ell)}(x)$ form a $4D$ fermion state which is degenerate with the corresponding vector and scalar states. As before, we assume that all incomplete $E_6$ supermultiplets in the E$_6$SSM, which are even under the $\tilde{Z}^{H}_2$ symmetry, stem from the $6D$ bulk superfields. Hereafter we also require that the three complete families of $27_i$ representations of $E_6$ are located on the $O$ brane, where the $E_6$ gauge group is broken down to $SO(10)\times U(1)_{\psi}$.
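The $4D$ mass of a KK mode $\phi^{(k,\ell)}$ quoted above can be evaluated directly from the compactification radii; a minimal sketch (illustrative Python):

```python
import math

# 4D mass of the KK mode phi^(k, l) on T^2/Z_2:
#   m(k, l) = sqrt((k/R5)^2 + (l/R6)^2)
def kk_mass(k, l, R5=1.0, R6=1.0):
    return math.hypot(k / R5, l / R6)

# Only the (0, 0) mode of a field with all parities positive is massless.
print(kk_mass(0, 0))          # 0.0
print(kk_mass(1, 0))          # 1.0 -- first excited level for R5 = 1
print(kk_mass(3, 4))          # 5.0 for R5 = R6 = 1
print(kk_mass(1, 1, R5=2.0))  # sqrt(1/4 + 1) for unequal radii
```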
The $4D$ superfields $H_u$ and $\overline{H}_u$ can originate from the bulk $27'$--plets $\Phi^{\prime}_{H_u}$ and $\Phi^{\prime}_{\overline{H}_u}$ that decompose as follows \begin{equation} \Phi^{\prime}_{H_u} = \displaystyle\left(27,\,+,\,-,\,+\right),\qquad \Phi^{\prime}_{\overline{H}_u}= \displaystyle\left(27,\,-,\,+,\,-\right)\,, \label{51} \end{equation} where the first quantity in brackets is the $E_6$ representation while the remaining three are the parities $\sigma$, $\sigma_{I}$ and $\sigma_{II}$ associated with this representation. The parities of these bulk $27'$--plets are chosen so that the $H_u$ and $\overline{H}_u$ components of the $N=1$ chiral superfields $\hat{\Phi}^{\prime}_{H_u}$ and $\hat{\Phi}^{\prime c}_{\overline{H}_u}$ have positive parities with respect to the $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ reflection symmetries (see Table~\ref{tab5}). In this context it is essential to keep in mind that the invariance of the $6D$ action requires that the parities of the $4D$ chiral supermultiplets $\hat{\Phi}^{\prime}_{\overline{H}_u}$ and $\hat{\Phi}^{\prime c}_{\overline{H}_u}$ are opposite. Since the parities of $H_u$ and $\overline{H}_u$ are positive, the KK expansions of the bulk $27'$--plets $\Phi^{\prime}_{H_u}$ and $\Phi^{\prime}_{\overline{H}_u}$ contain zero modes that form $N=1$ chiral superfields with the quantum numbers of $H_u$ and $\overline{H}_u$. The $SU(2)_W$ doublet chiral superfields $H_u$ and $\overline{H}_u$ are not the only supermultiplets from $\Phi^{\prime}_{H_u}$ and $\Phi^{\prime}_{\overline{H}_u}$ that may survive below the scale $M_X\sim 1/R$. Indeed, the parity assignments in Eq.~(\ref{51}) indicate that the $\overline{u}^{c}$ and $\overline{e}^{c}$ components of $\hat{\Phi}^{\prime c}_{H_u}$ as well as the $u^c$ and $e^c$ components of $\hat{\Phi}^{\prime}_{\overline{H}_u}$ also have positive parities with respect to the $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ symmetries.
This means that the KK mode structures of the bulk supermultiplets $\Phi^{\prime}_{H_u}$ and $\Phi^{\prime}_{\overline{H}_u}$ involve zero modes that correspond to the $N=1$ chiral superfields $u^c$, $e^c$, $\overline{u}^{c}$ and $\overline{e}^{c}$. Because the $E_6$ gauge symmetry is broken down to the $SO(10)\times U(1)_{\psi}$ subgroup on the $O$ brane, the zero modes that come from the same bulk $27'$--plet but belong to different $SO(10)$ representations are not required to have the same transformation properties under the custodial $\tilde{Z}^{H}_2$ symmetry. This permits us to assume that the $4D$ chiral superfields $u^c$, $e^c$, $\overline{u}^{c}$ and $\overline{e}^{c}$ are odd under the $\tilde{Z}^{H}_2$ symmetry. Then these supermultiplets are expected to mix with the appropriate components from other $27$--plets, forming vectorlike states with masses slightly below $M_X$ and spoiling the $SO(10)$ relations between the Yukawa couplings of quarks and leptons to $H_u$ and $H_d$, as discussed in the previous subsection. The $4D$ superfields $H_d$ and $\overline{H}_d$ can originate from another pair of bulk $27'$--plets \begin{equation} \Phi^{\prime}_{H_d} = \displaystyle\left(27,\,+,\,-,\,-\right),\qquad \Phi^{\prime}_{\overline{H}_d}= \displaystyle\left(27,\,-,\,+,\,+\right)\,. \label{52} \end{equation} Using the orbifold parity assignments presented in Table~\ref{tab5} it is easy to check that all parities of the $H_d$ and $\overline{H}_d$ components of the $N=1$ superfields $\hat{\Phi}^{\prime}_{H_d}$ and $\hat{\Phi}^{\prime c}_{\overline{H}_d}$ are positive, so that the KK expansions of the $6D$ superfields $\Phi^{\prime}_{H_d}$ and $\Phi^{\prime}_{\overline{H}_d}$ contain the appropriate zero modes.
On the other hand, one can also find that the $\overline{d}^{c}$ and $\overline{N}^{c}$ components of $\hat{\Phi}^{\prime c}_{H_d}$ as well as the $d^c$ and $N^c$ components of $\hat{\Phi}^{\prime}_{\overline{H}_d}$ also have positive parities with respect to the $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ reflection symmetries. Therefore the particle content below the scale $M_X$ includes bosonic and fermionic states from the $N=1$ chiral supermultiplets $d^c$, $N^c$, $\overline{d}^{c}$ and $\overline{N}^{c}$ as well. The scalar components of the $4D$ superfields $N^c$ and $\overline{N}^{c}$ can be used to break $U(1)_{\psi}$ and $U(1)_{\chi}$ down to $U(1)_N\times Z_2^M$. Because of this, the supermultiplets $d^c$, $N^c$, $\overline{d}^{c}$ and $\overline{N}^{c}$ are expected to be even under the $\tilde{Z}^{H}_2$ symmetry and therefore cannot mix with the components of $27_i$ localised on the $O$ brane. The large VEVs of $N^c$ and $\overline{N}^{c}$ ($\lesssim M_X$) can give rise to masses of the bosonic and fermionic components of $N^c$ and $\overline{N}^{c}$, as well as of $d^c$ and $\overline{d}^{c}$, which are just slightly below $M_X$. In order to achieve the appropriate breakdown of the $SU(2)_W\times U(1)_Y\times U(1)_{N}$ gauge symmetry at low energies, the particle spectrum below the scale $M_X$ should be supplemented by the $4D$ chiral superfields $S$ and $\overline{S}$, which are even under the $\tilde{Z}^{H}_2$ symmetry. The corresponding zero modes can come from the pair of bulk $27'$--plets \begin{equation} \Phi^{\prime}_{S} = \displaystyle\left(27,\,+,\,+,\,-\right),\qquad \Phi^{\prime}_{\overline{S}}= \displaystyle\left(27,\,-,\,-,\,+\right)\,. \label{53} \end{equation} The $S$ and $\overline{S}$ components of the $N=1$ superfields $\hat{\Phi}^{\prime}_{S}$ and $\hat{\Phi}^{\prime c}_{\overline{S}}$ have positive orbifold parities.
The $\overline{D}$ component of $\hat{\Phi}^{\prime}_{S}$ and the companion component from the $\hat{\Phi}^{\prime c}_{\overline{S}}$ superfield also have positive parities with respect to the $Z_2$, $Z^{I}_2$ and $Z^{II}_2$ symmetries. It is convenient to assume that the states associated with these exotic quark supermultiplets are odd under the $\tilde{Z}^{H}_2$ symmetry, so that the corresponding zero modes can mix with the appropriate components of the $27$--plets localised on the $O$ brane, leading to the formation of vectorlike states with masses slightly below $M_X$ and spoiling the $SO(10)$ relations between the Yukawa couplings of $S$ to the inert Higgs and exotic quark states. In addition to the components of $\hat{\Phi}^{\prime}_{S}$ and $\hat{\Phi}^{\prime c}_{\overline{S}}$ mentioned above, the orbifold parities of the $\overline{L}$ and $L$ components of $\hat{\Phi}^{\prime c}_{S}$ and $\hat{\Phi}^{\prime}_{\overline{S}}$ are positive. If the zero modes associated with these components survive to low energies and the corresponding $N=1$ supermultiplets are even under the $\tilde{Z}^{H}_2$ symmetry, then the Yukawa couplings of these superfields to $Q_i$ and $\overline{D}_k$ allow the lightest exotic quarks to decay, as in the case of Scenario A. The discussion above indicates that the simplest $6D$ orbifold GUT model based on the $E_6$ gauge group, which may lead at low energies to the gauge group and field content of Scenario A specified in section 2, includes six bulk $27'$--plets. The consistency of this orbifold GUT model requires the absence of anomalies. In $6D$ orbifold models there are two types of anomalies: $4D$ anomalies \cite{Adler:1969gk} intrinsic to the fixed points and bulk anomalies \cite{Asaka:2002my}, \cite{vonGersdorff:2006nt}--\cite{E6-anomaly-2} which are induced by box diagrams with four gauge currents. For the $6D$ orbifold GUT model to be consistent, both the fixed point and the bulk anomalies must cancel.
The contributions of the anomalous box diagrams with four gauge currents to the $6D$ bulk anomalies are determined by the trace of four generators of the gauge group. This trace contains a nonfactorizable part and a part that can be reduced to a product of traces of two generators. The nonfactorizable part is associated with the irreducible gauge anomaly, while the factorized contribution corresponds to the reducible anomaly. The reducible anomalies can be cancelled by the Green--Schwarz mechanism \cite{Green:1984sg}. For consistency the chiral field content of the $6D$ orbifold model must lead to the cancellation of the irreducible anomalies, which is normally a highly restrictive requirement \cite{Hebecker:2001jb}. However $6D$ orbifold GUT models based on the $E_6$ gauge group do not have an irreducible bulk anomaly \cite{vonGersdorff:2006nt}--\cite{E6-anomaly-2}. Moreover, using the results obtained in \cite{E6-anomaly-2} one can show that the reducible gauge anomaly is cancelled if the field content of the $6D$ orbifold model involves six bulk $27'$--plets. The $4D$ anomalies at the fixed points are also cancelled within the $6D$ orbifold GUT model discussed above. Indeed, the contributions of the $27_i$ supermultiplets, which reside on the $O$ brane, to the anomalies vanish. Since the orbifold parity assignments are such that the KK modes of the bulk $27'$ superfields localized at the fixed points always form pairs of $N=1$ supermultiplets with opposite quantum numbers, the contributions of the bulk $27'$--plets to the $4D$ fixed point anomalies cancel automatically as well. Phenomenological viability of the $5D$ and $6D$ orbifold GUT models considered in this section requires adequate suppression of the baryon and lepton number violating operators which can be induced at the scale $M_X$, giving rise to proton decay. As mentioned before, the dimension five operators that lead to proton decay are forbidden by the gauge symmetry in these models.
However, baryon and lepton number violating operators, which are mediated by the exchange of the leptoquark gauge bosons, are enhanced compared with the usual $4D$ case due to the presence of KK towers of such states. The proton decay rate in the $6D$ orbifold GUT models based on the $SO(10)$ gauge group was studied in \cite{Buchmuller:2004eg}, where it was shown that in order to satisfy the experimental lower limit on the proton lifetime the scale $M_X$ should be larger than $9\cdot 10^{15}\,\mbox{GeV}$. This restriction on the scale $M_X$ can be used in the case of the $E_6$ inspired SUSY models as well. However the analysis of the RG flow of the gauge couplings, which we are going to consider next, indicates that the values of $g^2_i(M_X)$ in these models are 3--5 times larger than in the MSSM. This implies that the lower bound on the scale $M_X$ in the considered $E_6$ inspired models is expected to be $(1.5-2)\cdot 10^{16}\,\mbox{GeV}$. It is worth noting here again that the simplest $5D$ and $6D$ orbifold GUT models discussed in this section do not lead to exact gauge coupling unification at the scale $M_X$ due to the brane contributions to the gauge couplings. The relative contribution of these brane corrections is expected to become more sizable with increasing $g_i^2(M_X)$, as discussed before. Gauge coupling unification in $6D$ orbifold GUT models was considered in \cite{6d-susy-ogut-unif}. \section{RG flow of gauge couplings in the E$_6$SSM} In this section we discuss the RG flow of the SM gauge couplings $g_i(t)$ above the EW scale. The running of these couplings between $M_{X}$ and $M_Z$ is described by a system of renormalisation group equations (RGEs). To simplify our analysis we assume that the $U(1)_{\psi}\times U(1)_{\chi}$ gauge symmetry is broken down to $U(1)_{N}\times Z_{2}^{M}$ near the scale $M_X$.
This permits us to restrict our consideration to the analysis of the RG flow of the four diagonal gauge couplings $g_3(t)$, $g_2(t)$, $g_1(t)$ and $g'_1(t)$, which correspond to the $SU(3)_C$, $SU(2)_W$, $U(1)_Y$ and $U(1)_N$ gauge interactions respectively. In addition, the evolution of these gauge couplings is affected by kinetic term mixing. The mixing effect can be absorbed into the interaction between the $U(1)_{N}$ gauge field and matter fields, which can be parametrized in terms of the off--diagonal gauge coupling $g_{11}$ (see \cite{Langacker:1998tc}, \cite{King:2005jy}, \cite{9}). In this framework the RG equations can be written as follows: \begin{equation} \displaystyle\frac{d G}{d t}=G\times B\,,\qquad\qquad \frac{d g_2}{dt}=\displaystyle\frac{\beta_2 g_2^3}{(4\pi)^2}\,,\qquad\qquad \frac{d g_3}{dt}=\frac{\beta_3 g_3^3}{(4\pi)^2}\,, \label{54} \end{equation} where $t=\ln\left(q/M_Z\right)$, $q$ is a renormalisation scale, while $B$ and $G$ are $2\times 2$ matrices \begin{equation} G=\left( \begin{array}{cc} g_1 & g_{11}\\[2mm] 0 & g'_1 \end{array} \right)\,,\qquad B=\displaystyle\frac{1}{(4\pi)^2} \left( \begin{array}{cc} \beta_1 g_1^2 & 2g_1g'_1\beta_{11}+2g_1g_{11}\beta_1\\[2mm] 0 & g^{'2}_1\beta'_1+2g'_1 g_{11}\beta_{11}+g_{11}^2\beta_1 \end{array} \right)\,. \label{55} \end{equation} In Eqs.~(\ref{54})--(\ref{55}) $\beta_i$ and $\beta_{11}$ are the beta functions. Here we examine the RG flow of gauge couplings in the two--loop approximation. In general the two--loop diagonal $\beta_i$ and off--diagonal $\beta_{11}$ beta functions can be written as a sum of one--loop and two--loop contributions. However the previous analysis performed in \cite{King:2007uj} revealed that the off--diagonal gauge coupling $g_{11}$, if set to zero at the scale $M_X$, remains very small at any other scale below $M_X$.
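The smallness of the induced kinetic mixing can be checked directly by integrating the matrix equation (\ref{54}) at one loop. The following Python sketch is purely illustrative: it uses the one--loop Scenario A coefficients $\beta_1=48/5$, $\beta'_1=47/5$ (for $N_g=3$, $n=0$) and $\beta_{11}=-\sqrt{6}/5$ quoted below, together with a hypothetical common boundary value $g_1=g'_1=1$ and $g_{11}=0$ at $M_X=3\cdot 10^{16}\,\mbox{GeV}$, and drops all two--loop and Yukawa terms:

```python
import math

# One-loop Scenario A coefficients for N_g = 3, n = 0 (illustrative):
b1, b1p, b11 = 48/5, 47/5, -math.sqrt(6)/5   # beta_1, beta'_1, beta_11

def derivs(g1, g1p, g11):
    """Right-hand side of dG/dt = G.B for G = [[g1, g11], [0, g1p]]."""
    k = 1/(16*math.pi**2)
    B11 = k*b1*g1**2
    B12 = k*(2*g1*g1p*b11 + 2*g1*g11*b1)
    B22 = k*(b1p*g1p**2 + 2*g1p*g11*b11 + b1*g11**2)
    return g1*B11, g1p*B22, g1*B12 + g11*B22

t_X = math.log(3e16/91.19)   # t = ln(M_X / M_Z)
g1 = g1p = 1.0               # hypothetical common value at M_X
g11 = 0.0                    # no kinetic mixing at M_X
n = 20000
dt = -t_X/n                  # Euler integration downwards to M_Z
for _ in range(n):
    d1, d1p, d11 = derivs(g1, g1p, g11)
    g1, g1p, g11 = g1 + dt*d1, g1p + dt*d1p, g11 + dt*d11

print(g1, g1p, g11)
```

With these inputs the induced $g_{11}(M_Z)$ stays well below the diagonal couplings, in line with the observation of \cite{King:2007uj}.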
Since it seems rather natural to assume that just after the breakdown of the $E_6$ symmetry there is no mixing in the gauge kinetic part of the Lagrangian between the field strengths associated with the $U(1)_Y$ and $U(1)_{N}$ gauge interactions, $g_{11}$ tends to be substantially smaller than the diagonal gauge couplings. Because of this we can neglect two--loop corrections to the off--diagonal beta function $\beta_{11}$. In the case of Scenario A the one--loop off--diagonal beta function is given by $\beta_{11}=-\displaystyle\frac{\sqrt{6}}{5}$, while in Scenario B $\beta_{11}=\displaystyle\frac{3\sqrt{6}}{10}$. In Scenario A the two--loop diagonal beta functions $\beta_i$ are given by: \begin{equation} \begin{array}{rcl} \beta_3&=&-9+3N_g+\displaystyle\frac{1}{16\pi^2}\Biggl[g_3^2(-54+34 N_g)+3 N_g\,g_2^2+ N_g\, g_1^2\\[3mm] &&+N_g\,g_1^{'2}-4h_t^2-4h_b^2-2\Sigma_{\kappa}\Biggr]\,,\\[3mm] \beta_2&=&-5+3N_g+\displaystyle\frac{1}{16\pi^2}\Biggl[8N_g g_3^2+(-17+21 N_g)g_2^2+ \left(\displaystyle\frac{3}{5}+N_g\right) g_1^2\\[3mm] &&+\left(\displaystyle\frac{2}{5}+N_g\right) g_1^{'2} -6 h_t^2-6 h_b^2-2h_{\tau}^2-2\Sigma_{\lambda}\Biggr]\,,\\[3mm] \beta_1&=&\displaystyle\frac{3}{5}+3N_g+\displaystyle\frac{1}{16\pi^2}\Biggl[8N_g g_3^2+\left(\displaystyle\frac{9}{5}+3N_g\right)g_2^2+ \left(\displaystyle\frac{9}{25}+3 N_g\right) g_1^2\\[3mm] &&+\left(\displaystyle\frac{6}{25}+N_g\right) g_1^{'2}-\displaystyle\frac{26}{5} h_t^2-\displaystyle\frac{14}{5}h_b^2- \displaystyle\frac{18}{5}h_{\tau}^2-\displaystyle\frac{6}{5}\Sigma_{\lambda}-\displaystyle\frac{4}{5}\Sigma_{\kappa}\Biggr]\,,\\[3mm] \beta'_1&=&\displaystyle\frac{2}{5}+3N_g+\displaystyle\frac{5}{4}n+ \displaystyle\frac{1}{16\pi^2}\Biggl[8N_g g_3^2+\left(\displaystyle\frac{6}{5}+3N_g\right)g_2^2+ \left(\displaystyle\frac{6}{25}+ N_g\right) g_1^2\\[3mm] &&+\left(\displaystyle\frac{4}{25}+3N_g+\displaystyle\frac{25}{8}n \right) g_1^{'2}- \displaystyle\frac{9}{5} h_t^2-\displaystyle\frac{21}{5}h_b^2-\displaystyle\frac{7}{5}h_{\tau}^2- \displaystyle\frac{19}{5}\Sigma_{\lambda}-\displaystyle\frac{57}{10}\Sigma_{\kappa}\Biggr]\,,\\[3mm] \Sigma_{\lambda}&=&\lambda_1^2+\lambda_2^2+\lambda^2\,,\qquad\qquad\qquad \Sigma_{\kappa}=\kappa_1^2+\kappa_2^2+\kappa_3^2\,, \end{array} \label{56} \end{equation} where $N_g$ is the number of generations forming complete $E_6$ fundamental representations that the considered model involves at low energies, i.e. $N_g=3$, whereas $n$ is the number of $S$ and $\overline{S}$ supermultiplets from $27'_S$ and $\overline{27'}_S$ that survive to low energies (i.e. $n=0$ or $1$). Here we assume that the structure of the Yukawa interactions appearing in the superpotential (\ref{13}) is relatively simple, i.e. $\lambda_{\alpha\beta}=\lambda_{\alpha}\delta_{\alpha\beta}$ and $\kappa_{ij}=\kappa_i\delta_{ij}$, while $\tilde{f}_{\alpha\beta}$, $f_{\alpha\beta}$, $g^D_{ij}$ and $h^E_{i\alpha}$ are small and can therefore be ignored ($i,\,j=1,\,2,\,3$ and $\alpha,\,\beta=1,\,2$). We have also neglected all Yukawa couplings that may be associated with the presence of extra $S$ and $\overline{S}$ supermultiplets at low energies. In Eqs.~(\ref{56}) $h_t$, $h_b$ and $h_{\tau}$ are the top quark, $b$--quark and $\tau$--lepton Yukawa couplings respectively. In the limit $n=0$ the RG equations (\ref{56}) coincide with the ones presented in \cite{King:2007uj}.
In the scenario B the two--loop diagonal beta functions $\beta_i$ can be written in the following form: $$ \begin{array}{rcl} \beta_3&=&-8+3N_g+\displaystyle\frac{1}{16\pi^2}\Biggl[g_3^2\left(-\displaystyle\frac{128}{3}+34 N_g\right)+3 N_g\,g_2^2 + \left(N_g+\displaystyle\frac{4}{15}\right)\,g_1^2\\[3mm] && + \left(N_g+\displaystyle\frac{2}{5}\right)\,g_1^{'2}-4h_t^2-4h_b^2-2\Sigma_{\kappa}\Biggr]\,, \end{array} $$ \begin{equation} \begin{array}{rcl} \beta_2&=&-4+3N_g+\displaystyle\frac{1}{16\pi^2}\Biggl[8N_g g_3^2+(-10+21 N_g)g_2^2+ \left(\displaystyle\frac{6}{5}+N_g\right) g_1^2\\[3mm] &&+\left(\displaystyle\frac{13}{10}+N_g\right) g_1^{'2} -6 h_t^2-6 h_b^2-2h_{\tau}^2-2\tilde{\Sigma}_{\lambda}\Biggr]\,,\\[3mm] \beta_1&=&\displaystyle\frac{8}{5}+3N_g+\displaystyle\frac{1}{16\pi^2}\Biggl[\left(8N_g + \displaystyle\frac{32}{15}\right)g_3^2+ \left(\displaystyle\frac{18}{5}+3N_g\right)g_2^2+ \left(\displaystyle\frac{62}{75}+3 N_g\right) g_1^2\\[3mm] &&+\left(\displaystyle\frac{47}{50}+N_g\right) g_1^{'2}-\displaystyle\frac{26}{5} h_t^2-\displaystyle\frac{14}{5}h_b^2- \displaystyle\frac{18}{5}h_{\tau}^2-\displaystyle\frac{6}{5}\tilde{\Sigma}_{\lambda}-\displaystyle\frac{4}{5}\Sigma_{\kappa}\Biggr]\,,\\[3mm] \beta'_1&=&\displaystyle\frac{19}{10}+3N_g+\displaystyle\frac{5}{4}n+ \displaystyle\frac{1}{16\pi^2}\Biggl[\left(8N_g+\displaystyle\frac{16}{5}\right)g_3^2+\left(\displaystyle\frac{39}{10}+3N_g\right)g_2^2\\[4mm] &&+\left(\displaystyle\frac{47}{50}+ N_g\right) g_1^2 +\left(\displaystyle\frac{121}{100}+3N_g+\displaystyle\frac{25}{8}n \right) g_1^{'2}\\[3mm] &&-\displaystyle\frac{9}{5} h_t^2-\displaystyle\frac{21}{5}h_b^2-\displaystyle\frac{7}{5}h_{\tau}^2- \displaystyle\frac{19}{5}\tilde{\Sigma}_{\lambda}-\displaystyle\frac{57}{10}\Sigma_{\kappa}\Biggr]\,, \end{array} \label{57} \end{equation} where $\tilde{\Sigma}_{\lambda}=\lambda_1^2+\lambda_2^2+\lambda_3^2+\lambda^2$. 
As before, we assume a relatively simple structure of the Yukawa interactions in the superpotential (\ref{18}), i.e. $\lambda_{ij}=\lambda_{i}\delta_{ij}$, $\kappa_{ij}=\kappa_i\delta_{ij}$, and ignore $\tilde{f}_{\alpha i}$, $f_{\alpha i}$, $g^{q}_{ij}$, $h^D_{ij}$ as well as all Yukawa couplings of the extra $S$ and $\overline{S}$ supermultiplets. As one can see from Eqs.~(\ref{56})--(\ref{57}), $N_g=3$ is the critical value for the one--loop beta function of the strong interactions in the case of Scenario A. Indeed, in the one--loop approximation the beta function of the $SU(3)_C$ gauge coupling vanishes in this case. In Scenario B the one--loop contribution to $\beta_3$ remains rather small ($b_3=1$). Because of this, any reliable analysis of the RG flow of gauge couplings requires the inclusion of two--loop corrections to the diagonal beta functions. One can obtain an approximate solution of the two--loop RGEs presented above (see \cite{Chankowski:1995dm}). At high energies this solution for the SM gauge couplings can be written as \begin{equation} \displaystyle\frac{1}{\alpha_i(t)}=\frac{1}{\alpha_i(M_Z)}-\displaystyle\frac{b_i}{2\pi} t-\frac{C_i}{12\pi}-\Theta_i(t) +\displaystyle\frac{b_i-b_i^{SM}}{2\pi}\ln\frac{T_i}{M_Z}\,, \label{58} \end{equation} where $\alpha_i(t)=\displaystyle\frac{g_i^2(t)}{4\pi}$, $b_i$ and $b_i^{SM}$ are the coefficients of the one--loop beta functions in the E$_6$SSM and the SM respectively, the third term on the right--hand side of Eq.~(\ref{58}) is the $\overline{MS}\to\overline{DR}$ conversion factor with $C_1=0$, $C_2=2$, $C_3=3$ \cite{MS-DR}, while \begin{equation} \Theta_i(t)=\displaystyle\frac{1}{2\pi}\int_0^t (\beta_i-b_i)d\tau\,,\qquad\qquad T_i=\prod_{k=1}^N\biggl(m_k\biggr)^{\displaystyle\frac{\Delta b^k_i}{b_i-b_i^{SM}}}\,. \label{59} \end{equation} In Eq.~(\ref{59}) $m_k$ and $\Delta b_i^k$ are the masses and the one--loop contributions to the beta functions due to the new particles appearing in the E$_6$SSM.
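To see what Eq.~(\ref{58}) implies in practice, one can keep only the pure one--loop pieces (dropping the conversion factor, $\Theta_i$ and the threshold term) and run the couplings from the EW--scale inputs quoted later in the text up to $M_X=3\cdot 10^{16}\,\mbox{GeV}$ with the Scenario A one--loop coefficients $b_i=(48/5,\,4,\,0)$. A minimal, purely illustrative Python sketch:

```python
import math

# EW-scale inputs quoted in the text (M_Z = 91.19 GeV assumed):
alpha_em, sin2w, alpha3 = 1/127.9, 0.231, 0.118
inv_a1 = (3/5)*(1 - sin2w)/alpha_em     # GUT-normalised 1/alpha_1(M_Z)
inv_a2 = sin2w/alpha_em                 # 1/alpha_2(M_Z)
inv_a3 = 1/alpha3

# One-loop E6SSM coefficients in Scenario A (N_g = 3): b_i = (48/5, 4, 0)
b = {1: 48/5, 2: 4.0, 3: 0.0}
t = math.log(3e16/91.19)                # run from M_Z up to M_X

at_MX = {1: inv_a1 - b[1]*t/(2*math.pi),
         2: inv_a2 - b[2]*t/(2*math.pi),
         3: inv_a3 - b[3]*t/(2*math.pi)}
print(at_MX)
```

Already at this crude level $1/\alpha_i(M_X)\approx 8$ for all three couplings, i.e. $\alpha_i(M_X)\approx 0.12$, several times larger than the familiar MSSM value $\alpha_{GUT}\approx 1/24$, as stated in the previous section.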
For the calculation of $\Theta_i(t)$ the solutions of the one--loop RGEs are normally used. In Eqs.~(\ref{58})--(\ref{59}) only the leading one--loop threshold effects are taken into account. Using the approximate solution (\ref{58})--(\ref{59}) of the two--loop RGEs, one can establish the relationships between the values of the gauge couplings at low energies and at the GUT scale. Then, using the expressions describing the RG flow of $\alpha_1(t)$ and $\alpha_2(t)$, it is rather easy to find the scale $M_X$ where $\alpha_1(M_X)=\alpha_2(M_X)=\alpha_0$ and the value of the overall gauge coupling $\alpha_0$ at this scale. Substituting $M_X$ and $\alpha_0$ into the solution of the RGE for the strong gauge coupling, one finds the value of $\alpha_3(M_Z)$ for which exact gauge coupling unification occurs (see \cite{Carena:1993ag}): \begin{equation} \begin{array}{c} \displaystyle\frac{1}{\alpha_3(M_Z)}=\frac{1}{b_1-b_2}\biggl[\displaystyle\frac{b_1-b_3}{\alpha_2(M_Z)}- \displaystyle\frac{b_2-b_3}{\alpha_1(M_Z)}\biggr]-\frac{1}{28\pi}+\Theta_s+\frac{19}{28\pi}\ln\frac{T_{S}}{M_Z}\,,\\[4mm] \Theta_s=\biggl(\displaystyle\frac{b_2-b_3}{b_1-b_2}\Theta_1-\frac{b_1-b_3}{b_1-b_2}\Theta_2+\Theta_3\biggr)\,, \qquad \Theta_i=\Theta_i(M_X)\,. \end{array} \label{60} \end{equation} The combined threshold scale $T_{S}$ that appears in Eq.~(\ref{60}) can be expressed in terms of the effective threshold scales $T_1$, $T_2$ and $T_3$. The expression for $T_{S}$ is model--dependent.
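The one--loop part of Eq.~(\ref{60}) (dropping the conversion factor, $\Theta_s$ and the threshold term) already reproduces the central value of $\alpha_3(M_Z)$. The sketch below evaluates it for the Scenario A coefficients $b_i=(48/5,\,4,\,0)$ and, for comparison, for the MSSM coefficients $b_i=(33/5,\,1,\,-3)$; since only the differences $b_i-b_j$ enter, the two predictions coincide at this order:

```python
alpha_em, sin2w = 1/127.9, 0.231          # EW-scale inputs from the text
inv_a1 = (3/5)*(1 - sin2w)/alpha_em       # GUT-normalised 1/alpha_1(M_Z)
inv_a2 = sin2w/alpha_em                   # 1/alpha_2(M_Z)

preds = []
# Scenario A (E6SSM) and MSSM one-loop coefficients b_1, b_2, b_3:
for b1, b2, b3 in [(48/5, 4.0, 0.0), (33/5, 1.0, -3.0)]:
    inv_a3 = ((b1 - b3)*inv_a2 - (b2 - b3)*inv_a1)/(b1 - b2)
    preds.append(1/inv_a3)
print(preds)
```

Both entries come out near $0.118$; it is the different two--loop and threshold pieces that separate the two models.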
In the scenario A $T_{S}$ is given by $$ \begin{array}{rcl} T_{S}&=&\displaystyle\frac{T_2^{172/19}}{T_1^{55/19} T_3^{98/19}}\,,\\[0mm] T_1&=&\tilde{M}_1^{5/11} \mu_{L}^{4/55} m_{L}^{2/55} \Biggl(\prod_{i=1,2,3}m_{\tilde{D}_i}^{4/165}\mu_{D_i}^{8/165}\Biggr) \Biggl(\prod_{\alpha=1,2}m_{H_{\alpha}}^{2/55}\mu_{\tilde{H}_{\alpha}}^{4/55}\Biggr)\,,\\[0mm] \end{array} $$ \vspace{-2mm} \begin{eqnarray} T_2&=&\tilde{M}_2^{25/43} \mu_{L}^{4/43} m_{L}^{2/43} \Biggl(\prod_{\alpha=1,2} m_{H_{\alpha}}^{2/43}\mu_{\tilde{H}_{\alpha}}^{4/43}\Biggr)\,,\qquad\qquad\qquad\qquad\qquad\quad\nonumber\\[0mm] T_3&=&\tilde{M}_3^{4/7}\Biggl(\prod_{i=1,2,3}m_{\tilde{D}_i}^{1/21}\mu_{D_i}^{2/21}\Biggr)\,, \label{61} \end{eqnarray} where $\mu_{D_i}$ and $m_{\tilde{D}_i}$ are the masses of exotic quarks and their superpartners, $m_{H_{\alpha}}$ and $\mu_{\tilde{H}_{\alpha}}$ are the masses of Inert Higgs and Inert Higgsino fields, $m_{L}$ and $\mu_{L}$ are the masses of the scalar and fermion components of $L_4$ and $\overline{L}_4$ while $\tilde{M}_1$, $\tilde{M}_2$ and $\tilde{M}_3$ are the effective threshold scales in the MSSM \begin{eqnarray} \tilde{M}_1&=& \mu^{4/25} m_{A}^{1/25} \Biggl(\prod_{i=1,2,3} m_{\tilde{Q}_i}^{1/75} m_{\tilde{d}_i}^{2/75} m_{\tilde{u}_i}^{8/75} m_{\tilde{L}_i}^{1/25} m_{\tilde{e}_i}^{2/25}\Biggr)\,,\nonumber \\[0mm] \tilde{M}_2&=& M_{\tilde{W}}^{8/25} \mu^{4/25} m_A^{1/25} \Biggl(\prod_{i=1,2,3} m_{\tilde{Q}_i}^{3/25} m_{\tilde{L}_i}^{1/25}\Biggr)\,,\nonumber\\[0mm] \tilde{M}_3&=& M_{\tilde{g}}^{1/2} \Biggl(\prod_{i=1,2,3} m_{\tilde{Q}_i}^{1/12} m_{\tilde{u}_i}^{1/24} m_{\tilde{d}_i}^{1/24}\Biggr)\,. 
\label{62} \end{eqnarray} In Eqs.~(\ref{62}) $M_{\tilde{g}}$ and $M_{\tilde{W}}$ are masses of gluinos and winos (superpartners of $SU(2)_W$ gauge bosons), $\mu$ and $m_A$ are effective $\mu$--term and masses of heavy Higgs states respectively; $m_{\tilde{u}_i}$, $m_{\tilde{d}_i}$ and $m_{\tilde{Q}_i}$ are the masses of the right--handed and left--handed squarks and $m_{\tilde{L}_i}$ and $m_{\tilde{e}_i}$ are the masses of the left--handed and right--handed sleptons. In the case of scenario B we find \begin{eqnarray} \tilde{T}_{S}&=&\displaystyle\frac{\tilde{T}_2^{196/19}}{\tilde{T}_1^{65/19} \tilde{T}_3^{112/19}}\,,\nonumber \\[0mm] \tilde{T}_1&=&\tilde{M}_1^{5/13} \mu_{d_4}^{8/195} m_{d_4}^{4/195} \mu_{H_u}^{4/65} m_{H_u}^{2/65} \mu_{H_d}^{4/65} m_{H_d}^{2/65} \Biggl(\prod_{i=1,2,3}m_{\tilde{D}_i}^{4/195}\mu_{D_i}^{8/195}\Biggr) \Biggl(\prod_{\alpha=1,2}m_{H_{\alpha}}^{2/65}\mu_{\tilde{H}_{\alpha}}^{4/65}\Biggr)\,,\nonumber\\[0mm] \tilde{T}_2&=&\tilde{M}_2^{25/49} \mu_{H_u}^{4/49} m_{H_u}^{2/49} \mu_{H_d}^{4/49} m_{H_d}^{2/49} \Biggl(\prod_{\alpha=1,2} m_{H_{\alpha}}^{2/49}\mu_{\tilde{H}_{\alpha}}^{4/49}\Biggr)\,,\nonumber\\[0mm] \tilde{T}_3&=&\tilde{M}_3^{1/2} \mu_{d_4}^{1/12} m_{d_4}^{1/24} \Biggl(\prod_{i=1,2,3} m_{\tilde{D}_i}^{1/24} \mu_{D_i}^{1/12}\Biggr)\,, \label{63} \end{eqnarray} where $\mu_{d_4}$, $\mu_{H_u}$ and $\mu_{H_d}$ are the masses of the fermionic components of $d^c_4$ and $\overline{d^c}_4$, $H^u_{i}$ and $\overline{H}_u$ as well as $H^d_{i}$ and $\overline{H}_d$, that form vector-like states at low energies, whereas $m_{d_4}$, $m_{H_u}$ and $m_{H_d}$ are the masses of the scalar components of the corresponding supermultiplets. In general the effective threshold scales derived above can be quite different. Since our purpose is to establish the range of the values of $T_S$ and $\tilde{T}_{S}$ that leads to the unification of gauge couplings we shall set these effective threshold scales equal to each other. 
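Each of the effective threshold scales above is a weighted geometric mean of masses whose exponents sum to unity, so that $T_i$ reduces to the common mass when the spectrum is degenerate. A small Python sketch (with purely illustrative masses) checks this for $\tilde{M}_3$ and $T_3$ from Eqs.~(\ref{61})--(\ref{62}):

```python
from math import prod, isclose

def geo_mean(mass_exponents):
    """Weighted geometric mean prod(m**w); the weights w must sum to 1."""
    assert isclose(sum(w for _, w in mass_exponents), 1.0)
    return prod(m**w for m, w in mass_exponents)

# Illustrative MSSM-like gluino/squark masses (GeV), Eq. (62):
Mg, mQ, mu, md = 2000.0, 1500.0, 1400.0, 1400.0
M3 = geo_mean([(Mg, 1/2)] + [(mQ, 1/12), (mu, 1/24), (md, 1/24)]*3)

# Eq. (61): T_3 folds in the exotic quarks and their superpartners
mD, muD = 1200.0, 1000.0
T3 = geo_mean([(M3, 4/7)] + [(mD, 1/21), (muD, 2/21)]*3)
print(M3, T3)
```

Because the weights sum to one, $T_3$ always lies between the lightest and heaviest mass entering it; here the lighter exotic states pull $T_3$ below $\tilde{M}_3$.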
Then from Eqs.~(\ref{61}) and (\ref{63}) it follows that $T_1=T_2=T_3=T_S$ and $\tilde{T}_1=\tilde{T}_2=\tilde{T}_3=\tilde{T}_S$. The results of our numerical studies of the two--loop RG flow of gauge couplings in the case of scenarios A and B are summarized in Figs.~\ref{essmfig1} and \ref{essmfig2} respectively. We use the two--loop SM beta functions to describe the running of gauge couplings between $M_Z$ and $T_1=T_2=T_3=T_S$ (or $\tilde{T}_1=\tilde{T}_2=\tilde{T}_3=\tilde{T}_S$), then we apply the two--loop RGEs of the E$_6$SSM to compute the flow of $g_i(t)$ from $T_S$ (or $\tilde{T}_S$) to $M_X$ which is equal to $3\cdot 10^{16}\,\mbox{GeV}$ in the case of the E$_6$SSM. The low energy values of $g'_1$ and $g_{11}$ are chosen so that all four diagonal gauge couplings are approximately equal near the GUT scale and $g_{11}=0$ at this scale. For the calculation of the evolution of Yukawa couplings a set of one--loop RGEs is used. The corresponding one--loop RG equations are specified in \cite{King:2005jy}. \begin{figure} \begin{center} \hspace*{-11cm}{$\alpha_i(t)$}\\[1mm] \includegraphics[height=70mm,keepaspectratio=true]{gc-scen-a1.eps}\\ \hspace*{0cm}{$2\log[q/M_Z]$}\\[1mm] \hspace*{0cm}{\bf (a)}\\[3mm] \hspace*{-11cm}{$\alpha_i(t)$}\\[1mm] \includegraphics[height=70mm,keepaspectratio=true]{gc-scen-a11.eps}\\ \hspace*{0cm}{$2\log[q/M_Z]$}\\[1mm] \hspace*{0cm}{\bf (b) }\\ \vspace{-3mm} \caption{Two--loop RG flow of gauge couplings in the Scenario A: {\it (a)} RG flow of $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ couplings from $M_Z$ to $M_X$ for $T_S=400\,\mbox{GeV}$ and $n_S=1$; {\it (b)} running of SM gauge couplings in the vicinity of $M_X$ for $T_S=400\,\mbox{GeV}$ and $n_S=1$. Thick, dashed and solid lines correspond to the running of $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ couplings respectively. 
We used $\tan\beta=10$, $\alpha_s(M_Z)=0.118$, $\alpha(M_Z)=1/127.9$, $\sin^2\theta_W=0.231$ and $\kappa_1(T_S)=\kappa_2(T_S) =\kappa_3(T_S)=\lambda_1(T_S)=\lambda_2(T_S)=\lambda_3(T_S)=g^{'}_1(T_S)$. The dotted lines represent the uncertainty in $\alpha_i(t)$ caused by the variation of the strong gauge coupling from 0.116 to 0.120 at the EW scale.} \end{center} \label{essmfig1} \end{figure} In Fig.~\ref{essmfig1} we fix the effective threshold scale to be equal to $400\,\mbox{GeV}$. In Fig.~1a we plot the running of the gauge couplings from $M_Z$ to $M_X$, assuming that the low energy matter content involves three 27--plets of $E_6$ as well as the $L_4$, $\overline{L}_4$, $S$ and $\overline{S}$ supermultiplets. Fig.~1b shows a blow--up of the crucial region in the vicinity of the GUT scale. Dotted lines show the interval of variations of the gauge couplings caused by $1\,\sigma$ deviations of $\alpha_3(M_Z)$ around its average value, i.e. $\alpha_3(M_Z)\simeq 0.118\pm 0.002$. The results of the numerical analysis presented in Fig.~\ref{essmfig1} demonstrate that in Scenario A almost exact unification of the SM gauge couplings can be achieved for $\alpha_3(M_Z)=0.118$ and $T_S=400\,\mbox{GeV}$. With increasing (decreasing) effective threshold scale, the value of $\alpha_3(M_Z)$ at which exact gauge coupling unification takes place becomes lower (higher). Thus in this case gauge coupling unification can be achieved for any phenomenologically reasonable value of $\alpha_3(M_Z)$ consistent with the measured central low energy value, unlike in the MSSM, where it is rather problematic to obtain exact unification of the gauge couplings \cite{Chankowski:1995dm}, \cite{gc-unif-mssm-2}--\cite{gc-unif-mssm-1}.
Indeed, it is well known that in order to achieve gauge coupling unification in the MSSM with $\alpha_s(M_Z)\simeq 0.118$, the combined threshold scale, which is given by \cite{Chankowski:1995dm}, \cite{Carena:1993ag}, \cite{gc-unif-mssm-1}--\cite{Langacker:1992rq} \begin{equation} \tilde{M}_{S}=\displaystyle\frac{\tilde{M}_2^{100/19}}{\tilde{M}_1^{25/19} \tilde{M}_3^{56/19}} \simeq \mu/6\,, \label{64} \end{equation} must be around $\tilde{M}_S\approx 1\,\mbox{TeV}$. However the correct pattern of EW symmetry breaking requires $\mu$ to lie within the $1-2\,\mbox{TeV}$ range which implies $\tilde{M}_S<200-300\,\mbox{GeV}$, so that, ignoring the effects of high energy threshold corrections, the exact gauge coupling unification in the MSSM requires significantly higher values of $\alpha_3(M_Z)$, well above the experimentally measured central value \cite{Chankowski:1995dm}, \cite{Carena:1993ag}, \cite{gc-unif-mssm-1}--\cite{gc-unif-mssm-3}. It was argued that it is possible to get the unification of gauge couplings in the minimal SUSY model for $\alpha_3(M_Z)\simeq 0.123$ \cite{gc-unif-mssm-4}. On the other hand in the case of scenario A the combined threshold scale $T_{S}$ can be substantially larger than in the MSSM. This can be seen directly from the explicit expression for $T_S$. Combining Eqs.~(\ref{61}) we find \begin{eqnarray} T_{S}&=&\tilde{M}_{S}\cdot \Biggl(\displaystyle\frac{\mu_{L}^{12/19} m_{L}^{6/19}}{\mu_{D_3}^{12/19} m_{\tilde{D}_3}^{6/19}}\Biggr) \Biggl(\prod_{\alpha=1,2} \displaystyle\frac{m_{H_{\alpha}}^{6/19}\mu_{\tilde{H}_{\alpha}}^{12/19}}{m_{\tilde{D}_{\alpha}}^{6/19}\mu_{D_{\alpha}}^{12/19}} \Biggr)\,. \label{65} \end{eqnarray} From Eq.~(\ref{65}) it is obvious that $T_{S}$ is determined by the masses of the scalar and fermion components of $L_4$ and $\overline{L}_4$. The term $\mu_L L_4\overline{L}_4$ in the superpotential (\ref{13}) is not involved in the process of EW symmetry breaking. 
As a consequence the parameter $\mu_L$ remains arbitrary\footnote{When $\mu_L$ is considerably larger than the SUSY breaking scale, $m_L\simeq \mu_L$.}. In particular, since the corresponding mass term is not suppressed by the $E_6$ symmetry, the components of the doublet superfields $L_4$ and $\overline{L}_4$ may be much heavier than all the exotic states, resulting in a large combined threshold scale $T_{S}$ that lies in the few hundred GeV range even when the scale $\tilde{M}_S$ is relatively low. The large range of variation of $T_{S}$ allows one to achieve exact unification of the gauge couplings in Scenario A for any value of $\alpha_3(M_Z)$ which is in agreement with current data. It is worth noting here that, in principle, one could naively expect that large two--loop corrections to the diagonal beta functions would spoil the unification of the SM gauge couplings entirely in the considered case. Indeed, in Scenario A these corrections affect the RG flow of the gauge couplings much more strongly than in the case of the MSSM because at any intermediate scale the values of the gauge couplings in the E$_6$SSM are substantially larger than the ones in the MSSM. Nevertheless the results of our analysis discussed above are not as surprising as they may first appear. The analysis of the RG flow of the SM gauge couplings performed in \cite{King:2007uj} revealed that the two--loop corrections to $\alpha_i(M_X)$ are a few times bigger in the E$_6$SSM than in the MSSM. At the same time, due to a remarkable cancellation of different two--loop corrections, the absolute value of $\Theta_s$ is more than three times smaller in the E$_6$SSM than in the MSSM. This cancellation is caused by the structure of the two--loop corrections to the diagonal beta functions in the considered model. As a result, the prediction for the value of $\alpha_3(M_Z)$ at which exact gauge coupling unification takes place is considerably lower in the E$_6$SSM than in the MSSM.
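Returning to Eq.~(\ref{65}), the enhancement of $T_S$ relative to $\tilde{M}_S$ is easy to quantify. The sketch below evaluates the dimensionless ratio for a purely illustrative spectrum in which the $L_4$, $\overline{L}_4$ components sit at $4\,\mbox{TeV}$ while the inert Higgs and exotic quark states are degenerate at $1\,\mbox{TeV}$:

```python
# Eq. (65): T_S = M_S_tilde times a dimensionless mass ratio.
# All masses below are illustrative (GeV), not fitted values.
M_S_tilde = 100.0                 # MSSM combined threshold scale
mu_L = m_L = 4000.0               # fermion/scalar components of L4, L4bar
mu_D = m_D = 1000.0               # exotic quarks / squarks (degenerate)
m_H = mu_H = 1000.0               # inert Higgs / inert Higgsino states

ratio = (mu_L**(12/19) * m_L**(6/19)) / (mu_D**(12/19) * m_D**(6/19))
for _ in range(2):                # the two alpha = 1, 2 factors in Eq. (65)
    ratio *= (m_H**(6/19) * mu_H**(12/19)) / (m_D**(6/19) * mu_D**(12/19))
T_S = M_S_tilde * ratio
print(T_S)
```

With heavy $L_4$, $\overline{L}_4$ components the combined scale lands in the few hundred GeV range even though $\tilde{M}_S$ itself is low, which is precisely the regime used for Fig.~\ref{essmfig1}.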
The only difference between the E$_6$SSM scenario which was studied in \cite{King:2007uj} and Scenario A discussed above is the possible presence of extra $S$ and $\overline{S}$ supermultiplets at low energies. From Eqs.~(\ref{56}) it follows that these supermultiplets do not contribute to the diagonal beta functions of the SM gauge couplings. Our analysis of the RG flow of $g_i(t)$ reveals that the evolution of the SM gauge couplings does not change much when the low energy particle spectrum is supplemented by the bosonic and fermionic components that originate from the extra $S$ and $\overline{S}$ chiral superfields. This explains why our results are so similar to those previously obtained in \cite{King:2007uj}. It is also worth pointing out that at high energies the uncertainty in $\alpha_3(t)$ caused by the variations of $\alpha_3(M_Z)$ is much bigger in the E$_6$SSM than in the MSSM. This is because in the E$_6$SSM the strong gauge coupling grows slightly with increasing renormalisation scale, whereas in the MSSM it decreases at high energies. This implies that the uncertainty in the high energy value of $\alpha_3(t)$ in the E$_6$SSM is approximately equal to the low energy uncertainty in $\alpha_3(t)$, while in the MSSM the interval of variations of $\alpha_3(t)$ near the scale $M_X$ shrinks drastically. The relatively large uncertainty in $\alpha_3(M_X)$ in the E$_6$SSM, compared to the MSSM, allows one to achieve exact unification of gauge couplings for values of $\alpha_3(M_Z)$ which are within one standard deviation of the measured central value. The RG flow of the SM gauge couplings changes substantially in the case of Scenario B, as can be seen from Fig.~\ref{essmfig2}. As before, we assume that the effective threshold scales are equal, i.e. $\tilde{T}_1=\tilde{T}_2=\tilde{T}_3=\tilde{T}_S$. Our numerical analysis reveals that the evolution of $\alpha_i(t)$ depends very strongly on $\tilde{T}_S$.
When $\tilde{T}_S\lesssim 1\,\mbox{TeV}$ the gauge couplings become rather large near the GUT scale, i.e. $\alpha_i(M_X) \sim 1$, where as before we set $M_X\simeq 3\cdot 10^{16}\,\mbox{GeV}$. For such large values of $\alpha_i(t)$ perturbation theory becomes inapplicable. Therefore in our analysis we consider threshold scales $\tilde{T}_S$ which are much higher than $1\,\mbox{TeV}$. In Fig.~\ref{essmfig2} we set the threshold scale $\tilde{T}_S$ to be equal to $3\,\mbox{TeV}$. As one can see from these figures, for $\tilde{T}_S=3\,\mbox{TeV}$ the values of $\alpha_i(M_X)$ are about $0.2$, which still allows us to use perturbation theory up to the scale $M_X$. \begin{figure} \begin{center} \hspace*{-11cm}{$\alpha_i(t)$}\\[1mm] \includegraphics[height=70mm,keepaspectratio=true]{gc-scenb-3tev.eps}\\ \hspace*{0cm}{$2\log[q/M_Z]$}\\[1mm] \hspace*{0cm}{\bf (a)}\\[3mm] \hspace*{-11cm}{$\alpha_i(t)$}\\[1mm] \includegraphics[height=70mm,keepaspectratio=true]{gc-scenb1-3tev.eps}\\ \hspace*{0cm}{$2\log[q/M_Z]$}\\[1mm] \hspace*{0cm}{\bf (b) }\\ \vspace{-3mm} \caption{Two--loop RG flow of gauge couplings in the Scenario B: {\it (a)} evolution of $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ couplings from the EW scale to the GUT scale for $\tilde{T}_S=3\,\mbox{TeV}$ and $n_S=0$; {\it (b)} running of SM gauge couplings near the scale $M_X$ for $\tilde{T}_S=3\,\mbox{TeV}$ and $n_S=0$. The parameters and notations are the same as in Fig.~\ref{essmfig1}. } \end{center} \label{essmfig2} \end{figure} The effective threshold scale $\tilde{T}_S$ that we consider in our analysis is thus in the multi--TeV range. At first glance it is not clear whether such large values of $\tilde{T}_i$ and $\tilde{T}_S$ can be obtained for a reasonable set of parameters. In particular, to satisfy naturalness requirements the third generation sfermions as well as the neutralino and chargino states, which are superpartners of the SM gauge bosons and Higgs fields, are expected to have masses below $1\,\mbox{TeV}$.
Because of this, in the MSSM naturalness arguments constrain the combined threshold scale $\tilde{M}_{S}$ to be lower than $200-300\,\mbox{GeV}$, as mentioned above. In the case of Scenario B the analytical expression for the threshold scale $\tilde{T}_{S}$ can be obtained by combining Eqs.~(\ref{63}), which gives \begin{eqnarray} \tilde{T}_{S}&=&\tilde{M}_{S}\cdot \Biggl(\displaystyle\frac{\mu_{H_u}^{12/19} m_{H_u}^{6/19} \mu_{H_d}^{12/19} m_{H_d}^{6/19}} {\mu_{d_4}^{12/19} m_{d_4}^{6/19} \mu_{D_3}^{12/19} m_{\tilde{D}_3}^{6/19}}\Biggr) \Biggl(\prod_{\alpha=1,2} \displaystyle\frac{m_{H_{\alpha}}^{6/19}\mu_{\tilde{H}_{\alpha}}^{12/19}}{m_{\tilde{D}_{\alpha}}^{6/19}\mu_{D_{\alpha}}^{12/19}} \Biggr)\,. \label{66} \end{eqnarray} Eq.~(\ref{66}) indicates that the combined threshold scale $\tilde{T}_{S}$ tends to be very large if, for example, $\mu_{H_u}\simeq m_{H_u}\simeq \mu_{H_d}\simeq m_{H_d}$ are considerably larger than the masses of the scalar and fermion components of $d^c_4$ and $\overline{d^c}_4$ as well as the masses of all exotic states. In this case $\tilde{T}_{S}$ can be as large as $10\,\mbox{TeV}$ even when $\tilde{M}_{S}$ lies in the few hundred GeV range and $\mu_{H_u}\simeq m_{H_u}\simeq \mu_{H_d}\simeq m_{H_d}\lesssim 10\,\mbox{TeV}$. This can be achieved if the components of $d^c_4$ and $\overline{d^c}_4$ and some of the exotic quark and squark states have masses below $1\,\mbox{TeV}$. The effective threshold scales $\tilde{T}_1$, $\tilde{T}_2$ and $\tilde{T}_3$ can also be as large as a few $\mbox{TeV}$ if the scalar superpartners of the first and second generation fermions and some of the exotic states have masses above $10\,\mbox{TeV}$. Naturalness does not require these states to be light and, in fact, allowing them to be heavy ameliorates the SUSY flavor and CP problems. As a consequence, threshold scales $\tilde{T}_1$, $\tilde{T}_2$, $\tilde{T}_3$ and $\tilde{T}_S$ of several TeV can naturally emerge in Scenario B.
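The mechanism can be made concrete with an illustrative spectrum: take $\tilde{M}_S=300\,\mbox{GeV}$, put the vectorlike $H_u$, $\overline{H}_u$, $H_d$, $\overline{H}_d$ states at $5\,\mbox{TeV}$ and keep the $d^c_4$, $\overline{d^c}_4$ and exotic quark states at $1\,\mbox{TeV}$; evaluating Eq.~(\ref{66}) then yields a multi--TeV $\tilde{T}_S$. These mass values are hypothetical, chosen only to exhibit the enhancement:

```python
# Eq. (66): hypothetical Scenario B spectrum (all masses in GeV) with
# heavy vector-like H_u, H_d states and ~1 TeV d4/exotic-quark states.
M_S_tilde = 300.0
mu_Hu = m_Hu = mu_Hd = m_Hd = 5000.0          # heavy vector-like Higgs
mu_d4 = m_d4 = mu_D3 = m_D3 = 1000.0          # d4 components, third exotic
m_H = mu_H = mu_D = m_D = 1000.0              # inert Higgs vs exotics

num = mu_Hu**(12/19) * m_Hu**(6/19) * mu_Hd**(12/19) * m_Hd**(6/19)
den = mu_d4**(12/19) * m_d4**(6/19) * mu_D3**(12/19) * m_D3**(6/19)
T_S_tilde = M_S_tilde * num/den
for _ in range(2):                            # the alpha = 1, 2 factors
    T_S_tilde *= (m_H**(6/19) * mu_H**(12/19)) / (m_D**(6/19) * mu_D**(12/19))
print(T_S_tilde)
```

The output lies in the multi--TeV range even though $\tilde{M}_S$ itself stays at a few hundred GeV, in line with the discussion above.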
In Fig.~\ref{essmfig2}a we show the running of the SM gauge couplings from the EW scale to high energies. We assume that in this case the low energy matter content includes three 27-plets of $E_6$ as well as the $d^c_4$, $\overline{d^c}_4$, $H_u$, $\overline{H}_u$, $H_d$ and $\overline{H}_d$ supermultiplets. Fig.~\ref{essmfig2}b shows the same RG flow of the SM gauge couplings, but only around the scale where the values of $\alpha_i(t)$ become rather close. Again, the dotted lines in Figs.~\ref{essmfig2}a and \ref{essmfig2}b represent the changes in the evolution of the SM gauge couplings induced by the variation of $\alpha_3(M_Z)$ within $1\,\sigma$ around its average value. From these figures one can see that the interval of variation of $\alpha_3(t)$ widens with increasing renormalisation scale. The growth of the uncertainty in the high energy value of $\alpha_3(t)$ is caused by the growth of this coupling itself. As follows from Figs.~\ref{essmfig1} and \ref{essmfig2}, in scenario B the SM gauge couplings grow faster with increasing renormalisation scale than in scenario A. This happens because the one--loop beta functions of these couplings are larger in scenario B than in scenario A. As a consequence the interval of variation of $\alpha_3(t)$ at high energies is also somewhat bigger in the former than in the latter. However, as one can see from Figs.~\ref{essmfig2}a and \ref{essmfig2}b, this does not facilitate gauge coupling unification in scenario B. In fact, these figures demonstrate that large two--loop corrections spoil the unification of gauge couplings in this case. Indeed, in the one--loop approximation Eq.~(\ref{60}) leads to the same prediction for $\alpha_3(M_Z)$ in scenarios A and B, because the extra matter in these scenarios forms complete $SU(5)$ representations which contribute equally to the one--loop beta functions of the $SU(3)_C$, $SU(2)_W$ and $U(1)_Y$ interactions, so that the differences of the one--loop beta function coefficients $b_i-b_j$ remain intact.
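The last statement is easy to check explicitly. A complete $5+\bar{5}$ of $SU(5)$ decomposes under the SM group as $(3,1,-1/3)\oplus(1,2,1/2)$ plus conjugates; the standalone sketch below (an illustration, not code from our analysis) sums the standard Dynkin-index contributions of these chiral supermultiplets to the one--loop coefficients $b_3$, $b_2$ and the GUT-normalised $b_1$, confirming that all three receive the same shift, so that $b_i-b_j$ is unchanged.

```python
from fractions import Fraction as F

# One-loop beta-coefficient shifts from a vectorlike 5 + 5bar of SU(5),
# decomposed under SU(3)_C x SU(2)_W x U(1)_Y.  Per chiral superfield:
#   db_3 = T(R_3) * d_2,  db_2 = T(R_2) * d_3,  db_1 = (3/5) * d_3 * d_2 * Y^2,
# with T(fundamental) = 1/2 and T(singlet) = 0.

fields = [  # (SU(3) dim, SU(2) dim, hypercharge Y) for each chiral superfield
    (3, 1, F(-1, 3)), (3, 1, F(1, 3)),   # colour triplet pair D, Dbar
    (1, 2, F(1, 2)),  (1, 2, F(-1, 2)),  # weak doublet pair  L, Lbar
]

T = lambda dim: F(1, 2) if dim > 1 else F(0)  # Dynkin index of the rep

db3 = sum(T(d3) * d2 for d3, d2, y in fields)
db2 = sum(T(d2) * d3 for d3, d2, y in fields)
db1 = sum(F(3, 5) * d3 * d2 * y**2 for d3, d2, y in fields)

print(db3, db2, db1)  # equal shifts of all three coefficients
```

Each complete $5+\bar{5}$ shifts every $b_i$ by exactly one unit, which is why the one--loop prediction for $\alpha_3(M_Z)$ is identical in the two scenarios.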
At the same time the contributions of the two--loop corrections to $\alpha_i(M_X)$ ($\Theta_i$) and $\alpha_3(M_Z)$ ($\Theta_s$) are different in these cases. Our numerical analysis reveals that for $\tilde{T}_S\simeq 3\,\mbox{TeV}$ exact gauge coupling unification can be achieved in scenario B only if the value of $\alpha_3(M_Z)$ is around $0.112$. For higher scales $\tilde{T}_S$ the exact unification of $\alpha_i(t)$ requires even smaller values of $\alpha_3(M_Z)$, which are disfavoured by the recent fit to experimental data. Lower scales $\tilde{T}_S\lesssim 3\,\mbox{TeV}$ lead to larger values of $\alpha_i(M_X)$, calling into question the validity of our calculations. As before, extra $S$ and $\overline{S}$ superfields, which may survive to low energies, do not contribute to the diagonal beta functions of the SM gauge couplings and, therefore, do not change the RG flow of $\alpha_i(t)$ much. As a result the value of $\alpha_3(M_Z)$ at which exact gauge coupling unification takes place also changes very little after the inclusion of the bosonic and fermionic components of these supermultiplets. Thus it seems rather difficult to reconcile the unification of gauge couplings with present data in scenario B. Nevertheless the values of $\alpha_i(M_X)$ do not differ much from each other. From Fig.~\ref{essmfig2}b it follows that the relative discrepancy between the $\alpha_i(M_X)$ is about 10\%. This brings us back to the orbifold GUT framework which was discussed in the previous section. As already mentioned, orbifold GUTs do not imply exact gauge coupling unification near the scale $M_X$, which is associated with the size of the compact extra dimensions, due to the brane contributions to the gauge couplings (see Eq.~(\ref{37})). Since one can expect these brane corrections to become more sizable when the $\alpha_i(M_X)$ are large, the relative discrepancy of 10\% between the $\alpha_i(M_X)$ should probably not be considered a serious problem in the case of scenario B.
\section{Phenomenological implications} We now consider cosmological implications and collider signatures of the $E_6$ inspired SUSY models discussed above. The phenomenological implications of these models are determined by the structure of the particle spectrum, which can vary substantially depending on the choice of parameters. For example, the masses of the $Z'$ boson, exotic quarks, Inert Higgsinos and Inert singlinos are set by the VEVs of the Higgs fields. In this section we primarily focus on the simplest case when only $H_u$, $H_d$ and $S$ acquire non--zero VEVs, breaking the $SU(2)_W\times U(1)_Y\times U(1)_{N}$ symmetry to the $U(1)_{em}$ associated with electromagnetism. Assuming that $f_{\alpha\beta}$ and $\tilde{f}_{\alpha\beta}$ are sufficiently small, the masses of the exotic quarks, Inert Higgsino states and $Z'$ boson are given by \begin{equation} \mu_{D_i}=\dfrac{\kappa_i}{\sqrt{2}}\,s\,, \qquad\qquad \mu_{H_{\alpha}}=\dfrac{\lambda_{\alpha}}{\sqrt{2}}\,s\,, \qquad\qquad M_{Z'}\simeq g^{'}_1 \tilde{Q}_S s\,, \label{67} \end{equation} where $s$ is the VEV of the field $S$, i.e. $\langle S \rangle=s/\sqrt{2}$. Here, without loss of generality, we set $\kappa_{ij}=\kappa_i\delta_{ij}$ and $\lambda_{\alpha\beta}=\lambda_{\alpha}\delta_{\alpha\beta}$. Since $\mu_{D_i}$, $\mu_{H_{\alpha}}$ and $M_{Z'}$ are determined by $s$, which remains a free parameter, the $Z'$ boson mass and the masses of the exotic quarks and Inert Higgsinos cannot be predicted. Because recent measurements from the LHC experiments exclude $E_6$ inspired $Z'$ bosons with masses lower than $2-2.15\,\mbox{TeV}$ \cite{Chatrchyan:2012it}, the singlet field $S$ must acquire a large VEV ($s\gtrsim 5.5-6\,\mbox{TeV}$) to induce a sufficiently large $M_{Z'}$. The couplings $\kappa_i$ should also be large enough to ensure that the exotic fermions are sufficiently heavy to avoid conflict with direct particle searches at present and former accelerators.
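As a rough cross-check of the quoted bound on $s$, one can invert the last relation in Eq.~(\ref{67}). The sketch below assumes $g'_1\simeq g_1\simeq 0.46$ at the TeV scale and the GUT-normalised $U(1)_N$ charge $\tilde{Q}_S = 5/\sqrt{40}$; these numerical inputs are conventions commonly used in the E$_6$SSM literature, adopted here as assumptions of the illustration.

```python
import math

# Invert M_Z' ~ g1' * Q_S * s  (Eq. (67)) to estimate the minimal singlet
# VEV s compatible with the LHC limit M_Z' > 2.15 TeV.
# Assumed inputs (standard E6SSM conventions, not fixed by the text above):
g1p = 0.46                    # U(1)_N gauge coupling ~ g_1 at the TeV scale
QS  = 5.0 / math.sqrt(40.0)   # GUT-normalised U(1)_N charge of S

MZp_limit = 2150.0            # GeV, LHC lower bound on M_Z'
s_min = MZp_limit / (g1p * QS)

print(f"s > {s_min/1000:.1f} TeV")  # consistent with s > 5.5-6 TeV
```

The result lands at the lower edge of the quoted range, as expected from the linear dependence of $M_{Z'}$ on $s$.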
However, the exotic fermions (quarks and Inert Higgsinos) can be relatively light in the E$_6$SSM. This happens, for example, when the Yukawa couplings of the exotic particles have a hierarchical structure similar to the one observed in the ordinary quark and lepton sectors. In this case the $Z'$ boson mass can lie beyond $10\,\mbox{TeV}$, and the only manifestation of the considered models may be the presence of light exotic quark and/or Inert Higgsino states in the particle spectrum. Since the qualitative pattern of the particle spectrum and the associated collider signatures are so sensitive to the parameter choice, it is worth first discussing the robust predictions of the considered models. It is well known that SUSY models predict that the mass of the lightest Higgs particle is bounded from above. The E$_6$SSM is not an exception. In the simplest case when only $H_u$, $H_d$ and $S$ develop VEVs, so that $\langle H_d\rangle =\displaystyle\frac{v_1}{\sqrt{2}}$,\,$\langle H_u\rangle =\displaystyle\frac{v_2}{\sqrt{2}}$ and $\langle S\rangle =\displaystyle\frac{s}{\sqrt{2}}$, the Higgs sector involves ten degrees of freedom. However, four of them are massless Goldstone modes which are absorbed by the $W^{\pm}$, $Z$ and $Z'$ gauge bosons that gain non-zero masses. If CP--invariance is preserved, the other degrees of freedom form two charged, one CP--odd and three CP--even Higgs states. When the SUSY breaking scale is considerably larger than the EW scale, the mass matrix of the CP--even Higgs sector has a hierarchical structure and can be diagonalised using perturbation theory \cite{Nevzorov:2001um}-\cite{Nevzorov:2004ge}. In this case the mass of one CP--even Higgs particle is always very close to the $Z'$ boson mass $M_{Z'}$. The masses of another CP--even state, the CP--odd and the charged Higgs states are almost degenerate.
When $\lambda\gtrsim g'_1$, the qualitative pattern of the Higgs spectrum is rather similar to the one which arises in the PQ symmetric NMSSM \cite{Nevzorov:2004ge}-\cite{Miller:2005qua}. In the considered limit the heaviest CP--even, CP--odd and charged states are almost degenerate and lie beyond the $\mbox{TeV}$ range \cite{King:2005jy}. Finally, as in the MSSM and NMSSM, one of the CP--even Higgs bosons is always light irrespective of the SUSY breaking scale. However, in contrast to the MSSM, the lightest Higgs boson in the E$_6$SSM can be heavier than $110-120\,\mbox{GeV}$ even at tree level. In the two--loop approximation the lightest Higgs boson mass does not exceed $150-155\,\mbox{GeV}$ \cite{King:2005jy}. \subsection{Dark matter} The structure of the Yukawa interactions in the E$_6$SSM leads to another important prediction. Using the method proposed in \cite{Hesselbach:2007te} one can argue that there are theoretical upper bounds on the masses of the lightest and second lightest Inert neutralino states \cite{Hall:2010ix}. To simplify the analysis we assume that the fermion components of the supermultiplets $\overline{S}$, $\overline{H}_u$ and $\overline{H}_d$, which may survive below the scale $M_X$, combine with the corresponding superpositions of the fermion components of the superfields $S_i$, $H^u_i$ and $H^d_i$, resulting in a set of heavy vectorlike states. Furthermore, we also assume that these vectorlike states completely decouple, so that the particle spectrum below the TeV scale contains only two generations of Inert Higgsinos ($\tilde{H}^u_{\alpha}$ and $\tilde{H}^d_{\alpha}$) and two generations of Inert singlinos $\tilde{S}_{\alpha}$.
The Yukawa interactions of these superfields are described by the superpotential \begin{eqnarray} W_{IH}=\lambda_{\alpha\beta} S (H^d_{\alpha} H^u_{\beta})+ f_{\alpha\beta} S_{\alpha} (H_d H^u_{\beta})+ \tilde{f}_{\alpha\beta} S_{\alpha} (H^d_{\beta} H_u)\,, \label{essm2} \end{eqnarray} where $\alpha,\beta=1,2$\,. Thus below the TeV scale the Inert neutralino states are linear superpositions of the Inert singlino states ($\tilde{S}_1$, $\tilde{S}_2$) and the neutral components of the Inert Higgsinos ($\tilde{H}^{d0}_1$, $\tilde{H}^{d0}_2$, $\tilde{H}^{u0}_1$, $\tilde{H}^{u0}_2$). The charged components of the Inert Higgsinos $(\tilde{H}^{u+}_2,\,\tilde{H}^{u+}_1,\,\tilde{H}^{d-}_2,\,\tilde{H}^{d-}_1)$ form the Inert chargino sector. In order to avoid the LEP lower limit on the masses of the Inert charginos, the couplings $\lambda_{\alpha\beta}$ and $s$ must be chosen so that all Inert chargino states are heavier than $100\,\mbox{GeV}$. In addition, the requirement of the validity of perturbation theory up to the GUT scale constrains the allowed range of the Yukawa couplings $\lambda_{\alpha\beta}$, $f_{\alpha\beta}$ and $\tilde{f}_{\alpha\beta}$. The restrictions specified above set very stringent limits on the masses of the two lightest Inert neutralinos. The analysis performed in \cite{Hall:2010ix} indicates that the lightest and second lightest Inert neutralinos ($\tilde{H}^0_{1}$ and $\tilde{H}^0_{2}$) are typically lighter than $60-65\,\mbox{GeV}$. These neutralinos are predominantly Inert singlinos, so that they can have rather small couplings to the $Z$--boson. Therefore any possible signal from these neutralinos at LEP would be extremely suppressed. On the other hand, the couplings of $\tilde{H}^0_{1}$ and $\tilde{H}^0_{2}$ to the lightest CP--even Higgs boson $h_1$ are, in the leading approximation, proportional to their masses divided by $\sqrt{v_1^2+v_2^2}$ \cite{Hall:2010ix}.
As a consequence, the couplings of the two lightest Inert neutralinos to the lightest Higgs state are always sizeable if the corresponding states have appreciable masses. The discussion above indicates that the lightest and second lightest Inert neutralinos tend to be the lightest states which are odd under the $Z_{2}^{E}$ symmetry. It is worth recalling here that in the considered $E_6$ inspired SUSY models the $U(1)_{\psi}\times U(1)_{\chi}$ gauge symmetry is broken down to $U(1)_{N}\times Z_{2}^{M}$, where $Z_{2}^{M}=(-1)^{3(B-L)}$ is the so--called matter parity, a discrete subgroup of $U(1)_{\psi}$ and $U(1)_{\chi}$. Since the low--energy effective Lagrangian is invariant under both the $Z_{2}^{M}$ and $\tilde{Z}^{H}_2$ symmetries and $\tilde{Z}^{H}_2 = Z_{2}^{M}\times Z_{2}^{E}$ (see Table~\ref{tab1}), the $Z_{2}^{E}$ symmetry is also conserved. This means that the lightest exotic state, which is odd under the $Z_{2}^{E}$ symmetry, is absolutely stable and contributes to the relic density of dark matter. Because the lightest Inert neutralino is also the lightest $R$--parity odd state, either the lightest $R$--parity even exotic state or the lightest $R$--parity odd state with $Z_{2}^{E}=+1$ must be absolutely stable as well. When $f_{\alpha\beta}$ and $\tilde{f}_{\alpha\beta}$ are large enough ($f_{\alpha\beta}\sim \tilde{f}_{\alpha\beta}\sim 0.5$), the large mixing in the Inert Higgs sector may lead to a lightest CP--even (or CP--odd) Inert Higgs state with a mass of the order of the EW scale. The corresponding exotic state is an $R$--parity even neutral particle. If it is substantially lighter than the lightest ordinary neutralino state $\chi_1^0$ and the decay of $\chi_1^0$ into the lightest Inert neutralino and the lightest Inert Higgs scalar (pseudoscalar) is kinematically allowed, then this lightest Inert Higgs scalar (pseudoscalar) is absolutely stable and may give a considerable contribution to the relic dark matter density.
Although the possibility mentioned above looks very attractive, a substantial fine-tuning is normally required to make the lightest Inert Higgs scalar (pseudoscalar) lighter than $\chi_1^0$. Most commonly $\chi_1^0$ is considerably lighter than the lightest Inert Higgs scalar (pseudoscalar), so that the lightest CP--even (CP--odd) Inert Higgs state can decay into $\chi_1^0$ and the lightest Inert neutralino state. In other words, in the considered $E_6$ inspired SUSY models the lightest $R$--parity odd state with $Z_{2}^{E}=+1$, i.e. $\chi_1^0$, tends to be substantially lighter than the $R$--parity even exotic states. As a result the lightest neutralino state $\chi_1^0$ is a natural candidate for the cold component of dark matter in these models. In the neutralino sector of the E$_6$SSM there are two extra neutralinos besides the four MSSM ones. One of them is an extra gaugino $\tilde{B}'$ coming from the $Z'$ vector supermultiplet. The other is an additional singlino $\tilde{S}$, the fermion component of the SM singlet superfield $S$. The extra neutralinos form two eigenstates $(\tilde{B}'\pm\tilde{S})/\sqrt{2}$ with masses around $M_{Z'}$\, \cite{King:2005jy}. Since the LHC experiments set a very stringent lower bound on the mass of the $Z'$ boson, these extra neutralino eigenstates tend to be the heaviest ones and decouple. The mixing between these heavy neutralino states and the other gauginos and Higgsinos is very small. Therefore the lightest neutralino states in the E$_6$SSM, which determine the composition of $\chi_1^0$ and consequently its contribution to the relic dark matter density, become almost indistinguishable from the ones in the MSSM. This means that in the E$_6$SSM, as in the MSSM, the lightest neutralino $\chi_1^0$ can give a substantial contribution to the relic density in agreement with the measured abundance of cold dark matter $\Omega_{\mathrm{CDM}}h^2 = 0.1099 \pm 0.0062$ \cite{cdm}.
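The statement about the extra neutralino eigenstates can be illustrated with a toy $2\times 2$ mass matrix in the $(\tilde{B}',\tilde{S})$ basis, taking a dominant off-diagonal entry of order $M_{Z'}$ and a small diagonal gaugino mass. This form and the numbers below are assumptions for illustration only, not the full E$_6$SSM neutralino mass matrix.

```python
import numpy as np

# Toy (B', S~) mass matrix: off-diagonal entry of order M_Z', with a small
# diagonal gaugino mass M1' << M_Z'.  Hypothetical benchmark values in GeV.
MZp, M1p = 2500.0, 300.0
M = np.array([[M1p, MZp],
              [MZp, 0.0]])

masses, vecs = np.linalg.eigh(M)
print(np.abs(masses))      # both |eigenvalues| close to M_Z'
print(np.abs(vecs[:, 0]))  # mixing close to (1, 1)/sqrt(2)
```

For $M_1'\ll M_{Z'}$ the eigenstates approach $(\tilde{B}'\pm\tilde{S})/\sqrt{2}$ with masses split around $M_{Z'}$ by $\sim M_1'/2$, which is the decoupling pattern described above.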
In the E$_6$SSM the lightest Inert neutralino can account for all or some of the observed cold dark matter relic density if $\tilde{H}^0_1$ has a mass close to half the $Z$ boson mass. In this case the lightest Inert neutralino states annihilate mainly through an $s$--channel $Z$--boson, via their Inert Higgsino doublet components which couple to the $Z$--boson \cite{Hall:2010ix}, \cite{Hall:2009aj}. When $|m_{\tilde{H}^0_{1}}|\ll M_Z$ the lightest Inert neutralino states are almost pure Inert singlinos, and the couplings of $\tilde{H}^0_1$ to gauge bosons, Higgs states, quarks (squarks) and leptons (sleptons) are quite small, leading to a relatively small annihilation cross section for $\tilde{H}^0_1\tilde{H}^0_1\to \mbox{SM particles}$. Since the dark matter number density is inversely proportional to the annihilation cross section at the freeze-out temperature, the lightest Inert neutralino state with mass $|m_{\tilde{H}^0_{1,2}}|\ll M_Z$ gives rise to a relic density which is typically much larger than its measured value\footnote{When $f_{\alpha\beta},\, \tilde{f}_{\alpha\beta}\to 0$ the masses of $\tilde{H}^0_1$ and $\tilde{H}^0_2$ tend to zero and the Inert singlino states essentially decouple from the rest of the spectrum. In this limit the lightest non-decoupled Inert neutralino may be rather stable and can play the role of dark matter \cite{Hall:2011zq}. The presence of very light neutral fermions in the particle spectrum might have interesting implications for neutrino physics (see, for example, \cite{Frere:1996gb}).}. Because the scenarios with $|m_{\tilde{H}^0_{1,2}}|\sim M_Z/2$ imply that the couplings of $\tilde{H}^0_1$ and $\tilde{H}^0_2$ to the lightest Higgs boson are much larger than the $b$--quark Yukawa coupling, in these cases the lightest Higgs state decays more than 95\% of the time into $\tilde{H}^0_1$ and $\tilde{H}^0_2$, while the total branching ratio into SM particles varies from 2\% to 4\% \cite{Hall:2010ix}.
At the same time the LHC production cross section of the lightest Higgs state in the considered $E_6$ inspired SUSY models is almost the same as in the MSSM. Therefore the evidence for the Higgs boson recently presented by ATLAS \cite{:2012gk} and CMS \cite{:2012gu} indicates that the corresponding scenarios are basically ruled out. In this context one should point out another class of scenarios that might have interesting cosmological implications. Let us consider the limit $f_{\alpha\beta}\sim \tilde{f}_{\alpha\beta}\sim 10^{-5}$. Such small values of the Yukawa couplings $f_{\alpha\beta}$ and $\tilde{f}_{\alpha\beta}$ result in extremely light Inert neutralino states $\tilde{H}^0_1$ and $\tilde{H}^0_2$ which are basically Inert singlinos. These states have masses of about $1\,\mbox{eV}$. Since $\tilde{H}^0_1$ and $\tilde{H}^0_2$ are so light and absolutely stable, they form hot dark matter in the Universe\footnote{In the context of $E_6$ inspired SUSY models warm dark matter was recently discussed in \cite{King:2012wg}.}. These Inert neutralinos have negligible couplings to the $Z$ boson and would not have been observed at earlier collider experiments. These states also do not change the branching ratios of the $Z$ boson and Higgs decays. Moreover, if the $Z'$ boson is sufficiently heavy, the presence of such light Inert neutralinos does not affect Big Bang Nucleosynthesis \cite{Hall:2011zq}. When the masses of $\tilde{H}^0_1$ and $\tilde{H}^0_2$ are about $1\,\mbox{eV}$ these states give only a very minor contribution to the dark matter density, while the lightest neutralino may account for all or some of the observed dark matter density. In this case one can expect that the lifetime of the next-to-lightest exotic state (for example, the Inert chargino) is approximately given by \begin{equation} \tau_{NLES}\sim \frac{8\pi^2}{f^2 M_{NLES}}\,, \label{73} \end{equation} where $f_{\alpha\beta}\sim \tilde{f}_{\alpha\beta}\sim f$ and $M_{NLES}$ is the mass of the next-to-lightest exotic state.
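Converting Eq.~(\ref{73}) from natural units to seconds requires a factor of $\hbar$; the following sketch performs this conversion for the benchmark values $f\sim 10^{-5}$ and $M_{NLES}\sim 1\,\mbox{TeV}$ (the benchmark is illustrative, since $f$ and $M_{NLES}$ are free parameters).

```python
import math

HBAR = 6.582e-25          # GeV * s, conversion factor from natural units
f = 1.0e-5                # assumed common size of f_ab ~ f~_ab
M_NLES = 1000.0           # GeV, assumed next-to-lightest exotic state mass

# Eq. (73): tau ~ 8 pi^2 / (f^2 M), evaluated in GeV^-1 and converted to s
tau = HBAR * 8.0 * math.pi**2 / (f**2 * M_NLES)
print(f"tau ~ {tau:.1e} s")  # of order 10^-15 s for this benchmark
```

Even for such tiny couplings the decay is prompt on collider timescales, so the next-to-lightest exotic state leaves no displaced vertex.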
Assuming that $M_{NLES}\sim 1\,\mbox{TeV}$ we get $\tau_{NLES}\sim 10^{-15}\,\mbox{s}$. With increasing $f_{\alpha\beta}$ and $\tilde{f}_{\alpha\beta}$ the masses of the lightest Inert neutralino states grow and their contribution to the relic density of dark matter becomes larger. This may lead to some interesting cosmological implications. The detailed study of these implications is beyond the scope of this paper and will be presented elsewhere. \subsection{LHC signatures} We can now turn to the possible collider signatures of the $E_6$ inspired SUSY models with exact custodial $\tilde{Z}^{H}_2$ symmetry. The presence of the $Z'$ boson and exotic multiplets of matter in the particle spectrum is a very peculiar feature that may permit one to distinguish the considered $E_6$ inspired SUSY models from the MSSM or NMSSM. Although the masses of the $Z'$ boson and exotic states cannot be predicted, there are serious reasons to believe that the corresponding particles should be relatively light. Indeed, in the simplest scenario the VEVs of $H_u$, $H_d$ and $S$ are determined by the corresponding soft scalar masses. Since naturalness arguments favour SUSY models with $O(1\,\mbox{TeV})$ soft SUSY breaking terms, the VEV $s$ is expected to be of the order of $1-10\,\mbox{TeV}$. On the other hand, the requirement of the validity of perturbation theory up to the GUT scale sets stringent upper bounds on the low--energy values of the Yukawa couplings $\kappa_i$ and $\lambda_{\alpha}$, whereas gauge coupling unification implies that $g^{'}_1(q)\simeq g_1(q)$. As a consequence the $Z'$ boson and exotic states are expected to have masses below $10\,\mbox{TeV}$. Collider experiments and precision EW tests set stringent limits on the mass of the $Z'$ boson and the $Z-Z'$ mixing.
The direct searches at the Fermilab Tevatron $(p\overline{p}\to Z'\to l^{+}l^{-})$ exclude the $Z'$ associated with $U(1)_N$ with mass below $892\,\mbox{GeV}$ \cite{Accomando:2010fz}\footnote{A slightly weaker lower bound on the mass of the $Z_N'$ boson was obtained in \cite{Erler:2011ud}.}. Recently the ATLAS and CMS experiments ruled out $E_6$ inspired $Z'$ bosons with masses lower than $2-2.15\,\mbox{TeV}$ \cite{Chatrchyan:2012it}. The analysis performed in \cite{ZprimeE6} revealed that the $Z'$ boson in the $E_6$ inspired models can be discovered at the LHC if its mass is less than $4-4.5\,\mbox{TeV}$. The determination of its couplings should be possible if $M_{Z'}\lesssim 2-2.5\,\mbox{TeV}$ \cite{Dittmar:2003ir}. The precision EW tests constrain the $Z-Z'$ mixing angle to lie in the range $[-1.5,\,0.7]\times 10^{-3}$ \cite{Erler:2009jh}. Possible $Z'$ decay channels in $E_6$ inspired supersymmetric models were studied in \cite{Accomando:2010fz}, \cite{Gherghetta:1996yr}. The potential influence of gauge kinetic mixing on $Z'$ production at the 7 TeV LHC was considered in \cite{Rizzo:2012rf}. The production of TeV scale exotic states would also provide spectacular LHC signals. Several experiments at LEP, HERA, the Tevatron and the LHC have searched for colored objects that decay into either a pair of quarks or a quark and a lepton. However, most searches focus on exotic color states, i.e. leptoquarks or diquarks, that have integer spin, so they are either scalars or vectors. These colored objects can couple directly to either a pair of quarks or to a quark and a lepton. Moreover, it is usually assumed that leptoquarks and diquarks have appreciable couplings to the quarks and leptons of the first generation. The most stringent constraints on the masses of leptoquarks come from the nonobservation of these exotic color states at the ATLAS and CMS experiments. Recently the ATLAS collaboration ruled out first and second generation scalar leptoquarks (i.e.
leptoquarks that couple to the first and second generation fermions, respectively) with masses below $600-700\,\mbox{GeV}$ \cite{Aad:2011ch}. The CMS collaboration excluded first and second generation scalar leptoquarks which are lighter than $640-840\,\mbox{GeV}$ \cite{:2012dn}. The experimental lower bounds on the masses of dijet resonances (in particular, diquarks) tend to be considerably higher (see, for example, \cite{90}). However, the LHC lower bounds mentioned above are not directly applicable to the exotic quarks of the $E_6$ inspired SUSY models considered here. Since the $Z_{2}^{E}$ symmetry is conserved, every interaction vertex contains an even number of exotic states. As a consequence each exotic particle must eventually decay into a final state that contains at least one lightest Inert neutralino (or an odd number of the lightest Inert neutralinos). Since the stable lightest Inert neutralinos cannot be detected directly, each exotic state should result in missing energy and transverse momentum in the final state. The $Z_{2}^{E}$ symmetry conservation also implies that in collider experiments exotic particles can only be created in pairs. In this context, let us first consider the production and sequential decays of the lightest exotic quarks at the LHC. Because the $D$ and $\overline{D}$ states are odd under the $Z_{2}^{E}$ symmetry, they can only be pair produced via the strong interactions. In scenario A the lifetime and decay modes of the lightest exotic quarks are determined by the operators $g^D_{ij} (Q_i L_4) \overline{D}_j$ and $h^E_{i\alpha} e^c_{i} (H^d_{\alpha} L_4)$ in the superpotential (\ref{13}). These operators ensure that the lightest exotic quarks decay into $$ D\to u_i (d_i) + \ell (\nu) + E^{\rm miss}_{T} + X\,, $$ where $\ell$ is either an electron or a muon. Here $X$ may contain extra charged leptons that can originate from the decays of intermediate states (such as an Inert chargino or Inert neutralino).
Since the lightest exotic quarks are pair produced, these states may lead to a substantial enhancement of the cross section $pp\to jj\ell^{+}\ell^{-}+E^{\rm miss}_{T}+X$ if they are relatively light. In scenario B the decays of the lightest exotic quarks are induced by the operators $g^{q}_{ij}\overline{D}_i d^c_4 u^c_j$ and $h^D_{ij} d^c_4 (H^d_{i} Q_j)$. As a consequence the lightest diquarks decay into $$ D\to u^c_i + d^c_j + E^{\rm miss}_{T}+X\,, $$ where $X$ again can contain charged leptons that may come from the decays of intermediate states. In this case the presence of light $D$-fermions in the particle spectrum could result in an appreciable enhancement of the cross section $pp\to jjjj+E^{\rm miss}_{T}+X$. In general, exotic squarks are expected to be substantially heavier than the exotic quarks because their masses are determined by the soft SUSY breaking terms. Nevertheless, the exotic squark associated with the heaviest exotic quark may be relatively light. Indeed, as in the case of the superpartners of the top quark in the MSSM, the large mass of the heaviest exotic quark in the E$_6$SSM gives rise to large mixing in the corresponding exotic squark sector, which may result in a large mass splitting between the appropriate mass eigenstates. As a consequence the lightest exotic squark can have a mass in the TeV range. Moreover, in principle, the lightest exotic squark can be even lighter than the lightest exotic quark. If this is the case, then the decays of the lightest exotic squark are induced by the same operators which give rise to the decays of the lightest exotic quarks when all exotic squarks are heavy. Therefore the decay patterns of the lightest exotic color states are rather similar in both cases.
In other words, when an exotic squark is the lightest exotic color state in the particle spectrum, it decays via either $$ \tilde{D}\to u_i (d_i) + \ell (\nu) + E^{\rm miss}_{T} + X\,, $$ if the exotic squark is a scalar leptoquark, or $$ \tilde{D}\to u^c_i + d^c_j + E^{\rm miss}_{T} + X\,, $$ if it is a scalar diquark. Due to the $Z_{2}^{E}$ symmetry conservation, $E^{\rm miss}_{T}$ should always contain a contribution associated with the lightest exotic particle. However, since the lightest exotic squark is an $R$--parity even state whereas the lightest Inert neutralino is an $R$--parity odd particle, the final state in the decay of $\tilde{D}$ should also involve the lightest neutralino to ensure that $R$--parity is conserved. Again, $X$ may contain charged leptons that can stem from the decays of intermediate states. Because the $Z_{2}^{E}$ symmetry conservation implies that the lightest exotic squarks can only be pair produced, in the considered case the presence of a light $\tilde{D}$ is expected to lead to an appreciable enhancement of the cross section of either $pp\to jj\ell^{+}\ell^{-}+E^{\rm miss}_{T}+X$ if $\tilde{D}$ is a scalar leptoquark or $pp\to jjjj+E^{\rm miss}_{T}+X$ if $\tilde{D}$ is a scalar diquark. Thus one can see that in both scenarios, when the lightest exotic color state is either a $D$-fermion or a $\tilde{D}$-scalar, the collider signatures associated with these new states are rather similar. Moreover, since the decays of the lightest exotic color particles lead to missing energy and transverse momentum in the final state, it might be rather problematic to distinguish the corresponding signatures from the ones associated with the MSSM. For example, the pair production of gluinos at the LHC should also result in an enhancement of the cross section of $pp\to jjjj+E^{\rm miss}_{T}+X$.
In this context the presence of additional charged leptons in $X$ can play an important role, leading to characteristic signatures such as $\ell^{+}\ell^{-}$ pairs together with large missing energy in the final state. The situation also becomes somewhat more promising if one assumes that the Yukawa couplings of the exotic particles have a hierarchical structure similar to the one observed in the ordinary quark and lepton sectors. In this case all states which are odd under the $Z^{E}_2$ symmetry couple mainly to the third generation fermions and sfermions\footnote{This possibility was discussed at length in \cite{King:2005jy}--\cite{Accomando:2006ga}, \cite{8}.}. As a consequence the presence of relatively light exotic color states should give rise to an enhancement of the cross section of either $pp\to t\bar{t}\ell^{+}\ell^{-}+E^{\rm miss}_{T}+X$ or $pp\to t\bar{t}b\bar{b}+E^{\rm miss}_{T}+X$. Here it is worthwhile to point out that the collider signatures associated with light scalar leptoquarks or diquarks in the considered $E_6$ inspired SUSY models are very different from the commonly established ones which have been thoroughly studied. For instance, it is usually expected that scalar diquarks may be produced singly at the LHC and decay into a quark--quark pair without missing energy in the final state. Scalar leptoquarks can only be pair produced at the LHC, but it is commonly assumed that these states decay into a quark--lepton pair, again without missing energy. On the other hand, in the $E_6$ inspired SUSY models considered here the $Z_{2}^{E}$ symmetry conservation necessarily leads to missing energy and transverse momentum in the corresponding final state. The presence of relatively light exotic quarks and squarks can also substantially modify the collider signatures associated with the production and decay of gluinos\footnote{Novel gluino decays in the $E_6$ inspired models were recently considered in \cite{Belyaev:2012si}.}.
Indeed, if all squarks except the lightest exotic squark are rather heavy and the decay of the gluino into an exotic quark and squark is kinematically allowed, then gluino pair production at the LHC results in $D\bar{D}\tilde{D}\tilde{\bar{D}}$ in the corresponding final state. The sequential decays of the exotic quarks and squarks give rise to an enhancement of either $pp\to 4\,\ell+4\, j + E^{\rm miss}_{T}+X$ if the exotic color states are leptoquarks or $pp\to 8\,j + E^{\rm miss}_{T}+X$ if the exotic color states are diquarks, modulo of course the effects of QCD radiation and jet merging. The modification of the gluino collider signatures discussed above is possible only if there are non--zero flavour-off-diagonal couplings $\theta^g_{ij}$ of the gluino to $D_i$ and $\tilde{D}_j$ ($i\ne j$). This is a necessary condition because the lightest exotic squark is normally associated with the heaviest exotic quark. Rough estimates indicate that the corresponding modification of the gluino collider signatures can occur even when the gluino flavour-off-diagonal couplings $\theta^g_{ij}$ are relatively small, i.e. $\theta^g_{ij}\gtrsim 0.01$. If the gluino is heavier than the lightest exotic color state but substantially lighter than the second lightest exotic color state, then the branching ratios of the nonstandard gluino decays mentioned above are suppressed. In this case the second lightest exotic color state decays mostly into the lightest exotic color state and a gluino, if the corresponding decay channel is kinematically allowed. This happens when the lightest exotic color state is an exotic $D$-fermion while the second lightest exotic color state is a $\tilde{D}$-scalar, or vice versa. Other possible manifestations of the $E_6$ inspired SUSY models considered here are related to the presence of the vectorlike states $d^c_4$ and $\overline{d^c}_4$ as well as $L_4$ and $\overline{L}_4$.
In the case of scenario B the fermionic components of the supermultiplets $d^c_4$ and $\overline{d^c}_4$ can have masses below the TeV scale. One of the superpartners of this vectorlike quark state may also be relatively light due to the mixing in the corresponding squark sector. If these quark and/or squark states are light they can be pair produced at the LHC via strong interactions. Since the superfields $d^c_4$ and $\overline{d^c}_4$ are odd under the $Z^{E}_2$ symmetry, the decays of the corresponding quarks ($d_4$) and squarks ($\tilde{d}_4$) must always lead to missing energy in the final state. In the limit when the lightest exotic color states include $d_4$ and/or $\tilde{d}_4$ whereas all other exotic states and sparticles are much heavier, the operators $h^D_{ij} d^c_4 (H^d_{i} Q_j)$ give rise to the following decay modes of $d_4$ and $\tilde{d}_4$: $$ d_4 \to q_i + E^{\rm miss}_{T} + X\,,\qquad\qquad\qquad \tilde{d}_4 \to d_i + E^{\rm miss}_{T} + X\,, $$ where $q_i$ can be either an up-type or a down-type quark, while $X$ may contain charged leptons which can appear as a result of the decays of intermediate states. As in the case of the exotic squark, the final state in the decay of $d_4$ should contain the lightest neutralino and the lightest Inert neutralino to ensure the conservation of $R$--parity and the $Z^{E}_2$ symmetry. Again, due to the $Z_{2}^{E}$ symmetry conservation, $d_4$ and $\tilde{d}_4$ can only be pair produced at the LHC, resulting in an enhancement of $pp\to jj+E^{\rm miss}_{T}+X$. If $d_4$ and $\tilde{d}_4$ couple predominantly to the third generation fermions and sfermions, then the pair production of these quarks/squarks should lead to the presence of two heavy quarks in the final state. As before, these collider signatures do not make it easy to distinguish the considered $E_6$ inspired SUSY models from other supersymmetric models. For example, squark pair production at the LHC can also lead to two jets and missing energy in the final state.
Again, the presence of additional charged leptons in $X$ can lead to signatures that may help to distinguish the considered $E_6$ inspired SUSY models from the simplest SUSY extensions of the SM. In the case of scenario A the fermionic components of the supermultiplets $L_4$ and $\overline{L}_4$, as well as one of the superpartners of this vectorlike state, may have masses below the TeV scale. If all other exotic states and sparticles are rather heavy, the corresponding bosonic ($\tilde{L}_4$) and fermionic ($L_4$) states can be produced at the LHC via weak interactions only. Because of this their production cross section is relatively small. In the considered limit the decays of $L_4$ and/or $\tilde{L}_4$ are induced by the operators $h^E_{i\alpha} e^c_{i} (H^d_{\alpha} L_4)$. As a consequence the decays of $L_4$ and/or $\tilde{L}_4$ always lead to either a $\tau$--lepton or an electron/muon, as well as missing energy, in the final state. In the case of $\tilde{L}_4$ decays the missing energy in the final state can be associated with only one lightest Inert neutralino, whereas the final state of the $L_4$ decays must contain at least one lightest Inert neutralino and one lightest ordinary neutralino to ensure the conservation of $R$--parity and the $Z^{E}_2$ symmetry. More efficiently, $L_4$ and/or $\tilde{L}_4$ can be produced through the decays of the lightest exotic color states (i.e. $D$ and/or $\tilde{D}$) if these states are relatively light and the corresponding decay channels are kinematically allowed. The Inert Higgs bosons and/or Inert neutralino and chargino states, which are predominantly Inert Higgsinos, can also be light or heavy depending on the corresponding free parameters. Indeed, as follows from Eq.~(\ref{67}), the lightest Inert Higgsinos may be light if the corresponding Yukawa coupling $\lambda_{\alpha}$ is rather small.
On the other hand, if at least one coupling $\lambda_{\alpha}$ is large it can induce a large mixing in the Inert Higgs sector that may lead to relatively light Inert Higgs boson states. Since Inert Higgs and Higgsino states do not couple to quarks directly, at the LHC the corresponding states can be produced in pairs via off--shell $W$ and $Z$--bosons. Therefore their production cross section remains relatively small even when these states have masses below the TeV scale. The lightest Inert Higgs and Higgsino states are expected to decay via virtual lightest Higgs, $Z$ and $W$ exchange. The conservation of $R$--parity and the $Z^{E}_2$ symmetry implies that the final state in the decay of an Inert Higgsino involves at least one lightest Inert neutralino, while the final state in the decay of an Inert Higgs state should contain at least one lightest ordinary neutralino and one lightest Inert neutralino. As mentioned at the beginning of this subsection, in the simplest scenario, when only $H_u$, $H_d$ and $S$ acquire VEVs at low energies, there are serious reasons to believe that the $Z'$ boson and all exotic states from the three complete $27_i$ representations of $E_6$ have masses below $10\,\mbox{TeV}$. However the situation may change dramatically when the $\tilde{Z}^H_2$ even superfield $\overline{S}$ survives to low energies. In order to demonstrate this, let us consider a simple toy model, where the $U(1)_N$ gauge symmetry is broken by the VEVs of a pair of SM singlet superfields $S$ and $\overline{S}$. Assuming that the superpotential of the considered model involves the bilinear term $\mu_S\, S \overline{S}$, the part of the tree--level scalar potential which depends on the scalar components of the superfields $S$ and $\overline{S}$ only can be written as \begin{equation} V_S = (m^2_S+\mu_S^2) |S|^2 + (m^2_{\overline{S}}+\mu_S^2) |\overline{S}|^2 + (B_S \mu_S S \overline{S} + h.c.)
+\displaystyle\frac{Q_S^2 g^{'2}_1}{2}\left(|S|^2-|\overline{S}|^2\right)^2\,, \label{74} \end{equation} where $m_S^2$, $m^2_{\overline{S}}$ and $B_S$ are soft SUSY breaking parameters and $Q_S$ is the $U(1)_N$ charge of the SM singlet superfield $S$. The last term in Eq.~(\ref{74}), which is the $U(1)_N$ D--term contribution to the scalar potential, forces the minimum of the corresponding potential to be along the $D$--flat direction $\langle S \rangle = \langle \overline{S} \rangle$. Indeed, in the limit $\langle S \rangle = \langle \overline{S} \rangle$ the quartic terms in the potential (\ref{74}) vanish. In the considered case the scalar potential (\ref{74}) remains positive definite only if $(m^2_S + m^2_{\overline{S}}+ 2 \mu_S^2 - 2|B_S \mu_S|)>0$. Otherwise the physical vacuum becomes unstable, i.e. $\langle S \rangle = \langle \overline{S} \rangle \to \infty$. The scalar potential can be easily stabilized if the bilinear term $\mu_S\, S \overline{S}$ in the superpotential is replaced by \begin{equation} W_S = \lambda_0 \tilde{\phi} S \overline{S} + f(\tilde{\phi})\,, \label{75} \end{equation} where $\tilde{\phi}$ is a $\tilde{Z}^H_2$ even superfield that does not participate in the $SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$ gauge interactions. When $\lambda_0$ is small (i.e. $\lambda_0\ll 0.1$) the $U(1)_N$ D--term contribution to the scalar potential still forces the minimum of the scalar potential to be along the nearly $D$--flat direction if $m^2_S + m^2_{\overline{S}}<0$. This condition can be satisfied because sufficiently large values of $\kappa_i$ affect the evolution of $m_S^2$ rather strongly, resulting in negative values of $m_S^2$ at low energies \cite{8}. If $m^2_S + m^2_{\overline{S}}<0$ and $\lambda_0$ is small, then the scalar components of the superfields $\tilde{\phi}$, $S$ and $\overline{S}$ acquire very large VEVs, i.e.
\begin{equation} \langle \tilde{\phi} \rangle \sim \langle S \rangle \simeq \langle \overline{S} \rangle \sim M_{SUSY}/\lambda_0\,, \label{76} \end{equation} where $M_{SUSY}$ is the supersymmetry breaking scale. If $\lambda_0\simeq 10^{-3}-10^{-4}$ the VEVs of the SM singlet superfields $S$ and $\overline{S}$ are of the order of $10^{3}-10^4\,\mbox{TeV}$ even when $M_{SUSY}\sim 1\,\mbox{TeV}$. Such a large VEV of the superfield $S$ may give rise to an extremely heavy spectrum of exotic particles and $Z'$. This can lead to an MSSM type of particle spectrum at the $\mbox{TeV}$ scale. Nevertheless, even in this case the broken $U(1)_N$ symmetry leaves its imprint on the MSSM sfermion mass spectrum. Since $m^2_S\ne m^2_{\overline{S}}$ the VEVs of the SM singlet superfields $S$ and $\overline{S}$ deviate from the $D$--flat direction: \begin{equation} Q_S^2 g^{'2}_1 \left(\langle S \rangle^2 - \langle \overline{S} \rangle^2\right)\simeq m^2_{\overline{S}}-m^2_S\,. \label{77} \end{equation} As a consequence all sfermions receive an additional contribution to their masses that comes from the $U(1)_N$ $D$--term quartic interactions in the scalar potential \cite{Kolda:1995iw}. This contribution $\Delta_i$ is proportional to the $U(1)_N$ charge $Q_i$ of the corresponding sfermion, i.e. \begin{equation} \Delta_{i}=\dfrac{g^{'2}_1}{2}\biggl(Q_1 v_1^2 + Q_2 v_2^2 + 2 Q_S \left(\langle S \rangle^2 - \langle \overline{S} \rangle^2\right)\biggr) Q_{i}=M_0^2\, \sqrt{40}\, Q_{i} \,, \label{78} \end{equation} where $Q_1$ and $Q_2$ are the $U(1)_N$ charges of $H_d$ and $H_u$.
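The instability of the toy potential (\ref{74}) when the positivity condition $m^2_S + m^2_{\overline{S}}+ 2 \mu_S^2 - 2|B_S \mu_S|>0$ is violated can be illustrated with a short numerical sketch. This is not part of the paper's analysis; the parameter values below are arbitrary (non-physical) and chosen only so that the condition fails. Along the $D$-flat direction $S=\overline{S}$ the quartic term vanishes and the potential reduces to an unbounded-below quadratic, while away from the flat direction the quartic $D$-term keeps it positive:

```python
# Illustrative (non-physical) parameter values, chosen so that
# m_S^2 + m_Sbar^2 + 2*mu^2 - 2*|B*mu| = -6 < 0, i.e. positivity fails.
mS2, mSb2, mu, B, Q2g2 = -1.0, -1.0, 1.0, -3.0, 0.5

def V(S, Sb):
    """Tree-level potential (74) for real field values (h.c. doubles the B-term)."""
    return ((mS2 + mu**2) * S**2 + (mSb2 + mu**2) * Sb**2
            + 2.0 * B * mu * S * Sb
            + 0.5 * Q2g2 * (S**2 - Sb**2) ** 2)

# Along the D-flat direction S = Sbar the quartic term vanishes and
# V = (mS2 + mSb2 + 2*mu^2 + 2*B*mu) * S^2 = -6 S^2 -> -infinity.
flat = [V(t, t) for t in (0.0, 1.0, 10.0, 100.0)]
print(flat)            # monotonically decreasing: the vacuum runs away
print(V(1.0, 0.0))     # positive off the flat direction: the quartic stabilizes it
```

The runaway along the flat direction is exactly the instability $\langle S \rangle = \langle \overline{S} \rangle \to \infty$ described above; replacing the bilinear term as in Eq.~(\ref{75}) removes it.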
Thus for the superpartners of the first and second generation quarks and leptons one finds $$ \begin{array}{rcl} m_{\tilde{d}_{L\,i}}^2&\simeq &m_{Q_i}^2+\left(-\dfrac{1}{2}+\dfrac{1}{3}\sin^2\theta_W\right)M_Z^2\cos 2\beta + M_0^2\,,\\ m_{\tilde{u}_{L\,i}}^2&\simeq &m_{Q_i}^2+\left(\dfrac{1}{2}-\dfrac{2}{3}\sin^2\theta_W\right)M_Z^2\cos 2\beta + M_0^2\,,\\ m_{\tilde{u}_{R\,i}}^2&\simeq &m_{u^c_i}^2+\dfrac{2}{3} M_Z^2 \sin^2\theta_W \cos 2\beta + M_0^2\,,\\ m_{\tilde{d}_{R\,i}}^2&\simeq &m_{d^c_i}^2-\dfrac{1}{3} M_Z^2 \sin^2\theta_W \cos 2\beta + 2 M_0^2\,,\\[2mm] m_{\tilde{e}_{L\,i}}^2&\simeq &m_{L_i}^2+\left(-\dfrac{1}{2}+\sin^2\theta_W\right)M_Z^2\cos 2\beta + 2 M_0^2\,,\\ m_{\tilde{\nu}_{i}}^2&\simeq &m_{L_i}^2+\dfrac{1}{2} M_Z^2\cos 2\beta + 2 M_0^2\,,\\ m_{\tilde{e}_{R\,i}}^2&\simeq &m_{e^c_i}^2- M_Z^2 \sin^2\theta_W \cos 2\beta + M_0^2\,. \end{array} $$ \section{Conclusions} In this paper we have considered $E_6$ inspired SUSY models in which a single discrete $\tilde{Z}^{H}_2$ symmetry forbids tree--level flavor--changing transitions and baryon number violating operators. We assumed that the breakdown of the $E_6$ symmetry or its subgroup leads to rank--6 SUSY models below the GUT scale $M_X$. These models are based on the Standard Model (SM) gauge group together with extra $U(1)_{\psi}$ and $U(1)_{\chi}$ gauge symmetries. We also allow three copies of the $27_i$ representations of $E_6$ to survive below the scale $M_X$ so that anomalies cancel generation by generation. If extra exotic states from the $27_i$--plets survive to low energies they give rise to tree--level non--diagonal flavor transitions and rapid proton decay. In order to suppress the baryon number violating operators one can impose the $\tilde{Z}^{H}_2$ discrete symmetry. We assumed that all matter superfields that fill in complete $27_i$ representations of $E_6$ are odd under this discrete symmetry.
Thus the $\tilde{Z}^{H}_2$ symmetry is defined analogously to the matter parity $Z_{2}^{M}$ in the simplest $SU(5)$ SUSY GUTs, which leads to the low--energy spectrum of the MSSM. In addition to the three complete fundamental representations of $E_6$ we further assumed the presence of $M_{l}$ and $\overline{M}_l$ supermultiplets from the incomplete $27'_l$ and $\overline{27'}_l$ representations just below the GUT scale. Because the multiplets $M_{l}$ and $\overline{M}_l$ have opposite $U(1)_{Y}$, $U(1)_{\psi}$ and $U(1)_{\chi}$ charges, their contributions to the anomalies cancel identically. As in the MSSM we allowed the set of multiplets $M_{l}$ to be used for the breakdown of gauge symmetry and therefore assumed that all multiplets $M_{l}$ are even under the $\tilde{Z}^{H}_2$ symmetry. In order to ensure that the $SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}$ symmetry is broken down to the $U(1)_{em}$ associated with electromagnetism, the set of multiplets $M_{l}$ should involve $H_u$, $H_d$, $S$ and $N^c_H$. We argued that the $U(1)_{\psi}\times U(1)_{\chi}$ gauge symmetry can be broken by the VEVs of $N^c_H$ and $\overline{N}_H^c$ down to $U(1)_{N}\times Z_{2}^{M}$ because matter parity is a discrete subgroup of $U(1)_{\psi}$ and $U(1)_{\chi}$. Such a breakdown of the $U(1)_{\psi}$ and $U(1)_{\chi}$ gauge symmetries guarantees that the exotic states which originate from the $27_i$ representations of $E_6$, as well as the ordinary quark and lepton states, survive to low energies. On the other hand the large VEVs of $N^c_H$ and $\overline{N}_H^c$ can induce large Majorana masses for the right-handed neutrinos, allowing them to be used for the see--saw mechanism. For this reason we assumed that the $U(1)_{\psi}\times U(1)_{\chi}$ symmetry is broken down to $U(1)_{N}\times Z_{2}^{M}$ just below the GUT scale. The $\tilde{Z}^{H}_2$ symmetry allows the Yukawa interactions in the superpotential that originate from $27'_l \times 27'_m \times 27'_n$ and $27'_l \times 27_i \times 27_k$.
Since the set of multiplets $M_{l}$ contains only one pair of doublets $H_d$ and $H_u$, the $\tilde{Z}^{H}_2$ symmetry defined above forbids not only the most dangerous baryon and lepton number violating operators but also unwanted FCNC processes at the tree level. Nevertheless, if the set of $\tilde{Z}^{H}_2$ even supermultiplets $M_{l}$ involves only $H_u$, $H_d$, $S$ and $N^c_H$, then the lightest exotic quarks are extremely long--lived particles, because the $\tilde{Z}^{H}_2$ symmetry forbids all Yukawa interactions in the superpotential that would allow the lightest exotic quarks to decay. Since models with stable charged exotic particles are ruled out by various terrestrial experiments, the set of supermultiplets $M_{l}$ in phenomenologically viable $E_6$ inspired SUSY models should be supplemented by some components of a $27$-plet that carry $SU(3)_C$ color or lepton number. In this work we required that the extra matter beyond the MSSM fill in complete $SU(5)$ representations, because in this case the gauge coupling unification remains almost exact in the one--loop approximation. As a consequence we restricted our consideration to two scenarios that result in different collider signatures associated with the exotic quarks. In scenario A the set of $\tilde{Z}^{H}_2$ even supermultiplets $M_{l}$ involves the lepton superfield $L_4$. To ensure the unification of gauge couplings we assumed that $\overline{H}_u$ and $\overline{H}_d$ are odd under the $\tilde{Z}^{H}_2$ symmetry whereas the supermultiplet $\overline{L}_4$ is even. Then $\overline{H}_u$ and $\overline{H}_d$ from the $\overline{27'}_l$ get combined with the superposition of the corresponding components from the $27_i$, so that the resulting vectorlike states gain masses of order $M_X$. In contrast, $L_4$ and $\overline{L}_4$ should form vectorlike states at low energies, facilitating the decays of the exotic quarks. The superfield $\overline{S}$ can be either odd or even under the $\tilde{Z}^{H}_2$ symmetry.
The bosonic and fermionic components of $\overline{S}$ may or may not survive to low energies. In scenario A the exotic quarks are leptoquarks. Another scenario that permits the lightest exotic quarks to decay within a reasonable time implies that the set of multiplets $M_{l}$, together with $H_u$, $H_d$, $S$ and $N^c_H$, contains an extra $d^c_4$ supermultiplet. Because in this scenario B the $\tilde{Z}^{H}_2$ even supermultiplets $d^c_4$ and $\overline{d^c}_4$ give rise to the decays of the lightest exotic color states, they are expected to form vectorlike states with TeV scale masses. Then, to ensure that the extra matter beyond the MSSM fills in complete $SU(5)$ representations, $\overline{H}_u$ and $\overline{H}_d$ should survive to the TeV scale as well. Again we assumed that $\overline{H}_u$ and $\overline{H}_d$ are odd under the $\tilde{Z}^{H}_2$ symmetry, so that they can get combined with the superposition of the corresponding components from the $27_i$, forming vectorlike states at low energies. As in scenario A, the superfield $\overline{S}$ can be either even or odd under the $\tilde{Z}^{H}_2$ symmetry and may or may not survive to the TeV scale. In scenario B the exotic quarks manifest themselves in the Yukawa interactions as superfields with baryon number $\left(\pm\dfrac{2}{3}\right)$. The gauge group and field content of the $E_6$ inspired SUSY model discussed here can originate from 5D and 6D orbifold GUT models in which the splitting of GUT multiplets can be naturally achieved. In particular, we studied an $SU(5)\times U(1)_{\chi}\times U(1)_{\psi}$ SUSY GUT model in 5D compactified on the orbifold $S^1/(Z_2\times Z'_2)$. At low energies this model may lead to scenarios A and B. We also considered an $E_6$ gauge theory in 6D compactified on the orbifold $T^2/(Z_2 \times Z^{I}_2 \times Z^{II}_2)$ that can lead to scenario A at low energies.
In these orbifold GUT models all anomalies cancel and the GUT relations between Yukawa couplings get spoiled. An adequate suppression of the operators that give rise to proton decay can also be achieved if the GUT scale $M_X\sim 1/R$ is larger than $1.5-2\cdot 10^{16}\,\mbox{GeV}$. We examined the RG flow of the gauge couplings from $M_Z$ to $M_X$ in scenarios A and B using both analytical and numerical techniques. We derived the corresponding two--loop RG equations and studied the running of the gauge couplings with and without extra $S$ and $\overline{S}$ superfields at the TeV scale. In scenario A gauge coupling unification can be achieved for any phenomenologically reasonable value of $\alpha_3(M_Z)$ consistent with the central measured low energy value. This was already established in the case of the SUSY model with extra $U(1)_N$ gauge symmetry and a low energy matter content that involves three 27-plets of $E_6$ as well as $L_4$ and $\overline{L}_4$ \cite{King:2007uj}. Our analysis here revealed that the evolution of the SM gauge couplings does not change much when the low energy particle spectrum is supplemented by the $S$ and $\overline{S}$ chiral superfields. Thus it is not so surprising that the unification of the SM gauge couplings can be so easily achieved even in this case. In scenario B large two--loop corrections spoil the unification of gauge couplings. Indeed, in this case exact gauge coupling unification can be achieved only if $\alpha_3(M_Z)\lesssim 0.112$. As before, the inclusion of extra $S$ and $\overline{S}$ superfields does not change the RG flow of $\alpha_i(t)$ much and therefore does not improve gauge coupling unification. However, the relative discrepancy between the $\alpha_i(M_X)$ is only about 10\%. At the same time, the orbifold GUT framework does not imply exact gauge coupling unification near the scale $M_X\sim 1/R$ because of the brane contributions to the gauge couplings.
Therefore a relative discrepancy of 10\% between the $\alpha_i(M_X)$ should probably not be considered a serious problem. Finally, we also discussed the cosmological implications and collider signatures of the $E_6$ inspired SUSY models discussed above. As mentioned, the low--energy effective Lagrangian of these models is invariant under both the $Z_{2}^{M}$ and $\tilde{Z}^{H}_2$ symmetries. Since $\tilde{Z}^{H}_2 = Z_{2}^{M}\times Z_{2}^{E}$, the $Z_{2}^{E}$ symmetry associated with the exotic states is also conserved. As a result the lightest exotic state, which is odd under the $Z_{2}^{E}$ symmetry, must be stable. In scenarios A and B the lightest and second lightest inert neutralinos tend to be the lightest exotic states in the particle spectrum. On the other hand the $Z_{2}^{M}$ symmetry conservation implies that $R$--parity is conserved. Because the lightest inert neutralino $\tilde{H}^0_1$ is also the lightest $R$--parity odd state, either the lightest $R$--parity even exotic state or the lightest $R$--parity odd state with $Z_{2}^{E}=+1$ must be absolutely stable. Most commonly the second stable state is the lightest ordinary neutralino $\chi_1^0$ ($Z_{2}^{E}=+1$). Both stable states are natural dark matter candidates in the considered $E_6$ inspired SUSY models. When $|m_{\tilde{H}^0_{1}}|\ll M_Z$ the lightest inert neutralino is predominantly inert singlino and its couplings to the gauge bosons, Higgs states, quarks and leptons are very small, resulting in an annihilation cross section for $\tilde{H}^0_1\tilde{H}^0_1\to \mbox{SM particles}$ that is too small. As a consequence the cold dark matter density comes out much larger than its measured value. In principle, $\tilde{H}^0_1$ could account for all or some of the observed cold dark matter density if it had a mass close to half the $Z$ mass. In this case the lightest inert neutralino states annihilate mainly through an $s$--channel $Z$--boson.
However, in these cases the usual SM-like Higgs boson decays more than 95\% of the time into either $\tilde{H}^0_1$ or $\tilde{H}^0_2$, while its total branching ratio into SM particles is suppressed. Because of this the corresponding scenarios are now essentially ruled out. The simplest phenomenologically viable scenarios imply that the lightest and second lightest inert neutralinos are extremely light. For example, these states can have masses of about $1\,\mbox{eV}$. The lightest and second lightest inert neutralinos with masses of about $1\,\mbox{eV}$ form hot dark matter in the Universe but give only a very minor contribution to the dark matter density, while the lightest ordinary neutralino may account for all or some of the observed dark matter density. The presence of two types of dark matter is a very peculiar feature that affects the collider signatures of the considered $E_6$ inspired SUSY models. The most spectacular LHC signals associated with these models may come from the TeV scale exotic color states and $Z'$. The production of the $Z'$ boson that corresponds to the $U(1)_N$ gauge symmetry should lead to the unmistakable signal $pp\to Z'\to l^{+}l^{-}$ at the LHC. The $Z_{2}^{E}$ symmetry conservation implies that in collider experiments exotic particles can only be created in pairs. Moreover, each exotic particle has to decay into a final state that contains at least one lightest inert neutralino, resulting in missing energy. Because of this the lightest exotic color state, which can be either a $D$-fermion or a $\tilde{D}$-scalar, decays into either $u_i (d_i) + \ell (\nu) + E^{\rm miss}_{T} + X$ if the exotic quark (squark) is a leptoquark or $u^c_i + d^c_j + E^{\rm miss}_{T} + X$ if the exotic quark (squark) is a diquark. The $Z_{2}^{E}$ symmetry conservation requires that $E^{\rm miss}_{T}$ always contains a contribution associated with the lightest inert neutralino.
Since the lightest exotic squark is an $R$--parity even state while the lightest inert neutralino is an $R$--parity odd particle, the final state in the decay of $\tilde{D}$ should also involve the lightest ordinary neutralino to ensure $R$--parity conservation. Thus the pair production of the lightest exotic color state is expected to lead to a substantial enhancement of the cross section of either $pp\to jj\ell^{+}\ell^{-}+E^{\rm miss}_{T}+X$ or $pp\to jjjj+E^{\rm miss}_{T}+X$. If the Yukawa couplings of the exotic particles have a hierarchical structure similar to the one observed in the ordinary quark and lepton sectors, then all states which are odd under the $Z^{E}_2$ symmetry couple mainly to the third generation fermions and sfermions. As a result the TeV scale exotic color states should give rise to an enhancement of the cross section of either $pp\to t\bar{t}\ell^{+}\ell^{-}+E^{\rm miss}_{T}+X$ or $pp\to t\bar{t}b\bar{b}+E^{\rm miss}_{T}+X$. Our consideration indicates that the $\tilde{D}$-scalars in the considered $E_6$ inspired SUSY models lead to rather unusual collider signatures. Indeed, it is commonly expected that scalar diquarks decay into quark--quark pairs without missing energy in the final state, while scalar leptoquarks likewise decay into quark--lepton pairs without missing energy. In the models considered here the $Z_{2}^{E}$ symmetry conservation necessarily leads to missing energy in the corresponding final states. In addition, relatively light exotic quarks and squarks can modify the collider signatures associated with gluinos if the decay of the gluino into an exotic quark and squark is kinematically allowed. In this case gluino pair production at the LHC may result in $D\bar{D}\tilde{D}\tilde{\bar{D}}$ in the final state. The sequential decays of the $D$-fermions and $\tilde{D}$-scalars give rise to an enhancement of either $pp\to 4\,\ell+4\, j + E^{\rm miss}_{T}+X$ or $pp\to 8\,j + E^{\rm miss}_{T}+X$.
In scenario B the fermionic components of the supermultiplets $d^c_4$ and $\overline{d^c}_4$, which form a vectorlike quark state, as well as their superpartners may have TeV scale masses. Then these quark and/or squark states can be pair produced at the LHC via strong interactions and decay into $q_i + E^{\rm miss}_{T} + X$, where $q_i$ can be either an up-type or a down-type quark. This may lead to an enhancement of $pp\to jj+E^{\rm miss}_{T}+X$. The discovery of the $Z'$ and of the new exotic particles predicted by the $E_6$ inspired SUSY models considered here would open a new era in elementary particle physics. It would not only represent a revolution in particle physics, but would also point towards an underlying $E_6$ gauge structure at high energies. \section*{Acknowledgements} \vspace{-2mm} R.N. thanks X.~Tata for sharing his valuable ideas in connection with this work. R.N. acknowledges fruitful discussions with S.~F.~King, J.~Kumar, S.~Moretti, S.~Pakvasa and T.~Rizzo. R.N. is also grateful to P.~Athron, J.~Bjorken, K.~R.~Dienes, J.~Hewett, S.~Kraml, D.~J.~Miller, M.~M\"{u}hlleitner, M.~Sher, M.~A.~Shifman, L.~B.~Okun, B.~D.~Thomas, D.~G.~Sutherland, A.~I.~Vainshtein, M.~I.~Vysotsky for valuable comments and remarks. The work of R.N. was supported by the U.S. Department of Energy under Contract DE-FG02-04ER41291. \newpage
\section*{Introduction} Linear recurrence equations have been widely used in several areas of applied mathematics and computer science. In applied science, they can be used to model the future of a process that depends linearly on a finite string, for instance: in population dynamics to model population size and structure [\cite{BEN}, \cite{FD}, \cite{SW}]; in economics to model the interest rate, the amortization of a loan and price fluctuations [\cite{CEF}, \cite{MAR}, \cite{HK}]; in computer science for the analysis of algorithms [\cite{CLRS}, \cite{JOL}]; in statistics for the autoregressive linear model [\cite{HA}, \cite{RD}]. In theoretical mathematics, they appear, for instance: in differential equations to find the coefficients of series solutions [Chapters 4--5 in \cite{Cod}]; in the proof of Hilbert's tenth problem over $\mathbb{Z}$ \cite{Mati}; and in approximation theory to provide expansions of some second order operators \cite{Spig}. For a complete understanding of the applications of linear recurrence equations we recommend the Introduction of the monograph \cite{EPSW} and the references therein. We consider a random dynamics that arises from a linear homogeneous recurrence equation with a control term given by independent and identically distributed (i.i.d. for short) random variables with Gaussian distribution. To be precise, {\it{given}} $p\in \mathbb{N}$, $\phi_1, \phi_2,\ldots, \phi_p\in\mathbb{R}$ with $\phi_p\not=0$, we define the linear homogeneous recurrence of degree $p$ as follows: \begin{equation}\tag{{\bf{L}}}\label{plr} x_{t+p}=\phi_1x_{t+p-1}+\phi_2x_{t+p-2}+\cdots+\phi_p x_{t} \quad\textrm{ for any } t\in \mathbb{N}_0, \end{equation} where $\mathbb{N}_0$ denotes the set of non-negative integers. To single out a unique solution of \eqref{plr} one should assign initial conditions $x_0,x_1,\ldots,x_{p-1}\in \mathbb{R}$. Recurrence \eqref{plr} is called a recurrence with $p$-history since each term depends only on the $p$ preceding values.
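As a concrete illustration (not taken from the paper), the recurrence \eqref{plr} can be evaluated directly from the coefficients $\phi_1,\dots,\phi_p$ and the initial string $x_0,\dots,x_{p-1}$; the sketch below assumes nothing beyond the definition above.

```python
def linear_recurrence(phi, init, T):
    """Iterate x_{t+p} = phi_1 x_{t+p-1} + ... + phi_p x_t, eq. (L).

    phi  -- coefficients (phi_1, ..., phi_p), with phi_p != 0
    init -- initial conditions (x_0, ..., x_{p-1})
    T    -- total number of terms to return (T >= p)
    """
    p = len(phi)
    x = list(init)
    for _ in range(T - p):
        # the next term depends only on the p-history x_{t+p-1}, ..., x_t
        x.append(sum(phi[k] * x[-1 - k] for k in range(p)))
    return x

# Fibonacci numbers as a degree-2 example: phi = (1, 1), x_0 = 0, x_1 = 1
print(linear_recurrence((1.0, 1.0), (0.0, 1.0), 10))
# -> [0.0, 1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0]
```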
We consider a small perturbation of \eqref{plr} by adding Gaussian noise as follows: given $\epsilon>0$ fixed, consider the random dynamics \begin{equation}\tag{{\bf{SL}}}\label{ar} X^{(\epsilon)}_{t+p}=\phi_1X^{(\epsilon)}_{t+p-1}+\phi_2X^{(\epsilon)}_{t+p-2}+\cdots+\phi_p X^{(\epsilon)}_{t}+\epsilon \xi_{t+p}\quad\textrm{ for any } t\in \mathbb{N}_0, \end{equation} with initial conditions $X^{(\epsilon)}_0=x_0,X^{(\epsilon)}_1=x_1,\ldots,X^{(\epsilon)}_{p-1}=x_{p-1}$, where $(\xi_t:t\geq p)$ is a sequence of i.i.d. random variables with Gaussian distribution with zero mean and variance one. Denote by $(\Omega,\mathcal{F},\mathbb{P})$ the probability space where the sequence $(\xi_t:t\geq p)$ is defined; then the random dynamics \eqref{ar} can be defined as a stochastic process on the probability space $(\Omega,\mathcal{F},\mathbb{P})$. Notice that $\epsilon>0$ is a parameter that controls the magnitude of the noise. When $\epsilon=0$ the deterministic model \eqref{plr} is recovered from the stochastic model \eqref{ar}. Since $(\xi_t:t\geq p)$ is a sequence of i.i.d. random variables with Gaussian distribution, the model \eqref{ar} can be understood as a regularization of \eqref{plr}. To the best of our knowledge, this type of model was originally used in $1927$ by G. Yule \cite{YUL} (with $p=2$) to model random disturbances of a harmonic oscillator, in order to investigate hidden periodicities and their relation to the observations of sunspots. In this article we obtain a {\it{nearly-complete characterization}} of the convergence in the total variation distance between the distribution of $X^{(\epsilon)}_t$ and its limiting distribution as $t$ increases. Under general conditions that we state in Section \ref{model}, when the intensity of the control $\epsilon$ is fixed, as time goes by the random linear recurrence converges to a limiting distribution in the total variation distance.
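The stochastic recurrence \eqref{ar} is the same iteration with an additional Gaussian term of size $\epsilon$ at each step; since the recursion is linear and the noise is centered, $\mathbb{E}[X^{(\epsilon)}_t]=x_t$ for every $t$. The following sketch (with illustrative parameter values, not taken from the paper) simulates \eqref{ar} and checks this identity empirically by averaging many sample paths.

```python
import numpy as np

def simulate_SL(phi, init, T, eps, rng):
    """One sample path of eq. (SL): the recursion (L) plus eps * Gaussian noise."""
    p = len(phi)
    x = list(init)
    for _ in range(T - p):
        drift = sum(phi[k] * x[-1 - k] for k in range(p))
        x.append(drift + eps * rng.standard_normal())
    return np.array(x)

rng = np.random.default_rng(0)
phi, init, T, eps = (1.0, -0.5), (1.0, 0.5), 21, 0.1   # illustrative choice
paths = np.stack([simulate_SL(phi, init, T, eps, rng) for _ in range(5000)])

# Deterministic solution (eps = 0) for comparison
det = simulate_SL(phi, init, T, 0.0, rng)
err = float(np.max(np.abs(paths.mean(axis=0) - det)))
print(err)   # small: the empirical mean tracks x_t
```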
We show that this convergence is in fact abrupt, in the following sense: the total variation distance between the distribution of the random linear recurrence and its limiting distribution drops abruptly, over a negligible time interval (the time window) around a threshold time (the cut-off time), from near one to near zero. This means that if we run the random linear recurrence up to a time window before the cut-off time the process is not well mixed, while after a time window beyond the cut-off time it becomes well mixed. This fact is known as the cut-off phenomenon in the context of stochastic processes. Suppose that we model a system by a random process $(X^{(\epsilon)}_t:t\geq 0)$, where the parameter $\epsilon$ denotes the intensity of the noise, and assume that $X^{(\epsilon)}_\infty$ is its equilibrium. A natural question then arises: {\it{for a fixed $\epsilon$ and an error $\eta>0$, how much time $\tau(\epsilon,\eta)$ do we need to run the model $(X^{(\epsilon)}_t:t\geq 0)$ in order to be close to its equilibrium $X^{(\epsilon)}_\infty$ within an error of at most $\eta$ in a suitable distance?}} The latter is known as a {\it{mixing time}} in the context of random processes. In general, it is hard to compute and/or estimate $\tau(\epsilon,\eta)$. The cut-off phenomenon provides a strong answer in the small-$\epsilon$ regime. Roughly speaking, as $\epsilon$ goes to zero, it means that at a deterministic time $\tau^{*}(\epsilon)$ the system is ``almost'' at equilibrium within any error $\eta$. We provide a precise definition in Section \ref{model}. The cut-off phenomenon was extensively studied in the eighties to describe the abrupt convergence that appears in models of card shuffling, the Ehrenfest urn and random transpositions; see for instance \cite{DIA}. In general, it is a challenging problem to prove that a specific model exhibits a cut-off phenomenon: it requires a complete understanding of the dynamics of the specific random process.
For an introduction to this concept, we recommend Chapter $18$ of \cite{LP} for discrete Markov chains on a finite state space, \cite{MART} for discrete Markov chains with an infinite countable state space, and [\cite{BJ}, \cite{BJ1}, \cite{BA}] for stochastic differential equations in a continuous state space. This article is organized as follows: In Section \ref{model} we state the main result and its consequences. In Section \ref{proof} we give the proof of Theorem \ref{main}, which is the main result of this article; we also state conditions under which the hypotheses of Theorem \ref{main} can be verified. In Section \ref{examples} we show in detail how to verify the conditions of Theorem \ref{main} for a discretization of the celebrated Brownian oscillator. Lastly, we provide Appendix \ref{tools}, with some results about the distribution of the random linear recurrence and its limiting behavior, Appendix \ref{totalvariation}, which summarizes some properties of the total variation distance between Gaussian distributions, and Appendix \ref{toolsappendix}, which states some elementary limit behaviors. \section{Main Theorem}\label{model} One of the most important problems in dynamical systems is the study of the limit behavior of the evolution for forward times. To the linear recurrence \eqref{plr} we associate the characteristic polynomial \begin{equation}\label{ce} f(\lambda)= \lambda^{p}-\phi_1\lambda^{p-1}-\cdots-\phi_p \quad\textrm{ for any } \lambda\in \mathbb{C}. \end{equation} From now until the end of this article, we assume \begin{equation} \tag{{\bf{H}}} \label{H} \textrm{all the roots of \eqref{ce} have modulus less than one.} \end{equation} Under \eqref{H} one can prove that for any string of initial values $x_0,\ldots,x_{p-1}\in \mathbb{R}$, $x_{t}$ goes to zero exponentially fast as $t$ goes to infinity. For more details see Theorem 1 in \cite{GSL}.
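Hypothesis \eqref{H} is easy to check numerically: the roots of the characteristic polynomial \eqref{ce} can be obtained from the coefficient vector $(1,-\phi_1,\dots,-\phi_p)$ with any standard polynomial root finder. The sketch below (illustrative coefficients, not taken from the paper) verifies \eqref{H} and the resulting exponential decay of the deterministic recurrence.

```python
import numpy as np

def spectral_radius(phi):
    """Largest modulus among the roots of f(l) = l^p - phi_1 l^{p-1} - ... - phi_p."""
    coeffs = [1.0] + [-c for c in phi]
    return max(abs(r) for r in np.roots(coeffs))

phi = (1.0, -0.5)          # f(l) = l^2 - l + 0.5, roots (1 +- i)/2, modulus 1/sqrt(2)
rho = spectral_radius(phi)
print(rho)                 # ~0.7071 < 1, so (H) holds

# Exponential decay of the deterministic recurrence under (H)
x = [1.0, 1.0]
for _ in range(60):
    x.append(phi[0] * x[-1] + phi[1] * x[-2])
print(abs(x[-1]))          # of order rho**60, numerically negligible
```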
In the stochastic model \eqref{ar}, \eqref{H} implies that the process $(X^{(\epsilon)}_t:t\in \mathbb{N}_0)$ is strongly ergodic, i.e., for any initial data $x_0,\ldots,x_{p-1}$, the random recurrence $X^{(\epsilon)}_t$ converges in the so-called total variation distance, as $t$ goes to infinity, to a random variable $X^{(\epsilon)}_\infty$. For further details see Lemma \ref{ergodico} in Appendix \ref{tools}. Given $m\in \mathbb{R}$ and $\sigma^2\in (0,+\infty)$, denote by $\mathcal{N}(m,\sigma^2)$ the Gaussian distribution with mean $m$ and variance $\sigma^2$. Later on, we see that for $t\geq p$ the random variable $X^{(\epsilon)}_t$ has distribution $\mathcal{N}(x_t,\epsilon^2 \sigma^2_t)$, where $x_t$ is given by \eqref{plr} and $\sigma^2_t\in (0,+\infty)$. Moreover, the random variable $X^{(\epsilon)}_\infty$ has distribution $\mathcal{N}(0,\epsilon^2 \sigma^2_\infty)$ with $\sigma^2_\infty\in (0,+\infty)$. Since the distribution of $X^{(\epsilon)}_t$ for $t\geq p$ and the distribution of the limit $X^{(\epsilon)}_\infty$ are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$, a natural way to measure their discrepancy is the total variation distance. Given two probability measures $\mathbb{P}_1$ and $\mathbb{P}_2$ on the measure space $(\Omega,\mathcal{F})$, the total variation distance between the probabilities $\mathbb{P}_1$ and $\mathbb{P}_2$ is given by \begin{eqnarray*}\label{tv} \mathrm{{\bf{d}}}_{\mathrm{TV}}(\mathbb{P}_1,\mathbb{P}_2)\coloneqq \sup\limits_{F\in \mathcal{F}}|\mathbb{P}_1(F)-\mathbb{P}_2(F)|. \end{eqnarray*} When $X,Y$ are random variables defined on the probability space $(\Omega,\mathcal{F},\mathbb{P})$ we write $\mathrm{{\bf{d}}}_{\mathrm{TV}}(X,Y)$ instead of $\mathrm{{\bf{d}}}_{\mathrm{TV}}(\mathbb{P}(X\in\cdot), \mathbb{P}(Y\in\cdot))$, where $\mathbb{P}(X\in\cdot)$ and $\mathbb{P}(Y\in\cdot)$ denote the distributions of $X$ and $Y$ under $\mathbb{P}$, respectively.
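For Gaussian laws, the total variation distance can also be evaluated numerically, which is convenient for illustrating the quantities below. The following sketch (an illustrative numerical check, not used in the proofs) integrates the absolute difference of the two densities by the trapezoidal rule, and compares the result in the equal-variance case with the classical closed form $2\Phi(\nicefrac{|m|}{2\sigma})-1$.

```python
import math

def gaussian_pdf(x, m, s2):
    # Density of N(m, s2) at x.
    return math.exp(-(x - m) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

def tv_gaussians(m1, s21, m2, s22, lo=-20.0, hi=20.0, n=40001):
    # d_TV(P1, P2) = (1/2) * integral of |f1 - f2|, via the trapezoidal rule.
    h = (hi - lo) / (n - 1)
    vals = [abs(gaussian_pdf(lo + i * h, m1, s21) - gaussian_pdf(lo + i * h, m2, s22))
            for i in range(n)]
    return 0.5 * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Equal variances: d_TV(N(m, s^2), N(0, s^2)) = 2*Phi(|m|/(2s)) - 1 = erf(|m|/(2*sqrt(2)*s)).
print(tv_gaussians(1.0, 1.0, 0.0, 1.0))
```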
Then we define \begin{equation*} \label{toma} d^{(\epsilon)}(t) :=\mathrm{{\bf{d}}}_{\mathrm{TV}} \left(X^{(\epsilon)}_t,X^{(\epsilon)}_\infty\right)= \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2 \sigma^2_t),\mathcal{N}(0,\epsilon^2 \sigma^2_\infty)\right)\quad\textrm{ for any } t\geq p. \end{equation*} Notice that the above distance depends on the initial conditions $x_0,\ldots,x_{p-1}\in \mathbb{R}$. To keep the notation light, {\it{we omit this dependence from our notation.}} For a complete treatment of the total variation distance between two arbitrary probabilities with densities, we recommend Section $3.3$ in \cite{RDR} and Section $2.2$ in \cite{AD}. Nevertheless, for the sake of completeness, we provide Appendix \ref{totalvariation}, which contains the properties of and bounds for the total variation distance between Gaussian distributions that we use to prove Theorem \ref{main}, the main theorem of this article. The goal is to study the so-called {\it{cut-off phenomenon}} in the total variation distance, as $\epsilon$ goes to zero, for the family of stochastic processes \[\left(X^{(\epsilon)}:=\left(X^{(\epsilon)}_t:t\in \mathbb{N}_0\right): \epsilon>0\right)\] with {\it{fixed}} initial conditions $x_0,\ldots,x_{p-1}$. Roughly speaking, the argument of the proof consists of fairly intricate calculations of the distributions of $X^{(\epsilon)}_t$, $t\geq p$, and of its limit $X^{(\epsilon)}_\infty$, all of which are Gaussian. The cut-off phenomenon is then proved from a refined analysis of their means and variances, together with ``explicit calculations and bounds'' for the total variation distance between Gaussian distributions. This analysis also exhibits a delicate case in which the cut-off phenomenon does not occur. Now, we introduce the formal definition of the cut-off phenomenon. Recall that for any $z\in \mathbb{R}$, $\lfloor z \rfloor$ denotes the greatest integer less than or equal to $z$.
Consider the family of stochastic processes $(X^{(\epsilon)}:=(X^{(\epsilon)}_t:t\in \mathbb{N}_0): \epsilon>0)$. According to \cite{BY}, the cut-off phenomenon can be expressed at three increasingly sharp levels as follows. \begin{definition} The family $(X^{(\epsilon)}:\epsilon>0)$ has \begin{itemize} \item[i)] {\it{cut-off}} at $(t^{(\epsilon)}:\epsilon>0)$ with cut-off time $t^{(\epsilon)}$ if $t^{(\epsilon)}$ goes to infinity as $\epsilon$ goes to zero and \[ \lim\limits_{\epsilon \rightarrow 0^+}d^{(\epsilon)}(\lfloor \delta t^{(\epsilon)}\rfloor)= \begin{cases} 1 \quad&\textrm{if}~ 0<\delta<1,\\ 0 &\textrm{if}~ \delta>1. \end{cases} \] \item[ii)] {\it{window cut-off}} at $((t^{(\epsilon)},w^{(\epsilon)}): \epsilon>0)$ with cut-off time $t^{(\epsilon)}$ and time window $w^{(\epsilon)}$ if $t^{(\epsilon)}$ goes to infinity as $\epsilon$ goes to zero, $w^{(\epsilon)}=o(t^{(\epsilon)})$ and \[ \lim\limits_{b\rightarrow -\infty}\liminf\limits_{\epsilon \rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=1 \quad \textrm{ and } \quad \lim\limits_{b\rightarrow +\infty}\limsup\limits_{\epsilon \rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=0. \] \item[iii)] {\it{profile cut-off}} at $((t^{(\epsilon)},w^{(\epsilon)}):\epsilon>0)$ with cut-off time $t^{(\epsilon)}$, time window $w^{(\epsilon)}$ and profile function $G:\mathbb{R}\rightarrow [0,1]$ if $t^{(\epsilon)}$ goes to infinity as $\epsilon$ goes to zero, $w^{(\epsilon)}=o(t^{(\epsilon)})$, \[ \lim\limits_{\epsilon \rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=:G(b) \quad \textrm{ exists for any } b\in \mathbb{R} \] together with $\lim\limits_{b\rightarrow -\infty}G(b)=1$ and $\lim\limits_{b\rightarrow +\infty}G(b)=0$. \end{itemize} \end{definition} Bearing all this in mind, we can analyze how this convergence occurs, which is exactly the content of the following theorem.
\begin{theorem}[Main theorem]\label{main} Assume that \eqref{H} holds. For given initial data $x=(x_0,\ldots,x_{p-1})\in \mathbb{R}^p\setminus\{0_p\}$ assume that there exist $r=r(x)\in (0,1)$, $l=l(x)\in \{1,\ldots,p\}$ and $v_t=v(t,x)\in \mathbb{R}$ such that \begin{enumerate} \item[i)] \begin{equation*}\label{most} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-v_t\right|=0, \end{equation*} \item[ii)] $\sup\limits_{t\rightarrow+\infty}|v_t|<+\infty$, \item[iii)] $\liminf\limits_{t\rightarrow+\infty}|v_t|>0$. \end{enumerate} Then the family of random linear recurrences $(X^{(\epsilon)}:=(X^{(\epsilon)}_t:t\in \mathbb{N}_0): \epsilon>0)$ has {window} cut-off as $\epsilon$ goes to zero with cut-off time \[ t^{(\epsilon)}=\frac{\ln(\nicefrac{1}{\epsilon})}{\ln(\nicefrac{1}{r})}+ (l-1)\frac{\ln\left(\frac{\ln(\nicefrac{1}{\epsilon})}{\ln(\nicefrac{1}{r})}\right)}{\ln(\nicefrac{1}{r})} \] and time window \[ w^{(\epsilon)}=C+o_{\epsilon}(1), \] where $C$ is any positive constant and $\lim\limits_{\epsilon\rightarrow 0^+}o_{\epsilon}(1)=0$. In other words, \[ \lim\limits_{b\rightarrow -\infty}\liminf\limits_{\epsilon \rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=1 \textrm{ and } \lim\limits_{b\rightarrow +\infty}\limsup\limits_{\epsilon \rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=0, \] where $d^{(\epsilon)}(t)=\mathrm{{\bf{d}}}_{\mathrm{TV}} \left(X^{(\epsilon)}_t,X^{(\epsilon)}_\infty\right)$ for any $t\geq p$. \end{theorem} \begin{remark} Notice that $\sup\limits_{t\rightarrow+\infty}|v_t|<+\infty$ and $\limsup\limits_{t\rightarrow+\infty}|v_t|<+\infty$ are actually equivalent. However, $\liminf\limits_{t\rightarrow+\infty}|v_t|>0$ does not always imply $\inf\limits_{t\geq 0}|v_t|>0$. \end{remark} \begin{remark} Roughly speaking, the number $r$ corresponds to the absolute value of some roots of \eqref{ce} and $l$ is related to their multiplicities.
\end{remark} \begin{remark} Under the conditions of Theorem \ref{main}, the total variation distance between the distribution of $X^{(\epsilon)}_t$ and its limiting distribution $X^{(\epsilon)}_\infty$ drops abruptly from one to zero in a time window $w^{(\epsilon)}$ of constant order around the cut-off time $t^{(\epsilon)}$, which is of logarithmic order. \end{remark} We now introduce the definition of a maximal set. We say that a set $\mathcal{A}\subset \mathbb{R}^{p}$ is a maximal set satisfying the property {\bf{P}} if and only if any set $\mathcal{B}\subset \mathbb{R}^p$ that satisfies the property {\bf{P}} is a subset of $\mathcal{A}$. In the case when all the roots of \eqref{ce} are real numbers, we see in Lemma \ref{lya} that there exists a maximal set $\mathcal{C}\subset \mathbb{R}^p$ such that any initial datum $x:=(x_0,\ldots,x_{p-1}) \in \mathcal{C}$ fulfills Conditions i), ii) and iii) of Theorem \ref{main}. Moreover, $\mathcal{C}$ has full measure with respect to the Lebesgue measure on $\mathbb{R}^p$. If we only assume \eqref{H} and {\it{no further assumptions}}, we see in Corollary \ref{comp} that Condition iii) of Theorem \ref{main} may fail. \section{Proof}\label{proof} Since the random recurrence \eqref{ar} is linear in the inputs, which are independent Gaussian random variables, the distribution of the random dynamics at any time $t\geq p$ is also Gaussian. Observe that for any $t\geq p$, $X^{(\epsilon)}_t$ has Gaussian distribution with mean $x_t$ and variance $\sigma^2(t,\epsilon, x_0,\ldots,x_{p-1})\in (0,+\infty)$. Later on, in Lemma \ref{distri} in Appendix \ref{tools}, under assumption \eqref{H}, we see that $\sigma^2(t,\epsilon, x_0,\ldots,x_{p-1})=\epsilon^2\sigma^2_t$, where $\sigma^2_t\in [1,+\infty)$ does not depend on the initial data $x_0,x_1,\ldots,x_{p-1}$. The following lemma asserts that the random dynamics \eqref{ar} is strongly ergodic when \eqref{H} holds. \begin{lemma}\label{lemma2} Assume that \eqref{H} holds.
As $t$ goes to infinity, $X^{(\epsilon)}_t$ converges in the total variation distance to a random variable $X^{(\epsilon)}_\infty$ that has Gaussian distribution with zero mean and variance $\epsilon^2\sigma^2_\infty\in [\epsilon^2,+\infty)$. \end{lemma} For the sake of brevity, the proof of the last lemma is postponed to Lemma \ref{ergodico} in Appendix \ref{tools}. Recall that \begin{equation*} d^{(\epsilon)}(t)= \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2 \sigma^2_t),\mathcal{N}(0,\epsilon^2 \sigma^2_\infty)\right) \quad \textrm{ for any } t\geq p. \end{equation*} In order to analyze the cut-off phenomenon for the distance $d^{(\epsilon)}(t)$, it is convenient for the computations to study a closely related distance, as the following lemma states. \begin{lemma}\label{opo} For any $t\geq p$ we have \begin{equation*}\label{buenasi} \left| d^{(\epsilon)}(t)-D^{(\epsilon)}(t)\right|\leq R(t) \end{equation*} where \[D^{(\epsilon)}(t)=\mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{x_t}{\epsilon\sigma_\infty},1\right), \mathcal{N}(0,1)\right)\] and \[ R(t)=\mathrm{{\bf{d}}}_{\mathrm{TV}}(\mathcal{N}(0,\sigma^2_t),\mathcal{N}(0,\sigma^2_\infty)). \] \end{lemma} \begin{proof} Notice that the terms $d^{(\epsilon)}(t)$ and $D^{(\epsilon)}(t)$ depend on the parameter $\epsilon$ and on the initial data $x_0,x_1,\ldots,x_{p-1}$. Nevertheless, the term $R(t)$ depends neither on $\epsilon$ nor on the initial data $x_0,x_1,\ldots,x_{p-1}$. Let $t\geq p$. By the triangle inequality we obtain \begin{align*} d^{(\epsilon)}(t)\leq \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2\sigma^2_t),\mathcal{N}(x_t,\epsilon^2 \sigma^2_\infty)\right)+ \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2\sigma^2_\infty),\mathcal{N}(0,\epsilon^2 \sigma^2_\infty)\right). \end{align*} By item i) and item ii) of Lemma \ref{pend} we have \begin{align*} d^{(\epsilon)}(t)\leq R(t)+D^{(\epsilon)}(t).
\end{align*} On the other hand, by item ii) of Lemma \ref{pend} we notice \[D^{(\epsilon)}(t)= \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2\sigma^2_\infty),\mathcal{N}(0,\epsilon^2 \sigma^2_\infty)\right).\] By the triangle inequality we obtain \begin{align*} D^{(\epsilon)}(t)\leq \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2\sigma^2_\infty),\mathcal{N}(x_t,\epsilon^2 \sigma^2_t)\right)+ \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}(x_t,\epsilon^2\sigma^2_t),\mathcal{N}(0,\epsilon^2 \sigma^2_\infty)\right). \end{align*} Again, by item i) and item ii) of Lemma \ref{pend} we have \begin{align*} D^{(\epsilon)}(t)\leq R(t)+d^{(\epsilon)}(t). \end{align*} Gluing all pieces together we deduce \begin{equation*} \left| d^{(\epsilon)}(t)-D^{(\epsilon)}(t)\right|\leq R(t)\quad \textrm{ for any } t\geq p. \end{equation*} \end{proof} Now, we have all the tools to prove Theorem \ref{main}. \begin{proof}[Proof of Theorem \ref{main}] By Lemma \ref{lemma2} and Lemma \ref{pend1} we have $\lim\limits_{t\rightarrow +\infty}R(t)=0$. In order to analyze $D^{(\epsilon)}(t)$ we observe that \begin{equation}\label{ton1} \frac{x_t}{\epsilon \sigma_\infty}= \frac{t^{l-1}r^{t}}{\epsilon\sigma_\infty} \left(\frac{x_t}{t^{l-1}r^t}-v_t\right)+\frac{t^{l-1}r^{t}}{\epsilon\sigma_\infty}v_t, \end{equation} where $l\in \{1,\ldots,p\}$, $r\in(0,1)$, and $v_t$ are given by Condition i). By Lemma \ref{C3} in {Appendix \ref{toolsappendix}} we have \[ \lim\limits_{\epsilon\rightarrow0^+}\frac{(t^{(\epsilon)})^{l-1} r^{t^{(\epsilon)}}}{\epsilon} = 1. \] For any $t\geq 0$, define $p_t=\frac{t^{l-1}r^{t}}{\epsilon\sigma_\infty} \left(\frac{x_t}{t^{l-1}r^t}-v_t\right)$ and $q_t=\frac{t^{l-1}r^{t}}{\epsilon\sigma_\infty}v_t$. 
Then for any $b\in \mathbb{R}$ we have \begin{align*} |p_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|\leq & \left(\frac{t^{(\epsilon)}+bw^{(\epsilon)}}{t^{(\epsilon)}}\right)^{l-1} \frac{(t^{(\epsilon)})^{l-1}r^{t^{(\epsilon)}+bw^{(\epsilon)}-1}}{\epsilon\sigma_\infty}\times \\ & \left| \frac{x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}}{({\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor})^{l-1}r^{{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}}}-v_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor} \right|. \end{align*} By Condition i) we have \begin{equation}\label{ton2} \lim\limits_{\epsilon\rightarrow 0^+} p_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}=0 \quad\textrm{ for any } b\in \mathbb{R}. \end{equation} Now, we analyze an upper bound for $|q_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|$. Notice that \begin{equation*} |q_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|\leq \left(\frac{t^{(\epsilon)}+bw^{(\epsilon)}}{t^{(\epsilon)}}\right)^{l-1} \frac{(t^{(\epsilon)})^{l-1}r^{t^{(\epsilon)}+bw^{(\epsilon)}-1}}{\epsilon\sigma_\infty}M, \end{equation*} where $M=\sup\limits_{t\geq 0}|v_t|$. By Condition ii) we know $M<+\infty$. Then \begin{equation}\label{ton3} \limsup\limits_{\epsilon \rightarrow 0^+}|q_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|\leq \frac{Mr^{bC-1}}{\sigma_\infty} \quad\textrm{ for any } b\in \mathbb{R}. \end{equation} From equality \eqref{ton1}, relation \eqref{ton2}, inequality \eqref{ton3} and item ii) of Lemma \ref{C2} we get \[ \limsup\limits_{\epsilon \rightarrow 0^+} \frac{|x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|}{\epsilon\sigma_\infty}\leq \frac{Mr^{bC-1}}{\sigma_\infty} \quad\textrm{ for any } b\in \mathbb{R}. 
\] Using item i) of Lemma \ref{pend4} we have \begin{align*} \limsup\limits_{\epsilon \rightarrow 0^+}& ~ \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{|x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|}{\epsilon\sigma_\infty},1\right),\mathcal{N}(0,1)\right) \leq \\ &~ \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{Mr^{bC-1}}{\sigma_\infty},1\right),\mathcal{N}(0,1)\right) \end{align*} for any $b\in \mathbb{R}$. Since $r\in (0,1)$, then by Lemma \ref{pend1} we have \begin{equation}\label{b1} \lim\limits_{b\rightarrow +\infty}\limsup\limits_{\epsilon \rightarrow 0^+} \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{|x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|}{\epsilon\sigma_\infty},1\right),\mathcal{N}(0,1)\right)=0. \end{equation} In order to analyze a lower bound for $|q_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|$, note \begin{align*} |q_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|&\geq \left(\frac{t^{(\epsilon)}+bw^{(\epsilon)}-1}{t^{(\epsilon)}}\right)^{l-1} \frac{(t^{(\epsilon)})^{l-1}r^{t^{(\epsilon)}+bw^{(\epsilon)}}}{\epsilon\sigma_\infty} |v_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|\\ \end{align*} for any $b\in \mathbb{R}$. By Condition iii) and item iii) of Lemma \ref{C2} we have \begin{equation}\label{ton4} \liminf\limits_{\epsilon \rightarrow 0^+} |q_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|\geq \frac{r^{bC}}{\sigma_\infty} \liminf\limits_{\epsilon \rightarrow 0^+}|v_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|\geq \frac{mr^{bC}}{\sigma_\infty}, \end{equation} where $m=\liminf\limits_{t\rightarrow +\infty}|v_t|\in (0,+\infty)$. From equality \eqref{ton1}, relation \eqref{ton2}, inequality \eqref{ton4} and item ii) of Lemma \ref{C2} we get \begin{equation*} \liminf\limits_{\epsilon \rightarrow 0^+} \frac{|x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|}{\epsilon\sigma_\infty}\geq \frac{mr^{bC}}{\sigma_\infty} \quad\textrm{ for any } b\in \mathbb{R}. 
\end{equation*} From item ii) of Lemma \ref{pend4} we have \begin{align*} \liminf\limits_{\epsilon \rightarrow 0^+}&~ \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{|x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|}{\epsilon\sigma_\infty},1\right),\mathcal{N}(0,1)\right) \geq \\ & ~\mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{r^{bC}}{\sigma_\infty} m,1\right),\mathcal{N}(0,1)\right) \end{align*} for any $b\in \mathbb{R}$. Since $r\in (0,1)$, by item iii) of Lemma \ref{pend2} we have \begin{equation}\label{b2} \lim\limits_{b\rightarrow -\infty}\liminf\limits_{\epsilon \rightarrow 0^+} \mathrm{{\bf{d}}}_{\mathrm{TV}}\left(\mathcal{N}\left(\frac{|x_{\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor}|}{\epsilon\sigma_\infty},1\right),\mathcal{N}(0,1)\right)=1. \end{equation} From \eqref{b1} and \eqref{b2} we have \[ \lim\limits_{b\rightarrow +\infty}\limsup\limits_{\epsilon\rightarrow 0^+}D^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)} \rfloor)=0 \textrm{ and } \lim\limits_{b\rightarrow -\infty}\liminf\limits_{\epsilon\rightarrow 0^+}D^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=1. \] Recall that $ \lim\limits_{t\rightarrow +\infty}R(t)=0 $. By Lemma \ref{opo} and item i) of Lemma \ref{C2} we obtain \[ \limsup\limits_{\epsilon\rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)\leq \limsup\limits_{\epsilon\rightarrow 0^+}D^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor). \] Now, sending $b \to +\infty$ we get \[ \lim\limits_{b\rightarrow +\infty}\limsup\limits_{\epsilon\rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=0. \] Similarly, by Lemma \ref{opo} and item ii) of Lemma \ref{C2} we obtain \[ \liminf\limits_{\epsilon\rightarrow 0^+}D^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)\leq \liminf\limits_{\epsilon\rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor).
\] Now, sending $b \to -\infty$ we get \[ \lim\limits_{b\rightarrow -\infty}\liminf\limits_{\epsilon\rightarrow 0^+}d^{(\epsilon)}(\lfloor t^{(\epsilon)}+bw^{(\epsilon)}\rfloor)=1. \] \end{proof} \subsection{Fulfilling the conditions of Theorem \ref{main}} Now, we provide a precise estimate of the rate of convergence to zero of \eqref{plr}. Let us recall some well-known facts about $p$-linear recurrences. By the Fundamental Theorem of Algebra, the polynomial \eqref{ce} has exactly $p$ complex roots counted with multiplicity. Denote by $\lambda_1,\ldots,\lambda_q\in \mathbb{C}$ the distinct roots of \eqref{ce} with multiplicities $m_1,\ldots,m_q$, respectively, where $1\leq q\leq p$. Then \begin{equation}\label{eq1} x_t=\sum\limits_{j_1=1}^{m_1}c_{1,j_1}t^{j_1-1}\lambda_1^t+\sum\limits_{j_2=1}^{m_2}c_{2,j_2}t^{j_2-1}\lambda_2^t+ \ldots+\sum\limits_{j_q=1}^{m_q}c_{q,j_q}t^{j_q-1}\lambda_q^t \end{equation} for any $t\in \mathbb{N}_0$, where the coefficients $c_{1,1},\ldots,c_{1,m_1},\ldots,c_{q,1},\ldots,c_{q,m_q}$ are uniquely determined by the initial data $x_0,\ldots,x_{p-1}$. For more details see Theorem $1$ in \cite{GSL}. Moreover, for any initial data $(x_0,\ldots,x_{p-1})\in\mathbb{R}^{p}\setminus \{0_p\}$ we have \[(c_{1,1},\ldots,c_{1,m_1},\ldots,c_{q,1},\ldots,c_{q,m_q})\in \mathbb{C}^{p}\setminus \{0_p\}.\] Notice that the right-hand side of \eqref{eq1} may involve complex numbers. When all the roots of \eqref{ce} are real numbers, we can establish the precise exponential behavior of $x_t$ as $t$ grows. \begin{lemma}[Real roots]\label{lya} {Assume that all the roots of \eqref{ce} are real numbers}. Then there exists a non-empty maximal set $\mathcal{C} \subset \mathbb{R}^p$ such that for any $x=(x_0,\ldots,x_{p-1})\in \mathcal{C}$ there exist $r\coloneqq r(x)>0$, $l\coloneqq l(x)\in \{1,\ldots,p\}$ and $v_t\coloneqq v(t,x)\in \mathbb{R}$ satisfying \begin{equation*}\label{lyapunov} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-v_t\right|=0.
\end{equation*} Moreover, we have $\sup\limits_{t\rightarrow+\infty}|v_t|<+\infty$ and $\liminf\limits_{t\rightarrow+\infty}|v_t|>0$. \end{lemma} \begin{proof} Recall that the constants $c_{1,1},\ldots,c_{1,m_1},\ldots,c_{q,1},\ldots,c_{q,m_q}$ in representation \eqref{eq1} depend on the initial data $x_0,x_1,\ldots,x_{p-1}$. In order to avoid technicalities, without loss of generality we can assume that for each $1\leq j\leq q$ there exists at least one $1\leq k\leq m_j$ such that $c_{j,k}\not=0$. If this assumption fails for some $1\leq j\leq q$, then the root $\lambda_j$ does not appear in representation \eqref{eq1} for the specific initial data $x_0,x_1,\ldots,x_{p-1}$; we can then remove it from representation \eqref{eq1} and apply the method described below. Set $r=\max\limits_{1\leq j\leq q}{|\lambda_j|}>0$. Since all the roots of \eqref{ce} are real numbers, at most two distinct roots of \eqref{ce} can have the same absolute value, namely $\lambda$ and $-\lambda$. The function $\mathrm{sign}(\cdot)$ is defined over the domain $\mathbb{R}\setminus\{0\}$ by $\mathrm{sign}(x)=\nicefrac{x}{|x|}$. Exactly one of the following cases occurs. \begin{itemize} \item[i)] There exists a unique $1\leq j\leq q$ such that $|\lambda_{j}|=r$. Let \[l=\max\{1\leq s\leq m_j:c_{j,s}\not=0\}.\] Then \[\lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-c_{j,l}(\mathrm{sign} (\lambda_j))^t\right|=0.\] In this case $\mathcal{C}=\mathbb{R}^p\setminus \{0_p\}$. \item[ii)] There exist $1\leq j<k\leq q$ such that $|\lambda_{j}|=|\lambda_k|=r$. Without loss of generality, we can assume $0<\lambda_k=-\lambda_j$. Let \[l_j=\max\{1\leq s\leq m_j:c_{j,s}\not=0\} \] and \[l_k=\max\{1\leq s\leq m_k:c_{k,s}\not=0\}. \] If $l_j<l_k$ or $l_k<l_j$ then by taking $l=\max\{l_j,l_k\}$ we have \[ \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-c_{\star,l}(\mathrm{sign} (\lambda_\star))^t\right|=0, \] where $\star=j$ if $l_j=l$ and $\star=k$ if $l_k=l$.
In this case $\mathcal{C}=\mathbb{R}^p\setminus \{0_p\}$. If $l_j=l_k$ then by taking $l=l_j$, $v_t=(-1)^{t}c_{j,l}+c_{k,l}$ we have \begin{equation*} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-v_t\right|=0. \end{equation*} Notice that $\sup\limits_{t\geq 0}|v_t|< +\infty$. By taking \[\mathcal{C}=\{(x_0,\ldots,x_{p-1})\in \mathbb{R}^p: -c_{j,l}+c_{k,l}\not=0 \textrm{ and } c_{j,l}+c_{k,l}\not=0 \}\] we have $\liminf\limits_{t\rightarrow+\infty}|v_t|>0$. \end{itemize} \end{proof} \begin{remark} From the proof of Lemma \ref{lya}, we can describe $\mathcal{C}$ precisely. Moreover, $\mathcal{C}$ has full measure with respect to the Lebesgue measure on $\mathbb{R}^p$. \end{remark} Beyond the real-roots case, the following lemma provides a fine estimate of the behavior of \eqref{plr} as $t$ grows in the general setting. \begin{lemma}[General case]\label{ulya33} For any $x=(x_0,\ldots,x_{p-1})\in \mathbb{R}^p\setminus \{0_{p}\}$ there exist $r\coloneqq r(x)>0$, $l\coloneqq l(x)\in \{1,\ldots,p\}$ and $v_t\coloneqq v(t,x)\in \mathbb{R}$ such that \begin{equation*} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-v_t\right|=0, \end{equation*} where \[ v_t=\sum\limits_{j=1}^{m} \left(\alpha_j\cos(2\pi\theta_j t)+\beta_j\sin(2\pi\theta_j t)\right) \] with $(\alpha_j, \beta_j) \coloneqq ( \alpha_j(x), \beta_j(x)) \in \mathbb{R}^2\setminus\{(0,0)\}$, $m\coloneqq m(x)\in \{1,\ldots,p\}$, and $\theta_j\coloneqq \theta_j(x)\in [0,1)$ for any $j \in \{ 1, \ldots , m\}$. Moreover, $\sup\limits_{t \geq 0}|v_t|<+\infty$. \end{lemma} \begin{proof} From \eqref{eq1} we have \[ x_t=\sum\limits_{j_1=1}^{m_1}c_{1,j_1}t^{j_1-1}\lambda_1^t+\sum\limits_{j_2=1}^{m_2}c_{2,j_2}t^{j_2-1}\lambda_2^t+ \ldots+\sum\limits_{j_q=1}^{m_q}c_{q,j_q}t^{j_q-1}\lambda_q^t \quad\textrm{ for any } t\in \mathbb{N}_0. \] Without loss of generality, we assume that for any $k\in\{1, \ldots, q\}$ there exists $j \in \{1,\ldots, m_k\}$ such that $c_{k,j}\neq 0$.
Let $l_k\coloneqq \max\{1\leq j\leq m_k:c_{k,j}\neq 0\}$. Then $x_t$ can be rewritten as \[ x_t=\sum\limits_{j_1=1}^{l_1}c_{1,j_1}t^{j_1-1}\lambda_1^t+\sum\limits_{j_2=1}^{l_2}c_{2,j_2}t^{j_2-1}\lambda_2^t+ \ldots+\sum\limits_{j_q=1}^{l_q}c_{q,j_q}t^{j_q-1}\lambda_q^t, \] where $c_{k,l_k}\neq 0$ for each $k$. For each $k$ let $r_k \coloneqq \| \lambda_k \|$ be its complex modulus. Without loss of generality we assume: \begin{itemize} \item[i)] $r_1\leq \cdots \leq r_q$, \item[ii)] there exists an integer $\tilde{h}$ such that $r_{\tilde{h}}=\cdots=r_q$, \item[iii)] $l_{\tilde{h}} \leq \cdots \leq l_q$, \item[iv)] there exists an integer $h\geq \tilde{h}$ such that $l_h=\cdots=l_q$. \end{itemize} Let $r\coloneqq r_q$ and $l\coloneqq l_q$. By taking $v_t=r^{-t}(c_{h,l}\lambda_h^t+\dots+c_{q,l}\lambda_q^t)$ we have \[ \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{t^{l-1}r^t}-v_t\right|=0, \] where $\lambda_h,\dots,\lambda_q$ have the same modulus $r$ but different arguments $2\pi\theta_j$ with $\theta_j\in [0,1)$. Then \[ v_t=\sum\limits_{j=h}^{q} \left(\alpha_j\cos(2\pi\theta_j t)+\beta_j\sin(2\pi\theta_j t)\right). \] Since $c_{k,l_k} \neq 0$ for each $h\leq k\leq q$, the coefficients $\alpha_j$ and $\beta_j$ are not both zero for any $h\leq j\leq q$. After relabeling we obtain the desired result. \end{proof} \begin{remark} Without further conditions, Lemma \ref{ulya33} does not guarantee that $\liminf\limits_{t\rightarrow +\infty}|v_t|>0$. Indeed, the following corollary provides sufficient conditions under which $\liminf\limits_{t\rightarrow +\infty}|v_t|=0$. \end{remark} Following \cite{MK}, we say that the numbers $\vartheta_1,\ldots,\vartheta_m $ are rationally independent if $k_1\vartheta_1+\ldots+k_m\vartheta_m \notin \mathbb{Z}$ for any $(k_1, \ldots, k_m) \in \mathbb{Z}^m \setminus \{ 0_m\}$.
\begin{corollary}\label{comp} If $\theta_1,\ldots,\theta_m $ are rationally independent, then $\liminf\limits_{t \rightarrow +\infty} |v_t |=0$. \end{corollary} \begin{proof} For any $j\in\{1,\ldots,m\}$ notice that $d_j:=\sqrt{\alpha_j^2+\beta_j^2}>0$, and let $\cos(\gamma_j) =\nicefrac{\alpha_j}{d_j}$ and $\sin(\gamma_j) =\nicefrac{\beta_j}{d_j}$. Then $v_t$ can be rewritten as $v_t=\sum\limits_{j=1}^{m}d_j\cos(2\pi \theta_j t-\gamma_j)$. Let $\gamma=-(\frac{\gamma_1}{2\pi}, \ldots, \frac{\gamma_m}{2\pi})$ be in the $m$-dimensional torus $(\mathbb{R}/\mathbb{Z})^m$. Then the set $\{(\gamma+(\theta_1 t, \ldots, \theta_m t))\in (\mathbb{R}/\mathbb{Z})^m, t \in \mathbb{N}\}$ is dense in $(\mathbb{R}/\mathbb{Z})^m$; for more details see Corollary 4.2.3 of \cite{MK}. Consequently, $\liminf\limits_{t\rightarrow +\infty}|v_t|=0$. \end{proof} \section{Examples}\label{examples} In this section, we consider the celebrated Brownian oscillator \begin{equation}\label{unifm} \ddot{x}_t+\gamma \dot{x}_t+\kappa x_t=\epsilon \dot{B}_t \quad \textrm{ for any } t\geq 0, \end{equation} where $x_t$ denotes the position at time $t$ of a mass $m$ with respect to its equilibrium position, $\gamma>0$ denotes the damping constant, $\kappa>0$ denotes the restoration constant (Hooke's constant) and $(B_t:t\geq 0)$ is a Brownian motion. {For each initial displacement from the equilibrium position $x_0=u$ and initial velocity $\dot{x}_0=v$ we have a unique solution of \eqref{unifm}.} For further details see Chapter $8$ in \cite{Mao}. Without loss of generality we can assume that the mass $m$ is one. Using the classical {\it{forward difference approximation}} with the step size $h>0$ (fixed), we obtain \[ \frac{1}{h^2} (x_{(n+2)h}-2x_{(n+1)h}+x_{nh})+\frac{\gamma}{h}(x_{(n+1)h}-x_{nh}) +\kappa x_{nh} = \frac{\epsilon}{h}(B_{(n+3)h}-B_{(n+2)h}) \] for any $n\in \mathbb{N}_0$ with the initial data $x_0=u$ and $x_h=x_0+ \dot{x}_0h= u+vh$.
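The rearrangement of the forward-difference scheme into a second-order linear recurrence can be double-checked numerically. The sketch below uses arbitrary illustrative values of $\gamma$, $\kappa$, $h$ and of the data (deterministic part only, not part of the argument): it solves the scheme for $x_{(n+2)h}$ and compares the result with the coefficients $(2-\gamma h)$ and $-(1-\gamma h+\kappa h^2)$.

```python
# Illustrative values (deterministic part of the scheme only).
gamma, kappa, h = 0.5, 0.3, 0.1
x0, x1 = 1.0, 0.8  # two consecutive positions x_{nh}, x_{(n+1)h}

# Solve (1/h^2)(x2 - 2*x1 + x0) + (gamma/h)(x1 - x0) + kappa*x0 = 0 for x2.
x2_scheme = 2.0 * x1 - x0 - gamma * h * (x1 - x0) - kappa * h ** 2 * x0

# Coefficients of the resulting second-order recurrence.
x2_recurrence = (2.0 - gamma * h) * x1 - (1.0 - gamma * h + kappa * h ** 2) * x0

print(abs(x2_scheme - x2_recurrence))
```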
For consistency, let $X_t=x_{th}$ for any $t\in \mathbb{N}_0$. The latter can be rewritten as \begin{equation}\label{nusc} X_{t+2}=\left( 2-\gamma h\right) X_{t+1} - \left( 1-\gamma h +\kappa h^2\right) X_{t} + \epsilon h(B_{(t+3)h}-B_{(t+2)h}) \quad \textrm{for any } t\in \mathbb{N}_0. \end{equation} Notice that $(B_{(t+3)h}-B_{(t+2)h}:t\in \mathbb{N}_0)$ is a sequence of i.i.d. Gaussian random variables with zero mean and variance $h$. Therefore \begin{equation*} X_{t+2}=\left( 2-\gamma h\right) X_{t+1} - \left( 1-\gamma h +\kappa h^2\right) X_{t} + \epsilon h^{\nicefrac{3}{2}}\xi_{t+2}\quad \textrm{for any } t\in \mathbb{N}_0, \end{equation*} where $(\xi_{t+2}:t\in \mathbb{N}_0)$ is a sequence of i.i.d. random variables with standard Gaussian distribution. This is exactly a linear recurrence of degree $2$ with control sequence $(\epsilon h^{\nicefrac{3}{2}} \xi_{t+2} : t\in \mathbb{N}_0)$, and its characteristic polynomial is given by \begin{equation}\label{exchara} \lambda^2+(\gamma h -2)\lambda+(1-\gamma h + \kappa h^2). \end{equation} To fulfill assumption \eqref{H}, we deduce the following conditions. \begin{itemize} \item[i)] If $\gamma^2-4\kappa>0$, then polynomial \eqref{exchara} has two distinct real roots. In this case a sufficient condition for \eqref{H} is $h\in (0,\nicefrac{2}{\gamma})$. \item[ii)] If $\gamma^2-4\kappa=0$, then polynomial \eqref{exchara} has a repeated real root. In this case \eqref{H} is equivalent to $h\in (0,\nicefrac{\gamma}{\kappa})$. \item[iii)] If $\gamma^2-4\kappa<0$, then polynomial \eqref{exchara} has two complex conjugate roots. In this case \eqref{H} is equivalent to $h\in (0,\nicefrac{\gamma}{\kappa})$. \end{itemize} In other words, there exists $h^*\in (0,1)$ such that for each $h\in (0,h^*)$ the characteristic polynomial \eqref{exchara} satisfies assumption \eqref{H}. From here until the end of this section, we assume that $h\in (0,h^*)$.
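The stability region in $h$ can be explored numerically. The following sketch uses the illustrative parameters $\gamma=1$, $\kappa=2$ (so that $\gamma^2-4\kappa<0$ and the roots are complex conjugate, an assumption of this example): it computes the largest root modulus of \eqref{exchara} and confirms that \eqref{H} holds precisely for $h\in(0,\nicefrac{\gamma}{\kappa})$ in this case.

```python
import cmath

# Illustrative damped-oscillator parameters with complex conjugate roots.
gamma, kappa = 1.0, 2.0  # gamma^2 - 4*kappa < 0

def largest_root_modulus(h):
    # Roots of lambda^2 + (gamma*h - 2)*lambda + (1 - gamma*h + kappa*h^2).
    b = gamma * h - 2.0
    c = 1.0 - gamma * h + kappa * h ** 2
    disc = cmath.sqrt(b * b - 4.0 * c)
    return max(abs((-b + disc) / 2.0), abs((-b - disc) / 2.0))

# For complex roots, |lambda|^2 = 1 - gamma*h + kappa*h^2, so (H) holds
# if and only if kappa*h^2 < gamma*h, i.e. h < gamma/kappa = 0.5 here.
print([round(largest_root_modulus(h), 4) for h in (0.1, 0.4, 0.6)])
```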
Now, we compute $r$, $l$, $v_t$ and $\mathcal{C}$ which appear in Lemma \ref{lya}. Let $\lambda_1$ and $\lambda_2$ be the roots of \eqref{exchara}. Denote $r_1=\|\lambda_1\|$ and $r_2=\|\lambda_2\|$. Recall that the function $\mathrm{sign}(\cdot)$ is defined over the domain $\mathbb{R}\setminus\{0\}$ by $\mathrm{sign}(x)=\nicefrac{x}{|x|}$. We assume that $(x_0,x_1)\not= (0,0)$. We analyze as far as possible when the conditions of Theorem \ref{main} are fulfilled for the model \eqref{nusc}. \begin{itemize} \item[i)] {\bf{Real roots with different absolute values.}} $\lambda_1$ and $\lambda_2$ are real and $r_1 \not= r_2$. In this case, \[x_t = c_1 \lambda_1^t+c_2 \lambda_2^t\quad \textrm{ for any } t\in \mathbb{N}_0,\] where $c_1$ and $c_2$ are unique real constants determined by the initial data $x_0,x_1$. Since $(x_0,x_1)\not= (0,0)$, we have $(c_1,c_2)\not=(0,0)$. Without loss of generality assume that $r_1>r_2$. \begin{itemize} \item[i.1)] If $c_1\neq 0$ then \[ \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{r_1^t}-c_1(\mathrm{sign}(\lambda_1))^t\right|=0. \] \item[i.2)] If $c_1=0$ then $c_2\neq 0$. Therefore \[ \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{r_2^t}-c_2(\mathrm{sign}(\lambda_2))^t\right|=0. \] \end{itemize} Consequently, $\mathcal{C}=\mathbb{R}^2\setminus\{(0,0)\}$. \item[ii)] {\bf{Real roots with the same absolute value.}} $\lambda_1$ and $\lambda_2$ are real and $r \coloneqq r_1 = r_2$. \begin{itemize} \item[ii.1)] If $\lambda_1=\lambda_2=r\mathrm{sign} (\lambda_1)$ then \[ x_t=c_1 r^t(\mathrm{sign} (\lambda_1))^t+c_2 t r^t(\mathrm{sign} (\lambda_1))^t \quad\textrm{ for any } t\in \mathbb{N}_0,\] where $c_1$ and $c_2$ are unique real constants determined by the initial data $x_0,x_1$. Since $(x_0,x_1)\not= (0,0)$, we have $(c_1,c_2)\not=(0,0)$. Then \begin{itemize} \item[ii.1.1)] If $c_2\neq 0$ then \begin{equation*} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{tr^t}-c_2(\mathrm{sign}(\lambda_1))^t\right|=0.
\end{equation*} \item[ii.1.2)] If $c_2 = 0$ then $c_1\neq 0$. Therefore \begin{equation*} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{r^t}-c_1(\mathrm{sign}(\lambda_1))^t\right|=0. \end{equation*} \end{itemize} Consequently, $\mathcal{C}=\mathbb{R}^2\setminus\{(0,0)\}$. \item[ii.2)] If $\lambda_1\neq\lambda_2$ then \[ x_t=c_1 r^t+c_2 (-r)^t \quad\textrm{ for any } t\in \mathbb{N}_0,\] where $c_1$ and $c_2$ are the unique real constants determined by the initial data $x_0,x_1$. Therefore \begin{equation*} \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{r^t}- (c_1+c_2(-1)^t)\right|=0. \end{equation*} Consequently, \begin{align*} \mathcal{C}=&\{(x_0,x_1)\in \mathbb{R}^2:c_1+c_2\neq0 \textrm{ and } c_1-c_2\neq 0\}\\ =&\{(x_0,x_1)\in \mathbb{R}^2:x_0\neq 0 \textrm{ and } x_1 \neq 0\}. \end{align*} \end{itemize} \item[iii)] {\bf{Complex conjugate roots.}} Since the coefficients of the characteristic polynomial are real, if $\lambda$ is a root of the polynomial, then its conjugate $\overline{\lambda}$ is also a root. We can assume that $\lambda_1=re^{i2\pi\theta}$ and $\lambda_2=re^{-i2\pi\theta}$ with $r\in (0,1)$ and $\theta \in (0,1)\setminus \{\nicefrac{1}{2}\}$. In this setting \[x_t=c_1r^t \cos(2\pi\theta t)+c_2r^t \sin(2\pi\theta t) \quad\textrm{ for any } t\in \mathbb{N}_0, \] where $c_1$ and $c_2$ are the unique real constants determined by the initial data $x_0,x_1$. Thus \[ \lim\limits_{t\rightarrow +\infty}\left|\frac{x_t}{r^t}-(c_1 \cos(2\pi\theta t)+c_2 \sin(2\pi\theta t))\right|=0. \] Since $(x_0,x_1)\not= (0,0)$, we have $(c_1,c_2)\not=(0,0)$. Let $c=\sqrt{c_1^2+c_2^2}$, $\cos(\gamma) =\nicefrac{c_1}{c}$ and $\sin(\gamma) =\nicefrac{c_2}{c}$. Consequently, \[v_t:= c_1 \cos(2\pi\theta t)+c_2 \sin(2\pi\theta t) =c\cos(2\pi\theta t -\gamma) \quad\textrm{ for any } t\in \mathbb{N}_0.\] Observe that $\gamma$ depends on the initial data $x_0$ and $x_1$. Let us analyze under which conditions on $x_0$ and $x_1$ we have $\liminf\limits_{t\rightarrow+\infty}|v_t|>0$. 
\begin{itemize} \item[iii.1)] If $\theta$ is a rational number, then the sequence $(\cos(2\pi\theta t-\gamma):t\in \mathbb{N}_0)$ takes a finite number of values. Notice that there exists $t_0\in \mathbb{N}_0$ with $\cos(2\pi\theta t_0-\gamma)=0$ if and only if $2\pi\theta t_0-\gamma = \nicefrac{\pi}{2}+k\pi$ for some $k\in \mathbb{Z}$. Therefore, $\liminf\limits_{t\rightarrow+\infty}|v_t|>0$ if and only if $2\pi\theta t-\gamma\not=\nicefrac{\pi}{2}+k\pi$ for any $t\in\mathbb{N}_0$ and $k\in\mathbb{Z}$, and hence \begin{equation*} \mathcal{C}=\{(x_0,x_1)\in \mathbb{R}^2: 2\pi\theta t-\gamma\not=\frac{\pi}{2} + k\pi \quad\textrm{ for any } t\in\mathbb{N}_0,~ k\in\mathbb{Z}\}. \end{equation*} \item[iii.2)] If $\theta$ is an irrational number, then by Corollary 4.2.3 of \cite{MK} the set $\{(\theta t-\nicefrac{\gamma}{2\pi})\in\mathbb{R}/\mathbb{Z}:t\in \mathbb{N}_0\}$ is dense in the circle $\mathbb{R}/\mathbb{Z}$, and consequently the set $\{\cos(2\pi\theta t-\gamma): t\in \mathbb{N}_0\}$ is dense in $[-1,1]$. Therefore, for any $\gamma$ we have $\liminf\limits_{t\rightarrow+\infty}|v_t|=0$, which implies $\mathcal{C}=\emptyset$. \end{itemize} \end{itemize}
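The dichotomy in case iii) can be illustrated numerically; $\theta$ and the phase below are illustrative values (the variable \texttt{gamma\_ph} plays the role of the phase $\gamma$ fixed by the initial data).

```python
import numpy as np

# Sketch of case iii): rational vs irrational rotation number theta.
# theta and gamma_ph are illustrative values.
theta_irr = np.sqrt(2.0) - 1.0   # irrational rotation number
gamma_ph = 0.7
t = np.arange(200_000)

# irrational theta: cos(2 pi theta t - gamma) is dense in [-1, 1],
# so |v_t| comes arbitrarily close to 0 and liminf |v_t| = 0
v_irr = np.abs(np.cos(2.0 * np.pi * theta_irr * t - gamma_ph))
min_irr = v_irr.min()  # essentially zero

# rational theta = 1/5: the sequence is 5-periodic, hence takes only
# finitely many values (t % 5 avoids floating-point drift at large t)
theta_rat = 1.0 / 5.0
v_rat = np.cos(2.0 * np.pi * theta_rat * (t % 5) - gamma_ph)
n_values = np.unique(v_rat).size  # exactly 5 distinct values here
```

For this (generic) choice of phase none of the five values vanishes, so $\liminf_t|v_t|>0$ in the rational case, while the irrational case drives $\min_t|v_t|$ to zero, as in iii.2).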
\section{Introduction} The interplay between magnetism and metallicity in the copper oxygen plane structure of the cuprates is at the origin of the great interest which has been devoted to their high temperature superconductivity. Such an interplay has also been found recently in the lamellar cobaltates Na$_{x}$CoO$_{2}$, which further display a high thermoelectric power together with good electronic conductivity \cite{Terasaki}. In both systems metallicity is induced by doping the square CuO$_{2}$ or triangular CoO$_{2}$ layers by insertion of dopants in the ionic separation layers. Soon after the discovery of the cuprate superconductivity, it was suggested that the electron gas could display intrinsic inhomogeneous charge structures, such as linear ''stripes'' of Cu$^{2+}$ spins separated by doped metallic stripes \cite{Emery}. However, such static structures have only been found in specific cases \cite{Tranquada}, and the magnetic properties of the Cu sites have usually been found rather homogeneous. A distinct property of the Co ions in the large crystal field induced by their oxygen octahedral environment in the CoO$_{2}$ structure is that the $t_{2g}$ triplet of the Co site is much lower in energy than the $e_{g}$ doublet, so that the electronic structure of the Co ions is expected to correspond to the low spin configurations Co$^{3+}$ ($S=0$) or Co$^{4+}$ ($S=1/2$) obtained by filling only the $t_{2g}$ triplet states. Consequently, ordered charge structures are expected more frequently than for the cuprates. Indeed, using the quasi-unique site sensitivity of NMR techniques, a Na atomic order \cite{NaPaper} and an associated Co ordered charge disproportionation (OCD) \cite{CoPaper} have been revealed for a specific metallic $x_{0}\approx 0.7$ composition, which displays a Curie-Weiss susceptibility. 
However, for the metallic cobaltate antiferromagnetic (AF) phases found for $x\geqslant 0.75$ \cite{Sugiyama,Mendels}, neutron scattering experiments have been analyzed in a uniform local moment picture \cite{Boothroyd,Bayrakci}. This is unexpected for hole doping of Na$_{1}$CoO$_{2}$, which is a band insulator built from the non magnetic Co$^{3+}$ state \cite{Lang}, for which the $t_{2g}$ multiplet is filled by the six $d$ electrons. Here we present the first systematic effort to study with local probes the variation with $x$ of the magnetic properties, and demonstrate that the OCD is generic for $x>0.65$, including the AF phase with $T_{N}=$22~K. This OCD always results in non magnetic Co$^{3+}$ and a second type of sites, with a formal valence of about 3.5, responsible for the peculiar magnetic properties. We establish that the latter display above 100~K a generic nearly ferromagnetic behaviour for four phases displaying distinct ordered Na structures, so that the OCD is an essential ingredient to explain the magnetic properties of these high $x$ phases. Surprisingly, the low $T$ ground state magnetic properties differ markedly as, apart from the AF phase, the former $x_{0}\approx 0.7$ phase appears to be an experimental realization of a nearly ferromagnetic 2-dimensional metal without static magnetic order down to $T=0$. The hole doped AF phases have been shown to be ferromagnetic in plane and AF between planes (A-type AF) \cite{Boothroyd,Bayrakci}. So, the Na atomic order and the OCD of Co appear essential as well in governing the low energy modifications of the band structure of the correlated metallic state, which drive the coupling between Co planes and the low $T$ magnetic and thermoelectric properties \cite{Ong}. \begin{figure}[tbp] \onefigure[width=1\linewidth]{FigXray.eps}% \caption{Hexagonal (P6$_3$/mmc N$^\circ$ 194) indexation of part of the X ray diffraction patterns. 
At these high angle values the Cu K$\alpha$1 and K$\alpha$2 lines split the reflections into two separate peaks with intensity ratio 2/1. (*) In the 0.71 case an additional structural splitting separates the hexagonal 110 (and 112) reflections into two distinct 310 and 020 orthorhombic (Ccmm N$^\circ$ 63) peaks.}% \label{FigXray} \end{figure} \section{Samples} We have synthesized our samples by standard solid state reaction of Na$_{2}$CO$_{3}$ and Co$_{3}$O$_{4}$ powders in flowing oxygen, with nominal concentrations $x$ increasing by increments of 2 to 3/1000 in the range $0.65\leqslant x\lesssim 0.80$. X ray powder diffraction data always exhibited the Bragg peaks corresponding to the two-layer Co structure with a hexagonal unit cell. However we systematically detected weak additional reflections indicative of three dimensional Na long range ordering. It was immediately clear that the corresponding Na order is complicated and highly dependent on Na content. For one specific composition this ordering was even found to drive an orthorhombic distortion of the average lattice, as shown for $x=0.71$ in fig.~\ref{FigXray} on a detail of the Bragg peaks of the X ray diffraction profile. From the Rietveld refinements of the X ray data summarized in table~\ref{table1} we could isolate four distinct phases for $0.65\leqslant x\lesssim 0.80$. Each phase exhibits a specific Na ordering leading to characteristic additional diffraction peaks: commensurate reflections, or incommensurate superstructure satellites with $q(b^{\ast})$ as the component of the wave vector modulation. The concentrations for which single phase samples could be stabilized fall into four distinct, narrow, non-overlapping $x$ domains, outside which the synthesized samples were found to be mixtures of these phases. In addition, multiphasing has been found to occur quite commonly for any $x$ value if no particular care was taken to homogenize the samples. 
We could however synthesize these four phases reproducibly, provided any air exposure of the samples was avoided. Confirmation that nearly pure phases could be achieved on the large mass samples ($\simeq $400~mg) required for NMR measurements has been directly obtained from $^{23}$Na and $^{59}$Co NMR data, as shown hereafter. The orderings found for these four phases always correspond to a symmetry lowering and are systematically based on an orthorhombic reference subcell (Ccmm, N$^{\circ }$63): $a_{ort}=a_{hex}\sqrt{3}$; $b_{ort}=a_{hex}$; $c_{ort}=c_{hex}$. Owing to the orthorhombic distortion for $x=0.71$, the labelling H67, O71, H72 and H75 is used hereafter and in table~\ref{table1}. The formerly studied $x_{0}\approx 0.7$ phase \cite{NaPaper,CoPaper} is in fact that with the lowest $x$ value, $x=0.67(1)$. We indeed found that any sample with larger $x$ evolves towards this limiting composition if it is kept in an insufficiently dry atmosphere. \begin{largetable}% \caption{Parameters of the studied phases. The accuracy on their difference in Na content is much better than that on $x$ itself ($\pm $ 0.01). The lattice constants $a,b,c$ corresponding to fig.~\ref{FigXray} are given in the hexagonal or orthorhombic reference cell. When determined, the incommensurate $q(b^{\ast})$ or commensurate $q(c^{\ast})$ modulations are given. The Co$^{3+}$ fraction $y$ is obtained from $^{59}$Co NMR intensity data. The 0.67 sample displays a commensurate orthorhombic superstructure $(a_{hex}\sqrt{3},\;3a_{hex},\;3c_{hex})$.}% \label{table1} 
\begin{center} \begin{tabular}{ccccccccc} Phase & $x$ & $a_{ort}/\surd{3}$ (\AA) & $a_{hex}$ or $b_{ort}$ (\AA) & $c$ (\AA) & $q(b^{\ast})$ & $q(c^{\ast})$ & $A_{eff}^{iso}$ (kG/$\mu _{B}$) & $y$ (\%) \\ \hline H67 & $0.67$ & $a_{hex}$ & 2.82920(1) & 10.9387(4) & $1/3$ & $1/3$ & 9.1(3) & 26(4) \\ O71 & $0.71$ & 2.83931(2) & 2.83031(3) & 10.8929(2) & 0.2849(1) & 0 & 8.0(3) & 40(5) \\ H72 & $0.72$ & $a_{hex}$ & 2.83651(4) & 10.8770(2) & 0.2810(1) & 0 & 7.3(3) & 37(5) \\ H75 & $\geq 0.75$ & $a_{hex}$ & 2.84162(1) & 10.8058(3) & - & - & 7.8(3) & 33(4) \\ \end{tabular} \end{center} \end{largetable} \section{Magnetic susceptibility and $^{23}$Na NMR shifts} The single crystal grains of samples of these phases were oriented in the $H_{0}=7$ Tesla NMR field within Stycast or paraffin. SQUID measurements of the macroscopic susceptibility $\chi _{m}$ taken in a 5~T field allow us to evidence that the different phases display different magnetic properties. For instance, as evidenced in fig.~\ref{fig.2}a, the low $T$ magnitude of $\chi_{m}$ decreases progressively with increasing $x$. The H75 phase is furthermore found to be the only phase in which a magnetic order is detected in low applied field. However, as minute amounts of impurities or a slight admixture of phases could spoil the bulk measurements, spectroscopic measurements with local probes better determine the susceptibility of each phase. As reported in ref.~\cite{NaPaper} on H67, the $^{23}$Na NMR is an excellent probe of both the Na order and the magnetic properties. Indeed the $^{23}$Na NMR displays distinct quadrupole splittings for the different Na sites of this structure. In view of the more complex structures of the new phases, it was no surprise to find a larger number of resolved Na sites, although with similar magnitudes of their quadrupole splittings. 
The magnetic properties of the compounds are probed at the local scale through the NMR shifts of the different Na sites resolved in the $(-\frac{1}{2}\leftrightarrow \frac{1}{2})$ transition of the $^{23}$Na spectra presented in fig.~\ref{fig.1}. There one can see that the $T=$5~K spectra are quite distinct for the four phases. For H75 a large broadening occurs in the AF state below $T_{N}=22$~K, while the spectrum of H67 is much more shifted than those of O71 and H72, which are distinct but display some overlap. The H72 batch has been found to evolve in time at room $T$ towards O71, and a slight mixture of the two pure phases might be unavoidable in the bulk samples \footnote{In various published papers for $x$ values in the range studied, e.g. \cite{Sakurai}, we could find in the susceptibility data many signs indicating that the samples were superpositions of our phases.}. Quite generally, $^{23}$Na NMR allowed us to control the phase purity of the bulk of the NMR sample, as multiphase samples display superimposed spectra, as seen in fig.~\ref{fig.1}. \begin{figure}[tbp] \onefigure[width=1\linewidth]{Fig2epl.eps}% \caption{(a) $T$-dependencies of the bulk susceptibility $\chi_{m}$ measured with a DC SQUID in a 5~T field; (b) $T$ dependence of the mean $^{23}$Na NMR shift. Identical behaviour above 100~K can be seen for the four phases, with remarkable differences at low $T$; (c) the linear variations of $K_{s}$ versus $\chi_{m}$ underline the phase purity of the samples. 
The H67 data for $T<$30~K \cite{NaPaper} have been omitted to better display those of the new phases.}% \label{fig.2} \end{figure} Let us recall, as detailed in ref.~\cite{NaPaper}, that for a field $H_{0}\parallel \alpha $, the NMR shift $K_{\beta}^{\alpha}$ of a Na atomic site $\beta$ probes the spin susceptibility $\chi _{s,i}^{\alpha }(T)$ of the neighbouring Co sites $i$ through transferred hyperfine couplings $A_{\beta ,i}^{\alpha }$, with $K_{\beta }^{\alpha }=\sum_{i}A_{\beta ,i}^{\alpha }\;\chi _{s,i}^{\alpha }(T)$. The main result found for H67, and verified as well here for the other phases, is that the $K_{\beta }^{\alpha }(T)$ variations scale with each other for all Na sites. This $T$ dependence is associated with the average $\chi _{s}^{\alpha }(T)$ of the magnetic Co sites of the structure. So, overlooking the diversity of Na sites, the first moment (or center of gravity) of the $^{23}$Na NMR signal can be written $K_{s}^{\alpha }=A_{eff}^{\alpha }\chi _{s}^{\alpha }(T)$, where $A_{eff}^{\alpha }$ is an effective hyperfine field per Co site. The $T$ variations of $K_{s}^{\alpha }$ are reported in fig.~\ref{fig.2}b, and are shown to be quite identical for $T>100$~K, with a unique Curie-Weiss $(T-\Theta)^{-1}$ variation ($\Theta\approx-80$~K). They differ markedly below 100~K, as does the SQUID data for $\chi _{m}$, the low $T$ enhancement of $\chi _{s}^{\alpha }(T)$ observed for H67 being progressively reduced for increasing $x$. The usual comparison between the SQUID and Na NMR data, displayed in the good linear $K_{s}$ versus $\chi _{m}$ plots of fig.~\ref{fig.2}c, allows us to confirm the purity of the isolated phases. The high-$T$ slopes of these plots yield similar values for the effective hyperfine coupling $A_{eff}^{iso}$ (table~\ref{table1}), which could be expected as $^{23}$Na sites are coupled with many cobalts \cite{NaPaper}. 
In all phases the anisotropy of $\chi _{s}^{\alpha }(T)$, given by that of $K_{s}^{\alpha }$, has been found $\lesssim $ $\pm $ 0.1. \begin{figure}[tbp] \onefigure[width=1\linewidth]{Fig1epl.eps}% \caption{$^{23}$Na NMR central line spectra taken at 5~K. They are quite distinct for the four nearly pure phases, with some overlap between O71 and H72 spectra. That for a sample which is clearly a mixture of H67, O71 and H72 is shown as well. $\nu_{ref}$ is the non magnetic $^{23}$Na NMR reference.}% \label{fig.1} \end{figure} In the H75 AF phase, the saturation of $K_{s}(T)$, that is $\chi _{s}^{\alpha }(T)$, seen at low $T$ in fig.~\ref{fig.2}b should be associated with the onset of AF correlations. In a uniform Heisenberg model, one would then assign the progressive increase of $\chi _{s}^{\alpha }(T)$ at low $T$ with decreasing $x$ to a decrease of $T_{N}$ and of out of plane AF coupling strength. However this primary interpretation fails as NMR data taken down to 1.4~K (and $\mu $SR to 50~mK \cite{Mendels2}), did not evidence any frozen magnetic state in the three other phases, which are then \textit{paramagnets in their ground state}, most probably metallic, as no electronic magnetic transition is detected. \begin{figure}[tbp] \onefigure[width=1\linewidth]{Fig3epl.eps}% \caption{Spectra taken with a large pulse spacing $\tau =200~\mu s$ in a spin echo sequence allowed us to isolate the narrow spectra of Co1 sites with long $T_{2}$. The broad spectra of the magnetic Co2 sites with short $T_{2}$ are obtained by subtracting the Co1 spectra from those taken with $\tau =10~\mu s$. The average shift of the Co2 sites decreases with increasing $x$ as does the $^{23}$Na shift in figs.~\ref {fig.2}b and \ref{fig.1}.}% \label{fig.3} \end{figure} \section{$^{59}$Co NMR} The difference between phases appears as well in the $^{59}$Co NMR spectra, which also display more Co sites than for H67 \cite{CoPaper}. 
\subsection{Charge disproportionation} As in H67, we identified two classes of Co sites, their $(-\frac{1}{2}\rightarrow \frac{1}{2})$ transitions being easily visualized when $H_{0}$ is applied at 54.7$^{\circ }$ with respect to the $c$ axis, the ''magic angle'' for which quadrupole effects are reduced, as seen in fig.~\ref{fig.3}. A first series, which we label the Co1 ''class'', is associated with non magnetic Co$^{3+}$, with small spin lattice $T_{1}^{-1}$ and spin spin $T_{2}^{-1}$ relaxation rates, and occurs with similar NMR shifts at low $T$ in all four phases. The more complex spectra of the fast relaxing magnetic Co sites, which we label the Co2 ''class'', are seen in fig.~\ref{fig.3} to include diverse sites with much larger shifts at low $T$ than the Co1 sites. The increase of the average shift of these Co2 with decreasing $x$ agrees perfectly with the SQUID and $^{23}$Na NMR data (fig.~\ref{fig.2}). The Co1 NMR shifts were found to vary with temperature, which is a sign that these non magnetic sites are sensing the magnetism of the Co2 sites through transferred hyperfine couplings. Indeed, in Fig.~\ref{fig.5}, we evidence that the shifts $^{59}K_{1}^{\alpha }$ of the Co1 nuclei scale linearly at low $T$ with that of $^{23}K$. The $T$ independent orbital contribution to $^{59}K_{1}^{\alpha }$ can be obtained by extrapolating this linear dependence to $^{23}K=0$ (that is, vanishing spin susceptibility $\chi _{s}(T)$). It is found isotropic and increases slightly from 1.95 to 2.05\% from H67 to H75. These values are quite comparable with the purely orbital shift of 1.95\% found for Co$^{3+}$ in the band insulator Na$_{1}$CoO$_{2}$ \cite{Lang}. The fraction $y$ of Co$^{3+}$ sites, estimated from the Co1 relative NMR intensity (corrected for $T_{2}$ decay), increases slightly, but not regularly, with $x$ (table~\ref{table1}), this overall trend being expected as all Co sites become Co$^{3+}$ for $x=1$. 
However, as we find $y<x$, the average valence of the Co2 class of sites is always $<4+$, and the OCD detected in H67 is present in all phases, \textit{including the AF ordered phase}. \begin{figure}[tbp] \onefigure[width=1\linewidth]{Fig5epl.eps}% \caption{At low $T$, the Co1 sites NMR shifts display linear variations versus the $^{23}$Na shifts $^{23}K$. For $H_{0}\parallel c$, the Co1 hyperfine couplings with the magnetic Co2 sites have similar magnitudes for all samples. For $H_{0}\perp c$ the coupling has a similar magnitude for H72 and H75 but nearly vanishes for H67 and O71. For H67 two Co1 sites are well resolved, and their shifts depart slightly below 40~K (i.e. $^{23}K>0.2\%$), as reported in ref.~\cite{Gavilano}. The upturns for small $^{23}K$ are due to the onset of Co1-Co2 site exchange due to Na motion, as discussed in the text.}% \label{fig.5} \end{figure} \subsection{Na motion} As seen in Fig.~\ref{fig.5}, we could detect the Co1 sites up to room $T$ in most phases, well above the onset of Na motion detected hereafter at $\approx $200~K from $^{23}$Na $T_{1}$ data \cite{MotionPaper}. This implies that the OCD occurs already above room $T$, contrary to the proposal of ref.~\cite{Gavilano}. However, in fig.~\ref{fig.5} significant increases of $^{59}K_{1}^{\alpha }$ with respect to the low $T$ linear $^{23}K$ dependence are observed for $T\gtrsim 150$~K, that is for $^{23}K<0.08\%$. We attribute this to the onset of Co1-Co2 site exchange, which becomes significant when Na motion begins to set in. Such site exchanges are usually easily detected in NMR spectra as they yield a progressive reduction of the line splitting between the two sites until they merge into a single exchange narrowed line at very high $T$, when the exchange rate exceeds the line splitting. Here the increase of the Co1 NMR shift merely corresponds to the onset of Co1-Co2 site exchange. 
Indeed, from ref.~\cite{CoPaper}, $K^{ab}$(Co2) is as large as $\approx $4\% above 200~K, so the tiny 0.1\%-0.2\% increase of $K^{ab}$(Co1) corresponds to a $\approx $10\% decrease of the 1.5~MHz Co1-Co2 splitting, that is, a very small exchange rate $\tau _{ex}^{-1}\approx 150$~kHz. This analysis is validated by the fact that weaker upturns of $K^{c}$(Co1) are seen for $H_{0}\parallel c$, for which $K^{c}$(Co1) and $K^{c}$(Co2) happen to differ only slightly, as seen in ref.~\cite{CoPaper}. Therefore this key observation, that the rate of exchange $\tau _{ex}^{-1}$ of Co sites is very slow with respect to expectations for electronic processes, allows us to conclude that the Co1-Co2 site exchange is connected with Na motion. This proves that \textit{the Co charge does correlate with the Na environment} (e.g. Na1 sites being on top of Co$^{3+}$). \section{$\chem{^{23}}$Na spin lattice relaxation and electron spin dynamics} To search for differences in the dynamic electronic susceptibilities of these phases we have taken extensive $^{23}$Na $T_{1}$ data. As $^{23}$Na has a spin $I=3/2$, its nuclear magnetization recovery should be given by \begin{equation} M(t)/M_{0}=1-W\exp (-6t/T_{1})-(1-W)\exp (-t/T_{1}), \end{equation} with $W=0.9$ if only the central transition has been saturated. Since this condition is impossible to fulfill strictly in experiments, $W$ has been left as an adjustable parameter, which was found to evolve between 0.9 and 0.7 depending on the sample and experimental conditions. The $T_{1}^{-1}$ data were found slightly anisotropic, i.e. $\approx $30\% larger for $H_{0}\perp c$ than for $H_{0}\parallel c$, for $T<$200~K. So, in fig.~\ref{fig.4} we only plotted the data for $H_{0}\parallel c$. Although the low $T$ variations of $\left( T_{1}T\right)^{-1}$ differ for the four phases, they do become identical above 100~K, as seen in fig.~\ref{fig.4}a. 
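As an aside, the recovery law above can be inverted numerically; the sketch below fits synthetic, noise-free data by a brute-force least-squares search. The values $T_1=5$~ms and $W=0.85$ are illustrative assumptions, not the experimental parameters.

```python
import numpy as np

# Sketch: recover T1 from synthetic data obeying the I = 3/2 recovery law
#   M(t)/M0 = 1 - W exp(-6 t/T1) - (1 - W) exp(-t/T1)
# by a brute-force least-squares search. T1_true and W_true are illustrative.
def recovery(t, T1, W):
    return 1.0 - W * np.exp(-6.0 * t / T1) - (1.0 - W) * np.exp(-t / T1)

T1_true, W_true = 5.0, 0.85      # assumed: T1 in ms, W in the 0.7-0.9 range
t = np.linspace(0.05, 40.0, 60)  # delay times (ms)
data = recovery(t, T1_true, W_true)

T1_grid = np.linspace(1.0, 15.0, 281)  # step 0.05 ms
W_grid = np.linspace(0.7, 0.9, 41)     # step 0.005
resid = [((recovery(t, T1, W) - data) ** 2).sum()
         for T1 in T1_grid for W in W_grid]
best = int(np.argmin(resid))
T1_fit = T1_grid[best // W_grid.size]
W_fit = W_grid[best % W_grid.size]
```

In practice a nonlinear least-squares routine would replace the grid search; the point here is only that $T_1$ and $W$ are jointly identifiable from the biexponential recovery.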
In fig.~\ref{fig.4}a, we assign the extra high $T$ contribution to the occurrence of Na motion, its onset taking place at distinct $T\gtrsim 200$~K for the different phases. As can be anticipated, we could check that the magnetic behaviour is not that of a Fermi liquid, especially for the H67 sample, as $\left(T_{1}T\right)^{-1}$ and $^{23}K(T)$ do not combine into a constant $R=S_{0}/[T_{1}T(^{23}K)^{2}]$, where $S_{0}=(\hbar /4\pi k_{B})(\gamma _{e}/\gamma _{n})^{2}$ is the universal Korringa ratio. In fig.~\ref{fig.4}b, $R$ is of course sample independent between 80 and 160~K and always smaller than unity, as noticed in an $x\approx 0.7$ mixed phase sample by Ihara \textit{et al.} \cite{Ishida}. This is expected if a ferromagnetic quasi-elastic peak at \textbf{q}$\approx 0$ dominates the spin excitations, as revealed by Inelastic Neutron Scattering (INS) for the H75 phase above $T_{N}$ \cite{Boothroyd}. Indeed, in such a case the \textbf{q}$\approx 0$ response enhances markedly $\chi _{s}($\textbf{q}$=0)$, that is $^{23}K_{s}$, while $\left( T_{1}T\right) ^{-1}$ is less enhanced as it probes $\chi ^{\prime \prime }($\textbf{q}$,\omega)$ at all \textbf{q} values. The identical behaviour found here above 100~K extends this result to all our Curie-Weiss phases. \begin{figure}[tbp] \onefigure[width=1\linewidth]{Fig4.eps}% \caption{$T$ variation of $(T_{1}T)^{-1}$ (a) and the normalized Korringa product $R$ (b) for the four phases. While the data are distinct below 100~K, in (c) a universal scaling between $(T_{1}T)^{-1}$ and $^{23}K$ is shown to apply. The high $T$ deviations due to Na motion and the slight low $T$ increases for H72 and O71 are discussed in the text.}% \label{fig.4} \end{figure} To characterize further the difference of magnetic excitations in the diverse ground states, we searched for relationships between $(T_{1}T)^{-1}$ and $^{23}K$, as done in fig.~\ref{fig.4}c. 
Spin fluctuation theories in nearly ferromagnetic metals are expected to give $(T_{1}T)^{-1}=aK^{n}$ \cite{Moriya-Ueda}, with $n=1$ in 3D \cite{Alloul-Mihaly}, while $n=3/2$ is expected for 2D \cite{Hatatani-Moriya,Kitagawa}. For H67, we do remarkably find an accurate scaling, with $n=1.5\pm 0.1$, that is $% (T_{1}T)^{-1}=aK^{3/2}$ over the entire range 1.5~K$<T<$300~K. This interpretation of the data implies that the Curie-Weiss temperature $\Theta$=-80~K found in $K(T)$ and $\chi(T)$ is not associated with AF in-plane couplings but results from the nearly ferromagnetic electronic band behaviour \cite{Moriya-Ueda,Hatatani-Moriya} as seen for instance in TiBe$_{2}$ \cite{Alloul-Mihaly}. Impressively this 2D \textit{ferromagnetic} scaling extends down to low $T$ with a unique $a$ for all phases \footnote{ For H72 and O71, the small deviations for $T\lesssim 5$K are not intrinsic as data for the resolved Na lines were not found to scale perfectly. Purification of these phases is required for accurate low $T$ studies.}, including the AF phase down to $T_{N}$. At this stage one might wonder whether the saturation of $^{23}K(T)$ observed below 100~K in H75 is related to AF plane to plane couplings. Such AF fluctuations that enhance $\chi ^{\prime \prime }(\mathbf{q}=\mathbf{q}% _{AF})$ should result in an increase of $(T_{1}T)^{-1}$ and $R$ with a divergence at $T_{N}$. But they are not probed by the $^{23}$Na nuclei, as the local fields induced by two adjacent Co layers cancel in the A type AF structure, as confirmed by the weak $^{23}$Na NMR shift in the N\'{e}el state (fig.~\ref{fig.1}). The $^{23}$Na $T_{1}$ only probes then the strength of the ferro fluctuations, and the perfect scaling found above proves that the main incidence of AF fluctuations is to reduce the ferro ones below 100~K. 
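The exponent extraction behind such a scaling plot can be sketched on synthetic data (the values of $a$, $\Theta$ and the temperature range below are illustrative assumptions, not the experimental points):

```python
import numpy as np

# Sketch: extract the exponent n in (T1 T)^(-1) = a K^n from synthetic
# Curie-Weiss data via a log-log fit. Theta, a and the T range are assumed.
Theta = -80.0                      # K, Curie-Weiss temperature
T = np.linspace(10.0, 300.0, 100)
K = 1.0 / (T - Theta)              # shift tracking the spin susceptibility
a, n_true = 2.0, 1.5
invT1T = a * K ** n_true           # 2D nearly-ferromagnetic scaling law

# slope of the log-log plot gives the exponent, intercept gives log(a)
n_fit, log_a = np.polyfit(np.log(K), np.log(invT1T), 1)
```

On real data the same log-log slope, fitted over the full temperature range, is what distinguishes the 2D exponent $n=3/2$ from the 3D value $n=1$.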
Comparison with the numerical results of Hatatani \emph{et al.} \cite{Hatatani-Moriya} allows us to point out that, for H67, the low $T$ increase of $\chi (\mathbf{q}=0)$ with respect to the common Curie-Weiss variation is that expected by these authors in the immediate vicinity of a ferromagnetic instability. Therefore the H67 phase \textit{appears as an ideal 2D nearly ferromagnetic metal without 3D ordering settling in at low} $T$. \section{Discussion} The four phases exhibit specific Na orderings, but similar charge disproportionation and identical ferro in plane fluctuations above 100~K, independent of the details of the OCD. It seems to us that this analogy of properties relies heavily on the occurrence of non magnetic Co$^{3+}$, as it reduces the number of hopping paths between magnetic sites with respect to a homogeneous structure. The associated decrease of bandwidth $W$ and increase of hole density on the magnetic sites magnify the role of the correlations $U$, which enhances the ferro tendency supported by LDA calculations \cite{Singh} for the uniform case. On the contrary, the ground state magnetic properties are certainly driven by the diverse Na atomic orders evidenced here, similar to those suggested, or observed \cite{Zandbergen,Roger}, for various other $x$ values. One could then expect distinct ordered magnetic states, as in the case of the $x=0.5$ phase for which an AF order inside the Co plane sets in at $T_{N}=$86~K \cite{Mendels,Gasparovic} in the absence of Co$^{3+}$ \cite{Bobroff}. But for the analogous ferro in plane couplings evidenced here for large $x$, one would always expect A type AF order to occur. In these AF phases, the 3D dispersion of the spin wave excitations found by INS has been analyzed with Heisenberg Co-Co AF coupling between planes, either with nearest \cite{Boothroyd,Bayrakci} or next nearest neighbour exchange through Na orbitals \cite{Mazin}. 
For such Heisenberg transverse couplings, an erratic evolution of $T_{N}$ versus $x$ should presumably result, depending on the actual Na order, contrary to the smooth evolution of \textit{paramagnetism} found for $x>0.65$ and the abrupt occurrence of AF above $x=0.75$. This definitely allows us to conclude that metallic magnetism is indeed responsible as well for the low $T$ states at large $x$ in these Na cobaltates. One might consider \cite{McKenzie} that a Fermi liquid state is only reached below an energy scale given by the temperature $T^{\ast }$ at which $\chi _{s}$ saturates, which increases from $\approx $1~K for H67 \cite{NaPaper} to $\approx$ 60~K for H75. The band parameters associated with Na order would then be responsible for these $T^{\ast }$ values and for the transverse couplings which drive the AF order. While some attempts have been made to take into account both correlations and the OCD \cite{Marianetti}, extensions to the diverse Na atomic orders are required to explain the evolution with $x$ of the ground state properties \footnote{After the disclosure of an earlier version of these results \cite{CondMat}, a theoretical attempt to explain them within a strong correlation approach has been made \cite{Gao}. The existence of a critical doping value for the onset of in-plane ferromagnetism has been reproduced, as well as the occurrence of Co$^{3+}$ in the presence of Na disorder, using heuristic values for the Na-Co interaction potentials.}. An important experimental aspect revealed by our thorough investigation is that, contrary to the case of the cuprates, for which dopant induced disorder is quite influential and governs many aspects of the cuprates phase diagram \cite{FRA-alloul}, the hole doping achieved in bulk cobaltate samples by insertion of ordered Na planes corresponds to rather clean situations for many Na concentrations. 
\acknowledgments We thank J.~Bobroff, F.~Bert and P.~Mendels for performing the $\mu$SR measurements on the paramagnetic phases and for helpful discussions, as well as G.~Kotliar, I.~Mazin, F.~Rullier-Albenque and D.~Singh for their stimulating interest. We acknowledge financial support from the ANR (NT05-4-41913), INTAS (04-83-3891) and RFBR (06-02-17197).
\section{Effective Spin-Spin Interactions} \label{Effective-spin-spin-interactions} In this section, we analytically derive the expression for the effective coupling $J_{ij}=g_ig_j/\omega_1$, as presented in the main text (MT). \textit{General results.---}We have introduced the effective spin-spin interaction as \begin{equation} J_{ij}=-2\sum^\infty_{n=1}\frac{g_{i,n}g_{j,n}}{\omega_{n}}, \label{eq:Jij_n} \end{equation} with the spin-resonator coupling parameters given as $g_{i,n}=g_i \sqrt{n} \int_{0}^L \cos (k_n x) f(x-x_i) dx$. Using $\omega_{n}=n\omega_{1}$, this yields \begin{eqnarray} J_{ij}&=&-\frac{2g_{i}g_{j}}{\omega_{1}} \int dx\, dx'\, f(x-x_i) f(x'-x_j) \times \nonumber \\ && \times \sum^\infty_{n=1} \cos(k_n x) \cos(k_n x') \nonumber. \end{eqnarray} The sum over $n$ gives \begin{eqnarray} &&\sum^\infty_{n=1} \cos(k_n x) \cos(k_n x') \nonumber \\ &=&\frac{1}{2} \sum^\infty_{n=1} \left[\cos(k_n(x-x'))+ \cos(k_n(x+x'))\right] \nonumber \\ &=&\frac{1}{4} \sum_{k\in \mathbb{Z}} \left[\delta\!\left(\frac{x-x'}{2L}-k\right)+\delta\!\left(\frac{x+x'}{2L}-k\right)\right]-\frac{1}{2}\nonumber, \end{eqnarray} where we have used the Fourier series representation of the Dirac comb, \begin{eqnarray} \sum_{n=1}^\infty\cos(k_n x)&=& \frac{1}{2}\left(\sum_{n=-\infty}^\infty e^{i n \pi x/L}-1\right)\nonumber \\ &=&\frac{1}{2}\left(\sum_{k\in \mathbb{Z}} \delta\!\left(\frac{x}{2L}-k\right)-1\right). \end{eqnarray} Given the range of integration over $x,x'$, only the $k=0$ term of the first Dirac comb contributes to $J_{ij}$, with $\delta\!\left(\frac{x-x'}{2L}\right)=2L\,\delta(x-x')$. This leads to \begin{eqnarray} J_{i,j}&=&\frac{g_i g_j}{\omega_1} \left( \int_0^L f(x-x_i) dx\int_0^L f(x-x_j) dx \right. \nonumber \\ && \left.-L \int_0^L f(x-x_i) f(x-x_j)dx \right). \end{eqnarray} Note that in the standard situation $|x_{i} - x_{j}|\gg a$ ($a$ being the spatial extent of the function $f$), the second term is negligible. Using the normalization property of $f(x-x_j)$, i.e., $\int_0^L f(x-x_i) dx = 1$, we arrive at the result presented in the main text. 
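The result can be checked numerically by truncating the mode sum \eqref{eq:Jij_n}. The sketch below uses a box-shaped envelope of width $a$ (an illustrative choice, with assumed parameter values) and recovers $J_{ij}=g_ig_j/\omega_1$ for non-overlapping envelopes:

```python
import numpy as np

# Sketch: truncate J_ij = -2 sum_n g_{i,n} g_{j,n} / omega_n for a box-shaped
# envelope f(x) = 1/a on (0, a) and a commensurate spectrum omega_n = n omega_1,
# and compare with the closed form J_ij = g_i g_j / omega_1 valid for
# non-overlapping envelopes. All parameter values are illustrative.
L_res, a = 1.0, 0.05
g_i, g_j, omega_1 = 1.0, 1.3, 2.0 * np.pi
x_i, x_j = 0.2, 0.7  # |x_j - x_i| > a, both envelopes inside (0, L)

n = np.arange(1, 200_001)   # mode cutoff for the truncated sum
k = n * np.pi / L_res
omega = n * omega_1

def envelope_overlap(x0):
    # integral of cos(k_n x) f(x - x0) dx for the box envelope
    return (np.sin(k * (x0 + a)) - np.sin(k * x0)) / (k * a)

g_in = g_i * np.sqrt(n) * envelope_overlap(x_i)
g_jn = g_j * np.sqrt(n) * envelope_overlap(x_j)
J_num = -2.0 * np.sum(g_in * g_jn / omega)
J_exact = g_i * g_j / omega_1
# J_num converges to J_exact as the mode cutoff grows
```

The truncated sum agrees with the closed form at the sub-percent level for this cutoff, and the result is independent of the chosen positions as long as the envelopes do not overlap.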
\textit{Box function.---}For a box function $f(x)=\mathds{1}_{0<x<a}/a$ (with $\mathds{1}$ denoting the indicator function), and assuming $|x_j-x_i|>a$ (together with the obvious condition $0<x_{i,j}<x_{i,j}+a<L$), the second term vanishes exactly and we obtain $J_{ij} = g_i g_j / \omega_1$, which depends neither on $a$ nor on the qubit positions. \section{Parametric Modulation of the Qubit-Resonator Coupling: Potential Advantages} In this Appendix we discuss the possibility of boosting and fine-tuning the effective spin-spin interactions $J_{ij}$ by parametrically modulating the longitudinal spin-resonator coupling. Specifically, consider the generalization of Eq.(\ref{eq:model}) with an off-resonant modulation of $g_{i,n}$ at the drive frequency $\Omega_{n}$, i.e., $g_{i,n} \rightarrow g_{i,n}\left(t\right)=A_{i,n}\cos\left(\Omega_{n}t\right)$, with $\Delta_{n}=\omega_{n} - \Omega_{n}$ \footnote{If the driving amplitudes $A_{i,n}$ are zero for all but one specific mode, one recovers (approximately) a single-mode problem \cite{harvey18SM, Royer2017SM}.}. When transforming to a suitable rotating frame and neglecting rapidly oscillating terms (in the limit $\Delta_{n}, A_{i,n} \ll \Omega_{n}$) we obtain a time-independent Hamiltonian $H$ which maps directly onto the system studied so far with the replacements $\omega_{n}\rightarrow \Delta_{n}$ and $g_{i,n}\rightarrow A_{i,n}/2$. 
Accordingly, for stroboscopic times synchronized with the \textit{detuning} parameters $\Delta_{n}$ (where $\Delta_{n} \cdot t_{p}=2\pi p_{n}$ with $p_{n}$ integer), the unitary evolution in the \textit{lab} frame reduces to Eq.(\ref{eq:unitary-stroboscopic}), up to a free evolution term $\exp[-it_{p}\sum_{n}\Omega_{n}a_{n}^{\dagger}a_{n}]$ which leaves the qubits untouched (and even reduces to the identity if $\Omega_{n} \cdot t_{p}=2\pi q_{n}$ with $q_{n}$ integer). The effective coupling then reads $J_{ij} \approx - \sum_{n} A_{i,n}A_{j,n} / (2\Delta_{n})$, the sign of which may be controlled by introducing relative phases between the driving terms \cite{harvey18SM, Royer2017SM}. Provided that parametric modulation of the qubit-resonator coupling (discussed as extension (iii) in the main text) can be implemented, it comes with the following potential advantages: (1) Here, the commensurability condition applies to the (tunable) detuning parameters $\Delta_{n}=\omega_{n}-\Omega_{n}$ rather than the bare spectrum $\omega_{n}$. Therefore, even if the bare spectrum $\omega_{n}$ of the resonator is not commensurable, periodic disentanglement of the internal qubit degrees of freedom from the (hot) resonator modes can be achieved by choosing the driving frequencies $\Omega_{n}$ appropriately. (2) The coupling $J_{ij}$ can be amplified by increasing the classical amplitudes $A_{i,n}$, provided that $A_{i,n}\ll\Omega_{n}$ for self-consistency. Moreover, $J_{ij}$ is suppressed by the detuning $\Delta_{n}$ only (rather than the frequencies $\omega_{n}$, as is the case in the static scenario). Still, the detuning should be sufficiently large in order to avoid photon-loss-induced dephasing \cite{Royer2017SM} and to keep the stroboscopic cycle time $\sim2\pi/\Delta_{n}$ sufficiently short; see below for quantitative, implementation-specific estimates. 
(3) Since the number of modes effectively contributing to $J_{ij}$ is set by the choice of which amplitudes $A_{i,n}$ are nonzero, the high-energy cut-off problem described above is well controlled. \section{Timing-Induced Errors} In this Appendix we analyze errors induced by timing inaccuracies. Limited timing accuracy leads to deviations from the ideal stroboscopic times $t_{p}$, with corresponding time jitter $\Delta t=t-t_{p}$. For example, in quantum dot systems timing accuracies $\Delta t$ of a few picoseconds have been demonstrated experimentally \cite{bocquillon13}. Here, we present analytical perturbative results that complement our numerical results as presented and discussed in the main text. Our analysis starts from the Hamiltonian given in Eq.(1) in the main text. For notational convenience we rewrite this Hamiltonian as \begin{equation} H=\sum_{i}\frac{\omega_{i}}{2}\sigma_{i}^{z}+\sum_{n}\omega_{n}a_{n}^{\dagger}a_{n}+g\sum_{n}\mathcal{S}_{n}\otimes\left(a_{n}+a_{n}^{\dagger}\right), \end{equation} with $g\mathcal{S}_{n}=\sum_{i}g_{i,n}\sigma_{i}^{z}$. The time evolution operator generated by this Hamiltonian reads in full generality \begin{eqnarray} e^{-iHt} & = & U_{z}\left(\omega_{i}t/2\right)U_{zz}\left(J_{ij}t\right)W\left(t\right),\\ W\left(t\right) & = & U_{\mathrm{pol}}e^{-it\sum_{n}\omega_{n}a_{n}^{\dagger}a_{n}}U_{\mathrm{pol}}^{\dagger}, \end{eqnarray} with the spin-dependent, multi-mode polaron transformation $U_{\mathrm{pol}}=\exp[\sum_{n}\left(g/\omega_{n}\right)\mathcal{S}_{n}\left(a_{n}-a_{n}^{\dagger}\right)]$, as well as the single-qubit $U_{z}\left(\omega_{i}t/2\right)=\exp\left[-it\sum_{i}\left(\omega_{i}/2\right)\sigma_{i}^{z}\right]$, and two-qubit gates $U_{zz}\left(J_{ij}t\right)=\exp[-it\sum_{i<j}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}]$, respectively. 
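The decomposition above can be illustrated with a small exact-diagonalization sketch. The snippet below uses purely illustrative assumptions not taken from the text: a single mode, two qubits with splittings $\omega_{i}=0$, a vacuum resonator state, and made-up coupling values. It checks that at the stroboscopic time $t_{p}=2\pi/\omega_{1}$ the qubits disentangle from the mode (i.e., the residual unitary acts trivially) and acquire exactly the two-qubit phase corresponding to $J_{12}=-2g_{1}g_{2}/\omega_{1}$, while at $t_{p}/2$ they remain entangled with the resonator.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch: single mode, two qubits, qubit splittings set to zero,
# vacuum resonator state. Checks stroboscopic disentanglement and the phase
# imprinted by J_12 = -2 g1 g2 / omega.
omega, g1, g2 = 1.0, 0.1, 0.1
nfock = 30                                             # Fock-space truncation

a = np.diag(np.sqrt(np.arange(1, nfock)), 1)           # truncated annihilation op
id2 = np.eye(2)
sz = np.diag([1.0, -1.0])
kron3 = lambda A, B, C: np.kron(np.kron(A, B), C)

H = (omega * kron3(id2, id2, a.T @ a)
     + kron3(g1 * sz, id2, a + a.T) + kron3(id2, g2 * sz, a + a.T))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
vac = np.zeros(nfock); vac[0] = 1.0
psi0 = np.kron(np.kron(plus, plus), vac)

def reduced_qubit_state(psi):
    M = psi.reshape(4, nfock)                          # qubit x resonator indices
    return M @ M.conj().T                              # trace out the mode

tp = 2 * np.pi / omega
rho_p = reduced_qubit_state(expm(-1j * H * tp) @ psi0)
purity_strobo = np.trace(rho_p @ rho_p).real           # ~ 1: W(t_p) acts trivially

rho_h = reduced_qubit_state(expm(-1j * H * tp / 2) @ psi0)
purity_half = np.trace(rho_h @ rho_h).real             # < 1: qubits entangled with mode

# per spin sector s1,s2 the exact phase is exp(+i (g1 s1 + g2 s2)^2 t_p / omega),
# which equals U_zz(J_12 t_p) with J_12 = -2 g1 g2 / omega up to a global phase
s = np.array([1.0, -1.0])
G = (g1 * s[:, None] + g2 * s[None, :]).ravel()
target = 0.5 * np.exp(1j * G**2 * tp / omega)
fidelity = (target.conj() @ rho_p @ target).real
print(purity_strobo, purity_half, fidelity)
```

Since $H$ is diagonal in the $\sigma^{z}$ basis, each spin sector is an exactly solvable displaced oscillator, which is why a modest Fock cutoff suffices here.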
While for stroboscopic times $W\left(t_{p}\right)=\mathds1$, as discussed extensively in the main text, for non-stroboscopic times ($t=t_{p}+\Delta t$) generically $W\left(t\right)$ will entangle the qubit and resonator degrees of freedom, with $W\left(t_{p}+\Delta t\right)=W\left(\Delta t\right)$, thereby reducing the overall gate fidelity. Errors due to limited timing accuracy come from two sources: (i) First, as is the case for any unitary gate, there will be standard errors in the realization of single- and two-qubit gates coming from limited timing control. For example, we can decompose the two-qubit gate as $U_{zz}\left(J_{ij}t\right)=U_{zz}\left(J_{ij}\Delta t\right)U_{zz}\left(J_{ij}t_{p}\right)$, where $U_{zz}\left(J_{ij}t_{p}\right)$ refers to the desired target gate and $U_{zz}\left(J_{ij}\Delta t\right)$ results in undesired contributions. The latter will be small provided that the random phase angles are small, i.e., $\Delta\gamma_{ij}=J_{ij}\Delta t\ll1$. Accordingly, the timing control $\Delta t$ has to be fast on the time-scale set by the two-qubit interactions. A similar argument holds for the single-qubit gate $U_{z}\left(\omega_{i}t/2\right)$, which is assumed to be controlled by spin-echo techniques. (ii) Second, for non-stroboscopic times there will be errors due to the breakdown of the commensurability condition (given by $\omega_{n}t_{p}=2\pi p_{n}$ with $p_{n}$ integer); for non-stroboscopic times $W\left(t\right)$ does not simplify to the identity. This type of error is specific to our hot-gate scheme. While all errors of type (i) are fully included in our numerical calculations, in the analytical treatment presented here we focus on errors of type (ii), i.e., on the breakdown of the commensurability condition as described by the unitary $W\left(\Delta t\right)$, since these errors are specific to our (quantum-bus-based) hot-gate approach. 
Using the relation $U_{\mathrm{pol}}a_{n}U_{\mathrm{pol}}^{\dagger}=a_{n}+\left(g/\omega_{n}\right)\mathcal{S}_{n}$, we have \begin{equation} W\left(\Delta t\right)=\exp\left[-i\Delta t\sum_{n}\omega_{n}(a_{n}^{\dagger}+\frac{g}{\omega_{n}}\mathcal{S}_{n})(a_{n}+\frac{g}{\omega_{n}}\mathcal{S}_{n})\right]. \end{equation} The qubits are assumed to be initialized in a pure state, $\varrho\left(0\right)=\left|\psi\right\rangle _{0}\left\langle \psi\right|$. In the absence of errors, ideally they evolve into the pure target state defined as $\left|\psi_{\mathrm{tar}}\right\rangle =U_{z}\left(\omega_{i}t_{p}/2\right)U_{zz}\left(J_{ij}t_{p}\right)\left|\psi\right\rangle _{0}$, which comprises both the single and two-qubit gates. As discussed above, here we neglect standard errors of type (i) and set $\left|\psi_{\mathrm{tar}}\right\rangle =U_{z}\left(\omega_{i}t/2\right)U_{zz}\left(J_{ij}t\right)\left|\psi\right\rangle _{0}$ at time $t=t_{p}+\Delta t$, assuming that $\omega_{i}\Delta t,J_{ij}\Delta t\ll1$. Initially, the resonator modes are assumed to be in a thermal state, with $\rho_{\mathrm{th}}=\prod_{n}e^{-\beta\omega_{n}a_{n}^{\dagger}a_{n}}/Z_{n}$, and $\beta=1/k_{B}T$. Then, the full evolution of the coupled spin-resonator system reads \begin{eqnarray} \rho\left(t\right) & = & e^{-iHt}\varrho\left(0\right)\otimes\rho_{\mathrm{th}}e^{iHt},\\ & = & W\left(\Delta t\right)\left(\varrho\left(t\right)\otimes\rho_{\mathrm{th}}\right)W^{\dagger}\left(\Delta t\right), \end{eqnarray} where $\varrho\left(t\right)=\left|\psi_{\mathrm{tar}}\right\rangle \left\langle \psi_{\mathrm{tar}}\right|$ refers to the qubit's pure (target) density matrix at time $t$ in the case of ideal, noise-free evolution, while $\rho\left(t\right)$ gives the density matrix of the coupled spin-resonator system in the presence of errors caused by incommensurate timing. 
The fidelity of our protocol is defined as \begin{equation} \mathcal{F}\left(t\right)=\left<\psi_{\mathrm{tar}}|\mathrm{Tr}_{\mathrm{res}}\left[\rho\left(t\right)\right]|\psi_{\mathrm{tar}}\right>, \end{equation} where $\mathrm{Tr}_{\mathrm{res}}\left[\dots\right]$ denotes the trace over the resonator degrees of freedom. In order to derive a simple, analytical expression for the incommensurability-induced error $\xi_{\mathrm{timing}}=1-\mathcal{F}$, in the following we restrict ourselves to a single mode, taken to be the mode $a_{1}$ (for small errors, similar error terms due to multiple incommensurate modes can be added independently); also note that our complementary numerical results cover the multi-mode problem. Next, we perform a Taylor expansion of the undesired unitary as \begin{equation} W\left(\Delta t\right)\approx\mathds1-i\mathcal{O}_{1}-\frac{1}{2}\mathcal{O}_{1}^{2},\label{eq:Taylor-expansion-error} \end{equation} with \begin{equation} \mathcal{O}_{1}=\omega_{1}\Delta t\left[a_{1}^{\dagger}a_{1}+\frac{g}{\omega_{1}}\mathcal{S}_{1}\left(a_{1}+a_{1}^{\dagger}\right)+\Big(\frac{g}{\omega_{1}}\Big)^{2}\mathcal{S}_{1}^{2}\right]. \end{equation} This approximation is valid provided that the effective phase error is sufficiently small, that is $\Delta\varphi=\omega_{1}\Delta t||a_{1}^{\dagger}a_{1}||\ll1$; approximately $\Delta\varphi\approx\omega_{1}\Delta t\bar{n}_{\mathrm{th}}\left(\omega_{1}\right)$, where $\bar{n}_{\mathrm{th}}\left(\omega_{1}\right)$ gives the thermal occupation of the mismatched mode. 
Then, up to second order in $\Delta\varphi$, we obtain \begin{eqnarray} \rho\left(t\right) & \approx & \varrho\left(t\right)\otimes\rho_{\mathrm{th}}-i\left[\mathcal{O}_{1},\varrho\left(t\right)\otimes\rho_{\mathrm{th}}\right]\nonumber \\ & & +\mathscr{D}\left[\mathcal{O}_{1}\right]\varrho\left(t\right)\otimes\rho_{\mathrm{th}}, \end{eqnarray} where $\mathscr{D}\left[\mathcal{O}_{1}\right]\rho=\mathcal{O}_{1}\rho\mathcal{O}_{1}^{\dagger}-\frac{1}{2}\left\{ \mathcal{O}_{1}^{\dagger}\mathcal{O}_{1},\rho\right\} $ denotes the standard dissipator of Lindblad form. When tracing out the resonator degrees of freedom and computing the overlap with the ideal qubit's target state $\left|\psi_{\mathrm{tar}}\right\rangle $, the first order terms are readily shown to vanish, and the leading order terms scale as $\sim\Delta t^{2}$ (in agreement with our numerical results). Evaluating the second-order terms, we obtain a compact expression for the error given by \begin{eqnarray} \xi_{\mathrm{timing}} & \approx & \left(\omega_{1}\Delta t\right)^{2}\{\left(2\bar{n}_{\mathrm{th}}\left(\omega_{1}\right)+1\right)\left(g/\omega_{1}\right)^{2}\left(\Delta\mathcal{S}_{1}\right)^{2}\nonumber \\ & & +\left(g/\omega_{1}\right)^{4}\left(\Delta\mathcal{S}_{1}^{2}\right)^{2}\}.\label{eq:commensurability-error-wo-spin-echo} \end{eqnarray} Here, $\left(\Delta\mathcal{S}_{1}\right)^{2}=\left<\psi_{\mathrm{tar}}|\mathcal{S}_{1}^{2}|\psi_{\mathrm{tar}}\right>-\left<\psi_{\mathrm{tar}}|\mathcal{S}_{1}|\psi_{\mathrm{tar}}\right>^{2}$ denotes the variance of the collective spin-operator $\mathcal{S}_{1}$ in the ideal target state $\left|\psi_{\mathrm{tar}}\right\rangle $. 
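As a consistency check, the $T=0$ limit of Eq.~(\ref{eq:commensurability-error-wo-spin-echo}) can be compared against exact numerics. The sketch below rests on illustrative assumptions not taken from the text: a single mode, two qubits with $g_{1}=g_{2}=g$ and $\omega_{i}=0$, a vacuum resonator state, and made-up parameter values. It computes the exact infidelity at $t=t_{p}+\Delta t$ and verifies both the predicted magnitude (with $(\Delta\mathcal{S}_{1})^{2}=2$ and $(\Delta\mathcal{S}_{1}^{2})^{2}=4$ for this initial state) and the quadratic scaling in $\Delta t$.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check of the T=0 limit of the timing-error formula:
# xi ~ (omega1*dt)^2 [ (g/omega1)^2 (dS1)^2 + (g/omega1)^4 (dS1^2)^2 ].
omega, g, nfock = 1.0, 0.15, 30
a = np.diag(np.sqrt(np.arange(1, nfock)), 1)
id2 = np.eye(2)
sz = np.diag([1.0, -1.0])
kron3 = lambda A, B, C: np.kron(np.kron(A, B), C)

H = (omega * kron3(id2, id2, a.T @ a)
     + g * (kron3(sz, id2, a + a.T) + kron3(id2, sz, a + a.T)))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
vac = np.zeros(nfock); vac[0] = 1.0
psi0 = np.kron(np.kron(plus, plus), vac)
s = np.array([1.0, -1.0])
G = g * (s[:, None] + s[None, :]).ravel()      # sector couplings g1*s1 + g2*s2

def infidelity(dt):
    t = 2 * np.pi / omega + dt                 # stroboscopic time plus jitter
    psi = expm(-1j * H * t) @ psi0
    M = psi.reshape(4, nfock)
    rho_q = M @ M.conj().T                     # reduced qubit state
    target = 0.5 * np.exp(1j * G**2 * t / omega)   # ideal (error-free) evolution
    return 1 - (target.conj() @ rho_q @ target).real

dt = 0.05 / omega
xi1, xi2 = infidelity(dt), infidelity(2 * dt)
# variances of S1 = sigma1^z + sigma2^z in the target state: (dS1)^2 = 2, (dS1^2)^2 = 4
xi_formula = (omega * dt)**2 * ((g / omega)**2 * 2 + (g / omega)**4 * 4)
print(xi1 / xi_formula, xi2 / xi1)
```

For these parameters the ratio to the analytical estimate is close to one, and doubling the jitter roughly quadruples the error, in line with $\xi_{\mathrm{timing}}\sim\Delta t^{2}$.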
Typically, for $\bar{n}_{\mathrm{th}}\left(\omega_{1}\right)\gg1$ and $g/\omega_{1}\ll1$ the first term will dominate the overall error and we obtain \begin{equation} \xi_{\mathrm{timing}}\approx2\left(\omega_{1}\Delta t\right)^{2}\bar{n}_{\mathrm{th}}\left(\omega_{1}\right)\left(g/\omega_{1}\right)^{2}\left(\Delta\mathcal{S}_{1}\right)^{2}. \end{equation} While the error scales linearly with the thermal occupation $\bar{n}_{\mathrm{th}}\left(\omega_{1}\right)$, it is suppressed quadratically for small phase errors $\omega_{1}\Delta t\ll1$ and weak spin-resonator coupling $g/\omega_{1}\ll1$. However, our analytical calculation is valid only provided that the Taylor expansion in Eq.(\ref{eq:Taylor-expansion-error}) is justified; again, this is the case if $\Delta\varphi=\omega_{1}\Delta t||a_{1}^{\dagger}a_{1}||\ll1$ is satisfied. Still, our analytical treatment supports and complements our numerical results in the following three ways: (i) The timing error is quadratic in the time jitter $\Delta t$, i.e., $\xi_{\mathrm{timing}}\sim\Delta t^{2}$. (ii) The timing error is linearly proportional to the effective spin-spin interaction $\sim J\sim g^{2}/\omega_{1}$; in agreement with our numerical results, (in the absence of dephasing) timing errors are suppressed for slow two-qubit gates. (iii) The timing error scales linearly with temperature $\sim k_{B}T/\omega_{1}$. \section{Engineering of Spin Models} In this Appendix we provide further details regarding the implementation of targeted, engineered spin models. Specifically, two more comments are in order: (i) For translation-invariant models, the eigenvectors of $w_{ij}$ can be written as sine and cosine waves with normalized momentum $-\pi<k_q<\pi$. In particular for long-range models, we can obtain good approximations of $w_{ij}$ using only a restricted number $\eta<N$ of cycles corresponding to the lowest spatial frequencies $k_q$. 
(ii) To satisfy the condition $w_q>0$, we can add to $w_{ij}$ a diagonal component $w_D\delta_{ij}$, which does not contribute to the dynamics, and which can also be used to improve the convergence with $\eta$. \section{Additional Numerical Results} In this section, we present additional numerical results related to the realization of a phase gate between two distant qubits and the engineering of spin models (compare Figs.~2-3 of the main text). \begin{figure} \includegraphics[width=\columnwidth]{Fig_GateSM} \caption{{\it Hot phase gate between two distant qubits.} (a-b) Total photon number $P$ for the parameters of Fig.2(a) of the main text [panel (a)], and for $T=0$ and different cutoffs $a=0.03L, 0.06L, 0.09L$ [panel (b)]. (c-d) Fidelity $\mathcal{F}$ for smaller spin-resonator coupling parameters $g_{i}$, where the maximum fidelity is reached for $p^{\star}=4$ and $p^{\star}=16$ [panels (c) and (d), respectively]. These data correspond to the analysis shown in Fig.2(d) of the MT. (e-f) Gate error $1-\mathcal{F}$ in the presence of a nonlinear term $\epsilon$ in the dispersion relation of the transmission line, for $p^\star=1$ versus time [panel (e)], and for different values of $p^\star$ at the optimal time when $1-\mathcal{F}$ is minimal [panel (f)]. Other parameters: $a=0.3L$, $T=0$. \label{fig:photon-number}} \end{figure} \textit{Total photon number.---}The total photon number in the transmission line $P=\sum_n \langle a_n^\dag a_n\rangle $ is shown in Fig.~\ref{fig:photon-number}(a), for the parameters of Fig.~2(a) of the main text. At short times, the qubits excite a number of photons ($\sim 2$ for the chosen parameter set), which add up to the thermal background. These photons are then reabsorbed perfectly at the gate time $t=\tau$. As shown in panel (b), the number of emitted photons tends to slightly decrease with increasing values of $a$. 
\textit{Fidelity.---}In panels (c-d) of Fig.~\ref{fig:photon-number} we provide further numerical results for smaller spin-resonator coupling parameters $g_{i}$, for which the maximum fidelity is reached at later times (rather than at $p^{\star}=1$, as discussed in the main text), namely for $p^{\star}=4$ and $p^{\star}=16$ [panels (c) and (d), respectively]. In all cases considered we take the ratio $g_{i}/\omega_{1}$ such that a maximally entangling gate can (in principle) be achieved at $p=p^{\star}$. Taking $g_{1}=g_{2}=g$, this is the case for $J_{12} t_{p^{\star}} = 2\pi (g/\omega_{1})^2 p^{\star} = \pi / 4$, as required for a maximally entangling gate of the form $U_{\mathrm{max}} = \exp[-i(\pi/4)\sigma_{1}^{z} \sigma_{2}^{z}]$. Since $p^{\star}$ can only take on integer values, the value of $g_{i}/\omega_{1}$ needs to be fine-tuned in order to achieve a maximally entangling gate; without fine-tuning, generically the target state will still be entangled (but not maximally entangled, even in the absence of noise). As shown in panels (c-d) of Fig.~\ref{fig:photon-number}, periodic stroboscopic cycles for integer values of $p$ can clearly be identified. For values $p^{\star}\gg1$, many small-amplitude oscillations occur before the fidelity reaches its maximum value at the nominal gate time $t_{g}=\pi/(4J_{12})$. In this parameter regime, the dynamics of $\mathcal{F}(t)$ typically features a slow (secular) large-amplitude envelope with high-frequency, small-amplitude oscillations on top; therefore, the relevant timescale for timing errors (due to timing inaccuracies $\Delta t =t - t_{g}$) is set by the interaction as $\sim 1/J_{12}$, as exemplified in Fig.~\ref{fig:photon-number}(d) for $p^{\star}=16\gg1$. 
Since the essential dynamics appear on a long timescale $\sim 1/J_{12}$, with only small changes occurring in the vicinity of $p^{\star}$, the constraints on timing errors are strongly relaxed: stroboscopic precision on a timescale $\sim 1/\omega_{n}$ is not required in order to achieve a high-fidelity gate. Rather, high-fidelity results can already be obtained in the parameter regime $\Delta t \ll \pi/(4J_{12})$. \textit{Nonlinear spectrum.---}Next, we study potential errors due to a non-linear photonic spectrum (where $\omega_{n} \neq n \omega_{1}$). Before presenting our detailed numerical results, some general comments are in order: (i) First, note that this type of error can only occur in the multi-mode setup, but is entirely absent in the single-mode regime, as could be (approximately) realized using parametric modulation of the qubit-resonator coupling \cite{harvey18SM, Royer2017SM}. (ii) Second, the commensurability condition, as specified in the main text for a linear spectrum, can be generalized to spectra for which one can find a stroboscopic time $t^{\star}>0$ (and integer multiples thereof), for which $\omega_{1} t^{\star} = 2\pi p_{1}$, $\omega_{2} t^{\star} = 2\pi p_{2}$, etc. can be satisfied for integer values $p_{1}, p_{2}, \dots$. This means that all ratios $\omega_{m} / \omega_{n} = p_{m} / p_{n}$ need to be rational numbers. Taking the ordering $\omega_{1} \leq \omega_{2} \leq \dots$, we may summarize these conditions as $\omega_{n}/\omega_{1} = p_{n} / p_{1} \in \mathbb{Q}$. Then, with $\omega_{1} t^{\star} = 2\pi p_{1}$ satisfied, all remaining equations follow as $\omega_{n} t^{\star} = (p_{n}/p_{1}) \omega_{1} t^{\star} = (p_{n}/p_{1}) 2\pi p_{1} = 2\pi p_{n}$. Therefore, given a specific spectrum $\omega_{n}$, (in principle) one may still find specific (stroboscopic) times $t^{\star}>0$ (and integer multiples thereof), for which the qubits disentangle entirely from the resonator modes, even if the spectrum is non-linear. 
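The generalized commensurability condition can be made concrete with a few lines of code. The sketch below (the rational frequency ratios are made up purely for illustration) constructs a common stroboscopic time $t^{\star}$ by clearing all denominators, and checks that every $\omega_{n}t^{\star}$ is an integer multiple of $2\pi$.

```python
from fractions import Fraction
from math import lcm, pi

# Sketch of the generalized commensurability condition for a nonlinear but
# commensurate spectrum; the ratios omega_n/omega_1 below are illustrative.
omega1 = 1.0
ratios = [Fraction(1), Fraction(3, 2), Fraction(9, 4), Fraction(10, 3)]
omegas = [omega1 * float(r) for r in ratios]

q = lcm(*(r.denominator for r in ratios))   # clear all denominators at once
tstar = 2 * pi * q / omega1                 # then omega_n * t* = 2*pi * p_n

p = [r * q for r in ratios]                 # the integers p_n = (omega_n/omega_1) * q
print(p, [w * tstar / (2 * pi) for w in omegas])
```

If any ratio is irrational, no such $q$ exists and the qubits never disentangle exactly, which is the regime probed numerically below via the nonlinearity $\epsilon$.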
Our numerical results can be found in Fig.~\ref{fig:photon-number}(e-f); here, we study the role of a nonlinear term in the dispersion relation of the transmission line, $\omega_n\to \omega_n=\omega_1 n - \delta \omega _n$, where (for concreteness) we consider a quadratic term of the form $\delta \omega_n=\epsilon \omega_1 (n-1)^2$. In panel (e), we represent the gate error versus time for $p^{\star}=1$ and different values of $\epsilon$ (see legend). Around the gate time, the modes only partially synchronize, implying a minimal gate error which increases with $\epsilon$. We further quantify these effects by representing in panel (f) the gate error (at such optimal time) as a function of $p^{\star}$, for the same values of $\epsilon$. One clearly distinguishes two limits corresponding to $\delta \omega_n t_p \ll 1$ (resp. $\gg 1$), both of which we can understand analytically, considering for simplicity the effect of the asynchronicity of the mode $n=2$ ($n=1$ is not affected by $\epsilon$), and $T=0$. First, in the perturbative limit $\delta \omega_2 t_p \ll 1$, the effect of the nonlinear term is analogous to a timing error as discussed above, with the mode asynchronicity $\delta \omega_2 t_{p^{\star}}$ replacing the timing error $\omega_1 \Delta t$ in the expression of $W(t)$. This corresponds to a gate error \begin{equation} \xi_\mathrm{nonlinear}\approx (\delta \omega_2 t_{p^{\star}})^2 (g/\omega_2)^2 (\Delta \mathcal{S}_2)^2, \end{equation} scaling thus as $\epsilon^2 p^{\star}$, as confirmed by our numerical simulations. In the opposite limit, $\delta \omega_2 t_{p^{\star}} \gg 1$, the mode asynchronicity reaches a maximum value $\delta \omega_2 t_{p^{\star}}\,(\mathrm{mod}\ 2\pi)\sim \pi$, and the error reads \begin{equation} \xi_\mathrm{nonlinear}\approx \pi^2 (g/\omega_2)^2 (\Delta \mathcal{S}_2)^2, \end{equation} scaling as $1/p^{\star}$, independently of $\epsilon$, as also seen in our numerical simulations. 
This means that, along the lines of timing errors, the effect of nonlinear terms can be reduced by increasing $p^{\star}$. \begin{figure} \includegraphics[width=\columnwidth]{Fig_SpinModels} \caption{{\it Engineering of spin models.} (a) Same as Fig.~3 MT for a spin glass with random interactions $w_{ij}$ between $[-0.5,0.5]$. (b-c) Convergence analysis where we plot the error $\epsilon\equiv ||w_{ij}^{(\eta)}-w_{ij}||_2$ versus $\eta$ and different values of $N$. \label{fig:SpinModels}} \end{figure} \textit{Engineering of spin models.---}In Fig.~\ref{fig:SpinModels}, we present additional numerical results on the engineering of spin models. In panel (a), we represent the formation of a spin glass with random interactions. In contrast to the models presented in Fig.~3 MT, one needs to implement the full spectrum, i.e., to use $\eta=N$, to obtain a faithful generation of the target matrix. The convergence of the generated matrix $w_{ij}$ with $\eta/N$ is shown in Fig.~\ref{fig:SpinModels}(b) for 1D models with nearest-neighbor interactions and with power-law decay $\alpha=1$. In both cases, we obtain a good representation of the targeted interactions for $\eta \gtrsim N/2$. Note that the convergence to nearest-neighbor (NN) interactions occurs at larger values of $\eta$ compared to the power-law case, due to the high spatial frequencies in the spectrum. As already shown in panel (a), to obtain a true spin glass model, one instead needs to implement the full spectrum of $W$, see Fig.~\ref{fig:SpinModels}(c). \section{Decoherence Analysis} In this section, we provide detailed background material related to effects due to decoherence. First, we present the master equation used in order to model decoherence in the form of qubit dephasing and resonator rethermalization. Next, we analytically derive an expression for the gate error caused by qubit dephasing. Thereafter, we numerically analyze rethermalization-induced errors. 
Finally, we show that the total error due to both (i) dephasing and (ii) rethermalization can be quantified in terms of a single cooperativity parameter. \subsection{Master Equation} \textit{Master equation.}---Within a standard Born-Markov approach, the noise processes described above can be accounted for by a master equation for the system's density matrix $\rho$ as \begin{eqnarray} \dot{\rho} & = & -i\left[H_{\mathrm{id}},\rho\right]+(\gamma_{\phi}/2)\sum_{i}\mathscr{D}\left[\sigma_{i}^{z}\right]\rho\\ & & +\sum_{n}\kappa_{n}\left(\bar{n}_{\mathrm{th}}\left(\omega_{n}\right)+1\right)\mathscr{D}\left[a_{n}\right]\rho\\ & & +\sum_{n}\kappa_{n}\bar{n}_{\mathrm{th}}\left(\omega_{n}\right)\mathscr{D}\left[a_{n}^{\dagger}\right]\rho, \end{eqnarray} where $H_{\mathrm{id}}$ describes the ideal (error-free), coherent evolution for longitudinal coupling between the qubits and the resonator mode, and $\gamma_{\phi}=1/T_{2}$ is the pure dephasing rate. The second and third lines describe rethermalization of the modes $a_{n}$ towards a thermal state with an effective rate $\sim\kappa_{n}\left(\bar{n}_{\mathrm{th}}\left(\omega_{n}\right)+1\right)\approx k_{B}T/Q_{n}$. This simple noise model is valid within the so-called approximation of independent rates of variation \cite{cohen-tannoudji92}, where the interactions with the environment are treated separately for spin and resonator degrees of freedom; in other words, they can approximately be treated as independent entities and the terms (rates of variation) due to internal and dissipative dynamics are added independently. 
While for ultra-strong coupling the qubit-resonator system needs to be treated as a whole when studying its interaction with the environment \cite{beaudoin11}, yielding irreversible dynamics through jumps between dressed states (rather than bare states), in the weak-coupling regime $\left(g_{i,n}\ll\omega_{n}\right)$ one can resort to the standard (quantum optical) dissipators given above, with $\mathscr{D}\left[a\right]\rho=a\rho a^{\dagger}-\frac{1}{2}\left\{ a^{\dagger}a,\rho\right\} $. \subsection{Dephasing-Induced Errors} \textit{Dephasing-induced errors.}---Here we provide an analytical model for dephasing-induced errors. Neglecting rethermalization-induced errors for the moment, we consider the following master equation \begin{equation} \dot{\rho}=\underset{\mathcal{L}_{0}\rho}{\underbrace{-i\left[H_{\mathrm{id}},\rho\right]}}+\underset{\mathcal{L}_{1}\rho}{\underbrace{(\gamma_{\phi}/2)\sum_{i}\mathscr{D}\left[\sigma_{i}^{z}\right]\rho}},\label{eq:Master-equation-pure-dephasing-analytical-1} \end{equation} where $H_{\mathrm{id}}$ describes the ideal (error-free), coherent evolution for longitudinal coupling between the qubits and the resonator mode, and $\gamma_{\phi}$ is the pure dephasing rate. 
Since the super-operators $\mathcal{L}_{0}$ and $\mathcal{L}_{1}$ as defined in Eq.(\ref{eq:Master-equation-pure-dephasing-analytical-1}) commute, that is $\left[\mathcal{L}_{0},\mathcal{L}_{1}\right]=0$ (since $\left[H_{\mathrm{id}},\mathscr{D}\left[\sigma_{i}^{z}\right]X\right]=\mathscr{D}\left[\sigma_{i}^{z}\right]\left[H_{\mathrm{id}},X\right]$ for any operator $X$), the full evolution simplifies to \begin{equation} \rho\left(t\right)=e^{\mathcal{L}_{1}t}e^{\mathcal{L}_{0}t}\rho\left(0\right)=e^{\mathcal{L}_{1}t}\rho_{\mathrm{id}}\left(t\right), \end{equation} where we have defined the ideal target state at time $t$ as $\rho_{\mathrm{id}}\left(t\right)=\exp\left[\mathcal{L}_{0}t\right]\rho\left(0\right)$, which, starting from the initial state $\rho\left(0\right)$, exclusively accounts for the ideal (error-free), coherent evolution. For small infidelities $\left(\gamma_{\phi}t\ll1\right)$, the deviation from the ideal dynamics $\Delta\rho=\rho-\rho_{\mathrm{id}}$ is approximately given by \begin{equation} \Delta\rho\left(t\right)\approx\gamma_{\phi}t/2 \sum_{i}\mathscr{D}\left[\sigma_{i}^{z}\right]\rho_{\mathrm{id}}\left(t\right),\label{eq:linear-dephasing-error-analytical} \end{equation} showing that (in the regime of interest where $\gamma_{\phi}t\ll1$) the dominant dephasing-induced errors are linearly proportional to $\gamma_{\phi}t_{g}\sim\gamma_{\phi}/J_{ij}$, as expected; here, $t_{g}\sim1/J_{ij}$ is the relevant gate time, which has to be short compared to $\gamma_{\phi}^{-1}$. In the following we compute the dephasing-induced error analytically. 
We define the pure qubit target state as $\left|\Psi_{\mathrm{tar}}\right\rangle $ and take the state fidelity $\mathcal{F}$ as our figure of merit, with \begin{equation} \mathcal{F}\left(t_{g}\right)=\left<\Psi_{\mathrm{tar}}|\rho_{q}\left(t_{g}\right)|\Psi_{\mathrm{tar}}\right>, \end{equation} where $\rho_{q}\left(t_{g}\right)=\mathrm{Tr}_{\mathrm{res}}\left[\rho\left(t_{g}\right)\right]=\mathrm{Tr}_{\mathrm{res}}\left[\exp\left[\mathcal{L}t_{g}\right]\rho\left(0\right)\right]$ is the state of the qubits at time $t_{g}$, with $\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{1}$ and $\mathrm{Tr}_{\mathrm{res}}\left[\dots\right]$ denoting the trace over the resonator degrees of freedom. Since the qubits ideally disentangle from the resonator modes for stroboscopic times and since $\mathcal{L}_{1}$ acts on the qubit degrees of freedom only, we find \begin{eqnarray} \rho_{q}\left(t_{g}\right) & = & e^{\mathcal{L}_{1}t_{g}}\mathrm{Tr}_{\mathrm{res}}\left[e^{\mathcal{L}_{0}t_{g}}\left|\Psi_{0}\right\rangle \left\langle \Psi_{0}\right|\otimes\rho_{\mathrm{th}}\left(0\right)\right],\\ & = & e^{\mathcal{L}_{1}t_{g}}\mathrm{Tr}_{\mathrm{res}}\left[\left|\Psi_{\mathrm{tar}}\right\rangle \left\langle \Psi_{\mathrm{tar}}\right|\otimes\rho_{\mathrm{th}}\left(0\right)\right],\\ & = & e^{\mathcal{L}_{1}t_{g}}\left|\Psi_{\mathrm{tar}}\right\rangle \left\langle \Psi_{\mathrm{tar}}\right|. \end{eqnarray} The fidelity $\mathcal{F}\left(t_{g}\right)$ can then be expressed as \begin{equation} \mathcal{F}\left(t_{g}\right)=\left<\Psi_{\mathrm{tar}}|e^{\mathcal{L}_{1}t_{g}}(\left|\Psi_{\mathrm{tar}}\right\rangle \left\langle \Psi_{\mathrm{tar}}\right|)|\Psi_{\mathrm{tar}}\right>. 
\end{equation} In the regime of interest (with small infidelities) we can approximate the error $\xi\left(t_{g}\right)=1-\mathcal{F}\left(t_{g}\right)$ as \begin{equation} \xi\left(t_{g}\right)=-t_{g}\left<\Psi_{\mathrm{tar}}|\mathcal{L}_{1}(\left|\Psi_{\mathrm{tar}}\right\rangle \left\langle \Psi_{\mathrm{tar}}\right|)|\Psi_{\mathrm{tar}}\right>. \end{equation} With $\mathcal{L}_{1}$ as defined in Eq.(\ref{eq:Master-equation-pure-dephasing-analytical-1}) this leads to the compact expression \begin{equation} \xi\left(t_{g}\right)=\gamma_{\phi}t_{g}/2\sum_{i}\left\{ 1-\left|\left<\Psi_{\mathrm{tar}}|\sigma_{i}^{z}|\Psi_{\mathrm{tar}}\right>\right|^{2}\right\} . \end{equation} Accordingly, we only need to evaluate the expectation values of $\sigma_{i}^{z}$ in the ideal target state in order to estimate the dephasing-induced fidelity error. Specifically, for $\left|\Psi_{\mathrm{tar}}\right\rangle =\exp[-i t_{g}\sum_{i<j}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}]\left|\Psi_{0}\right\rangle $ it is sufficient to compute the expectation values of $\sigma_{i}^{z}$ in the initial state $\left|\Psi_{0}\right\rangle $, because \begin{equation} \xi\left(t_{g}\right)=\gamma_{\phi}t_{g}/2\sum_{i}\left\{ 1-\left|\left<\Psi_{0}|\sigma_{i}^{z}|\Psi_{0}\right>\right|^{2}\right\} . \end{equation} For qubits initialized in the $x-y$ plane, e.g., $\left|\Psi_{0}\right\rangle = \otimes_{j=1}^{N}(\left|0\right\rangle _{j}+i\left|1\right\rangle _{j})/\sqrt{2}$, the expectation values $\left<\Psi_{0}|\sigma_{i}^{z}|\Psi_{0}\right>=0$ vanish and we arrive at a (conservative) estimate of \begin{equation} \xi\left(t_{g}\right) \approx (N\gamma_{\phi}/2) t_{g}, \label{eq:error_dephasing} \end{equation} with $N$ the number of qubits, and $\gamma_{\mathrm{eff}} = N\gamma_{\phi}/2$ the effective many-body dephasing rate. As expected, the error grows linearly with the gate time $\sim t_{g}/T_{2}$. 
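The linear estimate of Eq.~(\ref{eq:error_dephasing}) can be cross-checked against the exact action of $e^{\mathcal{L}_{1}t_{g}}$, which for independent dephasing reduces to a per-qubit phase-flip channel with flip probability $p=(1-e^{-\gamma_{\phi}t_{g}})/2$. The sketch below does this for $N=2$ qubits initialized in the $x$-$y$ plane; the values of $\gamma_{\phi}$, $t_{g}$ and $J_{12}$ are illustrative, not taken from the text.

```python
import numpy as np

# Illustrative check of xi ~ (N*gamma_phi/2)*t_g for N = 2 qubits.
gamma_phi, tg, N = 0.02, 0.5, 2

# target state: diagonal gate U_zz applied to |psi_0> = |+i>|+i>, so <sigma_z> = 0
plus_i = np.array([1.0, 1.0j]) / np.sqrt(2)
psi0 = np.kron(plus_i, plus_i)
J12 = 0.3                                    # illustrative two-qubit coupling
Uzz = np.diag(np.exp(-1j * J12 * tg * np.array([1.0, -1.0, -1.0, 1.0])))
psi_tar = Uzz @ psi0

# exp(L1*t) = independent phase-flip channels with p = (1 - exp(-gamma_phi*t))/2
p = (1 - np.exp(-gamma_phi * tg)) / 2
sz, id2 = np.diag([1.0, -1.0]), np.eye(2)
kraus1 = [np.sqrt(1 - p) * np.kron(id2, id2), np.sqrt(p) * np.kron(sz, id2)]
kraus2 = [np.sqrt(1 - p) * np.kron(id2, id2), np.sqrt(p) * np.kron(id2, sz)]

rho = np.outer(psi_tar, psi_tar.conj())
rho = sum(K @ rho @ K.conj().T for K in kraus1)   # dephase qubit 1
rho = sum(K @ rho @ K.conj().T for K in kraus2)   # dephase qubit 2

xi_exact = 1 - (psi_tar.conj() @ rho @ psi_tar).real
xi_linear = N * gamma_phi * tg / 2
print(xi_exact, xi_linear)
```

For $\gamma_{\phi}t_{g}\ll1$ the two values agree up to corrections of order $(\gamma_{\phi}t_{g})^{2}$, consistent with the perturbative derivation above.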
{\it Numerical verifications.---}First, as demonstrated in Fig.~\ref{fig:Deco}(a), we have numerically verified the linear error scaling [compare Eq.~\eqref{eq:error_dephasing}] induced by dephasing for $N=2$ qubits and a gate time $t_g=\tau$. Second, we have numerically verified the scaling of the state error \begin{equation} \xi(t_\mathrm{run})\approx (\gamma_\phi N/(2J_{\max}))\bar{\gamma}MNd, \end{equation} for the modulated scheme applied to QAOA, see Fig.~4 MT. This is a direct consequence of Eq.~\eqref{eq:error_dephasing}, obtained for a total run time $t_g\to t_\mathrm{run}\sim \bar{\gamma}MNd/J_{\max}$ (see text). \begin{figure} \includegraphics[width=\columnwidth]{Fig_Deco} \caption{{\it Dephasing and rethermalization induced errors.} (a) Effect of dephasing for the phase gate presented in Fig.~2 MT, as a function of the dephasing rate $\gamma_\phi$. (b-d) Rethermalization-induced error $\xi=1-\mathcal{F}$ due to coupling of the ($\sim 30$) resonator modes to a thermal reservoir, for three different temperatures (light red to dark red; see legend), two different cutoff values ($a/L=0.1, 0.2$; see text in panels), and different qubit-photon coupling parameters $g_{i}=g$. The latter are set as $g/\omega_{1}=1/\sqrt{8}$ (panels b,d) and $g/\omega_{1}=1/\sqrt{32}$ (panel c), respectively. In the small-error regime of interest, the (linear) temperature dependence is well captured by the thermal occupation factor $\bar{n}_{\mathrm{th}}(\omega_{1})$, while the error is found to be independent of the coupling $g$. To simplify the numerical treatment, we considered a value $\kappa_n=\kappa$ independent of $n$. \label{fig:Deco}} \end{figure} \subsection{Rethermalization-Induced Errors} \textit{Errors for a two-qubit gate.---}We have first numerically verified that rethermalization-induced errors are independent of the qubit-resonator coupling strength $g_{i}$, as demonstrated in Fig.~\ref{fig:Deco}(b-d). 
In this case, we took into account the effect of decoherence by calculating the evolution of $100$ MPS quantum trajectories~\cite{Daley2014}. This finding can be understood from the fact that photon rethermalization leads to qubit dephasing (due to leakage of which-way information) at an effective rate $\sim g^2$ that scales quadratically with the qubit-dependent separation in phase space (i.e., the displacement amplitude), while the relevant gate time scales as $\sim 1/g^2$ \cite{Royer2017SM, schuetz17SM, rabl10SM}. Multiplying these two factors, the dependence on $g$ drops out, leading to an error that is independent of $g$, as numerically verified in Fig.~\ref{fig:Deco}(b-d). Finally, for the two values considered, we did not observe a significant effect of the cut-off value $a$ on rethermalization errors. {\it Scaling analysis for QAOA.---}We now consider the multi-qubit scenario. In Fig.~4(b) of the MT, we show a scaling analysis for QAOA in the single-mode case, which indicates that the total error can be estimated by \begin{equation} \xi(t_\mathrm{run}) \approx (\kappa (1+2\bar{n}_\mathrm{th}(\omega_0))/|\Delta|) \bar{\gamma}MNd.\label{eq:error_rethermalization} \end{equation} In order to interpret this numerical result, we first estimate the error accumulated during a cycle of duration $t_p$ implementing the component $q$ of the Hamiltonian $H_C$ (see MT). Following Refs.~\cite{Royer2017SM, schuetz17SM}, this corresponds to an error \begin{equation} \xi(t_p) \approx \kappa (1+2\bar{n}_\mathrm{th}(\omega_0))\bra{\psi}\mathscr{D}[S]\ket{\psi}\, t_p, \end{equation} with $\ket{\psi}$ denoting the (ideal) target state obtained in the absence of noise ($\kappa=0$), the collective spin operator $S=\sum_i (g_i/\Delta) \sigma_i^z$, the spin-resonator coupling $g_i=\sqrt{-\Delta J_{\max}/2}\, u_{i,q}$, and $t_p \sim \gamma_m w_q/J_{\max}$.
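For a pure state and Hermitian $S$, the collective dephasing factor $\bra{\psi}\mathscr{D}[S]\ket{\psi}$ reduces, up to an overall sign convention, to the variance of $S$. Since $S$ is diagonal in the computational basis, its decomposition into single-qubit and cross-correlation contributions can be checked in a few lines of pure Python; the couplings and states below are illustrative:

```python
import math

def sz_diag(N, i):
    """Diagonal of sigma_i^z in the N-qubit computational basis (bit 0 -> +1)."""
    return [1 - 2 * ((k >> (N - 1 - i)) & 1) for k in range(2**N)]

def variance(diag_op, psi):
    """<O^2> - <O>^2 for a diagonal operator and a state vector psi."""
    p = [abs(c)**2 for c in psi]
    mean = sum(o * pk for o, pk in zip(diag_op, p))
    mean2 = sum(o * o * pk for o, pk in zip(diag_op, p))
    return mean2 - mean**2

N, Delta = 2, -10.0
g = [0.3, 0.7]   # illustrative couplings g_i
# S = sum_i (g_i/Delta) sigma_i^z is diagonal in the computational basis
S = [sum(g[i] / Delta * sz_diag(N, i)[k] for i in range(N)) for k in range(2**N)]

# product state, both qubits in (|0>+|1>)/sqrt(2): <sigma_i^z> = 0 and the
# cross-correlation term vanishes, so Var(S) = sum_i (g_i/Delta)^2
psi = [0.5, 0.5, 0.5, 0.5]
lhs = variance(S, psi)
rhs = sum((gi / Delta)**2 for gi in g)
assert abs(lhs - rhs) < 1e-12

# GHZ-like state (|00>+|11>)/sqrt(2): the correlation term now contributes
ghz = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
corr = 2 * (g[0] * g[1] / Delta**2)   # <sz_1 sz_2> - <sz_1><sz_2> = 1 for i != j
assert abs(variance(S, ghz) - (rhs + corr)) < 1e-12
```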
The collective dephasing term $\bra{\psi}\mathscr{D}[S]\ket{\psi}$ can be written as \begin{eqnarray} &&\bra{\psi}\mathscr{D}[S]\ket{\psi}=\sum_i \frac{g_i^2}{\Delta^2}(1-\bra{\psi} \sigma_i^z\ket{\psi}^2) \nonumber \\ && +\sum_{i\neq j} \frac{g_i g_j}{\Delta^2}(\bra{\psi} \sigma_i^z \sigma_j^z \ket{\psi}\!-\!\bra{\psi} \sigma_i^z\ket{\psi}\bra{\psi} \sigma_j^z\ket{\psi}). \nonumber \end{eqnarray} The scaling of $\bra{\psi}\mathscr{D}[S]\ket{\psi}$ is in general nontrivial as it depends on the many-body structure of $\ket{\psi}$. However, our numerical results can be explained by considering that the first term dominates over the second term. This assumption is in particular valid around the initial and final times of the QAOA evolution, when $\ket{\psi}$ is approximately a product state. Considering then the worst-case scenario $\bra{\psi} \sigma_i^z\ket{\psi} \approx 0$, and using $\sum_i |u_{i,q}|^2=1$, we indeed obtain the estimate Eq.~\eqref{eq:error_rethermalization} for the accumulated error for the total QAOA evolution $t_g\to t_\mathrm{run}$. Note that for multi-qubit evolutions other than QAOA, we cannot exclude the possibility that the second term plays a role and changes the error scaling. \subsection{Cooperativity Parameter} In this section we show that the two-qubit error can be expressed in terms of a single cooperativity parameter $C$. Here, for simplicity we first consider a single resonator mode of frequency $\omega_{1}$, as could be realized based on parametric modulation of the qubit-resonator coupling \cite{harvey18SM, Royer2017SM}, with the replacement $\omega_1 \rightarrow \Delta$. \textit{Single mode setting.}---Following the main text, we consider two error sources: (i) dephasing of the qubits on a timescale $\sim T_{2}$ and (ii) rethermalization of the resonator mode with an effective decay rate $\sim\kappa\bar{n}_{\mathrm{th}}$, with $\kappa=\omega_{1}/Q$.
The gate time is given by $t_{\mathrm{coh}}=\pi/4J_{12}$, with $J_{12}=g^{2}/\omega_{1}$ (we have set $g_{i}=g$ for simplicity). As shown above, both analytically and numerically, the dephasing-induced error can be expressed as $\xi_{\gamma}=\alpha_{\gamma}\gamma_{\phi}/J_{12}$, with the pre-factor $\alpha_{\gamma}=N\pi/8$. The rethermalization-induced error can be written as $\xi_{\kappa}=\alpha_{\kappa}\left(\kappa/\omega_{1}\right)\bar{n}_{\mathrm{th}}$, as follows from multiplying the effective dephasing rate $\Gamma_{\mathrm{eff}}\sim\kappa\bar{n}_{\mathrm{th}}\left(g/\omega_{1}\right)^{2}$ with the gate time $t_{\mathrm{coh}}=\pi/4J_{12}\approx\omega_{1}/g^{2}$ \cite{schuetz17SM}; the pre-factor $\alpha_{\kappa}$ can be obtained numerically as $\alpha_{\kappa}\approx3$. In the small error regime, we can add up these two errors independently and arrive at the total error \begin{equation} \xi=\alpha_{\kappa}\frac{k_{B}T}{Q\omega_{1}}+\alpha_{\gamma}\frac{\gamma_{\phi}\omega_{1}}{g^{2}}, \end{equation} where we have used $\kappa\bar{n}_{\mathrm{th}}\approx k_{B}T/Q$. For fixed spin-photon coupling $g$, this general expression for $\xi$ can be optimized with respect to the frequency $\omega_{1}$. The optimal frequency $\omega_{1}^{\star}$ is given as \begin{equation} \omega_{1}^{\star}=\sqrt{\frac{\alpha_{\kappa}}{\alpha_{\gamma}}\frac{k_{B}Tg^{2}}{Q\gamma_{\phi}}}. \end{equation} For faster dephasing $\sim\gamma_{\phi}$, the optimal value of $\omega_{1}^{\star}$ decreases, to allow for a faster gate (since $J\sim1/\omega_{1}$), while $\omega_{1}^{\star}$ increases with larger rethermalization $\kappa\bar{n}_{\mathrm{th}}\approx k_{B}T/Q$, because the thermal occupation is then smaller.
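The optimization over $\omega_{1}$ can be cross-checked numerically. The following sketch uses SI units with $k_{B}T$ converted to a rate via $\hbar$, the pre-factors $\alpha_{\gamma}=N\pi/8$ (for $N=2$) and $\alpha_{\kappa}\approx3$, and illustrative parameter values of the kind quoted in this section ($g/2\pi=10\,$MHz, $T=1\,$K, $Q=10^{5}$):

```python
import math

hbar, kB = 1.054571817e-34, 1.380649e-23   # SI units

a_g, a_k = 2 * math.pi / 8, 3.0   # alpha_gamma (N = 2) and alpha_kappa
g = 2 * math.pi * 10e6            # g/2pi = 10 MHz
Q, T = 1e5, 1.0                   # quality factor, temperature in K

def xi(w1, gamma_phi):
    """Total error: rethermalization plus dephasing contribution."""
    return a_k * kB * T / (hbar * Q * w1) + a_g * gamma_phi * w1 / g**2

def w_opt(gamma_phi):
    """Analytic optimum omega_1^star."""
    return math.sqrt(a_k / a_g * kB * T * g**2 / (hbar * Q * gamma_phi))

for gp_Hz, lo_err, hi_err in [(0.1e3, 0.001, 0.002), (0.1e6, 0.04, 0.05)]:
    gp = 2 * math.pi * gp_Hz
    ws = w_opt(gp)
    # the analytic optimum beats a scan of nearby frequencies
    assert all(xi(ws, gp) <= xi(ws * s, gp) for s in (0.5, 0.8, 1.25, 2.0))
    # at the optimum both error contributions are equal, and
    # xi = 2*sqrt(a_k*a_g/C) with the cooperativity C = g^2*hbar*Q/(gamma_phi*kB*T)
    C = g**2 * hbar * Q / (gp * kB * T)
    assert abs(xi(ws, gp) - 2 * math.sqrt(a_k * a_g / C)) < 1e-12
    assert lo_err < xi(ws, gp) < hi_err   # ~0.1% ... ~4-5%
```

The two dephasing rates ($\gamma_{\phi}/2\pi = 0.1\,$kHz and $0.1\,$MHz) reproduce errors at the two ends of the $(0.1-4.3)\%$ range quoted in this section.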
For this optimized value of $\omega_{1}^{\star}$, the error $\xi$ simplifies to \begin{equation} \xi=\frac{2\sqrt{\alpha_{\kappa}\alpha_{\gamma}}}{\sqrt{C}}\sim\frac{1}{\sqrt{C}}, \end{equation} where we have introduced the cooperativity parameter as \begin{equation} C=\frac{g^{2}}{\gamma_{\phi}\kappa\bar{n}_{\mathrm{th}}}\approx\frac{g^{2}Q}{\gamma_{\phi}k_{B}T}. \end{equation} In essence, the parameter $C$ compares the coherent coupling $g$ with the geometric mean of the two decoherence rates $\gamma_{\phi}$ and $\kappa\bar{n}_{\mathrm{th}}$. Taking (for example) $g/2\pi\approx10\mathrm{MHz}$, $T\approx1\mathrm{K}$, $Q\sim10^{5}$ and $\gamma_{\phi}/2\pi\approx0.1\mathrm{kHz}-0.1\mathrm{MHz}$ (corresponding to $T_{2}\approx10\mu\mathrm{s}-10\mathrm{ms}$) \cite{schuetz17SM}, we obtain a cooperativity in the range $C\approx5\times10^{3}$ up to $C\approx5\times10^{6}$, yielding an overall two-qubit error $\xi$ in the range $\xi\approx\left(0.1-4.3\right)\%$. For comparison, for the implementation of the QAOA protocol the decoherence error $\xi$ is amplified by both (i) the circuit depth $M$ and (ii) the larger number of qubits $N$, by a factor $\sim M N^{3/2}$. This increase can be compensated when using optimized parameters, say $g/2\pi\approx100\mathrm{MHz}$, $T\approx100\mathrm{mK}$, and $Q\sim10^{6}$. \section{Implementation with Superconducting Qubits} In this section, we propose an implementation of our model with superconducting qubits. Our approach is based on Ref.~\cite{Didier2015}, which we extend to two qubits and to the multi-mode scenario. \begin{figure} \includegraphics[width=\columnwidth]{Fig_SCqubits} \caption{{\it Implementation of longitudinal couplings with two transmon qubits.} (a) Circuit representation of our model with the two qubits placed at the two edges of a transmission line. The connecting inductances are shown in blue. (b) Dispersion relation and (c) longitudinal couplings for $a_1=0.05L$.
The asymptotic expressions (orange and green lines) are described in the text.\label{fig:setupSC}} \end{figure} \subsection{Setup} The setup we have in mind is shown in Fig.~\ref{fig:setupSC}, with two transmon qubits (depicted in orange) placed at the two edges of the transmission line. Here, the connecting Josephson junctions (shown in blue) create a phase drop in the transmission line and lead to the desired longitudinal coupling. In the following, we show how to write the spin-boson Hamiltonian describing this implementation and how to connect it to the model presented in the main text. \subsection{Total Lagrangian} Following the quantization procedure~\cite{Blais2004,Vool2017}, we write the total Lagrangian as \begin{eqnarray} \mathcal{L} &=& \int_0^L dx \left( \frac{c \dot \Phi^2}{2} - \frac{(\partial_x \Phi)^2}{2\ell} \right) \nonumber \\ &+& \sum_{i=1,2} \Big[ E_{Jr} \cos \left( \frac{\Phi(x_i)}{\phi_0}\right) + C_{r} \frac{\dot \Phi(x_i)^2}{2} \nonumber \\ &+& \left(\frac{C_s}{2} \dot\phi_{ib}^2 + \frac{C_a}{2} \dot\phi_{ia}^2 + \frac{C_b}{2} \dot\phi_{ib}^2\right) \nonumber \\ &+& E_{Ja} \cos \left(\frac{\phi_{ia}}{\phi_0}\right)+ E_{Jb}\cos \left(\frac{\phi_{ib}}{\phi_0}\right)\Big], \end{eqnarray} with $x_1=0,x_2=L$, $\phi_0=\hbar/(2e)$, and $c$ (resp. $\ell$) the capacitance (inductance) per unit length of the transmission line. Flux quantization in the transmon loops leads to the identity \begin{eqnarray} \phi_{ia}+\phi_{ib} &=& \Phi_e+ \Phi(x_i)\equiv \delta_i, \nonumber \end{eqnarray} with $\Phi_e$ an applied external magnetic flux.
Writing $\phi_{ia,ib}=\delta_i/2\mp\phi_i$, $\phi_i=(\phi_{ib}-\phi_{ia})/2$, we obtain \begin{eqnarray} \mathcal{L} &=& \int_0^L dx \left( \frac{c \dot \Phi^2}{2} - \frac{(\partial_x \Phi)^2}{2\ell} \right) \nonumber \\ &+& \sum_{i=1,2} \Big[ E_{Jr} \cos \left( \frac{\Phi(x_i)}{\phi_0}\right) + C_{r} \frac{\dot \Phi(x_i)^2}{2} \nonumber \\ &+& \frac{C_s+C_b}{2} \left(\frac{\dot\Phi(x_i)}{2}+\dot\phi_i\right)^2 + \frac{C_a}{2} \left(\frac{\dot\Phi(x_i)}{2}-\dot\phi_i\right)^2 \nonumber \\ &+& E_{J}\cos \left(\frac{\delta_i}{2\phi_0}\right) \cos \left(\frac{\phi_i}{\phi_0}\right)\Big], \end{eqnarray} where we assumed identical junction energies \mbox{$E_{Ja}=E_{Jb}\equiv E_J/2$, and $\dot \Phi_e=0$}. We now linearize the cosine term $\propto \cos(\delta_i/2\phi_0)$ to first order in $\Phi(0),\Phi(L)\ll \phi_0$. This allows us to write the Lagrangian as \begin{equation} \mathcal{L}=\mathcal{L}_0 +\mathcal{L}_\mathrm{int}, \end{equation} with $\mathcal{L}_0$ describing the transmission line and the transmon qubits, and $\mathcal{L}_\mathrm{int}$ the coupling between them: \begin{eqnarray} \mathcal{L}_0 &=& \int_0^L dx \left( \frac{c \dot \Phi^2}{2} - \frac{(\partial_x \Phi)^2}{2\ell} \right) \nonumber \\ &+&\sum_i \frac{\mathcal{C}}{2} \dot \Phi(x_i)^2 - \frac{\Phi(x_i)^2}{2L_r} \nonumber \\ &+& \frac{C_T}{2}\dot\phi_i^2 + E_J \cos\left (\frac{\Phi_e}{2\phi_0} \right)\cos\left(\frac{\phi_i}{\phi_0}\right) \nonumber \\ \mathcal{L}_\mathrm{int} &=& - \sum_i E_J \sin\left (\frac{\Phi_e}{2\phi_0}\right) \left( \frac{\Phi(x_i)}{2\phi_0} \right)\cos\left(\frac{\phi_i}{\phi_0}\right) \nonumber \\ &+& \sum_i \frac{C'}{2} \dot \Phi(x_i) \dot \phi_i, \end{eqnarray} with $\mathcal{C}=C_{r}+C_T/4$, $L_r=\phi_0^2/E_{Jr}$ the Josephson inductance, $C_T = C_a+C_b+C_s$, and $C'=(C_s+C_b-C_a)/2$.
It is important to note that the capacitance $\mathcal{C}$ and inductance $L_r$ act as boundary conditions for the transmission line and thus control the corresponding mode structure~\cite{Bourassa2009,Malekakhlagh2017,Gely2017}. Also, the interaction Lagrangian $\mathcal{L}_\mathrm{int}$ consists of two terms, representing respectively longitudinal and transverse couplings (see below) of the transmon qubits to the transmission line. \subsection{Mode structure of the transmission line} In order to map our superconducting qubit implementation to the model presented in the main text, we diagonalize the transmission line contribution of the Lagrangian $\mathcal{L}_0$ (see also for instance Ref.~\cite{Malekakhlagh2017}) to obtain a basis of photon modes. To do so, we write the Euler-Lagrange equations for $\Phi(x,t)=u_n(x)e^{-i\omega_nt}$ \begin{eqnarray} \partial_x^2 u_n(x)+\frac{\omega_n^2}{v_p^2} u_n(x)=0, \end{eqnarray} with $v_p= 1/\sqrt{\ell c}$ the speed of light in the transmission line. Without loss of generality, we can write the mode functions as sine waves \begin{eqnarray} u_n(x)&=& \sqrt{\frac{2}{L}} \sin (k_nx-\theta_n), \end{eqnarray} with the dispersion relation $\omega_n=k_n v_p$, $\theta_n$ a real number, and with boundary conditions \begin{eqnarray} u'_n(x_i) = (-1)^i\left (\frac{1}{a_1}+k_n^2a_2 \right) u_n(x_i). \end{eqnarray} Here $a_1=L_r/\ell$ and $a_2=\mathcal{C}/c$ are two lengths, representing the effective spatial extent of the transmission line-qubit coupling (see below). We can finally rewrite the above equation in the form of the two coupled transcendental equations \begin{eqnarray} k_na_1&=& (1+a_1a_2 k_n^2) \tan(\theta_n)\label{eq:trans1} \\ k_n &=& \frac{n\pi+2\theta_n}{L} \quad (n\geq1) . \label{eq:trans2} \end{eqnarray} In the general case, the equations are solved numerically, and we discuss two asymptotic regimes below.
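Eqs.~\eqref{eq:trans1} and \eqref{eq:trans2} are readily solved by bisection for $\theta_n\in(0,\pi/2)$. The following sketch (units $L=1$, with $a_1=0.05$ and $a_2=0$, as in the numerical example of this section) also checks the low- and high-frequency asymptotics of $k_n$ and of the coupling scaling $g^z_{in}\propto u_n(x_i)/\sqrt{\omega_n}\propto\sin\theta_n/\sqrt{k_n}$:

```python
import math

def solve_modes(n_max, L=1.0, a1=0.05, a2=0.0, iters=80):
    """Solve k_n*a1 = (1 + a1*a2*k_n^2)*tan(theta_n), k_n = (n*pi + 2*theta_n)/L
    for theta_n in (0, pi/2) by bisection; returns lists of k_n and theta_n."""
    ks, thetas = [], []
    for n in range(1, n_max + 1):
        def f(theta):
            k = (n * math.pi + 2 * theta) / L
            return k * a1 - (1 + a1 * a2 * k * k) * math.tan(theta)
        lo, hi = 1e-12, math.pi / 2 - 1e-12   # f > 0 at lo, f < 0 at hi
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        theta = 0.5 * (lo + hi)
        ks.append((n * math.pi + 2 * theta) / L)
        thetas.append(theta)
    return ks, thetas

ks, thetas = solve_modes(400)
# low-frequency asymptote (k_n*a1 << 1): k_n ~ n*pi/(L - 2*a1)
assert abs(ks[0] - math.pi / 0.9) / ks[0] < 0.05
# high-frequency asymptote (k_n*a1 >> 1): k_n ~ (n+1)*pi/L
assert abs(ks[-1] - 401 * math.pi) / ks[-1] < 0.01
# coupling g_n^z ~ sin(theta_n)/sqrt(k_n): ~sqrt(n) at small n, ~1/sqrt(n) at large n
g = [math.sin(t) / math.sqrt(k) for k, t in zip(ks, thetas)]
assert abs(g[1] / g[0] - math.sqrt(2)) < 0.1         # g_2/g_1 ~ sqrt(2)
assert abs(g[-1] / g[199] - math.sqrt(0.5)) < 0.05   # g_400/g_200 ~ 1/sqrt(2)
```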
Writing $\Phi(x,t)=\sum_n \Phi_n(t) u_n(x)$, we can then write \begin{eqnarray} \mathcal{L}_0&=&\frac{c}{2} \sum_n (\dot\Phi_n^2-\omega_n^2\Phi_n^2) \nonumber \\ &+& \sum_i \frac{C_T}{2}\dot\phi_i^2 + E_J \cos\left (\frac{\Phi_e}{2\phi_0} \right)\cos\left(\frac{\phi_i}{\phi_0}\right) \nonumber \\ \mathcal{L}_\mathrm{int} &=&-E_J \sin\left (\frac{\Phi_e}{2\phi_0}\right) \sum_{n,i} u_n(x_i) \left( \frac{\Phi_n}{2\phi_0} \right)\cos\left(\frac{\phi_i}{\phi_0}\right) \nonumber \\ &+& \frac{C'}{2} \sum_{n,i} u_n(x_i) \dot \Phi_n \dot \phi_i, \end{eqnarray} where we assumed the functions $u_n(x)$ to be normalized (valid in the limit $\theta_n \ll k_n L$). \subsection{Hamiltonian description} We can now perform a Legendre transformation, writing the charge degrees of freedom as \begin{eqnarray} q_n &=& \frac{\partial\mathcal{L}}{\partial \dot \Phi_n} = c \dot \Phi_n+\sum_i \frac{C'}{2}u_n(x_i) \dot \phi_i \nonumber \\ q_i &=& \frac{\partial\mathcal{L}}{\partial \dot \phi_i} = C_T \dot \phi_i +\sum_n \frac{C'}{2}u_n(x_i) \dot \Phi_n.
\end{eqnarray} To first order in $\mathcal{L}_\mathrm{int}/\mathcal{L}_0$, i.e., assuming the capacitive energy of the coupling term ($\propto C'$) can be treated perturbatively, we obtain \begin{eqnarray} \dot \Phi_n&=& \frac{q_n }{c}-\sum_i \frac{q_i C' u_n(x_i) }{2cC_T} \nonumber \\ \dot \phi_i &=& \frac{q_i }{C_T}-\sum_n \frac{q_n C' u_n(x_i) }{2cC_T}, \end{eqnarray} and thus \mbox{$H=\sum_n q_n \dot \Phi_n +\sum_i q_i \dot \phi_i-\mathcal{L}=H_0+H_\mathrm{int}$} \begin{eqnarray} H_0 &=& \sum_n \left( \frac{q_n^2}{2c}+\frac{\omega_n^2c\Phi_n^2}{2} \right) \nonumber \\ &+& \sum_i \frac{q_i^2}{2C_T} -E_J \cos\left(\frac{\Phi_e}{2\phi_0}\right) \cos\left(\frac{\phi_i}{\phi_0}\right) \nonumber \\ H_\mathrm{int}&=&- \frac{C'}{2C_Tc} \sum_{i,n} u_n(x_i) q_n q_i \nonumber \\ &+& \frac{E_J}{2\phi_0} \sin\left(\frac{\Phi_e}{2\phi_0}\right) \sum_{i,n} u_n(x_i) \Phi_n \cos\left(\frac{\phi_i}{\phi_0}\right).\nonumber \end{eqnarray} Assuming for simplicity the transmon to be in the linear regime~\footnote{At the next order, we obtain the qubit nonlinearity.}, we can rewrite the first term as \begin{eqnarray} H_0 &=& \sum_n \left(\frac{q_n^2}{2c}+\frac{\omega_n^2c\Phi_n^2}{2} \right)+ \sum_i \left(\frac{q_i^2}{2C_T} +\frac{\omega_z^2C_T\phi_i^2}{2}\right), \nonumber \end{eqnarray} with $\omega_z = 1/\sqrt{L_T C_T}$ the qubit frequency, $L_T=\phi_0^2/E_J(\Phi_e)$, and $E_J(\Phi_e)=E_J\cos\left(\frac{\Phi_e}{2\phi_0}\right)$, which we can diagonalize in terms of harmonic oscillator operators describing the transmon and transmission line excitations \begin{eqnarray} \Phi_n &=& \sqrt{\frac{\hbar }{2c\omega_n}}(a_n+a_n^\dagger), \quad q_n = \sqrt{\frac{\hbar c\omega_n}{2}}i(a_n^\dagger-a_n) \nonumber \\ \phi_i &=& \sqrt{\frac{\hbar }{2C_T\omega_z}}(a_i+a^\dagger_i), \quad q_i = \sqrt{\frac{\hbar C_T\omega_z}{2}}i(a^\dagger_i-a_i) \nonumber, \end{eqnarray} to obtain \begin{eqnarray} H_0 &=& \sum_n \hbar \omega_n a^\dagger_n a_n + \hbar \omega_z \sum_i a^\dagger_i a_i.
\end{eqnarray} Finally, in terms of these eigenmodes, the coupling Hamiltonian reads in the $\{0_i,1_i\}$ subspace of the qubits \begin{eqnarray} H_\mathrm{int} &=&\hbar\sum_{i,n} \Omega_{i,n} (a_n^\dagger +a_n)+ \hbar\sum_{i,n} g^z_{in} \sigma_i^z (a_n^\dagger +a_n), \nonumber \\ &+&\hbar \sum_{i,n} g^y_{in} (a_i^\dagger - a_i)(a_n^\dagger - a_n)\label{eq:Hint}, \end{eqnarray} with the coupling frequencies \begin{eqnarray} \Omega_{in} &=& \frac{E_J}{2\phi_0} \sin\left(\frac{\Phi_e}{2\phi_0}\right) \sqrt{\frac{1}{2\hbar c\omega_n}}u_n(x_i) \frac{A_{1}+A_{0}}{2} \nonumber \\ g^z_{in} &=& \frac{E_J}{2\phi_0} \sin\left(\frac{\Phi_e}{2\phi_0}\right) \sqrt{\frac{1}{2\hbar c\omega_n}}u_n(x_i) \frac{A_{1}-A_{0}}{2} \nonumber \\ g^y_{in} &=&\frac{C'}{2C_Tc} \sqrt{\frac{C_T\omega_z}{2}} \sqrt{\frac{ c\omega_n}{2}}u_n(x_i), \nonumber \\ \end{eqnarray} and matrix elements $A_{s_i}=\bra{s_i}\cos(\phi_i/\phi_0)\ket{s_i}$ for the qubit operators. The first term in Eq.~\eqref{eq:Hint} is a driving term creating photons in the transmission line due to the presence of the external flux $\Phi_e$; it is absent in our model Eq.~(1) of the MT. Note, however, that this term can be eliminated using displaced bosonic operators $a_n\to a_n +\sum_i\Omega_{in}/\omega_n$. The second term represents the desired longitudinal interactions; it scales with the qubit junction energy $E_J$ and can be tuned by the external flux $\Phi_e$~\cite{Didier2015}. We discuss the multimode structure and origin of the frequency cutoff below. Finally, the last term is a transverse coupling whose strength is controlled by the different capacitances of the qubits. Interestingly, we can eliminate this term by setting $C'=0$, i.e., $C_s+C_b=C_a$.
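That the displacement removes the driving term (up to a constant energy shift) can be checked explicitly with truncated Fock-space matrices; the single-mode values of $\omega$ and $\Omega$ below are illustrative:

```python
import math

N = 6   # truncated Fock-space dimension

# annihilation operator a: a|n> = sqrt(n)|n-1>
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
adag = [[a[j][i] for j in range(N)] for i in range(N)]
I = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def lin(*pairs):
    """Linear combination sum_c c*A over (c, A) pairs."""
    return [[sum(c * A[i][j] for c, A in pairs) for j in range(N)]
            for i in range(N)]

w, Om = 2.0, 0.3
# driven single-mode Hamiltonian: H = w*adag*a + Om*(a + adag)
H = lin((w, mul(adag, a)), (Om, a), (Om, adag))
# displaced operators b = a + (Om/w)*1
b = lin((1.0, a), (Om / w, I))
bdag = lin((1.0, adag), (Om / w, I))
# H = w*bdag*b - (Om**2/w)*1 : the drive is absorbed into a constant shift
H2 = lin((w, mul(bdag, b)), (-Om**2 / w, I))
assert max(abs(H[i][j] - H2[i][j]) for i in range(N) for j in range(N)) < 1e-12
```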
\subsection{Numerical results and asymptotic expressions} To conclude our implementation, we analyse the form of the dispersion relation of the transmission line, and the scaling of the coupling term $g_{in}^z$ with respect to the mode number $n$, assuming for simplicity $C'=0$ (no transverse coupling) and $a_2/a_1 \approx 0$ (the frequency cutoff is only set by the inductance $L_r$ of the connecting junction). The dispersion relation, calculated by numerically solving Eqs.~\eqref{eq:trans1},\eqref{eq:trans2} for $a_1=0.05L$, is shown in Fig.~\ref{fig:setupSC}(b), and is close to being linear. We represent in panel (c) the corresponding coupling strengths $g_{in}^z$. At small spatial frequencies $k_n a_1\ll1$, we can linearize Eqs.~\eqref{eq:trans1},\eqref{eq:trans2} and obtain the asymptotic expressions $k_n\approx \pi n/(L-2 a_1)$ and $g^z_{in} \propto \sqrt{n}$. Similarly, at high frequencies, $k_n a_1\gg1$, we have instead $k_n\approx (n+1)\pi/L$, $g_{in}^z \propto 1/\sqrt{n}$. These asymptotic expressions for $k_n$ and $g_{in}^z$ are shown as orange and green lines, respectively. Note that such a quasi-linear dispersion relation and form of the coupling $g_{in}^z$ have also been found in the case of transverse couplings~\cite{Malekakhlagh2017,Gely2017}. Also, the scalings with $n$ in the low- and high-frequency regimes of $g_{in}^z$ match the phenomenological expression used in the main text. Note that our model can be generalized to the $N$-qubit scenario. This would, however, require a complete numerical analysis to compute the mode structure of the transmission line, obtain the corresponding Hamiltonian, and assess the magnitude of longitudinal and possible residual transverse couplings. Approaches based on capacitive couplings of asymmetric flux qubits to the transmission line~\cite{Billangeon2015} represent another interesting option, where the frequency cutoff is determined by the coupling capacitance~\cite{Malekakhlagh2017,Gely2017}.
\subsection{Typical numbers} We conclude this section by giving relevant numbers and error estimates for a SC implementation of our model. The estimated gate time between qubits, as induced by longitudinal couplings, is of the order of $\sim 50$ ns, corresponding to couplings $g/(2\pi) \approx 60 $ MHz~\cite{Royer2017SM}. For concreteness, we consider a coherence time $T_2\approx 100\ \mu$s~\cite{Gambetta2017SM}, a loss rate $\kappa/(2\pi)=0.05 $ MHz~\cite{Royer2017SM} and a thermal population of $\bar{n}_\mathrm{th}(\omega_0)=3$. These numbers correspond to a cooperativity $C\approx 6\times10^6$, which translates to a total QAOA error of $\sim 8\%$ for $N=12$ qubits and $M=5$ QAOA cycles. For the same set of parameters, as indicated in the main text, the backbone of our QAOA implementation, namely the two-qubit hot gate (with $N=2$ and $M=1$) and the spin engineering recipe (where $M=1$), could be demonstrated with considerably smaller errors. \section{Implementation with Quantum Dots} In this section we provide background material for the implementation of our theoretical scheme using quantum dots coupled to transmission line resonators. First, we derive the microscopic coupling between a quantum dot and a multi-mode microwave cavity. When restricting to the lowest dot orbital and a single resonator mode, we recover standard expressions used in the literature. We then specifically discuss quantum dot charge qubits and singlet-triplet qubits.
\subsection{Microscopic Dot-Resonator Coupling} Following Refs.~\cite{viennot16SM,cottet15SM}, the microscopic coupling of a quantum dot, described by its electron density $\rho\left(\mathbf{r}\right)=\sum_{\sigma}\Psi_{\sigma}^{\dagger}\left(\mathbf{r}\right)\Psi_{\sigma}\left(\mathbf{r}\right)$, to a microwave cavity with associated voltage fluctuations $\hat{V}\left(\mathbf{r}\right)=\sum_{n}\phi_{n}\left(\mathbf{r}\right)\left(a_{n}+a_{n}^{\dagger}\right)$ [for convenience we have assumed the voltage mode functions $\phi_{n}\left(\mathbf{r}\right)$ to be real] is given by \begin{eqnarray} H_{I} & = & e\int d\mathbf{r}\hat{V}\left(\mathbf{r}\right)\rho\left(\mathbf{r}\right),\\ & = & e\sum_{\sigma}\int d\mathbf{r}\hat{V}\left(\mathbf{r}\right)\Psi_{\sigma}^{\dagger}\left(\mathbf{r}\right)\Psi_{\sigma}\left(\mathbf{r}\right), \end{eqnarray} where $e$ is the electron's charge. Next, we express the field operator $\Psi_{\sigma}\left(\mathbf{r}\right)$ in terms of the annihilation operators associated with the dot orbitals $\nu_{i}$ of dot $i$ as \begin{equation} \Psi_{\sigma}\left(\mathbf{r}\right)=\sum_{i,\nu_{i}}\varphi_{i\nu_{i}}\left(\mathbf{r}\right)c_{i\nu_{i},\sigma}. \end{equation} Here, the fermionic operator $c_{i\nu_{i},\sigma}(c_{i\nu_{i},\sigma}^{\dagger})$ annihilates (creates) an electron of spin $\sigma=\uparrow,\downarrow$ in the orbital $\nu_{i}$ of dot number $i$. We then arrive at \begin{equation} H_{I}=\sum_{n}\sum_{i,\nu_{i},j,\nu_{j},\sigma}g_{n,i,\nu_{i},j,\nu_{j}}\left(a_{n}+a_{n}^{\dagger}\right)c_{i\nu_{i},\sigma}^{\dagger}c_{j\nu_{j},\sigma}, \end{equation} where \begin{equation} g_{n,i,\nu_{i},j,\nu_{j}}=e\int d\mathbf{r}\phi_{n}\left(\mathbf{r}\right)\varphi_{i\nu_{i}}^{*}\left(\mathbf{r}\right)\varphi_{j\nu_{j}}\left(\mathbf{r}\right).
\end{equation} For standard geometries---compare for example Refs.~\cite{childress04SM, frey12SM, beaudoin16SM}---where only one dot out of a larger double quantum dot is exposed to the resonator's voltage, the mode function $\phi_{n}\left(\mathbf{r}\right)$ overlaps with only one specific dot, say the right dot (labeled by $R$). Therefore, to obtain a non-zero expression for the coupling $g_{n,i,\nu_{i},j,\nu_{j}}$, we can fix one of the indices as $i=R$ or $j=R$ (with $i,j=L,R$ in a DQD setting). Following Ref.~\cite{cottet15SM}, we neglect photon-induced orbital tunneling terms because of small wavefunction overlap and focus on the dominant coupling term where $i=j=R$; further suppression of photon-induced tunneling terms can be achieved by carefully avoiding resonances between the resonator frequencies $\omega_{n}$ and the (tunable) transition energies $\Delta_{i\nu_{i},j\nu_{j}}$. In this case the dot-resonator coupling reduces to \begin{equation} H_{I}=\sum_{n}\sum_{\nu,\nu',\sigma}g_{n\nu\nu'}\left(a_{n}+a_{n}^{\dagger}\right)c_{R\nu\sigma}^{\dagger}c_{R\nu'\sigma},\label{eq:dot-resonator-coupling} \end{equation} with \begin{equation} g_{n\nu\nu'}=e\int d\mathbf{r}\phi_{n}\left(\mathbf{r}\right)\varphi_{R\nu}^{*}\left(\mathbf{r}\right)\varphi_{R\nu'}\left(\mathbf{r}\right). \end{equation} Next, as detailed in Ref.~\cite{cottet15SM}, we drop photon-induced orbital tunneling terms within one dot where $\nu\neq\nu'$, because of negligible wavefunction overlap. Within this approximation, we recover the standard form for capacitive dot-resonator coupling \cite{childress04SM} \begin{equation} H_{I}=\sum_{n,\nu}g_{n\nu}\left(a_{n}+a_{n}^{\dagger}\right)\otimes\hat{n}_{R\nu}, \label{eq:H-int-QD-multi-electron} \end{equation} with $\hat{n}_{R\nu}=\sum_{\sigma}c_{R\nu\sigma}^{\dagger}c_{R\nu\sigma}$ and \begin{equation} g_{n\nu}=e\int d\mathbf{r}\phi_{n}\left(\mathbf{r}\right)\left|\varphi_{R\nu}\left(\mathbf{r}\right)\right|^{2}.
\end{equation} When restricting to the lowest electronic orbital $\nu=0$ and a single resonator mode $n$, we recover standard expressions as used for example in Refs.~\cite{taylor06SM,beaudoin16SM,childress04SM,viennot16SM,cottet15SM}. Within this description the resonator's voltage fluctuations amount to \textit{fluctuations of the dot's chemical potential} \cite{harvey18SM}, with a coupling strength approximately given by $g_{n0}\approx e\phi_{n}\left(\mathbf{r}_{\mathrm{dot}}\right)$, where $\mathbf{r}_{\mathrm{dot}}$ refers to the center of the wavefunction $\varphi_{R0}\left(\mathbf{r}\right)$; this treatment amounts to the dipole-like approximation in quantum optics, where the quantum dot is considered as point-like on the relevant lengthscale set by the wavelength associated with the mode function $\phi_{n}\left(\mathbf{r}\right)$. This can be done for long-wavelength resonator modes, but eventually the coupling $g_{n\nu}$ will average out for sufficiently large mode number $n$ because of rapid oscillations of the associated mode function $\phi_{n}\left(\mathbf{r}\right)$, as is well known also from the coupling of quantum dots to phonons \cite{hanson07SM}. While this is the case for very large $n$ only, our microscopic form of the spin-resonator coupling avoids unphysical divergences and ensures finite spin-spin coupling parameters $J_{ij}$, as shown in detail in Sec.~\ref{Effective-spin-spin-interactions}. In the low temperature regime of interest (where $k_{B}T\ll\Delta_{\mathrm{orb}}$) we can restrict ourselves to the lowest electronic orbital $\nu=0$. In an effectively one-dimensional problem as considered here, with a qubit localized at $x_{i}$, we can then express the \textit{charge-based} coupling between qubit $i$ and mode $n$ as \begin{equation} g_{n}=g_{0}\sqrt{n}\int dx\cos\left(k_{n}x\right)f\left(x-x_{i}\right).
\label{eq:QD-microscopic-coupling} \end{equation} Here, we have set $\phi_{n}\left(x\right)=\phi_{n}\cos\left(k_{n}x\right)$, $f\left(x-x_{i}\right)=\int dydz\left|\varphi_{R0}\left(\mathbf{r}\right)\right|^{2}$ and $g_{0}\sqrt{n}=e\phi_{n}$, with the single-photon voltage fluctuation amplitude $\phi_{n}=\alpha V_{n}\sim\sqrt{n}$; here, the amplitudes $\phi_{n}$ account for the potential fluctuations felt by the quantum dot via the lever arm $\alpha$. This expression matches the one used in the main text, where $g_{i,n}$ refers to qubit $i=1,\dots,N$, with individual amplitudes $g_{0} \rightarrow g_{i}$. \textit{Quantum dot orbital transitions.---}Typically, for gate-defined quantum dots the single-particle orbital level spacing amounts to $\Delta_{\mathrm{orb}}\sim1\mathrm{meV}$ \cite{cerletti05SM}. This energy scale is much larger than the thermal energy $k_{B}T$ for temperatures as high as $T=1\mathrm{K}$, which corresponds to $k_{B}\times1\mathrm{K}\approx8.6\times10^{-2}\mathrm{meV}$. For comparatively high temperatures in the range $T\approx\left(1-4\right)\mathrm{K}$, the thermal occupation of photons with energy $\hbar\omega=\Delta_{\mathrm{orb}}$ is $\bar{n}_{\mathrm{th}}\left(\Delta_{\mathrm{orb}}\right)\approx10^{-5}-6\times10^{-2}$. Therefore, the overwhelming majority of quantum dot experiments (which typically operate at dilution fridge temperatures where $T\approx\left(10-100\right)\mathrm{mK}\ll1\mathrm{K}$) can be described by restricting oneself to the orbital ground-state subspace. Along the same lines, we will restrict ourselves to the lowest orbital levels, since $\Delta_{\mathrm{orb}}$ is much larger than all relevant energy scales in our problem \footnote{The temperature requirements may be more stringent (for example) in silicon samples, where the valley splitting has to be taken into account, which tends to be smaller than the orbital splitting.
In Si/SiGe quantum dots, the valley splitting is usually not larger than $\sim 100\,\mu \mathrm{eV}$, while it is larger in Si/SiO$_2$ dots, in the range of hundreds of $\mu\mathrm{eV}$ up to $\sim 1\,\mathrm{meV}$.}. \subsection{Quantum Dot Charge Qubits} In this subsection we briefly discuss a potential quantum-dot based physical implementation of our scheme, closely following Ref.~\cite{childress04SM}; for a schematic illustration compare Fig.~\ref{fig:implementation-DQD}. Consider a DQD in the \textit{single-electron} regime; below we will separately consider double dots in the two-electron regime. The electron can occupy the left ($\left|L\right\rangle $) or the right ($\left|R\right\rangle $) orbital. The right dot is capacitively coupled to the resonator. In this scenario, we can project the general result given in Eq.~\eqref{eq:H-int-QD-multi-electron} onto the single-electron regime. When restricting ourselves to the lowest QD orbital as discussed above, the interaction between the DQD and the transmission line can then be written as \begin{equation} H_{I}=\sum_{n}g_{n}(a_{n} + a_{n}^{\dagger})\otimes\left|R\right\rangle \left\langle R\right|, \label{eq:H-int-QD-single-electron} \end{equation} with the coupling $g_{n}$ given in Eq.~\eqref{eq:QD-microscopic-coupling}; this coupling accounts for a frequency cut-off as set by the microscopic size given by $f(x)$ [compare the main text where $f(x)$ has been modeled by a simple box-function with spatial extent $a$]. Note that, when neglecting this cut-off, we recover standard results as presented for example in \cite{childress04SM}.
To make this comparison concrete, we can rewrite $H_{I}$ as $H_{I}=ev\hat{V}\otimes\left|R\right\rangle \left\langle R\right|,$ where $e$ is the electron's charge, $\hat{V}$ is the (quantized) voltage on the resonator near the right dot, $v=C_{c}/\left(C_{c}+C_{d}\right)$, and $C_{d}$ is the capacitance to ground of the right dot; $C_{c}$ is the capacitive coupling between the right dot and the resonator. Following Ref.\cite{childress04SM,taylor06SM}, the quantized voltage at the end of the transmission line (with length $L$) can be written as $\hat{V}=\sum_{n}\sqrt{\hbar\omega_{n}/LC_{0}}\left(a_{n}+a_{n}^{\dagger}\right)$ where $C_{0}$ refers to the capacitance per unit length; the allowed wavevectors can be written as $k_{n}=\left(n+1\right)\pi/L$ and the corresponding frequencies read $\omega_{n}=k_{n}/C_{0}Z_{0}$, with the characteristic impedance $Z_{0}$ \cite{childress04SM}. Using (in our notation) $\omega_{n}=n\pi c/L$ (with $n=1,2,\dots$), the amplitudes of the zero-point fluctuations can be written as $V_{n}=\sqrt{\hbar n\pi c/L^{2}C_{0}}$. Accordingly, since the individual couplings $g_{i,n}$ scale with the zero-point fluctuations, we find $g_{i,n}\sim\sqrt{n}/L$, in direct agreement with Ref.\cite{sundaresan15}. The zero-point fluctuations (and therefore the qubit resonator coupling) can be increased significantly with the help of so-called high-impedance resonators, as demonstrated (for example) in Refs.\cite{stockklauser17SM,samkharadze16SM}. The voltage along such a high-impedance resonator is much larger than for a conventional $50\Omega$ resonator, with the single-photon voltage at the resonator's antinode being $V_{1}=\sqrt{\hbar Z_{r}}\omega_{1}$ for the fundamental mode $\omega_{1}$ \cite{harvey18SM}. \begin{figure} \includegraphics[width=1\columnwidth]{Fig_DQD} \caption{\label{fig:implementation-DQD} Schematic illustration of a DQD coupled to a transmission line. 
The charge-resonator coupling derives from the capacitive coupling to the right (or left) dot only. Here, the right dot is taken to be sensitive to the zero-point voltage of a coplanar-waveguide resonator via a capacitive finger with so-called lever arm $\alpha$, which couples microwave photons in the resonator to the orbital degree-of-freedom of the electron \cite{beaudoin16SM,frey12SM}. Coupling between the resonator mode and the electron's spin can be achieved by making use of various mechanisms which hybridize spin and charge degrees of freedom, as provided by spin-orbit interaction or inhomogeneous magnetic fields \cite{beaudoin16SM, hu12SM, trif08SM, viennot15SM}.} \end{figure} Coming back to the dot-resonator coupling described by Eq.\eqref{eq:H-int-QD-single-electron}, it is instructive to express $H_{I}$ in terms of the (orbital) DQD eigenstates, defined as \cite{childress04SM} \begin{eqnarray} \left|+\right\rangle & = & \sin\theta\left|L\right\rangle +\cos\theta\left|R\right\rangle ,\\ \left|-\right\rangle & = & \cos\theta\left|L\right\rangle -\sin\theta\left|R\right\rangle , \end{eqnarray} where $\tan\theta=-2t_{c}/\left(\omega_{q}+\epsilon\right)$; the effective qubit splitting $\omega_{q}=\sqrt{4t_{c}^{2}+\epsilon^{2}}$ between the eigenstates $\left|+\right\rangle $ and $\left|-\right\rangle $ can be tuned in situ via the (tunable) tunnel-coupling $t_{c}$ and/or the detuning parameter $\epsilon$. Defining the Pauli spin matrices (in the orbital pseudo-spin space) as $\sigma^{+}=\left|+\right\rangle \left\langle -\right|$ etc., the full Hamiltonian can be written as \begin{eqnarray} H & = & \frac{\omega_{q}}{2}\sigma^{z}+\sum_{n}\omega_{n}a_{n}^{\dagger}a_{n}\nonumber \\ & & +\sum_{n}\left(g_{n}^{z}\sigma^{z}+g_{n}^{x}\sigma^{x}\right)\otimes\left(a_{n}+a_{n}^{\dagger}\right), \end{eqnarray} with $\omega_{n}=\left(n+1\right)\omega_{0}$. 
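As a cross-check of the mixing-angle algebra, the coefficients of $\sigma^{z}$ and $\sigma^{x}$ in the Hamiltonian above can be obtained by projecting the bare coupling operator $\left|R\right\rangle \left\langle R\right|$ onto the eigenbasis $\left\{ \left|+\right\rangle ,\left|-\right\rangle \right\} $; a minimal numerical sketch (the parameter values are arbitrary and purely illustrative) confirms the relations $g_{n}^{x}=g_{n}t_{c}/\omega_{q}$ and $g_{n}^{z}=g_{n}\epsilon/(2\omega_{q})$ up to the overall factor $g_{n}$:

```python
import numpy as np

# Arbitrary illustrative parameters (units with hbar = 1)
tc, eps = 1.3, 0.7                      # tunnel coupling t_c and detuning
wq = np.sqrt(4*tc**2 + eps**2)          # qubit splitting omega_q
th = np.arctan(-2*tc/(wq + eps))        # mixing angle theta

# DQD eigenstates |+>, |-> expressed in the {|L>, |R>} basis
plus  = np.array([np.sin(th),  np.cos(th)])
minus = np.array([np.cos(th), -np.sin(th)])

# Bare coupling operator |R><R| in the {|L>, |R>} basis
P = np.array([[0.0, 0.0], [0.0, 1.0]])
gz = (plus @ P @ plus - minus @ P @ minus) / 2   # coefficient of sigma^z
gx = plus @ P @ minus                            # coefficient of sigma^x

assert np.isclose(gz, eps/(2*wq))   # matches g_n^z = g_n*eps/(2*wq)
assert np.isclose(gx, tc/wq)        # matches g_n^x = g_n*t_c/wq
```

The identity part of the projection (a pure displacement term) is dropped, consistent with the Hamiltonian above.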
The splitting $\omega_{q}$ can be tuned via the dot parameters $t_{c}$ and $\epsilon$, while the coupling constants $g_{n}^{x},g_{n}^{z}$ can be controlled via the dot parameters as $g_{n}^{x} = g_{n} t_{c}/\omega_{q}$ and $g_{n}^{z} = g_{n} \epsilon / \left(2\omega_{q}\right)$ \cite{childress04SM}. Again, when disregarding cut-off effects, we recover results as presented (for example) in \cite{childress04SM}, where $g_{n} = g_{0} \sqrt{n}$, with the overall coupling strength $g_{0}/\omega_{0}=v\sqrt{2Z_{0}/R_{Q}}=\mathrm{const.}$, with $Z_{0}$ being the characteristic impedance of the transmission line and $R_{Q}=h/e^{2}$ referring to the resistance quantum \cite{childress04SM}. Typically, as done in Ref.\cite{childress04SM}, one proceeds by considering the regime $\omega_{0}\approx\omega_{q}$, in which one can neglect all but (for example) the fundamental mode (setting $a_{0}=a$), eventually leading to the Jaynes-Cummings Hamiltonian within a rotating-wave approximation \begin{equation} H\approx\frac{\omega_{q}}{2}\sigma^{z}+\omega_{0}a^{\dagger}a+g_{0}^{x}\left(a\sigma^{+}+a^{\dagger}\sigma^{-}\right). \end{equation} This limit has been studied experimentally in Ref.\cite{frey12SM}, with a charge(pseudo-spin)-resonator coupling strength of several tens of MHz. Conversely, for $t_{c}=0$ we realize a model with purely \textit{longitudinal} charge(pseudo-spin)-resonator coupling, that is \begin{equation} H=\frac{\omega_{q}}{2}\sigma^{z}+\sum_{n}\omega_{n}a_{n}^{\dagger}a_{n}+\sum_{n}g_{n}^{z}\sigma^{z}\otimes\left(a_{n}+a_{n}^{\dagger}\right). \end{equation} By indexing $\sigma^{z} \rightarrow \sigma_{i}^{z}$, $\omega_{q} \rightarrow \omega_{i}$ and $g_{n}^{z} \rightarrow g_{i,n}$, this single-qubit Hamiltonian can be generalized to the multi-qubit scenario, as considered in the main text. \subsection{Singlet-Triplet Qubits} In this section we show how to parametrically modulate the spin-resonator coupling in the case of singlet-triplet qubits embedded in DQDs.
To make our work self-contained, we first closely follow Ref.\cite{harvey18SM} for a single-mode analysis, and then generalize this idea to a multi-mode setup. \textit{Single-Mode Setting.---}We follow Ref.\cite{harvey18SM} to analyze the coupling between singlet-triplet qubits and a high-impedance resonator. When neglecting higher orbitals and other spin levels (within the lowest orbital), the Hamiltonian associated with a double quantum dot in the two-electron regime can be written as \begin{equation} H_{q}=\frac{J\left(\epsilon\right)}{2}\sigma^{z}, \end{equation} with the exchange-induced splitting $J\left(\epsilon\right)=\omega_{q}\left(\epsilon\right)$ between the two qubit states $\left\{ \left|T_{0}\right\rangle ,\left|S\right\rangle \right\} $. The detuning parameter $\epsilon$ can be readily controlled classically, but can also be coupled to the quantized voltage fluctuations associated with the resonator. When writing the detuning as $\epsilon=\epsilon_{0}+\delta\epsilon$, we can expand the splitting $J\left(\epsilon\right)$ in a Taylor series around the equilibrium $\epsilon_{0}$ as \begin{equation} J\left(\epsilon\right)\approx J\left(\epsilon_{0}\right)+J'\left(\epsilon_{0}\right)\delta\epsilon+\frac{1}{2}J''\left(\epsilon_{0}\right)\delta\epsilon^{2}+\dots \end{equation} In the presence of both classical driving and quantum fluctuations, we have \begin{eqnarray} \delta\epsilon & = & ec_{g}V_{g}\left(t\right)+ec_{r}\hat{V},\\ & = & \epsilon_{d}\cos\left(\omega_{d}t\right)+ec_{r}\hat{V}, \end{eqnarray} where $e$ refers to the electron charge, and $c_{g}$ ($c_{r}$) denotes the geometric lever arm between the double dot and the RF gate (resonator), which sets the shift in the chemical potential of the DQD caused by a voltage shift on the corresponding gate.
For a \textit{single-mode} resonator, where $\hat{V}=V_{0}\left(a+a^{\dagger}\right)$, the total Hamiltonian \begin{equation} H=\omega_{0}a^{\dagger}a+\frac{J\left(\epsilon\right)}{2}\sigma^{z}, \end{equation} can then be approximated as \begin{widetext} \begin{eqnarray} H & = & \omega_{0}a^{\dagger}a+\frac{1}{2}\left[J\left(\epsilon_{0}\right)+\frac{1}{2}J''\left(\epsilon_{0}\right)\left(\frac{\epsilon_{d}^{2}}{2}+e^{2}c_{r}^{2}V_{0}^{2}\right)\right]\sigma^{z}\\ & & +\frac{1}{2}J'\left(\epsilon_{0}\right)\left[\epsilon_{d}\cos\left(\omega_{d}t\right)+ec_{r}V_{0}\left(a+a^{\dagger}\right)\right]\sigma^{z}\nonumber \\ & & +\frac{1}{4}J''\left(\epsilon_{0}\right)\left[ec_{r}V_{0}\epsilon_{d}\left(e^{i\omega_{d}t}+e^{-i\omega_{d}t}\right)\left(a+a^{\dagger}\right)+2e^{2}c_{r}^{2}V_{0}^{2}a^{\dagger}a+e^{2}c_{r}^{2}V_{0}^{2}\left(a^{2}+\mathrm{h.c.}\right)+\frac{\epsilon_{d}^{2}}{2}\cos\left(2\omega_{d}t\right)\right]\sigma^{z}.\nonumber \end{eqnarray} \end{widetext} In the absence of driving $\left(\epsilon_{d}=0\right)$, to leading linear order we would obtain \begin{equation} H\approx\omega_{0}a^{\dagger}a+\frac{\tilde{J}\left(\epsilon_{0}\right)}{2}\sigma^{z}+g_{1}\sigma^{z}\otimes\left(a+a^{\dagger}\right), \end{equation} with a renormalized qubit splitting $\tilde{J}\left(\epsilon_{0}\right)$ and \begin{equation} g_{1}=\frac{1}{2}J'\left(\epsilon_{0}\right)ec_{r}V_{0}. \end{equation} The maximum coupling $g_{1}$ is largely set by the zero-point voltage fluctuation amplitude $V_{0}$, while the coupling can be tuned by choosing the operating point $J'\left(\epsilon_{0}\right)$ appropriately. The effective dipole is turned off (on) if $J'\left(\epsilon_{0}\right)=0$ $\left(|J'\left(\epsilon_{0}\right)|>0\right)$.
In the presence of driving $\left(\epsilon_{d}>0\right)$, however, in an interaction picture with respect to $H_{0}=\omega_{d}a^{\dagger}a+\tilde{J}\left(\epsilon_{0}\right)\sigma^{z}/2$ one can approximately (within a RWA, where all fast-oscillating terms are dropped) restrict oneself to \cite{harvey18SM} \begin{equation} \tilde{H}\approx\Delta a^{\dagger}a+g_{2}\sigma^{z}\otimes\left(a+a^{\dagger}\right), \end{equation} with the detuning $\Delta=\omega_{0}-\omega_{d}$ and the coupling \begin{equation} g_{2}=\frac{1}{4}J''\left(\epsilon_{0}\right)ec_{r}V_{0}\epsilon_{d}. \end{equation} The coupling $g_{2}$ is proportional not only to the zero-point fluctuation $V_{0}$, but (as opposed to the coupling $g_{1}$) also to the amplitude $\epsilon_{d}$, which allows us to \textit{classically amplify and control} the spin-resonator interaction strength. Moreover, the fact that (for a fixed frequency $\omega_{0}$, which does not necessarily have to be the fundamental mode) the single photon amplitude decreases with the length of the resonator $L$ as $V_{0}\sim1/\sqrt{L}$ can be compensated by increasing the driving amplitude $\epsilon_{d}$ (as long as $\delta\epsilon\ll\epsilon_{0}$ to guarantee the validity of the underlying Taylor expansion). Following Ref.\cite{harvey18SM}, we have neglected the dispersive shift term $H_{\mathrm{ds}}=\chi a^{\dagger}a\sigma^{z}$ with $\chi=J''\left(\epsilon_{0}\right)e^{2}c_{r}^{2}V_{0}^{2}/2$, because (as compared to the longitudinal coupling) this dispersive coupling is smaller by a factor $\sim ec_{r}V_{0}\bar{n}/\epsilon_{d}\ll1$, with $\bar{n}=\left\langle a^{\dagger}a\right\rangle $. Depending on the actual value of $\epsilon_{d}$, the last condition may impose a limitation on temperature; however, the effect of the dispersive shift may also be neglected for sufficiently large detuning in the limit $\Delta\gg\chi N/2$.
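To make the amplification mechanism quantitative, one can insert an assumed phenomenological exchange profile, e.g., $J\left(\epsilon\right)=J_{0}e^{\epsilon/\epsilon_{c}}$; note that this profile and all parameter values below are illustrative placeholders, not values taken from the cited experiments. For this profile the ratio $g_{2}/g_{1}=J''\epsilon_{d}/\left(2J'\right)$ reduces to $\epsilon_{d}/\left(2\epsilon_{c}\right)$, so the drive amplifies the coupling whenever $\epsilon_{d}>2\epsilon_{c}$:

```python
import numpy as np

# Assumed phenomenological exchange profile J(eps) = J0*exp(eps/eps_c);
# all numbers below are illustrative placeholders in arbitrary units
J0, eps_c = 1.0, 1.0
eps0 = 0.5                  # operating point
ecrV0 = 1e-2                # e*c_r*V_0: zero-point detuning fluctuation
eps_d = 10.0 * eps_c        # classical drive amplitude

J   = lambda e: J0 * np.exp(e / eps_c)
Jp  = lambda e: J(e) / eps_c         # J'(eps) for this profile
Jpp = lambda e: J(e) / eps_c**2      # J''(eps) for this profile

g1 = 0.5  * Jp(eps0)  * ecrV0            # static coupling g_1
g2 = 0.25 * Jpp(eps0) * ecrV0 * eps_d    # drive-enhanced coupling g_2

# For this profile g2/g1 = eps_d/(2*eps_c): amplification for eps_d > 2*eps_c
assert np.isclose(g2 / g1, eps_d / (2 * eps_c))
assert g2 > g1
```

The same bookkeeping makes the hierarchy $\chi\bar{n}\ll g_{2}$ explicit once values for $\bar{n}$ and $\epsilon_{d}$ are chosen.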
\textit{Multi-Mode Setting.---}As shown in Ref.~\cite{harvey18SM}, the longitudinal spin-resonator coupling can be amplified by parametrically modulating the splitting with a classical drive. Here, we aim to generalize this idea to a multi-mode setup. The multi-mode Hamiltonian under consideration reads \begin{equation} H=\sum_{n}\omega_{n}a_{n}^{\dagger}a_{n}+\frac{J\left(\epsilon\right)}{2}\sigma^{z}, \end{equation} with $\epsilon=\epsilon_{0}+\delta\epsilon$, and \begin{equation} \delta\epsilon=\sum_{n}A_{n}\cos\left(\Omega_{n}t\right)+\sum_{n}\phi_{n}\left(a_{n}+a_{n}^{\dagger}\right), \end{equation} with $\phi_{n}=ec_{r}V_{n}$ for notational convenience. Here, the first term describes a polychromatic driving scheme (with amplitudes $A_{n}$ and frequencies $\Omega_{n}\approx\omega_{n}$) and the second term describes voltage fluctuations on the dot caused by the multi-mode resonator. We obtain \begin{eqnarray} \delta\epsilon^{2} & = & \sum_{m,n}A_{m}A_{n}\cos\left(\Omega_{m}t\right)\cos\left(\Omega_{n}t\right)\\ & & +\sum_{m,n}\phi_{m}\phi_{n}\left(a_{m}+a_{m}^{\dagger}\right)\left(a_{n}+a_{n}^{\dagger}\right)\\ & & +2\sum_{m,n}A_{m}\phi_{n}\cos\left(\Omega_{m}t\right)\left(a_{n}+a_{n}^{\dagger}\right). \end{eqnarray} Within the experimentally most relevant regime, we can neglect all rapidly oscillating terms (provided that $J'\left(\epsilon_{0}\right)A_{n}/4,J'\left(\epsilon_{0}\right)\phi_{n}\sqrt{\left\langle a_{n}^{\dagger}a_{n}\right\rangle }/2\ll\Omega_{n}$, $J''\left(\epsilon_{0}\right)\phi_{m}\phi_{n}\sqrt{\left\langle a_{m}^{\dagger}a_{m}\right\rangle \left\langle a_{n}^{\dagger}a_{n}\right\rangle }/4\ll\left|\Omega_{m}-\Omega_{n}\right|$ and $J''\left(\epsilon_{0}\right)A_{m}A_{n}/4\ll\left|\Omega_{m}-\Omega_{n}\right|,\Omega_{m}+\Omega_{n}$ $\forall m\neq n$) and keep only co-rotating terms (see the last line in the expression for $\delta\epsilon^{2}$).
In that case, along the lines of the single-mode analysis the total Hamiltonian simplifies within a RWA to \begin{eqnarray} H & \approx & \sum_{n}\omega_{n}a_{n}^{\dagger}a_{n}+\frac{\tilde{J}\left(\epsilon_{0}\right)}{2}\sigma^{z}\\ & & +\sum_{n}g_{2,n}\left(t\right)\sigma^{z}\otimes\left(a_{n}+a_{n}^{\dagger}\right), \end{eqnarray} with \begin{equation} g_{2,n}\left(t\right)=\frac{1}{2}J''\left(\epsilon_{0}\right)ec_{r}V_{n}A_{n}\cos\left(\Omega_{n}t\right). \end{equation} This coupling is boosted by the classical amplitude $A_{n}$; in the lab frame the coupling to mode $n$ oscillates with frequency $\Omega_{n}\approx\omega_{n}$. In the limit of a single-frequency, monochromatic drive (i.e., $A_{n}=0$ for all but a single mode) we recover the results of the previous section. In principle we can selectively control the modes the qubit couples to by choosing the amplitudes $\left|A_{n}\right|\geq0$ appropriately. For example, by setting $A_{n}=0$ for all even (odd) modes the qubit couples only to the odd (even) resonator modes. This control can be done individually for every qubit. \subsection{Protocol to implement QAOA} To realize our scheme for implementing QAOA with this specific setup one could (for example) make use of spin-spin interactions controlled via parametric modulation as detailed in \cite{harvey18SM} together with single-qubit rotations or utilize the (tunable) effective electric dipole moment associated with exchange coupled spin states in a DQD \cite{taylor06SM, schuetz17SM}. One can then efficiently alternate between the unitaries $U_{x}(\beta)$ and $U_{zz}(\gamma)$ by repeatedly cycling through two parameter regimes: (i) The magnetic gradient $\delta B$ dominates over the voltage-controlled qubit frequencies $\omega_{i}(\epsilon) \rightarrow 0$ (with $\epsilon$ denoting the gate voltage), resulting in $\beta = \delta B \cdot t_{p}$. 
In this regime the coupling to the resonator is turned off \cite{schuetz17SM, harvey18SM}, giving $J_{ij}=0$ and effectively $\gamma_{ij}=0$. (ii) Then, by pulsing to a regime where $\delta B$ is the smallest energy scale, we turn on the qubit-qubit coupling $U_{zz}(\gamma_{ij})$ \cite{jin12SM, harvey18SM}. This procedure completes one cycle of QAOA.
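The alternating-unitary structure of the cycle described above can be sketched abstractly for two qubits (this illustrates the circuit-level alternation only, not the device dynamics; all angles and couplings below are arbitrary placeholders):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def U_x(beta):
    # Mixer exp(-i*beta*(X_1 + X_2)) as a product of commuting rotations
    u = np.cos(beta) * I - 1j * np.sin(beta) * X
    return np.kron(u, u)

def U_zz(gamma):
    # Phase unitary exp(-i*gamma*Z_1 Z_2), diagonal in the computational basis
    zz = np.diag(np.kron(Z, Z))
    return np.diag(np.exp(-1j * gamma * zz))

psi = np.full(4, 0.5, dtype=complex)          # |++> initial state
for beta, gamma in [(0.3, 0.7), (0.1, 0.4)]:  # two QAOA cycles (placeholders)
    psi = U_x(beta) @ (U_zz(gamma) @ psi)

assert np.isclose(np.linalg.norm(psi), 1.0)   # alternation stays unitary
```

In the hardware protocol, the angles $\beta$ and $\gamma_{ij}$ are set by the pulse durations and the parameters $\delta B$ and $J_{ij}$ of regimes (i) and (ii), respectively.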
\section{Introduction} In recent years, a deluge of different interconnection networks have been proposed to address the critical role of communication in modern HPC systems \cite{besta2014slim,lei2020bundlefly,hawkins2007data,kim2008technology,shpiner2017dragonfly+,singla2012jellyfish,valadarsky2015xpander}. To efficiently and robustly enable communication, many of these topologies are designed to exhibit a plethora of beneficial structural properties. An ideal network will have low endpoint-to-endpoint latency, resiliency to link failures, and high bisection bandwidth to avoid congestion -- all while maintaining low system cost. Researchers have employed the language of graph theory to formalize and quantify such properties. In this way, a number of graph statistics -- such as diameter, average distance, vertex and edge-connectivity, and dense bipartitions -- are well known to be critically important to the performance of the computing system. However, constructing a graph topology simultaneously optimizing these criteria is challenging. One approach is to focus on finding families of graphs that are extremal with respect to a particular property, with the hope that optimization of that property guarantees acceptably good, if not optimal, behavior with respect to the others. For example, the SlimFly topology \cite{besta2014slim} was proposed with the claim ``it's \emph{ALL} about the diameter." Specifically, the authors argued that graphs which minimize the diameter while simultaneously maximizing the number of vertices for a given radix also exhibit other good properties, such as resilience to link failure and high bisection bandwidth. However, how to construct such topologies (and whether they even exist) remains a challenging and ongoing topic in mathematics~\cite{miller2012moore}.
Indeed, the SlimFly topology is only made possible by a sophisticated algebraic construction by McKay, Miller and \v{S}ir\'{a}\v{n} \cite{mckay1998note}, which has nearly optimal size with respect to the degree-diameter condition. SlimFly is far from the only proposed topology to take an extremal approach; the well-known DragonFly \cite{kim2008technology} and associated variants aim to maximize performance while minimizing system cost, utilizing a group of high-radix routers as a virtual router to increase the network's effective radix. And more recently, a related topology called BundleFly~\cite{lei2020bundlefly} expands and adapts the SlimFly for use with multicore fiber systems. In this work, we propose that utilizing a graph construction which optimizes the {\it spectral gap} -- the difference between the largest two eigenvalues of the adjacency matrix -- yields a broad family of flexible, balanced, low-cost, and congestion-avoiding topologies. We call this family of topologies SpectralFly, as they are examples of Ramanujan graphs which have optimal spectral gap. As we explain, the spectral gap is a highly nuanced and far-reaching indicator of graph structure, controlling diameter, average distance, fault tolerance, neighborhood expansion, and bisection bandwidth, among others. In comparison to SlimFly, we show SpectralFly makes marginal sacrifices in terms of diameter and average shortest path length, while offering comparable or sometimes significantly better properties, particularly in the case of bisection bandwidth and related properties involving bottlenecks. While no single topology can be optimal in every regard, our results show SpectralFly is extremely competitive for many key structural factors, making it a good fit for a variety of workloads.
Our work is organized as follows: in Section \ref{sec:eigs}, we provide a brief overview of graph eigenvalues, the spectral gap, and the Ramanujan property, establishing the importance of the spectral gap as a consideration in HPC interconnection network design. In Section \ref{sec:const}, we introduce the particular family of Ramanujan graphs we utilize, known as LPS graphs, providing the necessary definitions and examples, and highlighting some key LPS graph properties. Then, in Section \ref{sec:structProp}, we study structural properties of SpectralFly in comparison with SlimFly, BundleFly and DragonFly, across 5 classes of topology sizes which range from roughly 100 vertices to almost 10K vertices. Additionally, we also examine the resilience of these properties under varying levels of edge failures. Finally, in Sections \ref{sec:route}-\ref{sec:sim}, we validate the utility of the structural advantages of SpectralFly by performing simulations and experiments using the Structural Simulation Toolkit Macroscale Element Library (SST/macro). We define our routing algorithms in Section \ref{sec:route}, and evaluate several micro-benchmarks with different topologies in Section \ref{sec:sim}. Throughout, we use standard graph theory terminology and consider only undirected graphs $G=(V,E)$, where $V$ is a set of elements called {\it vertices} and $E$ is a set of unordered pairs of elements of $V$ called {\it edges}. \change{In the context of interconnection networks, vertices represent routers, and edges represent bidirectional links.} The degree of a vertex is the number of edges to which it belongs; we call a graph $k$-regular if each vertex has degree $k$ and sometimes refer to $k$ as the radix of the graph. 
\setlength{\abovedisplayskip}{2.0pt} \setlength{\belowdisplayskip}{2.0pt} \section{Eigenvalues, expansion, and the Ramanujan property}\label{sec:eigs} Graph eigenvalues capture a plethora of network properties critical to the design and function of interconnection networks. Diameter, bisection bandwidth, fault tolerance, average shortest path length, and other structural properties are controlled by eigenvalues; see \cite{aksoy2020ramanujan,Mohar:Eigenvalues} for a survey. \change{Here, we highlight graph theoretic results establishing these connections in order to explain why Ramanujan graphs possess superior structural properties for interconnection network design. } Many such properties are controlled by a {\it single} eigenvalue: if $G$ is a $k$-regular graph with adjacency matrix $A$, this eigenvalue, denoted $\lambda(G)$, is the largest magnitude eigenvalue of $A$ not equal to $\pm k$. The difference between the two largest adjacency eigenvalues is sometimes called the ``spectral gap". In the case of $k$-regular graphs, this is the difference between the second largest eigenvalue and $k$, as $k$ is always the largest eigenvalue. As we will soon explain, Ramanujan graphs ``optimize" $\lambda(G)$ and hence have a large spectral gap. Perhaps the most important property for our purposes here, $\lambda(G)$ controls the expansion properties of the graph. Loosely speaking, expansion means every ``not too large" set of vertices has a ``not too small" set of neighbors. The vertex isoperimetric number of a graph, $h(G)$, is one way of formalizing this: \[ h(G)=\min_{\substack{X \subseteq V(G) \\ 2|X|\leq |V(G)|}}\frac{|\partial X|}{|X|}, \] where $\partial X$ denotes the neighbors of $X$ that are not in $X$. \change{Thus, larger values of $h(G)$ suggest better expansion properties.} The vertex isoperimetric number, as well as related variants, is closely linked to $\lambda(G)$.
In particular, Tanner \cite{Tanner1984} proved a lower bound on $h(G)$ for $k$-regular graphs in terms of this eigenvalue, while Alon and Milman \cite{Alon1985} gave an upper bound. Such results suggest it is natural to measure expansion directly in terms of eigenvalues themselves. We will concern ourselves with this notion of expansion, called {\it spectral expansion}. As smaller values of $\lambda(G)$ mean better expansion properties, it is natural to ask: what is the theoretical minimum of $\lambda(G)$? The Alon-Boppana theorem \cite{Alon1986} answers this question, stating that for a $k$-regular graph with second largest (in magnitude) adjacency eigenvalue $\lambda$ and diameter $D$, we have $\lambda(G) \geq 2\sqrt{k-1}\left(1-\nicefrac{2}{D}\right)-\nicefrac{2}{D}.$ Consequently, if $(G_i)_{i=1}^{\infty}$ is a family of connected, $k$-regular, $n$-vertex graphs with $n \to \infty$ as $i \to \infty$, then, $\liminf_{i \to \infty} \lambda(G_i) \geq 2 \sqrt{k-1}.$ We call a graph Ramanujan if it achieves this theoretical minimum, i.e., is an optimal spectral expander. \begin{definition} A $k$-regular graph $G$ is called Ramanujan if $\lambda(G) \leq 2 \sqrt{k-1},$ where $\lambda(G)$ denotes the largest magnitude adjacency eigenvalue of $G$ not equal to $\pm k$. \end{definition} As a consequence of their optimal spectral expansion, Ramanujan graphs possess a plethora of beneficial structural properties discussed in \cite{aksoy2020ramanujan}. In particular, the Ramanujan property guarantees at least nearly optimal bisection bandwidth. While we emphasize this near-optimality of bisection bandwidth in this work, we note the Ramanujan property guarantees something much stronger: it bounds the number of edges between {\it any} collection of vertices, not just bisections. This stronger property is called the discrepancy inequality~\cite{Chung:spectral}. Simply put, this means large spectral gap implies two arbitrary subsets of the network are bottleneck-free; see Fig. 
\ref{F:disc} for a cartoon of substructures forbidden by the discrepancy property. \begin{figure} \centering \hfill \begin{subfigure}[t]{.45\columnwidth} \includegraphics[trim = 35 35 35 15 ,clip,width=.9\columnwidth]{BisectionBandwidth_cartoon.png}\label{fig:BW_cartoon}% \caption{Low Bisection Bandwidth} \end{subfigure} \hskip1.5em \begin{subfigure}[t]{.45\columnwidth} \includegraphics[trim = 60 35 60 60, clip,width=.9\columnwidth]{Discrepancy_Cartoon.png}% \caption{Low Discrepancy} \end{subfigure} \hfill \phantom{} \caption{Structures forbidden by high bisection bandwidth and discrepancy. Bisection bandwidth only concerns edges crossing a bipartition (blue shadow), while discrepancy also requires any two subsets (in purple), are bottleneck-free. \vspace{-1.5em}} \label{F:disc} \end{figure} While we will not explicitly design the experiments of Section \ref{sec:sim} to emphasize the impact of the discrepancy property, in practice, this is likely to be an important property for the practical usage of the systems. In particular, as the discrepancy property assures that given an arbitrary collection of vertices involved in a computation the bisection bandwidth on the topology restricted to those vertices is still high, we expect systems designed around Ramanujan graph topologies will be less susceptible to performance degradation based on job schedule and inter-job contention as illustrated in \cite{Bhatele:Dragonfly}. Additionally, we note that the discrepancy property will likely mitigate the benefit of routing strategies such as Valiant that attempt to homogenize traffic across the network. In particular, as high discrepancy networks are optimally bottleneck-free, this minimizes the advantage of homogenizing network traffic. 
\paragraph{Related work in HPC} As evidenced by the relationships between eigenvalues and other structural properties highlighted above, it is unsurprising that some HPC topologies consider spectral expansion {\it implicitly} in their network design. The well-known randomized Jellyfish topology has strong, albeit not optimal, spectral expansion properties. However, random $k$-regular graphs are ``sub Ramanujan" as shown by Friedman's proof \cite{Friedman2003} of Alon's second eigenvalue conjecture \cite{Alon1986}; hence SpectralFly has superior spectral expansion over JellyFish. Further, as discussed in \cite{valadarsky2015xpander}, the unstructuredness of randomized constructions such as Jellyfish makes ``them hard to reason about (predict, diagnose), build (e.g., in terms of wiring complexity), and so poses serious, arguably insurmountable, obstacles to their adoption in practice." Next, we briefly mention other work {\it explicitly} considering notions of spectral expansion or Ramanujan graphs in an HPC setting. In \cite{aksoy2020ramanujan}, the authors survey a wide swath of supercomputing topologies and derive either asymptotic bounds or exact expressions for their spectral gap, which shows many supercomputing topologies are far from Ramanujan. In this work, we aim to realize the theoretical potential suggested in \cite{aksoy2020ramanujan} through SpectralFly, which has optimal spectral gap. So-called $(\alpha,\beta,n,d)$-expanders are utilized to construct ``multibutterfly" networks \cite{Upfal1992}, and later, ``metabutterfly" networks \cite{Brewer1994}, which aim to mitigate wiring complexity. Valadarsky et al.~\cite{Valadarsky2016} propose ``Xpander", a general construction in the context of datacenter design. Xpander is based on the theory of graph lifts \cite{Bilu2006} which, via derandomization procedures, may generate deterministic almost-Ramanujan graphs.
Theoretical work by Marcus, Spielman and Srivastava \cite{Marcus2013} suggests it may be possible to explicitly generate Ramanujan graphs utilizing $k$-lifts via sophisticated interlacing polynomial techniques. \section{\large SpectralFly Topology Construction}\label{sec:const} Constructing explicit families of Ramanujan graphs is an ongoing topic of research. The first constructions were by Lubotzky, Phillips and Sarnak \cite{lubotzky1988ramanujan}, and independently, by Margulis \cite{margulis1988explicit}. In 2013 and 2015, Marcus, Spielman and Srivastava \cite{Marcus2013, Marcus2015} gave new constructions of bipartite Ramanujan graphs. For more on these constructions, see \cite{aksoy2020ramanujan}. Here, we focus on the construction by Lubotzky, Phillips and Sarnak, which we refer to as {\it LPS graphs}. These are the graph topologies underlying a SpectralFly network. Hence, when studying graph-theoretic properties, we use the terms ``SpectralFly" and ``LPS" interchangeably. \change{When interpreted as a network, a vertex of an LPS graph corresponds to a router, and edges between vertices correspond to bidirectional links. While fully-realized SpectralFly networks must also specify endpoint concentration (see Section \ref{sec:sim}), in this section we focus on the core LPS topology formed by the routers. } We utilize an extension of the original LPS graphs provided by Morgenstern \cite{Morgenstern1994}. LPS graphs are examples of graphs encoding algebraic group structure, called Cayley graphs. \begin{definition}[Cayley Graph] The Cayley graph, $\mathrm{Cay}(\mathcal{G},S)$, of a group $\mathcal{G}$ and a symmetric subset $S\subseteq\mathcal{G}$ (i.e., $S=S^{-1}$), is the graph on vertex set $V=\mathcal{G}$ and edge set $E=\{\{u,v\}: u^{-1}v \in S\}$. \end{definition} An LPS graph is a particular Cayley graph where both the group and the generating set $S$ depend on number-theoretic properties of two input values, $p$ and $q$, as defined below.
\begin{definition}[LPS Graphs] \label{def:LPS} The LPS graph $\mbox{LPS}(p,q)$ is a {Cayley graph defined for distinct odd primes $p, q$.} {To define the generating set,} let $x,y$ be solutions to $x^2+y^2+1=0 \pmod{q}$, and $D$ be the set of solutions $(\alpha_0,\alpha_1,\alpha_2,\alpha_3)$ to $\alpha_0^2 + \alpha_1^2 + \alpha_2^2+\alpha_3^2 = p$ which satisfy \begin{itemize} \item $\alpha_0>0$ is odd, if $p \equiv 1 \pmod{4}$ \item $\alpha_0>0$ is even, or $\alpha_0=0$ and $\alpha_1>0$, if $p \equiv 3 \pmod{4}$. \end{itemize} The generating set $S$ of $\mbox{LPS}(p,q)$ consists of all matrices \[ \begin{bmatrix} \alpha_0+\alpha_1x+\alpha_3y & -\alpha_1y + \alpha_2+\alpha_3x \\ -\alpha_1y - \alpha_2 + \alpha_3x & \alpha_0-\alpha_1x-\alpha_3y \end{bmatrix}, \] where $(\alpha_0,\alpha_1,\alpha_2,\alpha_3) \in D$. The group $G$ of $\mbox{LPS}(p,q)$ is \[ G= \begin{cases} \mathrm{PSL}(2,\mathbb{F}_q) & \mbox{ if } \left(\frac{p}{q}\right)=1 \vspace{1mm} \\ \mathrm{PGL}(2,\mathbb{F}_q) & \mbox{ if } \left(\frac{p}{q}\right)=-1 \end{cases}, \] where $\left(\tfrac{p}{q}\right)$ is the Legendre symbol, and PSL and PGL are the projective special and projective general linear groups, respectively. { If $q > 2\sqrt{p}$, then $\mbox{LPS}(p,q)$ is a $(p+1)$-regular Ramanujan graph.} \end{definition} \change{For those unfamiliar with algebraic graph theory or number theory, a full understanding of the details within the preceding definition is unnecessary for the purposes of this work (see \cite{davidoff2003elementary}). Nonetheless, we include this definition as a self-contained description of the LPS topology. Similarly, for completeness and to help garner understanding, we briefly illustrate how to generate LPS graphs in practice with an example below, and include a visualization in Figure \ref{fig:viz}. } For a full tutorial on LPS graph generation, see \cite{elzinga2010producing}.
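Before turning to the worked LPS example, the Cayley-graph construction itself can be sanity-checked on the simplest possible group: for the cyclic group $\mathbb{Z}_n$ with symmetric set $S=\{+1,-1\}$, $\mathrm{Cay}(\mathbb{Z}_n,S)$ is the $n$-cycle. A minimal sketch (the choice $n=8$ is arbitrary):

```python
# Cayley graph of the cyclic group Z_n with symmetric set S = {+1, -1};
# the result is the n-cycle, a 2-regular graph
n, S = 8, (1, -1)
edges = {frozenset({u, (u + s) % n}) for u in range(n) for s in S}
deg = {u: sum(u in e for e in edges) for u in range(n)}

assert len(edges) == n                         # the n-cycle has n edges
assert all(d == len(S) for d in deg.values())  # |S|-regular
```

The LPS construction follows exactly this template, with $\mathbb{Z}_n$ replaced by $\mathrm{PSL}(2,\mathbb{F}_q)$ or $\mathrm{PGL}(2,\mathbb{F}_q)$ and $S$ by the $p+1$ generator matrices of Definition \ref{def:LPS}.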
\begin{figure}[t] \centering \begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}] \def.4\columnwidth{.4\columnwidth} \def1{1} \tikzstyle{element}=[square,draw=black,line width=1pt,align=center, inner sep = 0pt] \tikzstyle{action}=[arrows={->[line width=1pt,length=2mm,width=3mm]},line width = 1pt] \draw (0,0) node[element, fill=blue!15] (m1) {$\begin{matrix} 0 & 1 \\ 1 & 2\end{matrix}$}; \draw (.4\columnwidth,1) node[element, fill=red!15] (m2) {$\begin{matrix}1& 2\\0& 2\end{matrix}$}; \draw (-.4\columnwidth,1) node[element, fill=red!15] (m3) {$\begin{matrix}1& 3\\4& 4\end{matrix}$}; \draw (-.4\columnwidth,-1) node[element, fill=red!15] (m4) {$\begin{matrix}1& 4\\3& 0\end{matrix}$}; \draw (.4\columnwidth,-1) node[element, fill=red!15] (m5) {$\begin{matrix}1& 1\\1& 4\end{matrix}$}; \draw[action] (m1) -- (m2) node[midway, above, sloped] {\small{$\begin{pmatrix} 1&1\\2&4\end{pmatrix}$}}; \draw[action] (m1) -- (m3) node[midway, above, sloped] {\small{$\begin{pmatrix} 1&4\\3&4\end{pmatrix}$}}; \draw[action] (m1) -- (m4) node[midway, below, sloped] {\small{$\begin{pmatrix} 1&2\\1&4\end{pmatrix}$}}; \draw[action] (m1) -- (m5) node[midway, below, sloped] {\small{$\begin{pmatrix} 1&3\\4&4\end{pmatrix}$}}; \end{tikzpicture} \caption{ Neighborhood of a vertex in LPS$(3,5)$. Vertices are from $\mathrm{PGL}(2,\mathbb{F}_5)$ labeled by a representative matrix from the coset; edges $\{u,v\}$ are labeled by a generating element $u^{-1}v$ \vspace{-2.5em}} \label{F:xwing} \end{figure} \begin{example} Let $(p,q)=(3,5)$. These are valid inputs for an LPS graph because $p,q$ are distinct, odd primes and $5>2\sqrt{3}$. Since $x^2 \not \equiv 3 \pmod{5}$ for any $x$, the Legendre symbol $\left( \tfrac{3}{5} \right)=-1$ and hence the group is $\mathrm{PGL}(2,\mathbb{F}_5)$ where $\mathbb{F}_5=\{0,1,\dots,4\}$. 
The elements of $\mathrm{PGL}(2,\mathbb{F}_5)$ are cosets of $2 \times 2$ matrices with elements in $\mathbb{F}_5$ and nonzero determinant such that $A,B$ are in the same coset if $A=xB$ for some nonzero $x$. For example, the element \[ v= \left\{ \begin{bmatrix} 0 & 1 \\ 1 & 2 \end{bmatrix}, \begin{bmatrix} 0 & 2 \\ 2 & 4 \end{bmatrix}, \begin{bmatrix} 0 & 3 \\ 3 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 4 \\ 4 & 3 \end{bmatrix} \right\} \] represents a {\it single} element of $\mathrm{PGL}(2,\mathbb{F}_5)$, and hence a single vertex of the graph \mbox{LPS}$(3,5)$. Next, we construct the generating set $S$. As $p \equiv 3 \pmod{4}$, we are interested in solutions of $\alpha_0^2 + \alpha_1^2 + \alpha_2^2+\alpha_3^2 = 3$ where either $\alpha_0>0$ is even, or $\alpha_0=0$ and $\alpha_1>0$. There are no solutions of the former type; solutions of the latter type are: \[ (0,1,1,1),(0,1,-1,-1),(0,1,-1,1),(0,1,1,-1). \] Finally, using $(x,y)=(0,2)$ as a solution to $x^2+y^2+1 \equiv 0 \pmod{5}$, the elements of the generating set $S$ may be constructed. For example, \change{the coset} for the generator $s \in S$ corresponding to the solution $(0,1,1,1)$ is \[ \left\{ \begin{bmatrix} 1 & 2 \\ 1 & 4 \end{bmatrix}, \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix}, \begin{bmatrix} 3 & 1 \\ 3 & 2 \end{bmatrix}, \begin{bmatrix} 4 & 3 \\ 4 & 1 \end{bmatrix} \right\}. \] $\mbox{LPS}(3,5)$ is then constructed by creating edges between $u,v \in \mathrm{PGL}(2,\mathbb{F}_5)$ whenever $us=v$ for $s \in S$. Figure \ref{F:xwing} illustrates the neighborhood of a certain vertex in LPS$(3,5)$, labeling edges by the associated element $s \in S$.
\end{example} \begin{figure*}[ht] \centering \phantom{}\hfill \includegraphics[width=0.48\linewidth]{lps_3_7_all_crop.pdf} \hfill \includegraphics[width=0.48\linewidth]{lps_3_17_cycle_crop.pdf} \hfill \phantom{} \\ \includegraphics[width=0.5\linewidth]{lps3_17_legend.pdf} \caption{ Visualization of SpectralFly topology instances: the entire LPS$(3,7)$ graph (left) and the 6-hop neighborhood of a vertex in LPS$(3,17)$ (right). Since LPS graphs are vertex transitive, the $k$-hop neighborhood of every vertex has the same structure. Furthermore, the local neighborhood surrounding a vertex is a tree of variable depth depending on the inputs $p,q$. For instance, a shortest length cycle in LPS$(3,17)$ is highlighted in blue, and utilizes vertices at distance 6 from the center vertex. \vspace{-1.5em} } \label{fig:viz} \end{figure*} \begin{figure}[ht!] \centering \includegraphics[width=0.49\columnwidth]{LPS_possSizes_vertices.pdf} \includegraphics[width=0.49\columnwidth]{bw_all_LPS.pdf} \\ \includegraphics[clip,width=0.49\columnwidth]{rad_test.png}\label{fig:sizeComp}% \includegraphics[clip,width=0.49\columnwidth]{bb_comp_log.pdf}% \vspace{-0.5em} \caption{Possible number of vertices and radix of LPS for $p,q<300$ (upper left), normalized bisection bandwidth of LPS for $p,q<100$ (upper right), feasible topology sizes per radix (lower left), raw bisection bandwidth comparison (lower right). \vspace{-3em}} \label{fig:sizes_bws} \vspace{-0.2em} \end{figure} There are several reasons for selecting LPS graphs amongst currently proposed Ramanujan constructions. First, LPS graphs are flexible in terms of feasible sizes and radix values. Figure \ref{fig:sizes_bws} (left) plots radix and vertex counts for all possible LPS graphs generated with inputs $p,q<300$.
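These radix and vertex count combinations can be enumerated directly from Definition \ref{def:LPS}: the radix is $p+1$, and the number of routers is $|\mathrm{PSL}(2,\mathbb{F}_q)|=(q^3-q)/2$ when $\left(\tfrac{p}{q}\right)=1$ or $|\mathrm{PGL}(2,\mathbb{F}_q)|=q^3-q$ when $\left(\tfrac{p}{q}\right)=-1$. A small sketch in plain Python, computing the Legendre symbol via Euler's criterion:

```python
# Sketch: enumerating feasible SpectralFly/LPS sizes.  The router count is
# (3 - (p|q)) * (q^3 - q) / 4, covering both the PSL and PGL cases at once.

def legendre(p, q):
    """Legendre symbol (p|q) for an odd prime q, via Euler's criterion."""
    r = pow(p, (q - 1) // 2, q)   # 1 for residues, q-1 for non-residues
    return -1 if r == q - 1 else r

def lps_routers(p, q):
    return (3 - legendre(p, q)) * (q**3 - q) // 4

# The worked example: LPS(3,5) lives in PGL(2,F_5), giving 120 routers of radix 4.
sizes = {(p, q): lps_routers(p, q)
         for (p, q) in [(3, 5), (11, 7), (23, 11), (53, 17)]}
```

For instance, this reproduces the router counts 168, 660, and 2448 for LPS$(11,7)$, LPS$(23,11)$, and LPS$(53,17)$ appearing later in Table \ref{tab:structProp}.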
While, like almost any structured family of topologies, some radix and vertex count combinations are infeasible, the absence of large gaps in the plot suggests the high likelihood of finding an LPS graph ``acceptably close" to any given desired radix and vertex count combination. As discussed further in Section \ref{sec:structProp}, this flexibility stands in contrast to many competing graph topologies. LPS graphs afford users the ability to generate {\it arbitrarily large} graphs for a given radix, whereas the sizes of other topologies can only be increased via the radix. Secondly, in addition to exhibiting the Ramanujan property, LPS graphs possess other desirable characteristics. For example, since LPS graphs are Cayley graphs, they are vertex-transitive. Informally, this means every vertex has an identical local environment, i.e., the graph ``looks the same" from every vertex. Thus, the 6-hop neighborhood of a vertex in LPS$(3,17)$ seen in Figure \ref{fig:viz} has an identical structure for all vertices. Consequently, vertex-transitivity enables simplifications which benefit the computational cost and design of routing protocols. Their algebraic structure also affords other benefits, such as optimal edge-connectivity (a key consideration for network resiliency) as well as efficient algorithms by which topologies on tens of millions of vertices may be easily generated \cite{elzinga2010producing}. In addition to possessing these properties by virtue of being Cayley graphs, LPS graphs are also widely studied. Over the past several decades, researchers have bounded or characterized the diameter, path length behavior, and fault tolerance of LPS graphs, making them attractive options for supercomputing topologies, as argued in \cite{aksoy2020ramanujan}. One such key property for interconnection networks is bisection bandwidth.
Figure \ref{fig:sizes_bws} (upper right) presents the normalized bisection bandwidth of LPS graphs for various-sized topologies on radix values between $k=4$ and 98, divided by $nk/2$ to ensure a size-agnostic comparison. We observe that larger normalized bisection bandwidth values are achieved for larger radix graphs, with diminishing returns. In contrast to some other topologies we survey, the bisection bandwidth doesn't decay as LPS graph size increases per fixed radix, which is a consequence of the Ramanujan property. Furthermore, larger normalized bisection bandwidth values are feasible for larger radix networks. \section{Structural Property Comparison}\label{sec:structProp} \begin{table} \footnotesize \begin{tabular}{l|c|c|c|c|c|c} \multirow{2}{*}{\bf Topology} & \multirow{2}{*}{\bf \change{Routers}} &{\bf \change{Router}} & \multirow{2}{*}{\bf Diam.} & \multirow{2}{*}{\bf Dist.} & \multirow{2}{*}{\bf Girth} & \multirow{2}{*}{$\boldsymbol{\mu_1}$} \\ & & {\bf Radix} & & & & \\ \toprule LPS$(11,7)$ & 168 & 12 & 3 & 2.39 & 3 & 0.50 \\ SF$(7)$ & 98 & 11 & 2 & 1.89 & 3 & 0.62 \\ BF$(13,3)$ & 234 & 11 & 3 & 2.56 & 3 & 0.27 \\ DF$(12)$ & 156 & 12 & 3 & 2.70 & 3 & 0.08 \\ \hline LPS$(23,11)$ & 660 & 24 & 3 & 2.35 & 3 & 0.65 \\ SF$(17)$ & 578 & 25 & 2 & 1.96 & 3 & 0.64 \\ BF$(37,3)$ & 666 & 23 & 3 & 2.61 & 3 & 0.13 \\ DF$(24)$ & 600 & 24 & 3 & 2.84 & 3 & 0.04 \\ \hline LPS$(53,17)$ & 2448 & 54 & 3 & 2.32 & 3 & 0.74 \\ SF$(37)$ & 2738 & 55 & 2 & 1.98 & 3 & 0.65 \\ BF$(97,4)$ & 3104 & 54 & 3 & 2.76 & 3 & 0.07 \\ DF$(53)$ & 2862 & 53 & 3 & 2.93 & 3 & 0.02 \\ \hline LPS$(71,17)$ & 4896 & 72 & 4 & 2.61 & 4 & 0.77 \\ SF$(47)$ & 4418 & 71 & 2 & 1.98 & 3 & 0.66 \\ BF$(137,4)$ & 4384 & 74 & 3 & 2.76 & 3 & 0.05 \\ DF$(69)$ & 4830 & 69 & 3 & 2.94 & 3 & 0.01 \\ \hline LPS$(89,19)$ & 6840 & 90 & 4 & 2.61 & 4 & 0.80 \\ SF$(59)$ & 6962 & 89 & 2 & 1.99 & 3 & 0.66 \\ BF$(157,5)$ & 7850 & 85 & 3 & 2.82 & 3 & 0.06 \\ DF$(85)$ & 7310 & 85 & 3 & 2.95 & 3 & 0.01 \\ \bottomrule \end{tabular}
\caption{Basic structural properties \vspace{-1.5em}}\label{tab:structProp} \vspace{-1em} \end{table} In order to understand the trade-offs between costs, diameter, and bisection bandwidth, we compare the combinatorial properties of four topologies representing extreme points at or near the design space Pareto frontier. Specifically, we consider the DragonFly (optimizing cost and diameter), SlimFly (optimizing diameter and size), BundleFly (optimizing diameter and cost), and LPS/SpectralFly (optimizing spectral gap). Since random graph constructions, such as the aforementioned JellyFish, have sub-optimal spectral gap \cite{Friedman2003}, and also face serious challenges to adoption in practice due to their unstructured nature, we limit our comparison to {\it deterministic} topologies. Furthermore, we have selected topologies capable of being scaled to beyond tens of thousands of vertices, and which are flexible enough to generate instances with similar size, radix and link counts to other topologies, in order to ensure a fair comparison. Satisfying these criteria, the topologies we consider are defined as follows: \begin{itemize}[noitemsep,topsep=0pt] \item LPS$(p,q)$: The topology underlying SpectralFly, LPS graphs \cite{lubotzky1988ramanujan} are described in Definition \ref{def:LPS}. The radix is $p+1$ and the number of vertices is \change{$\left(3-\left(\tfrac{p}{q}\right)\right)\left(\nicefrac{q^3-q}{4}\right)$}. \item SlimFly, SF$(q)$: Studied in \cite{besta2014slim}, SlimFly topologies are based on the MMS graph construction by McKay, Miller and \v{S}ir\'{a}\v{n} \cite{mckay1998note}. For a description of the MMS graph construction, see \cite{hafner2004geometric}. The number of vertices is $2q^2$ and the radix is $\frac{3q-\delta}{2}$, where $q=4k+\delta$ for $\delta \in \{-1,0,1\}$. \item BundleFly, BF$(p,s)$: a multi-star product of an MMS graph with parameter $s$ and a Paley graph with parameter $p$ -- see \cite{lei2020bundlefly}.
The number of vertices is $2ps^2$ and the radix is $\frac{p-1}{2}+\frac{3s-\delta}{2}$ where $s=4k+\delta$ for $\delta \in \{-1,0,1\}$. \item DragonFly, DF$(a)$: while there are many DragonFly variants (see \cite{teh2017design,aksoy2020ramanujan} for specifications), we consider the ``canonical" DragonFly topology consisting of $a+1$ fully connected groups, each on $a$ vertices. The number of vertices is $a(a+1)$ and the radix is $a$. \end{itemize} While it would be interesting to explore properties of Xpander topologies here, applying complicated interlacing polynomial approaches for their construction and the need to calculate the set of all shortest paths for every pair of routers make such an evaluation impractical at scales of interest. We consider 5 size classes for each topology, ranging from \!\raisebox{0.5ex}{\texttildelow}{}100 vertices to \!\raisebox{0.5ex}{\texttildelow}{}7K vertices. For each size class, we conduct a parameter search to select the topology with closest radix and number of vertices relative to the others in that class. Table \ref{tab:structProp} shows that the 4 topologies within each size class have very close radix and fairly close node counts, ensuring a fair comparison \change{of the performance based on the \emph{structural properties} of these networks} in our subsequent experiments. \change{The topologies also have similar {\it girth} (length of the shortest cycle), with larger LPS topologies being the sole examples of girth 4 topologies. } \paragraph{Feasible topology sizes per radix} The LPS construction accommodates a variety of radix and node size combinations. {In general, Ramanujan graphs of any size are possible; however, the smallest possible LPS graph is on 120 vertices}. Fig. \ref{fig:sizes_bws} (lower left) plots possible vertex count and radix combinations. For SlimFly and canonical DragonFly, a large, low-radix topology is impossible, as the radix uniquely determines the topology size.
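The closed-form size formulas above are simple enough to check programmatically; the sketch below reproduces the (routers, radix) pairs for SlimFly, BundleFly, and DragonFly from Table \ref{tab:structProp}.

```python
# Sketch: (routers, radix) from the closed-form size formulas quoted above.

def delta(q):
    """Write q = 4k + delta with delta in {-1, 0, 1}."""
    r = q % 4
    assert r != 2, "q = 4k + 2 is not a valid MMS parameter"
    return r if r <= 1 else -1          # q % 4 == 3 means delta = -1

def slimfly(q):
    return 2 * q * q, (3 * q - delta(q)) // 2

def bundlefly(p, s):
    return 2 * p * s * s, (p - 1) // 2 + (3 * s - delta(s)) // 2

def dragonfly(a):
    return a * (a + 1), a
```

For instance, SF$(7)$ gives $(98, 11)$ and BF$(13,3)$ gives $(234, 11)$, matching the first size class in Table \ref{tab:structProp}.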
BundleFly allows multiple possible vertex sizes per radix, but the choice of radix constrains the possible vertex sizes. The green points plot the maximum possible number of vertices per each feasible BundleFly radix. Some of the maxima drop off sharply for certain radix values, suggesting the range of possible sizes may be unstable. \paragraph{Diameter and average path lengths} As summarized in Table \ref{tab:structProp}, SlimFly always has diameter 2, while BundleFly and DragonFly have diameter 3. In contrast, the diameter of LPS graphs depends on the topology size; numerical experiments from \cite{sardari2019diameter} suggest this diameter is asymptotic to $(4/3)\log_5(n)$. LPS has the second smallest average shortest path length (i.e. distance) across all size classes, in spite of sometimes having the largest diameter (for the fourth and fifth size classes). This gap between diameter and average distance suggests ``most'' pairs of vertices in LPS graphs may be closer in distance than the diameter. This is also apparent in Figure \ref{fig:viz}'s visualization of LPS$(3,7)$, where relatively fewer vertices appear at distance equal to the diameter from the center vertex. Indeed, recent work by Sardari \cite{sardari2019diameter} proved that for any $k$-regular Ramanujan graph, only a tiny fraction of all pairs of vertices have distance greater than $(1+\varepsilon)\log_{k-1}(n)$. Furthermore, for each vertex $x$, the number of vertices at distance greater than this decays exponentially, being less than $n^{1-\varepsilon}$. \paragraph{Normalized Laplacian spectral gap, $\mu_1$} To enable cross-size comparison, we compute the normalized Laplacian spectral gap, $\mu_1$, related to the second largest adjacency eigenvalue $\lambda$ by $\mu_1=\nicefrac{k-\lambda}{k}$, where $k$ is the radix. Smaller values of $\lambda$ ensure better spectral expansion, and correspond to {\it larger} values of $\mu_1$.
Compared with SlimFly and LPS, Table \ref{tab:structProp} shows that BundleFly and DragonFly have smaller values of $\mu_1$, which decay for larger sized topologies. As proven in \cite{aksoy2020ramanujan}, the second normalized Laplacian eigenvalue of the SlimFly topology SF$(q)$ is $\frac{2}{3+\delta/q}$ \!\raisebox{0.5ex}{\texttildelow}{}$\frac{2}{3}$. Since LPS graphs are Ramanujan, they have $\mu_1$ at least as large as $\frac{k-2\sqrt{k-1}}{k}$. Thus, an LPS graph with radix $k\geq 35$ is guaranteed to have larger $\mu_1$ than {\it any} SlimFly topology. LPS graphs with smaller radix values may still have larger $\mu_1$ (as seen in the second size class in Table \ref{tab:structProp}) or smaller $\mu_1$ (as seen in the first size class). \paragraph{Bisection bandwidth} We use METIS \cite{karypis1997metis} to approximate bisection bandwidth, establishing an upper bound given by the points in Fig. \ref{fig:sizes_bws} (lower right). We also compute a lower bound from \cite{fiedler1973algebraic}, $\mbox{BW}(G)\geq \frac{\mu_1 k n}{4}$, where $k$ is the radix and $n$ is the number of vertices. The exact bisection bandwidth lies between these points, represented by the shaded regions. Recall that we are considering the router topology, without regard to a specific concentration. While one can further analyze bisection bandwidth under a particular concentration level, the relative orderings we observe here also hold whenever chosen concentration levels are equal, and so we omit this design choice for clarity and simplicity. As seen in Fig. \ref{fig:sizes_bws} in log-scale, as the size of SlimFly topologies increases, the gap between its normalized bisection bandwidth and that of a similar radix LPS widens further. SpectralFly has up to a 39\% increase in bisection bandwidth over SlimFly. This can be confirmed analytically: applying bounds from \cite{aksoy2020ramanujan}, the normalized bisection bandwidth of SlimFly is asymptotically $1/3$.
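Returning briefly to the spectral gap comparison: the radix-35 threshold can be checked numerically against SlimFly's $2/3$ asymptote, since the Ramanujan guarantee $\mu_1 \geq (k-2\sqrt{k-1})/k$ first exceeds $2/3$ at $k=35$. A quick sketch:

```python
# Sketch: locating the radix at which the worst-case normalized Laplacian
# spectral gap of a k-regular Ramanujan graph surpasses SlimFly's asymptotic 2/3.
import math

def mu1_ramanujan_bound(k):
    """Lower bound on mu_1 for any k-regular Ramanujan graph."""
    return (k - 2 * math.sqrt(k - 1)) / k

threshold = next(k for k in range(3, 100) if mu1_ramanujan_bound(k) > 2 / 3)
```

For radix 34 the bound is about 0.662, just below $2/3$; at radix 35 it crosses above, consistent with the claim in the text.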
LPS graphs have normalized bisection bandwidth at least $\frac{k-2\sqrt{k-1}}{2k}$, guaranteeing an LPS graph with $k\geq 36$ has larger normalized bandwidth than {\it any} SlimFly. We emphasize this is a {\it lower bound}; the normalized bisection bandwidth of LPS graphs computed by METIS exceeds $1/3$ around radix 18. \subsection{Structural Properties Under Link Failures} \begin{figure}[] \includegraphics[clip,width=0.5\columnwidth]{diam_delete_1.pdf}% \includegraphics[clip,width=0.5\columnwidth]{diam_delete_3.pdf} \\ \includegraphics[clip,width=0.49\columnwidth]{dist_delete_1.pdf} \includegraphics[clip,width=0.49\columnwidth]{dist_delete_3.pdf} \\ \includegraphics[clip,width=0.5\columnwidth]{bw_delete_1.pdf}% \includegraphics[clip,width=0.49\columnwidth]{bw_delete_3.pdf}% \caption{ Structural properties under edge failures for comparable LPS, SlimFly, BundleFly and DragonFly topologies on about 600 vertices (left column) and 7K vertices (right) \vspace{-1.5em}}\label{fig:deleteAll} \end{figure} We also examine how these structural properties vary under link failures of varying magnitudes. For each topology, we delete a proportion $k$ of its edges, chosen uniformly at random. Our results are averaged over sufficiently many trials.\footnote{For each topology, proportion $k$, and structural property measured, we increase the number of trials $x$ in powers of 10 until the coefficient of variation of sample means across 10 batches of $x$ trials is less than $10\%$.} We run these experiments for ``small" instances of each topology, on \!\raisebox{0.5ex}{\texttildelow}{}600 vertices, as well as intermediate-sized topologies on \!\raisebox{0.5ex}{\texttildelow}{}5K vertices. Figure \ref{fig:deleteAll} presents the results for diameter, mean distance, and bisection bandwidth. Note these measures are only well-defined for {\it connected} topologies; however, all four topologies remain consistently connected for small (left column) and medium (right column) sizes under random link failures until 60\% and 80\%, respectively, of edges are deleted.
Thus, we only consider edge deletion proportions up until this disconnection threshold. With regard to diameter, SlimFly has the smallest value of 2 of the topologies surveyed. However, at 10\% edge failure, this diameter increases to $4$, while LPS topologies exhibit slightly smaller diameter. This suggests SlimFly diameter is more {\it fragile} than that of LPS, congruent with our prior observation that while nearly every pair of vertices in SlimFly is separated by a 2-hop distance, only very few pairs of vertices in an LPS graph achieve the diameter \cite{sardari2019diameter}. While LPS maintains a slight edge over SlimFly for 10\% edge failures, for 20-50\% edge failures they have comparable diameter, and for $>50$\% SlimFly has slightly smaller diameter. Lastly, for mean distance and bisection bandwidth, LPS and SlimFly perform the best. SlimFly has the smallest mean distance across all edge failure rates, with the gap between it and LPS narrowing slightly as a higher proportion of edges fail. For bisection bandwidth, LPS retains its larger bandwidth over SlimFly; this gap narrows significantly beyond 20\% failure. In summary, LPS and SlimFly are consistently more resilient under random edge failures than BundleFly and DragonFly with regard to diameter, average distance, and bisection bandwidth. For diameter, LPS and SlimFly are comparable, with LPS having slightly better diameter for 10\% edge failures and worse for 50\% and above. For average distance and bisection bandwidth, SlimFly retains lower hop count while LPS retains superior bisection bandwidth. \section{Routing Algorithms}\label{sec:route} We consider three types of routing strategies for SpectralFly: shortest path routing (minimal), Valiant routing, and Universal Global Adaptive (UGAL) routing. In minimal routing, given a source-destination pair $(s,d)$, a packet is forwarded along the routers on the shortest path from $s$ to $d$.
In theory, minimal routing will minimize the overall latency of communication, thereby outperforming other routing schemes when the underlying network has no congestion. However, in a congested network, shortest paths may not be the best choice for routing. This is especially true when the betweenness centrality scores of a set of vertices (routers) in a graph (topology) are quite high, meaning these vertices lie on the shortest paths between many pairs of vertices in the graph. Consequently, these vertices will become the bottlenecks in a highly-saturated network. Congestion concerns have prompted alternative routing schemes which improve performance on various topologies. One such alternative to shortest path routing is Valiant routing~\cite{valiant1982scheme}, which proceeds in two phases: given a source-destination pair $(s,d)$, a random intermediate router $i$ is chosen. The packet is then routed from $s$ to $i$ along a shortest path. Once the packet arrives at $i$, the second phase forwards the packet from $i$ to $d$ by following a shortest path. However, Valiant routing ignores the current state of routers, such as queue length. To ameliorate this, the UGAL family of routing protocols selects dynamically between the minimal path and a Valiant-style path based on the current state of the system. For example, in the UGAL-L variant, each router only maintains information about the queue lengths of the local outports. Using this information at the source, a packet either is forwarded to a random intermediate node first or follows a minimal path, based on the queue sizes of the local random outport and minimal outport, and the total hop counts from the source to the destination for these two possible routes.
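As a sketch of this decision logic (with hypothetical helper callables, since the actual router logic depends on the simulator and hardware state), the UGAL-L source decision compares queue-length-weighted hop counts for the minimal route and one randomly chosen Valiant route:

```python
# Sketch of the UGAL-L decision at the source router.  `hops(a, b)` (shortest-
# path hop count) and `queue_len(port)` (local outport queue size) are assumed
# helpers; a real implementation reads these from router state.
import random

def ugal_l_choose(src, dst, nodes, hops, queue_len):
    mid = random.choice([v for v in nodes if v not in (src, dst)])
    h_min = hops(src, dst)                     # minimal-path hop count
    h_val = hops(src, mid) + hops(mid, dst)    # Valiant-path hop count
    # Prefer minimal unless its (queue length x hop count) cost is larger.
    if queue_len(('minimal', src, dst)) * h_min <= queue_len(('valiant', src, mid)) * h_val:
        return ('minimal', [src, dst])
    return ('valiant', [src, mid, dst])
```

With empty queues the minimal route always wins; a congested minimal outport tips the decision toward the Valiant route, at the cost of roughly doubling the expected path length.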
\subsection{Deadlock Avoidance} Due to limited resources on each router (buffer count, size, etc.), cyclic dependencies can arise in the resource dependency graph, where messages may try to flow from one router to the next while messages from the next router may try to flow in the reverse direction. As the buffers fill up and traffic from each router blocks the other in a cycle, this ultimately results in a deadlock. Such deadlocks can be avoided primarily in three ways: (1) by creating an acyclic routing scheme; (2) by using virtual channels (VC) and changing the virtual channel to route a packet on each network hop (by incrementing the virtual channel on each network hop, deadlock-free routing can be guaranteed); (3) by running a cycle-detection algorithm on the routing graph beforehand. Each time a cycle is detected, a new virtual channel is added to one of the routing edges, continuing until there are no more cycles. We have chosen the second, virtual channel-based approach to avoid deadlock, since it does not require any preprocessing of the topology graph. Letting $d$ denote the diameter of SpectralFly, we set the number of virtual channels to $d+1$ for shortest path routing and $2d + 1$ for Valiant routing.
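The hop-incrementing scheme above can be sketched in a few lines; the point is that a packet on hop $i$ only ever waits for buffers in VC $i$, so the channel dependency graph is acyclic.

```python
# Sketch: hop-indexed virtual channel (VC) assignment for deadlock freedom.
# A packet enters the network on VC 0 and sits on VC i after i hops, so no
# cyclic buffer dependency can form.

def vcs_required(diameter, valiant=False):
    """VC count: d + 1 for minimal routing, 2d + 1 for two-phase Valiant."""
    return 2 * diameter + 1 if valiant else diameter + 1

def vc_for_hop(hop_index):
    return hop_index  # incremented once per network hop

max_hops = 3                          # e.g., a diameter-3 SpectralFly instance
vc_trace = [vc_for_hop(i) for i in range(max_hops + 1)]
```

For a diameter-3 instance this yields 4 VCs for minimal routing and 7 for Valiant routing, matching the $d+1$ and $2d+1$ counts above.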
\begin{figure*}[tp] \centering \begin{subfigure}[b]{0.26\textwidth} \centering \includegraphics[width=\linewidth]{speedup_offered_load_random_ugal_all_topo_c.pdf} \caption{Random.} \label{fig:offered_load_random_ugal} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\linewidth]{speedup_offered_load_bitshuffle_ugal_all_topo_c.pdf} \caption{Bit shuffle.} \label{fig:offered_load_bitshuffle_ugal} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\linewidth]{speedup_offered_load_bitrev_ugal_all_topo_c.pdf} \caption{Bit reverse.} \label{fig:offered_load_bitreverse_ugal} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\linewidth]{speedup_offered_load_transpose_ugal_all_topo_c.pdf} \caption{Transpose.} \label{fig:offered_load_transpose_ugal} \end{subfigure} \vspace*{-1em} \\ \centering \includegraphics[width=0.4\textwidth]{legends.png} \vspace{-1.0em} \caption{ Performance comparison across topologies, traffic patterns, and offered load conditions under UGAL-L routing. \vspace{-1.5em} } \label{fig:offered_load_ugal} \end{figure*} \section{Simulation Results} \label{sec:sim} In this section, we report our simulation results on evaluating SpectralFly, SlimFly, BundleFly and DragonFly topologies with different workloads exhibiting interesting communication patterns that are prevalent in many HPC applications. \subsection{Simulation Software} We conduct our experiments in the Structural Simulation Toolkit (SST) Macroscale Element Library (SST/macro) simulator~\cite{sstrepo}. Our simulation approach performs online simulation, which involves skeletonization of an application during the compilation step so that part of the application involving communication (such as communication API calls, MPI\_alltoall etc.) can be intercepted by the simulator during runtime. The simulator replaces these calls with various built-in network component model implementations. 
The application can then run inside the simulator without any significant change. The user can provide necessary hardware parameter values (for routers, NICs, topologies, routing schemes, etc.) to the simulator for running the application with different hardware configurations. We have used the \textit{Simulator Network for Adaptive Priority Packet Routing (SNAPPR)} network model in SST/macro to evaluate different topologies. SNAPPR implements coarse-grained cycle-based simulation to simulate priority queue-based QoS. In addition, it can also restrict the injection rate of messages for congestion control. For a detailed discussion about available network models in SST/macro, we refer to the SST/macro user manual~\cite{sstrepo}. \subsection{Configuration and Simulation Setup}\label{subsec:config} We evaluate the performance of different micro-benchmarks by considering SpectralFly, DragonFly, SlimFly and BundleFly topologies. We conducted our experiments with \!\raisebox{0.5ex}{\texttildelow}{}$8.7k$ network endpoints and with 32-port routers. To generate the SpectralFly topology with \!\raisebox{0.5ex}{\texttildelow}{}$8.7k$ network endpoints, we set $(p, q)=(23, 13)$ to generate a graph with 1092 routers, and a concentration of $8$ endpoints per router. For the DragonFly topology configuration, the number of groups used is 69 ($g$), with 16 routers per group ($a$), each router connected to $8$ endpoints ($p$), and 8 global links ($h$) per router. This conforms to the recommended balance to support full global bandwidth for DragonFly with radix-$k$ switches ($p=k/4, h=k/4, a=k/2$). The global links in the DragonFly topology are arranged in a circulant manner~\cite{hastings2015comparing,kaplan2017unveiling}, since this arrangement provides better bisection bandwidth than the absolute arrangement. For the SlimFly topology, $q$ is set to 27, with each router connected to 8 endpoints.
Finally, for the BundleFly topology, the graph is constructed with $p=s=9$, and each router has a concentration of 6 endpoints. In the case of under-subscription (for example, when running microbenchmarks with $2^{13} = 8192$ ranks out of \!\raisebox{0.5ex}{\texttildelow}{}$8.7k$ available ranks), \change{the physical nodes allocated to the job} are chosen randomly. \change{Each MPI rank is then sequentially allocated to nodes based on the standard ordering for the topology. For the SpectralFly topology we use the essentially unstructured ordering resulting from the Elzinga construction~\cite{elzinga2010producing}.} We report our experimental results with various routing strategies. Valiant routing demonstrates a similar performance trend. The router buffer size has been set to 64KB (other buffer sizes have also been tested, but the results are not reported here due to space constraints). For simulation, the number of virtual channels has been set to the diameter of the graph plus one. \subsection{Experimental results} \begin{figure}[tp] \centering \includegraphics[width=0.80\linewidth]{speedup_offered_load_random_min_c.pdf} \label{fig:offered_load_random} \vspace{-0.8em} \caption{ Performance across topologies, and offered load conditions with random micro-benchmark, under minimal routing. \vspace{-3em}} \label{fig:offered_load_minimal} \end{figure} \subsubsection{Micro-benchmarks to assess performance under congestion} We consider standard traffic pattern micro-benchmarks to evaluate the performance of different topologies under various network capacities (offered load). These include random, bit shuffle, transpose, and bit reverse traffic patterns. In each case, a source node communicates with a destination node that is determined by a specific permutation of the bit representation of the source. Random traffic patterns can be found in many irregular and graph applications.
The shuffle traffic pattern (obtained by rotating the bits of the source left by 1) can be found in Fast Fourier Transform (FFT) and sorting applications. Matrix transpose is a basic linear-algebraic operation. We consider a total of 8192 endpoints for these experiments. For each traffic pattern run on a topology, we collect the maximum time taken across all the messages under a particular offered load. The results are reported in~\Cref{fig:offered_load_ugal}. Here, on the x-axis we plot the offered load, i.e., how much of the network is saturated when running the micro-benchmarks. To simulate network congestion, we inject messages with varying delays by simulating a Poisson process. We report the speedup relative to the execution with DragonFly. Each topology was run with UGAL-L routing. As can be observed from the figure, for all the micro-benchmarks SpectralFly performs the best. The better performance of SpectralFly can be attributed to the superior bisection bandwidth and available path diversity of the SpectralFly topology. At or beyond 70\% of the network capacity, the network becomes saturated. Between BundleFly and SlimFly, BundleFly exhibits better performance (except with bit shuffle traffic). These experiments demonstrate that, because of stronger discrepancy and spectral properties, SpectralFly is robust enough to accommodate diverse traffic patterns under varying degrees of network congestion. \begin{figure}[tp] \centering \includegraphics[width=0.7\linewidth]{speedup_bars_offered_load_valiant_routing_lps.pdf} \vspace{-0.8em} \caption{ \change{Evaluation of Valiant routing for the SpectralFly topology with micro-benchmarks.\vspace{-2em}}} \label{fig:offered_load_random_lps_routing} \end{figure} \subsubsection{Evaluation of different routing schemes} Besides evaluating different topologies with the UGAL routing, we also consider minimal and Valiant routing for evaluation.
\Cref{fig:offered_load_minimal} presents the performance of the random benchmark with minimal routing. With the random micro-benchmark, SpectralFly demonstrates better performance compared to the other topologies. Bit shuffle and transpose exhibit similar patterns. Next, in order to evaluate the difference between minimal and Valiant routing for SpectralFly, we ran the four micro-benchmarks under varying network loads and report the results in \Cref{fig:offered_load_random_lps_routing}. The execution time is normalized w.r.t. the execution time of the minimal (shortest-path) routing scheme on SpectralFly. We see significant improvement for all offered loads with the bit shuffle, bit reverse, and transpose traffic patterns, while the random pattern has a significant decrease in performance (except at 60\% offered load). This suggests the increase in path diversity gained by applying Valiant routing to a structured communication pattern better exploits the discrepancy property of the LPS graphs. Moreover, there is already significant path diversity in minimal routing for the random micro-benchmark, and so the addition of Valiant routing provides a minimal increase in path diversity while doubling the expected length of the routing paths. This suggests SpectralFly performs best when traffic is unstructured, either due to the logical communication pattern or from the choice of routing algorithm.
{ \subsection{Evaluation of Topologies under Real-World Traffic Patterns} \subsubsection{Patterns Considered} For evaluating different topologies with real-world traffic patterns (under both minimal and UGAL routings), we consider communication motifs from the Ember Communication Pattern Library~\cite{emberrepo}: \begin{enumerate}[label=(\roman*),wide, labelwidth=!, labelindent=0pt] \item \textit{Nearest neighbor communication pattern -- Halo3D-26:} The nearest neighbor communication pattern, found in stencil workloads, is captured by the Halo3D-26 motif, where each MPI rank communicates with 6 of its nearest neighbors as well as 20 of its diagonal neighbors, for a total of 26 neighbors. \item \textit{Wavefront communication pattern -- Sweep3D:} The wavefront communication pattern is prevalent in particle transport physics simulations, parallel iterative solvers, and triangular solvers \cite{hoisie1999performance}; it generally stresses network latency and has substantial dependency levels. One representative motif for the wavefront communication pattern is the ASCI Sweep3D~\cite{hoisie1999performance}. Here, a 3D data domain is decomposed over a 2D array of MPI processes and repeated sweeps along the diagonal are performed. \item \textit{Subcommunicator all-to-all communication pattern -- FFT:} In this communication pattern, found in multi-dimensional FFT, a 3D domain is decomposed along the $X$ and $Y$ dimensions and subcommunicators are formed along the 1D line in both the $X$ and $Y$ dimensions. One MPI process is assigned to each of the 3D grid points and communicates with all the subcommunicators along the $X$ and $Y$ dimensions. \end{enumerate} \begin{figure}[tp] \centering \includegraphics[width=0.7\linewidth]{speedup_bars_ember_minimal.pdf} \vspace{-1.2em} \caption{\small{Performance under Ember real-world traffic patterns with minimal routing, reported as speedup w.r.t. the DragonFly topology.
\vspace{-1.5em} } } \label{fig:ember_speedup} \end{figure} \begin{figure}[tp] \centering \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=0.7\columnwidth]{speedup_bars_ember_ugal.pdf} \end{subfigure} \\ \vspace{-1.2em} \caption{\change{ \small{Performance of different topologies with UGAL routing w.r.t. DragonFly UGAL routing for real-world traffic patterns. \vspace{-1.5em}}}} \label{fig:ember_ugal_speedup} \end{figure} \subsubsection{Performance Results} The performance of each of the Ember motifs on different topologies is reported in~\Cref{fig:ember_speedup} (with minimal routing) and \Cref{fig:ember_ugal_speedup} (with UGAL routing). As can be observed from~\Cref{fig:ember_speedup}, for both the Halo3D-26 and the Sweep3D traffic patterns, the SpectralFly configuration outperforms the other topologies with a speedup of \!\raisebox{0.5ex}{\texttildelow}{}$1.2\times$ and \!\raisebox{0.5ex}{\texttildelow}{}$1.4\times$ respectively, over the DragonFly topology (with minimal routing). This indicates that, for communication patterns with relatively low per-node communication, SpectralFly's robust discrepancy property and reduction in average hop-count are sufficient to ameliorate any penalty accruing from its longer maximum hop-count. In contrast to this, we see that for the balanced FFT motif, DragonFly slightly outperforms the other topologies. As the communication pattern for FFT involves all-to-all communication along a 2D plane within a 3D arrangement of ranks, we suspect that the relative improvement is a result of the partial alignment of these 2D all-to-all communications with the group structure. Specifically, if multiple nodes from the same all-to-all communication land in the same group, there is an out-sized decrease in the communication pressure on the global links.
In particular, we note that the stronger group structure of DragonFly (even as compared to BundleFly and SlimFly) leads to the best performance on the balanced FFT motif. We also note that, because of the lack of large all-to-all clusters in Halo3D-26 and Sweep3D, there is not as much marginal benefit to alignment with the group structure. Finally, for the unbalanced FFT traffic pattern, the SpectralFly configuration outperforms all other topologies. While the other topologies with strong group structure will again benefit from multiple elements of the 2D all-to-all aligning with the group, the increased sizes of the all-to-all groups will necessitate significantly more between-group traffic on global links, degrading overall performance. In contrast, SpectralFly handles the increased all-to-all communication pressure better. We also evaluate the Ember benchmarks on different topologies with UGAL routing. \Cref{fig:ember_ugal_speedup} shows SpectralFly outperforms other topologies for the Halo3D-26 and Sweep3D motifs. However, for both FFT motifs, DragonFly with UGAL routing performs better. For the FFT motif, SpectralFly performs better than SlimFly and BundleFly (achieving 90\% of the execution efficiency w.r.t.\ DragonFly for the balanced FFT motif). This suggests the discrepancy properties of LPS graphs ensure the performance of SpectralFly is either better than or competitive with topologies that excel under the UGAL routing scheme.
} \begin{table*}[t] { \footnotesize \centering \begin{tabular}{l|c|c|cc|cc|c|c|c|c|c} \multirow{2}{*}{\bf Topology} & \multirow{2}{*}{\bf Routers} & {\bf Router} & \multicolumn{2}{|c|}{\bf Average Wire} & \multicolumn{2}{|c|}{\bf Max.\ Wire} & {\bf Electrical} & {\bf Optical} & {\bf Bisection} & {\bf Total} & {\bf Power/Bandwidth} \\ & & {\bf Radix} & \multicolumn{2}{|c|}{\bf Length (m)} &\multicolumn{2}{|c|}{\bf Length (m)} & {\bf Links} & {\bf Links} &{\bf Bandwidth} & {\bf Power (W)} & {\bf (mW per Gb/s)} \\ \toprule LPS$(11,7)$ & \phantom{0}168 & 12 & \phantom{0}8.02 & \multirow{2}{*}{(10.29)} & 19.8 &\multirow{2}{*}{(21.21)}& 249 & \phantom{00}758 & 304 &\phantom{00}928 & 30.5\\ SF$(9)$ & \phantom{0}162 & 13 & \phantom{0}8.68 && 21.6 && 151 & \phantom{00}902 & 369 & \phantom{0}1028 &27.9\\ \hline LPS$(19,7)$ & \phantom{0}336 & 20 & 10.43 &\multirow{2}{*}{(13.94)}& 28.6 &\multirow{2}{*}{(31.05)}& 432 & \phantom{0}2928 &1080& \phantom{0}3276& 30.3\\ SF$(13)$ & \phantom{0}338 & 19& 10.89 && 27.8 && 315 & \phantom{0}2896 & 1105 & \phantom{0}3155 & 28.6\\ \hline LPS$(23,11)$ & \phantom{0}660 & 24 & 14.35 &\multirow{2}{*}{(17.27)}& 39.8 &\multirow{2}{*}{(41.07)}& 531 & \phantom{0}7389 & 2928 & \phantom{0}7845& 26.8\\ SF$(17)$ & \phantom{0}578 & 25 & 13.05 && 36.2 && 558 & \phantom{0}6667 & 2465 & \phantom{0}7138 & 29.0\\ \hline LPS$(29,13)$ & 1092 & 30 & 17.32 &\multirow{2}{*}{(21.09)}& 50.8 &\multirow{2}{*}{(52.10)}& 831 & 15549 & 6150 & 16292& 26.5\\ SF$(23)$ & 1058 & 35 & 16.00 && 47.4 && 1257 & 17258 & 6095 & 18336& 30.1\\ \bottomrule \end{tabular} \caption{\small Wire length and energy efficiency statistics for the heuristic embedding of comparable SpectralFly and SlimFly topologies. For mean and maximum wire length, we include in parentheses the mean over 20 instantiations of the SkyWalk topology in the same machine room.
\vspace*{-2.0em}}\label{tab:wirelengths}} \end{table*} \begin{figure} \centering \includegraphics[trim = 15 15 0 15,clip,width = \linewidth]{latency_ratio_SW.pdf} \caption{\small Ratio of maximum and average latency between SpectralFly/SlimFly and SkyWalk as a function of the switch latency. \vspace{-3em}} \label{fig:latency} \end{figure} \vspace{-.25em} \section{Beyond Structure} As noted in Section \ref{sec:structProp}, network parameters for each topology (Table \ref{tab:structProp}) were chosen to facilitate a comparison of interconnects based on their fundamental \emph{structural} properties. In Sections \ref{sec:structProp} and \ref{sec:sim}, we saw that the structure of SpectralFly is superior to, or comparable with, that of DragonFly, SlimFly, and BundleFly across a variety of metrics. However, in practice, the trade-off between topology cost and performance is an important factor in the overall design. Since the competing topologies have a similar number of routers and connections per router, the total amount of wiring needed to build the topologies will be a primary driver of any cost differences. {In addition to the direct costs of wire length there is an additional, indirect cost, as longer wires often necessitate higher energy optical connections. As an added benefit, the analysis of wire lengths allows us to evaluate the end-to-end and typical latency of SpectralFly as compared to physical latency minimizing topologies, such as SkyWalk~\cite{SkyWalk}.} First, we compare the average {and maximum} wire length necessary to implement a SpectralFly topology to the similarly-sized SlimFly topology. SlimFly was chosen as the point of comparison because its bisection bandwidth (see Figure \ref{fig:sizes_bws}) is most structurally comparable to that of the SpectralFly topology. To ensure an equitable comparison, we assume each topology is implemented with equal concentration and rectilinear physical wiring.
{Following the methodology in \cite{SkyWalk}, we assume an $x \times y$ grid of cabinets where intra-cabinet wires are all $2$ meters long while the inter-cabinet wires have length $4 + 2 \abs{x_i-x_j} + 0.6 \abs{y_i - y_j}$, which includes a 2 meter overhead at each end of the link. Assuming a roughly square room, we fix $y = \lceil\sqrt{\nicefrac{2c}{0.6}}\rceil$ and $x = \lceil\nicefrac{c}{y}\rceil$, where $c$ is the minimum number of cabinets needed for the topology if, similar to the Summit supercomputer, each cabinet contains two routers.} This allows us to restrict our attention to the wiring between the routers. Thus the question of minimal average wire length is an instance of the Quadratic Assignment Problem (QAP), which is $\mathcal{NP}$-complete. To find a heuristically minimal layout, we apply an expectation minimization approach combined with a greedy refinement process, which outperforms the standard Fast Approximate QAP algorithm on these instances~\cite{Vogelstein:FAQ}. {In order to take advantage of the short lengths of intra-cabinet links, for each topology we fix a maximum matching of the underlying topology and enforce that the matching edges are within a cabinet. Table \ref{tab:wirelengths} provides a summary of the results of this layout approach. As we can see, the maximum and average wire lengths of SpectralFly and SlimFly topologies are within \!\raisebox{0.5ex}{\texttildelow} 10\% of each other across all sizes, with SpectralFly performing better on smaller topologies. To provide additional context, we compare the layout with the SkyWalk topology, which was designed to minimize end-to-end latency in the case of ultra-low latency routers/switches. For each machine room, we report (in parentheses) the average behavior over 20 instantiations of the SkyWalk topology in the same machine room.
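The machine-room model above can be written out directly. The following sketch uses our own helper names and assumes cabinets are indexed by (column, row) grid coordinates; it computes the grid dimensions for a roughly square room and the rectilinear wire length between two cabinets:

```python
import math

def machine_room_grid(num_cabinets):
    """Grid dimensions (x, y) for a roughly square machine room,
    following y = ceil(sqrt(2c / 0.6)) and x = ceil(c / y)."""
    y = math.ceil(math.sqrt(2 * num_cabinets / 0.6))
    x = math.ceil(num_cabinets / y)
    return x, y

def wire_length(cab_a, cab_b):
    """Wire length in meters between cabinets given as (column, row)
    coordinates: 2 m intra-cabinet, otherwise 4 + 2|dx| + 0.6|dy|,
    where the 4 m covers the 2 m overhead at each end of the link."""
    if cab_a == cab_b:
        return 2.0
    (xi, yi), (xj, yj) = cab_a, cab_b
    return 4.0 + 2.0 * abs(xi - xj) + 0.6 * abs(yi - yj)
```

For example, 12 cabinets yield a 2-by-7 grid, and two cabinets one column apart are joined by a 6 meter wire.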
As we can see, the SkyWalk topology typically requires \!\raisebox{0.5ex}{\texttildelow}{}20-30\% longer wires overall, with a maximum wire length \!\raisebox{0.5ex}{\texttildelow}{}3\% longer. This indicates that despite the underlying expansion of SpectralFly and SlimFly necessitating longer wires, with care the overall wire lengths can be made comparable to other modern topologies.} { To translate the wire lengths to an estimate of the power usage, we update the methodology of \cite{GooglePower} to modern hardware (i.e., the Mellanox SB7800 InfiniBand EDR 100Gb/s Switch) and assume each port connected to an electrical link uses \!\raisebox{0.5ex}{\texttildelow}{}3.76 W while ports with optical links use 25\% more power at \!\raisebox{0.5ex}{\texttildelow}{}4.72 W. Using METIS to approximate bisection bandwidth, we quantify the trade-off between overall power expenditure versus communication performance (see Table \ref{tab:wirelengths}). As is the case with other metrics, the difference in energy cost per unit of bandwidth is \!\raisebox{0.5ex}{\texttildelow}{}5-10\%, with the notable exception of the $(29,13)$-SpectralFly being 15\% more efficient than the similarly sized SlimFly. This efficiency gain is a consequence of SpectralFly's better expansion properties yielding slightly better bisection bandwidth while requiring \!\raisebox{0.5ex}{\texttildelow}{}15\% fewer links.} { The wire lengths allow the evaluation of the end-to-end latency and clock cycle times implicit in SpectralFly and SlimFly. Following \cite{SkyWalk}, we assume a cable delay of $5\ \nicefrac{ns}{m}$ and uniform switch latencies. Figure \ref{fig:latency} provides a comparison of both SlimFly and SpectralFly with the latency minimizing SkyWalk topology. Except for $\mbox{LPS}(19,7)$, both topologies typically have lower end-to-end latency (and hence clock-cycle time) than the SkyWalk topology, as well as significantly lower average latency.
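The latency model just described (a fixed switch latency per hop plus 5 ns per meter of cable propagation delay) can be sketched as follows; the function name and the simplifying assumption of one switch traversal per cable segment are ours:

```python
def end_to_end_latency_ns(segment_lengths_m, switch_latency_ns,
                          cable_delay_ns_per_m=5.0):
    """Latency along one route: a switch traversal per hop plus
    propagation delay on each cable segment of the path."""
    hops = len(segment_lengths_m)
    propagation = cable_delay_ns_per_m * sum(segment_lengths_m)
    return hops * switch_latency_ns + propagation
```

With a 100 ns switch and a two-hop route over a 2 m intra-cabinet wire and a 10 m inter-cabinet wire, this gives 260 ns; as the switch latency shrinks toward the ultra-low-latency regime, the wire-length term dominates, which is why the layout comparison with SkyWalk matters.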
While the average and end-to-end latency of SpectralFly is slightly larger (\!\raisebox{0.5ex}{\texttildelow}{}5-10\%) than that of SlimFly, necessitating a longer clock-cycle time, the overall performance benefits illustrated in Section \ref{sec:sim} are sufficient to make up for this difference. Further, we believe that applying a more sophisticated multi-objective minimization approach to the layout problem will further close the gap in latencies between these two topologies. } \section{Conclusion} The design of interconnection networks is increasingly informed by graph theoretic considerations. While researchers have established a long list of desirable criteria, such as low diameter and average distance, high fault tolerance, and high bisection bandwidth, developing topologies exhibiting all these properties requires sophisticated methods. To this end, we have proposed SpectralFly, a class of topologies with optimal spectral gap based on the LPS graph algebraic construction. Exploring first the design space of LPS graphs, we showed this construction permits a large range of topology sizes and radix values, including arbitrarily large topologies per fixed radix. We then highlighted, both via experiments and analytically, structural properties for which SpectralFly excelled in comparison to competing topologies. In particular, for bottleneck measures such as normalized bisection bandwidth, SpectralFly outperformed other topologies, which have decaying or tightly bounded bandwidth. The concession for these properties is a slightly larger diameter; however, we showed the average distance between nodes in an LPS topology is typically smaller than in DragonFly and BundleFly, and only marginally larger than in SlimFly. Furthermore, experiments suggest these LPS graph properties remain relatively robust under edge failures.
Lastly, in order to experimentally validate the potential of SpectralFly suggested by its structural properties, we conducted simulations using the SST/macro simulator. SpectralFly outperformed other network topologies under a diverse range of communication patterns found in traditional HPC workloads. Further, we demonstrated that the cost of implementing a SpectralFly topology is on par with, if not better than, the SlimFly topology (the only considered topology with comparable bandwidth). \\ \noindent {\bf Acknowledgement.} We would like to thank Jeremiah Wilke for very helpful technical exchanges regarding SST/macro. This work was supported by the High Performance Data Analytics program at PNNL. Information Release PNNL-SA-160551. \bibliographystyle{IEEEtran}
\section{Introduction} By the end of 2013, researchers found that DNN models are vulnerable to well-crafted malicious perturbations. Szegedy et al. \cite{szegedy2014intriguing} were the first to recognize the prevalence of adversarial cases in the context of image classification. Researchers have shown that a slight alteration in an image can influence the prediction of a DNN model. It has been demonstrated that even the most advanced classifiers can be fooled by a very small and practically undetectable change in input, resulting in inaccurate classification. Since then, many research studies \cite{tuna2021exploiting,ilyas2019prior,tuna2020closeness,meng2017magnet} have been performed in this new discipline, known as \textit{Adversarial Machine Learning}, and these studies are not limited to the image classification task. To give some examples, Sato et al. \cite{sato2018interpretable} showed in the NLP domain that changing just one word in an input sentence can fool a sentiment analyser trained on textual data. Another example is in the audio domain \cite{carlini2018audio}, where the authors generated targeted adversarial audio samples in an automatic speech recognition task by introducing very little distortion to the original waveform. The findings of this study indicate that the target model can simply be exploited to transcribe the input as any desired phrase. Attacks that take advantage of DNNs' weaknesses can substantially compromise the security of these machine learning (ML)-based systems, often with disastrous results. Adversarial evasion attacks mainly work by altering the input samples to increase the likelihood of wrong predictions. These attacks can degrade the model's prediction performance since the model cannot correctly predict the actual output for the input instances. In the context of medical applications, a malicious attack could result in an inaccurate disease diagnosis.
As a result, it has the potential to impact the patient's health, as well as the healthcare industry \cite{finlayson2019adversarial}. Similarly, self-driving cars employ ML to navigate traffic without the need for human involvement. A wrong decision by an autonomous vehicle caused by an adversarial attack could result in a tragic accident \cite{sitawarin2018darts,morgulis2019fooling}. Hence, defending against malicious evasion attacks and boosting the robustness of ML models without sacrificing clean accuracy is critical. Presuming that these ML models are to be utilized in crucial areas, we should pay the utmost attention to the performance and security problems of these architectures. In principle, adversarial strategies in evasion attacks can be classified based on multiple criteria. Based on the attacker's ultimate goal, attacks can be classified as targeted or untargeted. In an untargeted attack, the attacker perturbs the input image so that the model predicts any class other than the actual class, whereas in a targeted attack, the attacker perturbs the input image so that a particular target class is predicted by the model. Attacks can also be grouped based on the level of knowledge that the attacker has. If the attacker has complete knowledge of the model, such as its architecture, weights, and hyper-parameters, we call this a White-Box setting. However, if the attacker has no information about the deployed model or defense strategy, we call this a Black-Box setting \cite{NEURIPS2018_e7a425c6}. This research study focuses on both targeted and untargeted attacks in a White-Box setting. We propose an effective modification to standard DNN-based classifiers by adding a special kind of non-linear activation function (sigmoid or tanh) to the last layer of the model architecture.
We show that training a model using a high temperature value in the output-layer activations and then discarding the temperature value at inference time provides a very high degree of robustness to loss-based White-Box targeted and untargeted attacks, together with attacks such as Deepfool. We name our proposed models \texttt{Squeezed Models}. Our code is released on GitHub \footnote{\url{https://github.com/author-name/xxx}} for scientific use. To summarize, our main contributions in this study are: \begin{itemize} \item We propose an effective modification to standard DNN based classifiers, which enables natural robustness to gradient-based White-Box targeted and untargeted attacks. \item We show that using a specific type of non-linear activation function at the output layer with high temperature values can provide robustness to the model without impairing its ability to learn. \item We experimentally show that adding non-linearity to the last hidden layer provides robustness to other types of attacks, like Deepfool. \end{itemize} \section{Related Works} \label{ch:related_work} Since the uncovering of DNNs' vulnerability to adversarial attacks \cite{szegedy2014intriguing}, a lot of work has gone into inventing new adversarial attack algorithms and defending against them with more robust architectures \cite{HUANG2020100270,catak2020generative,9003212,9099439}. We discuss some of the noteworthy attack and defense studies separately. \subsection{Adversarial evasion attacks} DNN models have some vulnerabilities that make them challenging to defend in adversarial settings. For example, they are mostly sensitive to slight changes in the input data, leading to unexpected results in the model's predictions. Figure \ref{fig:adv-ml-ex} depicts how an adversary could take advantage of such a vulnerability and fool the model using a properly crafted perturbation applied to the input.
\begin{figure}[!htbp] \centering \includegraphics[width=1.0\linewidth]{figures/jarviiii3.pdf} \caption{The figure depicts an example of an adversarial attack. The original image is subjected to the adversarial perturbation. The precisely crafted perturbation manipulates the model in such a way that a ``Dog (Chihuahua)'' is wrongly identified as ``Sports Car'' with high confidence.} \label{fig:adv-ml-ex} \end{figure} A significant portion of attack methods are gradient-based, perturbing the input sample in order to maximize the model's loss. In recent years, many different adversarial attack techniques have been suggested in the literature. The most widely known and used adversarial attacks are the \texttt{Fast-Gradient Sign Method}, the \texttt{Iterative Gradient Sign Method}, \texttt{DeepFool}, and \texttt{Carlini-Wagner}. These adversarial attack algorithms are briefly explained in Sections \ref{sec:fgsm-definition}--\ref{sec:carlini}. \subsubsection{Fast-Gradient Sign Method}\label{sec:fgsm-definition} This method, also known as FGSM \cite{goodfellow2015explaining}, was one of the first and most well-known adversarial attacks. The derivative of the model's loss function with respect to the input sample is exploited in this attack strategy to determine in which direction the pixel values of the input image should be changed to minimize the loss function of the model. Once this direction is determined, the attack changes all pixels simultaneously in the opposite direction to maximize the loss. One can craft adversarial samples for a model with a classification loss function $J(\theta,\mathbf{x},y)$ by utilizing the formula below, where $\theta$ denotes the parameters of the model, $\mathbf{x}$ is the benign input, and $y_{true}$ is the real label of the input.
\begin{equation} \mathbf{x}^{adv} = \mathbf{x} + \epsilon \cdot sign\left(\nabla_x J(\theta,\mathbf{x},y_{true}) \right) \label{eq:fgsm_untargeted} \end{equation} In \cite{kurakin2017adversarial}, the authors presented a targeted variant of FGSM referred to as the Targeted Gradient Sign Method (TGSM). This way, they could modify the attack to try to convert the model's prediction to a particular class. To achieve this, instead of maximizing the loss with respect to the true class label, TGSM attempts to minimize the loss with respect to the target class. \begin{equation} \mathbf{x}^{adv} = \mathbf{x} - \epsilon \cdot sign\left(\nabla_x J(\theta,\mathbf{x},y_{target}) \right) \label{eq:fgsm_targeted} \end{equation} Different from Eq. \ref{eq:fgsm_untargeted}, we now subtract the crafted perturbation from the original image, as we try to minimize the loss this time. If we want to increase the efficiency of this approach, we can modify the above equation as in Eq. \ref{eq:fgsm_targeted_enhanced}. The only difference is that, instead of only minimizing the loss of the target label, we maximize the loss of the true label while also minimizing the loss of the target label. \begin{equation} \mathbf{x}^{adv} = \mathbf{x} + \epsilon \cdot sign\left(\nabla_x (J(\theta,\mathbf{x},y_{true})-J(\theta,\mathbf{x},y_{target})) \right) \label{eq:fgsm_targeted_enhanced} \end{equation} \subsubsection{Iterative Gradient Sign Method} Kurakin et al. \cite{kurakin2017adversarial} proposed a minor but significant enhancement to FGSM. Instead of taking one large step $\epsilon$ in the direction of the gradient sign, we take numerous smaller steps $\alpha$ and utilize the supplied value $\epsilon$ to clip the output. This method is also known as the Basic Iterative Method (BIM), and it is simply FGSM applied iteratively to an input sample. Equation \ref{eq:bim} describes how to generate perturbed images under the $l_{inf}$ norm for a BIM attack.
\begin{equation} \begin{aligned} \mathbf{x}_{0}^* & = \mathbf{x} \\ \mathbf{x}_{t+1}^* & = clip_{x, \epsilon} \{ \mathbf{x}_{t}^* + \alpha \cdot sign \left( \nabla_\mathbf{x} J(\theta, \mathbf{x}_t^*, y_{true}) \right) \} \end{aligned} \label{eq:bim} \end{equation} where $\mathbf{x}$ is the clean sample input to the model, $\mathbf{x}_t^*$ is the output adversarial sample at the $t$\textsuperscript{th} iteration, $J$ is the loss function of the model, $\theta$ denotes model parameters, $y_{true}$ is the true label for the input, $\epsilon$ is a configurable parameter that limits the maximum perturbation amount in the given $l_{inf}$ norm, and $\alpha$ is the step size. As in the case of TGSM, we can easily modify Eq. \ref{eq:bim} to produce a targeted variant of BIM. At each intermediate step, we can try to minimize the loss with respect to the target class while at the same time maximizing the loss with respect to the original class, as in Eq. \ref{eq:bim_targeted}. \begin{equation} \begin{aligned} \mathbf{x}_{0}^* & = \mathbf{x} \\ \mathbf{x}_{t+1}^* & = clip_{x, \epsilon} \{ \mathbf{x}_{t}^* + \alpha \cdot sign ( \nabla_\mathbf{x} (J(\theta, \mathbf{x}_t^*, y_{true})-\\ & J(\theta, \mathbf{x}_t^*, y_{target})) ) \} \end{aligned} \label{eq:bim_targeted} \end{equation} \subsubsection{Deepfool Attack} \label{sec:deepfool-definition} This attack method was introduced by Moosavi-Dezfooli et al. \cite{moosavidezfooli2016deepfool} and is one of the strongest untargeted attack algorithms in the literature. It is made to work with several distance norms, including the $l_{inf}$ and $l_{2}$ norms. The Deepfool attack is built on the idea that neural network models behave like linear classifiers with classes separated by a hyperplane. Starting with the initial input point $\mathbf{x}_0$, the algorithm determines at each iteration the closest hyperplane and the smallest perturbation amount, which is the orthogonal projection onto that hyperplane.
The algorithm then computes $\mathbf{x}_{t+1}$ by adding the smallest perturbation to $\mathbf{x}_{t}$ and checks for misclassification. An illustration of this attack algorithm is provided in Figure \ref{fig:decision_boundary}. This attack can break the defensive distillation method and achieves higher success rates than the previously mentioned iterative attack approaches. The downside is that the produced adversarial sample generally lies close to the decision boundary of the model. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth] {figures/decision_boundary.png} \caption{Illustration of the Deepfool attack algorithm} \label{fig:decision_boundary} \end{figure} \subsubsection{Carlini{\&}Wagner Attack} \label{sec:carlini} Proposed by Carlini and Wagner \cite{carlini2017evaluating}, this is one of the strongest attack algorithms so far. As a result, it is commonly used as a benchmark by adversarial defense research groups trying to develop more robust DNN architectures that can withstand adversarial attacks. It has been shown that, for the most well-known datasets, the CW attack has a greater success rate than the other attack types on normally trained models. Like Deepfool, it can also deceive defensively distilled models, for which other attack types struggle to create adversarial examples. In order to generate more effective and stronger adversarial samples under multiple $l_{p}$ norms, the authors reformulate the attack as an optimization problem which may be solved using gradient descent. A $confidence$ parameter in the algorithm can be used to adjust the prediction score of the created adversarial sample. For a normally trained model, applying the CW attack with the default setting (confidence set to 0) generally yields adversarial samples close to the decision boundary, while high-confidence adversarial samples are generally located farther from the decision boundary.
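To make the attacks of this section concrete, the following sketch implements the FGSM step, BIM, and the Deepfool projection for a toy logistic-regression "model" $p = \sigma(w \cdot x + b)$, for which the input gradient of the cross-entropy loss has the closed form $\nabla_x J = (p - y)\,w$. This is our own illustrative code, not the original implementations; all function names are ours.

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def _input_grad(w, b, x, y):
    """dJ/dx_i of the cross-entropy loss for p = sigmoid(w.x + b)."""
    p = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [(p - y) * wi for wi in w]

def fgsm(w, b, x, y, eps, targeted=False):
    """One signed-gradient step: ascend the loss of the true label
    (untargeted) or descend the loss of the desired label (targeted)."""
    sgn = -1.0 if targeted else 1.0
    return [xi + sgn * eps * math.copysign(1.0, g) if g != 0 else xi
            for xi, g in zip(x, _input_grad(w, b, x, y))]

def bim(w, b, x, y, eps, alpha, steps):
    """Iterative FGSM: steps of size alpha, each result clipped back
    into the l_inf ball of radius eps around the original input."""
    x_adv = list(x)
    for _ in range(steps):
        x_adv = fgsm(w, b, x_adv, y, alpha)
        x_adv = [min(max(xa, xo - eps), xo + eps) for xa, xo in zip(x_adv, x)]
    return x_adv

def deepfool_linear(w, b, x, overshoot=0.02):
    """Deepfool's minimal perturbation for an affine binary classifier
    f(x) = w.x + b: project x orthogonally onto the hyperplane f = 0,
    overshooting slightly so the predicted class actually flips."""
    f = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    return [xi - (1 + overshoot) * f * wi / norm_sq for xi, wi in zip(x, w)]
```

For a genuinely linear classifier the Deepfool step flips the sign of $f$ in a single iteration, which also illustrates why its adversarial samples land just past the decision boundary.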
Adversarial machine learning is a burgeoning field of research, and we see many new adversarial attack algorithms being proposed. Some of the recent remarkable ones are: i) the Square Attack \cite{andriushchenko2020square}, a query-efficient black-box attack that is not based on the model's gradient and can break defenses that utilize gradient masking; ii) HopSkipJumpAttack \cite{9152788}, a decision-based attack algorithm based on an estimate of the model's gradient direction and a binary-search procedure for approaching the decision boundary; iii) Prior Convictions \cite{ilyas2019prior}, which utilizes two kinds of gradient-estimation priors (time- and data-dependent) and proposes a bandit-optimization-based framework for adversarial sample generation in a loss-only-access black-box setting; and iv) the Uncertainty-Based Attack \cite{tuna2021exploiting}, which utilizes both the model's loss function and quantified epistemic uncertainty to generate more powerful attacks. \subsection{Adversarial defense} \subsubsection{Defensive Distillation} Although the idea of knowledge distillation was originally introduced by Hinton et al. \cite{hinton2015distilling} to compress a large model into a smaller one, the utilization of this technique for adversarial defense purposes was first suggested by Papernot et al. \cite{papernot2016distillation}. The algorithm starts by training a $teacher \ model$ on the training data, employing a high temperature (T) value in the softmax function as in Equation \ref{eq:softmax_T}, where $p_{i}$ is the probability of the i\textsuperscript{th} class and the $z_{i}$'s are the logits. \begin{equation} p_{i} = \frac{\exp(\frac{z_{i}}{T})}{\sum_{j} \exp(\frac{z_{j}}{T})} \label{eq:softmax_T} \end{equation} Then, using the previously trained teacher model, each of the samples in the training data is labeled with soft labels calculated with temperature (T) at prediction time.
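The temperature-scaled softmax of Eq. \ref{eq:softmax_T} can be computed directly. The sketch below is our own (stabilized by subtracting the maximum scaled logit, which leaves the probabilities unchanged); it shows how a large T flattens the output into the soft labels used for distillation:

```python
import math

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax: p_i = exp(z_i/T) / sum_j exp(z_j/T).

    T = 1 recovers the ordinary softmax; large T flattens the
    distribution toward uniform (soft labels)."""
    m = max(z / T for z in logits)          # subtract max for stability
    exps = [math.exp(z / T - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

For logits $(10, 0, 0)$, T = 1 puts essentially all mass on the first class, while T = 100 yields a nearly uniform distribution; the distilled model is trained on the latter kind of target.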
The $distilled \ model$ is then trained with the soft labels acquired from the teacher model, again with a high temperature (T) value in the softmax. When the training of the student model is over, the temperature is set back to 1 at prediction time. Figure \ref{fig:edefens-dist} shows the overall steps of this technique. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\linewidth]{figures/defense-dist.png} \caption{Defensive Distillation.} \label{fig:edefens-dist} \end{figure} \subsubsection{Adversarial Training} Adversarial training is an intuitive defense method in which the model's robustness is increased by training it with adversarial samples. As demonstrated in Eq. \ref{eq:min_maxx}, this strategy can be mathematically expressed as a minimax game. \begin{equation} \label{eq:min_maxx} \underset{\theta}{min} \ \underset{\|\delta\| \leq \epsilon }{max} \ J(h_\theta(x+\delta), y) \end{equation} where $h$ denotes the model, $J$ denotes the model's loss function, $\theta$ represents the model's weights, and $y$ is the actual label. $\delta$ is the perturbation added to the input $x$, and it is constrained by the given $\epsilon$ value. The inner objective is maximized by using the most powerful attack possible, which is mostly approximated by various adversarial attack types. In order to reduce the loss resulting from the inner maximization step, the outer minimization objective is used to train the model. This whole process produces a model that is expected to be resistant to the adversarial attacks used during training. For adversarial training, Goodfellow et al. \cite{goodfellow2015explaining} used adversarial samples crafted by the FGSM attack, and Madry et al. used the PGD attack \cite{madry2019deep} to build more robust models, at the expense of consuming more computational resources.
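One outer step of the minimax objective in Eq. \ref{eq:min_maxx} can be sketched for a toy logistic model $p = \sigma(w \cdot x + b)$, with the inner maximization approximated by a single FGSM perturbation of each input, as in the FGSM-based adversarial training of Goodfellow et al. This is a minimal illustration with our own names, not a full training loop:

```python
import math

def adversarial_train_step(w, b, batch, eps, lr):
    """One outer-minimization step of the minimax objective for a
    logistic model p = sigmoid(w.x + b) with cross-entropy loss:
    perturb each input with one FGSM step (the approximate inner max),
    then take a gradient step on the adversarial loss."""
    gw, gb = [0.0] * len(w), 0.0
    for x, y in batch:
        p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        # inner max, approximated by a single FGSM step on the input
        x_adv = [xi + eps * math.copysign(1.0, (p - y) * wi) if wi != 0 else xi
                 for xi, wi in zip(x, w)]
        p_adv = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)))
        # gradient of the adversarial cross-entropy loss w.r.t. (w, b)
        for i, xi in enumerate(x_adv):
            gw[i] += (p_adv - y) * xi
        gb += p_adv - y
    n = float(len(batch))
    return [wi - lr * g / n for wi, g in zip(w, gw)], b - lr * gb / n
```

Replacing the single FGSM step with several projected steps gives the PGD variant of Madry et al.; the outer update is unchanged.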
Despite the fact that adversarial training is often regarded as one of the most effective defenses against adversarial attacks, adversarially trained models are nevertheless vulnerable to attacks like CW. Adversarial ML is a very active field of research, and new adversarial defense approaches are constantly being presented. Among the most notable are: i) the High-Level Representation Guided Denoiser (HGD) \cite{liao2018defense}, which avoids the error amplification effect of a traditional denoiser by using the error in the upper layers of a DNN model as the loss function, enabling the training of a more efficient image denoiser; ii) APE-GAN \cite{shen2017apegan}, which uses a Generative Adversarial Network (GAN) trained with adversarial samples to remove adversarial perturbations from an input image; iii) Certified Defense \cite{raghunathan2020certified}, which proposes a new differentiable upper bound yielding a model certificate ensuring that no attack can cause the error to exceed a specific value; and iv) the approach of \cite{tuna2020closeness}, which uses several uncertainty metrics to detect adversarial samples. \section{Approach} \subsection{Chosen Activation Functions} We use two specific activation functions (sigmoid and hyperbolic tangent) whose derivatives can be expressed in terms of the functions themselves, and whose derivatives approach 0 as the function output approaches its maximum or minimum value. Starting with the sigmoid function: the sigmoid $\sigma(x)$ can be written as in Eq. \ref{eq:sigmoid}, and it squeezes its input to the range $(0, 1)$, as can be seen in Figure \ref{fig:sig}. \begin{equation} \sigma(x) = \frac{1}{1+e^{-x}} \label{eq:sigmoid} \end{equation} The derivative of the sigmoid function can be expressed as in Eq. \ref{eq:sigmoid_derivative}: \begin{equation} \frac{d}{{dx}}\sigma(x) = \sigma(x) \cdot
(1 - \sigma(x)) \label{eq:sigmoid_derivative} \end{equation} Using the above formula, or Figure \ref{fig:sig}, one can easily verify that the derivative of the sigmoid function approaches 0 as its output approaches 0 or 1. Similarly, the hyperbolic tangent function ($\tanh(x)$) can be written as in Eq. \ref{eq:tanhh}. Unlike the sigmoid, the hyperbolic tangent squeezes its input to the range $(-1, 1)$, as can be seen in Figure \ref{fig:tanh}. \begin{equation} \tanh(x)=\frac{e^x - e^{-x}}{e^x + e^{-x}} \label{eq:tanhh} \end{equation} The derivative of the hyperbolic tangent function can be expressed as in Eq. \ref{eq:tanhh_derivative}. Using Eq. \ref{eq:tanhh_derivative} or Figure \ref{fig:tanh}, we can verify that the derivative of $\tanh$ approaches 0 as its output approaches $-1$ or $1$. So the pattern is the same as for the sigmoid: the derivative of both activation functions goes to 0 when the output is at its minimum or maximum value. This property will be quite useful when we use these activation functions at the output layer of DNN classifiers to zero out the gradients. \begin{equation} \frac{d}{{dx}}\tanh x = 1 - \tanh ^2 x \label{eq:tanhh_derivative} \end{equation} \begin{figure}[!htbp] \centering \subfloat[\centering Sigmoid\label{fig:sig}]{{\includegraphics[width=5.2cm]{figures/sigmoid.pdf} }}% \qquad \subfloat[\centering Hyperbolic Tangent\label{fig:tanh}]{{\includegraphics[width=5.2cm]{figures/tanh.pdf} }}% \end{figure} \subsection{Proposed Method} We begin this part by introducing the loss calculation for a standard deep neural network classifier.
Let $K$ denote the number of output classes and $\mathcal{D} = \{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^{N}$ be our dataset, where $x_i \in \mathbb{R}^{d}$ and $y_i \in \{o_0,o_1,\ldots,o_{K-1}\}$ are the $i^{th}$ input and output respectively, $o_k$ is the one-hot encoded vector with only the $k^{th}$ index being one and zero for the other indices, and the probability score of the output class with index $k \in \{0,1,\ldots,K-1\}$ is denoted by $P_k$. Based on this notation, the loss value ($J$) of the classifier for any test input $x^*$ can be calculated using the cross-entropy loss function as below: \begin{equation} \begin{split} J = -\sum_{k=0}^{K-1}o_k[k] \cdot \log(P_k) & = -\log(P_{true}) \end{split} \end{equation} As can be seen in Figure \ref{fig:standard_dnn}, in the standard DNN-based classifiers that are widely used today, no activation function is applied in the output layer, and the prediction score of each class is calculated by feeding the output of the last layer of the network (the logits) to the softmax function. If we denote the logits by $Z= \{z_0,z_1,\ldots,z_{K-1}\}$, we can calculate the derivative of the loss with respect to the $k^{th}$ logit using Eq. \ref{eq:logit_derivative}. A formal derivation of Eq. \ref{eq:logit_derivative} is provided in Appendix B.
\begin{equation} \frac{\partial J}{\partial z_k} = P_k - o_k[k] \label{eq:logit_derivative} \end{equation} \begin{figure}[!htbp] \centering \begin{subfigure}[b]{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/sigmoid_trick-Standard_DNN.png} \caption{Standard DNN Classifier} \label{fig:standard_dnn} \end{subfigure} \hfill \begin{subfigure}[b]{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/sigmoid_trick-Sigmoid.png} \caption{The proposed classifier} \label{fig:proposed_dnn} \end{subfigure} \caption{Comparison of standard DNN classifier and the proposed classifier} \label{fig:proposed_classifier} \end{figure} Loss-based adversarial attacks try to exploit the gradient of the model's loss function $(J)$ with respect to the input sample $x$: the attacker uses $\frac{\partial J}{\partial x}$ to maximize $J$. By the chain rule, $\frac{\partial J}{\partial x} = \frac{\partial J}{\partial z}\cdot\frac{\partial z}{\partial x}$. Therefore, for any target class $k$, the gradient of the model's loss function with respect to the input image is directly proportional to $\frac{\partial J}{\partial z_k}$. In response to this kind of attack, several defense approaches have been proposed that mask the gradients of the model. For example, the defensive distillation technique achieves this against untargeted loss-based attacks by driving the model to make highly confident predictions: when the model predicts the true class with high confidence, $P_{true}$ approaches 1, and since the label for the true class is also 1, $\frac{\partial J}{\partial z}$ and therefore $\frac{\partial J}{\partial x}$ approach 0 in the untargeted attack case. However, this approach does not work in the targeted attack case, because preventing targeted attacks requires making $\frac{\partial J}{\partial z}$ become 0 for the target class.
For standard DNN-based classifiers, the only way to achieve this is to push the target-class probability ($P_{target}$) very close to 1 (so that $P_{target} - o_{target}[target]$ equals 0), which obviously contradicts the natural learning task. There is therefore a dilemma between masking the gradient of the model in the targeted attack case and achieving the learning task at hand. This phenomenon is beautifully explained by Katzir et al. in \cite{katzir2019blocking}. To overcome this problem, we propose to apply either of two commonly known nonlinear activation functions (sigmoid and $\tanh$) to the logits of the model, as depicted in Figure \ref{fig:proposed_dnn}. The important point is to apply a high temperature value to these activation functions during the learning process (e.g., $\sigma(x,T)=1/(1+\exp(-x/T))$) and to use the model with the temperature ignored at prediction time, just like in the defensive distillation technique. After our proposed modification, the output of the last layer is $\hat{Z}=\{\hat{z}_0,\hat{z}_1,\ldots,\hat{z}_{K-1}\}$, where $\hat{Z}=\tanh{(Z)}$ or $\hat{Z}=\sigma{(Z)}$, depending on the chosen activation function. Based on this modified architecture, the derivative of the model's loss with respect to the input image under a gradient-based attack against any class $k$ can be formulated as below: \begin{equation} \begin{split} \frac{\partial J}{\partial x} = \frac{\partial J}{\partial \hat{z}_k}\cdot\frac{\partial \hat{z}_k}{\partial z_k}\cdot\frac{\partial z_k}{\partial x} \end{split} \label{eq:long_chain} \end{equation} In the case of the sigmoid function, this equation can be reformulated as below using Eq. \ref{eq:sigmoid_derivative} and Eq. \ref{eq:logit_derivative}.
\begin{equation} \begin{split} \frac{\partial J}{\partial x} = (P_k - o_k[k])\cdot\hat{z}_k\cdot(1 - \hat{z}_k)\cdot\frac{\partial z_k}{\partial x} \end{split} \label{eq:long_sigmoid} \end{equation} In the case of the tanh function, Eq. \ref{eq:long_chain} can be rewritten as Eq. \ref{eq:long_tanh} using Eq. \ref{eq:tanhh_derivative} and Eq. \ref{eq:logit_derivative}: \begin{equation} \begin{split} \frac{\partial J}{\partial x} = (P_k - o_k[k])\cdot(1 - \hat{z}_k ^2)\cdot\frac{\partial z_k}{\partial x} \end{split} \label{eq:long_tanh} \end{equation} During the training of the DNN classifier depicted in Figure \ref{fig:proposed_dnn}, we force $\hat{z}_k$ to be at its maximum possible value for the true class in order to maximize the final softmax prediction score, and similarly we force $\hat{z}_k$ to be at its minimum value for the other classes. Therefore, for both the sigmoid and tanh functions, $\hat{z}_k$ will approach 1 for the true class; for the other classes, $\hat{z}_k$ will approach 0 for the sigmoid and $-1$ for tanh. Since we additionally applied a high temperature value to these activation functions during training, their outputs ($\hat{z}_k$) will be even closer to their maximum and minimum values at prediction time, when the temperature is omitted. Consequently, Eq. \ref{eq:long_sigmoid} and Eq. \ref{eq:long_tanh} approach 0 in both the targeted and untargeted attack cases: with the sigmoid architecture, $\hat{z}_k\cdot(1 - \hat{z}_k)$ is 0 when $\hat{z}_k$ is either 0 or 1, and with the tanh architecture, $1 - \hat{z}_k ^2$ becomes 0 when $\hat{z}_k$ is either $-1$ or $1$. This way, we can successfully zero out (mask) the gradients of the model against both targeted and untargeted loss-based attacks.
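The masking effect of Eq. \ref{eq:long_tanh} can be checked numerically under the saturation assumption above (the raw logit values are made up; a model trained with a high temperature would produce similarly saturated squeezed logits):

```python
import math

def softmax(v):
    e = [math.exp(x) for x in v]
    s = sum(e)
    return [x / s for x in e]

# Hypothetical raw logits after high-temperature training: large for the
# true class (index 0), strongly negative for the others.
z = [40.0, -40.0, -40.0]
z_hat = [math.tanh(v) for v in z]   # squeezed logits saturate near [1, -1, -1]
P = softmax(z_hat)
o = [1.0, 0.0, 0.0]                 # one-hot label of the true class

# Gradient factor (P_k - o_k) * (1 - z_hat_k^2) from Eq. (long_tanh), for every
# class k: k = 0 is the untargeted case, k = 1, 2 are targeted cases.
factors = [(P[k] - o[k]) * (1.0 - z_hat[k] ** 2) for k in range(3)]
```

Every entry of `factors` vanishes, so the loss gradient with respect to the input is zeroed out regardless of whether the attacker's loss targets the true class or any other class.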
To avoid round-off errors in floating point operations, high precision should be used for floating point numbers in the ML calculations. \subsection{Visual Representations of Loss Surfaces} We know that normally trained models are vulnerable to gradient-based white-box targeted and untargeted attacks. The main reason for this vulnerability lies in the attacker's ability to successfully exploit the loss function of the model. To illustrate this fact, we performed a simple experiment using a test image from the MNIST (Digit) dataset and drew the loss surfaces of various models along two directions (the loss gradient direction and a random direction). Examining Figures \ref{fig:normal_untargeted} and \ref{fig:normal_targeted}, which display the loss surfaces of the normally trained model, we see in both cases that there is useful gradient information which the attacker can exploit to craft both untargeted and targeted adversarial samples. \begin{figure*}[!htbp] \centering \subfloat[\centering normal model untargeted\label{fig:normal_untargeted}]{{\includegraphics[width=5cm]{figures/mnist_normal_untargeted_loss_surface_in_loss_direction.pdf} }}% \qquad \subfloat[\centering normal model targeted\label{fig:normal_targeted}]{{\includegraphics[width=5cm]{figures/mnist_normal_targeted_loss_surface_in_loss_direction.pdf} }}% \subfloat[\centering distilled model untargeted\label{fig:distilled_untargeted}]{{\includegraphics[width=5cm]{figures/mnist_distilled_untargeted_loss_surface_in_loss_direction.pdf} }}% \qquad \subfloat[\centering distilled model targeted\label{fig:distilled_targeted}]{{\includegraphics[width=5cm]{figures/mnist_distilled_targeted_loss_surface_in_loss_direction.pdf} }}% \subfloat[\centering our model untargeted\label{fig:our_untargeted}]{{\includegraphics[width=5cm]{figures/mnist_tanh_untargeted_loss_surface_in_loss_direction.pdf} }}% \qquad \subfloat[\centering our model
targeted\label{fig:our_targeted}]{{\includegraphics[width=5cm]{figures/mnist_tanh_targeted_loss_surface_in_loss_direction.pdf} }}% \caption{Loss surfaces of various models under untargeted and targeted attack scenarios}% \label{fig:loss-surfaces}% \end{figure*} To prevent this vulnerability, various defense methods have been proposed, including defensive distillation. This technique was found to significantly reduce the ability of traditional gradient-based untargeted attacks to build adversarial samples: defensive distillation diminishes the gradients to zero in the untargeted attack case, so the standard objective function is no longer effective. As depicted in Figure \ref{fig:distilled_untargeted}, the gradient of the distilled model diminishes to zero, and thus loss-based untargeted attacks have difficulty crafting adversarial samples for defensively distilled models. However, it was later demonstrated that attacks such as the TGSM attack could defeat the defensive distillation strategy \cite{ross2017improving}, though without a mathematical explanation of why these attacks actually work. The actual reason for the success of such attacks against defensively distilled models was shown to lie in their targeted nature \cite{katzir2019blocking}. Figure \ref{fig:distilled_targeted} shows the loss surface of a distilled model under a targeted attack, and we can easily see that the gradient of the model's loss does not diminish to zero as in Figure \ref{fig:distilled_untargeted}. This result is not surprising at all: for a defensively distilled model under a targeted attack, we expect $P_{target}$ to be almost 0 while $o_{target}[target]$ is 1. Therefore, $\frac{\partial J}{{\partial z}}$ (equal to $P_{target}-o_{target}[target]$) approaches $-1$, which is more than enough gradient to exploit for a successful attack.
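The asymmetry between the untargeted and targeted cases follows directly from Eq. \ref{eq:logit_derivative}; a two-line numeric check (the probability values are hypothetical, but typical for a confidently predicting distilled model) makes it concrete:

```python
# A defensively distilled model typically predicts the true class (index 0)
# with near-certainty; the softmax outputs below are hypothetical.
P = [0.999, 0.0005, 0.0005]

# Untargeted case: gradient w.r.t. the true-class logit (label entry is 1)
grad_untargeted = P[0] - 1.0   # ~0, so the gradient is masked

# Targeted case: gradient w.r.t. the logit of target class 1
# (the attack loss treats the target class as the label)
grad_targeted = P[1] - 1.0     # ~-1, a fully exploitable gradient
```

The confident prediction that masks the untargeted gradient is precisely what makes the targeted gradient large.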
As a last step, we analyze the loss surfaces of one of our proposed models (the model trained using the $\tanh$ activation function with a high temperature value at the output layer). Examining Figures \ref{fig:our_untargeted} and \ref{fig:our_targeted}, we see that the gradient of the model's loss function diminishes to 0 in both the untargeted and targeted attack cases. This prevents the attacker from exploiting the gradient information of the model to craft successful adversarial perturbations. \subsection{Softmax prediction scores of proposed architectures} For a normally trained standard DNN-based classifier, we expect the model to predict the true class with a prediction score usually close to 1. In the case of a defensively distilled model, we force the model to make highly confident predictions, which is why we see a prediction score very close to 1 in favor of the true class. For our proposed model architectures, however, the softmax prediction score of the true class is lower than for a normal or defensively distilled model, because the activation function in the last layer limits $\hat{z}_k$ to the interval $(0,1)$ or $(-1,1)$. If we use the sigmoid function in the last layer, the maximum prediction score will be 0.232, and if we use the tanh function, the maximum prediction score will be 0.450; this holds for all predictions. Similarly, the minimum prediction scores will be 0.085 and 0.061 for the models with sigmoid and tanh activation functions, respectively. The softmax prediction scores of a test sample from the MNIST dataset are displayed in Figure \ref{fig:softmax-scores} for various models.
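These score bounds follow directly from the bounded last-layer outputs. A minimal sketch for $K = 10$ classes (assuming the squeezed logits fully saturate) reproduces the quoted values:

```python
import math

K = 10  # number of classes, as in MNIST / CIFAR-10

def score(z_hat_k, z_hat_true, z_hat_rest):
    # Softmax score of the class whose squeezed logit is z_hat_k, when the
    # true class saturates at z_hat_true and the K-1 others at z_hat_rest.
    denom = math.exp(z_hat_true) + (K - 1) * math.exp(z_hat_rest)
    return math.exp(z_hat_k) / denom

# sigmoid head: squeezed logits saturate at {0, 1}
max_sigmoid = score(1, 1, 0)     # ~0.232
min_sigmoid = score(0, 1, 0)     # ~0.085
# tanh head: squeezed logits saturate at {-1, 1}
max_tanh = score(1, 1, -1)       # ~0.450
min_tanh = score(-1, 1, -1)      # ~0.061
```

Since every prediction saturates the same way, the whole output distribution is pinned to these values regardless of the input.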
We believe that this behaviour of our models, just like that of defensively distilled models, might also be quite useful for preventing attackers from inferring information that is supposed to be private from the output probability scores of the model's predictions, and might thus contribute to the privacy of the model, as suggested in \cite{shokri2017membership,shokri2019}. \begin{figure}[!htbp] \centering \includegraphics[width=1\linewidth]{figures/softmax_scores.pdf} \caption{Softmax score outputs of various models} \label{fig:softmax-scores} \end{figure} \section{Experiments} \subsection{Adversarial Assumptions} In this research study, we assume that the attacker can choose to mount targeted or untargeted attacks against the model. We assume the attacker is fully aware of the architecture and parameters of the target model, as in the \textit{white-box} setting, and uses the model as it is. Another crucial assumption concerns the constraints on the attacker. Clearly, for an attack to be unrecognizable to the human eye, the attacker must be limited to applying a perturbation with $l_p$ norm up to a certain $\epsilon$ value. In this study, we used the $l_\infty$ and $l_2$ norm metrics to restrict the maximum perturbation amount that an adversary can apply to the input sample. Finally, the error rate of our proposed defense technique is assessed as the percentage of successful attack samples, a metric proposed by Goodfellow et al. \cite{goodfellow2015explaining} and recommended by Carlini et al. \cite{carlini2019evaluating}. \subsection{Experimental Setup}\label{sec:experimental-setup} For our experiments, we used four models for each dataset: a normal model, a defensively distilled (student) model, a proposed model with sigmoid activation, and a proposed model with tanh activation at the output layer.
Using the same architectures, we trained CNN models on the MNIST (Digit) \cite{lecun-mnisthandwrittendigit-2010} and CIFAR-10 \cite{cifar10} datasets. On the MNIST (Digit) dataset, our models attained accuracy rates of 99.35\%, 99.41\%, 98.97\% and 99.16\%, respectively; on the CIFAR-10 dataset, they attained 83.95\%, 84.68\%, 82.37\% and 80.15\%, respectively. The architectures of our CNN models and the hyperparameters used in model training are listed in Appendix A. Finally, we set the temperature ($T$) value to 20 and 50 for the MNIST and CIFAR datasets, respectively, during the training of the defensively distilled model and our proposed models. \subsection{Experimental Results}\label{sec:experimental-results} During our tests, we attacked only those test samples that our models had previously classified correctly, since an adversary has no reason to tamper with samples that are already misclassified. For the TGSM and Targeted BIM attacks, we regard an attack as successful only if the perturbed image is classified by the model as the chosen target class. We set the target class to ``2'' for the MNIST (Digit) dataset and ``Cars'' for the CIFAR-10 dataset. We used the open-source Python library Foolbox \cite{rauber2018foolbox} to implement the attacks in this study. The attack parameters used in BIM and Targeted BIM are provided in Table \ref{tab:iterative_settings}. The results of our experiments for the MNIST and CIFAR-10 datasets are available in Tables \ref{tab:mnist_results},\ref{tab:mnist_results2} and Tables \ref{tab:cifar_results},\ref{tab:cifar_results2}, together with the amount of perturbation applied and the chosen norm metrics. For the CW and Deepfool attacks only, we used the $l_{2}$ norm equivalent of the applied perturbation, computed by the formula $l_{2} = l_{\infty} \times \sqrt{n} \times \sqrt{2}/\sqrt{\pi e}$, where $n$ is the input sample dimension.
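For the MNIST setting ($n = 28 \times 28 = 784$), the $l_2$-equivalent budgets quoted in the MNIST tables can be reproduced from this formula (the function name is ours, and the exact rounding used in the tables is our assumption):

```python
import math

def l2_equivalent(l_inf, n):
    # l2 = l_inf * sqrt(n) * sqrt(2) / sqrt(pi * e)
    return l_inf * math.sqrt(n) * math.sqrt(2.0) / math.sqrt(math.pi * math.e)

# MNIST images have n = 28 * 28 = 784 pixels
budget_01 = l2_equivalent(0.1, 784)   # ~1.35, the budget used for eps = 0.1
budget_02 = l2_equivalent(0.2, 784)   # ~2.70, the budget used for eps = 0.2
```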
When we check the results, we observe that normally trained models are vulnerable to both targeted and untargeted attack types, whereas defensively distilled models are vulnerable only to targeted attack types. Our proposed (squeezed) models provide a high degree of robustness to both targeted (TGSM, Targeted BIM, CW) and untargeted (FGSM, BIM) attacks. This success results from the effectiveness of our models in zeroing out the gradients in both scenarios. \begin{table} \centering \caption{Attack success rates on MNIST (Digit) - Part 1 } \label{tab:mnist_results} \scriptsize \begin{tabular}{!{\color{black}\vrule}l|c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}} \cline{2-5} \multicolumn{1}{l!{\color{black}\vrule}}{} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 0.1) & 9.75\% & 2.13\% & 0.12\% & 0.38\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 0.1) & 1.77\% & 1.74\% & 0.04\% & 0.03\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 0.1) & 34.20\% & 2.31\% & 0.05\% & 0.19\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 0.1) & 13.04\% & 9.31\% & 0.03\% & 0.02\% \\ \hline CW ($l_2$, $\epsilon$ : 1.35 conf : $0$) & 80.94\% & 59.99\% & 0.04\% & 0.07\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 1.35 ) & 29.73\% & 21.22\% & 0.06\% & 0.14\% \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Attack success rates on MNIST (Digit) - Part 2} \label{tab:mnist_results2} \scriptsize \begin{tabular}{!{\color{black}\vrule}l|c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}} \cline{2-5} \multicolumn{1}{l!{\color{black}\vrule}}{} &
\multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 0.2) & 31.09\% & 2.23\% & 0.38\% & 0.12\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 0.2) & 9.86\% & 8.33\% & 0.03\% & 0.04\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 0.2) & 98.19\% & 2.71\% & 0.23\% & 0.08\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 0.2) & 90.05\% & 77.77\% & 0.03\% & 0.04\% \\ \hline CW ($l_2$, $\epsilon$ : 2.70 conf : $0$) & 100\% & 99.96\% & 0.11\% & 0.11\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 2.70 ) & 97.69\% & 87.41\% & 0.17\% & 0.06\% \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Attack success rates on CIFAR10 - Part 1 } \label{tab:cifar_results} \scriptsize \begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\\\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 3/255) & 72.38\% & 13.88\% & 4.07\% & 1.64\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 3/255) & 22.84\% & 21.36\% & 0.56\% & 0.23\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 3/255) & 93.53\% & 15.01\% & 2.61\% & 0.95\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 3/255) & 57.36\% & 57.22\% & 0.46\% & 0.22\% \\ \hline CW ($l_2$, $\epsilon$ : 0.798) & 100.00\% & 100.00\% & 2.93\% & 1.36\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 0.798) & 99.98\% & 99.76\% & 2.22\% & 0.87\% \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Attack success rates on CIFAR10 - Part 2 } \label{tab:cifar_results2} \scriptsize
\begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\\\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 6/255) & 80.58\% & 13.85\% & 4.04\% & 1.7\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 6/255) & 25.41\% & 25.58\% & 0.64\% & 0.3\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 6/255) & 96.75\% & 15.11\% & 3.07\% & 1.14\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 6/255) & 68.16\% & 73.07\% & 0.6\% & 0.29\% \\ \hline CW ($l_2$, $\epsilon$ : 1.596) & 100\% & 100\% & 3.08\% & 1.75\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 1.596) & 99.98\% & 100\% & 2.17\% & 0.89\% \\ \hline \end{tabular} \end{table} One other point worth mentioning about our experimental results is that, in addition to gradient-based attacks, our proposed models exhibit excellent performance against the Deepfool attack as well. In general, the reason behind the success of the Deepfool attack against standard DNN-based classifiers is the largely linear nature of these models, as argued by Goodfellow et al. \cite{goodfellow2015explaining}; the authors of the Deepfool paper formalized their method based on this assumption \cite{moosavidezfooli2016deepfool}. However, since we introduce additional non-linearity into the standard DNN classifier at the output layer, the Deepfool attack fails to craft adversarial samples for our models, in contrast to normally trained or defensively distilled models. \section{Conclusion} In this study, we first showed that existing DNN-based classifiers are vulnerable to gradient-based white-box attacks, and that even if the model owner uses a defensively distilled model, the attacker can still craft successful targeted attacks.
We then proposed a modification to standard DNN-based classifiers that masks the gradients of the model and prevents the attacker from exploiting them to craft both targeted and untargeted adversarial samples. We empirically verified the effectiveness of our approach on standard datasets that are heavily used by the adversarial ML community. Finally, we demonstrated that our proposed model variants have inherent resistance to the Deepfool attack, thanks to the increased non-linearity at the output layer. In this study, we focused on securing DNN-based classifiers against evasion attacks. However, it has been shown that previous defense approaches to adversarial robustness suffer from privacy preservation issues \cite{shokri2019}. In the future, we plan to evaluate our proposed models against privacy-related attack strategies, specifically membership inference attacks. \section{Appendix - A} \begin{table}[!htbp] \centering \caption{Model Architectures used in our experiments} \label{tab:cnn_model_arch_digit} \scriptsize \begin{tabular}{|c||c|c|} \hline \textbf{Dataset} & \textbf{Layer Type} & \textbf{Layer Information}\\ \hline \hline \multirow{11}{*}{MNIST - Digit} & Convolution (padding:1) + ReLU & $3 \times 3 \times 32$ \\ & Convolution (padding:1) + ReLU & $3 \times 3 \times 32$ \\ & Max Pooling & $2 \times 2$ \\ & Convolution (padding:1) + ReLU & $3 \times 3 \times 64$ \\ & Convolution (padding:1) + ReLU & $3 \times 3 \times 64$ \\ & Max Pooling & $2 \times 2$ \\ & Fully Connected + ReLU & $3136 \times 200$ \\ & Dropout & p : 0.5 \\ & Fully Connected + ReLU & $200 \times 200$ \\ & Dropout & p : 0.5 \\ & Fully Connected & $200 \times 10$ \\ \hline \hline \multirow{15}{*}{CIFAR10} & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 32$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 64$ \\ & Max Pooling (Stride 2) & $2 \times 2$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 128$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 128$
\\ & Max Pooling (Stride 2) & $2 \times 2$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 256$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 256$ \\ & Dropout & p : 0.5 \\ & Max Pooling (Stride 2) & $2 \times 2$ \\ & Fully Connected + ReLU & $4096 \times 1024$ \\ & Dropout & p : 0.5 \\ & Fully Connected + ReLU & $1024 \times 256$ \\ & Dropout & p : 0.5 \\ & Fully Connected & $256 \times 10$ \\ \hline \end{tabular} \end{table} Note: The common softmax layers are omitted for simplicity. For our proposed methods, we have applied Sigmoid and Tanh activation layers just after the final fully connected layers. The model architectures are available in the shared Github repository. \begin{table*}[!htbp] \centering \caption{CNN model parameters} \label{tab:cnn_model_params} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\\\end{tabular}}} & \multicolumn{4}{c|}{MNIST (Digit)} & \multicolumn{4}{c|}{CIFAR-10} \\ \cline{2-9} \multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours\\(Tanh)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Normal\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours \\(Tanh)\end{tabular} \\ \hline Opt. & Adam & Adam & Adam & Adam & Adam & Adam & Adam & Adam \\ \hline LR & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\ \hline Batch S. & 128 & 128 & 128 & 128 & 128 & 128 & 128 & 128 \\ \hline Dropout & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.05 & 0.25 \\ \hline Epochs & 20 & 20 & 20 & 20 & 50 & 50 & 50 & 50 \\ \hline Temp. 
& 1 & 20 & 20 & 20 & 1 & 50 & 50 & 50 \\ \hline \end{tabular} \end{table*} \begin{table}[!htbp] \centering \caption{Parameters that are used in BIM and Targeted BIM attacks: $\alpha$ denotes the step size and $i$ denotes \# of steps for a perturbation budget $\epsilon$} \label{tab:iterative_settings} \begin{tabular}{c|c|c} \hline \textbf{Dataset} & \textbf{Parameters} & \textbf{$l_p$ norm}\\ \hline \hline MNIST Digit & $\epsilon$ = 0.1 \& 0.2, $\alpha$ = $\epsilon$ $\cdot$ 0.1, i = 20 & $l_\infty$\\ CIFAR10 & $\epsilon$ = 3/255 \& 6/255, $\alpha$ = $\epsilon$ $\cdot$ 0.1, i = 20 & $l_\infty$ \\ \hline \end{tabular} \end{table} \section{Appendix - B} This part describes the derivation of the gradient of the cross-entropy loss coupled with the softmax activation function. This derivation was first detailed in \cite{Campbell_onthe}; we follow the version explained by Katzir et al. in \cite{katzir2019blocking}. Softmax Function Gradient Derivation: Let $K$ represent the number of classes in the training data, $y=(y_0,y_1,\ldots,y_{K-1})$ denote the one-hot encoded label information, and $z_i$ denote the $i^{th}$ component of the logits layer output given some network input $x$.
The probability estimate of the $i^{th}$ class produced by the softmax function for this input is: \begin{equation} P_i = \frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \end{equation} The derivative of $P_i$ with respect to $z_j$ can then be calculated as below: \begin{equation} \frac{\partial P_i}{\partial z_j} = \frac{\partial \left(\frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \right)}{\partial z_j} \end{equation} In the case of $i=j$, we get: \begin{equation} \begin{split} \frac{\partial P_i}{\partial z_j} & = \frac{\partial \left(\frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \right)}{\partial z_j} = \frac{e^{z_i}\sum_{k=0}^{K-1}e^{z_k}-e^{z_i}e^{z_j}}{\left(\sum_{k=0}^{K-1} e^{z_k} \right)^2} \\ & = \frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \cdot \frac{(\sum_{k=0}^{K-1}e^{z_k}) - e^{z_j}}{\sum_{k=0}^{K-1}e^{z_k}} = P_i(1-P_j) \end{split} \end{equation} Likewise, when $i \neq j$, we get: \begin{equation} \begin{split} \frac{\partial P_i}{\partial z_j} & = \frac{\partial(\frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}})}{\partial z_j} = \frac{0-e^{z_i}e^{z_j}}{(\sum_{k=0}^{K-1}e^{z_k})^2} \\ & = \frac{-e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \cdot \frac{e^{z_j}}{\sum_{k=0}^{K-1}e^{z_k}} = -P_iP_j \end{split} \end{equation} Combining the two previous results, we get: \begin{equation} \label{eq:different} \begin{split} \frac{\partial P_i}{\partial z_j} & = \begin{cases} P_i(1-P_j), & \text{if $i=j$}\\ -P_iP_j, & i \neq j \end{cases} \end{split} \end{equation} The cross-entropy loss $L$ for any input $x$ is formulated as: \begin{equation} L = -\sum_{i=0}^{K-1}y_i \cdot \log(P_i) \end{equation} Taking `log' to be the natural logarithm (ln) for simplicity, we may formulate the gradient of the cross-entropy loss with respect to the $i^{th}$ logit as below: \begin{equation} \begin{split} \frac{\partial L}{\partial z_i} & = \frac{\partial (-\sum_{k=0}^{K-1}y_k \cdot \log(P_k))}{\partial z_i} \\ & = -\sum_{k=0}^{K-1}y_k \cdot \frac{\partial \log(P_k)}{\partial z_i} = -\sum_{k=0}^{K-1}y_k \cdot
\frac{\partial \log(P_k)}{\partial P_k} \cdot \frac{\partial P_k}{\partial z_i} \\ & = -\sum_{k=0}^{K-1}y_k \cdot \frac{1}{P_k} \cdot \frac{\partial P_k}{\partial z_i} \end{split} \end{equation} Combining Cross-Entropy and Softmax Function Derivatives: Knowing from Eq. \ref{eq:different} that the softmax derivative for the case $i = j$ differs from the other cases, we rearrange the loss derivative equation slightly to separate this case from the others: \begin{equation} \begin{split} \frac{\partial L}{\partial z_i} & = -\sum_{k=0}^{K-1}y_k \cdot \frac{1}{P_k} \cdot \frac{\partial P_k}{\partial z_i} \\ & = -y_i \cdot \frac{1}{P_i} \cdot \frac{\partial P_i}{\partial z_i}-\sum_{k \neq i}y_k \cdot \frac{1}{P_k} \cdot \frac{\partial P_k}{\partial z_i} \end{split} \end{equation} We can now apply the derivative of softmax we derived before to obtain: \begin{equation} \begin{split} -y_i \cdot \frac{P_i(1-P_i)}{P_i} & - \sum_{k \neq i}\frac{y_k \cdot (-P_kP_i)}{P_k} = -y_i + y_iP_i + \sum_{k \neq i} y_kP_i \\ & = P_i\left(y_i+\sum_{k \neq i}y_k\right)-y_i \end{split} \end{equation} Luckily, because $y$ is the one-hot encoded actual label vector, we know that: \begin{equation} y_i + \sum_{k \neq i} y_k = \sum_{k=0}^{K-1}y_k=1 \end{equation} and therefore we finally end up with the expression below as the derivative of the loss with respect to any logit: \begin{equation} \frac{\partial L}{\partial z_i} = P_i \left( y_i + \sum_{k \neq i}y_k \right) -y_i = P_i - y_i \end{equation} \bibliographystyle{IEEEtran} \section{Introduction} By the end of 2013, researchers found out that DNN models are vulnerable to well-crafted malicious perturbations. Szegedy et al. \cite{szegedy2014intriguing} were the first to recognize the prevalence of adversarial examples in the context of image classification. Researchers have shown that a slight alteration in an image can influence the prediction of a DNN model.
It has been demonstrated that even the most advanced classifiers can be fooled by a very small and practically undetectable change in the input, resulting in inaccurate classification. Since then, many research studies \cite{tuna2021exploiting,ilyas2019prior,tuna2020closeness,meng2017magnet} have been performed in this new discipline, known as \textit{Adversarial Machine Learning}, and these studies are not limited to the image classification task. To give some examples, Sato et al. \cite{sato2018interpretable} showed in the NLP domain that changing just one word in an input sentence can fool a sentiment analyser trained on textual data. Another example is in the audio domain \cite{carlini2018audio}, where the authors generated targeted adversarial audio samples for an automatic speech recognition task by introducing very little distortion to the original waveform. The findings of this study indicate that the target model can easily be manipulated into transcribing the input as any desired phrase. Attacks that take advantage of DNNs' weaknesses can substantially compromise the security of these machine learning (ML)-based systems, often with disastrous results. Adversarial evasion attacks mainly work by altering the input samples to increase the likelihood of wrong predictions. These attacks can cause the model's prediction performance to deteriorate, since the model cannot correctly predict the actual output for the input instances. In the context of medical applications, a malicious attack could result in an inaccurate disease diagnosis. As a result, it has the potential to impact the patient's health, as well as the healthcare industry \cite{finlayson2019adversarial}. Similarly, self-driving cars employ ML to navigate traffic without the need for human involvement. A wrong decision by an autonomous vehicle caused by an adversarial attack could result in a tragic accident \cite{sitawarin2018darts,morgulis2019fooling}.
Hence, defending against malicious evasion attacks and boosting the robustness of ML models without sacrificing clean accuracy is critical. Presuming that these ML models are to be utilized in critical areas, we should pay the utmost attention to the performance and security problems of these architectures. In principle, adversarial strategies in evasion attacks can be classified based on multiple criteria. Based on the attacker's ultimate goal, attacks can be classified as targeted and untargeted attacks. In the former, the attacker perturbs the input image so that a particular target class is predicted by the model, whereas in the latter, the attacker perturbs the input image merely to cause the model to predict any class other than the actual one. Attacks can also be grouped based on the level of knowledge that the attacker has. If the attacker has complete knowledge of the model, such as its architecture, weights, and hyper-parameters, we call this a White-Box setting. However, if the attacker has no information about the deployed model and defense strategy, we call this a Black-Box setting \cite{NEURIPS2018_e7a425c6}. This research study focuses on both targeted and untargeted attacks in a White-Box setting. We propose an effective modification to standard DNN-based classifiers by adding a specific kind of non-linear activation function (sigmoid or tanh) to the last layer of the model architecture. We show that training a model using a high temperature value in the output layer activations, and then using the model with the temperature value discarded at inference time, provides a very high degree of robustness to loss-based White-Box targeted and untargeted attacks, as well as to attacks such as Deepfool. We hereby name our proposed models \texttt{Squeezed Models}. Our code is released on GitHub \footnote{\url{https://github.com/author-name/xxx}} for scientific use.
To summarize, our main contributions in this study are: \begin{itemize} \item We propose an effective modification to standard DNN-based classifiers, which enables natural robustness to gradient-based White-Box targeted and untargeted attacks. \item We show that using a specific type of non-linear activation function at the output layer with high temperature values can provide robustness to the model without impairing its ability to learn. \item We experimentally show that adding non-linearity to the output layer also provides robustness to other types of attacks, like Deepfool. \end{itemize} \section{Related Works} \label{ch:related_work} Since the uncovering of DNNs' vulnerability to adversarial attacks \cite{szegedy2014intriguing}, a lot of work has gone into inventing new adversarial attack algorithms and defending against them by utilizing more robust architectures \cite{HUANG2020100270,catak2020generative,9003212,9099439}. We will discuss some of the noteworthy attack and defense studies separately. \subsection{Adversarial evasion attacks} DNN models have some vulnerabilities that make them challenging to defend in adversarial settings. For example, they are mostly sensitive to slight changes in the input data, leading to unexpected results in the model's predictions. Figure \ref{fig:adv-ml-ex} depicts how an adversary could take advantage of such a vulnerability and fool the model using a properly crafted perturbation applied to the input. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\linewidth]{figures/jarviiii3.pdf} \caption{The figure depicts an example of an adversarial attack. The original image is subjected to the adversarial perturbation.
The precisely crafted perturbation manipulates the model in such a way that a "Dog (Chihuahua)" is wrongly identified as "Sports Car" with high confidence.} \label{fig:adv-ml-ex} \end{figure} An important portion of the attack methods are gradient-based, perturbing the input sample in order to maximize the model's loss. In recent years, many different adversarial attack techniques have been suggested in the literature. The most widely known and used adversarial attacks are the \texttt{Fast-Gradient Sign Method}, the \texttt{Iterative Gradient Sign Method}, \texttt{DeepFool} and \texttt{Carlini-Wagner}. These adversarial attack algorithms are briefly explained in Sections \ref{sec:fgsm-definition} - \ref{sec:carlini}. \subsubsection{Fast-Gradient Sign Method}\label{sec:fgsm-definition} This method, also known as FGSM \cite{goodfellow2015explaining}, was one of the first and most well-known adversarial attacks. This attack strategy exploits the derivative of the model's loss function with respect to the input sample to determine in which direction each pixel value of the input image would have to be changed to minimize the loss; it then changes all pixels simultaneously in the opposite direction, i.e., the direction that maximizes the loss. One can craft adversarial samples for a model with a classification loss function represented as $J(\theta,\mathbf{x},y)$ by utilizing the formula below, where $\theta$ denotes the parameters of the model, $\mathbf{x}$ is the benign input, and $y_{true}$ is the true label of the input. \begin{equation} \mathbf{x}^{adv} = \mathbf{x} + \epsilon \cdot sign\left(\nabla_x J(\theta,\mathbf{x},y_{true}) \right) \label{eq:fgsm_untargeted} \end{equation} In \cite{kurakin2017adversarial}, the authors presented a targeted variant of FGSM referred to as the Targeted Gradient Sign Method (TGSM), which modifies the attack so that it tries to push the model's prediction toward a particular class.
To achieve this, instead of maximizing the loss with respect to the true class label, TGSM attempts to minimize the loss with respect to the target class, $J(\theta,\mathbf{x},y_{target})$. \begin{equation} \mathbf{x}^{adv} = \mathbf{x} - \epsilon \cdot sign\left(\nabla_x J(\theta,\mathbf{x},y_{target}) \right) \label{eq:fgsm_targeted} \end{equation} Different from Eq. \ref{eq:fgsm_untargeted}, we now subtract the crafted perturbation from the original image, as we try to minimize the loss this time. If we want to increase the effectiveness of this approach, we can modify the above equation as in Eq. \ref{eq:fgsm_targeted_enhanced}. The only difference is that, in addition to minimizing the loss for the target label, we simultaneously maximize the loss for the true label. \begin{equation} \mathbf{x}^{adv} = \mathbf{x} + \epsilon \cdot sign\left(\nabla_x (J(\theta,\mathbf{x},y_{true})-J(\theta,\mathbf{x},y_{target})) \right) \label{eq:fgsm_targeted_enhanced} \end{equation} \subsubsection{Iterative Gradient Sign Method} Kurakin et al. \cite{kurakin2017adversarial} proposed a minor but significant enhancement to the FGSM. In this method, instead of taking one large step $\epsilon$ in the direction of the gradient sign, we take numerous smaller steps $\alpha$ and clip the result using the supplied value $\epsilon$. This method is also known as the Basic Iterative Method (BIM), and it is simply FGSM applied to an input sample iteratively. Equation \ref{eq:bim} describes how to generate perturbed images under the $l_\infty$ norm for a BIM attack.
\begin{equation} \begin{aligned} \mathbf{x}_{0}^* & = \mathbf{x} \\ \mathbf{x}_{t+1}^* & = clip_{x, \epsilon} \{ \mathbf{x}_{t}^* + \alpha \cdot sign \left( \nabla_\mathbf{x} J(\theta, \mathbf{x}_t^*, y_{true}) \right) \} \end{aligned} \label{eq:bim} \end{equation} where $\mathbf{x}$ is the clean sample input to the model, $\mathbf{x}_t^*$ is the output adversarial sample at the $t$\textsuperscript{th} iteration, $J$ is the loss function of the model, $\theta$ denotes the model parameters, $y_{true}$ is the true label for the input, $\epsilon$ is a configurable parameter that limits the maximum perturbation amount in the given $l_\infty$ norm, and $\alpha$ is the step size. As in the case of TGSM, we can easily modify Eq. \ref{eq:bim} to produce a targeted variant of BIM. At each intermediate step, we can try to minimize the loss with respect to the target class while at the same time maximizing the loss with respect to the original class, as in Eq. \ref{eq:bim_targeted}. \begin{equation} \begin{aligned} \mathbf{x}_{0}^* & = \mathbf{x} \\ \mathbf{x}_{t+1}^* & = clip_{x, \epsilon} \{ \mathbf{x}_{t}^* + \alpha \cdot sign ( \nabla_\mathbf{x} (J(\theta, \mathbf{x}_t^*, y_{true})-\\ & J(\theta, \mathbf{x}_t^*, y_{target})) ) \} \end{aligned} \label{eq:bim_targeted} \end{equation} \subsubsection{Deepfool Attack} \label{sec:deepfool-definition} This attack method was introduced by Moosavi-Dezfooli et al. \cite{moosavidezfooli2016deepfool} and is one of the strongest untargeted attack algorithms in the literature. It is designed to work with several distance norm metrics, including the $l_\infty$ and $l_{2}$ norms. The Deepfool attack is built on the idea that neural network models behave like linear classifiers with classes separated by a hyperplane. Starting with the initial input point $\mathbf{x_t}$, the algorithm determines the closest hyperplane and the smallest perturbation amount, which is the orthogonal projection onto the hyperplane, at each iteration.
The algorithm then computes $\mathbf{x}_{t+1}$ by adding this smallest perturbation to $\mathbf{x}_{t}$ and checks for misclassification. An illustration of this attack algorithm is provided in Figure \ref{fig:decision_boundary}. This attack can break the defensive distillation method and achieves higher success rates than the previously mentioned iterative attack approaches. The downside is that the produced adversarial samples generally lie close to the decision boundary of the model. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth] {figures/decision_boundary.png} \caption{Illustration of Deepfool attack algorithm} \label{fig:decision_boundary} \end{figure} \subsubsection{Carlini{\&}Wagner Attack} \label{sec:carlini} This attack was proposed by Carlini and Wagner \cite{carlini2017evaluating} and is one of the strongest attack algorithms so far. As a result, it is commonly used as a benchmark by adversarial defense research groups, which try to develop more robust DNN architectures that can withstand adversarial attacks. It has been shown that, for the most well-known datasets, the CW attack has a greater success rate on normally trained models than the other attack types. Like Deepfool, it can also deceive defensively distilled models, for which other attack types struggle to create adversarial examples. In order to generate more effective and strong adversarial samples under multiple $l_{p}$ norms, the authors reformulate the attack as an optimization problem which may be solved using gradient descent. A $confidence$ parameter in the algorithm can be used to adjust the prediction score of the created adversarial sample. For a normally trained model, applying the CW attack with its default setting (confidence set to 0) generally yields adversarial samples close to the decision boundary, whereas high-confidence adversarial samples are generally located further away from it.
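To make the mechanics of the gradient-based attacks above concrete, the following minimal NumPy sketch implements untargeted FGSM (Eq. \ref{eq:fgsm_untargeted}) and BIM (Eq. \ref{eq:bim}) against a toy logistic-regression "model" whose input gradient can be computed analytically; the toy model and all names here are illustrative assumptions, not the networks or attack implementations used in our experiments.

```python
import numpy as np

def loss_and_grad(x, w, b, y_true):
    """Cross-entropy loss of a logistic model and its gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))           # sigmoid(w.x + b)
    loss = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    grad_x = (p - y_true) * w                         # dJ/dx via the chain rule
    return loss, grad_x

def fgsm(x, w, b, y_true, eps):
    """Untargeted FGSM: one step of size eps along the gradient sign."""
    _, g = loss_and_grad(x, w, b, y_true)
    return x + eps * np.sign(g)

def bim(x, w, b, y_true, eps, alpha, steps):
    """BIM: iterated FGSM with step size alpha, clipped to the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, w, b, y_true)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)      # l_inf projection
    return x_adv

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1
x, y = rng.normal(size=8), 1.0
x_fgsm = fgsm(x, w, b, y, eps=0.1)
x_bim = bim(x, w, b, y, eps=0.1, alpha=0.01, steps=20)
```

Both attacks raise the model's loss while keeping the perturbation inside the $\epsilon$-ball enforced by the clipping step.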
Adversarial machine learning is a burgeoning field of research, and we see a lot of new adversarial attack algorithms being proposed. Some of the recent remarkable ones are: i) Square Attack \cite{andriushchenko2020square}, a query-efficient black-box attack that is not based on the model's gradient and can break defenses that utilize gradient masking; ii) HopSkipJumpAttack \cite{9152788}, a decision-based attack algorithm based on an estimation of the model's gradient direction and a binary-search procedure for approaching the decision boundary; iii) Prior Convictions \cite{ilyas2019prior}, which utilizes two kinds of gradient estimation (time- and data-dependent priors) and proposes a bandit-optimization-based framework for adversarial sample generation under a loss-only-access black-box setting; and iv) Uncertainty-Based Attack \cite{tuna2021exploiting}, which utilizes both the model's loss function and quantified epistemic uncertainty to generate more powerful attacks. \subsection{Adversarial defense} \subsubsection{Defensive Distillation} Although the idea of knowledge distillation was originally introduced by Hinton et al. \cite{hinton2015distilling} to compress a large model into a smaller one, the utilization of this technique for adversarial defense purposes was first suggested by Papernot et al. \cite{papernot2016distillation}. The algorithm starts by training a $teacher \ model$ on the training data, employing a high temperature (T) value in the softmax function as in Equation \ref{eq:softmax_T}, where $p_{i}$ is the probability of the $i$\textsuperscript{th} class and the $z_{i}$'s are the logits. \begin{equation} p_{i} = \frac{\exp(\frac{z_{i}}{T})}{\sum_{j} \exp(\frac{z_{j}}{T})} \label{eq:softmax_T} \end{equation} Then, using the previously trained teacher model, each of the samples in the training data is labeled with soft labels calculated with temperature (T) at prediction time.
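As a quick illustration of Eq. \ref{eq:softmax_T} (the logit and temperature values below are made up for demonstration), a high temperature flattens the teacher's output distribution into the soft labels that the student is trained on:

```python
import numpy as np

def softmax_T(z, T=1.0):
    """Temperature-scaled softmax."""
    e = np.exp(np.asarray(z, dtype=float) / T)
    return e / e.sum()

z = [8.0, 2.0, 1.0, -1.0]       # illustrative teacher logits
p_hard = softmax_T(z, T=1.0)    # nearly one-hot at T = 1
p_soft = softmax_T(z, T=20.0)   # much flatter soft labels at high T
```

As $T$ grows, the distribution approaches uniform, so the soft labels carry inter-class similarity information rather than a single hard decision.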
The $distilled \ model$ is then trained with the soft labels acquired from the teacher model, again with a high temperature (T) value in the softmax. When the training of the student model is over, we set the temperature value to 1 at prediction time. Figure \ref{fig:edefens-dist} shows the overall steps of this technique. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\linewidth]{figures/defense-dist.png} \caption{Defensive Distillation.} \label{fig:edefens-dist} \end{figure} \subsubsection{Adversarial Training} Adversarial training is an intuitive defense method in which the model's robustness is increased by training it with adversarial samples. As shown in Eq. \ref{eq:min_maxx}, this strategy can be mathematically expressed as a Minimax game. \begin{equation} \label{eq:min_maxx} \underset{\theta}{min} \ \underset{\|\delta\| \leq \epsilon }{max} \ J(h_\theta(x+\delta), y) \end{equation} where $h$ denotes the model, $J$ denotes the model's loss function, $\theta$ represents the model's weights and $y$ is the actual label. $\delta$ is the perturbation added to the input $x$, and it is constrained by the given $\epsilon$ value. The inner objective is maximized by using the most powerful attack possible, which is mostly approximated by various adversarial attack types. In order to reduce the loss resulting from the inner maximization step, the outer minimization objective is used to train the model. This whole process produces a model that is expected to be resistant to the adversarial attacks used during its training. For adversarial training, Goodfellow et al. \cite{goodfellow2015explaining} used adversarial samples crafted by the FGSM attack, while Madry et al. used the PGD attack \cite{madry2019deep} to build more robust models, at the expense of consuming more computational resources.
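The Minimax objective of Eq. \ref{eq:min_maxx} is typically approximated by alternating an inner attack step with an outer gradient-descent step. A minimal sketch of one such step, using FGSM for the inner maximization and plain SGD on a toy logistic model for the outer minimization (the toy model, data, and all hyper-parameter values are illustrative assumptions):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def adv_train_step(w, b, x, y, eps, lr):
    """One adversarial-training step: inner max via FGSM, outer min via SGD."""
    # Inner maximization: perturb x to (approximately) maximize the loss.
    p = sigmoid(w @ x + b)
    x_adv = x + eps * np.sign((p - y) * w)
    # Outer minimization: descend the loss gradient at the adversarial point.
    p_adv = sigmoid(w @ x_adv + b)
    w = w - lr * (p_adv - y) * x_adv
    b = b - lr * (p_adv - y)
    return w, b

rng = np.random.default_rng(1)
w, b = rng.normal(size=4), 0.0
X = rng.normal(size=(64, 4))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0]) > 0).astype(float)
for _ in range(200):
    for xi, yi in zip(X, y):
        w, b = adv_train_step(w, b, xi, yi, eps=0.05, lr=0.1)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
```

Because every weight update is computed at a worst-case perturbed input, the resulting decision boundary keeps a margin of roughly $\epsilon$ around the training points.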
Despite the fact that adversarial training is often regarded as one of the most effective defenses against adversarial attacks, adversarially trained models are nevertheless vulnerable to attacks like CW. Adversarial ML is a very active field of research, and new adversarial defense approaches are constantly being presented. Among the most notable are: i) High-Level Representation Guided Denoiser (HGD) \cite{liao2018defense}, which avoids the error amplification effect of a traditional denoiser by utilizing the error in the upper layers of a DNN model as the loss function, and thereby manages to train a more efficient image denoiser; ii) APE-GAN \cite{shen2017apegan}, which uses a Generative Adversarial Network (GAN) trained with adversarial samples to eliminate any adversarial perturbation of an input image; iii) Certified Defense \cite{raghunathan2020certified}, which proposes a new differentiable upper bound yielding a model certificate ensuring that no attack can cause the error to exceed a specific value; and iv) \cite{tuna2020closeness}, which uses several uncertainty metrics for detecting adversarial samples. \section{Approach} \subsection{Chosen Activation Functions} We use a specific type of activation function (sigmoid and hyperbolic tangent) whose derivative can be expressed in terms of the function itself, and whose derivative approaches 0 as the output of the activation function approaches its maximum or minimum value. Starting with the sigmoid function: the sigmoid function ($\sigma(x)$) can be represented as in Eq. \ref{eq:sigmoid} and squeezes the input to the range (0, 1), as can be seen in Figure \ref{fig:sig}. \begin{equation} \sigma(x) = \frac{1}{1+e^{-x}} \label{eq:sigmoid} \end{equation} The derivative of the sigmoid function can be expressed as in Eq. \ref{eq:sigmoid_derivative}: \begin{equation} \frac{d}{{dx}}\sigma(x) = \sigma(x) \cdot
(1 - \sigma(x)) \label{eq:sigmoid_derivative} \end{equation} One can easily derive from the above formulation, or verify from Figure \ref{fig:sig}, that the derivative of the sigmoid function approaches 0 as the output of the sigmoid function approaches 0 or 1. Similarly, we can represent the hyperbolic tangent function ($\tanh(x)$) as in Eq. \ref{eq:tanhh}. Different from the sigmoid, the hyperbolic tangent function squeezes the input to the range (-1, 1), as can be seen in Figure \ref{fig:tanh}. \begin{equation} \tanh{x}=\frac{e^x - e^{-x}}{e^x + e^{-x}} \label{eq:tanhh} \end{equation} The derivative of the hyperbolic tangent function can be expressed as in Eq. \ref{eq:tanhh_derivative}. Using Eq. \ref{eq:tanhh_derivative} or Figure \ref{fig:tanh}, we can verify that the derivative of the $\tanh$ function approaches 0 as the output of the $\tanh$ function approaches -1 or 1. So the pattern is similar to the one we see in the sigmoid function: the derivative of both activation functions goes to 0 when their outputs are at their minimum or maximum values. This property will be quite useful when we use these activation functions at the output layer of DNN classifiers to zero out the gradients. \begin{equation} \frac{d}{{dx}}\tanh x = 1 - \tanh ^2 x \label{eq:tanhh_derivative} \end{equation} \begin{figure}[!htbp] \centering \subfloat[\centering Sigmoid\label{fig:sig}]{{\includegraphics[width=5.2cm]{figures/sigmoid.pdf} }}% \qquad \subfloat[\centering Hyperbolic Tangent\label{fig:tanh}]{{\includegraphics[width=5.2cm]{figures/tanh.pdf} }}% \end{figure} \subsection{Proposed Method} We begin this part by introducing the loss calculation for a standard deep neural network classifier.
Let $K$ denote the number of output classes and $\mathcal{D} = \{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^{N}$ be our dataset, where $x_i \in \mathbb{R}^{d}$ and $y_i \in \{o_1,o_2,\dots,o_K\}$ are the $i^{th}$ input and output respectively. Here, $o_k$ is the one-hot encoded vector whose $k^{th}$ index is one and whose other indices are zero, and the probability output score of the output class with index $k \in \{0,1,\dots,K-1\}$ is represented by $P_k$. Based on this notation, the loss value ($J$) of the classifier for any test input $x^*$ can be calculated using the cross-entropy loss function as below: \begin{equation} \begin{split} J = -\sum_{k=0}^{K-1}o_{true}[k] \cdot \log(P_k) & = -\log(P_{true}) \end{split} \end{equation} As can be seen in Figure \ref{fig:standard_dnn}, in the standard DNN-based classifiers that are widely used today, activation functions are omitted in the output layer, and the prediction score of each class is calculated by feeding the output of the last layer of the network (the logits) to the softmax function. If we denote the logits by $Z= \{z_0,z_1,\dots,z_{K-1}\}$, we can calculate the derivative of the loss with respect to the $k^{th}$ logit using Eq. \ref{eq:logit_derivative}. A formal derivation of Eq. \ref{eq:logit_derivative} is provided in Appendix B.
\begin{equation} \frac{\partial J}{\partial z_k} = P_k - o_k[k] \label{eq:logit_derivative} \end{equation} \begin{figure}[!htbp] \centering \begin{subfigure}[b]{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/sigmoid_trick-Standard_DNN.png} \caption{Standard DNN Classifier} \label{fig:standard_dnn} \end{subfigure} \hfill \begin{subfigure}[b]{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/sigmoid_trick-Sigmoid.png} \caption{The proposed classifier} \label{fig:proposed_dnn} \end{subfigure} \caption{Comparison of standard DNN classifier and the proposed classifier} \label{fig:proposed_classifier} \end{figure} Loss-based adversarial attacks try to exploit the gradient of the loss function $(J)$ with respect to the input sample $x$; what the attacker is trying to do is to use $\frac{\partial J}{\partial x}$ to maximize $J$. We know from the chain rule that $\frac{\partial J}{\partial x} = \frac{\partial J}{\partial z}\cdot\frac{\partial z}{\partial x}$. Therefore, for any target class $k$, the gradient of the model's loss function with respect to the input image is directly proportional to $\frac{\partial J}{\partial z_k}$. In response to this kind of attack, several defense approaches have been proposed which mask the gradients of the models. For example, the Defensive Distillation technique achieves this against untargeted loss-based attacks by enabling the model to make highly confident predictions: when the model makes a highly confident prediction in favor of the true class, $P_{true}$ approaches 1, and since the label for the true class is also 1, $\frac{\partial J}{\partial z}$ and therefore $\frac{\partial J}{\partial x}$ approach 0 in this specific untargeted attack case. However, the above approach will not work in the targeted attack case, because in order to prevent targeted attacks we would have to make $\frac{\partial J}{\partial z}$ become 0 for the target class.
The way to achieve this for standard DNN-based classifiers is to make the target probability ($P_{target}$) very close to 1 (so that $P_{target} - o_{target}[target]$ equals 0), which obviously contradicts the natural learning task. Therefore, there actually exists a dilemma between masking the gradient of the model in the targeted attack case and achieving the learning task at hand. This phenomenon is beautifully explained by Katzir et al. in \cite{katzir2019blocking}. To overcome this problem, we propose to use either of two commonly known nonlinear activation functions (sigmoid and $\tanh$) on the logits of the model, as depicted in Figure \ref{fig:proposed_dnn}. The important point is to apply a high temperature value to these activation functions during the learning process (e.g., $\sigma(x,T)=1/(1+\exp(-x/T))$) and to use the model ignoring the temperature value at prediction time, just like in the defensive distillation technique. After our proposed modification, the output of the last layer will be $\hat{Z}$, where $\hat{Z}=\{\hat{z}_0,\hat{z}_1,\dots,\hat{z}_{K-1}\}$ and $\hat{Z}=\tanh{(Z)}$ or $\hat{Z}=\sigma{(Z)}$, depending on the chosen activation function. Based on this modified architecture, the derivative of the model's loss with respect to the input image under a gradient-based attack against any class $k$ can be formulated as below: \begin{equation} \begin{split} \frac{\partial J}{\partial x} = \frac{\partial J}{\partial \hat{z}_k}\cdot\frac{\partial \hat{z}_k}{\partial z_k}\cdot\frac{\partial z_k}{\partial x} \end{split} \label{eq:long_chain} \end{equation} In the case of the sigmoid function, the above equation can be reformulated as below by using Eq. \ref{eq:sigmoid_derivative} and Eq. \ref{eq:logit_derivative}.
\begin{equation} \begin{split} \frac{\partial J}{\partial x} = (P_k - o_k[k])\cdot\hat{z}_k\cdot(1 - \hat{z}_k)\cdot\frac{\partial z_k}{\partial x} \end{split} \label{eq:long_sigmoid} \end{equation} In the case of the tanh function, Eq. \ref{eq:long_chain} can be written as below by using Eq. \ref{eq:tanhh_derivative} and Eq. \ref{eq:logit_derivative}: \begin{equation} \begin{split} \frac{\partial J}{\partial x} = (P_k - o_k[k])\cdot(1 - \hat{z}_k ^2)\cdot\frac{\partial z_k}{\partial x} \end{split} \label{eq:long_tanh} \end{equation} During the training of the DNN classifier depicted in Figure \ref{fig:proposed_dnn}, we force $\hat{z}_k$ to be at its maximum possible value for the true class in order to maximize the final softmax prediction score, and similarly we force $\hat{z}_k$ to be at its minimum value for the other classes. Therefore, in the case of the sigmoid and tanh functions, $\hat{z}_k$ will approach 1 for the true class, while for the classes other than the true class, $\hat{z}_k$ will approach 0 and -1 for the sigmoid and tanh functions respectively. Since we additionally apply a high temperature value to these activation functions during training time, the outputs of these activation functions ($\hat{z}_k$) will be even closer to their maximum and minimum values at prediction time, when we omit the temperature values. Consequently, Eq. \ref{eq:long_sigmoid} and Eq. \ref{eq:long_tanh} will approach 0 in both the targeted and untargeted attack cases: if we use the proposed model architecture with the sigmoid function, $\hat{z}_k\cdot(1 - \hat{z}_k)$ will be 0 when $\hat{z}_k$ is either 0 or 1, and if we use the proposed model architecture with the tanh function, $1 - \hat{z}_k ^2$ will become 0 when $\hat{z}_k$ is either -1 or 1. This way, we can successfully zero out (mask) the gradients of the model for loss-based targeted and untargeted attack threats.
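The masking effect can be illustrated numerically with a toy three-class example (the logit values and temperature below are illustrative assumptions, not numbers taken from our trained models): after training with a high temperature, the inference-time $\tanh$ outputs sit at their extremes, so the factor $1-\hat{z}_k^2$ in Eq. \ref{eq:long_tanh} drives the loss gradient to zero for every class while the prediction itself is preserved.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

T = 100.0                               # illustrative training-time temperature
z = np.array([300.0, -250.0, -280.0])   # illustrative raw logits of a trained model
z_hat_train = np.tanh(z / T)            # training-time outputs: near, but not at, +/-1
z_hat = np.tanh(z)                      # inference: temperature omitted -> saturated
P = softmax(z_hat)                      # squeezed softmax prediction scores

# Gradient w.r.t. each logit, per Eq. (long_tanh) without the dz/dx factor:
y = np.array([1.0, 0.0, 0.0])           # one-hot label used in the attacker's loss
grad = (P - y) * (1.0 - z_hat**2)
```

During training the model only ever sees $\tanh(z/T)$ (here $z/T$ is around 3, well inside the active range), so logits of this raw magnitude are exactly what the optimization produces; at inference, $\tanh$ saturates and the gradient factor collapses to zero.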
To avoid any round-off errors in floating-point operations, a high precision should be set for floating-point numbers in the ML calculations. \subsection{Visual Representations of Loss Surfaces} We know that normally-trained models are vulnerable to gradient-based white-box targeted and untargeted attack threats. The main reason for this vulnerability lies in the ability of the attacker to successfully exploit the loss function of the model. To illustrate this fact, we conducted a simple experiment using a test image from the MNIST (Digit) dataset and drew the loss surfaces of various models along two different directions (the loss gradient direction and a random direction). When we check Figures \ref{fig:normal_untargeted} and \ref{fig:normal_targeted}, which display the loss surfaces of the normally-trained model, we see in both cases that there exists useful gradient information which the attacker can exploit to craft both untargeted and targeted adversarial samples. \begin{figure*}[!htbp] \centering \subfloat[\centering normal model untargeted\label{fig:normal_untargeted}]{{\includegraphics[width=5cm]{figures/mnist_normal_untargeted_loss_surface_in_loss_direction.pdf} }}% \qquad \subfloat[\centering normal model targeted\label{fig:normal_targeted}]{{\includegraphics[width=5cm]{figures/mnist_normal_targeted_loss_surface_in_loss_direction.pdf} }}% \subfloat[\centering distilled model untargeted\label{fig:distilled_untargeted}]{{\includegraphics[width=5cm]{figures/mnist_distilled_untargeted_loss_surface_in_loss_direction.pdf} }}% \qquad \subfloat[\centering distilled model targeted\label{fig:distilled_targeted}]{{\includegraphics[width=5cm]{figures/mnist_distilled_targeted_loss_surface_in_loss_direction.pdf} }}% \subfloat[\centering our model untargeted\label{fig:our_untargeted}]{{\includegraphics[width=5cm]{figures/mnist_tanh_untargeted_loss_surface_in_loss_direction.pdf} }}% \qquad \subfloat[\centering our model
targeted\label{fig:our_targeted}]{{\includegraphics[width=5cm]{figures/mnist_tanh_targeted_loss_surface_in_loss_direction.pdf} }}% \caption{Loss surfaces of various models under untargeted and targeted attack scenarios}% \label{fig:loss-surfaces}% \end{figure*} To prevent the above vulnerability, various defense methods have been proposed, including defensive distillation. This technique was found to significantly reduce the ability of traditional gradient-based untargeted attacks to build adversarial samples, because defensive distillation has the effect of diminishing the gradients to zero in the untargeted attack case, so that the standard objective function is no longer effective. As depicted in Figure \ref{fig:distilled_untargeted}, the gradient of the distilled model diminishes to zero, and thus loss-based untargeted attacks have difficulty in crafting adversarial samples for defensively distilled models. However, it was later demonstrated that attacks such as the TGSM attack could defeat the defensive distillation strategy \cite{ross2017improving}, although without a mathematical proof of why these attacks actually work. The actual reason for the success of these kinds of attacks against defensively distilled models was shown to lie in their targeted nature \cite{katzir2019blocking}. Figure \ref{fig:distilled_targeted} demonstrates the loss surface of a distilled model under a targeted attack, and we can easily see that the gradient of the model loss does not diminish to zero as in Figure \ref{fig:distilled_untargeted}. The result is not surprising at all, because for a defensively distilled model under a targeted attack, we expect $P_{target}$ to be almost 0 while $o_{target}[target]$ is 1. Therefore, we expect $\frac{\partial J}{{\partial z}}$ (equal to $P_{target}-o_{target}[target]$) to approach -1, which is more than enough to exploit the gradient of the loss function for a successful attack.
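This asymmetry is easy to reproduce numerically with an illustrative confident logit vector (the values below are made up): for a distilled model, the untargeted gradient term $P_{true}-1$ vanishes, while the targeted term $P_{target}-o_{target}[target]$ stays near $-1$.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Illustrative logits of a highly confident (distilled) model; true class = 0.
P = softmax(np.array([40.0, 0.0, 0.0]))

grad_untargeted = P[0] - 1.0  # dJ/dz for the true class: vanishes
grad_targeted = P[1] - 1.0    # dJ/dz for a target class: stays near -1
```

The attacker mounting a targeted attack therefore always has a usable gradient against a distilled model, regardless of how confident the model is.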
As a last attempt, we analyze the loss surfaces of one of our proposed models (the model trained using the $\tanh$ activation function with a high temperature value at the output layer). When we check Figures \ref{fig:our_untargeted} and \ref{fig:our_targeted}, we see that the gradient of the model's loss function diminishes to 0 for both the untargeted and targeted attack cases. This prevents the attacker from exploiting the gradient information of the model to craft successful adversarial perturbations. \subsection{Softmax prediction scores of proposed architectures} For a normally trained standard DNN-based classifier, we expect the model to make a prediction in favor of the true class with a prediction score usually close to 1. In the case of a defensively distilled model, we force the model to make highly confident predictions, which is why we see a prediction score very close to 1 in favor of the true class. However, for our proposed model architectures, the softmax prediction score of the true class is lower than for a normal or defensively distilled model, because the activation function in the last layer of the model limits the values of $\hat{z}_k$ to the interval $(0,1)$ or $(-1,1)$. If we use the sigmoid function in the last layer, the maximum prediction score will be 0.232, and if we use the tanh function in the last layer, the maximum prediction score will be 0.450; this holds for all predictions. Similarly, the minimum prediction scores will be 0.085 and 0.061 for the models with sigmoid and tanh activation functions, respectively. The softmax prediction scores of a test sample from the MNIST dataset are displayed in Figure \ref{fig:softmax-scores} for various models.
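These extreme scores follow directly from evaluating the softmax over $K=10$ bounded logits; the following sketch (assuming 10 classes, as in MNIST and CIFAR-10) reproduces all four values:

```python
import math

K = 10  # number of classes

def softmax_entry(top, others):
    """Softmax score of a logit `top` against K-1 logits all equal to `others`."""
    return math.exp(top) / (math.exp(top) + (K - 1) * math.exp(others))

# Sigmoid output layer: logits squeezed into (0, 1).
sig_max = softmax_entry(1.0, 0.0)   # most confident prediction possible
sig_min = (1 - sig_max) / (K - 1)   # score left for each remaining class

# Tanh output layer: logits squeezed into (-1, 1).
tanh_max = softmax_entry(1.0, -1.0)
tanh_min = (1 - tanh_max) / (K - 1)

print(sig_max, sig_min, tanh_max, tanh_min)
```

The printed values are approximately 0.232, 0.085, 0.450 and 0.061, the maxima and minima quoted above.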
We believe that this behaviour of our models, just like that of defensively distilled models, might also be quite useful for preventing attackers from inferring information that is supposed to be private from the output probability scores of the model's predictions, and might thus contribute to the privacy of the model as suggested in \cite{shokri2017membership,shokri2019}. \begin{figure}[!htbp] \centering \includegraphics[width=1\linewidth]{figures/softmax_scores.pdf} \caption{Softmax score outputs of various models} \label{fig:softmax-scores} \end{figure} \section{Experiments} \subsection{Adversarial Assumptions} In this research study, we assume that the attacker can choose to implement targeted or untargeted attacks against the target model. We assume that the attacker is fully aware of the architecture and parameters of the target model, as in the \textit{whitebox} setting, and uses the model as it is. Another crucial assumption concerns the constraints of the attacker. Clearly, the attacker should be limited to applying a perturbation with $l_p$ norm up to a certain $\epsilon$ value for an attack to be unrecognizable to the human eye. For this study, we used the $l_\infty$ and $l_2$ norm metrics to restrict the maximum perturbation amount that an adversary can apply to the input sample. Finally, the error rate of our proposed defense technique is assessed as the percentage of resulting successful attack samples, as proposed by Goodfellow et al. \cite{goodfellow2015explaining} and recommended by Carlini et al. \cite{carlini2019evaluating}. \subsection{Experimental Setup}\label{sec:experimental-setup} For our experiments, we used four models for each dataset: a normal model, a defensively distilled (student) model, our proposed model with sigmoid activation, and our proposed model with tanh activation at the output layer.
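The $l_\infty$ constraint in the assumptions above can be illustrated with a minimal FGSM-style sketch (ours, for illustration only; it uses a toy linear "model" so the input gradient is available in closed form, whereas a real attack would obtain it via autodiff):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear "model": logits = W @ x, so the cross-entropy gradient
# w.r.t. the input is W.T @ (P - y) in closed form.
rng = np.random.default_rng(1)
W = rng.normal(size=(10, 784))
x = rng.uniform(0.0, 1.0, size=784)   # a flattened 28x28 "image"
y = np.zeros(10); y[0] = 1.0          # one-hot true label

grad_x = W.T @ (softmax(W @ x) - y)   # d(cross-entropy)/dx

eps = 0.1                              # l_inf perturbation budget
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

linf_dist = np.max(np.abs(x_adv - x))  # never exceeds eps
print(linf_dist)
```

The single signed-gradient step keeps the perturbation inside the $\epsilon$-ball in the $l_\infty$ norm, which is exactly the budget constraint imposed on the adversary.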
Using the same architectures, we trained our CNN models on the MNIST (Digit) \cite{lecun-mnisthandwrittendigit-2010} and CIFAR-10 \cite{cifar10} datasets. For the MNIST (Digit) dataset, our models attained accuracy rates of 99.35\%, 99.41\%, 98.97\% and 99.16\%, respectively. For the CIFAR-10 dataset, our models attained accuracy rates of 83.95\%, 84.68\%, 82.37\% and 80.15\%, respectively. The architectures of our CNN models and the hyperparameters used in model training are listed in Appendix A. Finally, we set the temperature ($T$) value to 20 and 50 for the MNIST and CIFAR-10 datasets, respectively, during the training of the defensively distilled model and our proposed models. \subsection{Experimental Results}\label{sec:experimental-results} During our tests, we only attacked test samples that our models had previously classified accurately, because an adversary would have no reason to tamper with samples that have already been labeled incorrectly. For the TGSM and Targeted BIM attacks, we regard an attack as successful only if the perturbed image is classified by the model as the chosen target class. We set the target class to ``2'' for the MNIST (Digit) dataset, and ``Cars'' for the CIFAR-10 dataset. We utilized an open source Python library called Foolbox \cite{rauber2018foolbox} to implement the attacks used in this study. The attack parameters used in BIM and Targeted BIM are provided in Table \ref{tab:iterative_settings}. The results of our experiments for the MNIST and CIFAR10 datasets are available in Tables \ref{tab:mnist_results} and \ref{tab:mnist_results2} and Tables \ref{tab:cifar_results} and \ref{tab:cifar_results2}, together with the amount of perturbation applied and the chosen norm metrics. For the CW and Deepfool attacks only, we used the $l_{2}$ norm equivalent of the applied perturbation, computed via the formula $l_{2} = l_{\infty} \times \sqrt{n} \times \sqrt{2}/\sqrt{\pi e}$, where $n$ is the input sample dimension.
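For the MNIST budgets, this conversion can be reproduced directly (a sketch; $n=784$ for flattened $28\times 28$ images):

```python
import math

def l2_equivalent(l_inf, n):
    """l2-norm budget equivalent to an l_inf budget on an n-dimensional input."""
    return l_inf * math.sqrt(n) * math.sqrt(2) / math.sqrt(math.pi * math.e)

n_mnist = 28 * 28
eps_small = l2_equivalent(0.1, n_mnist)  # ~1.35, the Part 1 CW/Deepfool budget
eps_large = l2_equivalent(0.2, n_mnist)  # ~2.70, the Part 2 CW/Deepfool budget
print(eps_small, eps_large)
```

This recovers the $\epsilon$ values 1.35 and 2.70 used for the CW and Deepfool rows of the MNIST tables.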
When we check the results, we observe that normally trained models are vulnerable to both targeted and untargeted attack types, whereas defensively distilled models are vulnerable only to targeted attack types. Our proposed models (squeezed models) provide a high degree of robustness to both targeted (TGSM, Targeted BIM, CW) and untargeted (FGSM, BIM) attacks. This success results from the effectiveness of our models in zeroing out the gradients in both scenarios. \begin{table} \centering \caption{Attack success rates on MNIST (Digit) - Part 1 } \label{tab:mnist_results} \scriptsize \begin{tabular}{!{\color{black}\vrule}l|c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}} \cline{2-5} \multicolumn{1}{l!{\color{black}\vrule}}{} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 0.1) & 9.75\% & 2.13\% & 0.12\% & 0.38\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 0.1) & 1.77\% & 1.74\% & 0.04\% & 0.03\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 0.1) & 34.20\% & 2.31\% & 0.05\% & 0.19\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 0.1) & 13.04\% & 9.31\% & 0.03\% & 0.02\% \\ \hline CW ($l_2$, $\epsilon$ : 1.35 conf : $0$) & 80.94\% & 59.99\% & 0.04\% & 0.07\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 1.35 ) & 29.73\% & 21.22\% & 0.06\% & 0.14\% \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Attack success rates on MNIST (Digit) - Part 2} \label{tab:mnist_results2} \scriptsize \begin{tabular}{!{\color{black}\vrule}l|c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}c!{\color{black}\vrule}} \cline{2-5} \multicolumn{1}{l!{\color{black}\vrule}}{} &
\multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 0.2) & 31.09\% & 2.23\% & 0.38\% & 0.12\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 0.2) & 9.86\% & 8.33\% & 0.03\% & 0.04\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 0.2) & 98.19\% & 2.71\% & 0.23\% & 0.08\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 0.2) & 90.05\% & 77.77\% & 0.03\% & 0.04\% \\ \hline CW ($l_2$, $\epsilon$ : 2.70 conf : $0$) & 100\% & 99.96\% & 0.11\% & 0.11\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 2.70 ) & 97.69\% & 87.41\% & 0.17\% & 0.06\% \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Attack success rates on CIFAR10 - Part 1 } \label{tab:cifar_results} \scriptsize \begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\\\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 3/255) & 72.38\% & 13.88\% & 4.07\% & 1.64\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 3/255) & 22.84\% & 21.36\% & 0.56\% & 0.23\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 3/255) & 93.53\% & 15.01\% & 2.61\% & 0.95\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 3/255) & 57.36\% & 57.22\% & 0.46\% & 0.22\% \\ \hline CW ($l_2$, $\epsilon$ :0.798) & 100.00\% & 100.00\% & 2.93\% & 1.36\% \\ \hline Deepfool ($l_2$, $\epsilon$ : 0.798) & 99.98\% & 99.76\% & 2.22\% & 0.87\% \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Attack success rates on CIFAR10 - Part 2 } \label{tab:cifar_results2} \scriptsize
\begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\\\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Our Model~\\(Tanh)\end{tabular} \\ \hline FGSM ($l_\infty$, $\epsilon$ : 6/255) & 80.58\% & 13.85\% & 4.04\% & 1.7\% \\ \hline TGSM ($l_\infty$, $\epsilon$ : 6/255) & 25.41\% & 25.58\% & 0.64\% & 0.3\% \\ \hline BIM ($l_\infty$, $\epsilon$ : 6/255) & 96.75\% & 15.11\% & 3.07\% & 1.14\% \\ \hline Targeted-BIM ($l_\infty$, $\epsilon$ : 6/255) & 68.16\% & 73.07\% & 0.6\% & 0.29\% \\ \hline CW ($l_2$, $\epsilon$ :1.596) & 100\% & 100\% & 3.08\% & 1.75\% \\ \hline Deepfool ($l_2$, $\epsilon$ :1.596 ) & 99.98\% & 100\% & 2.17\% & 0.89\% \\ \hline \end{tabular} \end{table} Another point worth mentioning about our experimental results is that, in addition to gradient-based attacks, our proposed models exhibit excellent performance against the Deepfool attack as well. Generally, the reason behind the success of the Deepfool attack against standard DNN-based classifiers is the linear nature of these models, as argued by Goodfellow et al. \cite{goodfellow2015explaining}; the authors of the Deepfool paper formalized their method based on this assumption \cite{moosavidezfooli2016deepfool}. However, since we introduce additional non-linearity to the standard DNN classifiers at the output layer, the Deepfool attack algorithm largely fails to craft adversarial samples, in contrast to its success against normally trained or defensively distilled models. \section{Conclusion} In this study, we first showed that existing DNN-based classifiers are vulnerable to gradient-based white-box attacks, and that even if the model owner uses a defensively distilled model, the attacker still has a chance to craft successful targeted attacks.
We then proposed a modification to the standard DNN-based classifiers which helps to mask the gradients of the model and prevents the attacker from exploiting them to craft both targeted and untargeted adversarial samples. We empirically verified the effectiveness of our approach on standard datasets which are heavily used by the adversarial ML community. Finally, we demonstrated that our proposed model variants have inherent resistance to the Deepfool attack thanks to the increased non-linearity at the output layer. In this study, we focused on securing DNN-based classifiers against evasion attacks. However, it has been shown that previous defense approaches to adversarial robustness suffer from privacy preservation issues \cite{shokri2019}. In the future, we plan to evaluate our proposed models against privacy related attack strategies, specifically membership inference attacks. \section{Appendix - A} \begin{table}[!htbp] \centering \caption{Model Architectures used in our experiments} \label{tab:cnn_model_arch_digit} \scriptsize \begin{tabular}{|c||c|c|} \hline \textbf{Dataset} & \textbf{Layer Type} & \textbf{Layer Information}\\ \hline \hline \multirow{10}{*}{MNIST - Digit} & Convolution (padding:1) + ReLU & $3 \times 3 \times 32$ \\ & Convolution (padding:1) + ReLU & $3 \times 3 \times 32$ \\ & Max Pooling & $2 \times 2$ \\ & Convolution (padding:1) + ReLU & $3 \times 3 \times 64$ \\ & Convolution (padding:1) + ReLU & $3 \times 3 \times 64$ \\ & Max Pooling & $2 \times 2$ \\ & Fully Connected + ReLU & $3136 \times 200$ \\ & Dropout & p : 0.5 \\ & Fully Connected + ReLU & $200 \times 200$ \\ & Dropout & p : 0.5 \\ & Fully Connected & $200 \times 10$ \\ \hline \hline \multirow{14}{*}{CIFAR10} & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 32$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 64$ \\ & Max Pooling (Stride 2) & $2 \times 2$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 128$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 128$
\\ & Max Pooling (Stride 2) & $2 \times 2$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 256$ \\ & Convolution (Padding = 1) + ReLU & $3 \times 3 \times 256$ \\ & Dropout & p : 0.5 \\ & Max Pooling (Stride 2) & $2 \times 2$ \\ & Fully Connected + ReLU & $4096 \times 1024$ \\ & Dropout & p : 0.5 \\ & Fully Connected + ReLU & $1024 \times 256$ \\ & Dropout & p : 0.5 \\ & Fully Connected & $256 \times 10$ \\ \hline \end{tabular} \end{table} Note: The common softmax layers are omitted for simplicity. For our proposed methods, we have applied Sigmoid and Tanh activation layers just after the final fully connected layers. The model architectures are available in the shared Github repository. \begin{table*}[!htbp] \centering \caption{CNN model parameters} \label{tab:cnn_model_params} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\\\end{tabular}}} & \multicolumn{4}{c|}{MNIST (Digit)} & \multicolumn{4}{c|}{CIFAR-10} \\ \cline{2-9} \multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}Normal~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled~\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours\\(Tanh)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Normal\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distilled\\Model\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours\\(Sigmoid)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ours \\(Tanh)\end{tabular} \\ \hline Opt. & Adam & Adam & Adam & Adam & Adam & Adam & Adam & Adam \\ \hline LR & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\ \hline Batch S. & 128 & 128 & 128 & 128 & 128 & 128 & 128 & 128 \\ \hline Dropout & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.05 & 0.25 \\ \hline Epochs & 20 & 20 & 20 & 20 & 50 & 50 & 50 & 50 \\ \hline Temp. 
& 1 & 20 & 20 & 20 & 1 & 50 & 50 & 50 \\ \hline \end{tabular} \end{table*} \begin{table}[!htbp] \centering \caption{Parameters that are used in BIM and Targeted BIM attacks: $\alpha$ denotes the step size and $i$ denotes \# of steps for a perturbation budget $\epsilon$} \label{tab:iterative_settings} \begin{tabular}{c|c|c} \hline \textbf{Dataset} & \textbf{Parameters} & \textbf{$l_p$ norm}\\ \hline \hline MNIST Digit & $\epsilon$ = 0.1 \& 0.2, $\alpha$ = $\epsilon$ $\cdot$ 0.1, i = 20 & $l_\infty$\\ CIFAR10 & $\epsilon$ = 3/255 \& 6/255, $\alpha$ = $\epsilon$ $\cdot$ 0.1, i = 20 & $l_\infty$ \\ \hline \end{tabular} \end{table} \section{Appendix - B} The gradient derivation of the cross-entropy loss coupled with the softmax activation function is described in this part. This derivation was first detailed in \cite{Campbell_onthe}. We use the derivation explained by Katzir et al. in \cite{katzir2019blocking} as it is. Softmax Function Gradient Derivation: Let $K$ represent the number of classes in the training data, let $y=(y_0,y_1,\ldots,y_{K-1})$ denote the one-hot encoded label information, and let $z_i$ denote the $i^{th}$ component of the logits layer output given some network input $x$.
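The closed-form gradient derived below, $\frac{\partial L}{\partial z_i} = P_i - y_i$, can be sanity-checked numerically against a finite-difference approximation (a sketch of ours, assuming numpy):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(z, y):
    return -np.sum(y * np.log(softmax(z)))

rng = np.random.default_rng(0)
K = 10
z = rng.normal(size=K)
y = np.zeros(K); y[2] = 1.0   # one-hot label for class 2

analytic = softmax(z) - y      # the closed-form gradient P - y

# Central finite differences over each logit.
h = 1e-6
numeric = np.zeros(K)
for i in range(K):
    zp, zm = z.copy(), z.copy()
    zp[i] += h; zm[i] -= h
    numeric[i] = (cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * h)

max_err = np.max(np.abs(analytic - numeric))
print(max_err)  # agreement up to finite-difference error
```

The two gradients agree to within the finite-difference error, confirming the identity derived in this appendix.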
The probability estimate of the $i^{th}$ class associated with the input by the softmax function is: \begin{equation} P_i = \frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \end{equation} $P_i$'s derivative with respect to $z_j$ can then be calculated as below: \begin{equation} \frac{\partial P_i}{\partial z_j} = \frac{\partial \left(\frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \right)}{\partial z_j} \end{equation} In the case of $i=j$, we get: \begin{equation} \begin{split} \frac{\partial P_i}{\partial z_j} & = \frac{\partial \left(\frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \right)}{\partial z_j} = \frac{e^{z_i}\sum_{k=0}^{K-1}e^{z_k}-e^{z_i}e^{z_j}}{\left(\sum_{k=0}^{K-1} e^{z_k} \right)^2} \\ & = \frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \cdot \frac{(\sum_{k=0}^{K-1}e^{z_k}) - e^{z_j}}{\sum_{k=0}^{K-1}e^{z_k}} = P_i(1-P_j) \end{split} \end{equation} Likewise, when $i \neq j$, we get: \begin{equation} \begin{split} \frac{\partial P_i}{\partial z_j} & = \frac{\partial(\frac{e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}})}{\partial z_j} = \frac{0-e^{z_i}e^{z_j}}{(\sum_{k=0}^{K-1}e^{z_k})^2} \\ & = \frac{-e^{z_i}}{\sum_{k=0}^{K-1}e^{z_k}} \cdot \frac{e^{z_j}}{\sum_{k=0}^{K-1}e^{z_k}} = -P_iP_j \end{split} \end{equation} When we combine the two previous results, we get: \begin{equation} \label{eq:different} \begin{split} \frac{\partial P_i}{\partial z_j} & = \begin{cases} P_i(1-P_j), & \text{if $i=j$}\\ -P_iP_j, & i \neq j \end{cases} \end{split} \end{equation} The cross-entropy loss $L$ for any input $x$ is formulated as: \begin{equation} L = -\sum_{i=0}^{K-1}y_i \cdot \log(P_i) \end{equation} Assuming $\log$ denotes the natural logarithm ($\ln$) for simplicity, we may formulate the gradient of the cross-entropy loss with respect to the $i^{th}$ logit as below: \begin{equation} \begin{split} \frac{\partial L}{\partial z_i} & = \frac{\partial (-\sum_{k=0}^{K-1}y_k \cdot \log(P_k))}{\partial z_i} \\ & = -\sum_{k=0}^{K-1}y_k \cdot \frac{\partial \log(P_k)}{\partial z_i} = -\sum_{k=0}^{K-1}y_k \cdot
\frac{\partial \log(P_k)}{\partial P_k} \cdot \frac{\partial P_k}{\partial z_i} \\ & = -\sum_{k=0}^{K-1}y_k \cdot \frac{1}{P_k} \cdot \frac{\partial P_k}{\partial z_i} \end{split} \end{equation} Combining Cross-Entropy and Softmax Function Derivatives: Knowing from Eq. \ref{eq:different} that the softmax derivative equation for the case when $i = j$ differs from the other cases, we rearrange the loss derivative equation slightly to differentiate this case from the others: \begin{equation} \begin{split} \frac{\partial L}{\partial z_i} & = -\sum_{k=0}^{K-1}y_k \cdot \frac{1}{P_k} \cdot \frac{\partial P_k}{\partial z_i} \\ & = -y_i \cdot \frac{1}{P_i} \cdot \frac{\partial P_i}{\partial z_i}-\sum_{k \neq i}y_k \cdot \frac{1}{P_k} \cdot \frac{\partial P_k}{\partial z_i} \end{split} \end{equation} We can now apply the derivative of softmax we derived before to obtain: \begin{equation} \begin{split} -y_i \cdot \frac{P_i(1-P_i)}{P_i} & - \sum_{k \neq i}\frac{y_k \cdot (-P_kP_i)}{P_k} = -y_i + y_iP_i + \sum_{k \neq i} y_kP_i \\ & = P_i\left(y_i+\sum_{k \neq i}y_k\right)-y_i \end{split} \end{equation} Because $y$ is the one-hot encoded actual label vector, we know that: \begin{equation} y_i + \sum_{k \neq i} y_k = \sum_{k=0}^{K-1}y_k=1 \end{equation} and therefore we finally end up with the expression below as the derivative of the loss with respect to any logit: \begin{equation} \frac{\partial L}{\partial z_i} = P_i \left( y_i + \sum_{k \neq i}y_k \right) -y_i = P_i - y_i \end{equation} \bibliographystyle{IEEEtran}
\section{Introduction} Algebraic Manipulation Detection (AMD) codes \cite{AMD} protect messages against additive adversarial tampering, assuming the codeword cannot be ``seen'' by the adversary. In AMD codes, a message is encoded to a codeword that is an element of a publicly known group $\cal G$. The codeword is stored in a private storage which is perfectly opaque to the adversary. The adversary, however, can add an {\em arbitrary} element of $\cal G$ to the storage to make the decoder output a different message. A $\delta$-secure AMD code guarantees that any such manipulation succeeds with probability at most $\delta$. Security of AMD codes has been defined for ``weak'' and ``strong'' codes: weak codes provide security assuming the message distribution is uniform, while strong codes guarantee security for any message distribution. Weak AMD codes are primarily deterministic codes and security relies on the randomness of the message space. Strong AMD codes are randomized codes and provide security for any message. AMD codes have wide applications as a building block of cryptographic primitives such as robust information dispersal \cite{AMD} and anonymous message transmission \cite{AMD}, and have been used to provide a generic construction for robust secret sharing schemes from linear secret sharing schemes \cite{AMD}. AMD codes with leakage were first considered in \cite{LLR-AMD}, where the leakage was defined for specific parts of the encoding process.
An {\em $\alpha$-weak AMD code with linear leakage}, also called an $\alpha$-weak LLR-AMD code, is a deterministic code that guarantees security when part of the message is leaked but the min-entropy of the message space is at least a $1-\alpha$ fraction of the message length (in bits). An {\em $\alpha$-strong LLR-AMD code} is a randomized code that guarantees security when the randomness of the encoding, although partially leaked, has min-entropy at least $(1-\alpha)\log |\mathcal{R}|$, where $\mathcal{R}$ is the randomness set of the encoding. In this paper we consider leakage from the storage that holds the codeword. This effectively relaxes the original model of AMD codes, which required the codeword to be perfectly private. As we will show, this model turns out to be more challenging than the LLR-AMD models, where the leakage is confined to a more restricted part of the encoding process. A more detailed relation between our model and the LLR-AMD models is given in Section \ref{sec: LV-AMD}. \subsection*{Our work} We define $\rho$-Algebraic Manipulation Detection ($\rho$-AMD) codes as an extension of AMD codes in which the storage that holds the codeword (an element of $\mathcal{G}$) leaks up to $\rho\log|\mathcal{G}|$ bits of information about the codeword. We assume the adversary can apply an arbitrary function to the storage and receive up to $\rho\log|\mathcal{G}|$ bits of information about the codeword. Similar to the original AMD codes, we define weak and strong $\rho$-AMD codes as deterministic and randomized codes that guarantee security for a uniformly distributed message and any message, respectively.
Efficiency of $\rho$-AMD codes is defined concretely (similar to \cite{AMD}) and asymptotically (using the {\em rate of the code family}, which is the asymptotic ratio of the message length to the codeword length as the message length approaches infinity). We prove concrete bounds for both strong and weak $\rho$-AMD codes and a non-trivial upper bound of $1-\rho$ on the rate of strong $\rho$-AMD codes. A comparison of bounds for the different models of AMD codes is summarized in Table \ref{tb: bounds}. \begin{table} \begin{center} \begin{tabular}{@{} |c|c|c|@{}} \hline codes & concrete bound & rate bound \\ \hline strong AMD & $G\geq\frac{M-1}{\delta^2}+1$ & $1$ \\ {\bf strong $\rho$-AMD } & $\mathbf{G^{1-\rho}\geq\frac{M-1}{\delta^2}+1}$ & $\mathbf{1-\rho}$\\ $\alpha$-strong LLR-AMD & $G\geq\frac{(M-1)(1-e^{-1})}{\delta^{\frac{2}{1-\alpha}}}+1$ & $1$\\ \hline weak AMD & $G\geq\frac{M-1}{\delta}+1$ & $1$\\ {\bf weak $\rho$-AMD } &$\mathbf{G\geq\frac{M-1}{\delta}+1}$ {\bf and} $\mathbf{M\geq\frac{G^\rho}{\delta}}$ & $\mathbf{1}$\\ $\alpha$-weak LLR-AMD & $G\geq\frac{(M-1)(1-e^{-1})}{\delta^{\frac{1}{1-\alpha}}}+1$ and $G\geq\frac{M^\alpha (M-1)(1-e^{-1})}{\delta}+1$ & $\frac{1}{1+\alpha}$ \\ \hline \end{tabular} \caption{\label{tb: bounds}$G$ denotes the size of the group $\mathcal{G}$ that codewords live in and $M$ denotes the size of the message set $\mathcal{M}$. $\delta$ is the security parameter.} \end{center} \end{table} For construction, we use the relationship between $\rho$-AMD codes and LLR-AMD codes to construct (non-optimal) $\rho$-AMD codes, and leave the construction of rate-optimal
$\rho$-AMD codes as an interesting open problem. We, however, define a special type of leakage in which the leakage is specified by the number of codeword components that the adversary can select for eavesdropping. The model is called {\em limited-view $\rho$-AMD ($\rho^{LV}$-AMD)}. The $\rho^{LV}$-AMD adversary is allowed to select a fraction $\rho$ of the {\em codeword components}, and select their tampering (offset) vector after seeing the values of the chosen components. This definition of limited-view adversary was first used in \cite{ISIT-LV}, where the writing power of the adversary was also parametrized. We give an explicit construction of strong $\rho^{LV}$-AMD codes that achieve rate $1-\rho$, using an AMD code and a wiretap II code as building blocks. We note that this rate is achievable for large constant-size alphabets, if we allow a seeded encoder involving a universal hash family (see \cite{eAWTP}). That is, the alphabet size depends on the closeness to the actual capacity value. We do not know whether $1-\rho$ is the capacity of strong $\rho^{LV}$-AMD codes; finding this capacity remains an open question, as the type of leakage (component-wise) is more restricted than for strong $\rho$-AMD codes. We also construct a family of weak $\rho^{LV}$-AMD codes that achieve rate $1$ for any leakage parameter $\rho$. We consider two applications. The first application can be seen as parallel to the application of the original AMD codes to robust secret sharing schemes.
The second application is a new variation of the active adversary wiretap channel II. \noindent {\bf Robust ramp secret sharing scheme.} A $(t,r,N)$-ramp secret sharing scheme \cite{ramp SSS,new ramp} is a secret sharing scheme with two thresholds, $t$ and $r$, such that any $t$ or fewer shares do not leak any information about the secret, any $r$ or more shares reconstruct the secret, and if the number $a$ of shares is between $t$ and $r$, an $\frac{a-t}{r-t}$ fraction of the information of the secret is leaked. We define a \textit{robust ramp secret sharing scheme} as a ramp secret sharing scheme with an additional ($\rho,\delta$)-robustness property, which requires that the probability of reconstructing a wrong secret, if up to $t+\lfloor\rho(r-t)\rfloor$ shares are controlled by an active adversary, is bounded by $\delta$. Here $\rho$ is a constant. We will show that a $(t,r,N,\rho,\delta)$-robust secret sharing scheme can be constructed from a linear $(t,r,N)$-ramp secret sharing scheme, by first encoding the message using a $\rho$-AMD code with security parameter $\delta$, and then using the linear ramp secret sharing scheme to generate shares. \noindent {\bf Wiretap II with an algebraic manipulation adversary.} The wiretap model of communication was proposed by Wyner \cite{WT}. In the wiretap II setting \cite{WtII}, the goal is to provide secrecy against a passive adversary who can adaptively select a fraction $\rho$ of the transmitted codeword components to eavesdrop. We consider active wiretap II adversaries that, in addition to eavesdropping the channel, algebraically manipulate the communication by adding a noise (offset) vector to the sent codeword. The code must protect against eavesdropping and also detect tampering. An algebraic manipulation wiretap II code is a wiretap II code with security against an eavesdropping adversary, and so the rate upper bound for wiretap II codes applies. Our construction of $\rho^{LV}$-AMD codes gives a family of algebraic manipulation wiretap II codes which achieve this rate upper bound, and so the construction is capacity-achieving. The result effectively shows that algebraic manipulation detection in this case can be achieved for ``free'' (without rate loss), asymptotically. Table \ref{tb: constructions and applications} summarizes the code constructions and applications. \begin{table} \begin{center} \begin{tabular}{@{} |c|c|c|@{}} \hline codes constructed & asymptotic rate & applications \\ \hline {\bf strong $\rho$-AMD } & {\bf N.A. }& {\bf $(\rho,\delta)$-robust ramp secret sharing}\\ {\bf strong $\rho^{LV}$-AMD } & $\mathbf{1-\rho}$ & {\bf $(\rho,0,\delta)$-algebraic adversary wiretap II}\\ \hline {\bf weak $\rho$-AMD } & {\bf N.A. } & {\bf N.A. }\\ {\bf weak $\rho^{LV}$-AMD } & $\mathbf{1}$ & {\bf N.A. } \\ \hline \end{tabular} \caption{\label{tb: constructions and applications}Summary of codes constructed in this paper and their applications.} \end{center} \end{table} \subsection*{Related works} AMD codes were proposed in \cite{AMD} and have found numerous applications. A work directly comparable to ours is \cite{LLR-AMD}, where LLR-AMD codes with different leakage models for weak and strong codes are introduced. Our model uses a single leakage model for both weak and strong codes and is a natural generalization of the original AMD codes. The relation between our model and LLR-AMD codes is given in Section \ref{sec: LV-AMD}. More generally, there is a large body of work on modelling leakage and designing leakage-resilient systems. A survey can be found in \cite{LRcrypto}. Ramp secret sharing schemes (ramp SSS) were introduced in \cite{ramp SSS}. Robust secret sharing schemes (robust SSS) are well studied (see for example \cite{AMD}). To our knowledge, robust ramp secret sharing schemes (robust ramp SSS) have not been considered before. In a robust SSS, robustness is defined only when the number of compromised players is below the privacy threshold of the underlying SSS.
Our definition of robust ramp SSS guarantees robustness even when the number of compromised players exceeds the privacy threshold. The wiretap II model with an active adversary was first studied in \cite{Lai Lifeng}, where the eavesdropped components and the tampered components are restricted to be the same set. A general model of wiretap II adversaries with additive manipulation was defined in \cite{AWTP}. In this model (called adversarial wiretap, or AWTP) the adversary can read a fraction $\rho_r$, and add noise to a fraction $\rho_w$, of the codeword components. The goal of the encoding scheme is to provide secrecy and guarantee reliability (message recovery) against this adversary. A variation of AWTP called eAWTP is studied in \cite{eAWTP}, where erasure of codeword components, instead of additive tampering, is considered. Interestingly, both AWTP and eAWTP have the same capacity $1-\rho_r-\rho_w$. The alphabet sizes of known capacity-achieving codes are $\mathcal{O}({\frac{1}{\xi^4}}^{\frac{1}{\xi^2}})$ for AWTP codes and $\mathcal{O}(2^{\frac{1}{\xi^2}})$ for eAWTP codes, respectively, where $\xi$ is the difference between the actual rate and the capacity \cite{eAWTP}. The adversary of the algebraic manipulation wiretap II codes defined in this paper can be seen as an AWTP adversary with $\rho_r =\rho$ and $\rho_w=1$, yielding $1-\rho_r-\rho_w<0$; in this case recovering the message is impossible. Our results on algebraic manipulation wiretap II show that a weaker goal against active attack, namely {\em to detect} manipulation of the message, is achievable, and can be achieved with capacity $1-\rho$, which is the same as the capacity of wiretap II codes with no security against active attacks.

\bigskip

\noindent {\em Organization:} In Section \ref{sec: intro}, we give notations and introduce AMD codes (with\slash without leakage) and wiretap II codes. In Section \ref{sec: rho-AMD}, we define $\rho$-AMD codes and derive efficiency bounds.
In Section \ref{sec: LV-AMD construction}, we study $\rho^{LV}$-AMD codes and give concrete constructions. In Section \ref{sec: applications}, we give two applications.

\section{Preliminaries} \label{sec: intro}

Calligraphic letters $\mathcal{X}$ denote sets, and the corresponding capital letters denote their cardinalities, $|\mathcal{X}|=X$. Boldface letters $\mathbf{x}$ denote vectors, and $\mathbf{x}_{|S}$ denotes the sub-vector of $\mathbf{x}$ consisting of the components specified by the index set $S$. $[n]$ denotes $\{1,2,\cdots,n\}$. Capital boldface letters $\mathbf{X}$ denote random variables; $\mathbf{X}\leftarrow\mathcal{X}$ denotes sampling of the random variable $\mathbf{X}$ from the set $\mathcal{X}$, with $\mathbf{X}\stackrel{\$}{\leftarrow}\mathcal{X}$ denoting uniform sampling. The statistical distance between $\mathbf{X}$ and $\mathbf{Y}$, both defined over the set $\mathcal{W}$, is defined as $$ \mathsf{SD}(\mathbf{X},\mathbf{Y})\triangleq \dfrac{1}{2}\sum_{\mathbf{w} \in \mathcal{W}}|\mathsf{Pr}[\mathbf{X}=\mathbf{w}]-\mathsf{Pr}[\mathbf{Y}=\mathbf{w}]|. $$ We say $\mathbf{X}$ and $\mathbf{Y}$ are $\delta$-close if $\mathsf{SD}(\mathbf{X},\mathbf{Y})\leq \delta$. The \textit{min-entropy} $\mathsf{H}_\infty(\mathbf{X})$ of a random variable $\mathbf{X}\leftarrow\mathcal{X}$ is $$ \mathsf{H}_\infty(\mathbf{X})=-\log\max_{\mathbf{x} \in \mathcal{X}}\mathsf{Pr}[\mathbf{X}=\mathbf{x}]. $$ The \textit{(average) conditional min-entropy} $\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})$ of $\mathbf{X}$ conditioned on $\mathbf{Z}$ is defined \cite{Dodis Fuzzy} as $$ \tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})=-\log\left( \mathbb{E}_{\mathbf{Z}=\mathbf{z}}\max_{\bf x} \mathsf{Pr}[\mathbf{X} = {\bf x}|\mathbf{Z}=\mathbf{z}]\right). $$ The following bound on the amount of information about one variable that can leak through a correlated variable is proved in \cite{Dodis Fuzzy}.
\begin{lemma}\label{lem: conditional min-entropy}\cite{Dodis Fuzzy}Let $\mathbf{X}\leftarrow\mathcal{X}$ and $\mathbf{Z}\leftarrow\mathcal{Z}$ with $\ell=\log |\mathcal{Z}|$. Then $$ \tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})\geq \mathsf{H}_\infty(\mathbf{X})-\ell. $$ \end{lemma} \begin{definition}\label{def: AMD} An $(M,G,\delta)$-algebraic manipulation detection code, or $(M,G,\delta)$-AMD code for short, is a probabilistic encoding map $\mbox{Enc}: \mathcal{M}\rightarrow \mathcal{G}$ from a set $\mathcal{M}$ of size $M$ to an (additive) group $\mathcal{G}$ of order $G$, together with a deterministic decoding function $\mbox{Dec}: \mathcal{G}\rightarrow \mathcal{M}\bigcup\{\perp\}$ such that $\mbox{Dec}(\mbox{Enc}(\mathbf{m}))=\mathbf{m}$ with probability $1$ for any $\mathbf{m}\in\mathcal{M}$. The security of an AMD code requires that for any $\mathbf{m}\in\mathcal{M}$, $\Delta\in\mathcal{G}$, $\mathsf{Pr}[\mbox{Dec}(\mbox{Enc}(\mathbf{m})+\Delta)\notin\{\mathbf{m},\perp\}]\leq\delta$. \remove{An AMD code is called {\em systematic } if $\mathcal{M}$ is a group, and the encoding is of the form $$ \mbox{Enc}: \mathcal{M}\rightarrow \mathcal{M}\times \mathcal{G}_1\times\mathcal{G}_2, \mathbf{m}\mapsto(\mathbf{m},\mathbf{r},f(\mathbf{r},\mathbf{m})) $$ for some (tag) function $f$ and $\mathbf{r}\stackrel{\$}{\leftarrow}\mathcal{G}_1$. The decoding function of a systematic AMD code is naturally given by $\mbox{Dec}(\mathbf{m}^{'},\mathbf{r}^{'},\sigma^{'})=\mathbf{m}^{'}$ if $\sigma^{'}=f(\mathbf{r}^{'},\mathbf{m}^{'})$ and $\perp$ otherwise.} \end{definition} The AMD code above is said to provide {\em strong security}. {\em Weak AMD} codes provide security for randomly chosen messages. Efficiency of $(M,G,\delta)$-AMD codes is measured by the \textit{effective tag size} which is defined as the minimum tag length $\min\{\log_2 G\}-u$, where the minimum is over all $(M,G,\delta)$-AMD codes with $M\geq2^u$. 
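The bound of Lemma \ref{lem: conditional min-entropy} can be checked numerically on a small example. The sketch below (the toy joint distribution is an arbitrary choice, not taken from this paper) evaluates $\mathsf{H}_\infty(\mathbf{X})$ and $\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})$ directly from their definitions and verifies $\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})\geq \mathsf{H}_\infty(\mathbf{X})-\log|\mathcal{Z}|$.

```python
from math import log2
from itertools import product

# Toy joint distribution p(x, z) on X = {0..3}, Z = {0, 1}; arbitrary values.
X_vals, Z_vals = range(4), range(2)
p = {(x, z): 1 / 8 for x, z in product(X_vals, Z_vals)}  # start uniform
p[(0, 0)] += 1 / 16
p[(3, 1)] -= 1 / 16                                      # skew it slightly

def min_entropy(px):
    """H_inf(X) = -log2 max_x Pr[X = x]."""
    return -log2(max(px.values()))

def avg_cond_min_entropy(pxz):
    """H~_inf(X|Z) = -log2 E_z max_x Pr[X = x | Z = z]
                   = -log2 sum_z max_x Pr[X = x, Z = z]."""
    s = sum(max(pxz[(x, z)] for x in X_vals) for z in Z_vals)
    return -log2(s)

px = {x: sum(p[(x, z)] for z in Z_vals) for x in X_vals}  # marginal of X
h = min_entropy(px)
h_cond = avg_cond_min_entropy(p)
ell = log2(len(Z_vals))  # log |Z|, the maximum possible entropy loss

# Chain-rule bound of the lemma: conditioning on Z costs at most log|Z| bits.
assert h_cond >= h - ell - 1e-12
```

The same check passes for any joint distribution, which is exactly the content of the lemma.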
Concrete lengths are important in practice, and additionally, the asymptotic rate (defined as the limit of the ratio of message length to codeword length as the length grows to infinity) of both weak and strong AMD codes has been shown \cite{AMD} to be $1$. \begin{lemma}\cite{AMD}\label{lem: AMD bounds} Any weak, respectively strong, $(M,G,\delta)$-AMD code satisfies $$ G\geq\frac{M-1}{\delta}+1, \mbox{ respectively, } G\geq\frac{M-1}{\delta^2}+1. $$ \end{lemma} The following construction is optimal with respect to effective tag size. \begin{construction}\label{ex: AMD} \cite{AMD}: Let $\mathbb{F}_q$ be a field of size $q$ and characteristic $p$, and let $d$ be any integer such that $d+2$ is not divisible by $p$. Define the encoding function $$ \mbox{Enc}: \mathbb{F}_q^d\rightarrow \mathbb{F}_q^d\times \mathbb{F}_q\times\mathbb{F}_q, \mathbf{m}\mapsto(\mathbf{m},\mathbf{r},f(\mathbf{r},\mathbf{m})),\mbox{ where } f(\mathbf{r},\mathbf{m})=\mathbf{r}^{d+2}+\sum_{i=1}^dm_i\mathbf{r}^i. $$ The decoder Dec verifies a tagged message $(\mathbf{m},\mathbf{r},t)$ by checking whether $t= f(\mathbf{r},\mathbf{m})$; it outputs $\mathbf{m}$ if they agree and $\perp$ otherwise. (Enc,Dec) gives a $(q^d,q^{d+2},\frac{d+1}{q})$-AMD code.
\end{construction}

\begin{definition}[strong LLR-AMD]\cite{LLR-AMD}\label{def: strong LLR-AMD} A randomized code with encoding function $\mbox{Enc}:\mathcal{M}\times\mathcal{R}\rightarrow \mathcal{X}$ and decoding function $\mbox{Dec}:\mathcal{X}\rightarrow \mathcal{M}\bigcup\{\perp\}$ is an $(M,X,|\mathcal{R}|,\alpha,\delta)$-strong LLR-AMD code if for any $\mathbf{m}\in\mathcal{M}$ and any $\mathbf{r}\in\mathcal{R}$, $\mbox{Dec}(\mbox{Enc}(\mathbf{m},\mathbf{r}))=\mathbf{m}$, and for any adversary $\mathbb{A}$ and variables $\mathbf{R}\stackrel{\$}{\leftarrow}\mathcal{R}$ and $\mathbf{Z}$ such that $\tilde{\mathsf{H}}_\infty(\mathbf{R}|\mathbf{Z})\geq (1-\alpha)\log |\mathcal{R}|$, it holds for any $\mathbf{m}\in\mathcal{M}$: \begin{equation}\label{eq: strong LLR-AMD} \mathsf{Pr}[\mbox{Dec}(\mbox{Enc}(\mathbf{m},\mathbf{R})+\mathbb{A}(\mathbf{Z}))\notin\{\mathbf{m},\perp\}]\leq\delta, \end{equation} where the probability is over the randomness of encoding. \end{definition}

\begin{definition}[weak LLR-AMD]\cite{LLR-AMD}\label{def: weak LLR-AMD} A deterministic code with encoding function $\mbox{Enc}:\mathcal{M}\rightarrow \mathcal{X}$ and decoding function $\mbox{Dec}:\mathcal{X}\rightarrow \mathcal{M}\bigcup\{\perp\}$ is an $(M,X,\alpha,\delta)$-weak LLR-AMD code if for any $\mathbf{m}\in\mathcal{M}$, $\mbox{Dec}(\mbox{Enc}(\mathbf{m}))=\mathbf{m}$, and for any adversary $\mathbb{A}$ and variables $\mathbf{M}\leftarrow\mathcal{M}$ and $\mathbf{Z}$ such that $\tilde{\mathsf{H}}_\infty(\mathbf{M}|\mathbf{Z})\geq (1-\alpha)\log |\mathcal{M}|$, it holds: \begin{equation}\label{eq: weak LLR-AMD} \mathsf{Pr}[\mbox{Dec}(\mbox{Enc}(\mathbf{M})+\mathbb{A}(\mathbf{Z}))\notin\{\mathbf{M},\perp\}]\leq\delta, \end{equation} where the probability is over the randomness of the message.
\end{definition}

In the above two definitions, the leakage is from the randomness (bounded by $\tilde{\mathsf{H}}_\infty(\mathbf{R}|\mathbf{Z})\geq (1-\alpha)\log |\mathcal{R}|$) and from the message space (bounded by $\tilde{\mathsf{H}}_\infty(\mathbf{M}|\mathbf{Z})\geq (1-\alpha)\log |\mathcal{M}|$), respectively.

\subsubsection*{Wiretap II codes.} The wiretap II model \cite{WtII} of secure communication considers a scenario where Alice wants to send messages to Bob over a reliable channel that is eavesdropped by an adversary, Eve. The adversary can read a fraction $\rho$ of the transmitted codeword components, and may choose any subset of that size.
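This eavesdropping model can be illustrated with a toy two-component coset code (a minimal instance of the coset-coding idea, not one of the constructions in this paper; the alphabet size $q=5$ and the trial count are arbitrary choices): encoding $m$ as $(r, m+r)$ with uniform $r$ makes each individual component uniform, so reading any one component (a fraction $\rho=1/2$) reveals nothing about the message.

```python
import random
from collections import Counter

random.seed(0)
q = 5            # toy alphabet: integers mod 5 (arbitrary choice)
TRIALS = 50_000

def encode(m):
    """Toy 2-component coset code over Z_q: c = (r, m + r), r uniform."""
    r = random.randrange(q)
    return (r, (m + r) % q)

def decode(c):
    r, s = c
    return (s - r) % q

# Correctness: decoding always recovers the message.
assert all(decode(encode(m)) == m for m in range(q))

# Eavesdropper reads one component (rho = 1/2): its view is close to
# uniform, and hence statistically independent of the message.
for pos in (0, 1):
    for m in (0, 3):
        views = Counter(encode(m)[pos] for _ in range(TRIALS))
        assert all(abs(views[v] / TRIALS - 1 / q) < 0.02 for v in range(q))
```

Reading \emph{both} components of course determines $m$; the definitions below make the threshold $\rho$ and the distance-to-uniform parameter $\varepsilon$ precise.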
A wiretap II code provides information-theoretic secrecy for message transmission against this adversary.

\begin{definition}\label{def: WtII} A $(\rho,\varepsilon)$ wiretap II code, or $(\rho,\varepsilon)$-WtII code for short, is a probabilistic encoding function $\mbox{Enc}: \mathbb{F}_q^k\rightarrow \mathbb{F}_q^n$, together with a deterministic decoding function $\mbox{Dec}: \mathbb{F}_q^n\rightarrow \mathbb{F}_q^k$ such that $\mbox{Dec}(\mbox{Enc}(\mathbf{m}))=\mathbf{m}$ for any $\mathbf{m}\in\mathbb{F}_q^k$. The security of a $(\rho,\varepsilon)$-WtII code requires that for any $\mathbf{m}_0,\mathbf{m}_1\in\mathbb{F}_q^k$ and any $S\subset [n]$ of size $|S|\leq n\rho$, \begin{equation}\label{eq: WtII security} \mathsf{SD}(\mbox{Enc}(\mathbf{m}_0)_{|S};\mbox{Enc}(\mathbf{m}_1)_{|S})\leq \varepsilon. \end{equation} A rate $R$ is achievable if there exists a family of $(\rho,\varepsilon)$-WtII codes with encoding and decoding functions $\{\mbox{Enc}_n,\mbox{Dec}_n\}$ such that $\lim_{n\rightarrow\infty}\frac{k}{n}=R$. \end{definition}

The above definition of security is in line with \cite{AWTP}; it is stronger than the original definition \cite{WtII} and than the definition in \cite{invertible extractors}.

\begin{lemma} \cite{AWTP}\label{lem: WtII upper bound} The achievable rate of $(\rho,0)$-WtII codes is upper bounded by $1-\rho$. \end{lemma}

When $\varepsilon=0$ is achieved in (\ref{eq: WtII security}), the distribution of any $\rho$ fraction of the codeword components is independent of the message.
This is achieved, for example, by the following construction of wiretap II codes. \begin{construction}\label{ex: WtII}\cite{WtII} Let $G_{(n-k)\times n}$ be a generator matrix of an $[n,n-k]$ MDS code over $\mathbb{F}_q$. Append $k$ rows $\tilde{G}$ to $G$ such that the obtained matrix $\left [\begin{array}{c} G\\\tilde{G}\end{array} \right ]$ is non-singular. Define the encoder WtIIenc as follows. $$ \mbox{WtIIenc}(\mathbf{m})=[\mathbf{r},\mathbf{m}]\left [\begin{array}{c} G\\\tilde{G}\end{array} \right ],\mbox{ where }\mathbf{r}\stackrel{\$}{\leftarrow}\mathbb{F}_q^{n-k}. $$ WtIIdec uses a parity-check matrix $H_{k\times n}$ of the MDS code to first compute the syndrome $H\mathbf{x}^T$, and then maps the syndrome back to the message using the one-to-one correspondence between syndromes and messages. The above construction gives a family of $(\rho,0)$-WtII codes for $\rho=\frac{n-k}{n}$. \end{construction}

\section{AMD codes for leaky storage}\label{sec: rho-AMD}

We consider codes over a finite field $\mathbb{F}_q$, where $q$ is a prime power, and assume that the message set is $\mathcal{M}=\mathbb{F}_q^k$ and that the storage holds an element of the group $\mathcal{G}=\mathbb{F}_q^n$.

\subsection{Definition of $\rho$-AMD }\label{sec: LV-AMD}

\begin{definition} An $(n,k)$-coding scheme consists of two functions: a randomized {\em encoding function} $\mbox{Enc}:\mathbb{F}_q^k\rightarrow\mathbb{F}_q^n$, and a deterministic {\em decoding function} $\mbox{Dec}:\mathbb{F}_q^n\rightarrow\mathbb{F}_q^k\cup\{\perp\}$, satisfying $\mathsf{Pr}[\mbox{Dec}(\mbox{Enc}(\mathbf{m}))=\mathbf{m}]=1$ for any $\mathbf{m}\in\mathbb{F}_q^k$. Here the probability is taken over the randomness of the encoding algorithm. The {\em information rate} of an $(n,k)$-coding scheme is $\frac{k}{n}$. \end{definition}

We now define our leakage model and codes that detect manipulation in the presence of this leakage.
Let $\mathbf{X}=\mbox{Enc}(\mathbf{m})$ for a message $\mathbf{m}\in {\cal M}$, and let $\mathbb{A}_\mathbf{Z}$ denote an adversary with access to a variable $\mathbf{Z}$, representing the leaked information about the codeword.

\begin{definition}[$\rho$-AMD ]\label{def: AMD with leakage} An $(n,k)$-coding scheme is called a {\em strong $\rho$-AMD code} with security parameter $\delta$ if $\mathsf{Pr}[\mbox{Dec}(\mathbb{A}_\mathbf{Z}(\mbox{Enc}(\mathbf{m})))\notin\{\mathbf{m},\perp\}]\leq\delta$ for any message $\mathbf{m}\in\mathbb{F}_q^k$ and any adversary $\mathbb{A}_\mathbf{Z}$ whose leakage variable $\mathbf{Z}$ satisfies $\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})\geq \mathsf{H}_\infty(\mathbf{X})-\rho n\log q$, and who may choose any offset vector in $\mathbb{F}_q^n$ to add to the codeword.

\noindent The code is called a {\em weak $\rho$-AMD code} if security holds for $\mathbf{M}\stackrel{\$}{\leftarrow}\mathbb{F}_q^k$ (rather than for an arbitrary message distribution). The encoder in this case is deterministic, and the probability of outputting a different message is over the randomness of the message.

\noindent A {\em family $\{(\mbox{Enc}_n,\mbox{Dec}_n)\}$ of $\rho$-AMD codes is a set of $(n,k(n))$-coding schemes} indexed by the codeword length $n$, where for any value of $\delta$, there is an $N\in \mathbb{N}$ such that for all $n\geq N$, $(\mbox{Enc}_n,\mbox{Dec}_n)$ is a $\rho$-AMD code with security parameter $\delta$.

\noindent A {\em rate $R$ is achievable} if there exists a family $\{(\mbox{Enc}_n,\mbox{Dec}_n)\}$ of $\rho$-AMD codes such that $\lim_{n\rightarrow\infty}\frac{k(n)}{n}=R$ as $\delta$ approaches $0$. \end{definition}

Our definition bounds the amount of leakage in comparison with an adversary who observes up to $\rho n$ components of the stored codeword. We call this latter adversary a {\em Limited-View (LV) adversary} \cite{ISIT-LV}. According to Lemma \ref{lem: conditional min-entropy}, the min-entropy of the stored codeword given the view of an LV adversary satisfies $\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})\geq \mathsf{H}_\infty(\mathbf{X})-\rho n\log q$. We require that the same min-entropy be left in the codeword for an arbitrary leakage variable ${\bf Z}$ accessible to the adversary.

\begin{figure} \centerline{\includegraphics[scale=0.45]{comparison.jpg}} \caption{\label{fig: comparison}The arrow shows the part of the system that leaks.} \end{figure}

Fig. \ref{fig: comparison} shows the places of leakage in AMD encoding in our model and in the models of Definition \ref{def: strong LLR-AMD} and Definition \ref{def: weak LLR-AMD}.

\begin{proposition}\label{prop1} {Let $\mathbf{X}$ denote a random variable representing the codeword of a message $\mathbf{m}$ ($\mathbf{M}$ for weak codes), and let $\mathbf{Z}$ denote the leakage variable of the adversary $\mathbb{A}_\mathbf{Z}$, who uses the leaked information to construct the best offset vector to make the decoder output a different message. For a $\rho$-AMD code with security parameter $\delta$, we have $\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})\geq\log\frac{1}{\delta}$.
} \end{proposition}

\begin{proof} We write the proof for strong $\rho$-AMD codes; the proof for weak $\rho$-AMD codes follows similarly. According to the security definition of $\rho$-AMD codes, we have $$ \mbox{Pr}[\mbox{Dec}(\mathbb{A}_\mathbf{Z}(\mathbf{X}))\notin\{\mathbf{m},\perp\}]\leq\delta, $$ where the probability is over the randomness of $\mathbf{X}$ and the expectation is over $\mathbf{z}\in\mathcal{Z}$. If the adversary with leakage $\mathbf{Z}=\mathbf{z}$ can correctly guess the value $\mathbf{x}$ of $\mathbf{X}$, then a codeword $\mathbf{x}'$ corresponding to another message $\mathbf{m}'$ can be constructed to cause the decoder to output $\mathbf{m}'$, by using $\mathbb{A}_\mathbf{z}(\mathbf{X})=\mathbf{X}+(\mathbf{x}'-\mathbf{x})$. We then have $$ \mbox{Pr}[\mbox{Dec}(\mathbb{A}_\mathbf{Z}(\mathbf{X}))\notin\{\mathbf{m},\perp\}|\mathbf{Z}=\mathbf{z}]\geq\max_{\mathbf{x}}\mbox{Pr}[\mathbf{X}=\mathbf{x}|\mathbf{Z}=\mathbf{z}], $$ which, by taking the expectation over $\mathbf{z}\in\mathcal{Z}$, yields $$ \mathbb{E}_\mathbf{z}\left(\mbox{Pr}[\mbox{Dec}(\mathbb{A}_\mathbf{Z}(\mathbf{X}))\notin\{\mathbf{m},\perp\}|\mathbf{Z}=\mathbf{z}]\right)\geq\mathbb{E}_\mathbf{z}\left(\max_{\mathbf{x}}\mbox{Pr}[\mathbf{X}=\mathbf{x}|\mathbf{Z}=\mathbf{z}]\right)=2^{-\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})}. $$ The last equality follows from the definition of conditional min-entropy. The desired inequality then follows directly from the security definition of $\rho$-AMD codes: $$ 2^{-\tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})}\leq\mathbb{E}_\mathbf{z}\left(\mbox{Pr}[\mbox{Dec}(\mathbb{A}_\mathbf{Z}(\mathbf{X}))\notin\{\mathbf{m},\perp\}|\mathbf{Z}=\mathbf{z}]\right)\leq \delta\Longleftrightarrow \tilde{\mathsf{H}}_\infty(\mathbf{X}|\mathbf{Z})\geq\log\frac{1}{\delta}. $$ \qed \end{proof}

\begin{definition} Let $\cal C$ denote the set of codewords of a code, and let {${\cal C}_\mathbf{m}$ denote the set of codewords corresponding to the message $\mathbf{m}$, i.e.
${\cal C}_\mathbf{m} = \{ \mbox{Enc}(\mathbf{m},\mathbf{r})\ |\ \mathbf{r}\in {\cal R}\} $}. A randomized encoder is called \textit{regular} if $|{\cal C}_\mathbf{m}| = |{\cal R}|$ for all $\mathbf{m}$. \end{definition}

We note that because the code has zero decoding error when there is no adversarial corruption, we have \begin{equation}\label{eq_le_10} {\cal C}_\mathbf{m} \cap {\cal C}_{\mathbf{m}'} =\emptyset,\; \forall \mathbf{m}\neq\mathbf{m}' \in {\cal M}. \end{equation} This means that for regular randomized encoders, a codeword uniquely determines a pair $(\mathbf{m},\mathbf{r})$. Assuming that the randomized encoder uses $r$ uniformly distributed bits, the random variable $\mathbf{X}=\mbox{Enc}(\mathbf{m},\mathbf{R})$ is flat over ${\cal C}_\mathbf{m}$.

\begin{lemma}\label{lem: strong comparison} The relations between strong LLR-AMD codes and strong $\rho$-AMD codes are as follows. \begin{enumerate} \item If there exists a regular randomized encoder for a $(q^k,q^n,2^r,\alpha,\delta)$-strong LLR-AMD code, then there is an encoder for a strong $\rho$-AMD code with security parameter $\delta$ and leakage parameter $\rho$, where $\rho\leq\frac{\alpha r}{n\log q}$. \item If there exists a regular randomized encoder for a strong $\rho$-AMD code with security parameter $\delta$ and leakage parameter $\rho$, then there is an encoder for a $(q^k,q^n,2^r,\alpha,\delta)$-strong LLR-AMD code with $\alpha$ and $r$ satisfying $\alpha\leq\frac{n\rho\log q}{r}$ and $r\geq\log \frac{1}{\delta}+n\rho\log q$. \end{enumerate} \end{lemma}

The proof of Lemma \ref{lem: strong comparison} is given in Appendix \ref{sec: proof of strong comparison}. In \cite{LLR-AMD}, it is shown that the optimal AMD code in Construction \ref{ex: AMD} gives a $(q^d,q^{d+2},q,\alpha,\frac{d+1}{q^{1-\alpha}})$-strong LLR-AMD code. The parameters of this LLR-AMD code are $k=d$, $n=d+2$, $r=\log q$ and $\delta=\frac{d+1}{q^{1-\alpha}}$.
A simple mathematical manipulation of these equations gives $\alpha=1-\log_q \frac{n-1}{\delta}$, and substituting this into Lemma \ref{lem: strong comparison}, item 1, we obtain $$ \rho\leq\frac{(1-\log_q \frac{n-1}{\delta})\log q}{n\log q }=\frac{1-\log_q \frac{n-1}{\delta}}{n }. $$ This results in the following. \begin{corollary} The code in Construction \ref{ex: AMD} is a strong $\rho$-AMD code with $k=d$, $n=d+2$, security parameter $\delta$ and leakage parameter $\rho\leq\frac{1-\log_q \frac{n-1}{\delta}}{n }$. \end{corollary} It is easy to see that $\rho<\frac{1}{n}$. Thus the resulting construction of strong $\rho$-AMD codes can only tolerate a very small leakage. Moreover, the upper bound on $\rho$ vanishes as $n$ goes to infinity, and so this construction cannot give a non-trivial family of strong $\rho$-AMD codes. We note that the same construction yields a family of strong LLR-AMD codes with asymptotic rate $1$.

\begin{lemma}\label{lem: weak comparison} The relations between weak LLR-AMD codes and weak $\rho$-AMD codes are as follows. \begin{enumerate} \item A $(q^k,q^n,\alpha,\delta)$-weak LLR-AMD code is a weak $\rho$-AMD code with security parameter $\delta$ and leakage parameter $\rho$ satisfying $\rho\leq\frac{\alpha k}{n}$. \item A weak $\rho$-AMD code with security parameter $\delta$ and leakage parameter $\rho$ is a $(q^k,q^n,\alpha,\delta)$-weak LLR-AMD code satisfying $\alpha\leq\frac{\rho n}{ k}$. \end{enumerate} \end{lemma}

The proof of Lemma \ref{lem: weak comparison} is given in Appendix \ref{sec: proof of weak comparison}. A construction of $(q^d,q^{d+1},\alpha,\frac{2}{q^{1-\alpha d}})$-weak LLR-AMD codes is given in \cite[Theorem 2]{LLR-AMD}. The code has parameters $k=d$, $n=d+1$ and $\delta=\frac{2}{q^{1-\alpha d}}$.
A simple mathematical manipulation of these equations gives $\alpha=\frac{1-\log_q \frac{2}{\delta}}{n-1}$, and so from Lemma \ref{lem: weak comparison}, item 1, we obtain $$ \rho\leq\frac{(\frac{1-\log_q \frac{2}{\delta}}{n-1})(n-1)}{n}=\frac{1-\log_q \frac{2}{\delta}}{n}. $$ \begin{corollary} The code in \cite[Theorem 2]{LLR-AMD} is a weak $\rho$-AMD code with $k=d$, $n=d+1$, security parameter $\delta$ and leakage parameter $\rho\leq\frac{1-\log_q \frac{2}{\delta}}{n}$. \end{corollary} This construction gives $\rho$-AMD codes with small $\rho$, and cannot be used to construct a family of $\rho$-AMD codes for $\rho>0$.

\subsection{Efficiency bounds for $\rho$-AMD codes}\label{sec: upper bound}

\begin{theorem}\label{lem: generalised upper bound} If an $(n,k)$-coding scheme is a strong $\rho$-AMD code with security parameter $\delta$, then \begin{equation} k\leq n(1-\rho)+\frac{2\log \delta-1}{\log q}. \end{equation} The achievable rate of strong $\rho$-AMD codes is upper bounded by $1-\rho$. \end{theorem} \begin{proof} Consider a strong $\rho$-AMD code with security parameter $\delta$. By Proposition \ref{prop1}, $\tilde{H}_\infty({\bf X} |{\bf Z})\geq\log\frac{1}{\delta}$ should hold for any $\mathbf{Z}$ satisfying $\tilde{H}_\infty({\bf X} |{\bf Z})\geq H_\infty( {\bf X}) -\rho n\log q$. In particular, the inequality should hold for $\mathbf{Z}$ such that $\tilde{H}_\infty({\bf X} |{\bf Z})= H_\infty( {\bf X}) -\rho n\log q$. We then have $H_\infty( {\bf X}) -\rho n\log q\geq\log\frac{1}{\delta}$. On the other hand, we always have $\log |{\cal C}_\mathbf{m}|\geq H_\infty( {\bf X})$, where ${\cal C}_\mathbf{m}$ denotes the set of codewords corresponding to message $\mathbf{m}$, which is the support of ${\bf X}$. This gives the following lower bound on $|{\cal C}_\mathbf{m}|$. \begin{equation}\label{eq: coset bound} |{\cal C}_\mathbf{m}| \geq \frac{2^{\rho n \log q}}{\delta} = \frac{q^{\rho n}}{\delta}.
\end{equation} Now suppose the adversary chooses an offset $\Delta\neq 0^n$ uniformly at random. We have \begin{equation} \begin{split} \delta & \geq \mathsf{Pr}[\mbox{Enc}({\bf m}) + \Delta \in \cup_{{\bf m}' \neq {\bf m}} {\cal C}_{{\bf m}'} ]\\ &\geq \frac{| \bigcup_{{\bf m}' \neq {\bf m}} {\cal C}_{{\bf m}'} |}{|\mathbb{F}_q^n| - 1}\\ & \overset{(\ref{eq_le_10}),(\ref{eq: coset bound})}{\geq} \frac{(q^{k}-1)\cdot \frac{q^{\rho n}}{\delta}}{q^{n}-1}. \\ \end{split} \end{equation} Therefore, $$ k\leq n(1-\rho)+\frac{2\log \delta-1}{\log q}. $$ \qed \end{proof}

\begin{proposition}\label{pro: weak bound} If an $(n,k)$-coding scheme is a weak $\rho$-AMD code with security parameter $\delta$, then $q^{\rho n-k}\leq\delta$ and $\frac{q^{k}-1}{q^{n}-1}\leq \delta$. \end{proposition}

The proof of Proposition \ref{pro: weak bound} is given in Appendix \ref{sec: proof of weak bound}.

\section{Limited-View $\rho$-AMD codes} \label{sec: LV-AMD construction}

We consider a special type of leakage where the adversary chooses a subset $S$ with $|S| = \rho n$ ($n$ is the codeword length), and the codeword components indexed by this set are revealed to the adversary, who then uses this information to construct the offset vector. A tampering strategy is a function from $\mathbb{F}_q^{n}$ to $\mathbb{F}_q^{n}$ described by the notation $f_{S,g}$, where $S\subset [n]$ and $g:\mathbb{F}_q^{n\rho}\rightarrow\mathbb{F}_q^{n}$ is a function, with the following interpretation. The set $S$ specifies the subset of $\rho n$ indices of the codeword that the adversary chooses to read. The function $g$ determines an offset for each value read on the subset $S$. A $\rho^{LV}$-AMD code provides protection against all such adversary strategies. (This approach to defining tampering functions is inspired by Non-Malleable Codes (NMC) \cite{DzPiWi}.) \remove{ 
} Let $\mathcal{S}^{[n\rho]}$ be the set of all subsets of $[n]$ of size $n\rho$. Let $\mathcal{M}(\mathbb{F}_q^{n\rho},\mathbb{F}_q^{n})$ denote the set of all functions from $\mathbb{F}_q^{n\rho}$ to $\mathbb{F}_q^{n}$, namely, $\mathcal{M}(\mathbb{F}_q^{n\rho},\mathbb{F}_q^{n}):=\{g:\mathbb{F}_q^{n\rho}\rightarrow\mathbb{F}_q^{n}\}$.

\begin{definition}[$\mathcal{F}^{add}_{\rho}$] The class of tampering functions $\mathcal{F}^{add}_{\rho}$ consists of functions $\mathbb{F}_q^n\rightarrow\mathbb{F}_q^n$ that are described by two parameters, $S\in\mathcal{S}^{[n\rho]}$ and $g\in\mathcal{M}(\mathbb{F}_q^{n\rho},\mathbb{F}_q^{n})$. The set $\mathcal{F}^{add}_{\rho}$ of limited-view algebraic tampering functions is defined as follows: \begin{equation}\label{eq: AMD with leakage} \mathcal{F}^{add}_{\rho}=\left\{f_{S,g}(\mathbf{x})\ |\ S\in\mathcal{S}^{[n\rho]},g\in\mathcal{M}(\mathbb{F}_q^{n\rho},\mathbb{F}_q^{n})\right\}, \end{equation} where $f_{S,g}(\mathbf{x})=\mathbf{x}+ g(\mathbf{x}_{|S})$ for $\mathbf{x}\in\mathbb{F}_q^{n}$. \end{definition}

\begin{definition}[$\rho^{LV}$-AMD ]\label{def: LV-AMD codes} An $(n,k)$-coding scheme is called a strong $\rho^{LV}$-AMD code with security parameter $\delta$ if $\mbox{Pr}[\mbox{Dec}(f(\mbox{Enc}(\mathbf{m})))\notin\{\mathbf{m},\perp\}]\leq\delta$ for any message $\mathbf{m}\in\mathbb{F}_q^k$ and any $f_{S,g}\in\mathcal{F}^{add}_{\rho}$. It is called a weak $\rho^{LV}$-AMD code if the security is only required to hold for a random message $\mathbf{M}\leftarrow\mathbb{F}_q^k$ rather than an arbitrary message $\mathbf{m}$. \end{definition}

We first give a generic construction of strong $\rho^{LV}$-AMD codes from WtII codes and AMD codes.

\begin{construction}\label{con: basic construction} Let $(\mbox{AMDenc},\mbox{AMDdec})$ be a $(q^k,q^{n'},\delta)$-AMD code and let $(\mbox{WtIIenc}, \mbox{WtIIdec})$ be a linear $(\rho,0)$-wiretap II code with encoder $\mbox{WtIIenc}:\mathbb{F}_q^{n'}\rightarrow\mathbb{F}_q^n$. Then $(\mbox{Enc},\mbox{Dec})$ defined as follows is a strong $\rho^{LV}$-AMD code with security parameter $\delta$. $$ \left\{ \begin{array}{ll} \mbox{Enc}(\mathbf{m})&=\mbox{WtIIenc}(\mbox{AMDenc}(\mathbf{m}));\\ \mbox{Dec}(\mathbf{x})&=\mbox{AMDdec}(\mbox{WtIIdec}(\mathbf{x})).\\ \end{array} \right. 
$$ When instantiated with the $(q^k,q^{k+2},\frac{k+1}{q})$-AMD code in Construction \ref{ex: AMD} and the linear $(\rho,0)$-wiretap II code in Construction \ref{ex: WtII}, we obtain a family of strong $\rho^{LV}$-AMD codes with security parameter $\frac{k+1}{q}$ that achieves rate $1-\rho$. \end{construction} \begin{figure} \centerline{\includegraphics[scale=0.45]{AMDcode.jpg}} \caption{\label{fig: tampering experiment}WtII$\circ$AMD construction with $A^i_\mathbf{m}$ denoting the values of $\mbox{AMDenc}(\mathbf{m})$} \end{figure} \begin{proof} Since both AMDenc and WtIIenc are randomized encoders, in this proof we write the randomness of each encoder explicitly. Let $I$ denote the randomness of AMDenc and let $J$ denote the randomness of WtIIenc. As illustrated in Fig. \ref{fig: tampering experiment}, a message ${\bf m}$ is first encoded into an AMD codeword $A^I_{\bf m}=\mbox{AMDenc}({\bf m},I)$. The AMD codeword $A^I_{\bf m}$ is then further encoded into a WtII codeword, which is the final $\rho^{LV}$-AMD codeword: $\mbox{Enc}({\bf m})=\mbox{WtIIenc}(A^I_{\bf m},J)$. According to (\ref{eq: WtII security}), for any two values $i_1,i_2$ of the AMD encoder randomness, $\mbox{SD}\left(\mbox{WtIIenc}(A^{i_1}_{\bf m},J)_{|S};\mbox{WtIIenc}(A^{i_2}_{\bf m},J)_{|S}\right)=0$. This says that $A^\mathbf{I}_{\bf m}$ and $\mbox{Enc}(\mathbf{m})_{|S}$ are independent random variables; in particular, $\mathbf{I}$ and $\mbox{Enc}(\mathbf{m})_{|S}$ are independent. According to Definition \ref{def: LV-AMD codes}, to show that $(\mbox{Enc},\mbox{Dec})$ is a strong $\rho^{LV}$-AMD code with security parameter $\delta$, we need to show that for any message ${\bf m}$ and any $f_{S,g}\in\mathcal{F}^{add}_{\rho}$, $\mbox{Pr}[\mbox{Dec}(f_{S,g}(\mbox{Enc}(\mathbf{m})))\notin \{\mathbf{m},\perp\}] \leq \delta$, where the probability is over the randomness ($\mathbf{I},\mathbf{J}$) of the encoder Enc. We show this in two steps.
\textbf{Step 1.} In this step, we condition on the event $\mbox{Enc}(\mathbf{m})_{|S}= {\bf a}$ and bound the error probability of (Enc,Dec) under this condition. We compute $$ \begin{array}{l} \mbox{Pr}[\mbox{Dec}(f_{S,g}(\mbox{Enc}(\mathbf{m})))\notin \{\mathbf{m},\perp\}|(\mbox{Enc}(\mathbf{m})_{|S}=\mathbf{a})]\\ =\mbox{Pr}[\mbox{Dec}(\mbox{Enc}(\mathbf{m})+g(\mathbf{a}))\notin \{\mathbf{m},\perp\}|(\mbox{Enc}(\mathbf{m})_{|S}=\mathbf{a})]\\ =\mbox{Pr}[\mbox{AMDdec}(\mbox{WtIIdec}(\mbox{WtIIenc}(\mbox{AMDenc}(\mathbf{m},\mathbf{I}),\mathbf{J})+g(\mathbf{a})))\notin \{\mathbf{m},\perp\}\\ \ \ \ \ |(\mbox{Enc}(\mathbf{m})_{|S}=\mathbf{a})]\\ =\mbox{Pr}[\mbox{AMDdec}(\mbox{AMDenc}(\mathbf{m},\mathbf{I})+\mbox{WtIIdec}(g(\mathbf{a})))\notin \{\mathbf{m},\perp\}|(\mbox{Enc}(\mathbf{m})_{|S}=\mathbf{a})]\\ =\mbox{Pr}[\mbox{AMDdec}(\mbox{AMDenc}(\mathbf{m},\mathbf{I})+\mbox{WtIIdec}(g(\mathbf{a})))\notin \{\mathbf{m},\perp\}]\\ \leq \delta, \end{array} $$ where the third equality follows from the linearity of (WtIIenc,WtIIdec), the last equality follows from the independence of $\mathbf{I}$ and $\mbox{Enc}(\mathbf{m})_{|S}$ established at the beginning of the proof, and the inequality follows from the security of (AMDenc,AMDdec). \textbf{Step 2.} In this step, we conclude the security proof by showing $$ \begin{array}{l} \mbox{Pr}[\mbox{Dec}(f_{S,g}(\mbox{Enc}(\mathbf{m})))\notin \{\mathbf{m},\perp\}]\\ =\sum_{{\bf a}} \mbox{Pr}[\mbox{Enc}(\mathbf{m})_{|S}= {\bf a}]\cdot\mbox{Pr}[\mbox{Dec}(f_{S,g}(\mbox{Enc}(\mathbf{m})))\notin \{\mathbf{m},\perp\}|(\mbox{Enc}(\mathbf{m})_{|S}=\mathbf{a})]\\ \leq \sum_{{\bf a}} \mbox{Pr}[\mbox{Enc}(\mathbf{m})_{|S}= {\bf a}]\cdot\delta\\ =\delta, \end{array} $$ where the inequality follows from \textbf{Step 1}. Finally, the rate of the $(\rho,0)$-wiretap II code in Construction \ref{ex: WtII} is $\frac{k+2}{n}=1-\rho$.
So the asymptotic rate of the strong $\rho^{LV}$-AMD code family is $$ \lim_{n\rightarrow \infty}\frac{k}{n}=\lim_{n\rightarrow \infty}\frac{(1-\rho)n-2}{n}=1-\rho. $$ \qed \end{proof} { We next show a construction of weak $\rho^{LV}$-AMD codes that achieves asymptotic rate $1$. \begin{construction}\label{th: weak LV-AMD}Let $\mathbb{F}_q$ be a finite field of $q$ elements. Let $G$ be a $k\times k$ non-singular matrix over $\mathbb{Z}_{q-1}$ such that each column of $G$ consists of distinct entries, i.e., $g_{i,j}\neq g_{i^{'},j}$ for any $j$ and $i\neq i^{'}$. Assume the entries of $G$ (viewed as integers) are upper-bounded by $\psi k$ for a constant $\psi$, i.e., $g_{i,j}\leq \psi k$. Then the following construction gives a family of weak $\rho^{LV}$-AMD codes of asymptotic rate $1$ for any leakage parameter $\rho<1$. $$ \mbox{Enc}\footnote{The message distribution in this construction is not exactly uniform over $\mathbb{F}_q^k$ but over $(\mathbb{F}_q^{*})^k$. So this construction can achieve security even when the message distribution is not uniform.}:(\mathbb{F}_q^{*})^k\rightarrow (\mathbb{F}_q^{*})^k\times\mathbb{F}_q: \mathbf{m}\mapsto(\mathbf{m}||f(\mathbf{m},G)), $$ where $\mathbb{F}_q^{*}$ denotes the set of non-zero elements of $\mathbb{F}_q$ and the tag $f(\mathbf{m},G)$ is generated as follows. \begin{equation}\label{eq: tag function} f(\mathbf{m},G)=\sum_{j=1}^k\prod_{i=1}^km_i^{g_{i,j}}. \end{equation} The decoder Dec checks whether the first $k$ components of the input vector, when used in (\ref{eq: tag function}), generate a tag that matches the last component. \end{construction} The proof of Construction \ref{th: weak LV-AMD} is given in Appendix \ref{apdx: proof of theorem weak LV-AMD}. Concrete constructions of the matrix $G$ can be found in \cite[Remark 2]{LLR-AMD}. } \section{Applications}\label{sec: applications} \subsection{Robust Ramp SSS} A \textit{Secret Sharing Scheme (SSS)} consists of two algorithms (Share,Recover).
The algorithm Share maps a secret $\mathbf{s}\in\mathcal{S}$ to a vector $\mathbf{S}=(S_1,\ldots,S_N)$, where the shares $S_i$ are in some set $\mathcal{S}_i$ and share $S_i$ is given to participant $P_i$. The algorithm Recover takes as input a vector of shares $\tilde{\mathbf{S}}=(\tilde{S}_1,\ldots,\tilde{S}_N)$ with $\tilde{S}_i\in\mathcal{S}_i\bigcup\{\perp\}$, where $\perp$ denotes an absent share. For a $(t, N)$-threshold SSS, $t$ shares reveal no information about the secret $\mathbf{s}$ and $t +1$ shares uniquely recover the secret $\bf s$. For a $(t, r, N)$-\textit{ramp} SSS \cite{ramp SSS} with $(\mathsf{Share}_\mathsf{rsss}, \mathsf{Recover}_\mathsf{rsss})$ as sharing and recovering algorithms, the access structure is specified by two thresholds. The privacy threshold is $t$, and the reconstruction threshold is $r$. In a $(t, r, N)$-ramp SSS, subsets of $t$ or fewer shares do not reveal any information about the secret, and subsets of $r$ or more shares can uniquely recover the secret $\bf s$. A set of shares of size $a$ with $t< a < r$ may leak some information about the secret. In particular, we consider ramp schemes in which a set of $t + \alpha (r - t)$ shares leaks an $\alpha$ fraction of the secret information. \\ \begin{definition}[$(t, r, N)$-Ramp Secret Sharing Scheme] \label{def: rsss} A $(t, r, N)$-ramp secret sharing scheme consists of a pair of algorithms $(\mathsf{Share}_\mathsf{rsss}, \mathsf{Recover}_\mathsf{rsss})$, where $\mathsf{Share}_\mathsf{rsss}$ randomly maps a secret ${\bf s}\in\mathcal{S}$ to a share vector ${\bf S}=(S_1,\cdots,S_N)$ and $\mathsf{Recover}_\mathsf{rsss}$ deterministically reconstructs a $\tilde{{\bf s}}\in\mathcal{S}$ or outputs $\bot$, satisfying the following. \begin{itemize} \item Privacy: The adversary can access up to $r - 1$ shares. If the number of shares accessed by the adversary is $a\leq t$, no information will be leaked about the secret.
If the number of leaked shares is $a=t + \alpha(r - t)$, where $0 < \alpha < 1$, then $\tilde{H}_\infty(S|S_{i_1} \cdots S_{i_a})\geq H_\infty(S)-\alpha\log |\mathcal{S}|$ \footnote{This definition of leakage is seemingly different from \cite{new ramp}, where a uniform distribution of the secret $S$ is assumed and Shannon entropy is used instead of min-entropy. }, for $S\leftarrow\mathcal{S}$ and any $\{i_1, \cdots, i_a\} \subset [N]$. \item Reconstruction: Any $r$ correct shares can reconstruct the secret ${\bf s}$. \end{itemize} \end{definition} A \textit{linear ramp SSS} has the additional property that the Recover function is linear, namely, for any $\mathbf{s}\in\mathcal{S}$, any share vector $\mathbf{S}$ of $\mathbf{s}$, and any vector $\mathbf{S}^{'}$ (possibly containing some $\perp$ symbols), we have $\mathsf{Recover}_\mathsf{rsss}( \mathbf{S} + \mathbf{S}^{'}) = \mathbf{s} + \mathsf{Recover}_\mathsf{rsss}(\mathbf{S}^{'})$, where vector addition is defined element-wise and addition with $\perp$ is defined by $\perp+ {\bf x} = \mathbf{x} + \perp = \perp$ for all $\mathbf{x}$. In a linear SSS, the adversary can modify the shares as $\tilde{S}_i=S_i+\Delta_i$ such that the difference $\Delta=\tilde{\mathbf{s}}-\mathbf{s}$ between the reconstructed secret and the shared secret is known to the adversary. \\ In a $(t, N, \delta)$-robust SSS, for any $t + 1$ shares with at most $t$ shares modified by the adversary, the reconstruction algorithm can recover the secret ${\bf s}$, or detect the adversarial modification and output $\perp$, with probability at least $1 - \delta$ \cite{AMD}. That is, with probability at most $\delta$, the secret is either not recoverable, or a wrong secret is accepted.
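The algebraic attack enabled by linearity can be made concrete with a toy additive $N$-out-of-$N$ sharing over the integers modulo a prime, which is a linear SSS; all names and parameters below are ours and serve only as an illustration, not as any construction from this paper:

```python
import random

p = 2**31 - 1  # a prime; all arithmetic is in the field Z_p

def share(secret, N):
    """Additive N-out-of-N sharing: a minimal linear SSS."""
    shares = [random.randrange(p) for _ in range(N - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

def recover(shares):
    return sum(shares) % p

secret = 1234
shares = share(secret, N=5)
assert recover(shares) == secret

# Without ever seeing the secret, the adversary adds an offset
# Delta_i to each share it controls ...
deltas = [7, 0, 11, 0, 0]
tampered = [(s + d) % p for s, d in zip(shares, deltas)]

# ... and, by linearity, knows exactly how the reconstructed secret
# shifts: the recovered value is s + recover(deltas), undetected.
assert recover(tampered) == (secret + recover(deltas)) % p
```

The plain scheme accepts the shifted secret without complaint, which is precisely the manipulation that the robustness property is meant to detect.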
A modular construction of the robust SSS using an AMD code and a linear SSS is given by Cramer {\it et al.} \cite{AMD}.\\ We define a robust ramp secret sharing scheme for the setting in which the adversary can adaptively corrupt up to $t + \rho(r - t)$ shares, where $0<\rho<1$ is a constant (the level of robustness against active adversaries). \begin{definition}[$(t, r, N, \rho, \delta)$-Robust Ramp Secret Sharing Scheme] \label{def: rrsss} A $(t, r, N, \rho, \delta)$-robust ramp secret sharing scheme consists of a pair of algorithms $(\mathsf{Share}_\mathsf{rrsss}, \mathsf{Recover}_\mathsf{rrsss})$, where $\mathsf{Share}_\mathsf{rrsss}$ randomly maps a secret ${\bf s}\in\mathcal{S}$ to a share vector ${\bf S}=(S_1,\cdots,S_N)$ and $\mathsf{Recover}_\mathsf{rrsss}$ deterministically reconstructs a $\tilde{{\bf s}}\in\mathcal{S}$ or outputs $\bot$, satisfying the following. \begin{itemize} \item Privacy: The adversary can access up to $r - 1$ shares. If the number of shares accessed by the adversary is $a\leq t$, no information will be leaked about the secret. If the number of leaked shares is $a=t + \alpha(r - t)$, where $0 < \alpha < 1$, then $\tilde{H}_\infty(S|S_{i_1} \cdots S_{i_a})\geq H_\infty(S)-\alpha\log |\mathcal{S}|$, for $S\leftarrow\mathcal{S}$ and any $\{i_1, \cdots, i_a\} \subset [N]$. \item Reconstruction: Any $r$ correct shares can reconstruct the secret ${\bf s}$. \item Robustness: For any $r$ shares with at most $t + \rho(r - t)$ corrupted shares, the probability that either the secret is correctly reconstructed or the adversary's modifications are detected is at least $1 - \delta$. \end{itemize} \end{definition} We propose a general construction of a robust ramp secret sharing scheme using a $\rho_\mathsf{amd}$-AMD code and a $(t, r, N)$-ramp secret sharing scheme.
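Before the formal statement, the encode-then-share composition can be sketched end to end with toy stand-ins: a one-element strong AMD code in the style of Cramer et al. (tag $r^3 + mr$) and additive $N$-out-of-$N$ sharing playing the role of the linear SSS. The function names and parameter choices are ours, and neither component is meant as this paper's concrete instantiation:

```python
import random

p = 2**31 - 1  # prime field size q

# Toy strong AMD code for one field element m, in the style of
# Cramer et al.: Enc(m) = (m, r, r^3 + m*r), failure prob. <= 2/p.
def amd_enc(m):
    r = random.randrange(p)
    return [m, r, (pow(r, 3, p) + m * r) % p]

def amd_dec(c):
    m, r, t = c
    return m if (pow(r, 3, p) + m * r) % p == t else None  # None plays the role of ⊥

# Toy linear SSS for length-3 codewords: additive N-out-of-N sharing
# applied coordinate-wise (privacy t = N-1, reconstruction r = N).
def sss_share(c, N):
    rows = [[random.randrange(p) for _ in c] for _ in range(N - 1)]
    last = [(ci - sum(col)) % p for ci, col in zip(c, zip(*rows))]
    return rows + [last]

def sss_recover(rows):
    return [sum(col) % p for col in zip(*rows)]

def share_rrsss(m, N):      # Share_rrsss = Share_rsss o AMDenc
    return sss_share(amd_enc(m), N)

def recover_rrsss(rows):    # Recover_rrsss = AMDdec o Recover_rsss
    return amd_dec(sss_recover(rows))

m = 42
S = share_rrsss(m, N=4)
assert recover_rrsss(S) == m

# An additive corruption of the shares shifts the recovered AMD
# codeword by a fixed offset, which the AMD layer then detects
# (except with probability at most 2/p over the tag randomness r).
S[0] = [(x + 5) % p for x in S[0]]
assert recover_rrsss(S) is None
```

The same additive corruption that silently shifted the secret in a bare linear SSS is now flagged by the inner AMD check.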
\begin{theorem} Consider a linear $(t, r, N)$-ramp secret sharing scheme with the algorithm pair $(\mathsf{Share}_\mathsf{rsss}, \mathsf{Recover}_\mathsf{rsss})$ and shares $S_i \in {\mathbb F}_q^m$, $i = 1, \cdots, N$, and let (Enc,Dec) be a $\rho_\mathsf{amd}$-AMD code ${\mathbb F}_q^k \rightarrow {\mathbb F}_q^{n}$, with failure probability $\delta_\mathsf{amd}$ and $n = (r - t)m$. Then the algorithm pair $(\mathsf{Share}_\mathsf{rrsss}, \mathsf{Recover}_\mathsf{rrsss})$ given by $\mathsf{Share}_\mathsf{rrsss}(\mathbf{s})=\mathsf{Share}_\mathsf{rsss}(\mbox{Enc}(\mathbf{s}))$ and $\mathsf{Recover}_\mathsf{rrsss}(\tilde{\mathbf{S}})=\mbox{Dec}(\mathsf{Recover}_\mathsf{rsss}(\tilde{\mathbf{S}}))$ is a $(t, r, N, \rho, \delta)$-robust ramp secret sharing scheme with $\rho \leq \rho_\mathsf{amd}$ and $\delta \leq \delta_\mathsf{amd}$. \end{theorem} \begin{proof} First, we show that if the adversary reads at most $t + \rho(r - t)$ shares, the $\rho_\mathsf{amd}$-AMD codeword $c$ leaks at most $\rho n \log q$ bits of information. Since the $\rho_\mathsf{amd}$-AMD codeword is encoded by a $(t, r, N)$-ramp secret sharing scheme, $t$ shares do not leak any information about the $\rho_\mathsf{amd}$-AMD codeword $c$. Given that the share size satisfies $|{\cal S}_i|\leq q^m$ and $n = (r - t)m$, the extra $\rho(r - t)$ shares leak at most $\rho(r-t)m\log q=\rho n \log q$ bits of information about the $\rho_\mathsf{amd}$-AMD codeword $c$. Second, we show that the resulting secret sharing scheme is $\delta$-robust. For a secret $\mathbf{s}$, let $\mathbf{S}\leftarrow \mathsf{Share}_\mathsf{rrsss}(\mathbf{s})$ be the original share vector and $\tilde{\mathbf{S}}$ be the corrupted one, and let $\mathbf{S}^{'}=\tilde{\mathbf{S}}-\mathbf{S}$.
For any $r$ shares, the failure probability of the reconstruction is given by $$ \begin{array}{ll} \mbox{Pr}[\mathsf{Recover}_\mathsf{rrsss}(\tilde{\mathbf{S}}) \notin \{\mathbf{s},\perp\}]&\stackrel{(1)}=\mbox{Pr}[\mbox{Dec}(\mathsf{Recover}_\mathsf{rsss} (\mathbf{S})+\mathsf{Recover}_\mathsf{rsss} (\mathbf{S}^{'}))\notin\{\mathbf{s},\perp\}]\\ &=\mbox{Pr}[\mbox{Dec}(\mbox{Enc}(\mathbf{s})+\Delta)\notin\{\mathbf{s},\perp\}],\\ \end{array} $$ where $\Delta = \mathsf{Recover}_\mathsf{rsss}(\mathbf{S}^{'})$ is chosen by the adversary $\mathbb{A}$, and (1) uses the linearity of the ramp scheme. In choosing $\Delta$, the adversary $\mathbb{A}$ can use at most a $\rho$ fraction of the information in the $\rho_\mathsf{amd}$-AMD codeword $c = \mbox{Enc}(\mathbf{s})$. Since at most $\rho n \log q$ bits of information are leaked to the adversary, that is, $\tilde{H}_\infty( C |{\bf Z})\geq H_\infty(C) - \rho n \log q $, it follows from the definition of a $\rho_\mathsf{amd}$-AMD code with $\rho \leq \rho_\mathsf{amd}$ that the decoding algorithm $\mbox{Dec}$ outputs the correct secret ${\bf s}$, or detects the error and outputs $\perp$, with probability at least $1-\delta_\mathsf{amd}$. This means that the ramp secret sharing scheme is robust and outputs either the correct secret, or detects the adversarial tampering, with probability at least $1-\delta_\mathsf{amd}$. Thus a $(t, r, N)$-ramp secret sharing scheme and a $\rho_\mathsf{amd}$-AMD code with security parameter $\delta_\mathsf{amd}$ give a $(t, r, N, \rho, \delta)$-robust ramp secret sharing scheme with $\delta \leq \delta_\mathsf{amd}$ and $\rho \leq \rho_\mathsf{amd}$. \qed \end{proof} \subsection{Wiretap II with Algebraic Adversary} The Wiretap II \cite{WtII} problem considers a passive adversary that can read a $\rho$ fraction of the codeword components, and the goal is to prevent the adversary from learning information about the sent message. Wiretap II with an active adversary has been considered in \cite{Lai Lifeng} and later generalized in \cite{eAWTP,AWTP}. In this latter general model, called the Adversarial Wiretap (AWTP) model, the adversary is characterized by two parameters $\rho_r$ and $\rho_w$, denoting the fractions of the codeword components the adversary can ``read'' and ``modify additively'', respectively.
The goal is two-fold: to prevent the adversary from obtaining any information (secrecy) and to recover the message despite the changes made by the adversary (reliability). It was proved \cite{AWTP} that in the AWTP model, where the adversary can write to a $\rho_w$ fraction of the codeword components additively, secure and reliable communication is possible if $\rho_r+\rho_w <1$. This says that when $\rho_r+\rho_w > 1$, one can only hope for a weaker type of security, for example, secrecy and error detection. We consider wiretap II with an algebraic adversary, who can read a $\rho$ fraction of the codeword components and tamper with the whole codeword algebraically, namely, by adding a non-zero group element to the codeword (codewords are assumed to live in a group). The adversary in this model is equivalent to the AWTP adversary with $\rho_r=\rho$ and $\rho_w=1$. But the coding goal of wiretap II with an algebraic adversary is different from that of AWTP. \begin{definition}\label{def_awtpchannel} An algebraic tampering wiretap II channel is a communication channel between Alice and Bob that is (partially) controlled by an adversary Eve with the following two capabilities. \begin{itemize} \item Read: Eve adaptively selects a fraction $\rho$ of the components of the transmitted codeword $\mathbf{c}=c_1,\cdots,c_n$ to read, namely, Eve's knowledge of the transmitted codeword is given by $ \mathbf{Z}=\{c_{i_1},\cdots, c_{i_{\rho n}}\} $, where $S=\{i_1,\cdots,i_{\rho n}\}\subset [n]$ is chosen by Eve. \item Write: Assume $\mathbf{c}\in\mathcal{G}$ for some additive group $\mathcal{G}$. Eve chooses an ``offset'' $\Delta\in\mathcal{G}$ according to $\mathbf{Z}$ and adds it to the codeword $\mathbf{c}$, namely, the channel outputs $\mathbf{c}+\Delta$.
\end{itemize} \end{definition} \begin{definition}[$(\rho,\epsilon,\delta)$-algebraic tampering wiretap II ($(\rho,\epsilon,\delta)$-AWtII)] \label{def_awtpcode} A $(\rho,\epsilon,\delta)$-AWtII code is a coding scheme (Enc,Dec) that guarantees the following two properties. \begin{itemize} \item {\em Secrecy:} For any pair of messages $\mathbf{m}_0$ and $\mathbf{m}_1$ and any $S\subset [n]$ of size $|S|\leq n\rho$, (\ref{eq: WtII security}) should hold, namely, $$ \mathsf{SD}(\mbox{Enc}(\mathbf{m}_0)_{|S};\mbox{Enc}(\mathbf{m}_1)_{|S})\leq \epsilon. $$ \item {\em Robustness:} If the adversary is passive, Dec always outputs the correct message. If the adversary is active, the probability that the decoder outputs a wrong message is bounded by $\delta$. That is, for any message $\mathbf{m}$ and any $\rho$-algebraic tampering wiretap II adversary $\mathbb{A}$, \[ \mathsf{Pr}[\mathsf{Dec}({\mathbb{A}}(\mathsf{Enc}(\mathbf{m}))) \notin \{\mathbf{m}, \perp\}] \leq \delta. \] \end{itemize} \end{definition} The secrecy property implies that a $(\rho,\epsilon,\delta)$-AWtII code is, in particular, a $(\rho,\epsilon)$-WtII code. The following rate upper bound follows directly from Lemma \ref{lem: WtII upper bound}. \begin{corollary} The rate of $(\rho,0,\delta)$-AWtII codes is bounded by $R \leq 1 - \rho$. \end{corollary} The robustness property of a $(\rho,\epsilon,\delta)$-AWtII code is the same as the security of a strong $\rho^{LV}$-AMD code (see Definition \ref{def: LV-AMD codes}). Furthermore, the construction of $\rho^{LV}$-AMD codes in Construction \ref{con: basic construction} uses a $(\rho, 0)$-WtII code to encode $\mathbf{c}=\mbox{AMDenc}(\mathbf{m})$, which guarantees secrecy with respect to any pair of $(\mathbf{c}_0,\mathbf{c}_1)$, and hence secrecy with respect to any pair of $(\mathbf{m}_0,\mathbf{m}_1)$. Together, these observations show that Construction \ref{con: basic construction} yields a family of $(\rho,0,\delta)$-AWtII codes.
\begin{corollary} There exists a family of $(\rho,0,\delta)$-AWtII codes that achieves rate $R = 1 - \rho$. \end{corollary} \section{Conclusion} We considered an extension of AMD codes to the setting where the storage leaks information and the amount of leaked information is bounded by $\rho\log|\mathcal{G}|$. We defined $\rho$-AMD codes that provide protection in this scenario, both with weak and strong security, and derived concrete and asymptotic bounds on the efficiency of codes in these settings. Table \ref{tb: bounds} compares our results with the original AMD codes and an earlier work (called LLR-AMD) that allows leakage in specific parts of the encoding process. Unlike LLR-AMD, which uses different leakage requirements for the weak and strong cases, we use a single model to express the leakage and require that the left-over entropy of the codeword be lower bounded. This makes our analysis and constructions more challenging. In particular, optimal constructions of LLR-AMD codes follow directly from the optimal constructions of the original AMD codes. However, constructing optimal $\rho$-AMD codes, in both the weak and the strong model, remains open. We gave an explicit construction of a family of codes with respect to a weaker notion of leakage ($\rho^{LV}$-AMD) whose rate achieves the upper bounds of the $\rho$-AMD codes. We finally gave two applications of the codes, to robust ramp secret sharing schemes and to the wiretap II channel with an algebraic adversary.
\section{Introduction}\label{sec:intro} A ubiquitous problem in machine learning, statistics, and signal processing is to accurately estimate an unknown sparse vector from a few noisy linear measurements. This estimation problem, which we refer to as \emph{sparse coding}, is at the heart of the field of compressed sensing, revealing that under sparsity assumptions it is possible to successfully recover a signal sampled significantly below the Nyquist rate~\cite{candes2006stable,donoho2006compressed}. This, in turn, led to a dramatic increase in magnetic resonance imaging (MRI) scanning session speed~\cite{lustig2007sparse}. Another exciting application that also builds on the sparsity assumption is unsupervised representation learning, i.e., given high-dimensional input data, such as an image, finding a low-dimensional representation that captures the intrinsic underlying structure in the input~\cite{engan1999method,aharon2006k,mairal2009online}. These representations are often used in image restoration tasks to effectively remove noise (denoising)~\cite{elad2006image,dong2011sparsity}, fill in missing pixels (inpainting)~\cite{shen2009image,fadili2009inpainting,sulam2016trainlets}, and achieve high-quality digital zoom (super-resolution)~\cite{fadili2009inpainting,yang2010image,zeyde2010single,romano2014single}. Sparsity also plays a key role in linear regression when given a large pool of features, to form a predictive rule that estimates an unknown response using a smaller, \emph{interpretable} subset of features that manifests the strongest effects~\cite{tibshirani1996regression,chen1994basis,chen2001atomic,efron2004least}.
To formalize the sparse coding problem, which is central for tackling the aforementioned applications, we consider the following linear model: \begin{equation} {\bm{b}} = {\bm{A}}{\bm{x}} + {\bm{v}}, \end{equation} where ${\bm{A}}$ is a matrix of size ${M\times N}$, the vector ${\bm{x}}$ is of length $N$, and ${\bm{v}}$ is a noise vector of length~${M}$. In this paper, we focus on a challenging setting in which $M \ll N$, where a crucial assumption we make is that the vector ${\bm{x}}$ is $k$-sparse, i.e., it contains only $k$ non-zero elements with $k \ll N$~\cite{donoho2006compressed,candes2006stable,elad2010sparse}. In the context of sparse linear regression, the vector ${\bm{b}}$ is called a \emph{response}, the columns of ${\bm{A}}$ are the \emph{features}, and the rows correspond to the \emph{observations}. In the context of unsupervised representation learning, the matrix ${\bm{A}}$ is called a \emph{dictionary}, whose columns are referred to as basis elements. The assumption in this literature is that the underlying signal can be represented as a \emph{sparse} linear combination of a few columns taken from the dictionary ${\bm{A}}$, providing a form of compression~\cite{engan1999method,aharon2006k,mairal2009online}. Given a matrix ${\bm{A}}$ and a noisy vector ${\bm{b}}$, our goal is to find a sparse solution $\hat{{\bm{x}}}$ to the following optimization problem: \begin{align} \label{eq:k-sparse-problem} \hat{{\bm{x}}} = \arg \min_{{\bm{x}}} \|{\bm{A}}{\bm{x}} - {\bm{b}}\|_2^2 \ \ \text{subject to} \ \|{\bm{x}}\|_0=k, \end{align} where $\|{\bm{x}}\|_0$ is the $L_0$ pseudo-norm, counting the number of non-zeros in ${\bm{x}}$.
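As a quick illustration, the measurement model and a $k$-sparse ground truth can be simulated in a few lines of NumPy. This is a minimal sketch; the dimensions, sparsity level, and noise scale below are arbitrary illustrative choices, not values tied to the experiments reported later.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small dimensions: M measurements, N features, k non-zeros.
M, N, k = 8, 16, 3

A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)        # L2-normalize the columns of A

x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = 1.0                      # k-sparse ground-truth vector

sigma = 0.1
v = sigma * rng.standard_normal(M)    # Gaussian noise
b = A @ x + v                         # the linear model b = A x + v

assert np.count_nonzero(x) == k
```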
Unfortunately, the above is a non-convex optimization problem, and finding a sparse solution that minimizes~\eqref{eq:k-sparse-problem} is NP-hard in general~\cite{davis1997adaptive,natarajan1995sparse}: intuitively, to solve \eqref{eq:k-sparse-problem}, we should sweep over all possible combinations of the $k$-sparse vectors and choose the one that minimizes the squared error term. Problem \eqref{eq:k-sparse-problem} is often expressed in the following Lagrangian form: \begin{align} \label{eq:lambda-problem} \hat{{\bm{x}}} = \arg \min_{{\bm{x}}} \|{\bm{A}}{\bm{x}} - {\bm{b}}\|_2^2 + \lambda \|{\bm{x}}\|_0, \end{align} where a proper choice of $\lambda$---controlling the strength of the sparsity penalty (larger $\lambda$ leads to sparser $\hat{{\bm{x}}}$)---may render \eqref{eq:k-sparse-problem} and \eqref{eq:lambda-problem} equivalent, yielding the same solution. The desire to find a solution for the non-convex sparse coding problem has led to practical approximation algorithms, which are supported by theoretical guarantees. For example, the well-known lasso~\cite{tibshirani1996regression} and basis pursuit~\cite{chen2001atomic} algorithms suggest replacing the non-convex $L_0$ with the convex $L_1$ norm in \eqref{eq:lambda-problem}. This choice is attractive since one can utilize the convexity of the modified objective, and minimize it efficiently~\cite{daubechies2004iterative,beck2009fast,combettes2005signal}. Another popular algorithm is the orthogonal matching pursuit (OMP)~\cite{mallat1993matching,pati1993orthogonal,tropp2004greed}---a method that greedily picks one non-zero at a time in lieu of exhaustively sweeping over all the possible solutions.
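To make the greedy strategy concrete, the following is a minimal textbook-style sketch of OMP in NumPy (not necessarily the exact variant benchmarked in our experiments): it repeatedly selects the column most correlated with the current residual and re-fits a least-squares solution on the chosen support.

```python
import numpy as np

def omp(A, b, k):
    """Greedy orthogonal matching pursuit: pick one column at a time (k >= 1)."""
    M, N = A.shape
    support = []
    residual = b.copy()
    for _ in range(k):
        # Index of the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit restricted to the current support.
        z, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ z
    x_hat = np.zeros(N)
    x_hat[support] = z
    return x_hat
```

For instance, with an orthonormal `A` and a noiseless `b`, the first `k` picks are exactly the largest-magnitude entries of `b`.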
Loosely speaking, following \cite{elad2010sparse,tropp2006just}, these methods perform particularly well when (i) the sparsity level $k$ of the underlying ${\bm{x}}$ is ``small enough''; and (ii) the columns of ${\bm{A}}$ are of low coherence, implying that any subset of $k$ columns is almost an orthonormal submatrix~\cite{candes2005decoding}. In fact, under certain assumptions on $k$, the correlation structure of ${\bm{A}}$, and the noise level, lasso and OMP can provably find the correct locations of the non-zero elements in the unknown ${\bm{x}}$, and, in turn, obtain an estimate $\hat{{\bm{x}}}$ that is close to ${\bm{x}}$, e.g., in the Euclidean distance~\cite{tropp2004greed,tropp2006just,candes2006stable,donoho2006compressed,elad2010sparse}. The NP-hard nature of the sparse coding problem brings us to the fascinating and ever-evolving field of quantum computing. The rise of quantum and quantum-inspired technologies unlocks new opportunities to solve NP-hard problems more accurately than classic computing machines. Notable examples include the traveling salesman problem~\cite{lucas2014ising}, protein folding~\cite{dill2012protein}, and graph clustering~\cite{schaeffer2007graph}. Many of these problems can be cast as a quadratic unconstrained binary optimization (QUBO) problem~\cite{patton2019efficiently}, a formulation that is of particular interest to us as well. Furthermore, there has been growing activity in leveraging quantum technologies to tackle machine learning problems, such as linear regression and classification \cite{arthur2021qubo}, unsupervised clustering \cite{arthur2021qubo,otterbach2017unsupervised}, and even for improving deep generative models \cite{gao2020high}. \subsection{Our contribution} Due to the computational difficulty of minimizing the original problem \eqref{eq:lambda-problem}, it is typically replaced by one of its surrogates, e.g., lasso and OMP.
However, in many cases, the conditions for guaranteeing the validity of the surrogates are either unknown or not satisfied. This paper ``goes back to basics'' and introduces novel quantum machine learning algorithms, directly minimizing the non-convex problem~\eqref{eq:lambda-problem} with the ultimate goal of obtaining more accurate solutions than standard approximation algorithms. Concretely, we show how to formulate \eqref{eq:lambda-problem} as a QUBO problem, which, in turn, can be minimized efficiently using cutting-edge solving platforms, such as quantum computers, Ising machines~\cite{meirzada2022lightsolver}, or well-known heuristics including Simulated Annealing and Tabu Search~\cite{glover1989tabu,glover1990tabu2}. Our derivation, presented in Section~\ref{sec:quantum-coding}, covers (i) a simple case where each entry in the unknown sparse vector ${\bm{x}}$ is binary (1-bit representation); (ii) a more flexible setting, using a 2-bit representation per element in ${\bm{x}}$; and (iii) the most general formulation where ${\bm{x}}$ is represented using any arbitrary fixed-point precision. We also analyze the complexity of the proposed formulation with respect to the number of qubits (or spins) required to solve the QUBO problem. Naturally, the binary and 2-bit cases are highly efficient in terms of space complexity---we show how to form the QUBO problem without introducing any auxiliary (ancilla) spins. By contrast, the most general formulation in which ${\bm{x}}$ can have fractional, non-integer entries comes at the cost of increasing the space complexity. Yet, our formulation is highly efficient as it requires only one ancilla spin for each entry in ${\bm{x}}$, as we carefully argue in Section~\ref{sec:qubo_l0}. Numerical experiments, given in Section~\ref{sec:res}, demonstrate the superiority of our QUBO-based solutions over the classic and most commonly used approximation algorithms---lasso and OMP. 
Specifically, we minimize the proposed QUBO problem via the quantum-inspired computing framework of LightSolver~\cite{meirzada2022lightsolver}, and show how our method tends to produce more accurate estimations of the unknown ${\bm{x}}$ in regimes where the sample size is small or when the number of non-zero elements to be estimated is relatively large. \subsection{Related work} The general QUBO problem can be written as~\cite{patton2019efficiently} \begin{equation} \label{eq:qubo} \min_{{\bm{q}} \in \mathbb{B}^D} {\bm{q}}^{\top} {\bm{W}} {\bm{q}}, \end{equation} where $\mathbb{B}=\{0,1\}$ is the binary set and ${\bm{W}} \in \mathbb{R}^{D \times D}$ is a real-valued QUBO matrix. The ability to efficiently minimize~\eqref{eq:qubo} using quantum computers paves the way to the design of powerful solvers for fundamental machine learning problems. Importantly, QUBO optimization becomes appealing when the runtime of quantum-based algorithms is smaller than that of classic computers. Motivated by this capability, \cite{jun2021qubo} presented a QUBO formulation for solving a linear system of equations ${\bm{A}}{\bm{x}}={\bm{b}}$. The authors of \cite{arthur2021qubo} broadened this scope even further and showed how to fit least squares regressors, support vector machine (SVM) classifiers, and balanced k-means clustering algorithms, by translating these problems into a QUBO scheme, too. Notably, the above algorithms were shown to have improved time and space complexities compared to classic SVM and balanced k-means clustering algorithms, and similar complexity to that of classic least squares solvers. Our work enriches the quantum machine learning toolbox by introducing a novel QUBO formulation for NP-hard sparse coding problems. Stressing this point, since we express the sparsity penalty in general QUBO terms, we could augment it to the QUBO SVM method of \cite{arthur2021qubo} and tackle the sparse SVM problem \cite{bradley1998feature} instead.
We believe such an extension could be of great interest by itself, and we leave it for future work. Recently, \cite{wezeman2022quantum,ayanzadeh2019quantum,ayanzadeh2020ensemble} presented a QUBO formulation for solving a special case of \eqref{eq:lambda-problem}---also referred to as binary compressive sensing---for which the unknown ${\bm{x}}$ is sparse and \emph{binary}. These quantum-based solutions were shown to yield improved performance compared to the classic lasso method, an observation that is also corroborated by our experiments. Our paper builds upon the above line of work and extends the QUBO formulation to \emph{non-binary} ${\bm{x}}$, represented in fixed-point arithmetic. A biological problem that is tightly connected to binary sparse signal recovery is the estimation of the transcription factor in DNA binding. By making a linear modeling assumption, \cite{li2018quantum} expressed the gene sequence as a QUBO problem with \emph{binary} sparse regularization. The authors compared their quantum-based estimation to existing baseline methods (including lasso) and reported that the most significant improvement is obtained in small sample size regimes. This observation is also supported by our experiments, although conducted in a different context as well as in the challenging setting where ${\bm{x}}$ is a non-binary vector. The method reported in \cite{ide2022sparse} is the closest to ours, as it presents a QUBO formulation for the recovery of sparse signals represented in fixed-point arithmetic. Our formulation, however, is much more efficient with respect to the number of spins required to derive the QUBO model. Let $P$ be the number of bits defining the resolution of the fixed-point representation. To present~\eqref{eq:lambda-problem} as a QUBO problem for $P\geq3$, the approach presented in~\cite{ide2022sparse} introduces $P-2$ auxiliary spins per variable while ours uses only one!
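For intuition, problem \eqref{eq:qubo} can be minimized exactly for small $D$ by enumeration; the brute-force sketch below is a convenient correctness baseline when developing QUBO formulations (it is, of course, not one of the quantum or Ising solvers discussed in the works cited above, which target large $D$).

```python
import itertools
import numpy as np

def solve_qubo_brute_force(W):
    """Minimize q^T W q over q in {0,1}^D by exhaustive enumeration.

    Feasible only for small D: the search space has 2^D candidates.
    """
    D = W.shape[0]
    best_q, best_val = None, np.inf
    for bits in itertools.product((0, 1), repeat=D):
        q = np.array(bits, dtype=float)
        val = q @ W @ q
        if val < best_val:
            best_q, best_val = q, val
    return best_q, best_val
```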
We conclude with \cite{ferrari2022towards}, which ``translated to QUBO'' several feature selection methods. Instead of minimizing \eqref{eq:lambda-problem}, the authors of \cite{ferrari2022towards} assigned each feature with an information-theoretic measure of importance---e.g., its correlation with the target variable---and selected the ones that were deemed to be the most important. This approximation allows handling a non-binary ${\bm{x}}$ by assigning a single binary spin per feature, indicating whether the feature should be selected or not, akin to binary compressive sensing problems. Importantly, the formulations of the feature selection methods in \cite{ferrari2022towards} are substantially different from ours---we aim to tackle the original, ubiquitous sparse coding problem \eqref{eq:lambda-problem} in its most general form, rather than an alternative formulation of it. \section{A motivating example} \label{sec:motivation} Before describing the proposed QUBO formulations, we pause to present a motivating, small-scale example that demonstrates the advantage of an exhaustive method for solving the sparse coding problem compared to classical approximation methods. To this end, we generate a matrix ${\bm{A}}$ with $L_2$-normalized columns, i.e., $\|{\bm{a}}_{i}\|_2=1$ for all $1\leq i\leq N$, where ${\bm{a}}_{i}$ is the $i$th column of ${\bm{A}}$. We design the matrix ${\bm{A}}$ such that its \emph{mutual coherence}, defined as~\cite{donoho2005stable,tropp2006just,elad2010sparse} \begin{equation} \label{eq:coherence} \mu({\bm{A}}) = \max_{i \neq j}|{\bm{a}}_i^{\top} {\bm{a}}_j|, \end{equation} is minimized.\footnote{A detailed procedure for generating matrices with low coherence is given in Appendix~\ref{app:genA}.} We make this design choice since low mutual coherence is crucial for the success of lasso and OMP in recovering the unknown ${\bm{x}}$~\cite{elad2010sparse}.
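The mutual coherence in \eqref{eq:coherence} is straightforward to compute; the sketch below also normalizes the columns first, which is a convenience assumption (the definition presumes unit-norm columns, as stated above).

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) = max_{i != j} |a_i^T a_j| over L2-normalized columns."""
    A = A / np.linalg.norm(A, axis=0)   # normalize columns, as assumed in the text
    G = np.abs(A.T @ A)                 # absolute Gram matrix
    np.fill_diagonal(G, 0.0)            # discard the trivial i == j entries
    return float(G.max())
```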
Put simply, we wish to show that an algorithm that exhaustively searches for a solution of \eqref{eq:lambda-problem} is highly valuable, even when ``playing on the turf'' of lasso and OMP. \begin{figure} \centering \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./motivating_rec_err_vs_M__k=3_noise_std_01.png} \includegraphics[width=\textwidth]{./motivating_supp_err_vs_M__k=3_noise_std_01.png} \caption{Cardinality $k=3$, noise $\sigma=0.1$.} \label{fig:mot_rec_vs_m} \end{subfigure} \hfill \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./motivating_rec_err_vs_noise__k=3_M_8.png} \includegraphics[width=\textwidth]{./motivating_supp_err_vs_noise__k=3_M_8.png} \caption{Cardinality $k=3$, rows $M=8$.} \label{fig:mot_rec_vs_noise} \end{subfigure} \hfill \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./motivating_rec_err_vs_cardinality__noise_std_01_M_8.png} \includegraphics[width=\textwidth]{./motivating_supp_err_vs_cardinality__noise_std_01_M_8.png} \caption{Rows $M=8$, noise $\sigma=0.1$.} \label{fig:mot_rec_vs_k} \end{subfigure} \caption{Comparison between the solutions obtained by Algorithm~\ref{alg:exact}, which exhaustively minimizes~\eqref{eq:k-sparse-problem}, and two classic approximation algorithms---lasso and OMP. (a)~Estimation error as a function of the number of rows $M$. (b)~Estimation error as a function of the noise level $\sigma$. (c)~Estimation error as a function of the number of non-zeros $k$ in the true ${\bm{x}}$.
Each point in the graphs is an average over 30 independent realizations of ${\bm{A}},{\bm{x}}$, and ${\bm{v}}$; the length of ${\bm{x}}$ is fixed and equal to $N=16$.} \label{fig:motivation} \end{figure} Given such a matrix ${\bm{A}}$, we generate a vector $\bar{{\bm{b}}}={\bm{A}}{\bm{x}}$ by drawing an ${\bm{x}}$ with $k$ randomly chosen non-zero entries that are equal to~$1$, and then forming its noisy version ${\bm{b}} = \bar{{\bm{b}}} + {\bm{v}}$, where the noise component ${\bm{v}}$ is sampled from a Normal distribution with mean zero and standard deviation (STD) $\sigma$. Now, given ${\bm{A}}$ and the noisy ${\bm{b}}$, we attempt to reconstruct ${\bm{x}}$ via lasso, OMP, and an exhaustive algorithm for minimizing~\eqref{eq:k-sparse-problem}, detailed in Algorithm~\ref{alg:exact}. Due to the high computational complexity of Algorithm~\ref{alg:exact}, we could run this experiment only for a matrix ${\bm{A}}$ with a small number of columns, which we set to $N=16$. The hyper-parameter of each method (the $L_1$ regularization strength in lasso, and the cardinality level in OMP and in Algorithm~\ref{alg:exact}) is tuned by an ``oracle'' that has access to the true ${\bm{x}}$, optimizing each of the following two error metrics. The first metric evaluates the reconstruction error, defined as $\|{\bm{x}}-\hat{{\bm{x}}}\|_2/\|{\bm{x}}\|_2$. The second measures the support recovery error, defined as $\|\mathcal{S} - \hat{\mathcal{S}}\|_0$, where $\mathcal{S} \in \mathbb{B}^N$ is a binary vector that contains the value $\mathcal{S}_i = 1$ in the $i$th index if ${\bm{x}}_i \neq 0$, or the value $\mathcal{S}_i = 0$ otherwise. The vector $\hat{\mathcal{S}} \in \mathbb{B}^N$ is defined similarly, but with respect to the estimated vector $\hat{{\bm{x}}}$. In plain words, the support recovery error is the number of non-zero elements in the unknown ${\bm{x}}$ and estimated $\hat{{\bm{x}}}$ that do not coincide in their locations.
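The two error metrics can be expressed compactly in code; the small helpers below follow the definitions above verbatim.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Relative L2 error ||x - x_hat||_2 / ||x||_2."""
    return float(np.linalg.norm(x - x_hat) / np.linalg.norm(x))

def support_error(x, x_hat):
    """Number of positions where the supports of x and x_hat disagree."""
    return int(np.sum((x != 0) != (x_hat != 0)))
```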
Figure~\ref{fig:motivation} compares the performance of each method as a function of the sample size~$M$ (left), noise level~$\sigma$ (middle), and cardinality~$k$ (right). From the figure, we see that the exhaustive method for minimizing \eqref{eq:k-sparse-problem} outperforms lasso and OMP by a large margin, especially (i)~in small sample size regimes, (ii)~for small noise levels, and (iii)~for higher cardinality values. This toy experiment, which explicitly shows how powerful the original $L_0$ regularization is compared to its surrogates, elucidates our strong desire to leverage quantum computing technologies to better approximate the NP-hard sparse coding problem, and further scale up the dimension of ${\bm{A}}$ to have a much larger number of columns. \begin{algorithm} \caption{An exhaustive solver for \eqref{eq:k-sparse-problem}.}\label{alg:exact} \begin{algorithmic} \Require A matrix ${\bm{A}}$ of size $M \times N$, a noisy vector ${\bm{b}}$ of length $M$, and a sparsity level $k$. \Ensure A $k$-sparse vector $\hat{{\bm{x}}}$ minimizing \eqref{eq:k-sparse-problem}. \State $S \gets $ a set containing all the possible binary support vectors with $k$ non-zero elements \Comment{$N \choose k$ vectors} \State $\hat{{\bm{x}}} \gets \bm{0} $, where $ \bm{0}$ is an $N$-dimensional vector of zeros. \State $R \gets \|{\bm{b}}\|_2^2$. \Comment{Squared error of the zero-vector solution} \For{all ${\bm{s}}$ in $S$} \State ${\bm{A}}_{{\bm{s}}} \gets$ a matrix of size $M \times k$ containing $k$ columns from ${\bm{A}}$, specified by the support vector ${\bm{s}}$. \State $\hat{{\bm{z}}}_{{\bm{s}}} \gets ({\bm{A}}_{{\bm{s}}}^{\top}{\bm{A}}_{{\bm{s}}})^{-1}{\bm{A}}_{{\bm{s}}}^{\top} {\bm{b}} $. \Comment{Solution for the least squares problem: $ \min_{{\bm{z}}} \| {\bm{A}}_{{\bm{s}}} {\bm{z}} - {\bm{b}}\|_2^2$} \State ${r} \gets \| {\bm{A}}_{{\bm{s}}} \hat{{\bm{z}}}_{{\bm{s}}} - {\bm{b}}\|_2^2$.
\If{${r} < R$} \Comment{The current solution is better than the best we have found thus far} \State $R \gets r$. \State $\hat{{\bm{x}}} \gets \hat{{\bm{z}}}_{{\bm{s}}}$, expanded to an $N$-dimensional vector by setting the off-support elements that are not specified by ${\bm{s}}$ to zero. \EndIf \EndFor \end{algorithmic} \end{algorithm} \section{Quantum solvers for sparse coding problems}\label{sec:quantum-coding} The problem of solving a system of linear equations with minimal squared error can be reformulated as minimizing a quadratic function of binary variables, i.e., as a QUBO problem. The derivation presented in this section decomposes the objective function~\eqref{eq:lambda-problem} into the squared error term $\|{\bm{A}}{\bm{x}}-{\bm{b}}\|_2^2$ and the sparsity penalty term $\|{\bm{x}}\|_0$, forming the QUBO matrix for each component, denoted by ${\bm{W}}^{L_2}$ and ${\bm{W}}^{L_0}$, respectively. At a high level, the QUBO problem that we formalize takes the following form: \begin{equation} \label{eq:qubo-total} \min_{{\bm{q}} \in \mathbb{B}^{D}} {{\bm{q}}}^{\top} ({{\bm{W}}}^{L_2} + \lambda {{\bm{W}}}^{L_0}) {\bm{q}}, \end{equation} where ${\bm{q}}$ is a binary vector of spins. These spins express the unknown elements in the vector ${\bm{x}}$, using fixed-point arithmetic as follows: \begin{equation} \label{eq:fp_rep} x_i = c_i^{\text{min}} + d_i \sum_{p=1}^P q_{i,p}2^{p-1}, \quad P \geq 1. \end{equation} Above, $c_i^{\text{min}}, d_i$ and $P$ are pre-specified constants: $c_i^{\text{min}}$ is the minimal value that can be expressed, $d_i$ is a scaling factor, and $P$ is the number of bits allocated; the variables $$q_{i,p} \in \mathbb{B}, \ \ 1 \leq i \leq N, \ 1 \leq p \leq P $$ are the spins, stored in the $NP$-dimensional vector \begin{equation} \label{eq:q} {\bm{q}} = [q_{1,1}, \dots, q_{1,P}, q_{2,1}, \dots, q_{2,P}, \dots, q_{N,1}, \dots, q_{N,P}]^{\top}. \end{equation} The above is the most general representation for ${\bm{x}}$.
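In code, the decoding map \eqref{eq:fp_rep} from the $P$ spins of one entry to $x_i$ reads as follows (a small sketch; the constants $c^{\text{min}}=-1$, $d=1$, $P=2$ are one illustrative choice).

```python
def decode_fixed_point(q_i, c_min, d):
    """x_i = c_min + d * sum_{p=1}^{P} q_{i,p} 2^{p-1}, with q_i a tuple of P bits."""
    # enumerate gives p = 0, ..., P-1, so 2**p matches 2^{p-1} for 1-indexed p.
    return c_min + d * sum(bit * 2 ** p for p, bit in enumerate(q_i))

# With c_min = -1, d = 1, P = 2 the reachable values are {-1, 0, 1, 2}.
vals = sorted(decode_fixed_point((b0, b1), -1, 1)
              for b0 in (0, 1) for b1 in (0, 1))
assert vals == [-1, 0, 1, 2]
```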
For example, consider the case where $c_i^{\text{min}}=-1$, $d_i=1$, and $P=2$. Here, $x_i$ can take one of the values in the set $\{-1, 0, 1, 2\}$, corresponding to the four possible combinations of $q_{i,p} \in \mathbb{B}, \ 1 \leq p \leq 2$. Importantly, throughout this paper, we require that the value $0$ is always contained in the set of values that $x_i$ can take; otherwise this feature would always be considered a non-zero. We note that the analysis of the most general fixed-point representation for ${\bm{x}}$ with $P\geq3$ requires us to expand ${\bm{q}}$ by adding one auxiliary spin per entry $x_i$ and revising \eqref{eq:qubo-total} accordingly; we discuss this in detail in Section~\ref{sec:qubo_l0}. \subsection{QUBO matrix for the squared error term} \label{sec:qubo_residual} In this section, we focus only on the minimization of the squared error term $\|{\bm{A}}{\bm{x}}-{\bm{b}}\|_2^2$ with respect to ${\bm{x}}$, and show how to formulate it as a QUBO problem that minimizes the same objective, but now with respect to the spins ${\bm{q}}$. To do so, we first expand the $L_2$ norm and rewrite it as follows: \begin{equation} \label{eq:res_x_space} H_{L_2} := \|{\bm{A}}{\bm{x}}-{\bm{b}}\|_2^2 = \sum_{i=1}^{N}\sum_{j=1}^{N}x_ix_j W_{i,j}^{\text{base},1} + \sum_{i=1}^N x_i W_{i,i}^{\text{base},2} + h^{\text{base}}, \end{equation} where ${\bm{W}}^{\text{base},1}$ is a matrix of size $N \times N$ with entries \begin{align} & W_{i,j}^{\text{base},1} := \sum_{m=1}^M A_{m,i}A_{m,j}, \ \ 1 \leq i,j \leq N; \end{align} the diagonal matrix ${\bm{W}}^{\text{base},2}$ is also of size $N \times N$ with diagonal elements \begin{align} W_{i,i}^{\text{base},2} := -2 \sum_{m=1}^{M} A_{m,i}b_m, \ \ 1 \leq i \leq N; \end{align} and $h^{\text{base}}$ is a constant scalar, defined as \begin{align} h^{\text{base}} := \sum_{m=1}^M b_m^2.
\end{align} Next, we plug into \eqref{eq:res_x_space} the explicit fixed-point expression for $x_i$ defined in~\eqref{eq:fp_rep}, expressing $H_{L_2}$ using the binary spins $q_{i,p} \in \mathbb{B}$ in lieu of $x_i$. After applying basic algebraic manipulations, we get the following expression: \begin{equation} \label{eq:res_fp_rep} H_{L_2} = \sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{s=1}^{P}\sum_{p=1}^{P} q_{i,s}q_{j,p} W_{s + P(i-1),p + P(j-1)}^{L_2,1} + \sum_{i = 1}^N \sum_{p = 1}^P q_{i,p} W_{p + P(i-1),p + P(i-1)}^{L_2,2} + h^{L_2}. \end{equation} Above, ${\bm{W}}^{L_2,1}$ is a matrix of size $NP \times NP$ whose entries are given by \begin{align} W_{s + P(i-1),p + P(j-1)}^{L_2,1} := 2^{s+p-2}W_{i,j}^{\text{base},1}d_id_j, \ \ 1 \leq i,j \leq N, \ \ 1 \leq s,p \leq P. \end{align} In addition, ${\bm{W}}^{L_2,2}$ is an $NP \times NP$ diagonal matrix whose diagonal elements are expressed as \begin{align} W_{p + P(i-1),p + P(i-1)}^{L_2,2} := 2^{p-1} d_i \left( W_{i,i}^{\text{base},2} + 2 \sum_{j=1}^{N} c_j^{\text{min}} W_{i,j}^{\text{base},1} \right), \ 1 \leq i \leq N, \ \ \ \ 1 \leq p \leq P, \end{align} and the constant $h^{L_2}$ is formulated as \begin{align} h^{L_2} := \sum_{i=1}^N\sum_{j=1}^N W_{i,j}^{\text{base},1}c_i^{\text{min}}c_j^{\text{min}} + \sum_{i=1}^N W_{i,i}^{\text{base},2} c_i^{\text{min}} + h^{\text{base}}. \end{align} Now, we make two observations. First, since $q_{i,p}$ is binary we have $q_{i,p} q_{i,p} = q_{i,p}^2 = q_{i,p} $, and therefore we can add the diagonal elements of ${\bm{W}}^{L_2,1}_{ip,ip}$ to those of ${\bm{W}}^{L_2,2}_{ip,ip}$. Second, the minimization of $H_{L_2}$ with respect to ${\bm{q}}$ is not affected by $h^{L_2}$ since the latter is a constant. 
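Both observations, and the derivation of \eqref{eq:res_fp_rep} itself, can be verified numerically. The sketch below (with small, arbitrary sizes) assembles ${\bm{W}}^{L_2,1}$, ${\bm{W}}^{L_2,2}$, and $h^{L_2}$ exactly as defined above, and checks that the QUBO energy of a random spin assignment matches the squared error of the decoded ${\bm{x}}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P = 5, 3, 2                            # small illustrative sizes
A = rng.standard_normal((M, N))
b = rng.standard_normal(M)
c_min = -np.ones(N)                          # per-entry offsets c_i^min
d = np.ones(N)                               # per-entry scales d_i

G = A.T @ A                                  # W^{base,1}
w2 = -2.0 * A.T @ b                          # diagonal of W^{base,2}
h_base = b @ b

D = N * P
W1 = np.zeros((D, D))
W2 = np.zeros((D, D))
for i in range(N):
    for j in range(N):
        for s in range(P):
            for p in range(P):               # 2^{s+p-2} with 1-indexed s, p
                W1[s + P * i, p + P * j] = 2.0 ** (s + p) * G[i, j] * d[i] * d[j]
for i in range(N):
    for p in range(P):                       # 2^{p-1} with 1-indexed p
        W2[p + P * i, p + P * i] = 2.0 ** p * d[i] * (w2[i] + 2.0 * G[i] @ c_min)
h_L2 = c_min @ G @ c_min + w2 @ c_min + h_base

# QUBO energy of a random spin assignment vs. squared error of the decoded x.
q = rng.integers(0, 2, size=D).astype(float)
x = c_min + d * np.array([q[P * i:P * i + P] @ 2.0 ** np.arange(P) for i in range(N)])
assert np.isclose(q @ (W1 + W2) @ q + h_L2, np.linalg.norm(A @ x - b) ** 2)
```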
These two observations complete our derivation of the QUBO matrix for the squared error term in~\eqref{eq:lambda-problem}, concluding that the matrix ${\bm{W}}^{L_2}$ in \eqref{eq:qubo-total} is given by \begin{equation} \label{eq:W_L2} {\bm{W}}^{L_2} = {\bm{W}}^{L_2,1} + {\bm{W}}^{L_2,2}. \end{equation} \subsection{QUBO matrix for the cardinality term} \label{sec:qubo_l0} We now develop an explicit expression for the QUBO matrix that corresponds to the sparsity penalty. The idea is to compute the cardinality of the estimated ${\bm{x}}$ under the fixed-point representation, implemented via the binary vector ${\bm{q}}$. Since we allow $x_i$ to have negative values, we use the following strategy to facilitate the representation of the zero elements in ${\bm{x}}$. Denote the combination of the binary elements that leads to $x_i=0$ by $c_{i,p}^0$, satisfying \begin{equation} x_i = 0 = c_i^{\text{min}} + d_i \sum_{p=1}^P c_{i,p}^02^{p-1}. \end{equation} Recall our example where we set $c_i^{\text{min}}=-1$, $d_i=1$, and $P=2$, in which $x_i\in\{-1,0,1,2\}$. In this case, the value $x_i=0$ is obtained for $c^0_{i,1}=1$ and $c^0_{i,2}=0$. Importantly, the binary elements $c^0_{i,p}$ are known constants, as these are derived from the pre-determined constants $c_i^{\text{min}}$, $d_i$, and $P$. Given the constants $c^0_{i,p}$, we define \begin{equation} \label{eq:transformed_spins} y_{i,p} = \begin{cases} 1 - q_{i,p}, &c_{i,p}^0 = 0, \\ q_{i,p}, &c_{i,p}^0 = 1, \end{cases} \end{equation} where the binary variables $y_{i,p}\in\mathbb{B}$ can be thought of as ``transformed spins''. Importantly, under the above formulation, we have \begin{align} y_{i,p}=1 \ \ \forall 1\leq p \leq P \ \ \Longleftrightarrow \ \ c^0_{i,p}=q_{i,p} \ \ \forall 1\leq p \leq P \ \ \Longleftrightarrow \ \ x_i = 0, \end{align} where $\Longleftrightarrow$ stands for ``if and only if''.
In words, the above implies that $x_i$ is equal to zero if and only if all the transformed spins $y_{i,p}$, $1 \leq p \leq P$ are `active' and equal to $1$. Compactly, this can be expressed as \begin{equation} \label{eq:zi} z_i = y_{i,1}y_{i,2} \dots y_{i,P}, \end{equation} where $z_i = 1$ if and only if $x_i=0$. Now, we can compute how many non-zeros we have in ${\bm{x}}$ by \begin{equation} \label{eq:L0_H} H_{L_0} := \|{\bm{x}}\|_0 = \sum_{i=1}^N(1-z_i). \end{equation} Therefore, the formulation of the transformed spins $y_{i,p}$ allows us to express the sparsity penalty in QUBO terms, for the most general fixed-point representation. Unfortunately, in this general case, the variables $z_i$ are not quadratic in the spins $q_{i,p}$, and therefore \eqref{eq:L0_H} cannot be used in its present form to construct the matrix ${\bm{W}}^{L_0}$ in \eqref{eq:qubo-total}. Yet, we will show how to overcome this challenge using auxiliary ancilla spins. Before doing so, however, we pause to discuss two important special cases. The first is a case where ${\bm{x}}$ is binary, with $P=1$, $d_i=1$, and $c_i^{\text{min}}=0$ in~\eqref{eq:fp_rep}, and the second extends the former to the 2-bit case, with $P=2$. In contrast to the $P\geq3$ case, neither the binary nor the 2-bit representation requires the use of ancilla spins. \subsubsection*{Binary representation: $\mathbold{P}$=1} In this simple setting, $x_i = q_i$ by construction, as $q_i$ is the binary spin that corresponds to the $i$th entry in the binary ${\bm{x}}$. Therefore, the number of non-zeros in the estimated ${\bm{x}}$ is nothing but the sum over all the original spins, which can be written as follows: \begin{equation} H_{L_0}^{\text{binary}} = \|{\bm{x}}\|_0= \sum_{i=1}^N q_i = \sum_{i=1}^N q_iq_i = {\bm{q}}^{\top}{\bm{W}}^{L_0}{\bm{q}}. \end{equation} Above, the third equality holds since $q_i\in\mathbb{B}$, and the fourth equality holds for the identity matrix ${\bm{W}}^{L_0} = {\bm{I}}$ of size $N \times N$.
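Before turning to the 2-bit case, the zero-detection mechanism of \eqref{eq:transformed_spins} and \eqref{eq:zi} is easy to sanity-check in code for the running example ($c_i^{\text{min}}=-1$, $d_i=1$, $P=2$, whose zero pattern is $c^0_{i,1}=1$, $c^0_{i,2}=0$).

```python
import itertools

def all_transformed_spins_active(q, c0):
    """z_i = prod_p y_{i,p}: equals 1 exactly when q matches the zero pattern c0."""
    y = [qp if cp == 1 else 1 - qp for qp, cp in zip(q, c0)]
    return all(v == 1 for v in y)

c0 = (1, 0)                                  # bits encoding x_i = 0 in the example
for q in itertools.product((0, 1), repeat=2):
    x_i = -1 + 1 * q[0] + 2 * q[1]           # decode x_i via the fixed-point map
    assert all_transformed_spins_active(q, c0) == (x_i == 0)
```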
\subsubsection*{2-bit representation: $\mathbold{P}$=2} In contrast to the binary case, in this setting the cardinality of ${\bm{x}}$ is no longer equal to the sum of the original spins $q_{i,p}$, $1\leq i \leq N$, $1 \leq p \leq 2$. Yet, following~\eqref{eq:L0_H}, we can express the sparsity penalty using the two transformed spins $y_{i,1}$ and $y_{i,2}$, where this transformation is pre-defined via the constants $c^0_{i,p}\in\mathbb{B}$, $1 \leq p \leq 2$. Below, we show that for the special case of $P=2$, the expression in~\eqref{eq:L0_H} is a quadratic function of the original spins $q_{i,p}$, which is crucial to yield the QUBO matrix ${\bm{W}}^{L_0}$. The idea is to map the transformed spins $y_{i,p}$ back to the original space of $q_{i,p}$, for all $i,p$, and then rewrite the resulting expression in a form of ${\bm{q}}^\top {\bm{W}}^{L_0} {\bm{q}}$. Concretely, consider our running example, in which $c_i^{\text{min}}=-1$, $d_i=1$, and $P=2$. Here, $x_i$ can take negative values; however, we know that $x_i=0$ for the constants $c^0_{i,1}=1$ and $c^0_{i,2}=0$. Now, imagine we start with a $2N\times 2N$ matrix ${\bm{W}}^{L_0}$ of zeros, and we wish to fill in the entries in ${\bm{W}}^{L_0}$ that correspond to $x_i$. According to~\eqref{eq:transformed_spins}, since $c_{i,1}^0=1, \ c_{i,2}^0=0$, we have \begin{equation} y_{i,1}=q_{i,1} \ \text{and} \ y_{i,2}=1-q_{i,2}, \end{equation} leading to $1-z_i$ that is quadratic in the spins $q_{i,p}$, since \begin{equation} \label{eq:2bits_example} 1-z_i = 1-y_{i,1}y_{i,2} = 1-q_{i,1}(1-q_{i,2}) = 1 - q_{i,1}q_{i,1} + q_{i,1}q_{i,2}. \end{equation} The above implies that we should set the corresponding entries in the QUBO matrix as follows: \begin{equation} \label{eq:W_L0_binary} W^{L_0}_{1+2(i-1),1+2(i-1)} = -1; \ W^{L_0}_{1+2(i-1),2+2(i-1)} = \frac{1}{2}; \ \text{and} \ W^{L_0}_{2+2(i-1),1+2(i-1)} = \frac{1}{2}.
\end{equation} Note that we ignore the leading constant `1' in the right-hand-side of~\eqref{eq:2bits_example} since constants do not affect the solution of the underlying optimization problem. The remaining elements in the matrix can be determined by repeating a similar set of steps for all $1 \leq i \leq N$, however with possibly different values of $c^0_{i,1}, c^0_{i,2}$. Notice that there are four cases for $c^0_{i,p}\in\mathbb{B}$, $1 \leq p \leq 2$ that should be considered, while above we studied only one of these, for which $c^0_{i,1}=1$ and $c^0_{i,2}=0$. For completeness, we present below the three remaining cases. For $c_{i,1}^0=0, \ c_{i,2}^0=0$, we follow \eqref{eq:transformed_spins} and get $y_{i,1}=1-q_{i,1} \ \text{and} \ y_{i,2}=1-q_{i,2}$, leading to $1-z_i = q_{i,1} + q_{i,2} - q_{i,1}q_{i,2}$. Analogously, when $c_{i,1}^0=0, \ c_{i,2}^0=1$ we have $y_{i,1}=1-q_{i,1} \ \text{and} \ y_{i,2}=q_{i,2}$, leading to $1-z_i = 1-q_{i,2} + q_{i,1}q_{i,2}$. The last case deals with $c_{i,1}^0=1, \ c_{i,2}^0=1$; here, $y_{i,1}=q_{i,1} \ \text{and} \ y_{i,2}=q_{i,2}$, and thus $1-z_i = 1- q_{i,1}q_{i,2}$. It is straightforward to express ${\bm{W}}^{L_0}$ explicitly for each case as in~\eqref{eq:W_L0_binary}; we omit this in the interest of space. \subsubsection*{The general case: $\mathbold{P\geq}$ 3} This is the most difficult case to address, as a naive extension of \eqref{eq:zi} to the $P$-bit case would result in $H_{L_0}$ in \eqref{eq:L0_H} that has high-order interactions between the original spins, breaking the bilinear structure of the QUBO problem. As a way out, we introduce ancilla spins, which allow us to express $H_{L_0}$ for the most general representation of ${\bm{x}}$ in a quadratic form that perfectly fits the QUBO structure.
We take inspiration from \cite{freedman2005energy,ishikawa2010transformation} and offer a solution that is extremely efficient with respect to the number of ancilla spins: we add only one auxiliary spin per feature, which we view as the minimal number of spins that one can hope for in such a general case. Turning to the details, denote the ancilla spin for $x_i$ by $s_{i}\in\mathbb{B}$ and define the function \begin{equation} \label{eq:F} F(y_{i,1}, y_{i,2}, \dots, y_{i,P}, s_i) := s_i \cdot \left(y_{i,1} + y_{i,2} + \dots + y_{i,P} - (P - 1)\right), \end{equation} which takes as input all the transformed spins $y_{i,p}$ as well as the ancilla spin $s_i$ of $x_i$. Now, we invoke a beautiful result presented in \cite{freedman2005energy} and \cite[Section 4.2]{ishikawa2010transformation}, stating that \begin{equation} -z_i = -y_{i,1}y_{i,2} \dots y_{i,P} = \min_{s_i} -F(y_{i,1}, y_{i,2}, \dots, y_{i,P}, s_i). \end{equation} In plain words, the minimal value of the function $-F(y_{i,1}, y_{i,2}, \dots, y_{i,P}, s_i)$, optimized with respect to the ancilla spin $s_i$, is in fact equal to $-z_i$. Since the function $F$ in \eqref{eq:F} is a sum of $s_iy_{i,p}$, $1 \leq p \leq P$, we can harness it to form a cost function that is equivalent to $H_{L_0}$ in \eqref{eq:L0_H}, but involves only bilinear spin terms. Specifically, we define our cardinality Hamiltonian as \begin{align} \label{eq:general_HL0_F} H_{L_0} = \|{\bm{x}}\|_0 = \sum_{i=1}^N(1-z_i) &= \sum_{i=1}^N(1-F(y_{i,1}, y_{i,2}, \dots, y_{i,P}, s_i)) \\ &= \sum_{i=1}^N\left(1-s_iy_{i,1}-s_iy_{i,2}-\dots-s_iy_{i,P} + s_i\left(P-1\right)\right), \end{align} where the equalities hold once each ancilla spin $s_i$ is minimized jointly with the original spins. Armed with the above cost function, we are able to form the structure and content of the matrix ${\bm{W}}^{L_0}$. One technical challenge is that $y_{i,p}$ are the transformed versions of the original spins $q_{i,p}$, and so we must substitute the spins $q_{i,p}$ directly into \eqref{eq:general_HL0_F}.
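The identity from \cite{freedman2005energy,ishikawa2010transformation} can be verified exhaustively for small $P$; the short check below enumerates all binary inputs.

```python
import itertools

def F(y, s):
    """F(y_1, ..., y_P, s) = s * (y_1 + ... + y_P - (P - 1)), as in the text."""
    return s * (sum(y) - (len(y) - 1))

# Minimizing -F over the single ancilla spin s reproduces -y_1 y_2 ... y_P.
P = 4
for y in itertools.product((0, 1), repeat=P):
    product = 1
    for y_p in y:
        product *= y_p
    assert min(-F(y, s) for s in (0, 1)) == -product
```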
This step is similar in spirit to the 2-bit case that we have already discussed in depth earlier. By doing so, we obtain an expression that is quadratic with respect to the $NP$ original spins $q_{i,p}$, as well as in the additional $N$ ancilla spins $s_i$. Concretely, we define the vector $\tilde{{\bm{q}}}=[{\bm{q}} \ ; \ {\bm{s}}]^{\top}$ that contains a total of $N(P + 1)$ spins; the first $NP$ elements are the original spins ${\bm{q}}$, and the rest are the $N$ ancilla spins $$ {\bm{s}} = [s_1, s_2, \dots, s_{N}].$$ Consequently, the matrix ${\bm{W}}^{L_0}$ is of size $N(P + 1) \times N(P + 1)$, where we use the convention that the first $NP$ rows (resp. columns) correspond to the spins $q_{i,p}$ and the remaining $N$ rows (resp. columns) correspond to the ancilla spins $s_{i}$. Here, the QUBO problem defined with the vector of spins $\tilde{{\bm{q}}}$ is given by $$ \min_{\tilde{{\bm{q}}} \in \mathbb{B}^d} \tilde{{\bm{q}}}^{\top} (\widetilde{{\bm{W}}}^{L_2} + \lambda {\bm{W}}^{L_0}) \tilde{{\bm{q}}},$$ where $d=N(P+1)$ and $\widetilde{{\bm{W}}}^{L_2}$ is a matrix of size $N(P+1) \times N(P+1)$, obtained by padding the matrix ${\bm{W}}^{L_2}$ in~\eqref{eq:W_L2} with zeros in all the entries that correspond to the ancilla spins. To better understand how to construct ${\bm{W}}^{L_0}$, it may be best to consider an example with $P=4$. Observe that each of the four transformed spins $y_{i,1},y_{i,2},y_{i,3},y_{i,4}$ can take one of two possible expressions, determined by the constants $c_{i,1}^0, c_{i,2}^0, c_{i,3}^0, c_{i,4}^0$, respectively. This creates 16 different explicit forms for the term $F(y_{i,1},y_{i,2}, y_{i,3}, y_{i,4}, s_{i})$ as a function of $c_{i,1}^0,c_{i,2}^0,c_{i,3}^0,c_{i,4}^0$. For instance, following \eqref{eq:transformed_spins}, the choice of $c_{i,1}^0=1, c_{i,2}^0=0, c_{i,3}^0=0, c_{i,4}^0=1$ implies that $y_{i,1}=q_{i,1}, \ y_{i,2}=1-q_{i,2}, \ y_{i,3}=1-q_{i,3}, \ y_{i,4}=q_{i,4}$.
Plugging the latter into the explicit expression of $F$ results in \begin{align} 1-z_i = 1-F(y_{i,1},y_{i,2},y_{i,3},y_{i,4}, s_{i}) &= 1-F(q_{i,1},1-q_{i,2}, 1-q_{i,3}, q_{i,4}, s_{i} ) \\ &= 1-s_i (q_{i,1} + (1-q_{i,2}) + (1-q_{i,3}) + q_{i,4} - (4 - 1)) \\ &=1 - s_i (q_{i,1} - q_{i,2} -q_{i,3} + q_{i,4} - 1). \label{eq:F_1_0_0_1} \end{align} Observe that the minimal value of \eqref{eq:F_1_0_0_1} is 0, obtained for the combination $q_{i,1}=c_{i,1}^0=1, q_{i,2}=c_{i,2}^0=0, q_{i,3}=c_{i,3}^0=0$, $q_{i,4}=c_{i,4}^0=1$, and $s_i=1$. Now, since the leading constant `1' in \eqref{eq:F_1_0_0_1} does not affect the combination of spins that minimizes this expression, we can set the entries in ${\bm{W}}^{L_0}$ that correspond to \eqref{eq:F_1_0_0_1} as follows: $$W^{L_0}_{1+4(i-1),4N+i} = -1, \ W^{L_0}_{2+4(i-1),4N+i} = 1, \ W^{L_0}_{3+4(i-1),4N+i} = 1, \ W^{L_0}_{4+4(i-1),4N+i} = -1, \ \text{and} \ W^{L_0}_{4N+i,4N+i} = +1.$$ A similar analysis can be done for the remaining 15 cases, which we do not provide here in the interest of space. In closing, this section introduced a novel decomposition of the sparse coding problem~\eqref{eq:lambda-problem} as a QUBO model. To formulate an efficient solution with respect to the number of spins, our analysis is separated into three cases: binary with $P=1$, 2-bit with $P=2$, and a general one with $P\geq3$, where the total number of spins varies with $P$. For $P=1$ we have $N$ spins, for $P=2$ we have $2N$ spins, and for the general $P\geq3$ case we have $N(P+1)$ spins in total.
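To make the bookkeeping over all $2^P$ sign patterns concrete, the following sketch (our own illustration; the function names and the sparse dictionary representation of the matrix are not from the paper) assembles the nonzero entries of ${\bm{W}}^{L_0}$ for arbitrary constants $c^0_{i,p}$, and verifies for the $c^0=(1,0,0,1)$ case above that minimizing over the ancilla spin recovers $1-z_i$:

```python
from itertools import product

def build_W_L0(C):
    """Nonzero entries of W^{L0} as a dict {(row, col): value}.

    C is an N-list of P-lists holding the binary constants c^0_{i,p}; spins
    are ordered [q_{1,1}, ..., q_{N,P}, s_1, ..., s_N], and the additive
    constant N in H_{L0} is dropped (it does not affect the minimizer)."""
    N, P = len(C), len(C[0])
    W = {}
    for i in range(N):
        s = N * P + i                          # index of the ancilla spin s_i
        for p in range(P):
            W[(P * i + p, s)] = -1 if C[i][p] == 1 else +1
        W[(s, s)] = (P - 1) - C[i].count(0)    # diagonal coefficient of s_i
    return W

def energy(W, spins):
    return sum(v * spins[a] * spins[b] for (a, b), v in W.items())

# Check the c^0 = (1,0,0,1) case from the text: N = 1, P = 4.
W = build_W_L0([[1, 0, 0, 1]])
for q in product((0, 1), repeat=4):
    z = q[0] * (1 - q[1]) * (1 - q[2]) * q[3]
    assert 1 + min(energy(W, q + (s,)) for s in (0, 1)) == 1 - z
```

Each zero constant flips the sign of the corresponding off-diagonal entry and subtracts one from the diagonal coefficient of $s_i$, which reproduces the $+1$ diagonal entry obtained in the worked example.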
\section{Experiments}\label{sec:res} \begin{figure} \centering \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./binary_rec_err_vs_M__k=30_noise_std_01.png} \includegraphics[width=\textwidth]{./binary_supp_err_vs_M__k=30_noise_std_01.png} \caption{Cardinality $k=30$, noise $\sigma=0.1$.} \label{fig:bin_rec_vs_m} \end{subfigure} \hfill \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./binary_rec_err_vs_noise__k=30_M_80.png} \includegraphics[width=\textwidth]{./binary_supp_err_vs_noise__k=30_M_80.png} \caption{Cardinality $k=30$, rows $M=80$.} \label{fig:bin_rec_vs_noise} \end{subfigure} \hfill \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./binary_rec_err_vs_cardinality__noise_std_01_M_80.png} \includegraphics[width=\textwidth]{./binary_supp_err_vs_cardinality__noise_std_01_M_80.png} \caption{Rows $M=80$, noise $\sigma=0.1$.} \label{fig:bin_rec_vs_k} \end{subfigure} \caption{Results obtained by LightSolver's digital simulator, minimizing the proposed QUBO problem for a binary ${\bm{x}}$ of length $N=160$. Each point in the graphs is an average over 20 independent realizations of ${\bm{A}},{\bm{x}}$, and ${\bm{v}}$. Other details are as in Figure~\ref{fig:motivation}.} \label{fig:binary} \end{figure} \paragraph{Binary sparse vectors.} We return to the experiment from Section~\ref{sec:motivation}, but now significantly increase the dimensions of the problem. We scale up the dimensions of ${\bm{A}}$ and ${\bm{x}}$ by a factor of 10, focusing on a sparse \emph{binary} vector ${\bm{x}}$ ($P=1$) of dimension $N=160$. The matrix ${\bm{A}}$ is designed to have low mutual coherence according to~\eqref{eq:coherence}, generated as described in~Appendix~\ref{app:genA}. Since the vector ${\bm{x}}$ is binary, we use the QUBO formulation with $P=1$, where the total number of spins is equal to~$N=160$.
Armed with ${\bm{W}}^{L_2}$ and ${\bm{W}}^{L_0}$, we minimize~\eqref{eq:qubo-total} using a quantum-inspired annealer implemented by LightSolver's digital simulator. We then compare the quality of our quantum-inspired estimation $\hat{{\bm{x}}}$ to those of lasso and OMP, which serve as baseline methods. Naturally, we cannot include Algorithm~\ref{alg:exact} as a baseline, since it is infeasible to run such an exhaustive optimization method with the dimensions of the problem that we study here. Similarly to Section~\ref{sec:motivation}, the hyper-parameter $\lambda$ of our method, as well as the hyper-parameters of lasso and OMP, are chosen by an ``oracle'' that has access to ${\bm{x}}$, seeking the parameter that achieves the smallest possible error metric under study, for each method. Figure~\ref{fig:binary} presents the reconstruction and support recovery errors as a function of $M$ (Figure~\ref{fig:bin_rec_vs_m}), $\sigma$ (Figure~\ref{fig:bin_rec_vs_noise}), and $k$ (Figure~\ref{fig:bin_rec_vs_k}). Observe how the quantum-inspired estimations are highly accurate, portraying nearly perfect recovery in all cases. In particular, the quantum-based estimations are superior to those obtained by the widely-used lasso and OMP, all across the board. This dramatic improvement can be explained as follows. First, the annealer accurately minimizes the NP-hard QUBO problem of interest. Second, lasso and OMP do not leverage the prior knowledge that ${\bm{x}}$ is binary, in contrast to the annealer that utilizes this knowledge, encapsulated \textit{a priori} in the QUBO matrices. \paragraph{2-bit sparse vectors.} We now expand the above experiment by considering a more flexible representation of ${\bm{x}}$ with $P=2$. Recall that our QUBO formulation for the 2-bit setting requires $2N$ spins in total, in contrast to the binary case that requires only $N$ spins.
Therefore, we consider a smaller matrix ${\bm{A}}$ with $N=80$ columns, keeping the total number of spins identical across the two experiments. The 2-bit representation of ${\bm{x}}$ follows \eqref{eq:fp_rep}, with $c_i^{\text{min}}=0$ and $d_i=1$; consequently, each \emph{non-zero} entry is randomly sampled from the discrete set $\{1,2,3\}$. Figure~\ref{fig:2-bit} displays the reconstruction and support recovery errors as a function of the sample size $M$, while fixing the cardinality ($k=10$) and noise ($\sigma=0.1$) levels. As portrayed, the annealer achieves impressively low estimation errors, outperforming OMP and lasso, especially in the most challenging setting in which the sample size is relatively small. \begin{figure} \centering \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./fixed_point_rec_err_vs_M__k=10_noise_std_01.png} \label{fig:fp_rec_vs_m} \end{subfigure} \hspace{0.1cm} \begin{subfigure}[b]{0.329\textwidth} \centering \includegraphics[width=\textwidth]{./fixed_point_supp_err_vs_M__k=10_noise_std_01.png} \label{fig:fp_supp_vs_m} \end{subfigure} \caption{Results obtained by LightSolver's digital simulator, minimizing the proposed QUBO problem for a 2-bit ${\bm{x}}$ of length $N=80$. We fixed the cardinality to $k=10$ and set the noise level to $\sigma=0.1$. Error metrics are evaluated over $20$ independent realizations of ${\bm{A}},{\bm{x}}$ and ${\bm{v}}$. Other details are as in Figure~\ref{fig:motivation}.} \label{fig:2-bit} \end{figure} \section{Conclusions and future directions}\label{sec:conclusions} In this work, we formulate the ubiquitous sparse coding task as a quadratic unconstrained binary optimization (QUBO) problem, which, potentially, can be minimized efficiently using quantum computers and Ising machines. Numerical experiments demonstrate the superiority of our quantum-based estimations compared to widely-used, classic sparse approximation algorithms---lasso and OMP.
In particular, we report significant improvements in challenging regimes where (i) the sample size is relatively small, (ii) the noise level is relatively large, and (iii) the cardinality is relatively large. While our QUBO formulation can handle the most general fixed-point representation of the underlying sparse vector, we believe it would be of great interest to extend the proposed framework to a floating-point representation. Such a formulation may reduce the number of spins required to express the solution with sufficient precision, which is of paramount importance given the difficulty quantum/Ising platforms face in scaling up the number of spins. We also believe it would be exciting to expand our work beyond regression settings, and provide QUBO formulations for sparse-regularized classification algorithms. In a broader view, there is a growing interest in using quantum technologies in biological sciences~\cite{emani2021quantum}, and the methods presented in this paper may provide performance improvements in applications where sparse linear regression plays a role. As an example, in genome-wide association studies (GWAS) scientists are interested in accurately identifying which of the thousands of single-nucleotide polymorphisms (SNPs) are associated with high cholesterol, or any other phenotype of interest~\cite{usai2009lasso,waldmann2013evaluation}. The ability to find a sparse, interpretable set of `important' SNPs can be used to improve medical treatment as well as to expand our understanding of the human genome. In these applications, it is common for the number of observations (e.g., individuals participating in a study) to be smaller than the number of features (SNPs), as biotech development allows a massive collection of genetic variants for each individual~\cite{sesia2021false}.
The desire to form an interpretable predictive rule together with the high-dimensional nature of the data---the number of SNPs (columns of $A$) is larger than the observations (rows of $A$)---may explain the popularity of sparse linear regression methods in this field. Although conducted on simulated data, the experiments presented in this paper indicate that our quantum-based approach thrives in ultra high-dimensional regimes, and therefore we believe it would be exciting to explore the potential use of our quantum sparse coding method in this domain. \bibliographystyle{unsrt}
\section*{Introduction} Engineering single-photon sources with high efficiency, purity, and indistinguishability is a longstanding goal for applications such as linear optical quantum computation \cite{carolan2015}, boson sampling \cite{wang2019}, quantum networks \cite{yin2017} and quantum metrology \cite{slussarenko2017}. Atomic systems have shown significant progress towards quantum light-matter interfaces, including efficient quantum memories \cite{wang2019efficient}, quantum networks \cite{yu2020}, high-fidelity light-matter entanglement \cite{bock2018}, atomic gates \cite{ballance2016}, and quantum simulators \cite{gross2017}. Atomic platforms require spectrally matched single photons that can coherently couple with atomic processors, delivered with high generation efficiency, purity, and indistinguishability. Strongly interacting Rydberg atoms provide a particularly promising system. They have proven to be versatile for engineering strong interactions between photons, exhibiting nonlinearities at the single-photon level \cite{peyronel2012, maxwell2013, li2016, paris2017}. Recent experiments using Rydberg interactions have demonstrated on-demand single-photon generation \cite{dudin2012, ripka2018}, as well as photon transistors \cite{gorni2014, tiarks2014, gorni2016}, photonic and atomic phase gates \cite{tiarks2016, thompson2017,tiarks2018, maller2015, zeng2017, levine2018}, high-visibility quantum interference in hybrid systems \cite{us}, and quantum simulators \cite{schauss2012, zeiher2017,lienhard2018, kim2018}. We describe here an efficient single-photon source based on collective excitation and de-excitation of a cold, trapped ensemble of atoms through a highly excited Rydberg state \cite{saffman2002, dudin2012, ripka2018}.
During two-photon excitation from the ground to the Rydberg state via an intermediate state [see Fig.~\ref{fig:exp}(a)], long-range van der Waals interactions suppress multiple Rydberg excitations within a blockade radius, $r_b$ \cite{lukin2001}. The resulting single, collective atomic excitation is coherently shared among $N$ atoms as a spin wave \cite{saffman2002}. Due to the collective nature of the excitation, if the initial phase coherence of the spin wave is maintained, the subsequent coupling of the Rydberg state to the intermediate state can efficiently map the excitation onto a single photon in a well-defined mode~\cite{sangouard2011}. Our system produces single photons with repetition rates up to 400~kHz, a generation probability up to 0.40(4), $g^{(2)}=2.0(1.5)\times10^{-4}$, and indistinguishability of 0.982(7). We model the write and retrieval process, including the measured spin-wave dephasing rate. We identify long-lived contaminant Rydberg states \cite{Elizabeth2016} as a limiting factor on the source efficiency for increasing production rates. Given the requirements for most quantum information applications, the single-mode efficiency, rate, and quality of single-photon sources are of key importance since successful scaling of these systems involves detection of multiple identical photons. Thus, we introduce metrics to describe the probability, rate, and fidelity of producing a single photon in a single mode, which include the contributions from the commonly used metrics: overall collection efficiency, purity, indistinguishability, and repetition rate \cite{eisaman2011}. \section*{Experimental Apparatus and Procedure} \begin{figure}[t] \centering\includegraphics[width=8.4 cm]{Fig1.pdf} \caption{(a) Relevant atomic levels and set-up for single-photon generation.
During the spin-wave writing stage we set the single-photon detuning $\Delta_p\approx2\pi\times50$~MHz, and the two-photon detuning $\delta=\Delta_p+\Delta_c$ to Raman resonance, $\delta\approx-2\pi\times2$~MHz. For retrieval, $\Delta_c\approx2\pi\times7$~MHz. (b) Experimental set-up schematic. There is a polarization beamsplitter (PBS) to project the photons into a single polarization mode, followed by an acousto-optic modulator (AOM) that gates the incoming photons. All the light is directed to the polarization maintaining fiber (PMF) to realize a purity measurement. For the indistinguishability characterization, we split the light such that the rate is roughly the same at both ports of the second beamsplitter (BS). By rotating the half waveplate ($\lambda/2$) we can control the relative polarization of the photons coming from the PMF port and the long delay port. (c) Photon temporal envelope; gray dashed lines indicate the software gate window. (d) Timing sequence for the generation of successive single photons; the writing $\pi$-pulse lasts for $t_w\approx370$~ns. We use a minimum storage time $t_s\approx350$~ns to maximize the retrieval and vary $t_r$ to change the repetition rate, $R=1/t_p$. } \label{fig:exp} \end{figure} We start the experiment with a magneto-optical trap of $^{87}$Rb atoms and further laser cool the atoms with a $\Lambda$-gray molasses down to $\approx10$~$\mu$K. We load the atoms into a 1003-nm wavelength optical dipole trap. To write the spin wave, we couple the ground state $|g\rangle = |5S_{1/2}, F=2, m_F=2\rangle$ to the Rydberg state $|r\rangle =|139S_{1/2}, m_J=1/2\rangle$ via the intermediate state $|e\rangle= |5P_{3/2}, F=3, m_F=3\rangle$ with an intermediate detuning $\Delta_p\approx2\pi\times50$~MHz, as shown in Figure \ref{fig:exp}(a). The probe beam coupling $|g\rangle$ to $|e\rangle$ is focused into the atom cloud with a waist of $\approx3.3$~$\mu$m, with a Rabi frequency $\Omega_p \approx 2 \pi \times 1$~MHz.
The counter-propagating control beam coupling $|e\rangle$ to $|r\rangle$ has a larger, $\approx19$~$\mu$m waist and peak Rabi frequency $\Omega_c \approx 2 \pi \times 7$~MHz. The van der Waals coefficient of the Rydberg state $139S_{1/2}$ is $C_6 \approx -2 \pi \times 2.5\times10^6$~GHz~$\mu$m$^6$ \cite{ARC}, which results in a blockade radius $r_b \approx 60$~$\mu$m during the spin-wave writing. Since $r_b$ is larger than the probe beam waist and the atomic cloud extension in the propagation direction, $\sigma_z\approx27$~$\mu$m, the excitation volume is blockaded. The effective two-photon Rabi frequency, $\Omega_{\text{2ph}}= \frac{\Omega_p \Omega_c }{2\Delta_p}$, is enhanced by a factor $\sqrt{N}\approx20$ from the $N$ atoms participating in the collective excitation \cite{saffman2002, dudin2012coll}. After a spin-wave storage time $t_s > 350$~ns [see Fig.~\ref{fig:exp}(d)], we turn back on the control field with a detuning $\Delta_c \approx 2 \pi \times 7$~MHz that maximizes the retrieval efficiency of the spin wave into a single photon. We can vary the repetition rate of the write-retrieval sequence up to 400~kHz, with interrogation times up to 600~ms (0.6 duty cycle) before we need to reload the optical dipole trap. \section*{Single-photon source purity and indistinguishability} \begin{figure} \centering\includegraphics[width=7.5cm]{Fig2.pdf} \caption{Measured coincidences for purity characterization. (a) Normalized coincidences for $g^{(2)}(\tau)$ with 5 $\mu$s cycle. (b) Normalized coincidences for $g^{(2)}(\tau)$ around $\tau=0$; the grey line represents the background coincidences with 20-ns bins. The shape of this profile arises from the convolution of the photon pulse shape with a constant background within the gate window, and the pedestal asymmetry is because the background rate is not the same for each channel.
All data shown were taken with $60\%$ duty cycle.} \label{fig:g2} \end{figure} We use Hanbury Brown-Twiss and Hong-Ou-Mandel interferometers to characterize the purity and indistinguishability of our single photons [see Fig.\ \ref{fig:exp}(b)]. We define the purity of our single-photon source as $1-g^{(2)}(0)$, where $g^{(2)}(\tau)$ is the second-order autocorrelation function. We apply a 1.4 $\mu$s long software gate window, containing more than $99.9\%$ of the pulse [see Fig.\ \ref{fig:exp}(c)]. Coincidences at zero time delay are substantially suppressed, as shown in Figure \ref{fig:g2}(a), with strong antibunching $g^{(2)}_{\text{raw}}(0)=0.0145(2)$, integrating the area around $\tau=0$ and without background subtraction. The background coincidence rate is dominated by coincidences involving photon events with background counts unrelated to the single-photon generation, coming from detector dark counts and room light leakage. The independently measured background rate, photon shape, and photon rate are constant throughout each experimental run, from which we determine that the accidental coincidences contribute to $g^{(2)}_{\text{back}}(0)=0.0143$. The gray curve in Figure \ref{fig:g2}(b) shows the background coincidence profile within the gate window (see \cite{SM} for details). After background subtraction, our single-photon source has $g^{(2)}(0)=2.0(1.5)\times 10^{-4}$. \begin{figure} \centering\includegraphics[width=7.5cm]{Fig3.pdf} \caption{Measured coincidences for indistinguishability characterization. (a) Normalized coincidences for HOM characterization with 4.92 $\mu$s cycle. Indistinguishable polarization states are represented in blue, and distinguishable polarization states are in red. (b) Normalized coincidences for HOM around $\tau=0$, the grey line represents the background coincidences with 52-ns bins. 
All data shown were taken with $60\%$ duty cycle.} \label{fig:HOM} \end{figure} We use a Hong-Ou-Mandel interferometer (HOM) to measure the photon indistinguishability. We implement a fiber-based $4.92~\mu$s delay in one arm to temporally overlap adjacently produced photons. Additionally, there is a polarizing beam splitter (PBS) at the output of each fiber to account for any polarization rotation due to the fibers. At the exit of the short arm, there is a half-wave plate (HWP) to rotate the polarization and control the degree of distinguishability of the photons. Figure \ref{fig:HOM}(a) shows the normalized coincidences for orthogonal and parallel polarizations. Integrating the number of coincidences in a window around $\tau=0$ for the two cases, we measure a raw HOM interference visibility $\mathcal{V}_{\text{raw}}=1-C_{\parallel}/C_{\perp}=0.894(6)$. Accounting for the accidental coincidences with background events and the slight differences in the transmission and reflection coefficients of our combining beamsplitter gives a mode overlap of 0.982(7) (see \cite{SM}). \section*{Source efficiency} \begin{figure*}[t] \centering\includegraphics[width=0.75\textwidth]{Fig4.pdf} \caption{Effect of contaminants on single-photon generation. (a) Photon generation probability as a function of pulse period $t_p$. Dark-blue line is fitted using Eq.~\ref{eq:prob} in steady state for $n \rightarrow\infty$ using the values for $P_c$ and $\tau_c$ in the main text; we obtain $P_{max}=0.35(2)$. Red band shows the generation probability predicted by the theoretical model. (b) Normalized summed counts per pulse for a pulse train with 2.5-$\mu$s pulse period. Dark-blue line is fitted with Eq.~\ref{eq:prob}. (c) $P_c$ vs. peak atomic density $\rho_0$ with a fixed storage $t_s=350$~ns. (d) $P_c$ vs.
time $t_s$ with a density $\approx4\times10^{11}$~cm$^{-3}$.} \label{fig:contaminants} \end{figure*} We measure a peak probability of 0.18(2) to generate a single photon into a single-mode fiber after polarization filtering and averaged for a 20$\%$ duty cycle. Accounting for optical losses and assuming that the single photon has the same spatial mode as the 780-nm-write beam, we estimate a generation probability of 0.40(4) immediately after the atomic ensemble. The average probabilities go down to 0.14(1) and 0.31(1), respectively, for a $60\%$ duty cycle. We calculate $P_{\text{th}}=\eta_w \eta_s \eta_r$ as a product of the writing, $\eta_w$, storage, $\eta_s$, and retrieval, $\eta_r$, efficiencies to estimate the theoretical probability of generating a photon. Referring the reader to the Supplement \cite{SM} for the details of the theoretical analysis, we summarize it here only briefly. We simulate the writing of the spin wave using a Lindblad master equation to estimate the writing efficiency and the storage efficiency. We calculate the retrieval efficiency using the optical Maxwell-Bloch equations with the formalism in Ref.\ \cite{alexey2007}. Using independently measured experimental values as input parameters, we obtain a theoretical prediction of $P_{\text{th}}\approx0.42(3)$ (see Supplement \cite{SM}). This value is consistent with the measured generation probability for the longest pulsing periods, $t_p$. We observed that the average photon production efficiency decreased at higher repetition rates, as shown in Figure~\ref{fig:contaminants}(a). (Here the photon probability is determined immediately after the atom cloud by accounting for independently measured optical losses.) The initial pulse in a pulse series had higher efficiency; however, the efficiency of subsequent pulses decreased exponentially to the steady-state value on a $\approx60$~$\mu$s time scale [see Figure~\ref{fig:contaminants}(b)].
These observations are consistent with the creation of contaminant atoms in other long-lived Rydberg states that are not removed by the retrieval field. These states interact strongly with the target Rydberg state, affecting subsequent writing events. Similar contaminant states have been observed in previous experiments \cite{DeSalvo2016, Elizabeth2016, Radiation2017}, and have been analyzed extensively \cite{Aman2016, Chem2016, boulier2017, bienias2018}. Once a contaminant is in the medium, it disables the writing of a spin wave for the later pulses. However, contaminants have a finite lifetime in the medium, therefore, the photon generation probability decreases for shorter pulse periods. We use a simple model to capture the effect of contaminants on photon production (see \cite{SM} for details). We assume that for any given pulse, there is a probability $P_c$ of creating a contaminant. If the contaminant state has a lifetime $\tau_c$, then the probability $P_n$ of having a contaminant in the $n$-th pulse of a pulse series with period $t_p$ is \begin{equation} P_n=P_c \frac{1-(e^{- t_p/\tau_c}-P_c)^n }{1-e^{-t_p/\tau_c}+P_c}. \label{eq:prob} \end{equation} For $\tau_c \gg t_p$, the average contaminant probability as $n\rightarrow \infty$ can be significant, even if $P_c$ is small. The probability $P_g(n)$ of successfully generating a single-photon on the $n$-th pulse in the presence of a contaminant is decreased according to $P_g(n)=P_{max}(1-P_n)$, where $P_{max}$ is the probability of photon generation in the absence of contaminants. The steady state efficiency is given by $P_g(n\rightarrow\infty)$. Fitting this equation to pulse sequence data as shown in Fig.~\ref{fig:contaminants}(b), we determine $P_c=1.9(3)\times10^{-2}$, and $\tau_c=65(8)$~$\mu$s, which is in good agreement with the data in Fig.~\ref{fig:contaminants}(a). 
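The model is straightforward to evaluate numerically. The sketch below (our own illustration, using the fitted values quoted above, not code from the experiment) computes the $n\rightarrow\infty$ limit of Eq.~\ref{eq:prob} and the resulting steady-state generation probability:

```python
import math

def P_n(n, P_c, tau_c, t_p):
    # Probability of a contaminant being present on the n-th pulse, Eq. (prob)
    e = math.exp(-t_p / tau_c)
    return P_c * (1 - (e - P_c) ** n) / (1 - e + P_c)

# fitted values from the text (tau_c and t_p in microseconds)
P_c, tau_c, t_p = 1.9e-2, 65.0, 2.5

# geometric convergence of P_n to its n -> infinity limit
P_inf = P_c / (1 - math.exp(-t_p / tau_c) + P_c)
assert abs(P_n(1000, P_c, tau_c, t_p) - P_inf) < 1e-9

# steady-state generation probability P_g(n -> infinity) = P_max * (1 - P_inf)
P_max = 0.35
steady = P_max * (1 - P_inf)
```

At $t_p=2.5$~$\mu$s the contaminant occupation saturates near $P_c/(1-e^{-t_p/\tau_c}+P_c)\approx0.33$, so the steady-state efficiency sits well below $P_{max}$, consistent with the trend in Fig.~\ref{fig:contaminants}(a).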
We find that $P_c$ increases linearly with atomic density $\rho$ [see Fig.~\ref{fig:contaminants}(c)], which suggests that the source of contaminants is ground-Rydberg interactions. For high principal quantum number, $n$, collisionally produced contaminants were identified in Ref.~\cite{Chem2016} to be Rydberg states with principal quantum number $n-4$ and orbital angular momentum $l>2$. Furthermore, we find that $P_c$ increases with storage time $t_s$ at a rate $\approx3\times10^{-2}$~$\mu$s$^{-1}$, which gives a contaminant generation time-scale of $\approx33$~$\mu$s for a density $\approx4\times10^{11}$~cm$^{-3}$. Contaminants are not a fundamental limitation since strong electric field pulses between writing pulses could be used to remove them. We also note that for interrogation times longer than 100~ms, other effects such as heating and atom depolarization from rescattering become more significant, further reducing the photon generation for shorter $t_p$. However, these effects can be mitigated by detuning farther from the intermediate state. \begin{figure} \centering\includegraphics[width=8cm]{Fig5.pdf} \caption{Performance of a sample from different single-photon sources. Solid-state systems considered are spontaneous parametric down-conversion (SPDC) \cite{wang2016}, multiplexed-heralded-single-photon source (MUX-HSPS) \cite{xiong2016, kaneda2019} and quantum dots (QD) \cite{somaschi2016, loredo2016, wang2017, kir2017, wang2019p}. Atomic systems considered are single atoms in free-space \cite{maunz2007, rosenfeld2017}, atoms in cavities \cite{thompson2006, wilk2007, nisbet2011, mucke2013}, and the Rydberg ensemble studied in this work (indicated by the purple line) accounting for the effect of different repetition rates for a duty cycle of 0.6. (For details on these sources, see tables in \cite{SM}). (a) Fidelity vs. single-mode efficiency. (b) Brightness vs.
single-mode efficiency.} \label{fig:comparison} \end{figure} \section*{Single-mode efficiency, rate and fidelity} There are many metrics used to quantify the various properties of single-photon sources. Optical quantum information schemes are susceptible to errors if they are not implemented with highly pure and indistinguishable single photons. In addition, scaling up quantum information protocols requires high generation efficiency, since any inefficiency will lead to an exponential decrease of the success probability with system size. Finally, the rate of single-photon production provides a limitation on the practicality of any protocol. To that end, we define three metrics that quantify these properties: $\mathcal{F}$, the single-photon fidelity, which is the fraction of emission that consists of a single photon in a single spectral, temporal, polarization, and spatial mode; $\eta$, the probability of generating a single photon in the desired mode; and $\mathcal{R}$, the brightness, which is the rate of photon production in the desired mode. Assuming that the probability of events with more than two photons is negligible, the only outcomes from a source are: single photons in the desired mode with probability $\eta$, single photons in an undesirable mode with probability $P_1'$, two photons with probability $P_2$, and null events with probability $P_0$. Experimentally, we measure the following quantities: the overall emission efficiency, $P=1-P_0$; the HOM visibility, $\mathcal{V}$; and the measure of the single-photon purity, $g^{(2)}$. These are given by: \begin{equation} \begin{split} P & = 1-P_0=\eta+P_1'+P_2,\\ \mathcal{V} & = \frac{\eta}{\eta+P_1'},\\ g^{(2)}& \approx \frac{2 P_2}{(\eta+P_1'+2P_2)^2}, \end{split} \end{equation} where we have assumed that the visibility $\mathcal{V}$ is compensated for multi-photon events \cite{SM}, and that these measurements are taken with standard non-number resolving photon counting detectors.
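Before solving for $\eta$, the system can be sanity-checked numerically. In the sketch below (with illustrative, hypothetical outcome probabilities, not measured values from this experiment), we generate $(P,\mathcal{V},g^{(2)})$ from assumed $(\eta, P_1', P_2)$ and confirm that the second-order inversion presented next recovers $\eta$:

```python
# Hypothetical outcome probabilities (illustrative, not measured values):
eta, P1p, P2 = 0.139, 2.6e-3, 1.0e-5   # desired mode, wrong mode, two photons

# Forward model from the system above
P = eta + P1p + P2                      # overall emission efficiency
V = eta / (eta + P1p)                   # HOM visibility
g2 = 2 * P2 / (eta + P1p + 2 * P2) ** 2

# Second-order inversion for the single-mode efficiency
eta_hat = P * V * (1 - 0.5 * P * g2 * (1 + P * g2))
assert abs(eta_hat - eta) < 1e-6        # residual error is O((P2/P)^3)
```

The residual error of the inversion scales like $(P_2/P)^3$, which is negligible for realistic multi-photon probabilities.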
Solving the system of equations for $\eta$ to second order in $g^{(2)}$, we get the single-mode efficiency $\eta$: \begin{equation} \eta=P \mathcal{V}\left(1-\frac{1}{2}P g^{(2)}\left(1+P g^{(2)}\right) \right). \label{eq:eta} \end{equation} We report the source brightness as $\mathcal{R}=R_{\text{eff}}\eta$, where $R_{\text{eff}}$ is the clock rate weighted by the experimental duty cycle. Apart from source brightness, the rate at which undesirable emission is produced also matters for applications. We characterize this rate by the fidelity, \begin{equation} \mathcal{F}=1-\frac{P_1'+P_2}{P}=\frac{\eta}{P}, \end{equation} which is the fraction of collected emission that is made up of single photons in the correct mode. In Fig.~\ref{fig:comparison} we show $\eta$, $\mathcal{F}$, and $\mathcal{R}$ for a sample of different single-photon sources. Narrow bandwidth sources naturally compatible with coherent atomic systems are indicated with filled symbols. \section*{Conclusion} By using the quantum nonlinearities of strongly interacting Rydberg states in a cold atomic ensemble, we demonstrated a single-photon source, operating with a 60$\%$ duty cycle, single-mode efficiency $\eta=0.139(5)$, a single-mode brightness of $\mathcal{R}=840(70)~s^{-1}$, and single-mode fidelity $\mathcal{F}=0.982(7)$; to our knowledge, this fidelity is the highest reported for an atomic-based source. Furthermore, we investigated the limitations of our current setup arising from nearby long-lived contaminant states. Implementing feasible improvements to the current experiment, we estimate that we can achieve up to $\eta\approx0.4$; moreover, ionizing pulses after each write-retrieval pulse to remove atoms in contaminant states may increase the brightness up to $\mathcal{R}\approx1.2\times10^5~s^{-1}$ without decreasing the duty cycle or the fidelity (see \cite{SM} for details). The efficiency could be further improved if the ensemble were coupled to a cavity \cite{clark2019}.
Given their high efficiency, brightness, and fidelity, we have shown that single-photon sources based on Rydberg-atomic ensembles provide a promising platform for scalable quantum photonics. Furthermore, they are inherently compatible with narrow-bandwidth atomic platforms that have shown significant progress towards quantum information applications. \section*{Acknowledgments} All authors acknowledge support from the United States Army Research Lab's Center for Distributed Quantum Information (CDQI) at the University of Maryland and the Army Research Lab. A.C., D.O.-H., A.J.H., S.L.R., J.V.P., Y.W., P.B., and A.V.G.\ additionally acknowledge support from the National Science Foundation Physics Frontier Center at the Joint Quantum Institute (Grant No. PHY1430094). Y.W., P.B., and A.V.G.\ additionally acknowledge support from AFOSR, ARO MURI, and the DoE ASCR Quantum Testbed Pathfinder program (award No. DE-SC0019040). We are grateful to Mary Lyon for her significant contributions to the design and construction of the apparatus and to Patrick Banner for his contributions to data collection. We also want to thank Luis A. Orozco for fruitful discussions.
\section{Introduction} There is a considerable body of literature devoted to the study of perturbed stochastic differential equations (SDEs); see \cite{CPY}-\cite{GY1}, \cite{PW}, \cite{YZ}. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})$ be a filtered probability space with filtration $\{\mathcal{F}_{t}\}_{t\geq 0}$, and let $\{B_{t}\}_{t\geq0}$ be a one-dimensional standard $\{\mathcal{F}_{t}\}_{t\geq 0}$-Brownian motion. Suppose that $\sigma(x)$ and $b(x)$ are Lipschitz continuous functions on $\mathbb{R}$. It was proved in \cite{RDZ} that the following perturbed stochastic differential equation: \begin{equation} \label{e:PerEqn} X_t=x+\int_0^t b(X_s) \mathrm{d} s+\int_0^t \sigma(X_s) \mathrm{d} B_s+\alpha \sup_{0 \le s \le t} X_s,\ \ \forall \ \alpha<1, \end{equation} admits a unique solution. If $|\sigma(x)|>0$, it was shown in \cite{YZ} that the law of $X_t$ is absolutely continuous with respect to Lebesgue measure, i.e., the law of $X_t$ admits a density for $t>0$. \vskip 3mm There seem to be no results on the smoothness of densities for perturbed diffusion processes; this paper aims to partly fill this gap. The smoothness of densities is a popular topic in stochastic analysis and has been intensively studied for several decades; we refer readers to \cite{N}, \cite{S} and the references therein. Our approach to proving the smoothness of densities is via Malliavin calculus, so let us first recall some well-known results on Malliavin calculus \cite{N} to be used in this paper. \vskip 3mm Let $\Omega=C_{0}(\mathbb{R}_{+})$ be the space of continuous functions on $\mathbb{R}_{+}$ which vanish at zero. Denote by $\mathcal{F}$ the Borel $\sigma$-field on $\Omega$ and by $\mathbb{P}$ the Wiener measure; then the canonical coordinate process $\{\omega_{t}, t\in \mathbb{R}_{+}\}$ on $\Omega$ is a Brownian motion $B_t$.
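As a numerical aside (not used in the proofs), trajectories of Eq.~\eqref{e:PerEqn} can be approximated by an Euler scheme: keeping track of the drift-plus-noise part $S$ and the running maximum $M$, the implicit relation $X=S+\alpha\max(M,X)$ can be solved explicitly at each step. The sketch below is only illustrative; the coefficients and the function name are our own arbitrary choices.

```python
import numpy as np

def simulate_perturbed_sde(x0, b, sigma, alpha, T=1.0, n=1000, seed=0):
    """Euler approximation of X_t = x0 + int_0^t b(X_s) ds
    + int_0^t sigma(X_s) dB_s + alpha * sup_{0<=s<=t} X_s, with alpha < 1."""
    assert alpha < 1.0
    rng = np.random.default_rng(seed)
    dt = T / n
    S = x0                   # accumulated drift + stochastic-integral part
    X = x0 / (1.0 - alpha)   # X_0 solves X = x0 + alpha * X
    M = X                    # running maximum of X
    path = [X]
    for _ in range(n):
        S += b(X) * dt + sigma(X) * rng.normal(0.0, np.sqrt(dt))
        if S <= (1.0 - alpha) * M:
            X = S + alpha * M       # the running maximum is not updated
        else:
            X = S / (1.0 - alpha)   # new maximum: solve X = S + alpha * X
            M = X
        path.append(X)
    return np.array(path)

path = simulate_perturbed_sde(x0=1.0, b=np.sin, sigma=lambda x: 1.0, alpha=0.5)
```

For $b\equiv 0$ and $\sigma\equiv 0$ the scheme stays at the fixed point $x_0/(1-\alpha)$, consistent with the representation $\max_{0\le s\le t} X_s=\frac{x_0}{1-\alpha}+\cdots$ appearing in the Picard iteration below.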
Define $\mathcal{F}_{t}^{0}=\sigma(B_{s},s\leq t)$ and denote by $\mathcal{F}_{t}$ the completion of $\mathcal{F}_{t}^{0}$ with respect to the $\mathbb{P}$-null sets of $\mathcal{F}$. Let $H:=L^{2}(\mathbb{R}_{+},\mathcal{B},\mu)$, where $(\mathbb{R}_{+},\mathcal{B})$ is a measurable space with $\mathcal{B}$ being the Borel $\sigma$-field of $\mathbb{R}_{+}$ and $\mu$ being the Lebesgue measure on $\mathbb{R}_{+}$; we denote the norm of $H$ by $\|\cdot\|_H$. For any $h \in H$, $W(h)$ is defined by \begin{equation} W(h)=\int_{0}^{\infty}h(t)\mathrm{d} B_{t}, \end{equation} and we note that $\{W(h),h\in H\}$ is a Gaussian process on $H$.\\ We denote by $C_{p}^{\infty}(\mathbb{R}^{n})$ the set of all infinitely differentiable functions $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ such that $f$ and all of its partial derivatives have polynomial growth. Let $\mathcal S$ be the set of smooth random variables defined by \begin{equation*} \mathcal S=\{F=f(W(h_{1}),...,W(h_{n}));\ h_{1},...,h_{n}\in H,n\geq 1, f\in C_{p}^{\infty}(\mathbb{R}^{n})\}. \end{equation*} For $F\in \mathcal S$, define its Malliavin derivative $D_{t}F$ by \begin{equation} D_{t}F=\sum_{i=1}^{n}\partial_{i}f(W(h_{1}),...,W(h_{n}))h_{i}(t), \end{equation} and its norm by \begin{equation*} \|F\|_{1,2}=[\E(|F|^{2})+\E(\|DF\|_H^{2})]^{\frac{1}{2}}, \end{equation*} where $\|DF\|_H^{2}=\int_0^\infty |D_t F|^2 \mu(\mathrm{d} t)$. Denote by $\mathbb{D}^{1,2}$ the completion of $\mathcal S$ under the norm $\|\cdot\|_{1,2}$. We further define the norm \begin{equation*} \|F\|_{m,2}=\left[\E(|F|^{2})+\sum_{k=1}^m \E(\|D^kF\|_{H^{\otimes k}}^{2})\right]^{\frac{1}{2}}. \end{equation*} Similarly, $\mathbb{D}^{m,2}$ denotes the completion of $\mathcal S$ under the norm $\|\cdot\|_{m,2}$. \vskip 3mm We shall use the following two propositions: \begin{proposition}[Proposition 1.2.3 of \cite{N}]\label{estimate of composition} Let $\phi:\mathbb{R}^d\to \mathbb{R}$ be a continuously differentiable function with bounded partial derivatives.
Suppose that $F=(F^1,\cdots,F^d)$ is a random vector whose components belong to the space $\mathbb{D}^{1,2}$. Then $\phi(F) \in \mathbb{D}^{1,2}$, and \begin{equation*} D(\phi(F))=\sum_{i=1}^{d}\partial_i\phi(F)DF^i. \end{equation*} \end{proposition} \begin{proposition}[Proposition 2.1.5 of \cite{N}] \label{t:Cri} If $F \in \mathbb{D}^{\infty,2}$ with $\mathbb{D}^{\infty,2}=\cap_{m \ge 1} \mathbb{D}^{m,2}$ and $\|DF\|_H^{-1} \in \cap_{p\geq 1}L^p(\Omega)$, then the density of $F$ is infinitely differentiable, i.e., it belongs to $C^\infty(\mathbb{R})$. \end{proposition} Throughout this paper, for a bounded measurable function $f$, we shall denote $$\|f\|_\infty=\sup_{x \in \mathbb{R}} |f(x)|. $$ \section{Main Results} Throughout this paper, we assume $\alpha<1$ to guarantee that Eq. \eqref{e:PerEqn} has a unique solution \cite{RDZ}. Furthermore, the following two results are shown in \cite{YZ}: \begin{thm}(\cite[Theorem 3.1]{YZ}) Let $(X_t)_{t \ge 0}$ be the unique solution to Eq. \eqref{e:PerEqn}. Then $X_t \in \mathbb{D}^{1,2}$ for all $t>0$. \end{thm} \begin{thm} (\cite[Theorem 3.2]{YZ}) Assume that $\sigma$ and $b$ are both Lipschitz continuous, and $|\sigma(x)|>0$ for all $x \in \mathbb{R}$. Then, for $t>0$, the law of $X_t$ is absolutely continuous with respect to Lebesgue measure. \end{thm} In this paper, we shall prove the following results about the smoothness of densities: \begin{thm} \label{t:MThm1} Assume that $b$ is bounded smooth and that $\sigma(x)\equiv \sigma$. If $\alpha<1$, $t_0>0$ and $b$ satisfy $$\theta(t_0,\alpha,b)<1/2,$$ with $\theta(t_0,\alpha,b):=\sqrt{2\|b'\|_{{\rm \infty}}^2 t_0^2+8 \alpha^2}+{\|b'\|_{{\rm \infty}}^2 t_0^2+4\alpha^2}$, then the law of $X_t$ in \eqref{e:PerEqn} admits a smooth density for all $t \in (0,t_0]$.
\end{thm} \begin{thm} \label{t:MThm2} Assume that $b$ is bounded smooth, and $\sigma$ is bounded smooth with $\|\sigma'\|_\infty<\infty$, $\|\sigma''\|_\infty<\infty$ and $\inf_{x\in \mathbb{R}}|\sigma(x)|>0$. Let \begin{equation} F(y)=\int_x^{y} \frac{1}{\sigma(u)} \mathrm{d} u, \ \ \ \ y \in (-\infty,\infty) \end{equation} and $\tilde b(x)=\frac{b(F^{-1}(x))}{\sigma(F^{-1}(x))}-\frac 12 \sigma'(F^{-1}(x))$; then $\tilde b$ is bounded smooth with $\|\tilde b'\|_\infty<\infty$. If $\alpha<1$, $t_0>0$ and $\tilde b$ satisfy $$\theta(t_0,\alpha,\tilde b)<1/2$$ with $\theta(t_0,\alpha,\tilde b):=\sqrt{2\|\tilde b'\|_{{\rm \infty}}^2 t_0^2+8 \alpha^2}+{\|\tilde b'\|_{{\rm \infty}}^2 t_0^2+4\alpha^2}$, then the law of $X_t$ in \eqref{e:PerEqn} admits a smooth density for all $t \in (0,t_0]$. \end{thm} \begin{proof}[{\bf Proofs of Theorems \ref{t:MThm1} and \ref{t:MThm2}:}] The main idea is to use Proposition \ref{t:Cri} to prove the two theorems. To verify the conditions in Proposition \ref{t:Cri}, it suffices to prove that $X_t \in \mathbb{D}^{m,2}$ for all $m \ge 1$ and that $\|DX_t\|_H \ge c$ a.s. for some constant $c>0$. Theorem \ref{t:MThm1} immediately follows from Lemmas \ref{l:XtDm2} and \ref{t:SthDen} below. Now we prove Theorem \ref{t:MThm2}. Recall $Y_t=\int_x^{X_t} \frac{1}{\sigma(u)} \mathrm{d} u$ in Lemma \ref{t:SthDenY} below. By the conditions on $\sigma$, $F$ is a continuous and strictly increasing function with bounded derivative, and thus \begin{equation} \|D Y_t\|_H=\|D F(X_t)\|_H \le \frac{1}{\inf_{x\in \mathbb{R}}|\sigma(x)|} \|D X_t\|_H. \end{equation} Hence, by Lemmas \ref{l:XtDm2} and \ref{t:SthDenY} below, under the same condition as in Theorem \ref{t:MThm2} we have \begin{equation} \|D X_t\|_H \ge \inf_{x\in \mathbb{R}}|\sigma(x)|\cdot \|D Y_t\|_H \ge \inf_{x\in \mathbb{R}}|\sigma(x)| \cdot \frac{[1-2 \theta(t_0,\alpha,\tilde b)] t}{2 (1+2 \|\tilde b'\|_{{\rm \infty}}^2 t^2+2 \alpha^2)} \ \ \ \ \ t \in [0,t_0].
\end{equation} Hence, $X_t$ admits a smooth density for all $t \in (0,t_0]$. \end{proof} \section{Auxiliary lemmas} It is well known that $\|DX_t\|_H$ has the following representation \cite{YZ} for all $t>0$: $$\|DX_t\|_H=\left(\int_0^t |D_r X_t|^2 \mathrm{d} r\right)^{\frac 12}$$ with $D_rX_t$ satisfying \begin{equation} \label{e:DrXt} D_r X_t=\sigma(X_r)+\int_r^t D_r b(X_s) \mathrm{d} s+\int_r^t D_r \sigma(X_s)\mathrm{d} B_s+\alpha D_r \left(\sup_{0 \le s \le t} X_s\right). \end{equation} We shall often use the following fact (\cite{YZ}, \cite{N}) \begin{equation} \label{e:DrXt=0} D_r X_t=0 \ \ \ \ {\rm if}\ \ r>t, \end{equation} \begin{equation} \label{e:MalSupLes} \left\|D(\sup_{0 \le s \le t} X_s)\right\|_H \le \sup_{0 \le s \le t} \left\|D X_s\right\|_H, \end{equation} where \begin{equation*} \left\|D(\sup_{0 \le s \le t} X_s)\right\|_H^2=\int_0^t \left|D_r \left(\sup_{0 \le s \le t} X_s\right)\right|^2 \mathrm{d} r, \ \ \ \ \left\|D X_t\right\|_H^2=\int_0^t |D_r X_t|^2 \mathrm{d} r. \end{equation*} \ \ \ \\ \subsection{$X_t$ is an element in $\mathbb{D}^{m,2}$ for all $t>0$ and $m \ge 1$} \begin{lemma} \label{l:XtDm2} Let $X_t$ be the solution of the perturbed stochastic differential equation \eqref{e:PerEqn}, and suppose that the coefficients $b$ and $\sigma$ are smooth with bounded derivatives of all orders. Then $X_t$ belongs to $\mathbb{D}^{m,2}$ for all $t>0$ and all $m \ge 1$. \end{lemma} \begin{proof} We shall use Picard iteration to prove the lemma. 
Letting $X^0_t=x_0$ for all $t\ge 0$, define $X_t^{n+1}$ to be the unique adapted solution to the following equation: \begin{equation} \label{e:Pic1} X_t^{n+1}=x_0+\int_{0}^{t}\sigma(X_s^n)\mathrm{d} B_s+\int_{0}^{t}b(X_s^n) \mathrm{d} s+\alpha \max_{0\leq s\leq t} \left(X_s^{n+1}\right), \end{equation} which obviously implies \begin{equation*} \max_{0 \le s \le t}\left(X_s^{n+1}\right)=x_0+\max_{0 \le s \le t}\left(\int_{0}^{s}\sigma(X_u^n)\mathrm{d} B_u+\int_{0}^{s}b(X_u^n) \mathrm{d} u\right)+\alpha \max_{0\leq s\leq t} \left(X_s^{n+1}\right). \end{equation*} Therefore, \begin{equation*} \max_{0 \le s \le t}\left(X_s^{n+1}\right)=\frac{x_0}{1-\alpha}+\frac1{1-\alpha}\max_{0 \le s \le t}\left(\int_{0}^{s}\sigma(X_u^n)\mathrm{d} B_u+\int_{0}^{s}b(X_u^n)\mathrm{d} u\right); \end{equation*} this and \eqref{e:Pic1} further give \begin{equation*} \begin{split} X_t^{n+1}=&\frac{x_0}{1-\alpha}+\int_{0}^{t}\sigma(X_s^n)\mathrm{d} B_s+\int_{0}^{t}b(X_s^n)\mathrm{d} s\\ &+\frac{\alpha}{1-\alpha}\max_{0\leq s\leq t}\left(\int_{0}^{s}\sigma(X_u^n)\mathrm{d} B_u+\int_{0}^{s}b(X_u^n)\mathrm{d} u\right). \end{split} \end{equation*} By the above representation of $X^{n+1}_t$ and a standard method \cite{RDZ}, for every $t>0$ we have \begin{equation} \label{e:XnConXt} \lim_{n \rightarrow \infty} X^n_t=X_t \ \ \ \ {\rm in} \ L^2(\Omega). \end{equation} Let $m \ge 1$; it is standard to check that $X^n_t \in \mathbb{D}^{m,2}$ for every $t>0$ and $n \ge 1$ \cite[Theorem 3.1]{YZ}. By a similar argument as in \cite[Theorem 3.1]{YZ}, we have \begin{equation} \label{e:Pic2} \sup_{n \ge 1}\E\left[\|D^k X^n_t\|_{H^{\otimes k}}^2\right] <\infty, \ \ \ \ \ k=1,...,m. \end{equation} Next we prove $X_t \in \mathbb{D}^{m,2}$ by the argument of \cite[Proposition 1.2.3]{N}. Indeed, by \eqref{e:Pic2}, there exists a subsequence $D^k X^{n_j}_t$ that converges weakly to some $\alpha_k$ in $L^2(\Omega,H^{\otimes k})$ for $k=1,...,m$.
By \eqref{e:XnConXt} and the remark immediately below \cite[Proposition 1.2.2]{N}, the projections of $D^k X^{n_j}_t$ on any Wiener chaos converge in the weak topology of $L^2(\Omega)$, as $n_j$ tends to infinity, to those of $\alpha_k$ for $k=1,...,m$. Hence, $X_t \in \mathbb{D}^{m,2}$ and $D^k X_t=\alpha_k$ for $k=1,...,m$. Moreover, for any weakly convergent subsequence the limit must be equal to $\alpha_1,..., \alpha_m$ by the same argument as above, and this implies the weak convergence of the whole sequence. \end{proof} \ \ \subsection{Additive noise case} If $\sigma(x) \equiv \sigma$, then Eq. \eqref{e:DrXt} reads as \begin{equation} \label{e:MalDer} D_r X_t=\sigma+\int_r^t D_r b(X_s) \mathrm{d} s+ \alpha D_r \left(\sup_{0 \le s \le t} X_s\right). \end{equation} \begin{lem} \label{l:DrXDifEst} Let $t>0$ be arbitrary and $b$ be bounded smooth with $\|b'\|_\infty<\infty$. For all $0<t_1<t_2 \le t$, we have \begin{equation*} \begin{split} \left|\|D X_{t_2}\|_H^2-\|D X_{t_1}\|_H^2\right| \le 2\left[\sqrt{2\|b'\|_{{\rm \infty}}^2(t_2-t_1)^2+8\alpha^2}+{\|b'\|_{{\rm \infty}}^2(t_2-t_1)^2+4\alpha^2}\right]\sup_{0 \le s \le t} \left\|D X_s\right\|_H^2. \end{split} \end{equation*} \end{lem} \begin{proof} It is easy to see that \begin{equation*} \left|\|D X_{t_2}\|_H^2-\|D X_{t_1}\|_H^2\right|=\left|\int_0^{t_2} (D_r X_{t_2})^2 \mathrm{d} r-\int_0^{t_1} (D_r X_{t_1})^2 \mathrm{d} r\right| \le I_1+I_2, \end{equation*} where $$I_1:=\int_{t_1}^{t_2} (D_r X_{t_2})^2 \mathrm{d} r, \ \ \ \ \ I_2:=\int_{0}^{t_1} \left|(D_r X_{t_2})^2-(D_r X_{t_1})^2\right| \mathrm{d} r.$$ We claim that \begin{equation} \label{e:DrXDifEst} \begin{split} \int_0^{t_2} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r & \le 2 \left[\|b'\|_{{\rm \infty}}^2(t_2-t_1)^2+4\alpha^2\right] \sup_{0 \le s \le t} \left\|D X_s\right\|_H^2. \end{split} \end{equation} and we will prove it in the last part of this proof. Let us now estimate $I_1$ and $I_2$ by \eqref{e:DrXDifEst}. 
Observe that, since $D_r X_{t_1}=0$ for $r>t_1$ by \eqref{e:DrXt=0}, $$I_1=\int_{t_1}^{t_2} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r \le \int_{0}^{t_2} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r;$$ hence, by \eqref{e:DrXDifEst} we have \begin{equation} \begin{split} I_1 \le 2 \left[\|b'\|_{{\rm \infty}}^2(t_2-t_1)^2+4\alpha^2\right] \sup_{0 \le s \le t} \left\|D X_s\right\|_H^2. \end{split} \end{equation} Further observe \begin{equation*} \begin{split} I_2 & \le \left[\int_{0}^{t_1} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r\right]^{\frac 12}\left[\int_{0}^{t_1} |D_r X_{t_2}+D_r X_{t_1}|^2 \mathrm{d} r\right]^{\frac 12} \\ & \le \sqrt 2 \left[\int_{0}^{t_1} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r\right]^{\frac 12}\left[\int_{0}^{t_1} \left(|D_r X_{t_2}|^2+|D_r X_{t_1}|^2\right) \mathrm{d} r\right]^{\frac 12} \\ & \le \sqrt 2 \left[\int_{0}^{t_1} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r\right]^{\frac 12}\left[\int_{0}^{t_2} |D_r X_{t_2}|^2 \mathrm{d} r+\int_0^{t_1} |D_r X_{t_1}|^2 \mathrm{d} r\right]^{\frac 12}\\ & \le 2 \left[\int_{0}^{t_1} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r\right]^{\frac 12} \sup_{0 \le s \le t} \left\|D X_s\right\|_H \\ & \le 2 \left[\int_{0}^{t_2} (D_r X_{t_2}-D_r X_{t_1})^2 \mathrm{d} r\right]^{\frac 12} \sup_{0 \le s \le t} \left\|D X_s\right\|_H; \end{split} \end{equation*} this inequality and \eqref{e:DrXDifEst} give \begin{equation*} \begin{split} I_2 \le 2\sqrt{2[\|b'\|_{{\rm \infty}}^2(t_2-t_1)^2+4\alpha^2]} \sup_{0 \le s \le t} \left\|D X_s\right\|_H^2. \end{split} \end{equation*} Combining the estimates of $I_1$ and $I_2$, we immediately get the desired inequality in the lemma. It remains to prove \eqref{e:DrXDifEst}.
By \eqref{e:MalDer}, we have \begin{equation*} \begin{split} (D_r X_{t_2}-D_r X_{t_1})^2 & \le 2 \left|\int_{t_1}^{t_2} D_r b(X_s) \mathrm{d} s\right|^2+2 \alpha^2 \left|D_r \left(\sup_{0 \le s \le t_1} X_s\right)-D_r \left(\sup_{0 \le s \le t_2} X_s\right)\right|^2 \\ & \le 2 \left|\int_{t_1}^{t_2} D_r b(X_s) \mathrm{d} s\right|^2+4\alpha^2 \left|D_r \left(\sup_{0 \le s \le t_1} X_s\right)\right|^2+4 \alpha^2 \left|D_r \left(\sup_{0 \le s \le t_2} X_s\right)\right|^2. \end{split} \end{equation*} By H\"{o}lder inequality, \eqref{e:DrXt=0} and Proposition \ref{estimate of composition}, we have \begin{equation*} \begin{split} \int_0^{t_2} \left|\int_{t_1}^{t_2} D_r b(X_s) \mathrm{d} s\right|^2 \mathrm{d} r & \le \|b'\|_{{\rm \infty}}^2 \int_0^{t_2} (t_2-t_1) \int_{t_1}^{t_2} |D_r X_s|^2 \mathrm{d} s \mathrm{d} r \\ & = \|b'\|_{{\rm \infty}}^2 (t_2-t_1) \int_{t_1}^{t_2} \int_0^s |D_r X_s|^2 \mathrm{d} r \mathrm{d} s \\ & \le \|b'\|_{{\rm \infty}}^2(t_2-t_1)^2 \sup_{0 \le s \le t} \left\|D X_s\right\|_H^2. \end{split} \end{equation*} Moreover, by \eqref{e:MalSupLes} and \eqref{e:DrXt=0} we have \begin{equation*} \int_0^{t_2} \left|D_r \left(\sup_{0 \le s \le t_2} X_s\right)\right|^2 \mathrm{d} r \le \sup_{0 \le s \le t_2} \|D X_s\|_H^2 \le \sup_{0 \le s \le t} \|D X_s\|_H^2, \end{equation*} \begin{equation*} \int_0^{t_2} \left|D_r \left(\sup_{0 \le s \le t_1} X_s\right)\right|^2 \mathrm{d} r = \int_0^{t_1} \left|D_r \left(\sup_{0 \le s \le t_1} X_s\right)\right|^2 \mathrm{d} r \le \sup_{0 \le s \le t} \|D X_s\|_H^2. \end{equation*} Collecting the above four inequalities, we immediately get the desired \eqref{e:DrXDifEst}. \end{proof} \begin{lem} \label{l:LowUppBouDrX} Let $b$ be bounded smooth with $\|b'\|_\infty<\infty$, we have \begin{equation} \label{e:LowUppBouDrX} \sup_{0 \le s \le t} \|DX_s\|_H^2 \ge \frac{\sigma^2 t}{2 (1+2\|b'\|_{{\rm \infty}}^2 t^2+2 \alpha^2)}, \ \ \ \ \ t>0. 
\end{equation} \end{lem} \begin{proof} By \eqref{e:MalDer} and using $(a+b)^2\geq \frac{1}{2}a^2-b^2$, we have \begin{equation*} \begin{split} (D_r X_t)^2 & \ge \frac 12 \sigma^2-\left[\int_r^t D_r b(X_s) \mathrm{d} s+\alpha D_r \left(\sup_{0 \le s \le t} X_s\right)\right]^2 \\ & \ge \frac 12 \sigma^2-2 \left(\int_r^t D_r b(X_s) \mathrm{d} s\right)^2-2\alpha^2 \left[D_r \left(\sup_{0 \le s \le t} X_s\right)\right]^2. \end{split} \end{equation*} Further observe \begin{equation} \label{e:MalDerDb} \begin{split} \int_0^t\left(\int_r^t D_r b(X_s) \mathrm{d} s\right)^2 \mathrm{d} r & \le \int_0^t (t-r) \int_r^t |D_r b(X_s)|^2 \mathrm{d} s \mathrm{d} r \\ & \le \int_0^t (t-r) \|b'\|_{{\rm \infty}}^2 \int_r^t |D_r X_s|^2 \mathrm{d} s \mathrm{d} r \\ & \le t \|b'\|_{{\rm \infty}}^2 \int_0^t \int_r^t |D_r X_s|^2 \mathrm{d} s \mathrm{d} r \\ &= t \|b'\|_{{\rm \infty}}^2\int_0^t \|DX_s\|_H^2 \mathrm{d} s \\ &\le t^2 \|b'\|_{{\rm \infty}}^2 \sup_{0 \le s \le t}\|DX_s\|_H^2, \end{split} \end{equation} where the second inequality is by Proposition \ref{estimate of composition}. Hence, \begin{equation*} \begin{split} \|D X_t\|_H^2 & \ge \frac{\sigma^2 t}{2}-2 \|b'\|_{{\rm \infty}}^2 t^2 \sup_{0 \le s \le t} \|DX_s\|_H^2-2\alpha^2 \|D (\sup_{0 \le s \le t}X_s) \|_H^2 \\ & \ge \frac{\sigma^2 t}{2}-2 \|b'\|_{{\rm \infty}}^2 t^2 \sup_{0 \le s \le t} \|DX_s\|_H^2-2\alpha^2 \sup_{0 \le s \le t}\|D X_s\|_H^2, \end{split} \end{equation*} where the last inequality is by \eqref{e:MalSupLes}. This clearly implies \begin{equation*} \sup_{0 \le s \le t} \|DX_s\|_H^2 \ge \frac{\sigma^2 t}{2}-2\|b'\|_{{\rm \infty}}^2 t^2 \sup_{0 \le s \le t} \|DX_s\|_H^2-2\alpha^2 \sup_{0 \le s \le t} \|DX_s\|_H^2, \end{equation*} which immediately yields the desired bound. \end{proof} \begin{lem} \label{t:SthDen} Let $b$ be bounded smooth with $\|b'\|_\infty<\infty$ and $\sigma(x)\equiv \sigma$ with $\sigma \neq 0$.
If $\alpha<1$, $t_0>0$ and $b$ satisfy $$\theta(t_0,\alpha,b)<1/2$$ with $\theta(r,\alpha,b):=\sqrt{2\|b'\|_{{\rm \infty}}^2 r^2+8 \alpha^2}+{\|b'\|_{{\rm \infty}}^2 r^2+4\alpha^2}$, then \begin{equation} \|D X_{t}\|_H^2 \ge \frac{[1-2 \theta(t_0,\alpha,b)] \sigma^2 t}{2 (1+2\|b'\|_{{\rm \infty}}^2 t^2+2 \alpha^2)}, \ \ \ \ \ t \in [0,t_0]. \end{equation} \end{lem} \begin{proof} Let $t \in [0,t_0]$. For all $0 \le t_1 \le t_2 \le t$, by Lemma \ref{l:DrXDifEst}, we have \begin{equation*} \left|\|D X_{t_2}\|_H^2-\|D X_{t_1}\|_H^2\right| \le 2\theta(t_2-t_1,\alpha,b)\sup_{0 \le s \le t} \left\|D X_s\right\|_H^2. \end{equation*} Hence, for all $s \in [0,t]$, \begin{equation*} \begin{split} \|D X_{s}\|_H^2 & \le \left|\|D X_{s}\|_H^2-\|D X_{t}\|_H^2\right|+\|D X_{t}\|_H^2 \\ & \le 2 \theta(t-s,\alpha,b)\sup_{0 \le s \le t} \left\|D X_s\right\|_H^2+\|D X_{t}\|_H^2 \\ & \le 2 \theta(t,\alpha,b)\sup_{0 \le s \le t} \left\|D X_s\right\|_H^2+\|D X_{t}\|_H^2, \end{split} \end{equation*} where the last step uses that $r \mapsto \theta(r,\alpha,b)$ is nondecreasing. Consequently, \begin{equation*} \begin{split} \sup_{0 \le s \le t} \|D X_{s}\|_H^2 \le 2\theta(t,\alpha,b)\sup_{0 \le s \le t} \left\|D X_s\right\|_H^2+\|D X_{t}\|_H^2. \end{split} \end{equation*} The above inequality further gives \begin{equation*} \begin{split} \|D X_{t}\|_H^2 & \ge \left[1-2\theta(t,\alpha,b)\right]\sup_{0 \le s \le t} \|D X_{s}\|_H^2 \\ & \ge \left[1-2\theta(t_0,\alpha,b)\right]\sup_{0 \le s \le t} \|D X_{s}\|_H^2. \end{split} \end{equation*} Combining the above inequality and Lemma \ref{l:LowUppBouDrX} immediately gives the desired inequality. \end{proof} \ \ \subsection{Multiplicative noise case} By the condition on $\sigma$, we have $\sup_{x \in \mathbb{R}} \sigma(x)<0$ or $\inf_{x \in \mathbb{R}} \sigma(x)>0.$ Without loss of generality, we assume that $$\inf_{x \in \mathbb{R}} \sigma(x)>0.$$ Let us consider the following well-known transform \begin{equation} F(X_t)=\int_x^{X_t} \frac{1}{\sigma(u)} \mathrm{d} u; \end{equation} it is easy to see that $F$ is a strictly increasing function with bounded derivative.
Hence, \begin{equation} \label{e:SupF} \sup_{0 \le s \le t} F(X_s)=F \left(\sup_{0 \le s \le t} X_s\right). \end{equation} By It\^{o}'s formula, we have \begin{equation} F(X_t)=\int_0^t \left(\frac{b(X_s)}{\sigma(X_s)}-\frac 12 \sigma'(X_s)\right) \mathrm{d} s+B_t+\alpha\int_0^t \frac{1}{\sigma(X_s)} \mathrm{d} M_s \end{equation} where $M_t=\sup_{0 \le s \le t} X_s$. It is easy to see that $M_t$ is increasing in $t$ and that $\frac{1}{\sigma(X_s)}$ contributes to the integral only when $X_s=M_s$. Hence, \begin{equation} F(X_t)=\int_0^t \left(\frac{b(X_s)}{\sigma(X_s)}-\frac 12 \sigma'(X_s)\right) \mathrm{d} s+B_t+\alpha\int_0^t \frac{1}{\sigma(M_s)} \mathrm{d} M_s. \end{equation} Since $M_t$ is a continuous increasing function of $t$, we have \begin{equation} F(X_t)=\int_0^t \left(\frac{b(X_s)}{\sigma(X_s)}-\frac 12 \sigma'(X_s)\right) \mathrm{d} s+B_t+\alpha \int_0^{M_t} \frac{1}{\sigma(u)} \mathrm{d} u. \end{equation} By \eqref{e:SupF}, \begin{equation} F(X_t)=\int_0^t \left(\frac{b(X_s)}{\sigma(X_s)}-\frac 12 \sigma'(X_s)\right)\mathrm{d} s+B_t+\alpha \sup_{0 \le s \le t} F(X_s). \end{equation} Denote $Y_t=F(X_t)$; it solves the following perturbed SDE: \begin{equation} Y_t=\int_0^t \tilde b(Y_s) \mathrm{d} s+ B_t+\alpha \sup_{0 \le s \le t} Y_s \end{equation} where $\tilde b(x)=\frac{b(F^{-1}(x))}{\sigma(F^{-1}(x))}-\frac 12 \sigma'(F^{-1}(x))$. Applying Lemma \ref{t:SthDen}, we get the following lemma about the process $Y_t$: \begin{lem} \label{t:SthDenY} Assume that $b$ is bounded smooth and that $\sigma$ is bounded smooth with $\|\sigma'\|_\infty<\infty$, $\|\sigma''\|_\infty<\infty$ and $\inf_{x \in \mathbb{R}}|\sigma(x)|>0$. Then $\tilde b$ is bounded smooth.
If $\alpha<1$, $t_0>0$ and $\tilde b$ satisfy $$\theta(t_0,\alpha,\tilde b)<1/2$$ with $\theta(r,\alpha,\tilde b):=\left[\sqrt{2 \|\tilde b'\|_{{\rm \infty}}^2 r^2+8 \alpha^2}+{\|\tilde b'\|_{{\rm \infty}}^2 r^2+4\alpha^2}\right]$, then \begin{equation} \|D Y_{t}\|_H^2 \ge \frac{[1-2 \theta(t_0,\alpha,\tilde b)] t}{2 (1+2\|\tilde b'\|_\infty^2 t^2+2 \alpha^2)}, \ \ \ \ \ t \in [0,t_0]. \end{equation} \end{lem} \begin{proof} It is easy to check that, under the conditions of the lemma, $\tilde b$ is bounded smooth with $\|\tilde b'\|_\infty<\infty$. Hence, the lemma immediately follows from applying Lemma \ref{t:SthDen} to $Y_t$. \end{proof} {\bf Acknowledgement}: We gratefully thank Dr. Xiaobin Sun for helpful discussions. \bibliographystyle{amsplain}
\section{Motivation and Background} Fueled by increasingly available user data, growing computing power, and recent advances in machine learning, Artificial Intelligence (AI) technologies are transforming our society and daily lives. However, users' negative preconceptions of AI may hinder adoption and continued use of AI technologies. Negative user preconceptions can affect user trust, which is a key factor in determining acceptance of technology \citep{venkatesh2003user}. Inadequate user trust can in turn lead to misuse (i.e., inappropriate reliance on technology) and disuse (i.e., underutilization of technology due to rejection of its capability) \citep{parasuraman1997humans,lee2004trust}. To enhance user perceptions of AI systems, previous research has investigated AI transparency, explainability, and interpretability (e.g., \citep{adadi2018peeking, arrieta2020explainable}), as modern machine learning methods are largely black boxes \citep{holzinger2018current, castelvecchi2016can}. For example, prior work has explored how visualization may aid user understanding of how machine learning models work (e.g., \citep{samek2019explainable, choo2018visual}). Explanations of these models and justifications for decisions made by intelligent machines help users understand their inner workings once they begin interacting with the AI technologies. In this work, we explore how to improve users' existing \emph{preconceptions} of AI agents prior to any interactions with the agents. Simulated setups, such as mock trials, mock interviews, and drills, have been used as low-cost, hands-on tools in early training phases to help people become accustomed to unfamiliar practices and processes prior to engaging in them. 
Similarly, we explore the potential of mock interactions, in which users label training data for AI models, to modulate users' confidence in AI agents' capabilities and their comfort with the possibility of using technologies employing those agents before engaging in real interactions with them. We contextualize our exploration within the scenario of training AI agents for use in autonomous vehicles---a safety-critical domain that is likely to involve interactions with everyday users. Our findings indicate that users' perceptions of AI agents improved through participation in mock model training, especially when they were able to precisely label objects that they perceived to be important. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/conditions.pdf} \caption{We explore how mock model training involving various data labeling strategies may affect users' perceptions of AI agents posed as driving assistants.} \label{fig:exp-design} \end{figure*} \section*{Methods} \label{sec:methods} \subsection{Experimental Design, Task, and Conditions} \label{sec:setup} We conducted a within-subjects study that consisted of four experimental conditions (Figure \ref{fig:exp-design}). The study was contextualized within the scenario of labeling images to train four AI agents to perform driving-related object identification: \begin{itemize}\itemsep0em \item \textbf{A1}: To train this agent, the participant was presented with a grid of images that included five positive examples for each of six item categories commonly encountered during driving: stop sign, speed limit sign, traffic light, car, bicyclist, and pedestrian. This labeling process is similar to image selection tasks commonly used in web security checks. It represents low labeling precision (i.e., the user did not localize the object within the image) and passive labeling (i.e., the user only labeled items for the requested categories).
It is analogous to binary object \emph{detection} (i.e., indicating whether or not a specified item is present). \item \textbf{A2}: The participant followed a similar labeling process to train agent A2 as done for A1, with the additional task of drawing bounding boxes around the target item in all images. This process represents high labeling precision and is analogous to binary object \emph{recognition}. \item \textbf{A3}: In training this agent, the participant was provided with a set of individual images for labeling. For each image, the user was prompted to list all items within the image that they considered to be relevant via text. The user was free to specify as many item categories as they wanted. This method is analogous to multiple object \emph{detection}. \item \textbf{A4}: Similarly to the training task for A3, the participant was prompted to draw bounding boxes around all items that they considered to be relevant and to specify the associated labels via text within each image in the set. This process represents high labeling precision and is analogous to multiple object \emph{recognition}. \end{itemize} We also presented a baseline pre-trained agent to the participant at the beginning of the study. The participant was able to review the images used to train the agent. We used this baseline condition as a reference to measure users' preconceptions of an AI agent in the absence of mock training. \subsection{Measures} We used a range of metrics to measure user perceptions that may affect user trust in and adoption of AI technologies. For each trained agent, we computed the \emph{difference} in comfort, projected capability, and task confidence relative to the baseline, pre-trained agent (i.e., positive values indicate an improvement in user perceptions relative to the baseline). We normalized the data from all questionnaire responses to get values in a 0--1 range before computing the difference.
\begin{itemize} \itemsep0em \item \textbf{Trustworthiness.} Trust was measured through a single question asking which AI agent the participant would trust the most if it were employed in an autonomous vehicle. \item \textbf{Comfort.} Comfort was measured through a custom scale consisting of six statements (Cronbach's $\alpha=0.90$) prompting users to rate how comfortable they felt towards a self-driving car employing the trained agent (Appendix \ref{app:comfort}). \item \textbf{Projected Capability.} Projected capability was measured through a custom scale consisting of four statements (Cronbach's $\alpha=0.87$) prompting participants to rate how capable they felt the self-driving car employing the trained agent to be (Appendix \ref{app:cabability}). \item \textbf{Task Confidence.} To quantify their perception of the AI agent's performance, we asked participants to rate their confidence (0--100\%) in its ability to identify specified items (e.g., stop sign) for a set of 14 images. This set included two images for each of the six item categories (12 images) and two images representing ``unseen'' items (e.g., no-left-turn sign and pedestrian-crossing sign) that were not included in the object categories used for training agents A1 and A2. \end{itemize} \subsection{Procedure} \label{sec:procedure} The study consisted of five phases: (1) \textit{Introduction and consent}. Upon opening the website, participants were briefed about the study and were informed that they would be training AI agents to become driving assistants by providing examples of things (e.g., stop signs and pedestrians) that the agents may encounter on the road. (2) \textit{Reference}. The participants reviewed the images used to train the baseline, pre-trained agent and completed the confidence assessment and perception survey. (3) \textit{Labeling training examples for AI agents A1-A4}.
The participants labeled training data for the four experimental conditions, which were counterbalanced using a Latin square design. (4) \textit{Confidence assessment and perception survey}. Participants were asked to rate their task confidence and to answer questions about trust. They then continued to the next condition and repeated phases 3--4. (5) \textit{Post-study questionnaire}. At the end, participants filled out a post-study questionnaire, which asked which agent they trusted the most and collected demographic information. The study was approved by our institutional review board and took approximately 45 minutes to complete. The participants were compensated with \$10 USD upon completion of the study. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/results.pdf} \caption{One-way repeated measures ANOVAs were conducted to discover effects of experimental condition on comfort, projected capability, and task confidence for seen and unseen cases. Error bars represent 95\% confidence intervals; only significant comparisons ($p<.05$) are highlighted.} \label{fig:results} \end{figure*} \section*{Results} A total of 35 participants (17 females, 17 males, 1 non-binary) were recruited for this online study via convenience sampling. The participants were aged between 18 and 35 ($M=25.91, SD=4.78$) and were from a variety of educational backgrounds, including computer science, engineering and technology, social work, healthcare, life sciences, business, law, media, public policy, and education. The participants reported having minimal experience with self-driving cars ($M=1.40, SD=0.85$), and moderate experience with AI products ($M=3.49, SD=1.72$) and with training AI or machine learning models ($M=3.09, SD=1.69$), using 6-point rating scales with 1 being no experience and 6 being lots of experience. Figure \ref{fig:results} summarizes our main findings. For all statistical tests reported below, $p<.05$ was considered a significant effect.
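As an illustration of the kind of test reported in this section, a chi-square goodness-of-fit statistic for agent-choice counts can be computed directly. The counts below are hypothetical, chosen only for demonstration; they are not our data.

```python
# Chi-square goodness-of-fit statistic for "most trusted agent" choices
# among five agents (baseline, A1-A4) against a uniform expectation.
# The observed counts are hypothetical, for demonstration only.
observed = [2, 3, 4, 8, 18]                  # hypothetical choices, N = 35
expected = sum(observed) / len(observed)     # uniform expectation: 7 per agent
chi2 = sum((o - expected) ** 2 / expected for o in observed)
df = len(observed) - 1                       # 4 degrees of freedom
```

The resulting statistic is compared against the critical value $\chi^2_{.05}(4)\approx 9.49$; a larger value indicates that the agents were not chosen equally often.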
We followed Cohen's guidelines on effect size and considered $\eta_p^2=0.01$ a small effect size, $\eta_p^2=0.06$ a medium effect size, and $\eta_p^2=0.14$ a large effect size \citep{cohen1988statistical}. A chi-square goodness-of-fit test showed that users did not perceive AI agents, including the baseline agent, as equally trustworthy, $\chi^2(4,35)=43.60, p<.001, v=0.56$. In particular, A4 (active labeling with high precision) was considered the most trustworthy agent by the largest proportion of participants (51\%). A one-way repeated measures analysis of variance (ANOVA) yielded a significant main effect of experimental condition on comfort, $F(3,102) = 3.75, p=.013, \eta_{p}^{2} = .099$. Post-hoc pairwise comparisons with a Bonferroni correction revealed that comfort increased with active labeling with precision, A4 ($M=0.10, SD=0.19$), more than with active labeling without precision, A3 ($M=0.02, SD=0.22$), $p=.028$. Moreover, a one-way repeated measures ANOVA yielded a significant main effect of the experimental condition on projected capability, $F(3,102) = 4.69, p=.004, \eta_{p}^{2} = .121$. Post-hoc pairwise comparisons with a Bonferroni correction revealed that active labeling with precision, A4 ($M=0.09, SD=0.22$), had higher improvement in projected capability than active labeling without precision, A3 ($M=-0.02, SD=0.22$), $p=.009$. A one-way repeated measures ANOVA yielded a significant main effect of the experimental condition on task confidence for \textit{unseen} cases, $F(3,102) = 6.76, p<.001, \eta_{p}^{2} = .166$. Post-hoc pairwise comparisons with a Bonferroni adjustment revealed that active labeling with precision, A4 ($M=18.00, SD=32.16$), had higher improvement in task confidence than passive labeling with precision, A2 ($M=-2.57, SD=16.47$), $p=.004$.
While a one-way repeated measures ANOVA yielded a significant main effect of the experimental condition on task confidence for seen cases, $F(3,102) = 4.44, p=.006, \eta_{p}^{2} = .115$, we did not observe any significant differences in pairwise comparisons. \section{Discussion} In this study, we observed that users associated higher levels of comfort and projected capability with the agents for which they labeled training data with precision. Moreover, for unseen cases, users perceived the agent for which they were able to freely label objects of interest to be more capable. Our results suggest that everyday users can perceive the importance of high-precision training data representative of diverse scenarios in determining AI task performance. Therefore, involving users in mock training exercises where they obtain hands-on experience with training data may help them in developing accurate mental models of how an AI agent operates and in maintaining appropriate trust levels in the AI agent's performance before working with or using the AI technology. Furthermore, our study suggests that greater levels of user involvement (e.g., precise labeling using bounding boxes) may help users feel more comfortable with using an AI agent, even in a more safety-critical scenario. Overall, our study suggests that mock training setups can help establish appropriate user understanding and improve preconceptions of how an AI agent will operate prior to real interaction with it. \label{sec:limitations} One of the limitations of this study is that we measured user trust in AI through a single questionnaire item, rather than relying on behavioral (e.g., \cite{yu2019trust}) or physiological (e.g., \cite{hergeth2016keep}) measures. As a result, we may have failed to accurately or fully capture actual user trust in AI systems.
In future studies, we would like to investigate alternate methods for measuring and investigating trust so that we can better understand the range of factors that contribute to user trust in human-AI interaction. We would also like to expand our study of mock model training in AI systems to encompass new types of interactions in different domains. In this work, we chose to contextualize our study within the scenario of training AI agents for self-driving cars, which is a safety-critical domain that many users may not have direct experience with. Therefore, we would like to further investigate how our findings would apply to more general, commonplace scenarios that may involve lower stakes, such as speech-based interactions with AI agents in smart speakers. Furthermore, we investigated the effects of mock model training as \textit{explicit} participation in this work, but users may participate in different forms and in other phases of machine learning, such as algorithm design or error correction. Moreover, modern machine learning systems may involve users without their knowledge or explicit consent, such as recommender systems used in online services. Future work should investigate if user participation still positively influences perceptions of AI in cases where users are engaged outside of AI training or implicitly without their awareness. \section*{Acknowledgements} This work was supported by the National Science Foundation award \#1840088, the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746891, and the Nursing/Engineering joint fellowship from the Johns Hopkins University. \newpage \bibliographystyle{unsrtnat}
\section{Introduction} \label{intro} In the context of star formation and evolution, understanding the physics of young low-mass stars is essential. Such stars possess strong magnetic fields that regulate the transfer of mass and angular momentum to and from the circumstellar disk, via accretion and outflow phenomena. Young low-mass stars are also intense sources of high-energy emission (UV and X-rays) that ionizes, heats, and photoevaporates material in the circumstellar disk, thus affecting its physical and chemical evolution and, eventually, the disk lifetime \citep{ErcolanoDrake2008,GortiHollenbach2009}. Low-mass pre-main sequence stars are classified as classical T~Tauri stars (CTTS) when they still accrete mass from the circumstellar disk. They become weak-line T~Tauri stars (WTTS) when the accretion process ends. Both CTTS and WTTS are bright in X-rays due to the presence of hot coronal plasmas, heated and confined by the intense stellar magnetic fields \citep{FeigelsonMontmerle1999,FavataMicela2003,PreibischKim2005,GudelNaze2009}. It has been suggested that in CTTS the accretion process, besides coronal magnetic activity, can provide an additional X-ray emission mechanism \citep{Ulrich1976,Gullbring1994,Lamzin1999}. Magnetospheric accretion models predict that in CTTS mass transfer from the inner disk onto the star occurs via accretion streams funneled by magnetic flux tubes \citep[e.g.][]{Konigl1991,HartmannHewett1994,BouvierAlencar2007}, where material moves in almost free fall with typical velocities of $\sim300-500\,{\rm km\,s^{-1}}$. The impact with the stellar atmosphere, usually involving small fractions of the stellar surface, generates shock fronts that heat the infalling material up to temperatures of a few MK, and therefore should yield significant emission in the soft X-ray band ($0.1-1$\,keV).
Numerical modeling predicts high $L_{\rm X}$ ($\sim10^{30}\,{\rm erg\,s^{-1}}$) even for low accretion rates ($10^{-10}\,{\rm M_{\odot}\,yr^{-1}}$), indicating that, at least in principle, X-ray emission related to the accretion process can rival or exceed coronal emission \citep{GuntherSchmitt2007,SaccoArgiroffi2008}. Strong evidence of accretion-driven X-rays from CTTS has been provided by the observed high densities of the X-ray emitting plasma at $T\sim2-4$\,MK \citep[$n_{\rm e}\sim10^{12}-10^{13}\,{\rm cm^{-3}}$,][]{KastnerHuenemoerder2002,SchmittRobrade2005,GuntherLiefke2006,ArgiroffiMaggio2007,HuenemoerderKastner2007,RobradeSchmitt2007,ArgiroffiFlaccomio2011}. These densities, considering the typical accretion rates and surface filling factors, are compatible with predictions of shock-heated material, and are significantly higher than that of typical quiescent coronal plasmas at temperatures of a few MK \citep[$n_{\rm e}\le10^{10}\,{\rm cm^{-3}}$,][]{NessGudel2004,TestaDrake2004}. Moreover, \citet{GudelTelleschi2007} observed a soft X-ray excess in CTTS with respect to WTTS, compatible with the scenario of a further plasma component at a few MK produced by accretion. However, other results disagree with predictions: the observed $L_{\rm X}$ of the high-density cool-plasma component in CTTS is lower than that predicted from the accretion rate by more than a factor 10 \citep{ArgiroffiMaggio2009,CurranArgiroffi2011}, leaving the coronal component as the major contributor to the X-ray emission in CTTS; furthermore, in the cool plasma of CTTS the density increases for increasing temperature, at odds with predictions based on a single accretion stream \citep{BrickhouseCranmer2010}.
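The infall velocities quoted above follow from simple free-fall energetics; a sketch, assuming representative (not source-specific) CTTS parameters and an assumed disk truncation radius:

```python
import math

G = 6.674e-8      # gravitational constant, cgs
M_SUN = 1.989e33  # solar mass, g
R_SUN = 6.957e10  # solar radius, cm

def v_infall(m_star, r_star, r_trunc=None):
    # Speed at the stellar surface for gas falling from rest at the disk
    # truncation radius r_trunc (from infinity if r_trunc is None):
    # v = sqrt(2 G M (1/R* - 1/Rt))
    inv = 1.0 / r_star - (1.0 / r_trunc if r_trunc else 0.0)
    return math.sqrt(2.0 * G * m_star * inv)

# Assumed illustrative values: M = 0.9 Msun, R = 1.1 Rsun, stream from Rt = 5 R*
m, r = 0.9 * M_SUN, 1.1 * R_SUN
print(v_infall(m, r) / 1e5)         # ~560 km/s (free fall from infinity)
print(v_infall(m, r, 5 * r) / 1e5)  # ~500 km/s, at the upper end of the quoted range
```

Lower stellar masses, larger radii, or smaller truncation radii move the result toward the lower end of the quoted $\sim300-500\,{\rm km\,s^{-1}}$ range.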
Because of these apparent discrepancies, different scenarios were proposed, suggesting that the high-density cool plasma in CTTS could be coronal plasma, confined into magnetic loops, that is somehow modified by the accretion process \citep{GudelSkinner2007,BrickhouseCranmer2010,DupreeBrickhouse2012}. In addition to containing plasma at a few MK, the shock region is known to be associated with material at $T\sim10^{4}$\,K or more, significantly hotter than the surrounding unperturbed photosphere, as a consequence of the energy locally deposited by the accretion process. This photospheric hot spot produces excess emission in the UV and optical band, which is often rotationally modulated because of the very small filling factor of the accretion-shock region and because accretion streams are usually not symmetric with respect to the rotation axis \citep{BouvierCabrit1993,HerbstHerbst1994,PetrovGahm2001}. Therefore, if the observed high-density X-ray emitting plasma also originates in the accretion shock, then its X-ray emission might display rotational modulation. Specifically, plasma heated in the accretion shock, observed in the X-rays, could display periodic variations in density, emission measure, average temperatures, absorption, and source optical depth, as a consequence of stellar rotation. First hints of accretion-driven X-rays that vary because of the stellar rotation were provided by \citet[][]{ArgiroffiFlaccomio2011} for the star V2129~Oph. Understanding the origin of this high-density plasma is important, both for constraining the total amount of X-rays emitted in CTTS, and for setting the energy balance of the accretion-shock region \citep{SaccoOrlando2010}. Ultimately, a definitive confirmation that this plasma component is material heated in the accretion shock would make its X-ray radiation an insightful tool to probe the physical properties (i.e.
density and velocity) of the accretion stream, and to measure the chemical composition of the inner disk material \citep{DrakeTesta2005}. To search for such X-ray modulation effects, we planned and carried out X-ray monitoring of V4046~Sgr, a close binary CTTS system in which both components are actively accreting from a circumbinary disk (see \S~\ref{vsagprop}). In this work we describe the first results from an {\it XMM-Newton} Large Program (LP) focused on V4046~Sgr, based on time-resolved high-resolution X-ray spectroscopy on timescales down to 1/10 of the system orbital period. To constrain the large-scale magnetic field and the accretion geometry, we also carried out a coordinated multi-wavelength campaign involving photometry, spectroscopy, and spectropolarimetry of V4046~Sgr. In \S~\ref{project} we summarize the project focused on V4046~Sgr, whose properties are described in \S~\ref{vsagprop}. Details of the data processing and analysis are reported in \S~\ref{obs}. The observing results are presented in \S~\ref{res}, and then discussed in \S~\ref{disc}. \section{The V4046~Sgr project} \label{project} The {\it XMM-Newton} observation of V4046~Sgr consists of a 360\,ks exposure performed on 2009 September 15-19 (Obs-id: 0604860201, 0604860301, and 0604860401). This observation is part of a quasi-simultaneous multi-wavelength campaign (optical photometry with REM/ROSS, 2009 September 1-30; optical spectroscopy with TNG/SARG, 2009 September 10-17; optical spectropolarimetry with CFHT/ESPADONS, 2009 September 2-8), aimed at simultaneously studying the properties of coronal plasmas, stellar magnetic field structure, photospheric spots (both cool spots and hot spots), and the accretion process. Here we present the results obtained with the {\it XMM-Newton}/RGS specifically aimed at searching for rotational modulation in the accretion-driven X-rays.
The results of the entire observing campaign are presented in a series of papers describing, among other results, the properties of the X-ray emitting plasma (A.~Maggio et al., in preparation), maps of the large-scale magnetic field structure and accretion geometry as inferred from optical spectropolarimetry \citep[][ G.~A.~J.~Hussain et al., in preparation, S.~G.~Gregory et al., in preparation]{DonatiGregory2011}, variations in the accretion process over a range of timescales (G.~G.~Sacco et al., in preparation), and the detection and identification of a distant comoving WTTS system \citep{KastnerSacco2011}. \section{V4046~Sgr properties} \label{vsagprop} V4046~Sgr is a close CTTS binary system composed of two solar-like mass stars \citep[masses of 0.91 and $0.88\,{\rm M_{\odot}}$, radii of 1.12 and $1.04\,{\rm R_{\odot}}$,][]{DonatiGregory2011}, separated by $8.8\,{\rm R_{\odot}}$. The two components are synchronously rotating with a period of 2.42\,d, in circularized orbits \citep{StempelsGahm2004}. V4046~Sgr is estimated to lie at a distance of 73\,pc \citep{TorresQuast2008} and it is viewed with an inclination of $35^{\circ}$ \citep[the angle between the rotation axis and the line of sight,][]{StempelsGahm2004,KastnerZuckerman2008}, with the orbital axis of the binary likely aligned with the individual stellar rotation axes \citep{DonatiGregory2011}. At an age of $\sim10-15$\,Myr \citep{TorresQuast2008,DonatiGregory2011}, V4046~Sgr is classified as a CTTS and is still surrounded by a dusty, molecule-rich circumbinary disk \citep{RodriguezKastner2010} from which both components are actively accreting \citep{StempelsGahm2004}. A previous {\it Chandra} observation \citep{GuntherLiefke2006} showed that V4046~Sgr has a cool plasma component ($T\approx2-4$\,MK) at high density ($n_{\rm e}\approx0.3-1\times10^{12}\,{\rm cm^{-3}}$), interpreted as material heated in the accretion shock.
At the time of the {\it XMM-Newton} observation, the spectroscopic optical monitoring demonstrated that both components were accreting at a constant rate of $5\times10^{-10}\,{\rm M_{\odot}\,yr^{-1}}$ \citep[inferred from the analysis of the \ion{Ca}{2} IRT, ][]{DonatiGregory2011}. Both components displayed complex magnetic fields \citep[average surface intensity of $\sim200$\,G,][]{DonatiGregory2011}, significantly weaker than that of younger solar-like CTTS \citep[e.g.][]{DonatiSkelly2010}. These magnetic fields are not strong enough to disrupt local disks farther than $1\,R_{\star}$ above the stellar surface, thus the formation of circumstellar disks around each component, distinct from the circumbinary disk, may be possible \citep{deValBorroGahm2011}. Accretion diagnostics based on the \ion{Ca}{2} IRT did not show significant rotational modulation, suggesting that the post-shock material contributing to these lines is symmetrically distributed with respect to the stellar poles. The optical monitoring campaign confirmed the orbital/rotational period ($2.42\,{\rm d}$), and determined the conjunction and quadrature epochs at the time of the {\it XMM-Newton} observation\footnote{The quadrature with primary receding occurred at 2455078.199 HJD}. In this work we adopt the phase reference defined in \citet[][ ${\rm HJD} = 2446998.335+2.4213459\,E$, with phase $0.0$ indicating the quadrature with primary receding]{StempelsGahm2004}. However, our optical monitoring revealed a phase shift of 0.069 with respect to that ephemeris, with quadratures occurring at phases 0.93 and 0.43, and conjunctions at phases 0.18 and 0.68 \citep{DonatiGregory2011}. \section{Observations} \label{obs} The {\it XMM-Newton} observation of V4046~Sgr, composed of three observing segments of $\sim120$\,ks each, separated by gaps of $\sim50$\,ks, covered 2.2 system rotations. X-ray emitting material heated in the accretion shock is expected to have temperatures of a few MK at most.
Therefore, to search for X-ray variability possibly produced in the accretion shock, we analyzed the {\it XMM-Newton}/RGS spectra, which contain emission lines that specifically probe the coolest plasma components. The RGS spectrograph, composed of two nominally identical gratings (RGS1 and RGS2), covers the $\sim2-38$\,\AA\, wavelength range. The first order spectrum, covering the $4-38$\,\AA\, band, has a resolution FWHM of 0.06\,\AA, while the second order provides a resolution FWHM of 0.03\,\AA\, in the $\sim2-19$\,\AA\, range. We extracted RGS spectra using the standard {\sc rgsproc} task. Data were filtered discarding time segments affected by high background count rates. The final net exposures of the three observing segments were 115, 122, and 120\,ks, respectively. We then applied the {\sc rgscombine} task to add the RGS1 and RGS2 spectra of the same order. In total, 34800 and 8200 net counts were registered in the first and second order RGS spectra, respectively. We analyzed the RGS spectra using the IDL package PINTofALE~v2.0 \citep{KashyapDrake2000} and the XSPEC~v12.5 \citep{Arnaud1996} software. We measured individual line fluxes by fitting simultaneously first and second\footnote{The second order spectrum was used only for lines contained in its wavelength range.} order RGS spectra. The fit procedure was performed in small wavelength intervals ($\Delta \lambda \lesssim 1.0$\,\AA). The adopted best-fit function takes into account the RGS line spread function (determined by the response matrix function), and the continuum contribution (determined by adding a constant to the line emission, and leaving this constant as a free parameter in the fit).
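Schematically, the line-fitting step described above reduces to a line profile of fixed instrumental width plus a free additive constant for the continuum. A sketch on synthetic data, using a Gaussian stand-in for the line spread function (this is illustrative only, not the actual PINTofALE/XSPEC machinery, and the synthetic line parameters are assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM = 0.06  # RGS first-order resolution, Angstrom
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def line_model(wav, flux, center, cont):
    # Gaussian line of fixed instrumental width plus a constant continuum;
    # free parameters: integrated line flux, line center, continuum level
    gauss = np.exp(-0.5 * ((wav - center) / SIGMA) ** 2) / (SIGMA * np.sqrt(2.0 * np.pi))
    return cont + flux * gauss

# Synthetic O VIII Ly-alpha line at 18.98 A on a flat continuum, with noise
rng = np.random.default_rng(1)
wav = np.arange(18.5, 19.5, 0.01)
obs = line_model(wav, 2.0, 18.98, 0.5) + rng.normal(0.0, 0.02, wav.size)

popt, _ = curve_fit(line_model, wav, obs, p0=[1.0, 19.0, 0.0])
flux, center, cont = popt  # recovers ~2.0, ~18.98, ~0.5
```

The fit window here mimics the small ($\Delta\lambda\lesssim1$\,\AA) intervals used in the analysis.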
\section{Results} \label{res} The RGS spectra collected during the entire observation (see details in A.~Maggio et al., in preparation) indicate that the main properties of the X-ray emitting plasma of V4046~Sgr are similar to those observed during the previous {\it Chandra} observation \citep{GuntherLiefke2006}: the plasma at $T\sim1-4$\,MK has high density, $n_{\rm e}\sim10^{11}-10^{12}\,{\rm cm^{-3}}$, as determined by the $f/i$ line ratio of He-like triplets of \ion{N}{6}, \ion{O}{7}, and \ion{Ne}{9}\footnote{The measurement of the \ion{Ne}{9} triplet was performed by including in the fit the \ion{Fe}{19} line at 13.52\,\AA, which is in any case weaker than the \ion{Ne}{9} lines.}. \input{t1} \begin{figure*} \epsscale{2.0} \plotone{f1.ps} \caption{Total flux of the cool line set versus time. The set of cool lines is composed of: \ion{Ne}{9} triplet (13.45, 13.55, and 13.70\,\AA), \ion{O}{8} Ly$\beta$ and Ly$\alpha$ (16.00 and 18.98\,\AA), \ion{O}{7} resonance line (21.60\,\AA), and \ion{N}{7} Ly$\alpha$ (24.78\,\AA). Horizontal error bars represent the time-bin width. Dotted line marks the best-fit sinusoidal function. Orbital/rotational phases are computed according to the ephemeris ${\rm HJD} = 2446998.335+2.4213459\,E$ defined in \citet{StempelsGahm2004}. Vertical dashed lines (dark gray) indicate quadrature and conjunction epochs, with the corresponding schematic views of the system plotted above (white and gray circles represent the primary and secondary components, respectively). Time intervals adopted for extracting spectra corresponding to {\it low} and {\it high} phases are marked by the vertical bands (light blue and light red for the low and high phase, respectively).} \label{f1} \end{figure*} \begin{figure*} \epsscale{2.0} \plotone{f2.ps} \caption{Total flux of the hot line set versus time. The set of hot lines is composed of: \ion{Ne}{10} Ly$\alpha$ line at 12.13\,\AA~ and \ion{Fe}{17} line at 15.02\,\AA.
Dotted line marks a sinusoidal function with the same period, phase, and relative amplitude obtained from the best fit of the total flux of the cool line set. The hot lines do not show rotational modulation, unlike the cool lines (see Fig.~\ref{f1}), suggesting that their variability is associated with coronal plasma variability.} \label{f2} \end{figure*} \subsection{Time resolved RGS spectra} To investigate variability on short timescales, we analyzed RGS spectra gathered in time intervals of $\sim25$\,ks (i.e. bins of 0.12 in rotational phase). In total, nine lines have fluxes detected at the 1$\sigma$ level in all the time intervals. These lines, and their fluxes at different time intervals, are reported in Table~\ref{t1}. Significant variability on the explored timescales is observed for all the listed lines. To check for variations in the coolest plasma components we considered lines with peak formation temperature $T_{\rm max}<5$\,MK among the lines reported in Table~\ref{t1}. This sample of lines, named {\it cool} lines, is composed of: the \ion{Ne}{9} triplet (13.45, 13.55, and 13.70\,\AA), \ion{O}{8} Ly$\beta$\footnote{This line is blended with an \ion{Fe}{18} line, which is however negligible given the $EMD$ and abundances of the X-ray emitting plasma.} and Ly$\alpha$ (16.00 and 18.98\,\AA), \ion{O}{7} resonance line (21.60\,\AA), and \ion{N}{7} Ly$\alpha$ (24.78\,\AA). Among the lines reported in Table~\ref{t1}, the \ion{Ne}{10} and \ion{Fe}{17} lines are excluded from the {\it cool} line sample because their $T_{\rm max}$ is higher than 5\,MK. Therefore their flux likely includes significant contributions from hot plasma. These two lines compose the {\it hot} line sample. \input{t2} To maximize the $S/N$ of the coolest plasma emission we added the measured fluxes of the cool lines for each time interval.
This total line flux, plotted in Fig.~\ref{f1}, is variable and the observed modulation is clearly linked to the stellar rotation: the flux is higher near phases 0.0 and 0.5, i.e. quadrature phases, and lower near phases 0.25 and 0.75, i.e. conjunction phases. To confirm this variability pattern we fitted these observed flux variations with a sinusoid plus a constant. We left all the best-fit function parameters (period, phase, amplitude, and the additive constant) free to vary. We obtained a best-fit period of $1.22\pm0.01$\,d, and an amplitude of $23\pm2\,\%$ with respect to the mean value (Table~\ref{t2}). The inferred period is exactly half the rotational period of the system. As expected, maximum and minimum phases occur approximately at quadrature and conjunction, respectively. To check whether this observed modulation is effectively linked to the cool plasma emission, and not to a single line, we performed the same fit by separately considering the total flux obtained from different and independent cool line subsets. In all the inspected cases (see Table~\ref{t2}) we found the same periodic variability (period, phase, amplitude). We checked whether this modulation is present also in the emission of hotter plasma by applying the same fit procedure to the total flux of the hot lines, \ion{Ne}{10} and \ion{Fe}{17}. Fit results are reported in Table~\ref{t2}; in this case the periodic modulation is not detected. The observed variability is instead likely dominated by hot (coronal) plasma. Figure~\ref{f2} compares the \ion{Ne}{10}+\ion{Fe}{17} line variability with the modulation observed for the cool lines. The detected X-ray rotational modulation is also not visible in the EPIC lightcurves (A.~Maggio et al., in preparation), even considering only a soft band. The substantial continuum contribution mostly due to the highly variable hot plasma likely masks the rotationally modulated signal.
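The sinusoid-plus-constant fit described above can be sketched on synthetic fluxes as follows (the sampling, amplitude, and noise level are assumed for illustration; these are not the measured fluxes):

```python
import numpy as np
from scipy.optimize import curve_fit

P_ROT = 2.4213459  # system rotation period, days

def model(t, mean, amp, period, phase):
    # sinusoid plus constant; amp is the amplitude relative to the mean level
    return mean * (1.0 + amp * np.sin(2.0 * np.pi * (t / period - phase)))

# Synthetic fluxes: ~25 ks bins over ~2.2 rotations, 23% modulation at P_ROT/2
rng = np.random.default_rng(2)
t = np.arange(0.0, 5.4, 0.29)
f = model(t, 1.0, 0.23, P_ROT / 2.0, 0.1) + rng.normal(0.0, 0.03, t.size)

popt, _ = curve_fit(model, t, f, p0=[1.0, 0.2, 1.2, 0.0])
best_period = popt[2]  # ~1.21 d, half the rotation period
```

With this sampling the fit recovers the input half-period, illustrating that $\sim$2.2 rotations of 25\,ks bins suffice to pin down a period of this order.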
Hence we conclude that the observed X-ray line flux modulation is due to the high-density, cool plasma component. To understand the nature of the observed variability we searched for variations in the average temperature by considering ratios of lines originating from the same element. All the inspected ratios display significant variability, but are not correlated among themselves, and are not related to the rotational phase. We also searched for variations in the plasma density, probed by the $f/i$ ratio of the \ion{Ne}{9} triplet. This line ratio is approximately constant ($f/i\approx1$, indicating $n_{\rm e}\approx10^{12}\,{\rm cm^{-3}}$) during the entire observation, except for a lower value measured during the third interval of the second segment ($f/i=0.45\pm0.13$, corresponding to $n_{\rm e}=(5.2^{+2.0}_{-1.3})\times10^{12}\,{\rm cm^{-3}}$), and a higher value observed during the fourth interval of the third segment ($f/i=3^{+2.5}_{-1.1}$, corresponding to $n_{\rm e}<4\times10^{11}\,{\rm cm^{-3}}$). These variations appear to be associated with episodic events, like clumpy accretion flows, and not with a rotational modulation effect. \input{t3} \input{t4} \begin{figure*} \epsscale{2.0} \plotone{f3.ps} \caption{RGS spectra corresponding to minimum (low) and maximum (high) phases, with exposure times of 84 and 94\,ks, respectively. For clarity, the two spectra are slightly smoothed, and the spectrum of the high-flux phase is shifted toward longer wavelengths by $0.1$\,\AA.} \label{f3} \end{figure*} \subsection{RGS spectra at different phases} The total flux of the cool lines from V4046~Sgr displayed variations in time linked to the stellar rotation. To investigate the differences in the X-ray emitting plasma between epochs of low and high fluxes of the cool lines, we added RGS data collected at the same phases with respect to the X-ray rotational modulation.
We extracted two RGS spectra obtained by adding all the events registered during time intervals centered on maximum and minimum times, with duration of one fourth of the observed X-ray period (integration time intervals are shown in Fig.~\ref{f1}). The two resulting {\it low} and {\it high} spectra, whose exposure times are 84 and 94\,ks respectively, are shown in Fig.~\ref{f3}, while the measured line fluxes, detected at the 1$\sigma$ level in the two spectra, are listed in Table~\ref{t3}. We searched for differences in the {\it low} and {\it high} spectra to investigate how the emitting plasma properties vary between these two phases. The two spectra display significantly different photon flux ratios of \ion{N}{7}, \ion{O}{8}, \ion{Ne}{9} lines, as reported in Table~\ref{t4}. In principle, these line ratios may vary due to changes in absorption, plasma temperature, or source optical depth. In Fig.~\ref{f4} we plot the measured line ratios together with the values predicted in the optically thin regime for different temperatures and different hydrogen column densities, $N_{\rm H}$. Absorption can change line ratios because, on average, lines at longer wavelengths suffer larger attenuation for increasing $N_{\rm H}$. The two \ion{N}{7} lines considered here are an exception, because the absorption cross section of the interstellar medium has the oxygen K-shell edge \citep[23.3\,\AA, e.g.][]{WilmsAllen2000ApJ} located between their wavelengths, making the longer wavelength line, the Ly$\alpha$ (24.78\,\AA), slightly less absorbed than the Ly$\beta$ (20.91\,\AA). The two lines however suffer very similar absorption, making the absorption effect of little relevance in the case of the \ion{N}{7} Ly$\alpha$/Ly$\beta$ ratio (as can be seen from the upper panel of Fig.~\ref{f4}, where the curves predicted for different $N_{\rm H}$ are very similar). Therefore any change in this line ratio, such as that observed, can hardly be explained in terms of $N_{\rm H}$ variability.
Instead, an $N_{\rm H}$ decrease from the {\it low} to the {\it high} state might explain the variation of the \ion{O}{8} line ratios, but an opposite $N_{\rm H}$ variation should be invoked to justify the \ion{Ne}{9} variability (middle and lower panels of Fig.~\ref{f4}). All these findings indicate that the hydrogen column density toward the source appears to be unchanged, and that line ratio variability is produced by a different mechanism. This conclusion is supported by the similar fluxes between {\it low} and {\it high} spectra measured for the two lines at long wavelengths (\ion{N}{6} at 28.8\,\AA\, and \ion{C}{6} at 33.7\,\AA), the most affected by absorption, and it is also confirmed by the full-fledged analysis of the EPIC data presented in A.~Maggio et al., in preparation, where $N_{\rm H}$ is found to vary by only a factor 2 over the whole observation around a mean value of $3\times10^{20}\,{\rm cm^{-2}}$ (i.e. $\log N_{\rm H} = 20.5$). \begin{figure} \epsscale{1.0} \plotone{f4.ps} \caption{Ly$\alpha$ to Ly$\beta$ photon flux ratio for the \ion{N}{7} and \ion{O}{8} H-like ions, and $13.45$\,\AA~ to $11.55$\,\AA~ ratio for the \ion{Ne}{9} He-like ion, vs. plasma temperature. Horizontal bands indicate values measured during {\it high} (red) and {\it low} (blue) phases. Black, dark gray, and light gray lines represent predicted values for different absorptions $N_{\rm H}$, with labels reporting the corresponding $\log N_{\rm H}$ value, and with thicker lines marking curve portions compatible with the observed ratios.} \label{f4} \end{figure} The three explored ratios depend also on temperature, because of the different energy of the upper levels of the two electronic transitions considered in each ratio. In this respect, the three ratios do not vary consistently. In fact, a temperature decrease from the {\it low} state to the {\it high} state could explain the increasing \ion{N}{7} and \ion{O}{8} line ratios, but not the variation of the \ion{Ne}{9} lines.
The derivation of the plasma model (see A.~Maggio et al., in preparation) is beyond the scope of this work, but we anticipate here that the $EMD$ does not appear to vary enough between the two phases to justify the observed variations of the line ratios. Moreover the average plasma temperature ($\log T \sim 6.6-6.7$), together with the measured $N_{\rm H}$, indicates that, in some phases, line ratios are not compatible with the optically thin limit, irrespective of the nature of their variability. Optical depth effects can change line ratios because each line optical depth is directly proportional to the oscillator strength of the transition \citep{Acton1978}. Therefore, if optically thin emission does not apply, transitions with very different oscillator strengths may suffer different attenuation/enhancement, with stronger effects occurring in lines with higher oscillator strengths. In the inspected ratios the lines with higher oscillator strength are the Ly$\alpha$ lines of \ion{N}{7} and \ion{O}{8}, and the $13.45$\,\AA~line of \ion{Ne}{9} \citep[e.g.][]{TestaDrake2007}. Non-negligible optical depth is expected in strong X-ray resonance lines produced from shock-heated plasma in CTTS \citep{ArgiroffiMaggio2009}. The observed variable ratios might indicate that some lines are affected by a changing optical depth. Since the expected attenuation/enhancement with respect to optically thin emission depends on the source geometry and viewing angle, stellar rotation can produce periodic changes in the line opacity, and hence in the observed ratios. However, once again the three ratios do not vary in the same direction, with the \ion{N}{7} and \ion{O}{8} ratios being higher in the {\it low} state, whereas the ratio of the slightly hotter \ion{Ne}{9} lines is higher in the {\it high} phase. Summarizing, we stress the significant variations observed in line ratios between {\it low} and {\it high} phases. The origin of these variations remains unclear.
If these variations were due to changes in plasma temperature or absorption, a coherent behavior would be expected for the three ratios, which is not the case. Opacity effects, instead, can operate in a more complex way, producing either line enhancements or reductions, depending on the source geometry and viewing angle. This hypothesis is therefore the most intriguing, especially considering that in some phases line ratios are discrepant from the value expected in the optically-thin limit. \section{Discussion} \label{disc} The main result of the time resolved spectral analysis of the X-ray emitting plasma from V4046~Sgr (\S~\ref{res}) is that the high-density plasma component at $3-4$\,MK is rotationally modulated with a period of half the system orbital period, with maximum and minimum phases occurring at quadrature and conjunction epochs, respectively. The observed X-ray rotational modulation indicates that this high-density plasma component is not symmetrically distributed with respect to the stellar rotational axes. We also found that strong emission lines from this plasma component provide some indications of non-negligible optical depth effects, and that the periodic modulation appears to be associated with variations in the source optical depth, as evidenced by the significant variations in line ratios sensitive to optical depth observed between {\it low} and {\it high} phases. The strongest X-ray emission lines, produced by shock-heated material in CTTS, are expected to have non-negligible optical depth due to the high density and typical size of the post-shock region \citep{ArgiroffiMaggio2009}. Moreover the optical depth should vary if the viewing geometry of the post-shock region changes. Hints of non-negligible optical depth observed in the strongest X-ray lines of V4046~Sgr indicate that the high-density plasma is mostly concentrated in a compact portion of the stellar surface, as predicted for the post-shock material.
Moreover, the variability of the optical depth can be naturally explained with the changing viewing geometry of the volume occupied by the high-density plasma during stellar rotation. This scenario requires plasma confinement by the stellar magnetic fields. We observed an X-ray period of half the system orbital period, as already observed for accretion indicators \citep[e.g.][]{VrbaChugainov1993,KurosawaHarries2005} and X-ray emission \citep{FlaccomioMicela2005} from some CTTS. That could be explained, in the case of V4046~Sgr, by different scenarios. If the X-ray emitting plasma is located only in one of the two system components, then a period of half the rotational period is observed when there are two accretion-shock regions on the stellar surface at opposite longitudes, or there is only one accretion-shock region and the maximum X-ray flux is observed when the base of the accretion stream is viewed sideways \citep[][ a configuration that occurs twice in one stellar rotation]{ArgiroffiFlaccomio2011}. Considering the system symmetry and the accretion geometry previously suggested by \citet{StempelsGahm2004}, it is conceivable that both components possess similar amounts of high-density cool plasma. In this scenario the half period can be naturally explained assuming that the location on each stellar surface of this plasma, compact and not azimuthally symmetric with respect to each stellar rotation axis, is symmetric for $180^{\circ}$ rotations with respect to the binary rotation axis. The simultaneous optical monitoring campaign indicated that the two components have similar accretion rates, validating the assumption that the two components possess similar amounts of high-density plasma. However, the optical accretion spots, probed by \ion{Ca}{2} IRT, did not show rotational modulation \citep{DonatiGregory2011}. Therefore accretion regions emitting \ion{Ca}{2} should be symmetrically distributed with respect to the stellar poles. 
This scenario, different from that obtained from the X-ray data, could be reconciled considering that X-rays are likely produced only by a fraction of the entire accretion-shock region \citep{SaccoOrlando2010}. In conclusion, our {\it XMM-Newton}/RGS data of the V4046~Sgr close binary system have shown for the first time the rotational modulation of X-ray lines characteristic of a cool, high-density plasma corotating with the stars. This strongly supports the accretion-driven X-ray emission scenario, in which the high-density cool plasma of CTTS is material heated in the accretion shock. It moreover suggests that the accretion flow is channeled by magnetic field lines anchored on the stars, along small magnetic tubes. This is consistent with the general framework of magnetic accretion, but brings new insights into the accretion mechanism in close binary systems of CTTS. \acknowledgments This work is based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. C.A., A.M., and F.D. acknowledge financial contribution from the agreement ASI-INAF I/009/10/0. \newpage
\section{Introduction} Neurobiologists have long pursued an understanding of the emergent phenomena of nervous systems, such as the neuronal basis for choice and behavior. Much research on neuronal systems grapples with the complex dynamics of interactions among multiple neurons. New techniques, such as calcium imaging, voltage-sensitive dye (VSD) imaging \citep{cacciatore,gonzalez} and multi-unit electrode recordings, enable larger views of nervous systems. However, for many experimental preparations, the amount of data that can be collected via tedious experiments is limited. As an example, data from voltage-sensitive dyes are time-limited because of bleaching of the dyes and also neuronal damage caused by phototoxicity. We have developed methods for combining the data from multiple experiments to pool data on neural function. The approach allows us to make inferences from data sets that are impossible to obtain from individual preparations. Coalescing the data from multiple experiments is an intrinsically difficult problem because of the difficulty in matching cells and their roles across animals. Variation is observed in nervous systems of individual animals based on developmental differences as well as artifacts introduced in the preparation and execution of experiments. Developing a means for identifying correspondences in cells across animals would allow data to be pooled from multiple animals supporting deeper inferences about neuronal circuits and behaviors. We focus specifically on experimental studies of neurons composing the ganglia of H. verbana \citep{briggman}. The leech has a stereotypical nervous system consisting of repeating packets of about 400 neurons. About a third of these neurons have been identified, and these neurons can be found reliably in different animals. The remaining two-thirds of neurons have yet to be identified, but are believed to maintain similar properties and functional roles across animals. 
The general problem of correspondence matching of the cells of two different animals is illustrated in Figure 1. We seek to identify neurons that are equivalent across ganglia obtained from different animals. For example, the red cell in animal $a$ has several candidate correspondences in animal $b$, but with varying degrees of similarity (indicated by shades of red). Ideally, we will find a one-to-one match by jointly considering multiple similarities. With only two animals, this problem can be mapped to a bipartite graph-matching task and can be solved optimally \citep{munkres}. However, we want to jointly solve the matching task for larger numbers of animals. Such matching across multiple graphs defined by the individual nervous systems is intractable (NP-hard) \citep{papa}. In addition, the problem is even more difficult because such matching must also take into account variations in the numbers and properties of neurons observed in different animals. These variations can be due to both developmental differences (e.g., some neurons may be missing or duplicated) and experimental artifacts (e.g., some neurons may be out of the plane of focus or destroyed in the delicate dissection). A key challenge in this endeavor is the formulation of a similarity measure that takes into account physical parameters of cells, such as their size and location, as well as their functional properties. \begin{figure} \includegraphics[width=\textwidth]{Fig1.png} \caption{Challenge of identifying correspondence among neurons in ganglia from different animals. Given compatibility constraints, a correspondence algorithm seeks a one-to-one mapping between neurons in two H. verbana preparations. The goal is to find correspondences between cells in each of the animals. The color coding illustrates compatibility constraints: Feasible matches for the highlighted red, green, and blue cells in animal 1 (c) are found in animal 2 (d), as indicated with matched colors. 
The degree of feasibility of the matches is depicted via shading of cells in animal $b$, where the most compatible cell for each source neuron in animal $a$ is highlighted with a white border. Although the figure shows only two animals, such compatibility constraints occur across all pairs of the 6 animals used.} \end{figure} \section{Machine Learning Framework} \begin{table} \caption{The proposed algorithmic framework}\label{algo} \begin{center} \begin{tabular}{l l l} \hline \hline & & Given training data with pairs of match and\\ & & non-match cells, estimate the parameter matrix\\ Step 1: & Learn Compatibility Measure & $A$ that defines the compatibility function\\ & & $f^{ab}(i,j)$, between any $i^{th}$ and $j^{th}$ neurons for\\ & & all animal pairs. \\ \hline & & Start with an initialized empty match set $\mathcal{S}_0$.\\ & & Iteratively determine the next best match\\ Step 2: & Recover Correspondence Map & $\mathcal{M}_t$ by solving equation \ref{equ:match}\\ & & and update $\mathcal{S}_t = \mathcal{S}_{t-1} \cup \mathcal{M}_t$.\\ & & End when all the cells are matched. \\ \hline & & Construct the matrix $Y$ that aggregates\\ & & data from all the animals, where each \\ Step 3: & Infer Missing Data & row corresponds to cells and are permuted\\ & & according to the matching. Use Probabilistic \\ & & PCA on $Y$ to infer the missing data.\\ \hline \hline \end{tabular} \end{center} \end{table} We use a set of neuronal data collected in \cite{briggman}, which consists of optical VSD recordings from populations of $123$-$148$ neurons in a mid-body segmental ganglion from six different leeches. Earlier research on this data identified neurons involved in decision-making. In particular, the study aimed at understanding the roles of neuronal populations in decisions to swim or to crawl following stimulation. Sensory neurons (DP nerve) were stimulated in such a way that would elicit, with equal ($0.5$) likelihood, swimming or crawling. 
This previous study considered single cell activations and joint analysis of neurons using the dimensionality reduction techniques PCA and LDA. However, these techniques were limited to one animal at a time. In the current study, we propose a framework that analyzes data across animals to increase the power of the analysis. Rather than using a handcrafted measure, we have employed a machine-learning framework that relies on supervised training data. This algorithm estimates an appropriate similarity function between neurons in different animals based on a training set of high-confidence correspondences. These correspondences are drawn from readily identified neurons in the nervous systems of H. verbana (Muller et al., 1981). An important capability of the algorithm is to take into account the probabilistic nature of inferred correspondences. The algorithm begins by learning a weighting function of relevant features that maximizes the likelihood of matches within the training set. The next step of the approach is to jointly solve the correspondence matching problem for neurons across animals, while considering potential missing or extra cells in each animal. The final step uses the recovered correspondences to fill in missing data. As we will demonstrate below, pooling neurophysiological data from multiple studies in a principled manner leads to a larger effective data set with greater statistical power than the individual studies. Specifically, the pipeline for the methodology includes three steps (detailed in Table \ref{algo}): (1) determining a similarity score across pairs of cells, (2) recovering correspondences that are consistent with the similarity measure, and (3) estimating missing data. We describe these steps in detail below: \subsection{Learning Similarity Measure for Cells} The goal in Step 1 is to learn a similarity function $f^{ab}(i,j)$ indicating the feasibility of a match between the $i^{th}$ cell in animal $a$ and the $j^{th}$ cell in animal $b$. 
The most desirable characteristic for such a function is a high positive value for likely matches and diminishing values for poor matches. Such a characteristic is captured by the exponentiation of a negative distance measure among sets of features that represent multiple properties of cells. Formally: \begin{equation} f^{ab}(i,j)=e^{-[\phi(i,a)-\phi(j,b)]^T A[\phi(i,a)-\phi(j,b)]} \end{equation} Here, $\phi(\cdot)$ is a $d$-dimensional feature representation of an individual cell in a given animal, summarizing physical (e.g., size, location) and functional (e.g., optical recordings) properties. $A$ is a $d \times d$ parameter matrix with positive entries that are learned from data. Intuitively, the negative log of the similarity function is a distance function between the feature representations: a zero distance between two feature vectors results in the highest similarity value of 1, whereas representations farther apart in feature space yield diminishing values. The matrix $A$ parameterizes this distance measure. Given training data consisting of several probable pairs of matched neurons, we estimate $A$ by solving an optimization problem. We describe the details below. 
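As a concrete illustration, the compatibility function $f^{ab}(i,j)$ defined above can be sketched in a few lines of Python (a minimal sketch; the function name `similarity` and the use of NumPy are ours, not part of the original implementation):

```python
import numpy as np

def similarity(phi_i, phi_j, A):
    """Compatibility f^{ab}(i, j) = exp(-(phi_i - phi_j)^T A (phi_i - phi_j)).

    phi_i, phi_j: length-d feature vectors for one cell in each animal.
    A: (d, d) parameter matrix with non-negative entries.
    """
    diff = phi_i - phi_j
    return float(np.exp(-diff @ A @ diff))
```

Identical feature vectors give a similarity of 1, and the value decays as the (Mahalanobis-style) distance parameterized by $A$ grows.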
The following features were used in our work: \begin{itemize} \item{{\bf Structural features:} Absolute position of the cell with respect to the entire observed frame, relative position of the cell in relation to the entire ganglion, absolute size of the cell in pixels, an indicator vector specifying the packet the neuron is located in (among the Central, Left Anterior, Left Posterior, Right Anterior, Right Posterior or Central Posterior packets), and the relative position coordinate of the neuron in its respective packet.} \item{{\bf Functional features:} Coherence of electrophysiological observations with swim oscillations and the single cell discrimination time (see \cite{briggman}) that distinguishes the swim from the crawl behavior.} \end{itemize} Intuitively, the optimization problem finds the parameter $A$ that minimizes the distance between pairs of cells that were tagged as matches, while maximizing the distance among other pairs. Formally, the parameter $A$ of the compatibility measure is estimated by minimizing the objective: \begin{equation} A^* = \arg \min_{A} \sum_{i,j}{[-2 \log f^{ab}(i,j) + \log \sum_{j'\in b, j' \neq j} f^{ab} (i,j') + \log \sum_{i' \in a, i' \neq i} f^{ab}(i',j)]} \end{equation} subject to the constraint that all entries of $A$ are positive. The sum is over all the labeled training pairs $(i,j)$ tagged as likely matches. Intuitively, the first term $-2 \log f^{ab} (i,j)$ in the objective prefers solutions that would collapse the distance between matched pairs to zero, while the remaining terms prefer solutions where the distances between the rest of the cells are maximized. The optimization is straightforward and simple gradient descent will find a locally optimal solution. Note that a positive semi-definite constraint on $A$ would be more appropriate; however, we suggest using a non-negativity constraint due to its simplicity in optimization, with almost no reduction in the performance of the pipeline. 
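The objective above and its minimization by projected gradient descent could look like the following sketch (the analytic gradient of the quadratic form is standard; the names `loss_and_grad` and `learn_A`, the learning rate, and the step count are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def loss_and_grad(A, pairs, feats_a, feats_b):
    """Objective of the equation above: pull matched pairs together,
    push all other candidate pairings apart.

    pairs: list of (i, j) index pairs tagged as likely matches.
    feats_a, feats_b: (n_a, d) and (n_b, d) feature matrices.
    """
    d = A.shape[0]
    loss, grad = 0.0, np.zeros((d, d))
    for i, j in pairs:
        diff = feats_a[i] - feats_b[j]
        # -2 log f(i, j) = 2 * diff^T A diff
        loss += 2.0 * diff @ A @ diff
        grad += 2.0 * np.outer(diff, diff)
        # log sum_{j' != j} f(i, j')  and  log sum_{i' != i} f(i', j)
        for feats, ref, skip in ((feats_b, feats_a[i], j), (feats_a, feats_b[j], i)):
            diffs = np.delete(feats, skip, axis=0) - ref
            dists = np.einsum('nd,de,ne->n', diffs, A, diffs)
            w = np.exp(-dists)
            loss += np.log(w.sum())
            grad -= np.einsum('n,nd,ne->de', w / w.sum(), diffs, diffs)
    return loss, grad

def learn_A(pairs, feats_a, feats_b, lr=0.01, steps=100):
    """Projected gradient descent: after each step, clip entries of A at zero."""
    A = np.eye(feats_a.shape[1])
    for _ in range(steps):
        _, g = loss_and_grad(A, pairs, feats_a, feats_b)
        A = np.maximum(A - lr * g, 0.0)  # enforce the non-negativity constraint
    return A
```

The clipping step implements the entrywise non-negativity constraint discussed above; a positive semi-definite projection would replace it in the more rigorous variant.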
\subsection{Correspondence Matching} In a second step, we calculate the correspondence matches across all the animals. Instead of calculating all the matches simultaneously, the framework follows an iterative procedure: future matches are made not only by using the similarity function, but also by comparing the geometric and structural relationships of the candidates to the past matches. Besides considering the distances induced by the similarity function (i.e. $-\log f^{ab}(i,j)$), and unlike past work on graph matching \citep{williams,bunke}, the proposed method utilizes knowledge of ``landmarks'' by inducing constraints that impose topological and geometric invariants. At iteration $t$, denote the set of already determined matches by $\mathcal{S}_t$. The algorithm then determines $\mathcal{M}_{t+1}$, the next set of neurons from all the animals to be matched, by solving the following optimization task: \begin{equation} \mathcal{M}_{t+1} = \arg \min_{\mathcal{M}} \sum_{\text{all pairs } (i,j) \in \mathcal{M}} -\log f^{ab}(i,j)+ \lambda D_{LM}(i, j, \mathcal{S}_t) \label{equ:match} \end{equation} Here $\lambda$ is the trade-off parameter that balances the compatibility measure with landmark distances $D_{LM}(\cdot)$ from the matches recovered in all the prior iterations. The landmark distance computation provides important structural and topological constraints for solving the correspondence tasks. Given anchor points, the landmark distances attempt to capture structural and locational relationships with respect to the available landmarks. There are several options, such as the commute distance \citep{mckay,lovasz} on a nearest-neighbor graph, or the Euclidean distance computed by considering either the locations or the feature representations of the neurons. 
In our experiments, we compute landmark distances between neuron $i$ in animal $a$ and neuron $j$ in animal $b$ with respect to a set of anchor points $\mathcal{S}$ as: \begin{equation} D_{LM}(i,j,\mathcal{S})= \sum_{i' \in \mathcal{S}} \log f^{aa} (i,i') - \sum_{j' \in \mathcal{S}} \log f^{bb} (j,j'). \end{equation} The optimization problem in the above equation is solved using off-the-shelf energy minimization procedures \citep{boykov,minka}. The set of newly discovered matches is then included and the process is repeated until all matches stay the same (settle). Essentially, the goal is to find a set of matched neurons across all the animals such that the objective function is minimized. We start with a reasonable initialization of the solution (for example, by solving for consecutive pairs of animals). This solution is iteratively refined by considering data drawn from one study at a time and searching for a replacement neuron that would lower the total energy. Such replacements continue until no further minimization is observed. Utilizing landmarks is appropriate as an informative signal for matching neurons in the leech, because the ganglion has a stereotyped geometric structure. Although soma positions do vary from animal to animal, certain somas often remain arranged with particular geometrical relationships. For instance, the Nut and AE cells typically form a box-like pattern, and the N and T sensory neurons are usually arranged in a hemi-circle along the packet edge, which often wraps around the AP cell. These types of arrangements are useful for identification of cells by eye, and we extend our algorithm to utilize these relationships. The framework is extended to handle poor matches and missing cells by considering a {\em sink} cell in every animal. The sink cell has a fixed cost of matching, denoted as $c$, and acts as a threshold such that neuron matches with costs greater than $c$ are disallowed. 
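The landmark distance and the sink-cell threshold described above can be sketched as follows. This is a deliberate simplification: it greedily picks one lowest-cost pair between two animals, whereas the paper solves the matching jointly across all animals with energy minimization. All names are ours, and penalizing the absolute value of $D_{LM}$ is an assumption about how the two terms combine:

```python
import numpy as np

def landmark_distance(i, j, anchors_a, anchors_b, f_aa, f_bb):
    """D_LM(i, j, S): compare cell i's similarity to the landmarks in animal a
    with cell j's similarity to the corresponding landmarks in animal b."""
    return (sum(np.log(f_aa[i, ip]) for ip in anchors_a)
            - sum(np.log(f_bb[j, jp]) for jp in anchors_b))

def next_match(f_ab, f_aa, f_bb, anchors, lam=1.0, sink_cost=10.0):
    """Greedily pick the lowest-cost unmatched pair.

    anchors: list of (i, j) pairs already matched (the landmarks).
    Returns None when even the best pair costs more than the sink cell,
    i.e. the cell is treated as unobserved in one of the animals.
    """
    anchors_a = [i for i, _ in anchors]
    anchors_b = [j for _, j in anchors]
    best, best_cost = None, sink_cost
    for i in range(f_ab.shape[0]):
        if i in anchors_a:
            continue
        for j in range(f_ab.shape[1]):
            if j in anchors_b:
                continue
            # similarity term plus (assumed) magnitude of landmark mismatch
            cost = -np.log(f_ab[i, j]) + lam * abs(
                landmark_distance(i, j, anchors_a, anchors_b, f_aa, f_bb))
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best
```

Iterating `next_match` and appending each result to `anchors` mimics the paper's loop of growing $\mathcal{S}_t$ until the matches settle.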
The sink cells are a soft representation of the probability that a particular neuron was not visible during a given preparation. \begin{figure}[t] \includegraphics[width=\textwidth]{Fig2.png} \caption{Cell correspondences inferred across six H. verbana. Graphics show results of the correspondence matching procedure across six animals. Color coding indicates the correspondences, where matched cells across different animals share the same color. We highlight two cells (depicted as 1 and 2) and show the matches as lines linking neurons across the animals. Several cells remain unmatched and are depicted using the dashed lines (unfilled interior). The algorithm is capable of handling partial matches where cells are not present in all six animals due to true structural differences or losses either in their preparation or in their sensing.} \end{figure} \subsection{Pooling Across Animals} \begin{figure}[t] \includegraphics[width=\textwidth]{Fig3.png} \caption{Computed canonical ganglion for H. verbana derived from the correspondence matching algorithm. We used the results of the correspondence matching algorithm to generate an average or canonical ganglion by computing the mean location and size for each cell that was matched across at least three different animals. The shades of the neurons are colored according to the weight determined by an LDA projection that distinguishes between the swim and crawl models (brighter colors mean higher weights; the colors used are arbitrary).} \end{figure} Finally, in the third step, the framework reconstructs data corresponding to cells that are missing and remain unobserved in some animals. In particular, if we consider the electrophysiological activity for unobserved cells as latent random variables, then we can infer those latent variables by exploiting the fact that they were observed in other animals. Once we have correspondence information across animals, we can fill in missing electrophysiological data. 
Formally, we invoke data completion via Probabilistic Principal Component Analysis (PPCA) \citep{roweis,tipping}. To apply PPCA, we construct a matrix $Y$, where each row corresponds to a neuron and each column corresponds to the fluorescence intensity in a short time interval. Further, since the correspondences between all the animals are calculated, we can stack the data from all the animals in $Y$ such that the rows are arranged according to the discovered correspondences. (We use $-1$ to denote the absence of data due to missing cells in an animal.) The PPCA algorithm recovers the low-dimensional structure in the data and imputes missing data via Expectation Maximization \citep{dempster}. The PPCA algorithm starts with an initialized low-dimensional projection and alternates between the E-step and the M-step. In the E-step, the missing data are estimated by considering statistical relationships in the data. In the M-step, the estimates of the low-dimensional projection are further refined. The matrix $Y$ (dimensions $c \times n$), which consists of neuronal activity recordings of $c$ cells from all the animals, is constructed using the methodology described above. We first scale all the values in the matrix $Y$ between $0$ and $1$. Let us denote the low-dimensional representation of the data as the matrix $X$ (dimensions $k \times n$, where $k < c$) and the principal components as $C$ (dimensions $c \times k$). The PPCA algorithm first initializes the matrices $X$ and $C$ randomly and then alternates between the following two steps: \begin{align*} \mbox{E-step: } & \mbox{Estimate } \hat{Y} = CX\\ \mbox{M-step: } & \mbox{Refine } X_{new}=(C^T C)^{-1} C^T \hat{Y} \mbox{ and } C_{new}=\hat{Y}X_{new}^T (X_{new} X_{new}^T )^{-1}\\ \end{align*} The algorithm converges when the maximum change in any individual dimension of the estimate $\hat{Y}$ is less than $0.001$. 
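The alternating updates above can be sketched as an EM-style alternating least squares loop. This is a minimal sketch, not the paper's implementation: we mark observed entries with a boolean mask instead of the $-1$ sentinel, clamp observed entries after each refit, and initialize missing entries with the observed mean:

```python
import numpy as np

def complete_matrix(Y, mask, k=2, iters=200, tol=1e-3):
    """EM-style alternating least squares for PCA with missing entries.

    Y: (c, n) activity matrix; mask: boolean array, True = observed.
    Missing entries are repeatedly re-estimated from the rank-k model Y_hat = C X.
    """
    rng = np.random.default_rng(0)
    Y_hat = np.where(mask, Y, Y[mask].mean())  # initialize missing entries
    C = rng.standard_normal((Y.shape[0], k))
    for _ in range(iters):
        # M-step: X = (C^T C)^{-1} C^T Y_hat, then C = Y_hat X^T (X X^T)^{-1}
        X = np.linalg.lstsq(C, Y_hat, rcond=None)[0]
        C = np.linalg.lstsq(X.T, Y_hat.T, rcond=None)[0].T
        # E-step: re-estimate the missing entries from the rank-k model
        Y_new = np.where(mask, Y, C @ X)
        done = np.max(np.abs(Y_new - Y_hat)) < tol  # max-change stopping rule
        Y_hat = Y_new
        if done:
            break
    return Y_hat
```

The stopping rule mirrors the $0.001$ maximum-change criterion described in the text.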
PPCA is guaranteed to converge, and it produces data completions even for neurons that are not observed in some animals. In our implementation, optimization for Step 1 (see Table 1) is performed via the Limited Memory BFGS \citep{liu} routine, and energy minimization for Equation \ref{equ:match} is performed via iterative variational inference \citep{beal}. There are three parameters that need to be specified in the framework: $c$, the upper limit on the cost of allowed matches; $\lambda$, the trade-off parameter between the compatibility and relative locality measures; and $k$, the dimensionality of the low-dimensional projection in PPCA. These parameters are determined via a cross-validation methodology. The cross-validation is performed by considering the aggregated matrix $Y$, randomly removing $10$\% of the observed data, and computing the reconstruction error using an $L2$ norm on the removed data. This process is repeated $10$ times and the parameters with minimum average reconstruction error are chosen. The search space for the parameters $c$ and $\lambda$ lies in log-scale (i.e., $c$ and $\lambda \in [10^{-5}, 10^{-4}, \ldots, 10^5]$), while for $k$ we search a linear range (i.e., $k \in [1, \ldots, 25]$). \section{Experiments} Training data for learning the parameter $A$ was collected by an experimentalist (EPF) who hand-annotated 815 different match pairs across all the animals. Fig. 1 shows the resulting compatibility measure for these data. Note how the physical properties (such as size, relative location, packet membership) of the most likely matches (highlighted by a white outline) illustrate the quality of the learned function. The matching procedure results in a correspondence map (Fig. 2) matching neurons across the 6 different animals. Once the correspondence map was calculated, it was used to generate a prototypical model of the animal by averaging physical as well as functional properties (Fig. 3). 
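The hold-out cross-validation scheme for choosing the hyperparameters, described above, can be sketched as follows. This is an illustration only: we stand in a one-shot truncated-SVD completion for the full pipeline, impute held-out entries with the observed mean, and all names are ours:

```python
import numpy as np

def cv_error(Y, mask, k, n_rep=10, frac=0.1, seed=0):
    """Average L2 reconstruction error on 10% of observed entries, held out
    at random and reconstructed with a rank-k truncated SVD (a stand-in
    for the full completion pipeline)."""
    rng = np.random.default_rng(seed)
    obs = np.argwhere(mask)
    errs = []
    for _ in range(n_rep):
        held = obs[rng.choice(len(obs), max(1, int(frac * len(obs))), replace=False)]
        m = mask.copy()
        m[held[:, 0], held[:, 1]] = False
        filled = np.where(m, Y, Y[m].mean())      # impute held-out entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Y_hat = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k reconstruction
        errs.append(np.linalg.norm(Y_hat[held[:, 0], held[:, 1]]
                                   - Y[held[:, 0], held[:, 1]]))
    return float(np.mean(errs))

def select_k(Y, mask, ks=range(1, 26)):
    """Pick the dimensionality with the minimum average held-out error."""
    return min(ks, key=lambda k: cv_error(Y, mask, k))
```

The same held-out-entry loop extends to a grid search over $c$ and $\lambda$ by re-running the matching and completion for each candidate setting.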
\begin{figure}[t] \includegraphics[width=\textwidth]{Fig4a.png} \caption{Activity of neurons in a leech ganglion from a prior study \citep{briggman}, showing how neuronal activity can be used to identify homologous neurons across animals. Voltage-sensitive dye traces are shown for two different neurons that were considered to be matches by the correspondence matching algorithm. The traces highlight that the algorithm is capable of recovering correspondences across cells that are functionally similar.} \end{figure} Because the resulting correspondence map was computed simultaneously across all the animals, it provides a simple way to analyze the quality of the recovered solution. In addition to the physical properties, the functional characteristics of matched neurons are similar across animals (Figure 4). We observed that a simple estimator based on the average activity of neurons in five animals predicted the activity of the sixth one (Figure 5). Here we estimated the entire time series of activity for a given cell in an animal by considering the activity of the corresponding cell across the other five animals. Two different models, for the swim and crawl modes, are computed, where the prediction is an average across all the observed time series. For all six animals, the low differences between the observed electrophysiological activity and the predictions made by a model learned from the rest of the animals confirm that the framework recovered correct correspondences between the neurons across animals. Although the matching algorithm performs quite well, it is likely far from perfect. Many of the matched cells may not be correct. Since the functional responses of the cells are a factor in the matching, cells with little functional signal will be harder to match than those with strong signals. 
Cells lacking functional signal, however, provide little information for predicting the behavioral outcome. Thus, these cells likely have poor matches, but they are also likely the cells that are not relevant to the swim-crawl decision circuit. It is also possible that many cells are effectively the same given this data set, and that the matches do not truly reflect homologous pairs. We expect many cells to be functionally the same, and mismatching these similar cells may not hurt our analysis. \begin{figure}[t] \includegraphics[width=\textwidth]{Fig4b.png} \caption{Bar graphs that highlight the results of testing the recovered correspondences using a leave-one-animal-out analysis. The plots were generated by first considering a candidate test animal and then building a predictive model for each cell (from when the animal swam or crawled) using the remaining five animals. The bar chart compares the mean-squared error between predicted and observed electrophysiological activity when matching with the proposed framework versus random selection. The differences across all six leave-one-animal-out test cases are significant.} \end{figure} \begin{figure}[t] \includegraphics[width=\textwidth]{Fig5a.png} \caption{Using correspondences to predict behavior from neuronal activity. Identification of corresponding neurons across animals enables larger data sets to be constructed by pooling observations from multiple preparations, which in turn enables deeper and more accurate data analysis to address questions of interest. This figure shows non-linear projections generated by applying the ISOMAP algorithm. The blue and red dots correspond to the swim and crawl modes and depict the trajectories that the voltage-sensitive dye signals trace for each animal. Note that ISOMAP applied to an individual animal might result in projections that are inconsistent across the different animals. 
However, using the discovered correspondences of neurons across animals, we combine the data from all six animals and recover projections that are consistent for all of the animals.} \end{figure} \begin{figure}[t] \includegraphics[width=\textwidth]{Fig5b.png} \caption{Bar graphs showing that pooled data allow us to discriminate between swim and crawl significantly earlier than what was reported previously using a PCA analysis on data from a single animal \citep{briggman}.} \end{figure} \begin{figure}[h] \includegraphics[width=\textwidth]{Fig6.png} \caption{Determining influential cells using linear discriminant analysis. The ganglion maps from 6 experiments are shown. The maps are from the same experiments as in Fig. 4. Cells are color-coded based on the magnitude of their contribution to the linear discriminant direction. Red and yellow represent large-magnitude contributions; blue represents small contributions. We can see that there are at least 3 cells that are influential, and they do not include cell 208 (marked with a white arrow).} \end{figure} The correspondence matching algorithm enables pooling of the data across animals, which allows exploration that was not feasible previously. For example, Figure 6 shows a 3-dimensional projection recovered by applying ISOMAP \citep{tenenbaum}, a non-linear dimensionality reduction method that is an extension of linear methods such as PCA. Because the algorithm was applied to the entire pooled data, the recovered dimensions are consistent across all the animals, and thus can be visualized and analyzed within the same reference frames. Previously, application of such techniques (such as PCA and LDA in \citep{briggman}) was limited to a single animal at a time, resulting in dimensions that were incomparable across animals. The pooling of data enabled by the methodology proved to be valuable in predictive models of decision making. 
Figure 7 shows that pooling the data across animals enables earlier predictions of one of the two behaviors (swimming or crawling) following stimulation than data from a single animal. Specifically, PCA was performed on the pooled data, and the earliest discrimination time between swim and crawl was determined according to the procedure described in \citep{briggman}. In Figure 3, we highlight cells in the composed canonical ganglion that play an important role in the behavioral decisions of the animal. Combining data across multiple animals enables transfer and overlay of information, allowing aggregation of important statistical parameters and more robust empirical models. Figure 8 shows ganglion maps for six animals highlighting the cells that contribute most towards discriminating between the swim and the crawl trials. Note that the highly discriminative cells (towards the red spectrum) are consistent in physical properties such as location and size across the different animals. We also note that these cells are significantly different from cell $208$ that was identified in earlier studies \citep{briggman}. \section{Related Work} The work described in this paper builds upon many different sub-areas of machine learning. In particular, the key ingredients include metric learning, correspondence matching, and probabilistic dimensionality reduction \citep{roweis,tipping}. Distance metric learning is a fairly active research area. Most of the work in distance metric learning focuses on the $K$-Nearest Neighbor ($k$-NN) classification scenario \citep{duda} and often aims to learn a Mahalanobis metric that is consistent with the training data~\citep{frome, bar, rca, metric-learning}. The distance metric learning method employed in this paper is closest to the work of \cite{roweis_NCA} and \cite{globerson}, but modified to consider only the sets of similar cells given by the user. Correspondence problems arise in a multitude of applications. 
Computer vision is particularly close to our scenario. Among the simplest cases are transformations of rigid bodies, where geometry can be exploited \citep{gm99,mcb08}, while correspondences among non-rigid objects, and between non-identical objects, can pose significant challenges. Algorithms applied to more general correspondence problems largely combine the compatibility of points by features with the local geometric compatibility of matches. Such models can be formulated as graphical models \citep{mcb08,tkr08,sh07} or as selecting nodes in an association graph \citep{cll10,css06}, and have been extended to higher-order criteria \citep{dbk09,zs08,lcl11}. Other methods consider the Laplacian constructed from a neighborhood graph \citep{um88,ehl11,mhk08}, and some models are learned from full training examples \citep{tkr08}. Closest to the idea of using reference points are approaches based on seed points \citep{shc11}, landmarks \citep{sj14}, coarse-to-fine strategies \citep{sh07}, and on guessing points that help orient the remaining points in a rigid body \citep{mc12}. \section{Conclusion and Future Work} The proposed methodology is likely to be even more useful in combination with other data-centric analyses. For example, the model learned from past data can be employed to guide future experimentation. By computing correspondences between the model and data from an ongoing experiment in real time, we can then use the model to guide information extraction strategies. The methodology can also be extended to perform within-leech analysis, such as discovering bilateral pairs of neurons. In addition, this methodology can readily be used to analyze the simultaneous activity of multiple neurons in other animals. We foresee valuable uses of the approach in overlaying data from larger nervous systems and, moving beyond cells, to higher-level abstractions of nervous system organization, such as components of the retina or columns in vertebrate nervous systems. 
Given its simplicity and the appeal of potentially pooling large quantities of data, the correspondence methodology may find wide use in many areas of neuroscience. \acks{We acknowledge the assistance of Johnson Apacible, Erick Chastain and Paul Koch.}
\subsection{Program prior}\label{program-prior} DreamCoder defines the prior over programs as a probabilistic context-free grammar (PCFG; \citealt{johnson1998pcfg}) for programs generated as productions from a library $\mathcal L$ of functions $l \in \mathcal L$\footnote{In addition to initial and learned functions, \citet{ellis2020dreamcoder} define $\mathcal L$ to also include any initial literals and a rule for generating variables, such that programs can be completely generated as productions from the PCFG. We use the same formulation.}. Formally, DreamCoder assigns a real-valued weight ${\theta_\library}_i$ to each library function, which when normalized yields a production probability $\text{P}[l | \mathcal L, \theta_\library]$. The prior probability of a program $\rho$ is given by \begin{equation}\label{dreamcoder-prior} \text{P}[\rho | \mathcal L, \theta_\library] = \prod_{l\in \rho}\text{P}[l | \mathcal L, \theta_\library] \end{equation} the weighted product of probabilities of all of its constituent library functions. As all $\text{P}[l | \mathcal L, \theta_\library] < 1$, this is equivalent to a \textit{description length} prior over programs: longer programs (with more constituent elements) will have lower prior probability under Eq. \ref{dreamcoder-prior}, since $\text{P}[\rho | \mathcal L, \theta_\library]$ monotonically decreases as $|\rho| = |\{l \in \rho\}|$ increases. \subsection{Amortized conditional inference} \label{amortized} To identify programs that solve tasks $t$ while obtaining high probability under $\text{P}[\rho | \mathcal{L}, \theta_\library]$, DreamCoder trains a neural search heuristic $Q_i(\rho | t, \mathcal L_i)$ at each iteration $i$ to approximate the inverse conditional model.
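As a minimal numerical illustration of the prior in Eq. \ref{dreamcoder-prior} and its description-length reading (function and variable names are this sketch's own, not DreamCoder's; the real PCFG also handles typing and a variable-generation rule):

```python
import math

def program_prior(program, weights):
    """Prior probability of a program: the product of normalized
    production probabilities of its constituent library functions.
    `program` is a list of library-function names; `weights` maps each
    library function to its unnormalized real-valued weight theta."""
    z = sum(weights.values())
    p = 1.0
    for l in program:
        p *= weights[l] / z
    return p

def description_length(program, weights):
    """The equivalent description-length view: the negative log-prior,
    which strictly grows as a program uses more productions."""
    return -math.log(program_prior(program, weights))
```

For instance, with hypothetical weights over three primitives, a three-production program receives a lower prior (and a longer description) than a two-production one.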
The heuristic uses a neural model trained to predict programs written in the current library $\mathcal L_i$ according to the posterior: \begin{equation}\label{dreamcoder-posterior} Q_i(\rho|t, \mathcal L_i) \approx \text{P}[\rho|t,(\mathcal L_i,{\theta_\library}_i)]\propto\text{P}[t|\rho]\text{P}[\rho|(\mathcal L_i,{\theta_\library}_i)] \end{equation} conditioned on an encoding of the training examples (e.g. an embedding of the image in the task specification). This model is trained in the distant supervision setting (which begins with no supervised program data) by leveraging the forward generative model: sampling programs from the prior, executing them to produce observed tasks, and then training $Q(\rho|t, \mathcal L)$ in Eq. \ref{dreamcoder-posterior} on the sampled programs, conditioned on their executions. This generative training procedure is generally applicable to any neural implementation of $Q(\rho | t, \mathcal L)$. (See \citet{ellis2020dreamcoder} and our supplementary material for additional details on the model architecture, which we reimplement in our experiments.) \subsection{Abstraction learning as program compression (maximizing the likelihood of programs)} The DreamCoder algorithm also iteratively updates the library ($\mathcal L_i, \theta_{\mathcal L_i}$) to approximately optimize Eq. \ref{optimal-library} (finding $\mathcal L^*, \theta_\library^*$ which maximize the likelihood of the inferred latent programs). \citet{ellis2020dreamcoder} leverage equivalence to a \textit{compression} problem defined over programs and the library. As discussed in Sec. \ref{program-prior}, the PCFG program prior is equivalent to a description-length prior over programs.
\citet{ellis2020dreamcoder} place an additional Dirichlet prior over the library description length: \begin{equation}\text{P}\left[ \mathcal L\right]\propto\exp \left(-\lambda\sum_{\rho\in \mathcal L}\text{size}(\rho) \right) \end{equation} Estimating the optimal library then becomes the problem of inferring new library abstractions which can jointly compress the latent training programs (rewritten under the new library $\mathcal L_{i+1}$) and the description length $|\mathcal L_{i+1}|$ of the updated library (to optimize for shared abstractions across programs). This objective would still require inference over all possible ways of refactoring the latent programs under the updated library. \citet{ellis2020dreamcoder} approximate this by only considering candidate abstractions and program refactorings that can be found via an efficient lambda-abstraction algorithm. As an example, this could refactor the large hexagon program $$\small\texttt{(for $\infty$ (move\_pen ($*$ unit\_line 3) (/ 2$\pi$ 6)))}$$ to expose a candidate abstraction like $$\small\texttt{$\lambda$x.(for $\infty$ (move\_pen ($*$ unit\_line 3) (/ 2$\pi$ x)))}$$ while also rewriting the original program using this abstraction. Notably, this fragment -- which draws polygons with lines of length 3 for sides -- is not the most intuitively generalizable for the graphics domain. A programmer with more domain-specific prior knowledge would probably prefer an abstraction like $$\small\texttt{$\lambda$xy.(for $\infty$ (move\_pen ($*$ unit\_line y) (/ 2$\pi$ x)))}$$ which additionally parameterizes the polygon by the length of its sides, and is semantically equivalent to the high-level $\texttt{polygon\_fn}$ described in the problem setup in Sec.~\ref{problem-formulation}. However, learning abstractions by compressing the library and current solved training tasks may actually disfavor this more intuitively generalizable (but less compressive) candidate.
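The compression trade-off above can be sketched numerically: an abstraction is worth adding only when the savings in rewritten program length outweigh the growth in library size. This is a schematic of the objective, not DreamCoder's exact estimator, and all names are this sketch's own:

```python
def library_log_prior(abstraction_sizes, lam):
    """Log of the unnormalized library prior P[L] ∝ exp(-lambda * total
    size): larger libraries are penalized linearly in total size."""
    return -lam * sum(abstraction_sizes)

def compression_objective(program_lengths, abstraction_sizes, lam):
    """Joint score to maximize when proposing an abstraction: the
    library log-prior minus the total description length of the
    training programs rewritten under the candidate library."""
    return library_log_prior(abstraction_sizes, lam) - sum(program_lengths)
```

With a hypothetical weight `lam = 0.5`, adding a size-4 abstraction that shortens two length-10 programs to length 6 increases the objective, so the candidate is kept.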
Our second key goal in introducing language will be to leverage it as an additional source of prior knowledge to improve abstraction generalization. \subsection{Joint prior over programs and language} \paragraph{Base prior} We formulate our joint prior over language and programs as \begin{equation} \label{joint-prior} \text{P}[\rho, d_t] = \text{P}[\rho | \mathcal L, \theta_\library] \text{P}[d_t | \rho, \mathcal L] \end{equation} decomposed as the product of the original program prior defined on a program library $\text{P}[\rho | \mathcal L, \theta_\library]$, and a learned program-to-natural-language ``translation'' model $\mathcal T(d_t | \rho, \mathcal L) \approx \text{P}[d_t | \rho, \mathcal L]$ which describes how natural language descriptions are generated for latent programs (in our running example, this model would describe how the \textit{large six gon} description was generated conditioned on the program solution for that task). This decomposition builds modularly on the original program prior defined only on the library $\mathcal L$. Learning $\mathcal T(d_t | \rho, \mathcal L)$ formalizes the intuition that there should be a learnable relationship between language that describes tasks and latent programs that solve them. $\mathcal T(d_t | \rho, \mathcal L)$ can be implemented in many ways (e.g. \cite{wong2007learning,joshi1997tree,bahdanau2014neural,chen2018tree}), compatible with the vast literature on structured translation between languages, including natural languages and programming languages.
Our experiments use the translation model popularly known as \emph{IBM Model 4} \citep{brown1993mathematics}, one of a class of well-studied Bayesian machine translation models \citep{gal2013systematic} which decompose $\mathcal T(d_t | \rho, \mathcal L)$ into \begin{equation} \label{decomposed-translation} \mathcal T(d_t | \rho, \mathcal L) \propto \prod_{w \in d_t, l \in \rho} \text{P}_\mathcal T[w | l] \end{equation} a product of learned token-level translation probabilities $\text{P}_\mathcal T[w | l]$ between individual functions $l$ in a task's latent program $\rho$ and words $w$ in the task description $d_t$. (See supplementary materials for model implementation and training details.) This token-level decomposition more directly captures the intuition in our setup: that abstractions in a programming library generally correspond systematically to individual names in natural language descriptions, and that the inverse conditional search can be guided by a generally compositional relationship between program primitives and words. This formulation also allows these compositional relationships to be inferred from fewer observed examples than would be possible with other translation models with weaker inductive biases. However, Eq. \ref{joint-prior} should extend to include any similar translation model and need not include this stronger decomposition. \vspace{-0.2cm} \paragraph{Adding richer priors} In LAPS, the joint model can also provide a controllable interface for incorporating additional prior knowledge about language into learning. Learned translation models are often fit to only maximize the likelihood of the observed language (here, with respect to inferred latent training programs).
However, our formulation also supports $\mathcal T(d_t | \rho, \mathcal L)$ enriched to include additional priors over language (such as speaker-specific language usage, or \textit{pragmatics} models that capture speakers' other communicative goals \citep{grice1989studies,goodman2016pragmatic}). In our experiments (\autoref{section-experiments}) we showcase this with results from an extended model incorporating an additional \textbf{mutual exclusivity} prior. Mutual exclusivity models the expectation that newly encountered words should correspond to different meanings than known ones. This prior has been shown to play an important role in language learning in cognitive science \citep{frank2009using,markman1988children}, and in machine learning models \citep{gandhi2019mutual}. In the synthesis setting, mutual exclusivity can capture the expectation that ``new'' words (which appear in descriptions of currently unsolved tasks) are more likely to correspond to different program components than those used in solved training tasks (and for which there would otherwise be no signal to learn a translation model in the distant setting). Our extended model incorporates this prior by updating Eq. \ref{decomposed-translation} to distinguish between $W_{known}$ (words that appear in solved training tasks with latent programs) and $W_{new}$ (newly encountered words) as \begin{equation} \label{decomposed-translation-me} \begin{split} \mathcal T_{ME}(d_t | \rho, \mathcal L) \propto \prod_{w \in d_t, l \in \rho} (\mathbbm{1}[w \in W_{known}] \text{P}_\mathcal T[w | l]) \\ (\mathbbm{1}[w \in W_{new}] \text{P}[l | \mathcal L, \theta_\library]^{-1}) \end{split} \end{equation} where new words are modeled as \textit{inversely} related to primitives under the program prior (fit to previously solved tasks) -- modeling the expectation that new words more likely relate to less-used program components than those used so far.
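A minimal numerical sketch of the token-level translation score (Eq. \ref{decomposed-translation}) and its mutual-exclusivity variant (Eq. \ref{decomposed-translation-me}). All names, and the smoothing constant for unseen word-primitive pairs, are this sketch's own assumptions, not the paper's implementation:

```python
def translation_score(description, program, token_prob, eps=1e-6):
    """Unnormalized T(d_t | rho): product of token-level translation
    probabilities P_T[w | l] over all (word, primitive) pairs.
    Unseen pairs fall back to a small smoothing value `eps`."""
    score = 1.0
    for w in description:
        for l in program:
            score *= token_prob.get((w, l), eps)
    return score

def me_translation_score(description, program, token_prob,
                         known_words, prior_prob, eps=1e-6):
    """Mutual-exclusivity variant: known words use the learned token
    table, while new words are scored inversely to a primitive's
    probability under the program prior, steering new words toward
    less-used primitives."""
    score = 1.0
    for w in description:
        for l in program:
            if w in known_words:
                score *= token_prob.get((w, l), eps)
            else:
                score *= 1.0 / prior_prob[l]  # inverse-prior weighting
    return score
```

Under this sketch, a description containing a new word scores higher against a rarely used primitive than against a heavily used one, which is exactly the exploration bias described above.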
\subsection{Integrating the joint model into amortized conditional search} The joint model allows LAPS to incorporate natural language into the learned conditional search model over programs. In place of the original neural amortized model in the base algorithm (\autoref{amortized}), we train an extended, language-conditioned model $Q_i(\rho | t, d_t, J_i)$ at each iteration to predict programs according to: \begin{equation} \begin{split} Q(\rho | t, d_t, J_i) &\approx \text{P}[\rho|t, d_t, J,\theta_\joint] \\ & \propto\text{P}[t|\rho]\text{P}[\rho, d_t|J,\theta_\joint] \\ &\propto\text{P}[t|\rho]\text{P}[d_t|\rho]\text{P}[\rho |\mathcal L,\theta_\library]\\ & \approx \text{P}[t|\rho]\mathcal T(d_t|\rho, \mathcal L)\text{P}[\rho |\mathcal L,\theta_\library] \end{split} \end{equation} which amortizes program inference under our joint model formulation. Importantly, we can train this neural model using samples from the \textit{joint} generative model, consisting of sampled programs \textit{and corresponding generated language}. As with the original learning setting, this sample-based training allows LAPS to learn a generalizable, language-conditioned neural search heuristic, capable of leveraging compositional patterns in natural language, from very few examples in the distant supervision setting. We can also now see the benefits of richer language-specific priors (such as mutual exclusivity): the neural model trained to amortize inference from the joint generative model can also approximate the mutual exclusivity bias, enabling better exploration and generalization in the presence of new words. \subsection{Abstraction learning as joint model compression} The extended joint model objective in Eq.
\ref{optimal-library} and \ref{optimal-joint} also allows LAPS to incorporate natural language into \textit{abstraction learning}. Extending the compression-based abstraction objective in the base algorithm -- which optimized for libraries that maximally compress the latent training programs and library -- requires defining a prior over the language-program \textit{translation} model $\mathcal T$ in terms of the optimal program library. We place a prior over $\mathcal T$ defined on a program library $\mathcal L$ and a natural language token vocabulary $W$ as \begin{equation} \label{translation-prior} \text{P}[\mathcal T | \mathcal L] \propto \sum_{l \in \mathcal L, w \in W}-I(\text{P}_\mathcal T[w|l]) \end{equation} where $-I(\text{P}_\mathcal T[w|l]) = -\log(\text{P}_\mathcal T[w|l])$. This models the intuition that a good library contains program abstractions which correspond well to individual language tokens, and reduce entropy in the compositional translation model. Defining the prior compositionally also allows the algorithm to maintain the desirable property from \cite{ellis2020dreamcoder}, in which the joint likelihood can be efficiently re-approximated with respect to individual candidate program abstractions based on their constituent subcomponents $l$ and corresponding translation distributions $\text{P}_\mathcal T[w|l]$ under the current translation model. As in the base synthesis algorithm, we fully re-estimate a new translation model at each iteration $\mathcal T_{i+1}(d_t |\rho_{i+1}, \mathcal L_{i+1})$ to fit the updated library and refactored programs. See the supplement for extended details. Taken together, Alg. \ref{algorithm-1} summarizes the concrete algorithm using LAPS to incorporate language into \cite{ellis2020dreamcoder}.
\begin{algorithm}[t] \caption{ } \label{algorithm-1} \begin{algorithmic} \STATE \textbf{Input:} Initial library $\mathcal L_0$, annotated training tasks $(T, D)$ \STATE Initialize $\theta_\library \gets$ uniform; training task solutions \textbf{p} $\gets \{\}$ \FOR{i $\leq f$} \STATE $J_i \gets$ Fit $\theta_\library$ and $\mathcal T(d_t | \rho)$ to (\textbf{p}, $d_t$) \STATE $Q_i(\rho | t, d_t)$ $\gets $ Train on (\textbf{p}, T, $d_t$) and samples $\sim J$ \STATE \textbf{p} $\gets $ programs from search amortized with $Q_i$ \STATE $\mathcal L_i \gets$ abstractions optimized over (\textbf{p}, J) \ENDFOR \STATE \textbf{Return} $Q_f, \mathcal L_f$ \end{algorithmic} \end{algorithm} \subsection{Domains} \label{section-domains} \label{section-experiments} \begin{figure*}[h!] \centering \includegraphics[width=0.93\textwidth]{figures/all_domains_abstractions.pdf} \caption{\textbf{(A, B, C)} Example tasks from all three synthesis domains shown with synthetic and sample human language annotations. Inductive synthesis domains are shown with a random subset (n=3) of the paired input/output examples. Human language annotations are also randomly sampled (all domains were annotated by multiple people for a broader range of language.) \textbf{(D)} Representative \textit{initial program primitives} and \textit{library abstractions} learned with LAPS for the graphics domain. Shown with example tasks solved with synthesized programs containing the learned abstractions and high probability natural language learned from the joint model. }\label{domains} \end{figure*} All three domains consist of a dataset of inductive synthesis \textit{tasks} $t$ specified as input/output examples; procedurally generated \textit{synthetic language annotations}; and \textit{human language annotations} sourced from Mechanical Turk. 
We use synthetic language as our primary evaluation benchmark: we are interested in a controlled probe of learning when words are systematically reused and composed, but refer to more abstract concepts than in the initial base programming language. However, we also use human language to evaluate the practicality of our approach in real-world settings. \textit{Additional information for all domains is in the supplement.} \textbf{String editing:} structured string transformation problems taken from \cite{andreas2017learning} (n=1000 train; n=500 test). Tasks consist of input dictionary strings transformed using a randomly sampled regular expression transducer (30 I/O examples per task). We choose this domain to demonstrate LAPS on a classic, practically important synthesis problem \cite{lau1998programming}. The dataset of \citet{andreas2017learning} contains human annotations; synthetic language annotations are generated over the ground-truth regexes using templates based on the original human annotations. We initialize synthesizers with functional programming primitives (\textit{map, fold, cons, car, cdr, length, index}) and character constants (following the simpler text editing domain in the baseline paper \cite{ellis2020dreamcoder}). The neural search model encodes the I/O task examples as character arrays with a bidirectional GRU. \textbf{Compositional graphics:} inverse graphics problems (n=200 train; n=111 test) where each task is specified by an image and solved by synthesizing a program in LOGO Turtle graphics \cite{abelson1986turtle}. This is inspired by the graphics domain in \cite{ellis2020dreamcoder} but re-designed to be more challenging (ground-truth programs are much longer on average in the base programming language) and explicitly compositional. Synthetic language annotations are generated with high-level templates over the objects and relations in each task; human annotations are sourced as image descriptions from MTurk.
We initialize synthesizers with the graphics primitives in \cite{ellis2020dreamcoder}. The neural model encodes image examples with a CNN. \textbf{Structured scene reasoning:} inductive scene reasoning tasks (n=212 train; n=115 test) where each synthesis problem is specified by a structured input scene, and outputs can be a number (\textit{how many red rubber things are there?}), a boolean value (\textit{are there more blue things than green?}), or another scene (\textit{what if all of the red things turned blue?}). This domain is modeled on CLEVR \cite{johnson2017clevr} but designed to support inductive synthesis tasks specified over the symbolic scene representations (an array of objects represented as dictionaries of attributes) from the original CLEVR task generator in \citet{johnson2017clevr}. We also add new tasks that require \textit{generating} or \textit{imagining} latent scenes (\textit{how many metal things would be left if all the blue cylinders were removed?}), which are not solvable in the original high-level DSL hand-designed for \citet{johnson2017inferring} (and used in synthesis-based approaches like \citet{yi2018neural}). We include these to demonstrate a key feature of our approach: the ability to \textit{learn} generalizable libraries from a basic but expressive set of primitives, rather than restricting the program space pre-emptively with a hand-designed language. We use synthetic language annotations from the original templates in \cite{johnson2017clevr} (and templates written in the same style for the extended tasks); human annotations are sourced from annotators shown the same tasks. We initialize synthesizers with functional programming primitives similar to the string-editing domain, with domain-specific query functions and constants (\textit{get\_color(x); get\_shape(x); blue; cube}). The neural model encodes the task examples as flattened arrays of object attributes using a bidirectional GRU.
\subsection{Results}\label{results} \begin{table*}[] \caption{\% held-out test-tasks solved. To compare robustness, we run random seed replications in the graphics domain for the synthetic language dataset. \textit{Best} reports the best model across replications; \textit{Mean} averages across replications.} \label{quant-results} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}p{0.3\linewidth}lccccc@{}} \toprule Language & Model & \multicolumn{1}{l}{Strings (n$_{test}$ = 500)} & \multicolumn{2}{c}{Graphics (n$_{test}$ = 111)} & \multicolumn{2}{c}{Scenes (n$_{test}$ = 115)} \\ \midrule & & \% Solved & \% Solved (Best) & \% Solved (Mean) & \% Solved (Curric.) & \% Solved (Mean) \\ \midrule Synth train/test & DreamCoder (no language) & 33.4 & 49.55 & 42.64 & 67.80 & 73.9 \\ Synth train/test & Multimodal (no generative translation model) & 46.00 & 26.12 & 23.20 & 76.50 & 49.5 \\ \midrule Synth train/test & LAPS in neural search & 52.20 & 92.79 & 52.93 & 95.6 & 88.1 \\ Synth train/test & LAPS + mutual exclusivity & \textbf{57.00} & 86.49 & 80.18 & \textbf{96.5} & 82.3 \\ Synth train/test & LAPS + ME + language-program compression & 54.60 & \textbf{98.19} & \textbf{81.98} & 95.6 & \textbf{95.9} \\ \midrule Synth train/human test & LAPS + ME + language-program compression & 54.60 & 89.20 & -- & 97.4 & -- \\ Human train/human test & LAPS + ME + language-program compression & 48.60 & 58.55 & -- & 95.6 & -- \\ \midrule \textbf{No language at test} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \midrule No language on train/test & Original DSL; Enumerative & 0.06 & 0.00 & -- & 27.8 & -- \\ No language on train/test & DreamCoder (best library): Enumerative & 27.2 & 41.44 & -- & 53.6 & -- \\ No lang at test & LAPS (best library): Enumerative & 33.2 & 62.16 & -- & 93.04 & -- \\ No lang at test & LAPS (best library): example-only neural synthesis & \textbf{52.4} & \textbf{91.0} & -- & 95.6 & -- \\ \bottomrule \end{tabular}}
\end{table*} \begin{figure*}[h!] \centering \vspace*{-10pt} \includegraphics[width=\textwidth]{figures/learning_curves_long.pdf} \vspace*{-20pt} \caption{Learning curves comparing baselines and LAPS models in Table \ref{quant-results}, showing \% heldout tasks solved on the graphics domain over random training task orderings. (\textit{Mean} results in Table \ref{quant-results} shows average test-time performance from the trained model replications.)} \label{graphics-curves} \end{figure*} On all three domains, we compare our model against the baseline synthesizer (Table \ref{quant-results}, \textbf{DreamCoder, no language}); a multimodal baseline (Table \ref{quant-results},\textbf{ multimodal, no generative model}) that trains a neural model directly on solved training tasks (similar to neural synthesis models like DeepCoder \cite{devlin2017robustfill} but augmented to condition on language); and ablated LAPS variants (Table \ref{quant-results}; \textbf{LAPS} rows) to evaluate the additive contributions of the individual learning components. We compare all models using a matched search budget per task and number of training iterations overall, determined using a hyperparameter search with the baseline. The supplement contains full details (and code) to replicate all experiments; and additional qualitative results. We find that: (1) \textit{LAPS searches more effectively during training}, enabling it to solve and learn from more training tasks than the baseline synthesizer. Under the hierarchical model formulation, search and abstraction are closely related: successfully solving tasks is the basis for abstraction learning. Comparing the model \textit{learning trajectories} (Fig. \ref{graphics-curves}) on training tasks shows that the LAPS models consistently search more effectively during training: at each iteration they solve more tasks within a given time budget. Fig. 
\ref{graphics-curves} also highlights that LAPS models improve training \textit{robustness} in the distant learning setting: as in the baseline paper \cite{ellis2020dreamcoder}, we find the baseline model learning to be highly variable without a training curriculum (compare training curves from Fig. \ref{graphics-curves} with different random seed replications; and the \textit{best} vs. \textit{mean} performance, Table \ref{quant-results}.) Comparing the LAPS ablations also suggests that linguistic priors (like \textit{mutual exclusivity}) can indeed be practically useful here during learning (Table \ref{quant-results}, compare \textit{LAPS with ME and without}). What if we do use a curriculum? In the scene reasoning domain (where previous approaches (e.g. \citealt{mao2019neuro}) have argued for a curriculum), we also test a simple curriculum by ordering tasks according to their natural language token length (which can be evaluated without ground truth programs). Table 1 shows that our model is still more effective, and that non-curriculum performance is in fact comparable to curriculum performance. (2) \textit{LAPS abstracts more effectively during training}, adding in more generalizable library routines as it learns. The variability across training replications in the baselines also highlights a challenge for abstraction learning: not all shared subroutines encountered in training generalize well to new tasks. Adding poor abstractions can actually be detrimental: they increase the combinatorial search space. We find that our approach produces higher-quality libraries after training: Table \ref{quant-results} (\textbf{no language at test time} section) shows that we consistently improve performance in a head-to-head comparison using enumerative search from the library priors alone -- in some domains, enumerative search with our model’s library outperforms \textit{neurally guided search} from the baseline model. 
We also find the learned library is effective for neurally-guided synthesis when no language hints are available after training (Table \ref{quant-results},\textbf{ no language at test, example-guided synthesis}), showing that LAPS incorporates language to learn a more effective library overall, which generalizes to the non-language setting. See supplement for example learned abstractions from $\mathcal L_f$. (3) \textit{LAPS can use language during testing if it is available, though it doesn't need to for competitive performance}. Clearly, language can provide a useful source of high-level information if it is available for new tasks. Our approach produces a neural synthesizer pre-trained to condition on language where available. Results on all three domains show that the model can use it to achieve additional performance gains (Table \ref{quant-results}, see \textit{language at test} rows). We also find that the models trained on synthetic annotations generalize effectively to natural human language at test (Table \ref{quant-results}, \textit{synth train, human test}), suggesting that even if human annotation is too costly, in many cases hand-writing natural language templates to accompany a few ground-truth programs is likely sufficient (and easier than hand designing a full DSL). \section{Introduction}\label{section-introduction} \input{1_introduction} \section{Related Work}\label{related-work} \input{2_related_work} \section{Inductive synthesis and library learning}\label{problem-formulation} \input{3_problem_formulation} \section{Base learning algorithm: DreamCoder}\label{section-dreamcoder} \input{4_dreamcoder_review} \section{Our Approach: Language for Abstraction and Program Search}\label{section-LAPS} \input{5_laps} \section{Experiments}\label{experiments} \input{6_experiments} \section{Conclusion} \input{7_conclusion} \let\thefootnote\relax\footnotetext{\textbf{Acknowledgements}: Many thanks to M. Nye, J. Mu, A. Marzoev, J. Fan, R. Hawkins, R. Levy, L. 
Schulz and our anonymous reviewers for invaluable feedback. Supported by grants from the Air Force Office of Scientific Research, the NSF under Grant No. 1918839 and NSF-funded Center for Brains, Minds, and Machines, the MIT-IBM Watson AI Lab, Google, Microsoft and Amazon.} \section*{S4. Base learning algorithm: DreamCoder}\label{supplemental-dreamcoder} The LAPS framework described in the main paper (Sec. 5) is a general one for extending Bayesian models of program learning to incorporate information from natural language (see \cite{liang2010learning,lake2015human, dechter2013bootstrap,lake2013one}). Our concrete implementation and experiments use the DreamCoder approach of \cite{ellis2020dreamcoder, ellis2018learning} as the base synthesis algorithm, which implements the hierarchical Bayesian formulation of program learning. It defines a modular interface with two primary learning components: a learned \textit{conditional inference} model for search (as a neural search heuristic); and a learned \textit{abstraction} algorithm for updating the program prior (based on program refactoring and compression) \cite{ellis2020dreamcoder}. Each of these learning components has been additionally implemented in other work (such as \cite{devlin2017robustfill,polosukhin2018neural,nye2019learning,parisotto2016neuro,balog2016deepcoder} for neurally guided synthesis, and \cite{dechter2013bootstrap,zhang2017macro,shin2019program,artzi2014learning,dumancicinventing} for program abstraction learning). This supplementary section provides theoretical and implementation details on the DreamCoder algorithm we use in our experiments (summarized in Sec. 4). We match our implementation as closely as possible to the original work for comparison with published baselines. We provide key details relevant to the language-guided extension, but strongly recommend the original works which introduce the DreamCoder algorithm \cite{ellis2020dreamcoder, ellis2018learning} for further reference. 
\subsection*{S4.1 Program prior and MDL equivalence} Hierarchical Bayesian program learning formulations require a prior over expressible programs. DreamCoder is learned iteratively: it is initialized with a base library $\mathcal L_0$ and returns a library $\mathcal L_f$ containing program abstractions learned from solving training tasks. Therefore, DreamCoder defines its program prior with respect to the current library $\mathcal L_i$ maintained at each iteration. This is parameterized as a simple PCFG $\text{P}[\rho | \mathcal L, \theta_\library]$ whose productions are of the form $l_i \to l_j \in \mathcal L$, each with a real-valued weight $\theta_{\mathcal L l}$, where the probability of a program $\rho$ is given by $\text{P}[\rho | \mathcal L, \theta_\library] = \prod_{l\in \rho}\text{P}[l | \mathcal L, \theta_\library]$ (Sec. 4.1). Minor complexity arises in order to support typing \cite{pierce}: following \cite{ellis2018learning}, the library $\mathcal L_i$ is implemented as a set of polymorphically typed $\lambda$-calculus expressions. The only change this produces to the original prior definition is to restrict the set of possible productions under the PCFG: that is, permissible productions are of the form $l_i \to l_j \in \{\mathcal L \mid l_i \to l_j \mkern3mu \text{is well typed}\}$. The prior probabilities of programs are therefore calculated with respect to the set of well-typed productions. As discussed in the main paper, this prior definition is \textit{equivalent to a minimum description-length prior over programs} under ($\mathcal L, \theta_\library$) when all $\theta_\library < 1.0$, as the product of additional productions in an expression will strictly decrease as the number of productions in an expression increases.
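The restriction to well-typed productions amounts to normalizing the production weights over only the permissible candidates. A toy illustration (the real implementation derives the permissible set via polymorphic type unification; the names here are this sketch's own):

```python
def production_probs(weights, well_typed):
    """Normalize production weights over only the well-typed candidate
    productions, as in the typed PCFG described above. `weights` maps
    candidate productions to real-valued weights theta; `well_typed` is
    the subset of productions permissible at the current point."""
    z = sum(weights[p] for p in well_typed)
    return {p: weights[p] / z for p in well_typed}
```

Productions outside the well-typed set simply receive no probability mass, so prior probabilities are computed with respect to the filtered set alone.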
\subsection*{S4.2 Amortized conditional inference} \begin{figure} [h] \centering \includegraphics[width =0.45\textwidth]{figures/supplemental-base-neural.png} \caption{Architecture of the neural model $Q_i(\rho | t, \mathcal L_i)$. The model takes as input task examples $t$. These are encoded using a domain-specific encoder $E(t)$. Task encodings are fed to an MLP and activation layer, which outputs a tensor $Q$. This parameterizes a distribution over program bigrams in the final DSL, which defines a conditional distribution from which to enumerate programs during search.} \label{neural_model} \end{figure} To identify programs that solve tasks $t$ while obtaining high probability under $\text{P}[\rho | \mathcal{L}, \theta_\library]$, DreamCoder trains a neural search heuristic $Q_i(\rho | t, \mathcal L_i)$ at each iteration $i$ to approximate the inverse model. The training procedure in \cite{ellis2020dreamcoder} (summarized in Sec. 4.2) is a key contribution of the original work for learning in the distant supervision setting. The model is trained on samples from the generative prior (providing an endless training stream of random synthesis tasks); this procedure should generalize immediately to any neural model for predicting programs conditioned on the task specification (e.g. \cite{devlin2017robustfill,polosukhin2018neural,nye2019learning,parisotto2016neuro,balog2016deepcoder}). The model is also supervised on any original training task examples and their program solutions discovered during learning. In our experiments we use the baseline neural model architecture in \cite{ellis2020dreamcoder}. This is parameterized by two modular components: \begin{enumerate} \item \textit{A domain-specific task encoder} $E(t)$. This encodes the task examples (e.g. \textit{images} in the graphics program domain, or input-output strings in the text editing domain) that are input to the neural model.
This task encoder architecture is defined domain-specifically based on the form of the task examples (e.g. a CNN for the graphics domain). It outputs a fixed-dimensional embedding for any given task as \textit{input} to the model. In our experiments this is a 64-dimensional embedding across all domains (see S6.1 for domain-specific architectures, and the released code). \item \textit{A conditional model over programs} $Q(\rho | E(t))$. This component receives the task encoding as input and outputs a distribution over programs. Following \cite{ellis2020dreamcoder}, this is a 2-layer fully-connected MLP (with 64 hidden units and a final tanh activation layer) that outputs a fixed-dimensional real-valued tensor encoding a distribution over programs in the library $\mathcal L$ as output. The real-valued tensor corresponds to weights over program primitives conditioned on their local context in the syntax tree of the program, consisting of the parent node in the syntax tree and which argument is being generated. This functions as a `bigram transition model' over trees that encodes the likelihood of transitions from one primitive to the next. $Q$ returns this as a $(|\mathcal L|+1)\times (|\mathcal L|+2)\times A$-dimensional tensor, where $A$ is the maximum arity of any primitive in the library. \end{enumerate} This parameterization supports fast sampling of programs during conditional synthesis: the neural model runs once per task (to encode the task examples and produce the bigram transition model) and the resulting parameterization can then be used to sample programs during synthesis (e.g. by enumerating programs by expanding trees, as `bigrams' over parent and child primitives, ranked in order of their likelihood starting from the program root).
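The bigram parameterization can be sketched as follows. This is a toy illustration with a hypothetical three-primitive library and random weights standing in for the trained MLP's output tensor; the real model produces $Q$ from the task encoding:

```python
import math
import random

random.seed(0)
L = ["move", "for", "pen-up"]   # hypothetical library
A = 2                           # max arity of any primitive
# (|L|+1) parents (incl. a root symbol) x (|L|+2) children (incl. a variable
# and an end symbol) x A argument slots, as in the bigram parameterization.
Q = [[[random.gauss(0, 1) for _ in range(A)] for _ in range(len(L) + 2)]
     for _ in range(len(L) + 1)]

def transition_logp(parent, child, arg):
    """Log-probability of filling the `arg`-th slot of `parent` with `child`,
    via a softmax over the child dimension of the tensor Q."""
    logits = [Q[parent][c][arg] for c in range(len(L) + 2)]
    z = math.log(sum(math.exp(x) for x in logits))
    return logits[child] - z

# Score a tiny tree: the root (index len(L)) expands slot 0 with primitive 1,
# which expands its own slot 1 with primitive 0.
score = transition_logp(len(L), 1, 0) + transition_logp(1, 0, 1)
assert score < 0.0
```

Enumerating trees in decreasing order of such scores yields the best-first search described above, with the neural network run only once per task.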
Following \cite{ellis2020dreamcoder}, the neural model is trained to optimize the following MAP inference objective on the training tasks and the sampled tasks from the prior: \begin{equation} \mathcal{L}_{\text{MAP}} = \text{E}_{t\sim(\mathcal L,\theta_\library) }\left[ \log Q\left(\argmax_{\rho} \text{P}[\rho |t,\mathcal L,\theta_\library]\;\bigg\vert\; t \right) \right] \end{equation} \subsection*{S4.3 Abstraction learning as program compression} DreamCoder learns new abstractions to approximately optimize Eq. 2 (main paper), which infers an optimal library and parameters with respect to the observed programs on the training tasks. The DreamCoder abstraction algorithm is a primary contribution of the original work in \cite{ellis2020dreamcoder}, where it is discussed extensively. We therefore provide additional technical details here that are relevant to its integration with LAPS in our experiments, but strongly encourage referencing \cite{ellis2020dreamcoder} for the full implementation. As discussed in \cite{ellis2020dreamcoder} and our main work, DreamCoder approaches abstraction using an equivalence between Eq. 3 and the \textit{minimum description length} of the \textit{prior} (as the description length of the library) and the \textit{programs} produced from the prior (under the PCFG definition of the prior). In practice, therefore, inferring the optimal library is equivalent to inferring the library which maximally compresses the description length of the library and the description length of programs which explain the training tasks.
In particular, DreamCoder optimizes the following compression objective with respect to the training tasks $T$ and the finite \textit{beam} $\mathcal{B}_t$ of program solutions discovered for each training task during learning: \begin{align} \log \text{P}[\mathcal{L}] + \argmax_{\theta_\library}&\sum_{t\in T}\log \sum_{\rho\in \mathcal{B}_t}\text{P}[t|\rho]\max_{\rho'\longrightarrow^*\rho}\text{P}[\rho'|\mathcal{L},\theta_\library]\nonumber\\& \text{\phantom{tt}}+ \log \text{P}[\theta_\library|\mathcal{L}] - \vert\theta_\library\vert_0 \label{infiniteReFactorObjective} \end{align} The key aspect of this algorithm is that it considers abstractions which compress not only the programs as they are \textit{currently written}, but any semantically equivalent \textit{refactorings} of these programs. Specifically, as programs are written in a $\lambda$-calculus, \textit{refactoring} refers to any program which is equivalent up to $\beta$-reduction (i.e., function application/variable substitution~\cite{pierce}). A primary contribution of the original work in \cite{ellis2020dreamcoder} is an efficient algorithm for computing these refactorings; it is unchanged when we integrate language, and we refer to the original text for details. For our work, the important point is that refactorings are defined compositionally over the existing program primitives. Specifically, refactorings can be efficiently calculated according to semantic equivalences in the $\lambda$-calculus (namely, function application and variable substitution guarantee that the resulting refactored programs are equivalent; \textit{abstractions} created by variable substitution will therefore always be composed of subcomponents from the initial library). We take advantage of this compositionality when defining our joint abstraction algorithm over natural language.
Defining an initial \textit{compositional} translation model between language and the program components ensures that we can approximate compression in the joint model after the programs are refactored, without needing to induce an entirely new translation model over language and the refactored programs. \section*{S5. Our Approach: Language for Abstraction and Program Search} This section describes technical details for the concrete LAPS implementation in our reported experiments, which is defined over the DreamCoder implementation. We structure this section according to the parallel implementations in the base algorithm for clarity. However, except for the specifics of the joint-abstraction algorithm, the technical implementation of each component should extend directly to most other similar learned synthesis algorithms (e.g. the joint model implementation should be reusable in \textit{any} synthesis algorithm that uses an explicit symbolic library of primitives.) \subsection*{S5.1 Joint prior over programs and language} LAPS extends the prior $\text{P}[\rho]$ over programs under the library to a \textit{joint} prior $J(\rho, d_t)$ over programs for a given task and their natural language descriptions $d_t$ (Sec. 5.1). We formulate this prior as $$J(\rho, d_t) = \text{P}[\rho | \mathcal L, \theta_\library]\, \text{P}[d_t | \rho, \mathcal L]$$ the product of the original prior over programs $\text{P}[\rho | \mathcal L, \theta_\library]$ defined on the program library, and a \textit{program to descriptions} ``translation" model $\mathcal T(d_t | \rho, \mathcal L) \approx \text{P}[d_t | \rho, \mathcal L]$ that describes how descriptions are generated for programs written in the library.
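A minimal sketch of how this joint prior factorizes is given below. All weights are hand-set toy values, and the per-word mixture over the program's primitives is an IBM-Model-1-style simplification of the Model 4 translation model actually used in our implementation:

```python
import math

# Hypothetical weights: a program prior over primitives, and token-token
# translation probabilities P_T[w | l] from a learned translation model.
theta = {"circle_fn": 0.5, "repeat": 0.5}
p_word_given_prim = {("circle", "circle_fn"): 0.9, ("circle", "repeat"): 0.1,
                     ("six", "circle_fn"): 0.2, ("six", "repeat"): 0.8}

def log_joint(program, description):
    """log J(rho, d) = log P[rho | L, theta] + log P_T[d | rho, L], with an
    IBM-Model-1-style uniform mixture over the program's primitives per word."""
    lp = sum(math.log(theta[l]) for l in program)
    for w in description:
        lp += math.log(sum(p_word_given_prim[(w, l)] for l in program)
                       / len(program))
    return lp

assert log_joint(["repeat", "circle_fn"], ["six", "circle"]) < 0.0
```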
The concrete implementation described in the main paper uses a translation model that additionally decomposes compositionally over language and programs--in particular, on the basis of token-token translation distributions $\text{P}_\mathcal T[w | l]$ between words $w \in d_t$ and $l \in \mathcal L$. Many available translation and semantic parsing models (such as synchronous grammars over natural language and programs) preserve this further compositional requirement (e.g. \cite{artzi2014learning,wong2006learning}). See Figure S\ref{joint-generative} (supplement) for example samples from the generative model on the graphics domain at earlier and later stages of training. Our implementation uses a classical statistical machine translation model (the Model 4 version of the IBM Statistical Machine Translation models \cite{gal2013systematic}) whose parameters can be tractably estimated from very few paired programs and descriptions (in the distant supervision setting used in the original work, there may be no more than a couple of hundred training tasks in the full dataset, and fewer than 10 solved tasks on which to train the translation model at any given time). In addition to supporting inference in small data settings, this translation model has a fully compositional generative definition \cite{gal2013systematic} that allows it to be easily used to train the neural amortized inference model which conditions on language. However, this translation model (and the further inductive biases used to specifically relate program trees to sentences) makes strong compositionality assumptions about the relationship between program primitives and words as a joint generative model of programs and language; we find that these inductive biases are useful in the small data setting and produce empirically successful results.
However, this is likely because of \textit{how} the joint model is used during training, which does not require a perfect generative model of language (or language with respect to programs) for either amortizing inference or abstraction in order to use language as a heuristic during learning. A full definition of the statistical translation model we use can be found in \cite{gal2013systematic}. We re-summarize important details here. The IBM family of translation models estimates the conditional token-token probabilities $\text{P}_\mathcal T[w | l]$ on the basis of \textit{alignment} variables $a_{l,d}$, which specify a direct correspondence between tokens in parallel texts (e.g. a word in a task description and a program primitive). These alignments are \textit{many:many} between tokens in programs and natural language sentences -- a given word can correspond to multiple primitives, and vice versa. Conditioned on a set of \textit{alignments} from paired programs and descriptions, the conditional probabilities in \textit{both} directions (the probability of generating a program primitive in a program based on the presence of a word in a sentence, and vice versa) are defined by marginalizing over the alignment variables. We provide one direction ($\text{P}_\mathcal T[w | l]$), as the other is symmetrical: $$ \text{P}_\mathcal T[w | l] \propto \sum_{a_1}...\sum_{a_m} \text{P}[w, a_1...a_m | l] \propto \prod_{i=1}^{m}q(a_i | i, l, m) $$ where $a_i$ are alignment variables inferred over a paired corpus and $q(j | i, l, m)$ can be interpreted as the probability of alignment variable $a_i$ (for the token with index $i$ in a program) taking value $j$ (where $j$ is an index into the corresponding sentence) conditioned on the lengths $l$ and $m$ of the program and natural language sentence \cite{gal2013systematic}.
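A count-based sketch of how token-token translation probabilities arise from alignments is given below. The corpus and alignment links are hand-written toy stand-ins for those inferred by the alignment algorithm, and the simple count normalization is a simplification of the full marginalization above:

```python
from collections import Counter

# Toy aligned corpus: (program tokens, description tokens, alignment links
# as (word index, primitive index) pairs). Links are many:many.
corpus = [
    (["rconcat", "dot"], ["add", "a", "dot"], [(0, 0), (1, 1), (2, 1)]),
    (["rconcat", "f"],   ["add", "an", "f"],  [(0, 0), (2, 1)]),
]

counts = Counter()
for prog, desc, links in corpus:
    for wi, li in links:          # accumulate word<->primitive link counts
        counts[(desc[wi], prog[li])] += 1

def p_w_given_l(w, l):
    """P_T[w | l], estimated by normalizing alignment counts for primitive l."""
    total = sum(c for (_, l2), c in counts.items() if l2 == l)
    return counts[(w, l)] / total

assert p_w_given_l("add", "rconcat") == 1.0   # "add" always aligns to rconcat
assert p_w_given_l("dot", "dot") == 0.5       # "dot" shares mass with "a"
```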
These alignments are inferred by approximately inverting the generative model in \cite{gal2013systematic} to maximize the likelihood of the observed paired sentences and programs. One implementation detail: the alignment algorithm operates over pairs of strings. For convenience we infer alignments between sentences and linearized token sequences in the program tree (which can be done with complete recoverability of the original program tree \cite{andreas2013semantic}). This is another inductive assumption chosen after preliminary experimentation; we find that our implementation yields strong empirical results regardless. The IBM translation model is a noisy-channel generative model that requires an additional language model $p(d)$ to generate language \cite{gal2013systematic,heafield2011kenlm}. We use an efficient parallelized implementation for inferring the translation model parameters from \cite{koehn2007moses}, which also contains a basic language model inference algorithm; the language model is inferred over the full corpus of training task sentences (as a trigram model, which we again find simple but effective for our very small data setting). Specific model hyperparameters for all experiments are available in the released code repo (in the experiment runtime commands).\\ \textbf{Mutual exclusivity}: Section 5.1 of the main paper also describes how the joint model can be modified to include language-specific priors, such as a simple implementation of the well-known \textbf{mutual exclusivity} prior documented in the cognitive language-learning literature \cite{markman1988children,gandhi2019mutual} and given a Bayesian formulation in \cite{frank2009using}.
We provide an implementation to demonstrate that the joint model can be easily extended: specifically, a simple mutual exclusivity assumption can be added into the joint model by updating the compositional translation model to include additional distributions $t_{ME}(d_{new}| l)$, where $d_{new}$ are words that \textit{only} appear in unsolved training tasks and $$t_{ME}(d_{new}| l) \propto \alpha \text{P}[l | \mathcal L, \theta_\library]^{-1}$$ so that new words are assumed to correspond to primitives \textit{inversely} proportionally to their current usage under the learned program prior. As we show in the next section, incorporating this prior at the level of the joint model can be used to approximate mutual exclusivity assumptions in the learned search heuristic, encouraging exploration in the presence of new words. Practically, we calculate the mutual exclusivity prior in our concrete implementation by leveraging the \textit{alignments} upon which our token-token translation probabilities are defined. Specifically, we add \textit{pseudoalignments} between each $d_{new}$ and each $l$ $\propto \alpha \text{P}[l | \mathcal L, \theta_\library]^{-1}$; when the token-token translation probabilities marginalize over the latent alignments and these pseudoalignments, the resulting translation probabilities encode the mutual exclusivity prior. \subsection*{S5.2 Integrating the joint model into amortized conditional search} \begin{figure} [h] \centering \includegraphics[width =0.45\textwidth]{figures/supplemental-laps-neural.png} \caption{Architecture of the language-conditioned neural model $Q(\rho | d,t)$. The model takes as input task examples $t$. These are encoded using a domain-specific encoder $E(t)$. The model additionally takes in task descriptions $d$, encoded using a language encoder $E_D(d)$ (implemented as a GRU). Task and description encodings are concatenated and fed to an MLP and activation layer, which outputs a tensor $Q$.
This parameterizes a distribution over program bigrams in the final DSL, which defines a conditional distribution from which to enumerate programs during search.} \label{laps-neural-model} \end{figure} The amortized conditional inference model $Q(\rho | t)$ (Sec. 4.2) extends straightforwardly in LAPS to condition on language $Q(\rho | d,t)$ (Sec. 5.2). Importantly, the training procedure in Sec. 4.2 (training the neural model on samples from the prior) also extends to the language-enriched condition (training the neural model on samples from the joint prior, which include generated language annotations). We implement the concrete neural model $Q(\rho | d,t)$ in our experiments by extending modularly on the original model in \cite{ellis2020dreamcoder} (and in the supplemental S4.2) for direct comparison. Our full architecture therefore has \textit{three} modular components to additionally condition on language: \begin{enumerate} \item A \textit{natural language task descriptions encoder} $E_D(d)$. This receives the task description $d$ as input. We implement this as an RNN model using a bidirectional GRU \cite{cho2014learning} with 64 hidden units; we embed natural language symbols as 64-dimensional vectors, and randomly initialize and backpropagate through the embedding during training. We tokenize the sentences in $d$ on whitespace and concatenate each sentence, delimited by special start and end of sentence tokens. At test time, we replace any OOV tokens with a special UNK token. \item A domain-specific task encoder $E(t)$, following S4.2. \item A bigram transition model over program primitives, following S4.2. To condition jointly on $E_D(d)$ and $E(t)$ we simply concatenate these two embeddings and update the first layer of the MLP to take the 128-dimensional concatenated embeddings as input.
\end{enumerate} \subsection*{S5.3 Abstraction learning as joint model compression} Finally, the \textit{abstraction learning} model in \cite{ellis2020dreamcoder} can also be generalized to condition on language, by extending the optimal library inference algorithm with respect to the program prior to an optimal library inference algorithm with respect to the joint model over language and programs (Eq. 6 and 7, main text). In our concrete implementation with respect to the DreamCoder algorithm, this means extending the description-length compression objective -- originally defined over the program library and training task programs -- to include the translation model definition. The main paper defines a description-length prior over the compositional translation model (Eq. 10). Optimizing this tractably requires redefining the abstraction algorithm in \cite{ellis2020dreamcoder} -- which refactors $\lambda$-calculus programs via $\lambda$-abstraction (see S4.3 for a summary) -- to also jointly re-estimate the description length of the translation model $\mathcal T(d_t | \rho, \mathcal L')$ using the refactored programs under the new candidate library $\mathcal L'$. We implement an efficient approximation that can be calculated with respect to the classical statistical translation model described in S5.1 \cite{gal2013systematic}. In particular, we leverage the \textit{alignment}-based definition (which uses latent correspondences inferred between program tokens and sentence tokens in paired programs and descriptions) to approximate $-H(\text{P}_\mathcal T[w|l]) = -\log(\text{P}_\mathcal T[w|l])$, the entropy of the token-token translation probabilities.
Specifically, as the IBM model defines the conditional token-token probabilities $$ \text{P}_\mathcal T[w | l] \propto \sum_{a_1}...\sum_{a_m} \text{P}[w, a_1...a_m | l] $$ marginalized over alignments, where (slightly abusing notation) in any given paired program and sentence description we will have estimated a set of alignments $a_{w_j,l_k...l_n}$ between the $j$-th token in the description corresponding to one \textit{or more} tokens $l_k...l_n$ in the paired program. We therefore define the \textit{description}-length of each token-token translation as the sum of the description lengths of the alignments which express it under a library $\mathcal L$: $$ \sum_{a_1}...\sum_{a_m} \text{P}[d, a_1...a_m | l, \mathcal L] \propto \sum_{a_1}...\sum_{a_m} |a_i|_{\mathcal L} $$ and the description lengths under the \textit{refactored} library $\mathcal L'$ containing new abstractions compress according to \begin{equation} \begin{split}|a'_{w_j,l'_k...l'_n}|_{\mathcal L'} < |a_{w_j,l_k...l_n}|_{\mathcal L} \iff \\ \{l'_i \text{ contains only } l_k...l_n \text{ as subcomponents} \mid l'_k...l'_n\} \end{split} \end{equation} where we say that a primitive $l \in \mathcal L$ is a \textit{subcomponent} of a refactored abstraction $l' \in \mathcal L'$ if the abstraction can be $\beta$-reduced such that $l$ appears in it. That is, a refactored alignment $a' : w_j \to \{l'_k...l'_n\}$ is compressed only when a new abstraction $l'$ encapsulates over a strict subset of the constituent program primitives already aligned to the word in the original alignment. This allows us to re-approximate the description length of the new translation model with respect to a semantically-equivalent program refactoring without inducing $\text{P}_\mathcal T[w | l]$ from scratch (which would require retraining the full translation model over the sentences and refactored programs). \section*{S6.
Experiments} This section describes additional details on each of the domains -- \textit{string editing, compositional graphics,} and \textit{scene understanding} -- in Section 6 of the main paper (see \textbf{Figure 2, main text} for examples from all three domains, shown along with the synthetic and human language annotations). We also provide additional details on the model and baseline hyperparameters available for each domain. All datasets generated for these experiments (including human language annotations) are released and links to static repositories are provided in the code release. We also release a complete set of commands to exactly replicate all model experiments. All experiments were conducted on a high-powered computing cluster using a fixed training budget of wall-clock search time per task for all models and baselines in a given domain (determined via hyperparameter search using the baseline model per domain, and reported on a per-domain basis below). The experiments on the string editing and graphics domains used models trained using 48 CPUs for search (using the original parallel enumerative search implemented in the released code for the DreamCoder model in \cite{ellis2020dreamcoder}); the experiments on the scene reasoning domain used 24 CPUs (as preliminary experiments revealed that these experiments required shorter search time for our main model, and we wished to reduce the carbon footprint of the remaining experiments after our first two domains). For all experiments we train the neural models for $1 \times 10^4$ gradient steps. For experiments with language-guided compression, we use an upper bound of 5 new abstractions introduced per iteration. For mutual exclusivity experiments, we set $\alpha_{ME} = 0.1$.
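The mutual exclusivity weighting from S5.1, with $\alpha_{ME}$ as above, can be sketched as follows (hypothetical prior-usage values; not the actual learned prior):

```python
# Sketch of the mutual-exclusivity pseudoalignment weights (S5.1): a new word
# is aligned to primitives inversely proportionally to their prior usage.
alpha_me = 0.1
prior = {"move": 0.6, "for": 0.3, "new_abstraction": 0.1}  # hypothetical P[l]

raw = {l: alpha_me / p for l, p in prior.items()}
z = sum(raw.values())
pseudo = {l: w / z for l, w in raw.items()}  # normalized t_ME(d_new | l)

# The least-used primitive receives the largest share of the new word's mass,
# encouraging exploration of rarely used primitives for unseen words.
assert max(pseudo, key=pseudo.get) == "new_abstraction"
assert abs(sum(pseudo.values()) - 1.0) < 1e-9
```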
For all experiments, during program-only compression (see \cite{ellis2020dreamcoder} for a discussion of program-only compression hyperparameters) we use the hyperparameters from \cite{ellis2020dreamcoder} for parsimony with earlier work: a structure penalty of 1.5 and pseudocounts = 30. \subsection*{S6.1 Domains} (See \textbf{Figure 2}, main text for examples from all three domains, shown along with the synthetic and human language annotations.) As discussed in the main paper, each domain consists of a dataset of \textit{tasks}; a set of procedurally generated \textit{synthetic language annotations}; and a set of \textit{human language annotations} provided by Mechanical Turk workers; we also describe the \textit{base primitives} $\mathcal L_0$ with which all models (including baselines and ablations) were initialized for each domain. \subsubsection*{S6.1.1 String Editing} \textbf{Tasks:} structured string transformation problems taken from a publicly released dataset in \cite{andreas2017learning} (n=1000 train; n=500 test). Tasks consist of input dictionary strings transformed using randomly sampled regular expression transducers (n=30 examples per task). Transducers were sampled according to abstract templates defined in \cite{andreas2017learning} and required identifying matched sequences of characters and \textit{adding} letters before them; \textit{removing} sequences; \textit{replacing} them with new sequences; or \textit{doubling} the sequence each time it appeared (see \textbf{Figure 2A, main text}). \textbf{Language data:} The human language dataset for this domain was previously collected by \cite{andreas2017learning}. We defined a synthetic grammar of high-level templates over the ground truth regular expression transducers (corresponding to the original templates used to generate the tasks).
The synthetic templates were defined based on language from the original human annotations, and in most cases closely matched the true human-provided annotations (which were generally quite structured), though with significantly less variation: the original language contains multiple human descriptions per task, whereas we generate a single synthetic description for each task. The synthetic dataset has a vocabulary size of n=44 for both train and test. When evaluating on human data, we use the human annotations in the original dataset, which have a vocabulary of n=727 (train) and n=622 (test). We generate a synthetic dataset on this domain partly because of inaccuracies noted in \cite{andreas2017learning}. The released code contains the complete generation procedure for these synthetic annotations. See Figure 2A for representative tasks with examples, synthetic language, and human descriptions. \textbf{Initial program primitives:} We initialize all models with a set $\mathcal L_0$ of LISP-like primitives that operate over substring sequences to both construct regular expression match sequences and manipulate strings, augmented with three text manipulation-specific primitives intended for executing constructed regular expression sequences; \texttt{t} is a polymorphic type variable using standard Hindley-Milner polymorphism typing \cite{pierce}. The execution engine does include a regex-matching model; however, the synthesis model is naive to this execution engine and simply searches for manipulations over the input strings and the regexes as data arrays. $\mathcal L_0$ contains 14 substring manipulation primitives, given below with type information. We also give a semantic gloss for primitives that are not standard LISP primitives.
\begin{itemize} \item \texttt{if (bool $\to$ t $\to$ t $\to$ t)} \item \texttt{cons (t $\to$ list(t) $\to$ list(t))} \item \texttt{car (list(t) $\to$ t)} \item \texttt{cdr (list(t) $\to$ list(t))} \item \texttt{map ($(t_0 \to t_1) \to list(t_0) \to list(t_1)$)} \item \texttt{tail (list(t) $\to$ t)} \item \texttt{append (t $\to$ list(t) $\to$ list(t))} \\ Appends element to end of list. \item \texttt{revcdr (list(t) $\to$ list(t))} \\ Takes all except the last element of the list. \item \texttt{match (substr $\to$ substr $\to$ bool)} \\ Returns true if the first argument, when executed as a regular expression, matches the second argument. \item \texttt{regexsplit (substr $\to$ fullstr $\to$ list(substr))} \\ Attempts to execute the first argument as a regular expression, and splits the second argument into a list of substrings, using the regular expression match as a delimiter (and includes the matched sequences in the returned list). \item \texttt{flatten (list(substr) $\to$ fullstr)}\\ Flattens a list of substrings back into a string. \item \texttt{rconcat (substr $\to$ substr $\to$ substr)}\\ Concatenates two substrings. \item \texttt{rnot (substr $\to$ substr)}\\ Takes a substring argument \texttt{s} and returns the substring literal [\^\ s] \item \texttt{ror (substr $\to$ substr $\to$ substr)} \\Takes substring literals a and b and returns the substring literal ((a)|(b)) \end{itemize} We also include 26 character constants of type \texttt{substr} and constants \texttt{dot} (regular expression wildcard character) and \texttt{empty} (empty string). \textbf{Domain hyperparameters} We largely follow prior work \cite{ellis2020dreamcoder} to set algorithm training parameters; \cite{ellis2020dreamcoder} uses a 720s enumerative search budget for solving both text editing and general list manipulation tasks, and we use the same 720s enumerative budget here.
The encoder $E(t)$ follows the domain-specific encoder used for text and list editing problems in \cite{ellis2020dreamcoder}: a 2-layer GRU with 64 hidden units. The model is trained for a fixed gradient step budget (10,000 gradient steps) and we sample equally at random between supervision on the solved training tasks (and their solution programs in the current DSL) and samples from the joint generative model. As with \cite{ellis2020dreamcoder}, when generating tasks from the generative model, we randomly sample inputs (on which we execute generated programs to produce an output). \subsubsection*{S6.1.2 Compositional Graphics} \textbf{Tasks:} inverse graphics problems (n=200 train; n=111 test) where each synthesis problem is specified by an image and solved by synthesizing a program in LOGO Turtle graphics \cite{abelson1986turtle}. The domain is inspired by the graphics domain in \cite{ellis2020dreamcoder} but intentionally re-designed to be much more challenging (ground-truth programs are much longer on average in the base programming language) and explicitly compositional: the training and testing tasks contain \textit{simple shape tasks} defined by compositional parameters for a set of basic shapes (\textit{a small triangle, a medium square; a small semicircle}); \textit{complex shape tasks} that require inferring more challenging (and longer) parameterized shapes (\textit{a greek spiral with eight turns}); and \textit{compositional tasks} defined by geometric rules and relations over the simple shapes (\textit{a seven sided snowflake with a short line and a small triangle as arms; a small triangle connected by a big space from a small circle}) (see \textbf{Figure 2C}). \textbf{\textit{Simple parameterized shapes}} are either polygons (\textit{triangle, square, [n] gon}), curves (\textit{semicircle, circle}) or \textit{line}s. Simple shapes are parameterized by one of three sizes (\textit{small} or \textit{short}; \textit{medium}; and \textit{big}).
When generating synthetic language descriptions, pluralized objects are tokenized with separate tokens for the noun lemma and a token for the plural suffix (e.g. \textit{square s}). \\ \textbf{\textit{Complex parameterized shapes}} require constructing more complex images out of basic lines, and are intended to evaluate performance on tasks that pose a greater search challenge in the initial DSL, and whose structure is not directly cued by compositional relationships over easier components. Further, the complex shapes can be solved using abstractions (e.g. for repeatedly rotating a pen at right angles) that are not directly cued by shared lexical names -- we evaluate the algorithm's ability to learn and use abstractions that correspond to useful sublexical structures shared across multiple lexemes. We define four template families for complex shapes: \textit{spiral}s, \textit{staircase}s, \textit{zigzag}s, and \textit{star}s. \\ \textbf{\textit{Compositional graphics}} tasks invoke compositional relationships over the simple parameterized shapes. We define templates for generating 6 families of compositional tasks: \textit{nested}, \textit{next to}, \textit{separated by}, \textit{connected by}, \textit{in a row}, and \textit{snowflake}s. \textbf{Language data:} We gather human language annotations by asking Mechanical Turk workers to write an image description for the rendered graphics images that specify each task. Each worker labeled 20 training and 10 testing images after viewing a disjoint, randomly sampled set of 15 example images paired with their synthetic language captions. (Workers were asked to write a \textit{short, clear description that a person or robot could use to recreate the picture}, and told that the examples were paired with \textit{automatically generated captions as an example of the kinds of descriptions you could write for this picture}.) 
We control for description quality by requiring workers to complete a reference task on their own descriptions: after writing their initial annotations, workers were required to correctly match each annotation to the target image (from amidst a set of 12 distractors drawn heuristically from similar images on the full task dataset, and other images they themselves had described), and only annotations correctly matched to the target image were retained (workers were given a chance to redescribe pictures they failed to match to their own captions.) We preprocess the human dataset minimally to standardize number terms (e.g. we use the same token type for both \textit{3} and \textit{three}) and to split plurals into a lemma and suffix, as in the synthetic dataset. The final dataset has a vocabulary size of n=562 for both train and test. As with the string editing domain, we define a synthetic dataset using parameterized templates based on systematic language reused in the human annotations (see Figure 2A for a comparison between human annotations and synthetic language); as with that domain, we choose a synthetic dataset to ensure systematic re-use of high level terms for repeated compositional objects (such as the ``n-gon" or ``snowflake" terminology.) We then generate graphics tasks by defining parameterized templates over ground truth programs \textit{in $\mathcal L_0$}, and a corresponding generator for synthesizing natural language descriptions based on each ground truth program. It is important to note that the templates are defined at an extremely high level and were written with respect to low-level programs in a simple graphics language (many of which were derived by generalizing compositionally over complex structures in \cite{ellis2020dreamcoder}, such as the `snowflake' images).
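The preprocessing described above (number standardization and plural splitting) amounts to a simple token-level pass. A hypothetical Python sketch — the `NUMBER_WORDS` table and the naive suffix rule are illustrative assumptions, not the paper's exact rules:

```python
# Hypothetical preprocessing sketch: map number words to digit tokens and
# split plural nouns into a lemma token plus a plural-suffix token.
NUMBER_WORDS = {"one": "1", "two": "2", "three": "3", "four": "4",
                "five": "5", "six": "6", "seven": "7", "eight": "8"}

def preprocess(caption):
    out = []
    for tok in caption.lower().split():
        tok = NUMBER_WORDS.get(tok, tok)  # e.g. "three" -> "3"
        # naive plural split: "squares" -> "square", "s"
        if tok.endswith("s") and len(tok) > 3 and not tok.endswith("ss"):
            out.extend([tok[:-1], "s"])
        else:
            out.append(tok)
    return out

tokens = preprocess("three small squares in a row")
```

A real pass would need an exception list (e.g. mass nouns ending in \textit{-s}); the point is only that lemma and suffix become separate vocabulary items, matching the synthetic captions.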
\textbf{Initial program primitives:} For comparison with prior work, our initial library on this domain (and the base language used to generate the ground truth graphics programs) is an implementation of the LOGO Graphics DSL used in \cite{ellis2020dreamcoder}, which consists of four typed, imperative primitives modeled within the $\lambda$-calculus with a state monad $S$: \begin{tabbing} \texttt{move: distance $\to$ angle $\to$ S $\to$ S} \\ \texttt{pen-up: (S $\to$ S) $\to$ S $\to$ S} \\ \texttt{for: int $\to$ (S $\to$ S) $\to$ S $\to$ S} \\ \texttt{get/set: (S $\to$ S) $\to$ S $\to$ S} \\ \end{tabbing} as well as four arithmetic operators (+, -, *, /), integer constants (1-9), unit distances and angles (1 meter and $2\pi$ radians), and special values $\infty$ and $\epsilon$. Figure 3 (main text) shows examples of the graphics tasks, synthetic descriptions, human descriptions, and sample programs in the ground truth initial DSL. \textbf{Domain hyperparameters:} We largely follow prior work \cite{ellis2020dreamcoder} to set algorithm training parameters. Consistent with the graphics program experiments in \cite{ellis2020dreamcoder}, we train all models, including baselines and ablations, using an enumerative search budget of 1800s per task (both when using pure enumerative search from the DSL prior, and neurally-guided search conditioned on the task examples and language descriptions); the results in Table 1 compare the relative advantage of our model given this fixed search time. We train all models on 48 CPUs during parallel enumerative search, and run the algorithm for a maximum of 27 iterations (see learning curves. As we run multiple random seed replications of models in this domain, we tuned the iteration limit based on performance on the first replication, allowing models to train while performance continued to increase.
To conserve computational resources, we later stopped several of our own model replications before 27 iterations, as they had reached near ceiling performance. As we report the best held-out test score across all 27 iterations for any one model, the early stopping would only serve to give a conservative estimate on performance for these models.) We randomly reorder the training set of tasks once before the first loop, then iterate through batches of n=40 tasks at each iteration; learning curves show results from evaluating on held-out tasks every n=3 iterations. The encoder E(t) follows the domain-specific encoder used for the original graphics domain in \cite{ellis2020dreamcoder} for a more direct comparison: we use a 6-layer CNN, where each layer consists of a 64x64 2D convolutional sublayer with kernel size = 3, a RELU activation sublayer, and a max-pooling sublayer with kernel size = 2. The model is trained for a fixed gradient step budget (10,000 gradient steps) and we sample equally at random between supervision on the solved training tasks (and their solution programs in the current DSL) and samples from the joint generative model. \subsubsection*{S6.1.3 Scene Reasoning} \textbf{Tasks:} inductive scene reasoning tasks (n= 212 train; n=115 test) where each synthesis problem is specified by a structured input scene, and outputs can be a number (\textit{how many red rubber things are there?}), a boolean value (\textit{are there more blue things than green things?}), or another scene (\textit{what if all of the red things turned blue?}). This domain is modeled on CLEVR \cite{johnson2017inferring} but designed to support non-linguistic, inductive synthesis in the programming-by-example paradigm: each task is specified with \textit{n=7} paired input-output examples. See \textbf{Figure 2B, main text} for example tasks showcasing the original and extended templates, synthetic language annotations, and human language annotations.
The dataset includes questions randomly generated from the following subset of the \textit{original CLEVR question templates} (see \cite{johnson2017inferring} for additional details on the task generation process and question templates; we also release our own augmented question generation code and the full dataset): \begin{itemize} \item \textbf{zero\_hop}: questions that require counting or answering an attribute query about a subset of objects in the scene. (e.g. \textit{How many small cylinders are there?}; \textit{What material is the purple thing?}). \item \textbf{one\_hop}: questions similar to the \textit{zero\_hop} tasks, but that require reasoning over an additional relational query (e.g. \textit{What number of things are right of the small gray thing?}). \item \textbf{single\_or}: questions that additionally introduce a \textit{disjunction} between sets of objects. (e.g. \textit{How many objects are either large metal spheres or large rubber things?}). \item \textbf{compare\_integer}: questions that additionally introduce a $\geq$ or $\leq$ operator between counts of sets of objects. (e.g. \textit{Is the number of large rubber cubes less than the number of large green rubber things?}) \item \textbf{same\_relate}: questions that additionally require reasoning about other objects with the same attribute as a specified object. (e.g. \textit{How many other things are there of the same size as the cyan thing?}). \end{itemize} We choose these templates as a representative subset of the style of the full CLEVR dataset, that requires the full language of high-level primitives in \cite{johnson2017inferring} to solve. We omit some longer questions in the same format (e.g.
\textit{two\_hop}) as our intention is to compare synthesis baselines, rather than to achieve SOTA performance on CLEVR: this would likely only increase the computing resources needed to compare the various methods and we already found a significant differential between our model and the baselines on the shorter questions. We also add \textit{new} question templates generated in the style of the original CLEVR tasks, but designed to model other common AI tasks (such as generating new scenes based on existing ones) and to require new abstractions (that were not expressible in the original restricted symbolic language used to generate scenes in \cite{johnson2017inferring}): \begin{itemize} \item \textbf{localization}: questions for object localization. These return an output \textit{scene} consisting of a localized set of objects based on a set of query attributes (e.g. \textit{Find the gray rubber thing.}). \item \textbf{remove}: questions that either return an output \textit{scene} with a subset of the objects removed, or that query about latent scenes where a subset of objects has been removed. (e.g. \textit{What if you removed all of the gray metal things?}; \textit{If you removed the green cubes, how many cubes would be left?}). \item \textbf{transform}: questions that either return an output \textit{scene} where a subset of the objects has been \textit{transformed} to set new attributes, or that query about latent scenes where a subset of objects has been modified this way. (e.g. \textit{What if all the blue metal things became rubber things?}; \textit{If all of the large yellow rubber things became gray spheres, how many gray spheres would there be?}).
\end{itemize} We treat these as program synthesis tasks: the input scenes are specified as \textit{symbolic scene graphs consisting of an array of structured objects, each defined as a dictionary of their attributes}, and programs are designed to manipulate these structured arrays (this data structure is the original format in which scenes themselves are generated in \cite{johnson2017inferring}; the images displayed in Figure 3, main text are rendered using the original image rendering pipeline). Our intention is \textit{not} to build a visual reasoning architecture: rather, we are interested in learning structured manipulations of scenes. We see work in \textit{inverse graphics} (such as \cite{yi2018neural}) which outputs a structured scene graph based on pixel images as the \textit{first} step in a symbolic processing and reasoning pipeline as analogous; we are interested in the structured manipulation of these scene representations. \textbf{Language data:} Synthetic language annotations are generated based on the original high-level templates in \cite{johnson2017inferring}, as well as additional templates we define for the extended questions in the same style. We gather human language annotations by asking Mechanical Turk workers to write an instruction or question describing the set of inductive examples. However, due to the difficulty of solving certain tasks in a limited time frame based on the inductive examples alone (such as the questions about disjunctions over scenes), we show Mechanical Turk workers the synthetic descriptions for this domain and ask them to write a semantically similar description that changes more than one word in the original caption, and that would be ``more natural for a human to understand''. This paraphrasing paradigm is similar to that used in \cite{wang2015building}, though we find that in comparison to other domains it generates less diverse language data.
We remove all punctuation, tokenize on spaces, and use an additional domain heuristic to stem all plurals (e.g. \textit{cubes}). \textbf{Initial program primitives:} We initialize all models with a set $\mathcal L_0$ of LISP-like primitives. These are similar to the initial list manipulation primitives used in the \textit{string editing} domain: as both domains can be treated as manipulating structured arrays, we are interested in learning differentiated, domain-specific abstractions based on a very similar base language. $\mathcal L_0$ also includes primitives for querying attributes of objects on the domain (these are typed getters that simply query the object dictionary of attributes) and several domain-specific functions necessary for manipulating these attributes. We deliberately use a much lower-level programming language than the high-level, domain-specific language hand-designed in \cite{johnson2017inferring}; our goal is to \textit{learn} the necessary abstractions. We give a semantic gloss for primitives that are not standard LISP primitives. \begin{itemize} \item \texttt{if (bool $\to$ t $\to$ t $\to$ t)} \item \texttt{cons (object $\to$ list(object) $\to$ list(object))} \item \texttt{car (list(object) $\to$ object)} \item \texttt{map ($(t_0 \to t_1) \to list(t_0) \to list(t_1)$)} \item \texttt{fold ($(list(t) \to list(t)) \to (t \to list(t) \to list(t)) \to list(t)$)} \item \texttt{len (list(t) $\to$ int)} \item \texttt{$>$ (list(t) $\to$ bool)} \item \texttt{$<$ (list(t) $\to$ bool)} \item \texttt{set\_union (list(t) $\to$ list(t) $\to$ list(t))} \item \texttt{set\_intersect (list(t) $\to$ list(t) $\to$ list(t))} \item \texttt{set\_difference (list(t) $\to$ list(t) $\to$ list(t))} \item \texttt{relate (object $\to$ relation $\to$ list(t))} Returns an array of objects that satisfy a spatial relation with respect to an input object. \end{itemize} We also include \textit{equality} comparators for each of the attribute types (e.g.
\texttt{eq\_color?}); \textit{getters} for each attribute, and \textit{setters} for each attribute. We also include integer constants 0-9 for counting and constants for the attributes (\texttt{blue, red, big, small, rubber, metal}) based on the original object and spatial relation constants \cite{johnson2017inferring}.\\ \textbf{Domain hyperparameters:} We run a coarse hyperparameter search based on the baseline model to set the domain hyperparameters. We train all models, including baselines and ablations, using an enumerative search budget of 1000s per task and run the models for a maximum of 5 iterations. We run multiple random seed replications reordering the training set, in the same way as the compositional graphics domain. The results in Table 1 also compare a \textit{curriculum} ordering of the training set based on the number of tokens in the synthetic language captions (split on spaces.) The encoder E(t) is a variant of the RNN-based domain-specific encoder used for text and list editing problems in \cite{ellis2020dreamcoder} (as well as the string editing domain). The model is trained for a fixed gradient step budget (10,000 gradient steps) and we sample equally at random between supervision on the solved training tasks (and their solution programs in the current DSL) and samples from the joint generative model. As with \cite{ellis2020dreamcoder}, when generating tasks from the generative model, we randomly sample inputs (on which we execute generated programs to produce an output). We encode the symbolic scene data structures with the RNN by encoding a flattened version of the scene graph. The scene graph is originally stored as a dictionary of attributes; when flattened, we indicate the dictionary structure using special tokens to denote the keys and the start and end of any array delimiters (the original scene graph is fully reconstructable from the flattened version.)
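The flattening scheme described above can be sketched as follows. This is an illustrative Python sketch (the special token names are assumptions, not the paper's exact vocabulary); the key property is that the flattened token sequence is invertible back to the scene graph:

```python
def flatten_scene(scene):
    """Flatten a list of object attribute dicts into a token sequence,
    marking structure with special tokens so the scene is reconstructable."""
    tokens = ["<scene>"]
    for obj in scene:
        tokens.append("<obj>")
        for key in sorted(obj):
            tokens.extend([f"<{key}>", str(obj[key])])
        tokens.append("</obj>")
    tokens.append("</scene>")
    return tokens

def unflatten_scene(tokens):
    """Invert flatten_scene: rebuild the list of attribute dicts."""
    scene, obj, key = [], None, None
    for tok in tokens:
        if tok == "<obj>":
            obj = {}
        elif tok == "</obj>":
            scene.append(obj)
            obj = None
        elif tok.startswith("<") and obj is not None:
            key = tok[1:-1]  # "<color>" -> "color"
        elif obj is not None:
            obj[key] = tok
    return scene

scene = [
    {"color": "blue", "material": "metal", "shape": "cube", "size": "large"},
    {"color": "red", "material": "rubber", "shape": "sphere", "size": "small"},
]
```

The RNN encoder then consumes the flattened token sequence exactly as it would a natural-language string.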
\subsection*{S 6.2 Results and Additional Qualitative Results} In this section, we discuss additional qualitative results from an in-depth exploration of the graphics domain that were omitted from the main paper for space, but provide additional insight on the behavior of the learned model in the hardest learning domain (based on the differential between baseline and LAPS-augmented performance.) \textbf{Learned abstractions and synthesized programs.} Figure S\ref{graphics-abstractions-extended} (supplement) shows sample abstractions in the final libraries $\mathcal L_f$ for the best performing models in the graphics domain as concrete exemplars of the abstractions that are learned and how they are used, along with sample tasks solved with these abstractions. The figures are shown as dependency graphs to indicate how progressively more complex abstractions \textit{build} on abstractions at prior iterations of learning; we also show selected probabilities from the translation model (depicted are examples from the top-3 primitive translations for a given word; some primitives are not high probability translations for any word.) \textbf{Joint generative model samples.} Figure S\ref{joint-generative} (supplement) shows samples from the joint generative model on the graphics domain (programs from the library which are executed to produce the task example image, and translated to produce language annotations) at early and later stages of training, indicating that the joint model itself improves as learning improves, which itself allows better training for the conditional inference model and better abstraction guiding based on language. \begin{figure*} \centering \includegraphics[width =\textwidth]{figures/generative_bayes_and_samples.pdf} \vspace{-5mm} \caption{(left) Joint generative model J over programs sampled from the DSL prior and natural language produced by the translation model $T(D | \mathcal L)$, inferred from solved training tasks.
Samples from the model are used to train a neural synthesizer to guide search on more challenging, unsolved tasks. (right) Samples from the $J$ generative model in the graphics domain show how program complexity increases and generated language improves across iterations, as the system both adds richer abstractions to the DSL and learns better alignments over the solution set, enabling the trained neural model to solve more complex tasks.} \label{joint-generative} \end{figure*} \begin{figure*} \centering \includegraphics[width =\textwidth]{figures/graphics_abstractions_extended.pdf} \vspace{-3mm} \caption{Abstractions and programs learned for the graphics domain. Sample abstractions (right) learned from a minimal starting DSL (left) for solving progressively more complex graphics program synthesis tasks with language annotations, shown with their translation probabilities. Our iterative algorithm learns alignment-based translation probabilities between natural language words and program primitives to guide program search and abstraction (depicted are examples from the top-3 primitive translations for a given word; some primitives are not high probability translations for any word).} \label{graphics-abstractions-extended} \end{figure*}
\section{Introduction} The point scatterer, namely the Laplacian with a delta potential, on a two-dimensional flat manifold is a popular model in the study of the transition between chaos and integrability in quantum systems. In 1990 Seba \cite{Seba} considered this operator on a rectangle with irrational aspect ratio and Dirichlet boundary conditions and argued that the spectrum and eigenfunctions of the point scatterer display features such as level repulsion and a Gaussian value distribution, both of which are present in quantum systems with chaotic classical dynamics (cf. \cite{CdV2} and \cite{BohigasGiannoniSchmit}), such as the quantization of the geodesic flow on hyperbolic manifolds or the flow in the Sinai billiard. In fact the point scatterer can be understood as a limit of the Sinai billiard where the radius shrinks to zero faster than the semiclassical wavelength. The subject of this paper is a point scatterer on a flat torus. It has two types of eigenfunctions: first there are {\em old} eigenfunctions of the Laplacian, namely those which vanish at the position of the scatterer; the nonzero eigenvalues remain the same, though with multiplicities reduced by $1$. Secondly, there are {\em new} eigenfunctions which diverge logarithmically near the position of the scatterer; the corresponding eigenvalues have multiplicity $1$ and interlace with the old Laplace eigenvalues. We shall only be concerned with the set of new eigenfunctions, i.e., the ones which are affected by the scatterer. In \cite{RU} it was proved that a full density subsequence\footnote{See Section~\ref{PureModes} for a precise definition of a ``full density subsequence''.} of the new eigenfunctions {\em equidistribute in position} space in the special case of a square torus. 
We extend the results of \cite{RU} and prove that a full density subsequence of these eigenfunctions in fact {\em equidistribute in phase space} --- we thus establish an analogue of Shnirelman, Zelditch and Colin de Verdi\`ere's Quantum Ergodicity Theorem in a case where there is no underlying chaotic dynamics and no classical ergodicity. An analogue of this result for a cubic 3D torus was recently obtained by N. Yesha \cite{Yesha2}. The situation for a square torus is very different from irrational tori, where the eigenfunctions are expected to localise in phase space on a finite number of momentum vectors \cite{KeatingMarklofWinn2,BerkolaikoKeatingWinn2}. In the case where the aspect ratio is diophantine this can be proven rigorously for a full density subsequence of new eigenfunctions \cite{KU2}. \subsection{Spectrum of the point scatterer} The formal operator $$-\Delta+\alpha\delta_{x_0}, \quad \alpha\in{\mathbb R}$$ is realized using von Neumann's theory of self-adjoint extensions. We simply state the most important facts in this section in order to formulate the results of this paper. For a more detailed discussion of the self-adjoint realization of the point scatterer we refer the reader to the introduction and appendix of the paper \cite{RU}. Let ${\mathbb T}^2={\mathbb R}^2/2\pi\Z^2$. We consider the restriction of the positive Laplacian $-\Delta$ to the domain $$D_0=C^\infty_c({\mathbb T}^2\setminus\{x_0\})$$ of functions which vanish near the position of the scatterer: $$H=-\Delta|_{D_0}$$ The operator $H$ is symmetric, but fails to be self-adjoint, in fact $H$ has deficiency indices $(1,1)$. 
Self-adjoint extension theory tells us that there exists a one-parameter family of self-adjoint extensions $H_\varphi$, $\varphi\in(-\pi,\pi]$, which are restrictions of the adjoint $H^*$ to the domain of functions $f\in \operatorname{Dom}(H^*)$ which satisfy the asymptotic $$f(x)=C\left(\cos\left(\frac{\varphi}{2}\right)\frac{\log|x-x_0|}{2\pi}+\sin\left(\frac{\varphi}{2}\right)\right)+o(1), \quad x\to x_0$$ for some constant $C\in{\mathbb C}$. The case $\varphi=\pi$ corresponds to $\alpha=0$. In this paper we will study the operators $H_\varphi$, $\varphi\in(-\pi,\pi)$. The spectrum of the operator $H_\varphi$ consists of two parts: ``old'' and ``new'' eigenvalues. Since $H_\varphi$ is a self-adjoint realization of a rank one perturbation of the Laplacian, the effect is that each nonzero old Laplace eigenvalue appears, with multiplicity reduced by $1$, in the spectrum of $H_\varphi$. Further, each old Laplace eigenvalue gives rise to a new eigenvalue with multiplicity $1$. In fact, these new eigenvalues {\em interlace} with the multiplicity one sequence associated with the old Laplace eigenvalues. There are two types of eigenfunctions of $H_\varphi$ associated with the two parts of the spectrum: \begin{itemize} \item[(A)] ``Old'' eigenfunctions which vanish at $x_0$ and therefore are not affected by the scatterer. These are simply eigenfunctions of the unperturbed Laplacian. \item[(B)] ``New'' eigenfunctions which feature a logarithmic singularity at $x_0$; in fact they are given by Green's functions $G_\lambda=(\Delta+\lambda)^{-1}\delta_{x_0}$. \end{itemize} We will study how eigenfunctions of type (B) are distributed in phase space as the eigenvalue tends to infinity.
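Concretely, expanding $\delta_{x_0}$ in the orthonormal Fourier basis of ${\mathbb T}^2={\mathbb R}^2/2\pi\Z^2$, the Green's function has the explicit representation (a standard computation, recorded here for orientation; $\lambda$ is taken away from the Laplace spectrum):

```latex
G_\lambda(x) \;=\; (\Delta+\lambda)^{-1}\delta_{x_0}(x)
\;=\; \frac{1}{4\pi^2}\sum_{\xi\in\Z^2}
\frac{e^{\i\langle \xi,\,x-x_0\rangle}}{\lambda-|\xi|^2},
```

since $(\Delta+\lambda)e^{\i\langle\xi,x\rangle}=(\lambda-|\xi|^2)e^{\i\langle\xi,x\rangle}$ and $\operatorname{vol}({\mathbb T}^2)=4\pi^2$. The logarithmic singularity of the type (B) eigenfunctions at $x_0$ reflects the divergence of $\sum_{0<|\xi|\leq R}|\xi|^{-2}\sim 2\pi\log R$ in two dimensions.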
Denote by $S$ the set of distinct eigenvalues of the Laplacian on ${\mathbb T}^2$, namely integers which can be represented as a sum of two squares: $$S:=\{n \in \Z : n = x^2+y^2, \; x,y\in\Z\}.$$ For a given $n\in S$ denote its multiplicity by $$r_2(n):=\sum_{\substack{n=|\xi|^2 \\ \xi\in\Z^2}}1,$$ i.e. the number of ways $n$ can be written as a sum of two squares. The eigenvalues of type (B) are solutions to the equation \begin{figure}\label{figure} \centering \includegraphics[scale=0.65]{SpectralFunction2} \caption{The picture shows a plot of the l.~h.~s. of equation \eqref{weak coupling quantization} as a function of $\lambda$. The zeroes are the new eigenvalues corresponding to the self-adjoint extension with parameter $\varphi=0$.} \end{figure} \begin{equation}\label{weak coupling quantization} \sum_{n\in S}r_2(n)\left(\frac{1}{n-\lambda}-\frac{n}{n^2+1}\right)=c_0\tan\left(\frac{\varphi}{2}\right) \end{equation} (see Figure~1 for a plot of the l.~h.~s.) where \begin{equation} c_0=\sum_{n\in S}\frac{r_2(n)}{n^2+1}. \end{equation} As mentioned earlier, they interlace with the distinct Laplace eigenvalues $$S=\{0<1<2<4<5<8<\cdots\}$$ as follows \begin{equation} \lambda_{0,\varphi}<0<\lambda_{1,\varphi}<1<\lambda_{2,\varphi}<2<\lambda_{4,\varphi}<4<\lambda_{5,\varphi}<5<\lambda_{8,\varphi}<8<\cdots \end{equation} where the new eigenvalue associated with $n\in S$ is denoted by $\lambda_{n,\varphi}$; note that $\lambda_{n,\varphi} < n$. \subsection{Strong coupling} In the physics literature equation \eqref{weak coupling quantization} is referred to as a ``weak coupling'' quantization. In fact, (cf.
\cite{RU2}) the new eigenvalues $\lambda_{m,\varphi}$ ``clump'' with the Laplace eigenvalues $m\in S$ in the sense that for a full density subsequence of S, $$ 0 <m-\lambda_{m,\varphi} \ll \frac{1}{(\log m)^{1-o(1)}}.$$ In particular the eigenvalue spacing distribution of the point scatterer coincides with that of the Laplacian, and the effect of the scatterer on the spectrum is quite weak in this quantization. (In a sense it corresponds to letting $\alpha \to 0$ as $\lambda \to\infty$.) Shigehara \cite{Shigehara1} and later Bogomolny, Gerland and Schmit \cite{BogomolnyGerlandSchmit}, with the intent of finding a model exhibiting level repulsion, considered another quantization sometimes referred to as a ``strong coupling'' quantization. There are various ways to arrive at this quantization condition from equation \eqref{weak coupling quantization}. For example, one may truncate the summation outside an energy window of size $O(\lambda^\delta)$ where $\delta>0$ is fixed, and the new eigenvalues of the strong coupling quantization are then defined to be the solutions to the equation \begin{equation} \label{strong coupling quantization} \sum_{\substack{n\in S \\ |n-n_+(\lambda)|\leq n_+(\lambda)^\delta}}r_2(n)\left(\frac{1}{n-\lambda}-\frac{n}{n^2+1}\right)=c_0\tan\left(\frac{\varphi}{2}\right), \end{equation} where $n_+(\lambda)$ denotes the smallest element of $S$ which is larger than $\lambda$. (With $\lambda$ denoting such a solution, the corresponding ``new'' eigenfunction is defined as a certain Green's function $G_{\lambda}$, cf. Section~\ref{sec:semicl-meas}.) A summation by parts argument (see for instance Lemma 3.1 in \cite{U2}) shows that $$ \sum_{\substack{n\in S \\ |n-n_+(\lambda)|> n_+(\lambda)^\delta}}r_2(n)\left(\frac{1}{n-\lambda}-\frac{n}{n^2+1}\right) = -\pi \log \lambda + O_{\delta}(1) $$ and hence the truncation given by \eqref{strong coupling quantization} is equivalent to a logarithmic renormalisation of the r.~h.~s. 
of \eqref{weak coupling quantization}, namely, as $\lambda \to \infty$, \begin{equation} \label{renormalization} \sum_{n\in S}r_2(n)\left(\frac{1}{n-\lambda}-\frac{n}{n^2+1}\right)= -\pi(1+o_\delta(1))\log\lambda = c_0\tan\left(\frac{\varphi_\lambda}{2}\right), \end{equation} if we allow $\varphi_{\lambda}$ to depend on $\lambda$ appropriately, and where the $o_\delta(1)$ error term depends on the exponent $\delta$. Since the error term depends on $\delta$ we note that there is no unique choice of strong coupling quantization; the key point is matching the leading order logarithmic term. The renormalization in \eqref{renormalization} can be viewed as letting a boundary condition vary with the energy. Consequently $D_{\varphi_{\lambda}}$, the domain of the operator $H_{\varphi_{\lambda}}$, is varying; this setting is reminiscent of problems in semiclassical analysis where boundary conditions are allowed to depend on the semiclassical parameter $\hbar$. \begin{remark} In the weak coupling quantization the lowest new eigenvalue is always negative, but for the strong coupling quantization the lowest new eigenvalue may be either positive or negative. In the case of a positive lowest new eigenvalue, this eigenvalue would be denoted $\lambda_1$ to keep our notation consistent, in particular ensuring that $\lambda_n<n$ for $n \in S$ always holds. \end{remark} We remark that in the statement of our main result, Theorem \ref{QE}, the sequence $\Lambda=\{\lambda_n\}$ will denote {\em {\bf any} increasing sequence of numbers which interlace with $S$}. In particular, it applies to the eigenvalues of the weak, as well as the strong, coupling quantizations. \subsection{Semiclassical Measures} \label{sec:semicl-meas} Let $a\in C^\infty(S^*{\mathbb T}^2)$. Denote by $\operatorname{Op}(a)$ a zero-order pseudo-differential operator associated with $a$ (see subsection \ref{PseudoDiffCalc} for more details.)
Let $g_\lambda=G_\lambda/\|G_\lambda\|_2$, $\lambda\notin S$, where we recall that $S$ denotes the set of Laplace eigenvalues and that $G_\lambda=(\Delta+\lambda)^{-1}\delta_{x_0}$. We are interested in weak limits of measures $d\mu_\lambda$ defined by the identity \begin{equation}\label{semimeas} \left\langle \operatorname{Op}(a)g_\lambda,g_\lambda\right\rangle=\int_{S^*{\mathbb T}^2} a d\mu_\lambda. \end{equation} \subsection{Main Result} The following theorem holds generally for the $L^2$-normalized Green's functions $g_\lambda$. It states that the measures $d\mu_\lambda$ defined by \eqref{semimeas} converge weakly to Liouville measure as $\lambda \to \infty$ along a full density subsequence of {\em any} increasing sequence $\Lambda$ which interlaces with $S$. (Recall that $S$ denotes the set of unperturbed Laplace eigenvalues, namely the set of integers which can be represented as a sum of two squares.) \begin{thm}\label{QE} Let $\Lambda$ be an increasing sequence which interlaces with $S$. For $m \in S$, denote by $\lambda_m$ the largest element of $\Lambda$ which is smaller than $m\in S$. There exists a full density subsequence $S'\subset S$, that does not depend on $\Lambda$, such that for all $a\in C^\infty(S^*{\mathbb T}^2)$, \begin{equation} \lim_{\substack{m\to\infty \\ m\in S'}}\left\langle \operatorname{Op}(a)g_{\lambda_m},g_{\lambda_m}\right\rangle = \int_{S^*{\mathbb T}^2} a(x,\varphi)\frac{dx\;d\varphi}{\operatorname{vol}(S^*{\mathbb T}^2)}. \end{equation} \end{thm} As already noted, the theorem holds in particular for the new eigenvalues of the weak and strong coupling quantizations of a point scatterer. Hence we have the following corollary of Theorem~\ref{QE}. \begin{cor} Quantum Ergodicity holds for the new eigenfunctions of weakly, as well as strongly, coupled point scatterers on ${\mathbb T}^2$.
\end{cor} \begin{remark} Recall that the new eigenvalues in the strong coupling limit are given by the set of solutions $\{ \lambda_{m}\}_{m}$ to \eqref{renormalization} (or alternatively, solutions to \eqref{strong coupling quantization}), with corresponding new eigenfunctions given by the Green's functions $G_{\lambda_{m}}$. Although these Green's functions are eigenfunctions of different operators $\{ H_{\varphi_{\lambda_{m}}}\}_{m}$ (in fact, the domains of the operators change), it is natural to say that quantum ergodicity holds in the strong coupling limit if a full density subset of the collection of new eigenfunctions equidistribute. \end{remark} We further note that the counting function, or Weyl's law, for the set of new eigenvalues (cf. Theorem~\ref{thm:landau}) satisfies $$|\{ n : \lambda_n \leq x\}| \ll \frac{x}{ \sqrt{\log x}} = o(x), $$ while the counting function for the full set of eigenvalues (new and old, with multiplicity) is the same as for the unperturbed Laplacian, hence $ \gg x$. Consequently, the sequence of new eigenvalues is of density zero within the full spectrum, and the approach of proving Quantum Ergodicity for the set of new eigenfunctions by computing first or second moments of matrix coefficients (e.g., see \cite{Zelditch}) with respect to the full set of eigenfunctions seems unlikely to succeed. \subsection{Acknowledgements} \label{sec:acknowledgements} We would like to thank Zeev Rudnick and Stephane Nonnenmacher for valuable discussions about this problem and for many helpful remarks which have led to the improvement of this paper. The authors are also very grateful to the referee for a careful reading of the paper and for many comments and suggestions that improved the exposition. 
\section{The matrix elements} \subsection{Quantization of phase space observables}\label{PseudoDiffCalc} Consider a classical symbol $a\in C^\infty(S^*{\mathbb T}^2)$, where $S^*{\mathbb T}^2\simeq {\mathbb T}^2\times S^1$ denotes the unit cotangent bundle of ${\mathbb T}^2$. We may expand $a$ in the Fourier series (note that the Fourier coefficients decay rapidly since $a$ is smooth) \begin{equation} a(x,\phi)=\sum_{\zeta\in\Z^2,k\in\Z}\hat{a}(\zeta,k)e^{\i \left\langle \zeta,x \right\rangle+\i k\phi}. \end{equation} We choose a complex realization of the unit cotangent bundle $S^*{\mathbb T}^2$ and parametrise the unit circle $S^1$ at position $x\in{\mathbb T}^2$ by the complex exponential map $\varphi \mapsto e^{\i\varphi}$. We now want to associate with $a$ a pseudodifferential operator $\operatorname{Op}(a):C^\infty({\mathbb T}^2)\to C^\infty({\mathbb T}^2)$. We choose the following symbol (we associate with $\xi=(\xi_1,\xi_2)$ the complex number $\tilde{\xi}:=\xi_1+\i\xi_2$ and note that $e^{\i k\arg\tilde{\xi}}=(\tilde\xi/|\tilde\xi|)^k$) \begin{equation} \sigma_{a}(x,\xi)= \begin{cases} \sum_{\zeta\in\Z^2, k\in\Z} \hat{a}(\zeta,k)\left(\frac{\tilde{\xi}}{|\tilde{\xi}|}\right)^k e^{\i\left\langle \zeta,x \right\rangle}, \quad \xi\neq0\\ \\ \sum_{\zeta\in\Z^2, k\in\Z} \hat{a}(\zeta,k)e^{\i\left\langle \zeta,x \right\rangle}, \quad \xi=0. \end{cases} \end{equation} Claim: The symbol $\sigma_{a}$, as defined above, belongs to the class of toroidal symbols $S^{0}_{1,0}({\mathbb T}^2\times\Z^2)$ as defined in \cite{RT}, Part II, Section 4.1.2, Defn. 4.1.7, p. 344. To see this, define the difference operators $$\Delta_{\xi_j} f(\xi)=f(\xi+e_j)-f(\xi),$$ where $e_1=(1,0)$, $e_2=(0,1)$. 
By the mean value theorem for repeated differences, for $f : {\mathbb R}^2 \to {\mathbb R}$ a smooth function, $$ \Delta_{\xi_1}^{\beta_{1}} \Delta_{\xi_2}^{\beta_{2}} f(\xi)= \partial_{\xi_1}^{\beta_{1}} \partial_{\xi_2}^{\beta_{2}} f( \xi) \Big|_{\xi=\xi'}, $$ for some $\xi'=\xi+(\beta_{1}',\beta_{2}')$ with $(\beta_{1}',\beta_{2}') \in [0,\beta_1] \times [0,\beta_{2}] $. With $f_{k}(\xi)$ denoting the real, or imaginary, part of $( \tilde{\xi}/|\tilde{\xi}|)^k$, a quick calculation then gives that for integers $\alpha_1,\alpha_{2},\beta_1,\beta_{2} \geq 0$, $$ \left| \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2} \Delta_{\xi_1}^{\beta_1}\Delta_{\xi_2}^{\beta_2} f_{k}(\xi) e^{\i\left\langle \zeta,x \right\rangle} \right| \ll_{\alpha_1,\alpha_{2},\beta_1,\beta_{2}} k^{\beta_1+\beta_2} |\zeta_1|^{\alpha_{1}} |\zeta_2|^{\alpha_{2}} (1+|\xi|)^{-\beta_1-\beta_2} $$ This bound, together with the rapid decay of Fourier coefficients of $a$, implies that $$|\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2} \Delta_{\xi_1}^{\beta_1}\Delta_{\xi_2}^{\beta_2}\sigma_{a}(x,\xi)| \leq C_{a,\alpha_1,\alpha_2,\beta_1,\beta_2} (1+|\xi|)^{-\beta_1-\beta_2}, $$ thus confirming the claim. The action of the pseudodifferential operator $\operatorname{Op}(a)$ is then defined by multiplication on the Fourier side, analogously to Defn. 4.1.9 in \cite{RT}. 
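As a concrete illustration of this Fourier-side action (a sketch of our own, not part of the paper's argument): for a single harmonic $e_{\zeta,k}$, the operator shifts each Fourier mode of $f$ by $\zeta$ and multiplies it by a unimodular phase, with phase $1$ for the zero mode, so it acts isometrically on $\ell^2$ Fourier data. All function names below are ours.

```python
import math

def op_harmonic(coeffs, zeta, k):
    """Apply Op(e_{zeta,k}) to f, given as a dict {frequency: Fourier coefficient}.

    On the Fourier side, the coefficient at eta is sent to frequency
    eta + zeta and multiplied by the unimodular phase (eta~/|eta~|)^k,
    where eta~ = eta_1 + i eta_2 (phase 1 for the zero mode eta = 0).
    """
    out = {}
    for (e1, e2), c in coeffs.items():
        if (e1, e2) == (0, 0):
            phase = 1.0
        else:
            z = complex(e1, e2)          # eta~ = eta_1 + i eta_2
            phase = (z / abs(z)) ** k    # unimodular factor
        out[(e1 + zeta[0], e2 + zeta[1])] = phase * c
    return out

def l2_norm(coeffs):
    return math.sqrt(sum(abs(c) ** 2 for c in coeffs.values()))

# The map permutes frequencies and multiplies by unimodular numbers,
# so it preserves the l^2 norm of the Fourier data.
f = {(0, 0): 1.0, (1, 2): 0.5 - 0.25j, (-3, 1): 2.0j, (4, 0): -1.5}
g = op_harmonic(f, zeta=(2, -1), k=4)
assert abs(l2_norm(f) - l2_norm(g)) < 1e-12
```

The norm-preservation check reflects the fact, used later in the proof of the main theorem, that $\operatorname{Op}(e_{\zeta,k})$ acts unitarily.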
Therefore, we have \begin{equation} \begin{split} (\operatorname{Op}(a)f)(x)=&\sum_{\xi\in\Z^2}\sigma_{a}(x,\xi)\hat{f}(\xi)e^{\i\left\langle\xi,x\right\rangle}\\ =&\sum_{\xi\in\Z^2\setminus\{0\}}\sum_{\zeta\in\Z^2, k\in\Z} \hat{a}(\zeta,k)\left(\frac{\tilde{\xi}}{|\tilde{\xi}|}\right)^k \hat{f}(\xi) e^{\i\left\langle\xi+\zeta,x\right\rangle}\\ &+\sum_{\zeta\in\Z^2, k\in\Z} \hat{a}(\zeta,k)\hat{f}(0)e^{\i\left\langle \zeta,x \right\rangle}. \end{split} \end{equation} We can now read off the action of $\operatorname{Op}(a)$ on the Fourier coefficients: for $\xi\in\Z^2$, \begin{equation}\label{pseudo} \widehat{(\operatorname{Op}(a)f)}(\xi)=\sum_{\substack{\zeta\in\Z^2,\,\zeta\neq\xi\\ k\in\Z}}\hat{a}(\zeta,k) \left(\frac{\tilde{\xi}-\tilde{\zeta}}{|\tilde{\xi}-\tilde{\zeta}|}\right)^k \hat{f}(\xi-\zeta) + \sum_{k\in\Z}\hat{a}(\xi,k)\hat{f}(0) \end{equation} (recall that $\tilde{\xi}:=\xi_1+\i\xi_2$ and that the Fourier coefficients $\hat{a}(\zeta,k)$ decay rapidly); the second sum collects the contribution of the term $\zeta=\xi$, which comes from the zero mode $\hat{f}(0)$. In terms of the Fourier coefficients the matrix elements of $\operatorname{Op}(a)$ can be written as \begin{equation} \left\langle \operatorname{Op}(a)f,f \right\rangle=\sum_{\xi\in\Z^2} \widehat{(\operatorname{Op}(a)f)}(\xi)\overline{\hat{f}(\xi)}. \end{equation} In particular, for the observable $e_{\zeta,k}(x,\phi)=e^{\i\left\langle \zeta,x \right\rangle+\i k\phi}$, we have \begin{equation} \left\langle \operatorname{Op}(e_{\zeta,k})f,f \right\rangle=\sum_{\xi\in\Z^2\setminus\{\zeta\}} \left(\frac{\tilde{\xi}-\tilde{\zeta}}{|\tilde{\xi}-\tilde{\zeta}|}\right)^k\overline{\hat{f}(\xi)}\hat{f}(\xi-\zeta)+\overline{\hat{f}(\zeta)}\hat{f}(0). \end{equation} \subsection{Mixed modes} If $\zeta\neq0$, we have the bound \begin{equation}\label{shift bound} |\left\langle \operatorname{Op}(e_{\zeta,k})f,f \right\rangle|\leq\sum_{\xi\in\Z^2} |\hat{f}(\xi)||\hat{f}(\xi-\zeta)|.
\end{equation} In the case $f=g_\lambda=G_\lambda/\|G_\lambda\|_2$ we have the $L^2$-expansion $$G_\lambda(x,x_0)=\frac{1}{4\pi^2}\sum_{\xi\in\Z^2} c(\xi)e^{\i\left\langle x,\xi \right\rangle}$$ where $c(\xi)=\frac{1}{|\xi|^2-\lambda}$. We obtain \begin{equation} |\left\langle \operatorname{Op}(e_{\zeta,k})g_\lambda,g_\lambda \right\rangle|\leq \frac{\sum_{\xi\in\Z^2} |c(\xi)||c(\xi-\zeta)|}{\sum_{\xi\in \Z^2} |c(\xi)|^2}. \end{equation} In \cite{RU} it was proved that there exists a full density subsequence $S'\subset S$ such that for any nonzero lattice vector $\zeta\in\Z^2$ the matrix elements of $\operatorname{Op}(e_{\zeta,k})$ vanish as $n\to\infty$ along $S'$. The following result was obtained. \begin{thm}\label{mixed}{\bf (Rudnick-U., 2012)} Let $\Lambda$ be an increasing sequence which interlaces with $S$. Denote by $\lambda_n$ the largest element of $\Lambda$ which is smaller than $n\in S$. There exists a subsequence $S'\subset S$ of full density such that for any $\zeta\in\Z^2$, $\zeta\neq0$, and $k\in\Z$, \begin{equation} \lim_{\substack{n\to\infty \\ n\in S'}}\left\langle \operatorname{Op}(e_{\zeta,k})g_{\lambda_n},g_{\lambda_n}\right\rangle=0. \end{equation} \end{thm} \begin{remark} The above result is only stated for the weak coupling quantization in \cite{RU}. However, the proof in fact works for any interlacing sequence, in particular for the strong coupling quantization. To see this, we briefly recall the key steps of the proof in \cite{RU}.
The first step is to show that the Green's functions $G_\lambda$ can be approximated by truncated Green's functions $G_{\lambda,L}$, where $L=\lambda^\delta$ for a specific choice of $\delta>0$ and the truncation drops all lattice vectors $\xi$ outside an annulus $A(\lambda,L)=\{\xi\in\Z^2 \mid ||\xi|^2-\lambda|\leq L\}.$ The subsequence $S'\subset S$ is chosen in such a way as to ensure that the lattice points inside the annulus $A(\lambda,L)$ are sufficiently well-spaced; this then implies that $c(\xi-\zeta) \ll 1/L$ for $\zeta \in \Z^2$ fixed and $\xi \in A(\lambda,L)$. A second condition requires that the neighboring Laplace eigenvalues are not too far apart in order for the lower bound $\fnorm{G_{\lambda}}_{2} \gg 1/\lambda^{o(1)}$ to hold. These two key properties only depend on the arithmetic properties of the neighboring Laplace eigenvalues, and not on the location of the new eigenvalue itself. \end{remark} \subsection{Pure momentum modes} Let us consider the case $\zeta=0$. We rewrite the matrix elements as (cf. eq. \eqref{pseudo}) \begin{equation}\label{pure momentum matrix element} \begin{split} \left\langle \operatorname{Op}(e_{0,k})g_\lambda,g_\lambda\right\rangle=&\frac{\sum_{\xi\in\Z^2\setminus\{0\}} (\tilde{\xi}/|\tilde{\xi}|)^k|c(\xi)|^2+|c(0)|^2}{\sum_{\xi\in\Z^2}|c(\xi)|^2}\\ =&\frac{\frac{1}{\lambda^2}+\sum_{n\in S\setminus\{0\}}\frac{w_k(n)}{(n-\lambda)^2}}{\frac{1}{\lambda^2}+\sum_{n\in S\setminus\{0\}}\frac{r_2(n)}{(n-\lambda)^2}} \end{split} \end{equation} where $w_{k}(n)$, for $n\in S$, is a certain exponential sum defined as follows: with $$ \Lambda_{n} := \{ z = x+iy \in \Z[i] : |z|^{2} = n \}, $$ denoting the set of Gaussian integers of norm $n$ (we can interpret these as lattice points lying on a circle of radius $\sqrt{n}$), we define \begin{equation} w_k(n):= \sum_{z \in \Lambda_n } \left(\frac{z}{|z|}\right)^k.
\end{equation} \section{Pure momentum observables on the square torus}\label{PureModes} \label{sec:pure-moment-observ} We begin by introducing some convenient notation. Given a set $S \subset \Z$, let $S(x) := S \cap [1,x]$. We say that a subset $S_{1} \subset S$ is of {\em full density} if $|S_{1}(x)| = (1+o(1))\cdot |S(x)|$ as $x \to \infty$. In what follows, $S$ will always denote the set of integers that can be represented as sums of two integer squares. For $k \neq 0$, we can now construct a full density subsequence $S'_k\subset S$ such that $\left\langle \operatorname{Op}(e_{0,k})g_{\lambda_n},g_{\lambda_n}\right\rangle\to0$ as $\lambda_n\to\infty$ along $n\in S'_k$. (Recall that $\lambda_n$ denotes the perturbed eigenvalue associated with the Laplace eigenvalue $n\in S$.) \begin{prop}\label{momentumbound} For a given integer $k\neq0$, there exists a subsequence $S'_k\subset S$, of full density, such that for $n\in S'_k$ \begin{equation} |\left\langle \operatorname{Op}(e_{0,k})g_{\lambda_n},g_{\lambda_n}\right\rangle| \ll (\log\lambda_n)^{1/4-\log 2/2+o(1)}. \end{equation} \end{prop} \noindent We note that $1/4-\log 2/2 = -0.09657\cdots < 0$. \subsection{Preliminary Results} \label{sec:preliminary-results} Before we can give the proof of Proposition \ref{momentumbound} we state a number of necessary results whose proofs can be found in the number theory literature, or in Section~\ref{sec:numb-theor-backgr}. We first recall Rieger's bound on pair correlation type sums for integers that are sums of two squares. \begin{thm}[\cite{rieger-sums-of-square-twins}, Satz~2] \label{thm:rieger-satz-2} \label{thm:pair-corr-bound} Let $f(n)$ denote the characteristic function of $S$, the set of integers representable as sums of two integer squares. If $0 < |h| \ll x$ then $$ \sum_{n \leq x} f(n) f(n+h) \ll \frac{c(h)x }{\log x} $$ where $ c(h) := \prod_{\substack{ p|h \\ p \equiv 3 \mod 4} } (1 + 1/p).
$ \end{thm} \begin{remark} Rieger's result is stated for $h>0$ and summing over $n \leq x+h$, but since $f(n) = 0$ for $n < 0$ and we assume $|h|\ll x$, the above formulation follows immediately (albeit possibly with a worse absolute constant.) Moreover, since $c(h) \leq \sum_{d|h} 1/d$, it easily follows that \begin{equation} \label{eq:ck-bounded-on-average} \sum_{0<|h| \leq T} c(h) \ll T, \quad \text{ $T \to \infty$}. \end{equation} \end{remark} We shall also need to recall a fundamental fact about the size of $S(x)$. \begin{thm}[Landau, see \cite{Landau2}, \S183.] \label{thm:landau} There exists $c>0$ such that \begin{equation} |S(x)| = \frac{c \cdot x}{\sqrt{\log x}} (1+O( 1/\log x )) \end{equation} as $x \to \infty$. \end{thm} Given $n \in S$, let $\omega_1(n)$ denote the number of prime divisors of $n$ that are congruent to one modulo four, i.e., $$ \omega_1(n) := \sum_{p |n, p \equiv 1 \mod 4} 1 $$ We shall use Erd\"os-Kac type techniques to prove (see Section~\ref{Erdos Kac}) the following structure result about the factorizations of ``typical'' integers in the set $S$. \begin{prop} \label{prop:erdos-kac-moment-estimates} We have \begin{equation} \frac{1}{|S(x)|} \sum_{n \in S(x)} \omega_{1}(n) = \frac{1}{2} \log \log x + O(\log \log \log x) \end{equation} and \begin{equation} \frac{1}{|S(x)|} \sum_{n \in S(x)} \omega_{1}(n)^{2} = \frac{1}{4} (\log \log x)^{2} + O( (\log \log x) \cdot \log \log \log x ) \end{equation} \end{prop} This, together with Chebychev's inequality, immediately gives the following normal order result on $\omega_{1}(n)$. \begin{cor} \label{cor:r2-log-normal-order} Fix $\epsilon>0$. Then, as $x \to \infty$, \begin{equation} | \{n \in S(x) : |\omega_{1}(n) - \frac{1}{2} \log \log n| < (\log \log n )^{1/2+\epsilon} \}|= |S(x)| \cdot (1+o_{\epsilon}(1)). \end{equation} \end{cor} From the corollary, we deduce (see Section~\ref{Erdos Kac} for details) the following weak analog of a normal order result for $r_{2}(n)$. 
\begin{cor} \label{cor:r2-normal-order} As $x \to \infty$, \begin{equation} | \{n \in S(x) : r_{2}(n) = (\log n)^{(\log 2)/2 \pm o(1)} \}| = |S(x)| \cdot (1+o(1)). \end{equation} \end{cor} We shall also need the following $L^{2}$-bound on the exponential sums $w_{k}(n)$. \begin{prop} \label{prop:w-k-l2-bound} If $k \neq 0$, then \begin{equation} \label{eq:1} \sum_{n \in{ S(x)}} |w_k(n)|^{2} \ll_{k} x \end{equation} In particular, by Chebychev's inequality, the number of $n \in S(x)$ for which $|w_{k}(n)|>T$ is $\ll_{k} x/T^{2}$, and we find that $|w_{k}(n)| \leq (\log n)^{1/4+\epsilon}$ holds for almost all $n \in S(x)$. \end{prop} The result readily follows from a Halberstam-Richert type inequality; see Section~\ref{Mean Values} for more details. \subsection{Proof of Proposition \ref{momentumbound}} We begin by noting that since the set $\Lambda_{n}$ is invariant under multiplication by $i$, $w_{k}(n) = 0$ unless $4|k$. Hence the case $k \not \equiv 0 \mod 4$ is essentially trivial on recalling \eqref{pure momentum matrix element}. Thus, in what follows we will always assume that $4|k$, and $k \neq 0$. We next introduce some further notation. Given $m \in S$, let $m_{+}, m_{-} \in S$ denote the nearest neighbor (in $S$) to the right, respectively left, and similarly, let $m_{++},m_{--}$ denote the second nearest neighbors to the right, respectively left. Define $S_{1} \subset S$ by successively removing a zero density subset of elements for which the following properties do not hold. Namely, let $S_{1}$ consist of those $m \in S$ for which the following properties, as $m \to \infty$, all hold: \begin{enumerate} \item \label{item:mult-near-mean} Multiplicities are near their (logarithmic) normal order in the following sense: $$r_{2}(m) = (\log m)^{(\log 2)/2 \pm o(1)}, \quad r_{2}(m_{-}) = (\log m)^{(\log 2)/2 \pm o(1)}.
$$ \item \label{item:almost-square-root-cancellation} There is nearly square root cancellation in exponential sums: $$ |w_{k}(m)| \leq (\log m)^{1/4+o(1)}, \quad |w_{k}(m_{-})| \leq (\log m)^{1/4+o(1)}. $$ \item \label{item:no-near-nbrs} There are no near neighbors: $m_{+}-m \geq (\log m)^{1/2-o(1)}$, and $m-m_{-} \geq (\log m)^{1/2-o(1)}$. \item \label{item:no-near-second-nbrs} There are no near second neighbors: $m_{++}-m_{+} \geq (\log m)^{1/2-o(1)}$, and $m_{-}-m_{--} \geq (\log m)^{1/2-o(1)}$. \item \label{item:no-far-nbrs} Neighbors are not too far away: $m_{+}-m \leq (\log m)^{1/2+o(1)}$, $m-m_{-} \leq (\log m)^{1/2+o(1)}$, and $m_{-}-m_{--} \leq (\log m)^{1/2+o(1)}$. \item \label{item:not-many-close-nbrs} There are not too many ``close'' neighbors in the following sense: for $T \ll m$, $$ |\{ n \in S: |n-m| \leq T \}| \ll \frac{T (\log T)^{2}}{(\log m)^{1/2-o(1)}} $$ \item \label{item:remove-w-bad} For $W \in [(\log m)^{1/4} \cdot (\log \log m)^{2}, (\log m)^{2}]$ and $n \in S$ with $|w_{k}(n)| \geq W$, there are $$ \gg W^{2}/( (\log m)^{1/2} (\log \log m) (\log W)^{2}) $$ elements of $S$ lying between $m$ and $n$. \item \label{item:remove-w-really-bad} For $W \geq (\log m)^{2}$ and $n \in S$ with $|w_{k}(n)| \geq W$, there are $ \gg W^{3/2}/\log W $ elements of $S$ lying between $m$ and $n$. \item \label{item:crucial-sum-bound} For $\epsilon>0$ and $G \in [2, m^{1-\epsilon}]$, $$ H_{G}(m) := \sum_{\substack{n \in S, n \neq m\\ |m-n| \geq G} } \frac{1}{|m-n|^{2}} \ll_{\epsilon} \frac{(\log G)^{2}}{G (\log m)^{1/2-o(1)}}. $$ \end{enumerate} \begin{remark} We tacitly assume that $o(1)$ is chosen so that $(\log m)^{o(1)} \to \infty$ as $m \in S$ tends to infinity; we will also use the convention that the sign of $o(1)$ is important, in particular $(\log m)^{-o(1)} \to 0$. \end{remark} We defer the proof that $S_{1}$ has full density inside $S$ to Section~\ref{sec:proof-that-s_1}.
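The properties above are phrased in terms of the multiplicities $r_{2}(n)$ and the exponential sums $w_{k}(n)$, both of which are easy to experiment with numerically. The following sketch (ours, purely illustrative and not part of the argument) computes them by enumerating the lattice points in $\Lambda_{n}$, and checks the vanishing $w_{k}(n)=0$ for $4\nmid k$ noted above, as well as the trivial bound $|w_{k}(n)|\leq r_{2}(n)$:

```python
import math

def lattice_points(n):
    """The set Lambda_n: Gaussian integers z = x + iy with |z|^2 = n."""
    pts = []
    r = math.isqrt(n)
    for x in range(-r, r + 1):
        y2 = n - x * x
        y = math.isqrt(y2)
        if y * y == y2:
            pts.append(complex(x, y))
            if y != 0:
                pts.append(complex(x, -y))
    return pts

def r2(n):
    """r_2(n): the number of lattice points on the circle |z|^2 = n."""
    return len(lattice_points(n))

def w(n, k):
    """The exponential sum w_k(n) = sum_{z in Lambda_n} (z/|z|)^k."""
    return sum((z / abs(z)) ** k for z in lattice_points(n))

assert r2(25) == 12                   # 25 = 5^2 + 0^2 = 3^2 + 4^2, with signs/swaps
assert abs(w(25, 0) - r2(25)) < 1e-9  # w_0(n) = r_2(n)
assert abs(w(25, 2)) < 1e-9           # w_k(n) = 0 unless 4 | k
assert abs(w(25, 4)) <= r2(25)        # trivial bound |w_k(n)| <= r_2(n)
```

The second assertion illustrates the invariance of $\Lambda_{n}$ under multiplication by $i$: summing $(z/|z|)^{k}$ over an orbit $\{z,iz,-z,-iz\}$ gives zero unless $4|k$.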
\begin{remark} \label{rem:same-size} Here, and in what follows, we will use without comment the fact that $\log m \sim \log m_{-} \sim \log m_{+} \sim \log m_{++}$ etc. To see this, any crude bound on $|m_{+}-m|$, $|m_{++}-m|$, etc.\ suffices, e.g.\ the trivial bound $|m_{+}-m| \ll m^{1/2}$, which follows from bounding the distance to the nearest square. Moreover, we also use the fact that for almost all $m \in S(x)$, e.g. $m \in [x/\log x,x]$, we have $\log m \sim \log x$. \end{remark} The following will be used to show that the numerator in \eqref{pure momentum matrix element} is essentially given by two terms. \begin{lem} \label{lem:claim-lemma} If $m \in S_{1}$, then \begin{equation} \sum_{n \in S, n \neq m, m_{-}} \frac{|w_{k}(n)|}{|m-n|^{2}} \ll \frac{1}{(\log m)^{3/4-o(1)}} \end{equation} \end{lem} \begin{proof} Fix $m \in S_{1}(x)$. To simplify the notation, let $L = \log m$. To bound the sum $$ \sum_{n \in S, n \neq m, m_{-}} \frac{|w_{k}(n)|}{|m-n|^{2}} $$ we split it into parts according to the size of $|w_{k}(n)|$. {\em Small $|w_{k}(n)|$}: $|w_{k}(n)| \leq L^{1/4} (\log L)^{2}$. Since $m \in S_{1}$, its nearest neighbors, by property~\ref{item:no-near-nbrs}, are of distance at least $ L^{1/2-o(1)}$ away from $m$. Thus, the contribution from $n$ for which $|w_{k}(n)| \leq L^{1/4} (\log L)^{2}$ is, by property \ref{item:crucial-sum-bound}, $$ \ll \sum_{\substack{n \in S \\|n-m| \geq L^{1/2-o(1)}}} \frac{ L^{1/4} (\log L)^{2}}{|m-n|^{2}} = L^{1/4} \cdot (\log L)^{2} \cdot H_{L^{1/2-o(1)}}(m) $$ $$ \ll \frac{L^{1/4} (\log L)^{4}}{L^{1/2-o(1)} L^{1/2-o(1)}} = \frac{1}{L^{3/4 -o(1)}}. $$ {\em Medium $|w_{k}(n)|$}: $|w_{k}(n)| \in [L^{1/4} (\log L)^{2},L^{2}]$. For terms in the sum for which $n \geq 2m$, we use the crude bound $|w_{k}(n)| \leq r_{2}(n) \ll \sqrt{n}$ and find that the total contribution is $\ll \sum_{n \geq m} n^{-3/2} \ll m^{-1/2}$, and hence it is enough to consider terms for which $n < 2m$.
Let $W_{i} = 2^{i} L^{1/4} (\log L)^{2}$ for integer $i \geq 0$ such that $2^{i} L^{1/4} (\log L)^{2} \leq L^{2}$ and consider $n$ such that $|w_{k}(n)| \in [W_{i}, W_{i+1}]$. By property~\ref{item:remove-w-bad}, the number of elements in $S$ between $n$ and $m$ is $\gg W_{i}^{2}/(L^{1/2+o(1)} (\log W_{i})^{2})$. Thus, using the bound on the number of close neighbors, i.e., take $T = |n-m|$ in property~\ref{item:not-many-close-nbrs} (note that $T \ll m$ when $n < 2m$), we must have $\frac{T (\log T)^{2}}{(\log m)^{1/2-o(1)}} \gg W_{i}^{2}/(L^{1/2+o(1)} (\log W_{i})^{2})$, which implies that $|n-m|\gg W_{i}^{2-o(1)}$. Thus, by property~\ref{item:crucial-sum-bound} (take $G = W_{i}^{2-o(1)}$), \begin{multline*} \sum_{\substack{n \in S \\ |w_{k}(n)| \in [W_{i},W_{i+1}]}} \frac{|w_{k}(n)|}{|n-m|^{2}} \ll \sum_{n \in S : |m-n| \gg W_{i}^{2-o(1)}} \frac{W_{i}}{|n-m|^{2}} \ll \frac{W_{i} (\log W_{i})^{2}}{L^{1/2-o(1)}W_{i}^{2-o(1)}} \\= \frac{1}{L^{1/2-o(1)} \cdot W_{i}^{1-o(1)}} = \frac{1}{L^{1/2-o(1)}\cdot (2^{i}L^{1/4}(\log L)^{2})^{1-o(1)}} \ll \frac{1}{L^{3/4-o(1)} (3/2)^{i}}. \end{multline*} Summing over relevant $i \geq 0$, we find that the total contribution is $ \ll \frac{1}{L^{3/4-o(1)}}. $ {\em Large $|w_{k}(n)|$}: $|w_k(n)| \geq L^{2}$. Let $W_{i} = 2^{i} L^{2}$ and consider $n$ such that $|w_{k}(n)| \in [W_{i}, 2W_{i}]$. By property \ref{item:remove-w-really-bad}, we must then have $|n-m| \gg W_{i}^{3/2}/\log W_{i}$, and hence the contribution is (using the bound $\sum_{j \geq A} 1/j^{2} \ll 1/A$) \begin{equation*} \begin{split} \ll \sum_{\substack{n \in S \\ |n-m|\gg W_{i}^{3/2}/\log W_{i}}} \frac{W_{i}}{|n-m|^{2}} \ll \frac{W_{i} \cdot \log W_{i}}{W_{i}^{3/2}} \ll \frac{1}{W_{i}^{1/2-o(1)}} = \frac{1}{(2^{i}L^{2})^{1/2-o(1)}}. \end{split} \end{equation*} Summing over $i \geq 0$, the total contribution is $$ \ll \frac{1}{L^{1-o(1)}} \sum_{i \geq 0} 2^{-(1/2-o(1))i} \ll \frac{1}{L^{1-o(1)}}.
$$ \end{proof} \begin{proof}[Proof of Proposition~\ref{momentumbound} using Lemma~\ref{lem:claim-lemma}] Recalling that $m_{-} < \lambda_{m} < m$, we note that $|\lambda_{m}-n| \geq |m-n|$ if $n>m$. Moreover, for $n \leq m_{--}$ the minimum of $|\lambda_{m}-n|/|m-n| = 1 - \frac{m-\lambda_{m}}{m-n}$ (as $n \leq m_{--}$ ranges over elements in $S$) is attained for $n=m_{--}$, and consequently $$ |\lambda_{m}-n|/|m-n| \geq |\lambda_{m}-m_{--}|/|m-m_{--}| \geq |m_{-}-m_{--}|/|m-m_{--}| $$ which, by properties \ref{item:no-near-second-nbrs} and \ref{item:no-far-nbrs}, is $\gg (\log m)^{1/2-o(1)}/(\log m)^{1/2+o(1)} = 1/(\log m)^{o(1)}$. Hence $|\lambda_{m}-n| \gg (\log m)^{-o(1)} |m-n|$ holds for $n \neq m, m_{-}$, and thus $$ \sum_{n \in S, n \neq m, m_{-}} \frac{|w_{k}(n)|}{|\lambda_{m}-n|^{2}} \leq (\log m)^{o(1)} \cdot \sum_{n \in S, n \neq m, m_{-}} \frac{|w_{k}(n)|}{|m-n|^{2}}. $$ Let $M = \min( |\lambda_{m}-m|^{2}, |\lambda_{m}-m_{-}|^{2})$. Trivially $\lambda_m \gg m^{1/2}$, and by property~\ref{item:mult-near-mean}, Lemma~\ref{lem:claim-lemma} implies that \begin{multline*} \frac{1/\lambda_m^{2} + \sum_{n \in S} \frac{|w_{k}(n)|}{|\lambda_{m}-n|^{2}} } {1/\lambda_m^{2} + \sum_{n \in S} \frac{r_{2}(n)}{|\lambda_{m}-n|^{2}} } \ll \frac{ O(1/m)+ (|w_{k}(m)|+|w_{k}(m_{-})|)/M + \frac{1}{(\log m)^{3/4-o(1)}} } {\frac{O(1/m)+ (\log m)^{(\log 2)/2-o(1)}}{M}} \end{multline*} which, by property~\ref{item:almost-square-root-cancellation}, is \begin{equation} \label{eq:cant-name-this} \ll \frac{(\log m)^{1/4+o(1)}+\frac{M }{(\log m)^{3/4-o(1)}}} {(\log m)^{(\log 2)/2-o(1)}}.
\end{equation} Recalling that $\lambda_{m}\in [m_{-},m]$, property~\ref{item:no-far-nbrs} implies that $M \ll (\log m)^{1+o(1)}$ and we thus find that (\ref{eq:cant-name-this}) is $$ \ll \frac{(\log m)^{1/4+o(1)}+\frac{(\log m)^{1+o(1)}}{(\log m)^{3/4-o(1)}}} {(\log m)^{(\log 2)/2-o(1)}} = \frac{1}{(\log m)^{(\log 2)/2-1/4-o(1)}} = o(1) $$ as $m \to \infty$, since $(\log 2)/2-1/4 = 0.09657\cdots$. Recalling that $\log m \gg \log \lambda_{m}$ (cf. Remark~\ref{rem:same-size}) the proof is concluded. \end{proof} \subsection{Proof that $S_1$ has full density} \label{sec:proof-that-s_1} \subsubsection{Property (\ref{item:mult-near-mean})} That $r_{2}(m) = (\log m)^{(\log 2)/2\pm o(1)}$ holds for almost all $m \in S$ follows from Corollary~\ref{cor:r2-normal-order}. To ensure that $r_{2}(m_{-}) = (\log m)^{(\log 2)/2\pm o(1)}$ also holds, we remove the right neighbor of those $m$ for which $r_{2}(m) = (\log m)^{(\log 2)/2\pm o(1)}$ is not true; this removes another zero density set. (By Remark~\ref{rem:same-size}, $\log m_{+} = (1+o(1)) \log m$.) \subsubsection{Property (\ref{item:almost-square-root-cancellation})} By Proposition~\ref{prop:w-k-l2-bound} $$ \sum_{n \in S(x) } |w_{k}(n)|^{2} \ll x $$ and Chebychev's inequality, together with $|S(x)| \sim cx/\sqrt{\log x}$, then gives that $|w_k(m)| \leq (\log m)^{1/4+o(1)}$ holds for almost all $m \in S(x)$. Removing right neighbors, as in the proof of Property~(\ref{item:mult-near-mean}), the same holds for $|w_{k}(m_{-})|$. \subsubsection{Property (\ref{item:no-near-nbrs})} Let $f$ denote the characteristic function of $S$. By Theorem~\ref{thm:pair-corr-bound}, $$ \sum_{m \leq x} \sum_{h : 0 <|h| \leq (\log m)^{1/2-o(1)} } f(m) f(m+h) \ll \frac{x}{\log x} \sum_{h : 0< |h| \leq (\log x)^{1/2-o(1)} } c(h) $$ $$ \ll \frac{x }{\log x} \cdot (\log x)^{1/2-o(1)}. $$ Thus, by Chebychev's inequality, $$ \sum_{h : 0<|h| \leq (\log m)^{1/2-o(1)} } f(m) f(m+h) < 1 $$ holds for almost all $m$ in $S(x)$.
Consequently, almost all $m \in S$ have no nearby neighbors. \subsubsection{Property (\ref{item:no-near-second-nbrs})} We use the same proof as the one used for showing that Property (\ref{item:no-near-nbrs}) holds. \subsubsection{Property (\ref{item:no-far-nbrs})} Let $n_{1} < n_{2} < \ldots < n_{I} \leq x$ denote ordered representatives of the elements in $S(x)$, and let $s_{i} = n_{i+1}-n_{i}$. Since $\sum_{i < I} s_{i} \leq x$, Chebychev's inequality implies that $s_{i} \leq (\log n_{i})^{1/2+o(1)}$ holds for almost all $n_{i} \in S$; consequently $n_{+}-n \leq (\log n)^{1/2+o(1)}$ for almost all $n \in S$. A similar argument shows that $n_{i}-n_{i-2} \leq (\log n_{i})^{1/2+o(1)}$ also holds for almost all $n_{i}$. Hence both $n-n_{-} \leq (\log n)^{1/2+o(1)}$ and $n_{-}-n_{--} \leq (\log n)^{1/2+o(1)}$ hold for almost all $n \in S$. \subsubsection{Property (\ref{item:not-many-close-nbrs})} The argument is similar to the one used to prove property \ref{item:no-near-nbrs}: again let $f$ denote the characteristic function of $S$. Then, as $T \ll m \leq x$, Theorem~\ref{thm:pair-corr-bound} gives that $$ \sum_{m \leq x} \sum_{h : 0 <|h| \leq T} f(m) f(m+h) \ll \frac{x}{\log x} \sum_{h : 0< |h| \leq T } c(h) \ll \frac{xT}{\log x} $$ and Chebychev's inequality implies that $$ \sum_{h : 0<|h| \leq T } f(m) f(m+h) \geq \frac{T (\log T)^{2}}{(\log x)^{1/2-o(1)}} $$ holds for at most $\frac{x}{(\log x)^{1/2+o(1)} (\log T)^{2}}$ exceptional elements $m \in S(x)$. Taking $T_{i}=2^{i}$ for $i \geq 1$, removing the exceptional elements, and summing over $i$ we find that we have removed $$ \ll \frac{x}{(\log x)^{1/2+o(1)}} \sum_{i \geq 1} 1/i^{2} = o(|S(x)|) $$ elements. Thus, property~\ref{item:not-many-close-nbrs} holds for $T$ being a power of two. To see that it holds for all $T \ll m$, take $i$ to be the smallest integer such that $T_{i} = 2^{i}\geq T$ and note that $T \in [T_{i}/2, T_{i}]$.
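The gap properties are easy to probe numerically. The following sketch (ours, purely illustrative) sieves $S$ up to $x=10^{5}$ and computes the gaps between consecutive elements: by Landau's theorem $|S(x)| \sim c\,x/\sqrt{\log x}$, so the mean gap is of order $\sqrt{\log x}$ (about $3.4$ here), matching the $(\log m)^{1/2\pm o(1)}$ scale in properties~\ref{item:no-near-nbrs} and~\ref{item:no-far-nbrs}.

```python
import math

X = 100_000
# Sieve S(X): integers in [1, X] expressible as a sum of two squares.
is_sum = [False] * (X + 1)
a = 0
while a * a <= X:
    b = a
    while a * a + b * b <= X:
        is_sum[a * a + b * b] = True
        b += 1
    a += 1
S = [n for n in range(1, X + 1) if is_sum[n]]

gaps = [t - s for s, t in zip(S, S[1:])]
avg_gap = sum(gaps) / len(gaps)

# The mean gap is of size sqrt(log X) up to a constant (sqrt(log 1e5) ~ 3.4).
assert 3 < avg_gap < 6
# Squares themselves lie in S, so no gap can exceed roughly 2 sqrt(X).
assert max(gaps) <= 2 * math.isqrt(X) + 2
```

Of course, the propositions above concern the full distribution of gaps, not just the mean; the sketch is only meant to make the $(\log m)^{1/2}$ scale concrete.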
\subsubsection{Property (\ref{item:remove-w-bad})} Given $n \in S$ such that $$w = |w_{k}(n)| \in [(\log n)^{1/4}(\log \log n), (\log n)^{2}],$$ remove $2\cdot w^{2}/( (\log n)^{1/2} (\log \log n) (\log w)^{2})$ neighbors to the left of $n$, and $2 \cdot w^{2}/( (\log n)^{1/2} (\log \log n) (\log w)^{2})$ neighbors to the right, and let $R_{n}$ denote the set of such removed elements. Fix $x$ and consider the number of removed elements in $[1,x]$. We claim that if $l \in S(x)$ has been removed, then $l \in R_{n}$ for some $n \leq 2x$. To show this, we note that given an integer $t$, we can always find $l_{1},l_{2} \in S$ such that $l_{1} < t < l_{2}$, and $l_{2}-l_{1} \ll \sqrt{t}$ (just take nearby squares), and since $|w_{k}(n)|\leq r_{2}(n) \leq n^{o(1)}$, any $R_{n}$ will be contained in an interval of length $\ll n^{1/2+o(1)}$. Hence it suffices to bound the union of $R_{n}$ for $n \leq 2x$. The removed contribution from $n$ for which $n \leq x/(\log x)^{10}$ is at most $\frac{x \cdot (\log x)^{4}}{(\log x)^{10}} = o(|S(x)|)$ (here we use the assumption $w \leq (\log n)^{2}$). On the other hand, for $n \in [x/(\log x)^{10},2x]$, we have $\log n = (1+o(1))\log x$. Let $W_{i} = 2^{i} (\log x)^{1/4}(\log \log x)$, and consider $n \in S(2x)$ such that $|w_{k}(n)| \in [W_{i},2W_{i}]$. By Proposition~\ref{prop:w-k-l2-bound} and Chebychev's inequality, the number of such $n$ is $\ll \frac{x}{W_{i}^{2}}$, and the total number of removed elements is thus $$ \ll \frac{x}{W_{i}^{2}} \cdot \frac{W_{i}^{2}}{ (\log x)^{1/2} (\log \log x) (\log W_{i})^{2}} \ll \frac{x}{ (\log x)^{1/2} (\log \log x) \cdot (1+i)^{2}} $$ Summing over $i \geq 0$ we find that the total number of removed elements is $$ \ll \frac{x}{ (\log x)^{1/2} \log \log x} = o(|S(x)|).
$$ \subsubsection{Property (\ref{item:remove-w-really-bad})} Arguing as before, if $|w_{k}(n)| \geq (\log n)^{2}$ let $w=|w_{k}(n)|$ and remove the nearest $2w^{3/2}/\log w$ neighbors to the right and left of $n$; let $R_{n}$ denote the set of removed neighbors. Fix $x$ and consider the number of removed elements in $[1,x]$. We first note that $|w_{k}(n)| \leq r_{2}(n) \ll n^{1/100}$ holds for all $n \in S$. Consequently $R_{n}$, if non-empty, contains at most $n^{3/200}$ neighbors of $n$ which (since $|S(2y)|-|S(y)| \gg y/\sqrt{\log y}$ for all $y$ by Landau) implies that if $l \in S(x)$ and $l$ belongs to some $R_{n}$, then $n \leq 2x$. Consider first the removed contribution coming from $R_{n}$ for which $n \leq \sqrt{x}$. Since $|w_{k}(n)| \leq r_{2}(n) \ll n^{1/100}$, the total contribution is $$ \ll \sqrt{x} \cdot (x^{1/100})^{3/2} = o(|S(x)|). $$ If $n \in [\sqrt{x}, 2x]$ and $|w_{k}(n)| \geq (\log n)^{2}$, we have $$ |w_{k}(n)| \geq (\log x)^{2}/100. $$ Define $W_{i}= 2^{i} \cdot (\log x)^{2}/100$ and consider the removed contribution from $R_{n}$ for which $|w_{k}(n )| \in [W_{i},2W_{i}]$. By Proposition~\ref{prop:w-k-l2-bound} and Chebychev's inequality, the number of such $n \in S(2x)$ is $ \ll \frac{x}{W_{i}^{2}} $ and the associated removed contribution is $$ \ll \frac{x \cdot (W_{i}^{3/2}/\log W_{i})}{W_{i}^{2}} \ll \frac{x}{W_{i}^{1/2} \log W_{i}} \ll \frac{x}{(2^{i}( \log x)^{2})^{1/2} } = \frac{x}{2^{i/2} \log x}. $$ Summing over $i \geq 0$ we find that the total contribution is $$ \ll \sum_{i \geq 0}\frac{x}{2^{i/2} \log x} = o(|S(x)|). $$ \subsubsection{Property (\ref{item:crucial-sum-bound})} The final property is an immediate consequence of the following Lemma. 
\begin{lem} If $\epsilon>0$ then for almost all $m \in S(x)$, we have, for any $T \in [2, x^{1-\epsilon}]$, $$ \sum_{\substack{n \in S\\ |n-m| \geq T}} \frac{1}{(m-n)^{2}} \ll \frac{(\log T)^{2}}{T (\log x)^{1/2-o(1)}} $$ \end{lem} \begin{proof} We first bound the sum over $n \in S \setminus S(2x)$, i.e., those $n$ for which $n \geq 2x$: $$ \sum_{\substack{n \in S\\ |n-m| \geq T\\ n \geq 2x}} \frac{1}{(m-n)^{2}} \leq \sum_{k\geq x} 1/k^{2} \ll 1/x = o \left( \frac{(\log T)^{2}}{T (\log x)^{1/2-o(1)}} \right). $$ (Recall that $m \leq x$ since $m \in S(x)$, and that $T \leq x^{1-\epsilon}$.) Next we note that $$ \sum_{\substack{m,n \in S(2x)\\ |n-m| \geq T}} \frac{1}{(m-n)^{2}} = \sum_{k \geq T} \frac{|\{ m,n \in S(2x): |m-n| = k \}|}{k^{2}} $$ By Theorem~\ref{thm:pair-corr-bound}, $$ |\{ m,n \in S(2x): |m-n| = h \}| \ll \frac{x \cdot c(h)}{\log x} $$ and, by partial summation and using that $c(h)$ is bounded on average (cf.~\eqref{eq:ck-bounded-on-average}), $$ \sum_{\substack{m,n \in S(2x)\\ |n-m| \geq T}} \frac{1}{(m-n)^{2}} \ll \frac{x}{\log x} \sum_{h \geq T} \frac{c(h) }{h^{2}} \ll \frac{x}{T \log x}. $$ By Chebychev's inequality, the number of $m \in S(2x)$ for which $\sum_{\substack{n \in S(2x)\\ |n-m| \geq T}} \frac{1}{(m-n)^{2}} \geq \frac{(\log T)^{2}}{T (\log x)^{1/2-o(1)}}$ holds is thus $$ \ll \frac{x}{T \log x} \bigg/ \frac{(\log T)^{2}}{T (\log x)^{1/2-o(1)}} = \frac{x}{(\log x)^{1/2+o(1)} \cdot (\log T)^{2}} $$ Taking $T_{i} = 2^{i}$, summing over $i \ll \log x$, and recalling that $|S(x)| \sim c\,x/\sqrt{\log x}$ (Theorem~\ref{thm:landau}), we find that the property holds in the special case of $T$ being a power of two. The result for general $T$ follows by taking the largest $i$ such that $T_{i} = 2^{i} \leq T$ and noting that $T_{i} \in [T/2, T]$.
\end{proof} \section{Proof of Theorem \ref{QE}} Let $a\in C^\infty(S^*{\mathbb T}^2)$ be a smooth observable with rapidly decaying Fourier expansion $$ a(x,\phi)=\sum_{\zeta\in\Z^2 ,k\in\Z}\hat{a}(\zeta,k)e^{\i\left\langle \zeta,x \right\rangle+\i k\phi}. $$ Since $\operatorname{Op}(e_{\zeta,k})$ is unitary (cf. \eqref{pseudo}) for all $\zeta,k$, the Cauchy--Schwarz inequality gives $|\left\langle \operatorname{Op}(e_{\zeta,k})g_\lambda, g_\lambda\right\rangle| \leq \|g_\lambda\|_2^2 = 1$, and the rapid decay of the Fourier coefficients then shows that given $\epsilon>0$, there exists $J$ such that \begin{equation} \begin{split} |\left\langle(\operatorname{Op}(a)-\operatorname{Op}(P_J))g_\lambda,g_\lambda\right\rangle| \leq & \sum_{\max(|\zeta|,|k|)>J}|\hat{a}(\zeta,k)||\left\langle \operatorname{Op}(e_{\zeta,k})g_\lambda, g_\lambda\right\rangle| \leq \epsilon \end{split} \end{equation} holds {\em uniformly} in $\lambda$, where $P_J(x,\phi)$ is the trigonometric polynomial \begin{equation}\label{trigpoly} P_J(x,\phi)=\sum_{\substack{\zeta\in\Z^2,k\in\Z\\ |\zeta|,|k|\leq J}}\hat{a}(\zeta,k)e^{\i\left\langle \zeta,x \right\rangle+\i k\phi} \end{equation} obtained by truncating the Fourier expansion of $a$. Hence it is enough to show that for any fixed $J \geq 1$, \begin{equation} \left\langle \operatorname{Op}(P_J) g_\lambda,g_\lambda\right\rangle \to \frac{1}{\operatorname{vol}(S^*{\mathbb T}^2)}\int_{S^*{\mathbb T}^2} P_J \, d\mu = \hat{a}(0,0) \end{equation} as $\lambda \to \infty$ along a full density subsequence of $S$. Now, given $J \geq 1$, let $$\tilde{S}_J:= S' \cap \bigcap_{0<|k|\leq J} S'_k$$ where $S'\subset S$ denotes the full density subsequence of Theorem \ref{mixed}. (Since $S'$ and $S_{k}'$ have full density for all $k \neq 0$, so does $\tilde{S}_{J}$ for all $J$.)
It follows from the previous two sections that \begin{equation} \label{polylimit} \left\langle \operatorname{Op}(P_J) g_\lambda,g_\lambda\right\rangle\to\frac{1}{\operatorname{vol}(S^*{\mathbb T}^2)}\int_{S^*{\mathbb T}^2}P_J d\mu = \hat{a}(0,0) \end{equation} as $\lambda\in \tilde{S}_J\to\infty$. In order to construct the full density sequence of Theorem \ref{QE} we use a standard diagonalisation argument (see for instance \cite{CdV2}) to extract such a sequence from the list of sequences $\{\tilde{S}_J\}_{J}$. By construction $\tilde{S}_{J+1}\subset \tilde{S}_J$. Choose $M_J$ such that for all $X>M_J$ \begin{equation} \frac{\#\{\lambda\in \tilde{S}_J \mid \lambda\leq X\}}{\#\{\lambda\in S \mid \lambda\leq X\}}\geq 1-\frac{1}{2^J} \end{equation} and let $S'_\infty$ be such that $S'_\infty\cap[M_J,M_{J+1}]=\tilde{S}_J\cap[M_J,M_{J+1}]$ for all $J$. Then $S'_\infty\cap[0,M_{J+1}]$ contains $\tilde{S}_J\cap[0,M_{J+1}]$ and therefore $S'_\infty$ is of full density in $S$. Moreover, for any $J\geq 1$, we have \begin{equation} \left\langle \operatorname{Op}(P_J)g_\lambda,g_\lambda \right\rangle = \int_{S^*{\mathbb T}^2}P_J d\mu_\lambda \to \frac{1}{\operatorname{vol}(S^*{\mathbb T}^2)}\int_{S^*{\mathbb T}^2}P_J d\mu =\hat{a}(0,0) \end{equation} as $\lambda\to\infty$ along $S'_\infty$ since $S'_{\infty} \cap (M_{J+1},\infty) \subset \tilde{S}_{J} \cap (M_{J+1},\infty)$. \section{Number theoretic background} \label{sec:numb-theor-backgr} \subsection{Bounding mean values of multiplicative functions} \label{Mean Values} We recall that $r_{2}(n)/4$ is a {\em multiplicative function}, i.e., $r_{2}(mn)/4 = r_{2}(m)/4 \cdot r_{2}(n)/4$ if $(m,n)=1$, and similarly $w_{k}(n)/4$ is also multiplicative (e.g., see the proof of Proposition~6 in \cite{fkw-lattice}.) 
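The multiplicativity of $r_2(n)/4$ can be checked numerically by brute-force lattice-point counting (illustrative only, not part of the argument; the range of $m,n$ below is an arbitrary small cutoff):

```python
# Sanity check: r_2(n)/4 is multiplicative, i.e. 4*r_2(mn) = r_2(m)*r_2(n)
# for coprime m, n. Here r_2(n) is computed by brute force, so only small n
# are tested.
from math import gcd, isqrt

def r2(n):
    """Number of representations n = x^2 + y^2 with (x, y) in Z^2."""
    count = 0
    for x in range(-isqrt(n), isqrt(n) + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y * y == y2:
            count += 1 if y == 0 else 2  # count both y and -y
    return count

for m in range(1, 40):
    for n in range(1, 40):
        if gcd(m, n) == 1:
            assert r2(m * n) * 4 == r2(m) * r2(n)
```

For instance, $r_2(5) = 8$ and $r_2(13) = 8$, so $r_2(65)/4 = 2 \cdot 2 = 4$, i.e. $r_2(65) = 16$.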
In particular, both functions are determined by their values at prime powers, and we have $$ \frac{r_{2}(p^{e})}{4} = \begin{cases} e+1 & \text{if $p \equiv 1 \mod 4$,} \\ 1 & \text{if $p \equiv 3 \mod 4$ and $e$ is even, or if $p=2$,} \\ 0 & \text{if $p \equiv 3 \mod 4$ and $e$ is odd.} \end{cases} $$ For $p \equiv 1 \mod 4$, define the angle $\theta_{p} \in [0,\pi/4)$ by $\cos \theta_p = x/\sqrt{x^{2}+y^{2}}$, where $x^{2}+y^{2} = p$ for $x,y \in \Z$ and $0 \leq y \leq x$. We then have (if $4|k$) $$ \frac{w_{k}(p^{e})}{4} = \begin{cases} \sum_{l=0}^{e} e^{i k \theta_{p} (e-2l)} & \text{if $p \equiv 1 \mod 4$,} \\ 1 & \text{if $p \equiv 3 \mod 4$ and $e$ is even,} \\ 0 & \text{if $p \equiv 3 \mod 4$ and $e$ is odd,} \\ \pm 1 & \text{if $p=2$.} \end{cases} $$ In particular (again for $4|k$), $w_{k}(2)/4 = (-1)^{k/4}$, and for odd primes we have \begin{equation} \label{eq:weyl-sum-on-primes} \frac{w_{k}(p)}{4} = \begin{cases} 2 \cos(k \theta_{p}) & \text{for $p \equiv 1 \mod 4$,}\\ 0 &\text{for $p \equiv 3 \mod 4$.} \end{cases} \end{equation} Now, let $f$ be a non-negative multiplicative function such that $f(p^{e}) \ll \gamma^{e}$ holds for all prime powers, for some $\gamma < 2$, and $$ \sum_{p \leq x } f(p) = \frac{x}{\log x} \cdot (\tau + o(1)), $$ as $x \to \infty$, for some constant $\tau$. Satz~1 of Wirsing \cite{Wirsing} then implies that $$ \sum_{n \leq x } f(n) \ll_{\tau} \frac{x}{\log x} \cdot \prod_{p \leq x} \left( 1+\frac{f(p)}{p}+ \frac{f(p^{2})}{p^{2}}+ \ldots \right). $$ \subsubsection{Proof of Proposition~\ref{prop:w-k-l2-bound}} For $k$ fixed, define a multiplicative function $f(n) := (|w_{k}(n)|/4)^{2}$ (recall that $|w_{k}(n)|/4$ is multiplicative.) 
By \eqref{eq:weyl-sum-on-primes}, we have $$ f(p) = \begin{cases} 1 & \text{for $p=2$}, \\ (2 \cos(k\theta_p))^{2} & \text{for $p \equiv 1 \mod 4$}, \\ 0 & \text{for $p \equiv 3 \mod 4$,} \end{cases} $$ and we find that $$ \sum_{p \leq x} f(p) = 1 + \sum_{p \leq x, p \equiv 1 \mod 4} f(p) = 1 + \sum_{p \leq x, p \equiv 1 \mod 4} (2 \cos(k \theta_{p}))^{2}. $$ Thus Hecke's result on the angular equidistribution of split Gaussian primes (see \cite{Hecke}) gives that \begin{equation} \label{eq:prime-sum} \sum_{p \leq x} f(p) = \frac{x}{\log x} \cdot \left( \frac{1}{2} \cdot \frac{4}{\pi} \int_0^{\pi/4} (2 \cos(k \theta))^{2} \, d \theta +o(1) \right) = \frac{x}{\log x} \cdot \left( 1 +o(1) \right), \end{equation} the factor $\frac{1}{2}$ accounting for the density of the primes $p \equiv 1 \mod 4$. Hence Wirsing's Satz~1 applies (note also that $f(p^{e}) \ll (e+1)^{2}$ for all $p,e$), thus $$ \sum_{n \leq x} |w_{k}(n)|^{2} \ll \sum_{n \leq x} f(n) \ll \frac{x}{\log x} \cdot \prod_{p \leq x} \left( 1+\frac{f(p)}{p}+ \frac{f(p^{2})}{p^{2}}+ \ldots \right). $$ Now, since $\sum_{e=2}^{\infty} f(p^{e})/p^{e} \leq \sum_{e=2}^{\infty} (e+1)^{2}/p^{e} \ll 1/p^{2}$, we find that $$ \prod_{p \leq x} \left( 1+\frac{f(p)}{p}+ \frac{f(p^{2})}{p^{2}}+ \ldots \right) \ll \exp \left( \sum_{p \leq x} \frac{f(p)}{p} \right) = \exp \left( \log \log x + O(1) \right), $$ where the final equality follows from (\ref{eq:prime-sum}) and partial summation. Hence $$ \sum_{n \leq x} |w_{k}(n)|^{2} \ll \frac{x}{\log x} \cdot \exp( \log \log x + O(1)) \ll x. $$ \subsection{Erd\H{o}s-Kac Theory}\label{Erdos Kac} Let $\omega(n)$ denote the number of distinct prime factors of an integer $n$. The celebrated Erd\H{o}s-Kac theorem asserts that the distribution of $ \left\{ \frac{\omega(n)-\log\log n}{\sqrt{\log\log n}} \right\}_{n \leq x} $ converges to the standard normal distribution as $x\to \infty$; in particular, a typical integer of size $x$ has about $\log \log x$ prime factors. We shall need some analogous, but weaker, results for elements in $S$. 
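The normal order $\log \log n$ is easy to observe numerically. The sketch below (illustrative only; the cutoff $N$ is an arbitrary choice) sieves $\omega(n)$ for $n \leq N$ and compares its average with $\log\log N$; the constant offset one sees is Mertens' constant $\approx 0.26$:

```python
# Illustration of the normal order of omega(n): its average over n <= N is
# log log N + O(1) (the O(1) being Mertens' constant, about 0.2615).
from math import log

N = 100_000

# Sieve omega(n), the number of distinct prime factors, for all n <= N.
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:  # no smaller prime marked p, so p is prime
        for m in range(p, N + 1, p):
            omega[m] += 1

mean_omega = sum(omega[2:]) / (N - 1)
print(f"average omega(n) for n <= {N}: {mean_omega:.3f}, "
      f"log log N = {log(log(N)):.3f}")
```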
Recall that given $n\in S$, $\omega_{1}(n)$ denotes the number of distinct prime factors of $n$ that are congruent to one modulo four; i.e., with $\sum'_{p}$ denoting the sum over $p \equiv 1 \mod 4$, $$ \omega_1(n) := \sum'_{p|n} 1. $$ \subsubsection{Proof of Proposition~\ref{prop:erdos-kac-moment-estimates}} Using that at most four primes $p \geq x^{1/4}$ can divide an integer $n \leq x$, together with $\sum'_{p \leq x^{1/4} } |\{ n \in S(x) : p|n \}|= \sum'_{p \leq x^{1/4} } |S(x/p)|,$ we find that \begin{multline*} \sum_{n \in S(x)} \omega_1(n) = \sum_{n \in S(x)} \sum'_{p|n} 1 = \sum_{n \in S(x)} \left( \sum'_{p|n, p \leq x^{1/4}} 1 + O(1) \right) \\= \sum'_{ p \leq x^{1/4}} |S(x/p)| + O(|S(x)|). \end{multline*} By Landau, $|S(x/p)| = \frac{cx}{p \sqrt{\log(x/p)}} \cdot (1+O(1/\log (x/p)))$, and thus what will turn out to be the main term is given by $$ \sum'_{ p \leq x^{1/4}} |S(x/p)| = \sum'_{ p \leq x^{1/4}} \frac{cx}{p \sqrt{\log(x/p)}} \cdot (1+O(1/\log (x/p))). $$ If $p \in [x^{1/\log\log x}, x^{1/4}]$ then $\log(x/p) \gg \log x$ and thus the contribution from such primes $p$ is $$ \ll \frac{x}{\sqrt{\log x}} \sum'_{p \in [x^{1/\log\log x}, x^{1/4}]} 1/p \ll \frac{x}{\sqrt{\log x}} \cdot \log \left( \frac{\log x^{1/4}}{ \log x^{1/\log\log x} } \right) \ll \frac{x \log \log \log x}{\sqrt{\log x}} $$ which is of the same order as the claimed error term in the first assertion of the Proposition. 
Now, if $ p \leq x^{1/\log\log x}$ then $\log p \leq \log x/\log \log x$ and thus $$ \sqrt{\log(x/p)} = \sqrt{\log x- \log p} = \sqrt{ \log x} \left(1 + O \left( \frac{1 }{\log \log x} \right) \right). $$ Hence, by the analogue of Mertens' theorem for primes in progressions\footnote{We shall only need that $\sum'_{p \leq x}1/p = 1/2 \cdot \log \log x + O(1)$, a simple consequence of the prime number theorem for arithmetic progressions.}, together with $1/\log(x/p) \ll 1/\log x$ for $p \leq x^{1/\log\log x}$, we find that \begin{multline*} \sum'_{p \leq x^{1/\log\log x}} \frac{cx }{p \sqrt{\log(x/p)}} \cdot (1+O(1/\log (x/p))) \\= \frac{cx }{\sqrt{\log x}} \cdot (1+O(1/\log x)) \cdot (1+O(1/\log \log x)) \sum'_{p \leq x^{1/\log\log x}} 1/p = \\ \frac{cx }{\sqrt{\log x}} \cdot (1+O(1/\log \log x)) \cdot \left( \frac{1}{2} \log \left( \frac{\log x}{\log\log x} \right) +O(1) \right) = \\ \frac{cx }{\sqrt{\log x}} \cdot (1+O(1/\log \log x)) \cdot \left( \frac{1}{2}\log \log x + O(\log \log \log x) \right) = \\ \frac{cx }{\sqrt{\log x}} \left( \frac{1}{2} \log \log x + O(\log \log \log x) \right). \end{multline*} Dividing by $|S(x)|$ and again using Landau's Theorem, the proof of the first assertion is concluded. The variance estimate is similar: since $n \leq x$ can have at most $4$ prime divisors $p \geq x^{1/4}$, we have \begin{multline*} \sum_{n \in S(x)} \omega_1(n)^{2} = \sum_{n \in S(x)} \left( \sum'_{p \leq x, p|n} 1 \right)^{2} = \sum_{n \in S(x)} \left( \sum'_{p \leq x^{1/4}, p|n} 1 + O(1) \right)^{2} \\ = \sum_{n \in S(x)} \left( \left( \sum'_{p \leq x^{1/4}, p|n} 1 \right)^{2} + O\left( \sum'_{p \leq x^{1/4}, p|n} 1 \right) + O(1) \right). \end{multline*} The total contribution from the last two terms in the inner sum is, by our first assertion (regarding the mean value of $\omega_1(n)$), $$ \ll \frac{x \log \log x}{\sqrt{\log x}} + |S(x)| \ll \frac{x \log \log x}{\sqrt{\log x}}. 
$$ With $[a,b] = ab/(a,b)$ denoting the least common multiple of integers $a,b$, we have \begin{multline*} \sum_{n \in S(x)} \left( \sum'_{p \leq x^{1/4}, p|n} 1 \right)^{2} = \sum'_{p_{1},p_{2} \leq x^{1/4}} |S(x/[p_{1},p_{2}])| \\= \sum'_{p_{1},p_{2} \leq x^{1/4}} |S(x/p_{1}p_{2})| + \sum'_{p \leq x^{1/4}} |S(x/p)| - \sum'_{p \leq x^{1/4}} |S(x/p^{2})| \end{multline*} The latter two terms are of lower order than the claimed main term --- the argument used to estimate the mean of $\omega_1(n)$ implies that $$ \sum'_{p \leq x^{1/4}} |S(x/p)| \ll \frac{x \log \log x}{\sqrt{\log x}} $$ and, again using Landau, we find that $$ \sum'_{p \leq x^{1/4}} |S(x/p^{2})| \ll \frac{x}{\sqrt{\log x}} \sum'_{p \leq x^{1/4}} 1/p^{2} \ll \frac{x}{\sqrt{\log x}}. $$ As for the double sum over small primes, again by Landau, \begin{multline} \label{eq:variance-small-prime-double-sum} \sum'_{p_{1},p_{2} \leq x^{1/4}} |S(x/p_{1}p_{2})| = \sum'_{p_{1},p_{2} \leq x^{1/4}} \frac{c x}{p_{1}p_{2} \sqrt{\log(x/(p_{1}p_{2}))}} \left( 1 + O(1/\log(x/(p_{1}p_{2})))\right) \\= \sum'_{p_{1},p_{2} < x^{1/\log \log x}} \ldots + 2 \cdot \sum'_{\substack{ p_{1} \leq x^{1/\log \log x}\\ p_{2} \in [x^{1/\log \log x}, x^{1/4}]}} \ldots + \sum'_{p_{1}, p_{2} \in [x^{1/\log \log x}, x^{1/4}]} \ldots \end{multline} Again by the analogue of Mertens' Theorem for arithmetic progressions, $$ \sum'_{p\in [x^{1/\log \log x}, x^{1/4}]} 1/p \ll \log \log \log x $$ and $$ \sum'_{p \leq x^{1/\log \log x}} 1/p = \frac{1}{2} \log \log x +O(\log \log \log x) $$ hence the contribution from the latter two sums in (\ref{eq:variance-small-prime-double-sum}) is $$ \ll \frac{x}{\sqrt{\log x}} ( (\log \log \log x) \cdot \log \log x + (\log \log \log x)^{2} ) \ll \frac{x \cdot (\log \log \log x) \cdot \log \log x }{\sqrt{\log x}} $$ which is of the same size as the claimed error term. 
Finally, yet again by Landau, and using that $\log(x/(p_{1}p_{2})) = \log x \cdot (1+O(1/\log \log x))$ for $p_{1},p_{2} \leq x^{1/\log \log x}$, we find that \begin{multline*} \sum'_{p_{1},p_{2} < x^{1/\log \log x}} |S(x/p_{1}p_{2})| = \sum'_{p_{1},p_{2} < x^{1/\log \log x}} \frac{c x}{p_{1}p_{2} \sqrt{\log( x/(p_{1}p_{2}))}} (1+O(1/\log x)) \\= \sum'_{p_{1},p_{2} < x^{1/\log \log x}} \frac{c x}{p_{1}p_{2} \sqrt{\log x}} (1+O(1/\log x)) (1+O(1/\log\log x)) \\= \frac{cx }{\sqrt{\log x}} \left( \sum'_{p < x^{1/\log \log x}} 1/p \right)^{2} (1+O(1/\log \log x)). \end{multline*} Again by Mertens' theorem for primes in progressions, we find that the main term equals $$ \frac{cx }{\sqrt{\log x}} \left( \frac{1}{2} \log \log x + O(\log\log \log x) \right)^{2} \cdot (1+O(1/\log \log x)). $$ Dividing by $|S(x)|$ and using Landau again, the main term is thus $$ \frac{1}{4} (\log \log x)^{2} + O( (\log \log x) \cdot \log \log \log x ). $$ \subsubsection{Proof of Corollary~\ref{cor:r2-normal-order}} \label{sec:proof-coroll-refc} Define a multiplicative function $$ f(n) := \frac{r_{2}(n)}{4 \cdot 2^{\omega_1(n)}}. $$ If $p \equiv 1 \mod 4$, then $f(p^{e}) = (e+1)/2$; if $p \equiv 3 \mod 4$ then $f(p^{2e+1})=0$, whereas for even exponents $f(p^{2e})=1$. Using Wirsing's Satz~1 again, we find (recall that $\sum'_{p\leq x}$ denotes the sum over primes $p \equiv 1 \mod 4$) that \begin{multline*} \sum_{n \in S(x)} f(n) = \sum_{n \leq x} f(n) \ll \frac{x}{\log x} \exp \left( \sum_{p \leq x} f(p)/p \right) = \frac{x}{\log x} \exp \left( \sum'_{\substack{p \leq x }} 1/p + O(1)\right) \\ \ll \frac{x}{\log x} \exp \left( \frac{1}{2} \log \log x +O(1) \right) \ll \frac{x}{(\log x)^{1/2}} \ll |S(x)| \end{multline*} (here we have again used Mertens' theorem for arithmetic progressions). Chebyshev's inequality then implies that the number of $n \in S(x)$ for which $f(n) \geq \log \log \log n$ holds is $o(|S(x)|)$. 
In particular, we find that $$ 2^{\omega_1(n)} \leq r_{2}(n)/4 \leq 2^{\omega_1(n)} \cdot \log \log \log n $$ holds for almost all $n \in S(x)$. Now, since Corollary~\ref{cor:r2-log-normal-order} implies that $\omega_1(n) = (1/2+o(1)) \log \log n$ for almost all $n \in S(x)$, we find that $$ r_{2}(n) = 2^{(1/2+o(1)) \log \log n } = (\log n)^{(\log 2)/2+o(1)} $$ holds for almost all $n \in S(x)$.
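Note that the lower bound $2^{\omega_1(n)} \leq r_2(n)/4$ in fact holds for \emph{every} $n \in S$, since each prime $p \equiv 1 \bmod 4$ dividing $n$ contributes a factor $e+1 \geq 2$ to $r_2(n)/4$, while the remaining prime factors contribute a factor $1$. A brute-force check (illustrative only; the cutoff $N$ is an arbitrary small choice):

```python
# Deterministic check of 2^{omega_1(n)} <= r_2(n)/4 for every sum of two
# squares n <= N (brute force, so N is kept small).
from math import isqrt

N = 2000

def r2(n):
    """Number of (x, y) in Z^2 with x^2 + y^2 = n."""
    count = 0
    for x in range(-isqrt(n), isqrt(n) + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y * y == y2:
            count += 1 if y == 0 else 2  # count both y and -y
    return count

def omega1(n):
    """Number of distinct prime factors p = 1 mod 4 of n."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            if p % 4 == 1:
                count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1 and n % 4 == 1:  # leftover prime factor
        count += 1
    return count

for n in range(1, N + 1):
    if r2(n) > 0:  # n is a sum of two squares, i.e. n is in S
        assert 2 ** omega1(n) <= r2(n) // 4
```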
\section{Introduction} \gls{rl} solves sequential decision making problems by utilizing a trial-and-error approach guided by a reward signal. \gls{rl} has achieved tremendous successes, especially in beating humans in games \citep{silver2018alphazero, jaderberg2019pbt_ctf} and robotics \citep{levine2016e2e_visuomotor}. However, \gls{rl} also suffers from various open problems, such as its sample inefficiency. This sample inefficiency is often caused by the way the reward function is specified: a sparse and delayed reward signal makes it difficult for the agent to experience, and learn from, meaningful reward signals. Designing tasks suitable to be solved with \gls{rl} algorithms is often challenging \citep{ng1999reward_shaping}, and mostly involves designing a task-specific reward function. A recent line of research, surveyed by \citet{luketina2019suvery_rl_nlp}, has proposed methods that allow task descriptions to be specified using natural language. However, such methods \citep{chevalierboisvert2019babyai} have proven to still be very sample inefficient, requiring up to 50 GPUs for weeks in order to learn relatively simple tasks. One promising approach is that of \citet{jiang2019hal}, which proposed to tackle this sample inefficiency by decomposing the problem into a hierarchical structure, guided by the compositional nature of natural language. Humans follow a similar strategy: when confronted with a new problem, they are generally capable of forming \textit{intuitive theories} about how to tackle the problem at hand. These intuitive theories often consist of sequences of high-level actions (e.g., \textit{first go to store x}, then \textit{stop for gas before driving home}). An interesting approach to make \gls{rl} more sample efficient would be to combine high-level human intuitive theories, expressed using natural language, with low-level automated trial-and-error learning. 
An essential part of such a symbiosis is the ability of an agent to quickly adapt from one task to a similar task. A human does not need to learn each individual task from scratch, but has a set of base strategies from which a new strategy can be quickly formed. Current algorithms capable of quickly adapting their control policies to solve related tasks mostly rely on intensive training using a diverse set of tasks, often guided by a curriculum of increasingly more difficult and diverse tasks \citep{bengio2009curriculumlearning}. In this paper, we take a different approach and examine whether we can facilitate fast task adaptation by utilizing the semantic meaning of task descriptions formulated in natural language. Given a set of pre-trained control policies and a new, previously unseen task, our method is capable of deciding, solely from the task instruction, which previously developed control policy will adapt best to the new task. In the following sections of this paper, we first briefly review key research related to ours (Section~\ref{sec:related_work}). Section~\ref{sec:babyai} contains a description of the environment and the tasks we use to demonstrate our method. In Section~\ref{sec:method} we describe the proposed method. Section~\ref{sec:experiments} demonstrates experimentally how well our method is capable of performing task adaptation in a simple environment. \section{Related work} \label{sec:related_work} Our proposed method is situated at the intersection of transfer learning and natural language usage in \gls{rl}. In this section, we briefly review how our method relates to key research in transfer learning in \gls{rl}, how natural language has been used in \gls{rl}, and what research has been conducted at this intersection. \paragraph{Transfer learning in reinforcement learning} Utilizing knowledge gained from learning one task in order to learn another task has been widely studied. 
The goal of this field is to make \gls{rl} more sample efficient \citep{konidaris2006transfer_rl, taylor2009transfer_rl_survey}. Common approaches include training the agent on multiple tasks \citep{hessel2019popart}, or constructing parameterized policies \citep{schaul2015uvfa, andreas2016policy_sketches, oh2017zeroshottaskgeneralization}, which can be configured to perform new tasks. An alternative approach consists of learning inter-task mappings \citep{taylor2007representationtransferreinforcement}, based on task similarities. Our method is similarly capable of detecting task similarities, using the additional information captured in task descriptions. \paragraph{Language instructions in reinforcement learning} Recent advances in \gls{rl}, surveyed by \citet{luketina2019suvery_rl_nlp}, have demonstrated the usage of natural language in order to build models capable of capturing domain knowledge. A commonly used approach consists of directly embedding both the visual observation and the language instruction in order to train a control policy \citep{hermann2017grounded, misra2017mapping, chevalierboisvert2019babyai}. Alternatively, \citet{goyal2019rewardshapelanguage} uses natural language reward shaping, by predicting whether an action in a trajectory matches a task description. \citet{jiang2019hal} explores the compositional structure of natural language in order to train a hierarchical algorithm, capable of discovering abstractions that generalize over different sub-tasks using language instructions. However, current approaches commonly depend heavily on large amounts of human-labeled data and hand-designed policies. In this context, our method can reduce the dependency on expensive human labeling by providing fast task adaptation. 
\paragraph{Transfer learning guided by language in reinforcement learning} \citet{co-reyes2018metalearning} proposed a meta-learning algorithm capable of utilizing corrective instructions formulated in natural language in order to facilitate task adaptation. Most similar to our research is the work done by \citet{narasimhan2018language_transfer}, which includes a way to use entity descriptions in natural language as a layer of abstraction, in order to facilitate the transfer of an \gls{rl} policy to a new environment. \section{BabyAI environment} \label{sec:babyai} \paragraph{Environment} In order to demonstrate the capabilities of our method, we make use of the \textit{BabyAI environment} proposed by \citet{chevalierboisvert2019babyai}. In this environment, the agent is tasked with completing various tasks in a 2D gridworld. The environment supports multiple rooms, but for our preliminary experiments, we only consider a single room, and use the \textit{goto} and \textit{pickup} tasks. The task the agent is charged with is described using a synthetic \textit{baby language}. The pixels of the screen, together with this instruction, form the observation of the agent. The environment supports partial observability of the state; however, for our experiments we use the fully observable configuration. The action space we consider for our experiments consists of moving forward, turning left/right, object pickup/drop, opening doors, and a \textit{finish} action. Notice that in order to solve the \textit{goto} and \textit{pickup} tasks, only a subset of the action space is required. In this environment, the reward signal is only sparsely observed, as the agent only receives a reward upon task completion. A few example tasks are presented in Figure~\ref{fig:env}. \begin{figure}[ht] \centering \includegraphics[scale=0.25]{env.pdf} \caption{Sample tasks taken from the BabyAI environment \citep{chevalierboisvert2019babyai}. 
The current position of the agent is represented with a red triangle.} \label{fig:env} \end{figure} \paragraph{Vocabulary} The instructions used in the BabyAI environment are all generated using the proposed \textit{Baby Language}. This language consists of a small vocabulary, but can be used combinatorially to express a relatively rich set of different tasks. The instructions we use in our transfer experiments all follow the same \textit{verb}, \textit{object color}, \textit{object} pattern (e.g., \textit{pickup the yellow box}). The following words make up the vocabulary used in our experiments: \begin{itemize} \item \textbf{Verbs}: pickup, goto \item \textbf{Objects}: box, key, ball \item \textbf{Colors}: blue, red, green, yellow \end{itemize} In total, this allows us to express 24 different tasks. While the BabyAI platform is well suited to demonstrate the qualities of our method, our method is not environment-specific, and we plan to extend this research to multiple environments. \section{Method} \label{sec:method} The main idea of our approach is to utilize a limited set of pre-trained base control policies. When confronted with a new task, described using natural language (the transfer instruction), the best base policy is selected and the new task is learned starting from this base policy. As such, our method consists of two parts: the first part is a pre-training step, while the second part deals with the effective task adaptation. A pseudo-code summary of our method can be found below in Algorithm~\ref{alg:summary}. 
\begin{algorithm}[H] $\alpha$: $k$ instructions sampled from the set of possible instructions $Z$ \\ $\beta$: $p$ instructions sampled from the set of possible instructions $Z$ \\ \ForEach{instruction $z_i \in \alpha$} { Train base policy $\pi_i$ until convergence \\ \ForEach{instruction $z_j \in \beta$} { Sample a task adaptation for $n$ training steps, from base policy $\pi_i$ (with instruction $z_i$) to task $z_j$ } } Train the transfer model \caption{Summary of our task-adaptation method} \label{alg:summary} \end{algorithm} \subsection{Pre-training base control policies} In this pre-training phase, we first train a set of $k$ base control policies $\{\pi_{0},...,\pi_{k}\}$. A control policy $\pi_{i}(s_t)$ determines the action $a$ an agent takes, based on the state $s_t$ the agent resides in. Each base control policy should reliably be able to perform one instruction $z_{i}$. This task instruction is expressed in natural language (e.g., \textit{go to the blue ball} or \textit{pickup the yellow key}). Training base control policies can be done using any \gls{rl} algorithm. In this preliminary research, the set of possible instructions $Z$ is limited, due to the fixed vocabulary described in Section~\ref{sec:babyai}. The number of pre-trained control policies should be sufficiently large, but smaller than the entire set of possible instructions ($k \ll |Z|$). For a base control policy to facilitate efficient task adaptation, it is beneficial to introduce slight variations in the environment; an example of such variations is spawning the agent in a different position after each iteration. Our method can be used with a fixed number of base control policies, which are trained during a single pre-training phase. However, our method can also be extended to work in an iterative fashion. In this iterative approach, the agent starts with a small set of $k$ pre-trained base control policies. 
When confronted with a new task, our method is used to determine the best base control policy to facilitate task adaptation (e.g., $\pi_i$). After training the new policy $\pi_j$ by adapting the selected base control policy $\pi_i$, the new policy $\pi_j$ can be added to the set of base control policies. This allows executing more efficient task adaptations, as more base control policies become available. In the proposed method, we select the $k$ instructions $\{z_{0},...,z_{k}\}$ used to train the base control policies $\{\pi_{0},...,\pi_{k}\}$ uniformly at random. However, an interesting extension to this method might be to select base control policies based on a more advanced selection objective, for example, maximizing the distance between the task instructions (in a language embedding). \subsection{Sampling task-adaptations} The second phase of our method consists of utilizing the developed base control policies in order to sample a limited number of task adaptations. A single task-adaptation sample $\langle \pi_i, z_j \rangle$ consists of taking a fully developed base control policy $\pi_i$, and using it to perform a new instruction $z_j$, different from the one it was trained on. An example of such a sample would be to start from a policy trained on the instruction \textit{go to the yellow box}, and ask it to perform a different task, such as \textit{pickup the yellow box}. A task adaptation from one policy to a new one is done by loading the parameters of the base policy as the initialization of the new policy we want to develop. Training can be performed using any \gls{rl} algorithm. During this sampling phase the policy does not need to converge; training only needs to happen for a limited number of $n$ steps. This number of steps is significantly smaller than what is required to fully develop a policy. After the sampled task adaptation has been executed for $n$ steps, we measure its performance. 
This can be done by, for example, calculating the success rate of the agent satisfying the instruction over the last 100 iterations. Table~\ref{table:transfer_samples} contains a few examples of this sampling process. \begin{table}[H] \centering \begin{tabular}{lll} \toprule \textbf{Base control policy instruction} & \textbf{Transfer instruction} & \textbf{Measured performance} \\ \midrule Pickup the red ball & Goto the green key & 0.91 \\ Pickup the red ball & Goto the red ball & 0.76 \\ Goto the yellow box & Goto the green key & 0.86 \\ Goto the yellow box & Goto the red ball & 0.86 \\ \bottomrule \\ \end{tabular} \caption{Example task-adaptation sampling results ($k=2$ base policies, $p=2$ transfer instructions). The measured performance is calculated as the success rate over the last 100 episodes, after $n$ training steps. Displayed task performance is exemplary.} \label{table:transfer_samples} \end{table} For each base control policy, we randomly select $p$ different tasks from $Z$ for which to sample a task adaptation. In summary, our method thus requires running $k \times p$ task-adaptation samples, each consisting of $n$ training steps. Similarly to the selection of the base control policies, we leave a more advanced sampling strategy as future work. This sampling method allows the generation of a dataset that can be used to generalize expected task adaptation over unseen tasks. Furthermore, the resulting policies, which were partially developed during the sampled task adaptations, could be used by the agent for further development when it is tasked with the linked instruction. \subsection{Training the transfer-model} In the next stage of our method, we train a binary classification model $f(z_{x},z_{i},z_{j}) \rightarrow \{0, 1\}$ in order to generalize the observed task-adaptation performance. The input of the proposed model consists of a concatenation of the sampled transfer instruction $z_{x}$, combined with the instructions attached to two sampled base policies ($z_{i}$ and $z_{j}$). 
The output of the model consists of a single binary output. This output is trained to be positive if the first base policy, with instruction $z_{i}$, performed better during transfer sampling than the second base policy, with instruction $z_{j}$. An example dataset is presented in Table~\ref{table:transfer_dataset}. \begin{table}[H] \centering \begin{tabular}{llll} \toprule \textbf{Instruction $z_{x}$} & \textbf{Transfer instruction $z_{i}$} & \textbf{Transfer instruction $z_{j}$} & \textbf{Class} \\ \midrule Goto the green key & Pickup the red ball & Goto the yellow box & 1 \\ Goto the red ball & Pickup the red ball & Goto the yellow box & 0 \\ \bottomrule \\ \end{tabular} \caption{Example input dataset, used to train the transfer-model.} \label{table:transfer_dataset} \end{table} \begin{figure}[H] \centering \includegraphics[scale=0.50]{model.pdf} \caption{The high-level transfer-model architecture. The input consists of a concatenation of the transfer instruction $z_{x}$, and instructions $\langle z_{i}, z_{j}\rangle$ linked to two base policies. A language-embedding layer is used in order to learn a task-specific language embedding. This layer is followed by a set of fully connected layers which finally output a binary variable.} \label{fig:model} \end{figure} In order to work directly with instructions in natural language, a language embedding is used. This embedding is trained end-to-end, and is thus specifically trained to encode instructions based on their transfer capabilities. \subsection{Transfer-model usage} The resulting transfer model can be used when the agent is confronted with a new task for which it currently has no developed base control policy. Given a set of labeled base policies and a task instruction, the various possibilities can be tested in order to assess which base policy will result in the fastest task adaptation. 
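A minimal end-to-end sketch of the pipeline of this section, under several simplifying assumptions: the adaptation performances of Table~\ref{table:transfer_samples} are \emph{simulated} by a hypothetical word-overlap heuristic instead of actual RL runs, and the learned language embedding is replaced by two hand-crafted comparison features; the pairwise dataset and the binary comparator otherwise follow the construction above:

```python
import itertools
import math
import random

# The 24 instructions of Section 3 ("verb the color object").
VERBS = ["pickup", "goto"]
COLORS = ["blue", "red", "green", "yellow"]
OBJECTS = ["box", "key", "ball"]
TASKS = [f"{v} the {c} {o}" for v, c, o in itertools.product(VERBS, COLORS, OBJECTS)]

def simulated_performance(base, target):
    """Stand-in for the measured adaptation success rate: word overlap, with
    the verb weighted extra (mimicking Section 5.1). In the real pipeline
    this number comes from n steps of RL training."""
    b, t = base.split(), target.split()
    return len(set(b) & set(t)) + 2 * (b[0] == t[0])

def features(zx, zi, zj):
    """Hand-crafted comparison features; the paper instead learns a
    language embedding end-to-end."""
    overlap = lambda a, b: len(set(a.split()) & set(b.split()))
    same_verb = lambda a, b: float(a.split()[0] == b.split()[0])
    return [overlap(zx, zi) - overlap(zx, zj),
            same_verb(zx, zi) - same_verb(zx, zj)]

random.seed(0)
base_instructions = random.sample(TASKS, 4)  # k = 4 pre-trained base policies

# Pairwise dataset (cf. Table 2): label 1 iff base policy i adapted better than j.
rows = []
for zx in TASKS:
    for zi, zj in itertools.permutations(base_instructions, 2):
        pi, pj = simulated_performance(zi, zx), simulated_performance(zj, zx)
        if pi != pj:  # skip ties
            rows.append(((zx, zi, zj), features(zx, zi, zj), 1.0 if pi > pj else 0.0))

# Tiny logistic-regression comparator trained by plain gradient descent.
w, bias = [0.0, 0.0], 0.0
for _ in range(500):
    for _, x, y in rows:
        p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + bias)))
        g = p - y
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]
        bias -= 0.1 * g

def better(zx, zi, zj):
    """Predict whether base policy i will adapt better than j to task zx."""
    return sum(wi * xi for wi, xi in zip(w, features(zx, zi, zj))) + bias > 0

accuracy = sum(better(*t) == (y == 1.0) for t, _, y in rows) / len(rows)

def select_base(zx):
    """Transfer-model usage: pick the base policy predicted to adapt fastest."""
    best = base_instructions[0]
    for cand in base_instructions[1:]:
        if better(zx, cand, best):
            best = cand
    return best

print(f"{len(rows)} pairwise samples, training accuracy {accuracy:.2f}")
```

With the comparator trained, `select_base` implements the transfer-model usage step: it returns the instruction of the base policy predicted to adapt fastest to the new task.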
\section{Experiments} \label{sec:experiments} \subsection{Task-adaptation in the BabyAI environment} In order to find out whether patterns can be discovered in task adaptations using instructions expressed in natural language, we performed a large set of transfer experiments in the BabyAI environment. In these experiments, we wanted to find out which parts of the instructions (verb, object, color) matter in making efficient task-adaptation decisions. \begin{figure}[p] \centering \includegraphics[scale=0.45]{transfer_of_verbs.pdf} \caption{Comparison of how well different base control policies adapt to new tasks, based on whether the verb in the instruction is the same or different.} \label{fig:transfer_verbs} \end{figure} \begin{figure}[p] \centering \includegraphics[scale=0.45]{transfer_of_objects.pdf} \caption{Comparison of how well different base policies adapt to new tasks, based on whether the object in the instruction is the same or different.} \label{fig:transfer_objects} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.45]{transfer_of_colors.pdf} \caption{Comparison of how well different base policies adapt to new tasks, based on whether the object color in the instruction is the same or different.} \label{fig:transfer_colors} \end{figure} The results of this experiment are summarized in Figures~\ref{fig:transfer_verbs}, \ref{fig:transfer_objects} and \ref{fig:transfer_colors}. Each of these plots shows results averaged over 636 task adaptations. The green line represents performance when training a policy from scratch, while the blue line shows transfer performance averaged over all performed transfer experiments. We see some clear patterns. The verb seems to be the most important part of the task instruction. For example, when confronted with a new task which has a \textit{goto} verb, base control policies which are also trained on a \textit{goto} instruction seem to transfer best. 
This is an expected result, as the verb part of the instruction also determines the required set of primitive actions to solve the task. \subsection{Transfer model} As experimentally demonstrated in the previous experiment, the various parts of a task instruction have a different impact on task-adaptation performance. In this second experiment we trained different numbers $k$ of randomly sampled base control policies. While training can be done using any \gls{rl} algorithm, we used DQN \citep{mnih2015dqn} in our experiments. Training a base control policy is done using at least 1 million steps, and ends when the policy achieves a success rate of at least 95\%, measured over the previous 100 iterations. The full set of used training hyperparameters is described in Appendix~\ref{appendix:hyperparameters}. After developing $k$ different base control policies, we sampled $p$ task adaptations for each base control policy. The results gathered from these task adaptations were used to train the transfer model. In Table~\ref{table:transfer_perf}, we show the performance of our model when using various numbers of base control policies ($k$), and different numbers of task-adaptation samples ($p$). We measure model accuracy over a holdout set consisting of all possible expressible task adaptations not seen during sampling. 
\begin{table}[H] \centering \begin{tabular}{rllllll} \toprule & \textbf{p=8} & \textbf{p=10} & \textbf{p=12} & \textbf{p=14} & \textbf{p=18} & \textbf{p=20} \\ \midrule \textbf{k=8} & 0.61 $\pm$0.03 & 0.62 $\pm$0.03 & 0.61 $\pm$0.05 & 0.64 $\pm$0.05 & 0.65 $\pm$0.02 & 0.66 $\pm$0.03 \\ \textbf{k=10} & 0.62 $\pm$0.03 & 0.62 $\pm$0.05 & 0.64 $\pm$0.06 & 0.62 $\pm$0.04 & 0.66 $\pm$0.03 & 0.67 $\pm$0.02 \\ \textbf{k=12} & 0.67 $\pm$0.02 & 0.67 $\pm$0.01 & 0.66 $\pm$0.02 & 0.67 $\pm$0.02 & 0.68 $\pm$0.02 & 0.66 $\pm$0.04 \\ \textbf{k=14} & 0.64 $\pm$0.04 & 0.66 $\pm$0.02 & 0.67 $\pm$0.03 & 0.69 $\pm$0.01 & 0.69 $\pm$0.03 & 0.68 $\pm$0.01 \\ \textbf{k=18} & 0.67 $\pm$0.03 & 0.68 $\pm$0.02 & 0.68 $\pm$0.03 & 0.71 $\pm$0.01 & 0.70 $\pm$0.02 & 0.71 $\pm$0.02 \\ \textbf{k=20} & 0.69 $\pm$0.01 & 0.68 $\pm$0.05 & 0.70 $\pm$0.02 & 0.69 $\pm$0.04 & 0.71 $\pm$0.03 & 0.71 $\pm$0.03 \\ \bottomrule \\ \end{tabular} \caption{Accuracy of the binary task adaptation classifier model. Rows correspond to the number of base control policies ($k$) used during training; columns correspond to the number of task adaptations ($p$) sampled for each base control policy. Results are averaged over 5 runs.} \label{table:transfer_perf} \end{table} Our preliminary results show that a transfer model can be developed even with a limited number of base control policies and sampled task adaptations. There is still room for improvement in the accuracy of the model; however, the stochastic nature of \gls{rl} makes task transfer inherently noisy. Nevertheless, the increased sample efficiency due to the efficient task adaptation provided by our method is an essential building block in a lifelong learning setting \citep{silver2013lifelong}. \section{Discussion and future work} \label{sec:discussion} In this paper, we presented a method capable of predicting, given a set of base control policies, which of these policies will adapt fastest to a new, previously unseen task.
To make assessments about task adaptation, our method uses a language embedding of the task instructions, trained specifically for this purpose. Our preliminary results show that a binary classification approach can assess task adaptation by exploiting the semantic content of task instructions expressed in natural language. When confronted with an expanding set of tasks in a lifelong learning setting, our method has the potential to vastly improve sample efficiency. However, our method still relies on a set of randomly selected base control policies and task transfer samples. Future research could improve our method by introducing an iterative sampling scheme based on a more advanced selection criterion, such as \textit{instruction diversity}. Another interesting extension to our method is the use of an open vocabulary.
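The transfer model itself is a standard binary classifier over instruction pairs. The following toy sketch conveys the idea only: it substitutes simple bag-of-words features for the learned language embedding used in our experiments, and the vocabulary, instruction pairs and labels are all invented for illustration.

```python
import numpy as np

# Hypothetical toy vocabulary for BabyAI-style instructions (verbs, colors, objects).
VOCAB = ["goto", "pickup", "open", "red", "green", "blue", "ball", "key", "door"]

def featurize(src, tgt):
    """Bag-of-words features for a (source task, target task) instruction pair,
    plus elementwise products marking shared words - a crude stand-in for the
    learned language embedding."""
    s = np.array([w in src.split() for w in VOCAB], dtype=float)
    t = np.array([w in tgt.split() for w in VOCAB], dtype=float)
    return np.concatenate([s, t, s * t])

def train_classifier(pairs, labels, lr=0.5, steps=2000):
    """Logistic-regression transfer model: predicts whether the base policy
    trained on src will adapt quickly to tgt (label 1) or not (label 0)."""
    X = np.stack([featurize(s, t) for s, t in pairs])
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted adaptation probability
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on the log loss
    return w

def predict(w, src, tgt):
    return 1.0 / (1.0 + np.exp(-featurize(src, tgt) @ w))

# Invented pairs encoding the observed pattern: same verb -> fast adaptation.
pairs = [("goto red ball", "goto blue ball"), ("goto red ball", "pickup red ball"),
         ("pickup blue key", "pickup green key"), ("open red door", "goto red door")]
labels = [1, 0, 1, 0]
w = train_classifier(pairs, labels)
```

The interaction features `s * t` are what allow such a model to key on shared instruction words such as the verb.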
\section{Introduction} There are three basic approaches to estimating the smoothness of Generalized Additive Models within a penalized likelihood framework. The first is to develop an efficient GCV (Craven and Wahba, 1979) or AIC (Akaike, 1973) based smoothness selection method for the simple (i.e. non-generalized) additive model case (e.g.\ Gu and Wahba, 1991, for smoothing spline ANOVA models; see also Mammen and Park, 2005, for backfit GAMs), and then to apply this method to each working additive model of the penalized iteratively re-weighted least squares (P-IRLS) scheme used to fit the GAM. This was initially proposed by Gu (1992) for generalized smoothing spline ANOVA models: he termed the approach `Performance Oriented Iteration'. Wood (2000) extended the method to computationally efficient low rank GAMs based on penalized regression splines, while Wood (2004) developed it further by providing an optimally stable smoothness selection method for simple additive models. The second approach is to treat the GAM as a generalized linear mixed model (see e.g. Ruppert et al. 2003), so that smoothing parameters become variance components. In the non-generalized case these variance components can be estimated by maximum likelihood or REML, but in the generalized case methods based on iterative fitting of working linear mixed models are used (e.g. Breslow and Clayton's (1993) `PQL' approach). There can be no guarantee of convergence for methods based on iteratively selecting the smoothness of working linear (mixed) models, so the third approach avoids this problem by selecting smoothing parameters directly, based on AIC or GCV for the actual model: it goes back at least as far as O'Sullivan et al. (1986). Gu's (2004) {\tt gss} package has an implementation of this, based on work by Kim and Gu (2004), and Wood (2006) attempted to extend an earlier performance oriented iteration method (Wood, 2004) in this direction.
The difficulty with the direct approach is that its extra non-linearity makes efficient and stable methods difficult to develop: in consequence, the Gu (2004) and Wood (2006) methods are based on inefficient finite differencing based optimization, which is not always reliable. Figures \ref{mack.bin.data} and \ref{concurvity.data} show data sets for which these three existing approaches fail. \begin{figure} \eps{-90}{.32}{mack.bin.eps} \vspace*{-.5cm} \caption{ Presence ($\bullet$) and absence ($+$) data for mackerel eggs off the west coast of France. The data are based on the larger data set discussed in section \ref{mack.section}. There are substantial problems fitting a GAM to these data with existing methods. \label{mack.bin.data}} \end{figure} Figure \ref{mack.bin.data} shows data on the presence or absence of mackerel eggs from a sub-region of a survey where absences are sufficiently common that it might be worthwhile to model presence/absence before attempting to model abundance, the real quantity of interest. Covariates longitude, latitude, distance from the continental shelf edge, sea bed depth, surface temperature and temperature at 20m depth are available, and in the absence of more detailed prior knowledge a generalized additive model: \begin{equation} {\rm logit}\{E(p_i)\} = f_1({\tt lon}_i,{\tt lat}_i) + f_2({\tt c.dist}_i) + f_3({\tt b.depth}_i) + f_4({\tt t.surf}_i) + f_5({\tt t.20m}_i) \label{mack.logistic} \end{equation} might be appropriate, where $p_i$ is 1 for presence and 0 for absence of eggs. A Bernoulli distribution for $p_i$ is assumed. A tensor product of two rank 8 penalized regression splines should be more than sufficiently flexible for $f_1$, while the remaining terms can be represented by rank 10 penalized regression splines. Performance oriented iteration diverges when attempting to fit this model.
PQL based model fitting either fails to converge or `converges' to a clearly sub-optimal `completely smooth' model, depending on numerical details (if treated in the same way as a standard penalized likelihood based model, the completely smooth model would have an AIC around 84 larger than the AIC optimal model). Direct optimization of the whole model (generalized) AIC also fails, using either a pure finite difference based optimization, or Wood's (2006) approach based on finite differencing only for second derivatives. In these cases careful examination of the optimization results does indicate the possibility of problems, and estimated minimum AICs are approximately 20 above the actual optimum. All computations were performed using R 2.4.0, with GAM setup based on the {\tt mgcv} package and mixed model estimation based on the {\tt nlme} and {\tt MASS} packages. Direct optimization was performed using the {\tt nlm} routine (results from {\tt optim} generally being substantially more problematic). \begin{figure} \eps{-90}{.4}{concurvity2.eps} \vspace*{-.5cm} \caption{ Simulated data with a serious concurvity problem. Response $y$ depends on covariates $z$, $x$ and $d$. Existing methods have difficulty fitting an appropriate GAM to these data. \label{concurvity.data}} \end{figure} Data sets that cause problems for existing GAM estimation and smoothness selection methods are not hard to find, and the root cause is often some form of concurvity (i.e. the presence of covariates which are themselves well modelled as smooth functions of other covariates). Often such problems are difficult to isolate, but figure \ref{concurvity.data} shows simulated data for which the problem is clear. PQL and POI based methods both fail when used to fit the model: \begin{equation} {\rm logit}\{E(y_i)\} = f_1(x_i,z_i) + f_2(d_i),~~~ y_i \sim {\rm Bernoulli}. \label{conc.model} \end{equation} to these data.
$f_1$ and $f_2$ were represented using thin plate regression splines of basis dimension 30 and 10, with additional shrinkage so that a large enough smoothing parameter can shrink the function to zero (see Wood, 2006, 4.1.6). PQL diverges until the linear mixed model estimator fails, while POI simply fails to converge. Again, direct smoothness selection using general purpose optimizers and finite difference derivatives fails (substantially) to locate the AIC optimal model. This latter failure occurs whether or not extra help in the form of exact first derivatives is supplied. It should be noted that the impact of concurvity in these examples is different in kind to the well publicized difficulties discussed by Ramsay, Burnett and Krewski (2003) in which concurvity can cause backfitting approaches to GAM estimation (as in Hastie and Tibshirani, 1990) to substantially underestimate estimator variances. For the direct GAM fitting approach, discussed here, the issue is reliably estimating the model in the presence of concurvity driven ill-conditioning. Once the model is estimated the corresponding variance estimates will automatically take into account the effect of the concurvity, so that variance correction approaches of the sort discussed, for example, in Figueiras, Roca-Pardi\~nas and Cadarso-Su\'arez (2005) are not needed (and might actually be counterproductive). In other words, if the computational difficulties caused by concurvity can be solved for the direct fitting approach, then we avoid the major part of concurvity driven variance bias `for free'. For applied use of GAMs these convergence failures are obviously problematic and the aim of this paper is to develop methods that eliminate them to the maximum extent possible. General fitting methods for GAMs should simply work (much as we expect of GLM fitting routines), without the need for tuning, otherwise the fitting methods get in the way of practical modelling.
So the objective here is to produce the most reliable method possible for penalized likelihood based GAM estimation with AIC or GCV type smoothness selection. This suggests: \begin{enumerate} \item The method must be `direct' in the sense already defined, so that fixed optima of the smoothness selection criteria exist. \item For maximal reliability, optimization of the criteria should be based on a full Newton method, utilizing exact first and second derivatives, not approximations. \item The method must be able to deal with any linear degeneracy in the model, such as that caused by severe concurvity, or by the heavy smoothing that is appropriate in many practical modelling situations. \end{enumerate} In addition the method must meet the practical consideration of being computationally efficient, and, given the non-linearities imposed by the direct approach, point 2 presents the main challenge in this regard. None of the foregoing aims are very difficult to achieve if the method is allowed an operation count that is cubic in the number of data, but such an expensive scheme would be of no practical interest. Faced with the task of producing such a method it might be tempting to abandon penalized likelihood in favour of a fully Bayesian MCMC approach (e.g. Fahrmeir and Lang, 2001; Fahrmeir, Kneib and Lang, 2004 or Brezger and Lang, 2006), but this is not always the answer, and can make problems harder to detect. Firstly, convergence problems can often become mixing problems. For example, using the data in figure \ref{mack.bin.data}, MCMC simulation with the mackerel model (\ref{mack.logistic}) gives markedly reduced acceptance rates (minimum down to 10\% from 75\%), increased between chain variability and appears to require increased burn in, relative to similar models for data simulated without serious concurvity problems.
Computations were performed using the state of the art BayesX package (Brezger, Kneib and Lang, 2007), so the model representation was slightly different to that used for the other computations in that the Bayesian P-splines of Lang and Brezger (2004) were used to represent the smooth components. The second issue with MCMC methods is that computational feasibility requires that the prior covariance matrix (smoothing penalty matrix) is sparse (this is the consideration that drives the choice of P-splines in BayesX). Many practically useful smoothers do not have this property (e.g. thin plate splines, as used in model (\ref{conc.model})). In the absence of sparsity, the computational cost is of the order of the cube of the largest (non-sparse) smoothing basis dimension, multiplied by the chain length. For smooths with a simple prior covariance matrix structure, in principle this cost can be reduced to the {\em square} of the largest basis dimension, multiplied by the number of steps of the chain actually used for posterior inference. Some form of Demmler-Reinsch orthogonalization (see e.g. Wood, 2006, 4.10.4) is what is needed to achieve this. However, such orthogonalization has yet to be tried in practice, may lead to more difficulties in setting the hyper-prior on smoothing parameters, and cannot be done at all for the kind of penalty required in order to ensure scale invariance in smooth interaction terms (e.g. Wood, 2006, 4.1.8 or Wahba 1990, Chapter 10). Situations in which quasi-likelihood is appropriate are also awkward to handle in an MCMC context. On the other hand the Bayesian MCMC approach improves what can be done with non smooth random effects, and is usually the best option when large numbers of such random effects are required. The remainder of this paper is structured as follows. Section 2 reviews essential background and discusses smoothness selection criteria.
Section 3 proposes a method for efficient and stable optimization of such criteria, and hence for GAM fitting. Section 4 illustrates the comparative performance of the new method using simulated and real data, including a GAMM example. \section{GAM estimation and selection \label{section.gam.fit}} Generalized additive models (Hastie and Tibshirani, 1986) are generalized linear models (Nelder and Wedderburn, 1972) in which the linear predictor is partly composed of a sum of smooth functions of some or all of the covariates. Hence the basic model structure is \begin{equation} g\{E(y_i)\} = {\bf X}_i^*{\bm \theta} + \sum_j f_j(x_j) \label{a.gam} \end{equation} where the $y_i$ are observations on independent random variables from some exponential family distribution, or failing that, have a mean variance relationship appropriate for use of a quasi-likelihood approach to inference. $g$ is a smooth monotonic `link' function. ${\bf X}_i^*$ is the $i^{\rm th}$ row of the model matrix for any strictly parametric model components, and $\bm \theta$ is the corresponding parameter vector. The $f_j$ are smooth functions of covariates $x_j$, which may be vector covariates. The $f_j$ are subject to identifiability constraints, typically that $\sum_i f_j(x_{ji})=0~\forall~j$. Sometimes each smooth function may also be multiplied by some covariate, yielding a `variable coefficient' model (Hastie and Tibshirani, 1993): the extension is trivial to handle in practice (as recognized by Eilers and Marx, 2002; it has also been available in R package {\tt mgcv} since early 2002). The model can further be extended to include extra random effect terms to arrive at the generalized additive mixed model (GAMM, e.g.\ Lin and Zhang, 1999). The link between penalized regression and mixed modelling that lets GAMs be estimated as GLMMs also means that GAMMs can be estimated by the methods discussed here (see section \ref{gamm.sim}).
The first step in GAM estimation is to represent the smooth terms in (\ref{a.gam}) using bases with associated penalties (see, e.g., Marx and Eilers, 1998; Wood, 2006). Each smooth term is represented as $$ f_j(x_j) = \sum_{k=1}^{K_j} \beta_{jk} b_{jk}(x_j) $$ where the $b_{jk}(x_j)$ are known basis functions, chosen to have convenient properties, while the $\beta_{jk}$ are unknown coefficients, to be estimated. Associated with each smooth function are one or more measures of function `wiggliness' ${\bm \beta}_j\ts \tilde {\bf S}_j {\bm \beta}_j$, where $\tilde {\bf S}_j$ is a matrix of known coefficients. Typically the wiggliness measure evaluates something like the univariate spline penalty $\int f_j^{\prime\prime}(x)^2dx$ or its thin-plate spline generalization, but it may also be more complex, such as a tensor product smooth penalty with multiple ${\bm \beta}_j\ts \tilde {\bf S}_j {\bm \beta}_j$ terms (e.g.\ Wood, 2006, section 4.1.8). Intermediate rank basis-penalty smoothers of this sort go back at least as far as Wahba (1980) and Parker and Rice (1985). Hastie and Tibshirani (1990, section 9.3.6) discussed using them for GAMs and O'Sullivan (1986) demonstrated their use in a wide variety of problems. Given bases for each smooth term, the GAM, (\ref{a.gam}), can be re-written as a GLM, $g\{E(y_i)\} = {\bf X}_i {\bm \beta}$, where ${\bf X}$ includes the columns of ${\bf X}^*$ and columns representing the basis functions evaluated at the covariate values, while ${\bm \beta}$ contains ${\bm \theta}$ and all the smooth coefficient vectors, ${\bm \beta}_j$.
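As a small numerical illustration of this representation (using a hypothetical truncated-power cubic basis, not the penalized regression splines used elsewhere in the paper), the following sketch shows explicitly how $\tilde {\bf S}_j$ is padded with zeros to give ${\bf S}_j$:

```python
import numpy as np

def tp_basis(x, knots):
    """Truncated-power cubic basis: 1, x, x^2, x^3, (x - kappa_j)^3_+ ."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - kap, 0.0, None) ** 3 for kap in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 100)
knots = np.linspace(0.1, 0.9, 8)
Xj = tp_basis(x, knots)                  # basis functions b_jk evaluated at the data

# Wiggliness penalty: a simple ridge on the truncated-power coefficients only,
# so S~_j is an identity block, padded with zeros for the polynomial part.
St = np.eye(len(knots))                  # S~_j (8 x 8)
S = np.zeros((Xj.shape[1], Xj.shape[1]))
S[4:, 4:] = St                           # S_j: S~_j padded with zeros (12 x 12)
```

With this padding, ${\bm \beta}\ts {\bf S}_j {\bm \beta}$ penalizes only the truncated-power coefficients, leaving the cubic polynomial part unpenalized.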
The fit of this GLM is most conveniently measured using the deviance: $$ D({\bm \beta}) = 2 \{l_{\rm max} - l({\bm \beta})\}\phi $$ where $l$ is the log-likelihood, or log-quasi-likelihood of the model, and $l_{\rm max}$ is the maximum possible value for $l$ given the observed data, which is obtained by considering the MLE of a model with one parameter per datum (under which the model's predicted $E(y_i)$ is simply $y_i$). $\phi$ is a scale parameter, and the definition of $D$ means that it can be calculated without knowledge of $\phi$. Maximizing the (quasi-) likelihood is equivalent to minimizing the deviance, and in several ways the deviance behaves rather like the residual sum of squares in linear modelling (see McCullagh and Nelder, 1989 for further details). If the bases used for the smooth functions, $f_j$, are large enough to be reasonably sure of avoiding mis-specification, then the model will almost certainly overfit if it is estimated by minimizing the deviance. For this reason GAMs are estimated by minimizing $$ D({\bm \beta}) + \sum_j \lambda_j {{\bm \beta} \ts}{\bf S}_j {\bm \beta} $$ where the $\lambda_j$ are smoothing parameters and the ${\bf S}_j$ are the $\tilde {\bf S}_j$ suitably padded with zeroes so that ${\bm \beta} \ts {\bf S}_j{\bm \beta} = {\bm \beta}_j\ts \tilde {\bf S}_j {\bm \beta}_j$. For later notational convenience, define ${\bf S} = \sum_j \lambda_j {\bf S}_j$. The $\lambda_j$ control the smoothness of the component smooth functions. Smoothness selection is about choosing values for the $\lambda_j$. Given values for the $\lambda_j$, the penalized deviance can be minimized by penalized iteratively re-weighted least squares (P-IRLS, see e.g. Wood, 2006, for one derivation, and Green and Silverman, 1994, for more information on penalized likelihood and GLMs). Let $V(\mu)$ be the function such that ${\rm var}(y_i) = V(\mu_i) \phi $.
$V$ can be written down for all exponential family distributions, and is always available if using quasi-likelihood. Let $\omega_i$ denote any prior weights on particular data points (used to weight the component of deviance attributable to each datum). Then iterate the following steps to convergence. \begin{enumerate} \item Using the current $\mu_i$ estimate, evaluate the weights, $w_i = \omega_i^{1/2}V(\mu_i)^{-1/2}/g^{\prime}(\mu_i)$, and the pseudodata, $z_i = g^\prime(\mu_i)(y_i - \mu_i) + \eta_i$, where $\eta_i = g(\mu_i)$ (the `linear predictor'). \item Let $\bf W$ be the diagonal matrix of $w_i$ values. Minimize the penalized least squares objective \begin{equation} \| {\bf W}({\bf z} - {\bf X}{\bm \beta}) \|^2 + \sum_j \lambda_j {{\bm \beta} \ts}{\bf S}_j {\bm \beta} \label{work.obj} \end{equation} w.r.t. ${\bm \beta}$ to find the next estimate of ${\bm \beta}$, and hence of ${\bm \eta} = {\bf X}{\bm \beta}$ and $\mu_i = g^{-1}(\eta_i)$. \end{enumerate} The iteration can be initialized by setting $\hat \mu_i = y_i$ (with adjustment to avoid infinite $\hat \eta_i$). Divergence is rare, but can be dealt with by step halving (provided an MLE exists). At convergence the parameter estimates, $\hat {\bm \beta}$, minimize the penalized deviance. Note that this direct fitting approach makes it straightforward to directly estimate coefficient variances (see e.g. Wood, 2006, section 4.8) thereby completely sidestepping the well publicized problem of concurvity driven variance underestimation, that can affect backfitting methods of GAM fitting (see Ramsay, Burnett and Krewski, 2003, for example). \subsection{Smoothness selection} Performance oriented iteration (POI) uses GCV or Mallows' $C_p$ (Mallows, 1973) applied to each fitting problem (\ref{work.obj}) in order to select smoothing parameters. 
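As a concrete (if minimal) illustration, the P-IRLS scheme above can be coded in a few lines for the Bernoulli/logit case with a simple ridge-type penalty and a fixed smoothing parameter $\lambda$ (prior weights $\omega_i = 1$). This is a sketch, not the implementation used for the computations in this paper:

```python
import numpy as np

def pirls_logistic(X, y, S, lam, n_iter=50):
    """P-IRLS for a Bernoulli model with logit link: minimizes the penalized
    deviance D(beta) + lam * beta' S beta.  A fixed iteration count is used
    for brevity; production code would monitor the penalized deviance."""
    mu = np.clip((y + 0.5) / 2.0, 0.01, 0.99)   # standard starting values
    eta = np.log(mu / (1.0 - mu))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        V = mu * (1.0 - mu)                     # variance function V(mu)
        w = np.sqrt(V)                          # w_i = V(mu)^{-1/2}/g'(mu), g'(mu)=1/V
        z = (y - mu) / V + eta                  # pseudodata z_i = g'(mu)(y-mu) + eta
        WX = w[:, None] * X
        # Working penalized least squares: (X'W^2 X + lam S) beta = X'W^2 z
        beta = np.linalg.solve(WX.T @ WX + lam * S, WX.T @ (w * z))
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
    return beta

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 1.2 * X[:, 1])))
y = (rng.random(n) < p_true).astype(float)
beta = pirls_logistic(X, y, np.eye(2), lam=0.1)
```

At convergence the penalized score ${\bf X}\ts ({\bf y} - \hat {\bm \mu}) - \lambda {\bf S} \hat {\bm \beta}$ vanishes, confirming that the iteration has minimized the penalized deviance.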
POI often converges to fixed $\hat {\bm \beta}, \hat {\bm \lambda}$, but it can also diverge or cycle, with failure being particularly frequent for binary data (see section \ref{examples.section} or the Introduction). Mixed model alternatives such as PQL are no better. An alternative, which avoids this fundamental convergence issue, is to base smoothness selection on a criterion applied to the GAM itself and evaluated at convergence of the P-IRLS. If $\tau$ denotes the effective degrees of freedom of the penalized fit, then one could seek to minimize the generalized AIC: $$ {\cal V}_a({\bm \lambda}) = D(\hat {\bm \beta}) + 2 \gamma \tau $$ in the case where $\phi$ is known, or the generalized GCV score $$ {\cal V}_g({\bm \lambda}) = nD(\hat {\bm \beta})/(n - \gamma \tau)^2 $$ otherwise (see Hastie and Tibshirani, 1990, Section 6.9 or Wood, 2006, section 4.5). $\tau= \tr{{\bf A}}$ where ${\bf A} = {\bf WX}({\bf X}\ts{\bf W}^2{\bf X} + {\bf S})^{-1} {\bf X} \ts{\bf W}$ is the `influence matrix' of the fitted model (${\bf W}$ is evaluated at convergence). $\gamma$ is an ad hoc tuning parameter, sometimes increased from its usual value of 1 in order to obtain smoother models than would otherwise be selected ($\gamma$ can itself be chosen automatically by, e.g., 10-fold cross validation, but this will not be pursued further here). Another alternative, proposed by Xiang and Wahba (1996) and Gu and Xiang (2001), is Generalized Approximate Cross Validation, GACV (see also Gu, 2002, section 5.2.2, for a clear exposition). It was initially derived for the situation in which only the canonical link function is used, but the restriction can be relaxed (in the process providing one possible justification for ${\cal V}_a$ and ${\cal V}_g$). Some modification of Gu and Xiang's (2001) approach is the key, as outlined next.
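A useful preliminary fact is the exact leave-one-out identity for the converged working penalized least squares fit, $\hat \eta_i - \hat \eta_i^{[-i]} = (z_i - \hat \eta_i)A_{ii}/(1 - A_{ii})$, which underlies the GACV construction. It is easy to verify numerically; in the sketch below, synthetic weights and pseudodata stand in for the converged P-IRLS quantities:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 30, 5
X = rng.normal(size=(n, q))
w = rng.uniform(0.5, 2.0, size=n)        # working weights (simulated)
z = rng.normal(size=n)                    # pseudodata (simulated)
S = np.eye(q)                             # total penalty, smoothing params absorbed

WX = w[:, None] * X
G = WX.T @ WX + S                         # X'W^2 X + S
beta = np.linalg.solve(G, WX.T @ (w * z)) # working penalized LS estimate
eta = X @ beta                            # linear predictor
A = WX @ np.linalg.solve(G, WX.T)         # influence matrix WX G^{-1} X'W

def eta_drop(i):
    """Refit the working penalized least squares problem without datum i."""
    m = np.arange(n) != i
    WXm = WX[m]
    bm = np.linalg.solve(WXm.T @ WXm + S, WXm.T @ (w[m] * z[m]))
    return X[i] @ bm
```

Because the penalty is fixed, the usual Sherman--Morrison argument for leave-one-out in ridge-type regression applies unchanged, so the identity holds exactly (to floating point accuracy), not just approximately.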
The basic idea is that the Kullback-Leibler distance depends on the model only through the `predictive deviance' of the model, which can be estimated by some version of leave-one-out cross validation. The resulting leave-one-out cross validation criterion is then replaced with a {\em generalized} cross validation criterion. To this end, first write the model deviance as $D(\hat {\bm \eta}) = \sum D_i(\hat \eta_i)$, where $D_i$ is the contribution to the deviance associated with the $i^{\rm th}$ datum. Now the mean predictive deviance of the model can be estimated by $$ D_{\rm cv} = \sum_{i=1}^n D_i(\hat \eta^{[-i]}_i) $$ where $\hat {\bm \eta}^{[-i]}$ is the linear predictor obtained by estimating the model from all the data except the $i^{\rm th}$ datum. Minimization of $D_{\rm cv}$ is an attractive way of choosing smoothing parameters as it seeks to minimize the KL distance between the estimated model and the truth; however it is an impractically expensive quantity to attempt to minimize directly. To progress, follow Gu and Xiang (2001) and let $\hat {\bm \eta}^{[-i]}$ be the linear predictor which results if $z_i$ is omitted from the working linear fit at the final stage of the P-IRLS. This can be shown to imply that $ \hat \eta_i - \hat \eta^{[-i]}_i = (z_i - \hat \eta_i) {A_{ii}}/({1-A_{ii}}). $ But $z_i - \hat \eta_i = g^\prime(\hat \mu_i) (y_i - \hat \mu_i)$, so $ \hat \eta_i - \hat \eta^{[-i]}_i = g^\prime(\hat \mu_i) (y_i - \hat \mu_i) {A_{ii}}/({1-A_{ii}}). $ Now take a first order Taylor expansion $$ D_i(\hat \eta^{[-i]}_i) \simeq D_i(\hat \eta_i) + \pdif{D_i}{\hat \eta_i} (\hat \eta^{[-i]}_i - \hat \eta_i) = D_i(\hat \eta_i) - \pdif{D_i}{\hat \eta_i} \frac{A_{ii}}{1-A_{ii}}g^\prime(\hat \mu_i) (y_i - \hat \mu_i). 
$$ Noting that $$ \pdif{D_i}{\hat \eta_i} = - 2 \omega_i \frac{y_i - \hat \mu_i}{V(\hat \mu_i) g^{\prime}(\hat \mu_i)}, {\rm ~~ we ~have~~} D_i(\hat \eta^{[-i]}_i) \simeq D_i(\hat \eta_i) + 2 \frac{A_{ii}}{1-A_{ii}} \omega_i \frac{(y_i - \hat \mu_i)^2}{V(\hat \mu_i)}. $$ Using the same approximation employed in the derivation of GCV, the individual $A_{ii}$ terms are replaced by their average, $\tr{{\bf A}}/n$, to yield, $$ D_i(\hat \eta^{[-i]}_i) \simeq D_i(\hat \eta_i) + 2 \frac{\tr{{\bf A}}}{n-\tr{{\bf A}}} \omega_i \frac{(y_i - \hat \mu_i)^2}{V(\hat \mu_i)}, $$ and averaging over the data gives a GACV score $$ {\cal V}_g^* = D(\hat {\bm \eta})/n + \frac{2}{n} \frac{\tr{{\bf A}}}{n-\tr{{\bf A}}} \sum_{i=1}^n \omega_i \frac{(y_i - \hat \mu_i)^2}{V(\hat \mu_i)} = D(\hat {\bm \eta})/n + \frac{2}{n} \frac{\tau}{n-\tau} P(\hat {\bm \eta}). $$ where $P = \sum_i \omega_i(y_i - \hat \mu_i)^2/V(\hat \mu_i)$ is a `Pearson statistic'. (The final term on the RHS might also be multiplied by $\gamma \ge 1 $ of course.) Although the basic motivation and approach comes directly from the cited references, the need to accommodate non-canonical links means that the final score differs a little from Gu and Xiang (2001) in the terms in the final summation. Notice how ${\cal V}^*_g$ is just a linear transformation of (generalized) AIC, with $\hat \phi = P(\hat {\bm \eta}) /\{n-\tr{{\bf A}}\}$ in place of the MLE of $\phi$, and $\tr{\bf A} $ as the model degrees of freedom: so if $\phi$ is known we might as well use ${\cal V}_a$. Viewed from this perspective there is also no reason not to use $D(\hat {\bm \eta}) /\{n-\tr{{\bf A}}\}$ for $\hat \phi$, in which case, for $\tr{{\bf A}} \ll n $, the resulting criterion would be approximately ${\cal V}_g$. Of course these connections are unsurprising: see Stone (1977). The next section discusses how best to optimize these criteria with respect to the smoothing parameters. 
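Before turning to those details, direct smoothness selection is easy to illustrate in the Gaussian identity-link case, where the deviance reduces to the residual sum of squares. The basis, penalty and data below are hypothetical stand-ins, and a coarse grid over $\rho = \log \lambda$ replaces the Newton iteration developed in the next section:

```python
import numpy as np

def gcv_score(X, y, S, rho, gamma=1.0):
    """V_g(rho) = n*D / (n - gamma*tau)^2 for a Gaussian working model,
    where D is the residual sum of squares and tau = tr(A)."""
    n = len(y)
    G = X.T @ X + np.exp(rho) * S
    beta = np.linalg.solve(G, X.T @ y)
    tau = np.trace(X @ np.linalg.solve(G, X.T))   # effective degrees of freedom
    D = np.sum((y - X @ beta) ** 2)
    return n * D / (n - gamma * tau) ** 2

# Hypothetical smoothing problem: truncated-power cubic basis with a ridge
# penalty on the truncated terms only.
rng = np.random.default_rng(1)
n = 100
x = np.linspace(0.0, 1.0, n)
knots = np.linspace(0.1, 0.9, 8)
X = np.column_stack([np.ones(n), x, x ** 2, x ** 3]
                    + [np.clip(x - kap, 0.0, None) ** 3 for kap in knots])
S = np.zeros((12, 12))
S[4:, 4:] = np.eye(8)
y = np.sin(4 * np.pi * x) + 0.3 * rng.normal(size=n)

# Direct selection: minimize V_g over rho = log(lambda).
grid = np.linspace(-10.0, 15.0, 251)
scores = np.array([gcv_score(X, y, S, r) for r in grid])
rho_hat = grid[np.argmin(scores)]
```

Heavy smoothing (large $\rho$) leaves only the unpenalized cubic polynomial, which fits the oscillating signal badly, so the selected $\rho$ trades fit against the effective degrees of freedom $\tau$ exactly as ${\cal V}_g$ prescribes.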
\section{Stable fitting and $\cal V$ optimization \label{fit.details}} Optimization of the ${\cal V}$ type criteria is basically hierarchical. The criteria are optimized with respect to the smoothing parameters, with any set of smoothing parameters implying a particular set of coefficient estimates $\hat {\bm \beta}$, which are found by an `inner' P-IRLS iteration. The dependence of ${\cal V}_g$, ${\cal V}_g^*$ and ${\cal V}_a$ on the smoothing parameters is via $D(\hat {\bm \beta}) $, $\tau$ and possibly $P(\hat {\bm \beta})$, so that the key to successful $\cal V$ optimization is to obtain first and second derivatives of $D(\hat {\bm \beta}) $, $\tau$ and $P(\hat {\bm \beta})$ with respect to the log smoothing parameters, $\rho_j = \log(\lambda_j)$, in an efficient and stable way (logs are used to simplify optimization, since the $\lambda_j$ must be positive). Given these derivatives, the derivatives of the criteria themselves are easily obtained, and they can be minimized by modified Newton, or quasi-Newton methods to select smoothing parameters (e.g. Gill et al., 1981, Dennis and Schnabel, 1983). The required derivatives in turn depend on the derivatives of $\hat {\bm \beta}$ with respect to $\bm \rho$. Conceptually, these can be obtained by differentiating the P-IRLS scheme, and updating derivatives alongside the parameters as the P-IRLS progresses. However, while this is the way to obtain expressions for the derivatives, it is a poor way to arrange the computations (a prototype {\em first} derivative based scheme of this sort is proposed in Wood, 2006, but is built on the method of Wood, 2004, making it both inefficient and difficult to extend to a second derivative scheme). Instead, for fixed $\bm \rho$, \begin{enumerate} \item Iterate the P-IRLS scheme to convergence of the $\hat {\bm \beta}$, ignoring derivatives and using the fastest available stable method for solving each P-IRLS problem. 
\item At convergence, with the weights, pseudodata and hence all matrix decompositions now fixed, iterate the expressions for the derivatives of the coefficient, $\hat {\bm \beta}$, with respect to the log smoothing parameters ${\bm \rho}$ to convergence. \item Evaluate the derivatives of $\tau = \tr{{\bf A}}$ with respect to $\bm \rho$. \end{enumerate} Relative to accumulating derivatives alongside the P-IRLS, the method has a number of advantages. Firstly, the basic matrix decompositions and some other component matrix expressions stay fixed throughout the derivative iteration, reducing the cost of using the most stable decompositions for the derivative calculations, and avoiding re-calculation at each step of the P-IRLS. In addition, fewer steps are typically required for the derivative iteration than for the P-IRLS itself, thereby saving the cost of several derivative system updates, relative to parallel accumulation of derivatives and estimates. Purely practically, the derivative update becomes a separate `add-on' to the P-IRLS iteration, which simplifies implementation. The following subsections explain how the method works in more detail. \subsection{Iterating for derivatives of $\hat {\bm \beta}$ \label{section.pirls}} At any step of the P-IRLS, let ${\bf B} = ({\bf X}\ts{\bf W}^2{\bf X} + {\bf S})^{-1} {\bf X} \ts{\bf W}$ and ${\bf z}^\prime = {\bf Wz}$ so that $\hat {\bm \beta} = {\bf Bz}^\prime$. By differentiating the P-IRLS presented in section \ref{section.gam.fit}, the following update algorithm results. \noindent {\bf Initialization:} ${\bf z}$ is fixed at its converged value from the P-IRLS, and its derivatives w.r.t. $\bm \rho$ are initially set to zero. 
The corresponding initial derivatives of $\hat {\bm \beta}$ are then given by $$ \pdif{\hat {\bm \beta}}{\rho_k} = \pdif{{\bf B}}{\rho_k} {\bf z}^\prime {\rm~~~~and~~~~} \pddif{\hat {\bm \beta}}{\rho_k}{\rho_m} = \pddif{\bf B}{\rho_k}{\rho_m} {\bf z}^\prime $$ where the derivatives of $\bf B$ are evaluated with all the ${\bf T}_k$, ${\bf T}_m$ and ${\bf T}_{km}$ terms defined in section \ref{B.deriv} set to zero. \noindent {\bf Iteration:} The following steps are iterated to convergence (for all $k,m$, such that $k \ge m$). \begin{enumerate} \item Update $$ \pdif{z_i^\prime}{\rho_k} {\rm ~~and~~} \pddif{z_i^\prime}{\rho_k}{\rho_m} $$ using the current derivatives of $\hat {\bm \beta}$, as described in appendix A. \item Using the ${\bf z}^\prime$ derivative from step 1, the derivatives of $\hat {\bm \beta} $ are updated as follows, $$ \pdif{\hat {\bm \beta}}{\rho_k} = \pdif{{\bf B}}{\rho_k} {\bf z}^\prime + {\bf B} \pdif{{\bf z}^\prime}{\rho_k} {\rm ~~and~~} \pddif{\hat {\bm \beta}}{\rho_k}{\rho_m} = \pddif{\bf B}{\rho_k}{\rho_m} {\bf z}^\prime + \pdif{\bf B}{\rho_k}\pdif{{\bf z}^\prime}{\rho_m} + \pdif{\bf B}{\rho_m}\pdif{{\bf z}^\prime}{\rho_k} + {\bf B} \pddif{{\bf z}^\prime}{\rho_k}{\rho_m}. $$ Note that while $\bf B$ is fixed, its derivatives will change as the iteration progresses. \end{enumerate} Convergence of the iteration would usually be judged by examining convergence of the first and second derivatives of the deviance with respect to ${\bm \rho}$. Calculation of these is routine, given the derivatives of $\hat {\bm \beta}$: the expressions are provided in Appendix B, and are also needed for obtaining the derivatives of the smoothness selection criteria themselves. \subsection{Computing with the derivatives of $\bf B$ \label{B.deriv}} The $\hat {\bm \beta}$ derivative update scheme involves derivatives of $\bf B$, which need to be spelled out.
Initially expressions for the derivatives will be obtained in terms of $\bf B$, ${\bf A} = {\bf WXB}$, ${\bf G} = {\bf X}\ts{\bf W}^2{\bf X} + {\bf S}$ and the diagonal matrices: $$ {\bf T}_k = {\rm diag}\left (\pdif{w_i}{\rho_k} \frac{1}{w_i} \right ) {\rm ~~~~and~~~~} {\bf T}_{km} = {\rm diag} \left ( \pddif{w_i}{\rho_k}{\rho_m} \frac{1}{w_i} - \pdif{w_i}{\rho_k} \pdif{w_i}{\rho_m} \frac{1}{w_i^2} \right ) $$ (see appendix A for the derivatives of $w_i$). Noting that $ \ilpdif{{\bf G}^{-1}}{\rho_k} = - 2 {\bf B}{\bf T}_k{\bf B}\ts - e^{\rho_k}{\bf G}^{-1}{\bf S}_k {\bf G}^{-1}, $ the first derivative of ${\bf B}$ is $$ \pdif{{\bf B}}{\rho_k} = -2 {\bf B}{\bf T}_k{\bf A} - e^{\rho_k}{\bf G}^{-1}{\bf S}_k{\bf B} + {\bf B}{\bf T}_k, $$ while \begin{multline*} \pddif{{\bf B}}{\rho_k}{\rho_m} = -2 \pdif{{\bf B}}{\rho_m} {\bf T}_k {\bf A} - 2 {\bf B}{\bf T}_{km}{\bf A} - 2 {\bf B}{\bf T}_k \pdif{{\bf A}}{\rho_m} + \pdif{{\bf B}}{\rho_m}{\bf T}_k + {\bf B} {\bf T}_{km} \\- e^{\rho_k} \left (\pdif{{\bf G}^{-1}}{\rho_m} {\bf S}_k {\bf B} + {\bf G}^{-1} {\bf S}_k \pdif{{\bf B}}{\rho_m} \right ) - \delta_m^k e^{\rho_k} {\bf G}^{-1}{\bf S}_k{\bf B} \end{multline*} (where $\delta_m^k=1$ if $m=k$ and 0 otherwise). Direct naive use of these expressions would be both computationally expensive (just forming $\bf A$ has $O(n^2q)$ cost) and potentially unstable, since it would not address the ill-conditioning problems that can be a feature of GAMs, especially when there is concurvity present. It is therefore necessary to develop a way of computing with these derivatives which is both maximally stable and keeps computational costs down to a small multiple of $O(nq^2)$, the leading order cost of the P-IRLS. There are a number of more or less equivalent starting points for such a method, of which the following two are the most straightforward. \begin{enumerate} \item Find the Choleski factor, $\bf L$, such that $$ {\bf L}\ts{\bf L} = {\bf X}\ts {\bf W}^2{\bf X} + {\bf S}.
$$ This should actually be performed with pivoting (available in LINPACK, Dongarra et al. 1978), in case of co-linearity problems, and the pivoting applied explicitly to the columns of ${\bf X}$. The rank, $r$, of the pivoted Choleski factor $\bf L$ can then be estimated, by finding the largest upper left sub-matrix of $\bf L$ with acceptably low condition number (e.g. Golub and van Loan, 1996, section 5.5.7). $\bf L$ is upper triangular, and efficient and reliable estimation of the condition number of triangular matrices is fairly straightforward following Cline et al. (1979). If rank deficiency is detected then all but the $r$ upper left rows and columns of $\bf L$ are dropped along with the corresponding columns of $\bf X$ (for the current iteration only, of course). Now define $q \times r$ matrix $\bf P$, the first $r$ rows of which are given by $ {\bf L}^{-1} $ with the remaining rows being zero. Also define $n \times r$ matrix ${\bf K} = {\bf WXL}^{-1}$. \item A somewhat more stable approach first finds a square root $\bf E$ of $\bf S$, so that ${\bf E}\ts{\bf E} = {\bf S} $. A pivoted Choleski decomposition or (symmetric) eigen-decomposition can be used to do this. Next form the QR decomposition $$ \bmat{c} {\bf WX} \\ {\bf E} \end{array}\right ) = {\bf QR}, $$ where ${\bf Q}$ is a matrix with orthogonal (strictly orthonormal) columns and $\bf R$ is upper triangular. Again this should be performed with pivoting (Golub and van Loan, 1996), which must subsequently be applied to the columns of ${\bf X}$. An LAPACK routine was used for the decomposition (Anderson et al. 1999). Rank deficiency can be dealt with in exactly the same way as it was for ${\bf L}$. Again the redundant columns of $\bf Q$ and rows and columns of $\bf R$ are dropped. 
Now let $n \times r$ matrix $\bf K$ be the first $n$ rows of $\bf Q$ so that ${\bf WX} = {\bf KR}$, and define $q \times r$ matrix ${\bf P}$ as the matrix with first $r$ rows given by ${\bf R}^{-1}$ and remainder packed with zeroes. \end{enumerate} ${\bf K}$ is formed explicitly, while ${\bf P}$ can either be formed explicitly or applied implicitly via its definition. ${\bf K}$ and $\bf P$ are all that are required from the decompositions for subsequent computation. Although the values taken by these two matrices depend on the method of calculation, they are used in the same way irrespective of origin. Of the two calculation methods, the second, QR based, method is usually to be preferred over the first, since it avoids the exacerbation of any numerical ill-conditioning that accompanies explicit formation of ${\bf X}\ts {\bf W}^2{\bf X}$, and instead is based on a stable orthogonal decomposition. The Choleski based method is faster, by about a factor of 2, but the irreducible costs of the second derivative calculations, which are the same for both methods, substantially dilute this advantage in practice. A singular value decomposition (see Golub and van Loan, 1996 or Watkins, 1991) based method is also possible, but is more costly and is not pursued here. Note that for the most part the pivoting used in either method does not affect the subsequent algorithm: it is simply that quantities such as the estimated coefficient vector and its covariance matrix must have the pivoting reversed at the end of the method. The only exception is the one already mentioned, that the pivoting will have to be applied to the columns of ${\bf X}$ before it can be used as part of the derivative updating iteration.
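To make the construction concrete, the following is a minimal {\tt numpy} sketch of the QR based route (method 2), in the full rank case and without pivoting; the dimensions and matrices are illustrative only, not taken from any real model. It verifies numerically the identities ${\bf G}^{-1} = {\bf PP}\ts$, ${\bf B} = {\bf PK}\ts$ and ${\bf A} = {\bf KK}\ts$ on which the subsequent computations rest.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 50, 6                             # illustrative dimensions
X = rng.standard_normal((n, q))          # model matrix
w = rng.uniform(0.5, 2.0, size=n)        # converged IRLS weights
W = np.diag(w)
E = 0.1 * rng.standard_normal((q, q))    # square root of the total penalty, S = E'E
S = E.T @ E

# QR decomposition of the augmented matrix [WX; E] (full rank, so no pivoting
# or rank truncation is needed here)
Q, R = np.linalg.qr(np.vstack([W @ X, E]))
K = Q[:n, :]                             # first n rows of Q, so that WX = KR
P = np.linalg.inv(R)                     # r = q here, so no zero padding is needed

# check the identities G^{-1} = PP', B = PK' and A = KK'
G = X.T @ W @ W @ X + S
B = np.linalg.solve(G, X.T @ W)
assert np.allclose(np.linalg.inv(G), P @ P.T)
assert np.allclose(B, P @ K.T)
assert np.allclose(W @ X @ B, K @ K.T)
```

The identities follow because ${\bf R}\ts{\bf R} = {\bf X}\ts{\bf W}^2{\bf X} + {\bf E}\ts{\bf E} = {\bf G}$, by orthogonality of the columns of $\bf Q$.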
It is now straightforward to show that ${\bf G}^{-1} = {\bf P}\p\ts$ (strictly a sort of pseudo-inverse if the problem is rank deficient), ${\bf B} = {\bf P}{\bf K}\ts$ and ${\bf A} = {\bf K}\K\ts$, and some work establishes that $$ \pdif{{\bf B}}{\rho_k} = -2 {\bf P}{\bf K}\ts{\bf T}_k{\bf KK}\ts - e^{\rho_k}{\bf PP}\ts{\bf S}_k{\bf P}{\bf K}\ts + {\bf P}{\bf K}\ts{\bf T}_k $$ and \begin{multline*} \pddif{{\bf B}}{\rho_k}{\rho_m} = 4 {\bf P}{\bf K}\ts{\bf T}_m{\bf KK}\ts{\bf T}_k{\bf KK}\ts + 2 e^{\rho_m}{\bf PP}\ts{\bf S}_m {\bf P}{\bf K}\ts{\bf T}_k{\bf KK}\ts- 4 {\bf P}{\bf K}\ts{\bf T}_m{\bf T}_k{\bf KK}\ts\\ - 2{\bf P}{\bf K}\ts{\bf T}_{km}{\bf KK}\ts + 4 {\bf P}{\bf K}\ts{\bf T}_k{\bf KK}\ts{\bf T}_m{\bf KK}\ts + 2 e^{\rho_m}{\bf P}{\bf K}\ts{\bf T}_k{\bf KP}\ts {\bf S}_m{\bf P}{\bf K}\ts \\ - 2{\bf P}{\bf K}\ts{\bf T}_k{\bf KK}\ts{\bf T}_m - 2 {\bf P}{\bf K}\ts{\bf T}_m{\bf KK}\ts{\bf T}_k - e^{\rho_m}{\bf PP}\ts{\bf S}_m {\bf P}{\bf K}\ts {\bf T}_k + {\bf P}{\bf K}\ts{\bf T}_m{\bf T}_k\\ + {\bf P}{\bf K}\ts{\bf T}_{km} + 2e^{\rho_k} {\bf P}{\bf K}\ts{\bf T}_m{\bf KP}\ts{\bf S}_k{\bf P}{\bf K}\ts + e^{\rho_k}e^{\rho_m}{\bf PP}\ts{\bf S}_m{\bf PP}\ts{\bf S}_k {\bf P}{\bf K}\ts \\ + 2 e^{\rho_k} {\bf PP}\ts {\bf S}_k {\bf P}{\bf K}\ts{\bf T}_m{\bf KK}\ts + e^{\rho_k} e^{\rho_m}{\bf PP}\ts{\bf S}_k {\bf PP}\ts {\bf S}_m {\bf P}{\bf K}\ts - e^{\rho_k} {\bf PP}\ts {\bf S}_k {\bf P}{\bf K}\ts {\bf T}_m\\ - \delta_k^m e^{\rho_k} {\bf PP}\ts{\bf S}_k {\bf P}{\bf K}\ts. \end{multline*} By inspection of the preceding two equations, it is clear that, given the one-off $O(nq^2)$ start-up cost of forming ${\bf K}$, the multiplication of a vector by either derivative of ${\bf B}$ is $O(nq)$. That is, the leading order cost of computing the smoothing parameter derivatives of $\hat {\bm \beta}$, and hence of the deviance or Pearson statistic, has been kept at $O(nq^2)$.
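The $O(nq)$ claim is easy to check directly: if the products in the expression for $\ilpdif{{\bf B}}{\rho_k}$ are accumulated from right to left, only matrix-vector products are ever formed. The following {\tt numpy} sketch demonstrates this, using randomly generated stand-ins for ${\bf K}$, ${\bf P}$, ${\bf T}_k$ and ${\bf S}_k$ (so the values are illustrative only, not those produced by an actual fit):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 200, 10
K = np.linalg.qr(rng.standard_normal((n, q)))[0]   # stand-in for K (orthonormal columns)
P = np.linalg.inv(np.triu(rng.standard_normal((q, q))) + 5 * np.eye(q))  # stand-in for P
tk = rng.standard_normal(n)                        # diagonal of T_k
Sk = np.eye(q)                                     # stand-in penalty matrix S_k
rho_k = 0.0
v = rng.standard_normal(n)

# dense dB/drho_k, formed explicitly -- O(nq^2) or worse, for checking only
dB = (-2 * P @ K.T @ np.diag(tk) @ K @ K.T
      - np.exp(rho_k) * P @ P.T @ Sk @ P @ K.T
      + P @ K.T @ np.diag(tk))

# the same product dB @ v accumulated right to left: matrix-vector products
# only, so O(nq) once K and P are available
u = (-2 * (P @ (K.T @ (tk * (K @ (K.T @ v)))))
     - np.exp(rho_k) * (P @ (P.T @ (Sk @ (P @ (K.T @ v)))))
     + P @ (K.T @ (tk * v)))
assert np.allclose(dB @ v, u)
```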
\subsection{The derivatives of $\tr{\bf A}$\label{trA.deriv}} Once the derivative iterations detailed in the previous sections are complete, it is necessary to obtain the derivatives of the effective degrees of freedom $\tr{\bf A}$. These are $$ \pdif{\tr{{\bf A}}}{\rho_k} = \tr{ {\bf T}_k {\bf A} - 2{\bf A}{\bf T}_k{\bf A} - e^{\rho_k}{\bf B}\ts {\bf S}_k {\bf B} + {\bf A}{\bf T}_k } $$ and \begin{multline} \pddif{\tr{{\bf A}}}{\rho_k}{\rho_m}=2 \tr{{\bf T}_{km}{\bf A} + 2{\bf T}_k{\bf T}_m{\bf A}} - 4 \tr{{\bf T}_k{\bf A}{\bf T}_m{\bf A}+{\bf T}_m{\bf A}{\bf T}_k{\bf A}} \\- 2 \tr{2{\bf A}{\bf T}_m{\bf T}_k{\bf A}+{\bf A}{\bf T}_{km}{\bf A}} + 8 \tr{{\bf A}{\bf T}_k{\bf A}{\bf T}_m{\bf A}} + 4 \tr{e^{\rho_m}{\bf A}{\bf T}_k{\bf B}\ts{\bf S}_m{\bf B} + e^{\rho_k}{\bf A}{\bf T}_m{\bf B}\ts{\bf S}_k{\bf B}}\\ - \tr{2e^{\rho_m}{\bf T}_k{\bf B}\ts{\bf S}_m{\bf B}+2 e^{\rho_k}{\bf T}_m{\bf B}\ts{\bf S}_k{\bf B} + \delta_k^m e^{\rho_k}{\bf B}\ts{\bf S}_k{\bf B}} + 2 e^{\rho_m}e^{\rho_k} \tr{{\bf B}\ts {\bf S}_m {\bf G}^{-1} {\bf S}_k{\bf B}}. \label{trA.2deriv} \end{multline} Obviously it would be hideously inefficient to evaluate the terms in (\ref{trA.2deriv}) by explicitly evaluating its various component matrices and then evaluating the traces. Rather, efficient calculation of the trace terms rests on: (i) the fact that evaluation of ${\rm diag}({\bf CH)}$, where ${\bf C}$ is $n \times p$, and ${\bf H}$ is $p \times n$, takes $np$ floating point operations, (ii) $\tr{\bf CH} = \tr{\bf HC}$, (iii) careful choice of what to store and in what order to calculate it and (iv) the use of `minimum column' square roots of the penalty matrices ${\bf S}_m$. The actual evaluation uses the matrices $\bf K$ and $\bf P$, defined in section \ref{B.deriv}, and is detailed in appendix C. The leading order cost of the evaluation is $Mnq^2/2$ operations, where $M$ is the number of smoothing parameters. This is a considerable saving over finite differencing for second derivatives. 
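Points (i) and (ii) above are simple, but carry most of the computational saving, so a small {\tt numpy} illustration (with arbitrary matrices) may be helpful:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, 20
C = rng.standard_normal((n, p))
H = rng.standard_normal((p, n))

# diag(CH) element by element: (CH)_{ii} = sum_j C_{ij} H_{ji}, an O(np)
# computation, so the trace never requires the O(n^2 p) product CH to be formed
diag_CH = np.sum(C * H.T, axis=1)
tr_CH = diag_CH.sum()

assert np.allclose(diag_CH, np.diag(C @ H))
assert np.isclose(tr_CH, np.trace(C @ H))
assert np.isclose(tr_CH, np.trace(H @ C))   # tr(CH) = tr(HC)
```

Point (ii) allows each trace term to be bracketed so that the cheaper of the two orderings is the one evaluated.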
Demonstrating that computing with ${\bf K}$ and ${\bf P}$ is as computationally efficient as is possible actually requires that the derivatives of ${\bf B}$ and $\tr{{\bf A}}$ be written out in terms of the original matrix decomposition components, and the most efficient computation of each term then be considered. The minimum set of quantities required for the whole calculation is then assembled, at which point it becomes clear that maximum efficiency can be obtained by computing with ${\bf K}$ and ${\bf P}$. This process is exceedingly tedious, and is omitted here. \subsection{Optimizing AIC, GCV or GACV criteria} Given the derivatives of $\tau$, $D$ and $P$, the derivatives of ${\cal V}_g$, ${\cal V}_g^*$ or ${\cal V}_a $ are easily obtained, and the criteria can be optimized by Newton type methods. ${\cal V}_g$, ${\cal V}^*_g$ and ${\cal V}_a$ are indefinite over some parts of the smoothing parameter space, since they flatten out completely at very high or very low $\rho_k$ values. In many modeling situations such regions are unavoidable, since the optimal $\rho_k$ for a term that is not needed in a model {\em should} tend to $\infty$. When taking a full Newton approach such indefiniteness is readily identifiable and addressable using an eigen-decomposition, ${\bm \Xi \bm \Lambda \bm \Xi}\ts $, of the Hessian matrix of ${\cal V}$. Following Gill, Murray and Wright (1981) the Hessian is replaced in Newton's method by ${\bm \Xi} \bar {\bm \Lambda}{\bm \Xi}\ts$ where $\bar{\bm \Lambda}_{ii} = |{\bm \Lambda}_{ii}| $. Since the replacement is positive definite, the resulting modified Newton direction is guaranteed to be a descent direction. Note that the eigen-decomposition is a trivial part of the total computational burden here. Another way of increasing convergence rates is to optimize only those smoothing parameters for which the corresponding gradient of $\cal V$ is large enough to be treated as unconverged.
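The eigen-decomposition correction of Gill, Murray and Wright is easy to implement. The following {\tt numpy} sketch, using a hypothetical two parameter indefinite Hessian, shows why the modified step is guaranteed to be a descent direction (the zero-eigenvalue guard is an implementation detail added here, not part of the description above):

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Newton direction with the indefinite Hessian replaced by
    Xi |Lambda| Xi' (Gill, Murray and Wright, 1981). The replacement is
    positive definite, so the step opposes the gradient."""
    lam, Xi = np.linalg.eigh(hess)
    lam_bar = np.abs(lam)
    lam_bar[lam_bar < 1e-10] = 1e-10      # guard against exactly zero eigenvalues
    return -Xi @ ((Xi.T @ grad) / lam_bar)

# an indefinite Hessian: a plain Newton step would move uphill in one direction
H = np.diag([2.0, -1.0])
g = np.array([1.0, 1.0])
d = modified_newton_direction(g, H)
assert g @ d < 0                          # descent direction
```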
When the converged parameters are optimized at `working infinity', dropping them from optimization tends to improve the quadratic model underlying the Newton update of the remaining parameters. (Parameters re-enter the optimization if their corresponding gradient becomes large again.) Alternatively, one can work only with first derivatives, and use the quasi-Newton or Newton type algorithms built into R routines {\tt optim} and {\tt nlm}, for example (see R Development Core Team, 2006 and Dennis and Schnabel, 1983). The associated loss of computational speed is smaller than might be expected, as first derivatives are very cheap to obtain, and indefiniteness produces some degradation in the convergence rates of the `pure' Newton method. However, as the initial example in the introduction emphasizes, a finite differencing based method will not always be reliable when faced with complex models and strong concurvity effects. \section{Examples \label{examples.section}} \subsection{Performance in `straightforward' situations \label{easy.sim}} A small simulation study was undertaken to illustrate the method's performance in non-problematic situations, which should not generate numerical problems and where the data do not display concurvity. The example is adapted from one presented in Wahba (1990, section 11.3). For each replicate, 400 values for each of 4 covariates, $x_1, \ldots, x_4$, were simulated independently from a uniform distribution on $(0,1)$. The covariates were used to produce a scaled linear predictor of the form $\tilde \eta_i = f_1(x_{1i})+f_2(x_{2i})+f_3(x_{3i})$, where $f_1(x)=2 \sin(\pi x)$, $f_2(x)=\exp(2x)$ and $f_3(x)=x^{11}\{10(1-x)\}^{6}/5 + 10^{4}x^3(1-x)^{10}$. Response data, $y_i$, were then generated under one of 4 models.
(i) Independent Bernoulli random deviates were generated, taking the value 1 with probability $e^{\eta_i}/(1+e^{\eta_i})$, where $\eta_i = (\tilde \eta_i-5)/2.5$; (ii) independent Poisson random deviates were generated with mean $\exp(\eta_i)$, where $\eta_i=\tilde \eta_i/7$; (iii) independent gamma random deviates were generated with mean $\exp(\eta_i)$, with $\eta_i=\tilde \eta_i/7$ (and scale parameter 1); (iv) independent Gaussian random deviates were generated from $N(\eta_i,4 \eta_i)$ truncated (below) at zero, where $\eta_i = \exp(\tilde \eta_i/6)$. To each replicate a 4 term generalized additive model $$ g\{E(y_i)\} = f_1(x_{1i})+f_2(x_{2i})+f_3(x_{3i}) + f_4(x_{4i}) $$ was fitted. The link function, $g$, was the logit for the binary data, and log for the other cases. The $f_j$ were represented using rank 10 thin plate regression splines (Wood, 2003). The correct distribution was assumed for response models (i) to (iii), and for (iv) a quasi-likelihood approach was taken, with the variance assumed proportional to the mean. Each model was estimated by 5 alternative methods. (a) By the method presented in this paper using full first and second derivative information; (b) using the first derivative scheme presented here to optimize GCV/AIC scores using the `nlm' routine from R (this seems to be the fastest and most reliable R general purpose optimizer for this problem); (c) optimizing the GCV/AIC scores using finite difference based gradients with R optimizer `nlm'; (d) using Gu's (1992) performance oriented iteration as implemented in Wood (2004); (e) representing the model as a mixed model and estimating via Breslow and Clayton's (1993) PQL (using the `nlme' library as the underlying mixed model fitter; Pinheiro and Bates, 2000). For methods (a)-(d) AIC was used for the binary and Poisson cases, and GCV for the other two. GACV was also tried but had marginally worse MSE/predictive deviance than GCV, and is therefore not reported here.
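For concreteness, the data generating process for the binary case (model (i) above) can be sketched as follows; the seed and variable names are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x1, x2, x3 = (rng.uniform(size=n) for _ in range(3))

# test functions and scaled linear predictor from the simulation study
f1 = lambda x: 2 * np.sin(np.pi * x)
f2 = lambda x: np.exp(2 * x)
f3 = lambda x: x**11 * (10 * (1 - x))**6 / 5 + 1e4 * x**3 * (1 - x)**10
eta_tilde = f1(x1) + f2(x2) + f3(x3)

# model (i): Bernoulli responses on the logit scale
eta = (eta_tilde - 5) / 2.5
p = 1 / (1 + np.exp(-eta))
y = rng.binomial(1, p)
```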
To measure model fit, 10000 new data were generated from the model concerned, and the fitted model was used to predict the (expected value of the) response variable. The prediction error was measured using the mean deviance of the prediction of the 10000 simulated response data {\em minus} the mean predictive deviance using the known truth. This {\em predictive deviance loss} is therefore zero if the model exactly reproduces the truth. In the quasi case the error model is incorrect, so the predictive deviance is not such a natural measure of performance and the mean square error in predicting the (covariate conditional) means over 10000 independent replicate data was used as the prediction error measure (although in fact the conclusions are no different if predictive deviance is used). \begin{figure} \eps{-90}{.4}{timing.eps} \vspace*{-.5cm} \caption{Boxplots of the distribution of the $\log_{10}$(CPU seconds) used to fit the models to the simulated data in section \ref{easy.sim}. `new' is the new method; `nlm' is the new method, but using only first derivatives; `nlm.fd' is the same optimization performed without derivative information; `PI' is performance oriented iteration; `PQL' is for the GAM estimated as a mixed model. The skew in the PI timing distributions means that the new method has the lowest mean time for all 4 models. The new method is also the most reliable --- see text. \label{timing.fig}} \end{figure} The PQL iteration failed in 20, 8, 13 and 23 out of 200 replicates for the binary, Poisson, gamma and quasi models, respectively, while performance oriented iteration (POI) failed to converge in 13, 5, 5 and 10 out of 200 replicates. The smaller failure rate for performance oriented iteration as opposed to PQL may reflect the fact that the penalized least squares estimator used for POI was specifically designed for use in this application (Wood, 2004). The new method converged successfully for all replicates.
The {\tt nlm} based methods produced warnings of potential convergence problems in about 2\% of cases, but none of these in fact appear to be real failures: rather the warnings seem to be triggered by the indefinite nature of the smoothness objective. Failures are of course excluded from the reported predictive deviance (MSE) comparisons and timings. Time, in CPU seconds, to fit each replicate model was also recorded (on a Pentium M 2.13GHz processor with a Linux operating system). The results are summarized in figure \ref{timing.fig}. The new method clearly has lower computational cost than all the other methods apart from performance oriented iteration, although it turns out that, for each error model, the {\em mean} time required for the new method is actually less than that required by performance oriented iteration, as a result of the skew in the latter's timing distributions. Figure \ref{how.it.did} summarizes the predictive performance of the various estimation methods for the 4 types of model. A `(-)' after the label for a method indicates that the method was significantly worse than the new method in a paired comparison using a Wilcoxon signed rank test (using the .05 level); a `(+)' indicates a significant improvement over the new method. The only {\em operationally} significant differences are the worse performance of performance oriented iteration in the gamma and quasi cases, the worse performance of PQL in the Poisson case, and the better performance of PQL in the quasi case, but even these differences are rather modest. Note that the PQL failures were mostly for replicates producing quite high MSE or PD loss results by the other methods (which needs to be borne in mind when viewing figure \ref{how.it.did}). So the new method appears to greatly improve speed and reliability without sacrificing prediction performance.
\begin{figure} \eps{-90}{.4}{performance.eps} \vspace*{-.5cm} \caption{Boxplots of prediction loss measures for the 4 models used in the section \ref{easy.sim} simulations, for each of 5 fitting methods. Labels are as in figure \ref{timing.fig}. The improved speed and reliability is not at the price of prediction performance. \label{how.it.did}} \end{figure} \subsection{Generalized Additive Mixed Models \label{gamm.sim}} The same argument that allows a GAM to be estimated as a Generalized Linear Mixed Model using PQL implies that many GLMMs can be estimated by the method developed in this paper. To illustrate this the simulations in the previous section were modified by splitting the data into 40 groups of size 10, and redefining the unscaled linear predictor as $\tilde \eta_i = f_1(x_{1i})+f_2(x_{2i})+f_3(x_{3i}) + b_j$ if observation $i$ is from group $j$. The $b_j$ are i.i.d. $N(0,2^2)$ random deviates. The models fitted to each replicate were modified to GAMMs with linear predictors, $$ g\{E(y_i)\} = f_1(x_{1i})+f_2(x_{2i})+f_3(x_{3i}) + f_4(x_{4i}) + b_j {\rm ~if~}i~{\rm ~from~group~}j, $$ where the $b_j$ are assumed i.i.d. $N(0,\sigma^2_b)$. The simulations were otherwise unmodified except that only the new method and PQL were compared. For the new method the random effects are treated just like a smooth with the identity matrix as the penalty coefficient matrix, and the associated smoothing parameter controlling $\sigma^2_b$. \begin{figure} \eps{-90}{.4}{gamm.eps} \vspace*{-.5cm} \caption{Upper: Boxplots of the distribution of the $\log_{10}$(CPU seconds) used to fit the models to the simulated data in section \ref{gamm.sim}. In the labels {\tt p}, {\tt b}, {\tt q} and {\tt g} refer to the Poisson, binomial, quasi and gamma distributional assumptions. Lower: Boxplots of MSE$^{.25}$ for the various GAMM fits to simulated data discussed in section \ref{gamm.sim}. The new method is faster and more reliable at little or no performance cost.
\label{gamm.fig}} \end{figure} The mean square error in predicting $\eta_i$ with the $b_j$ set to zero, was used as the measure of model performance, and the number of CPU seconds needed for model estimation was recorded. Out of 200 replicates PQL failed in 22, 12, 16 and 12 replicates for the binary, Poisson, gamma and quasi cases, respectively. The new method did not fail. Timings and MSE performances are shown in figure \ref{gamm.fig}. Wilcoxon tests (paired) fail to detect significant differences between the MSE performance of the methods ($p>.4$), except in the quasi case ($p < 10^{-4}$), where penalized quasi likelihood is significantly better than the new method, perhaps unsurprisingly. Operationally, the MSE differences seem to be small, while the improvements of the new method in terms of speed and reliability are substantial. \subsection{Severe concurvity \label{concurvity}} Using R 2.4.0, data were simulated with severe concurvity problems. \begin{verbatim} set.seed(23);n <- 400;x <- runif(n);z <- runif(n) d <- x^3 + rnorm(n)*0.01;f <- (d-.5 + 10*(d-.5)^3)*10 g<-binomial()$linkinv(f);y <- rbinom(g,1,g) \end{verbatim} See figure \ref{concurvity.data}. These data are rather extreme, but concurvity problems of this sort are not uncommon in real data examples with many predictors, although they are usually less obvious. The advantage of a simple simulated example is that the root cause of the associated fitting problems is clear, while the `right answer' is known. The data were modeled using (\ref{conc.model}) as described in the introduction, and existing methods fail to provide satisfactory fits, if they produce a fit at all. The new method converged to an estimated model which substantially suppressed $f_1$ (its effective degrees of freedom were reduced to .8) while doing a reasonable job at estimating $f_2$. The estimated model components are shown in figure \ref{concurvity.fit}. 
Following Wood (2004) the performance oriented iteration can be made convergent by regularizing the working penalized linear models at each iterate. However the results are very sensitive to the exact degree of regularization performed, with only a narrow window between convergence failure and gross oversmoothing. This is clearly unsatisfactory. \begin{figure} \eps{-90}{.4}{concurvity.fit.eps} \vspace*{-.5cm} \caption{ The estimates of $f_1$ (left) and $f_2$ (right, estimates and 95\% confidence limits) from (\ref{conc.model}), obtained using the new method. $\hat f_1$ has been almost shrunk to zero. The right hand figure also shows the true $f_2$ as a thick black curve (centered in the same way as the smooth). Previous methods fail for this example. \label{concurvity.fit}} \end{figure} In replicate simulations of this sort the new method is persistently more reliable than the alternatives, although there are of course a substantial number of replicates where all methods perform reasonably (the given replicate is unusual in that {\em all} the alternatives perform badly). No replicates were found where the new method performed less well than the alternatives. \subsection{A fisheries survey \label{mack.section}} This final example concerns modelling of fish egg data from a survey of mackerel eggs conducted in 1992 off the west coast of the British Isles and France. The purpose of the survey is to assess the abundance of fish eggs in order to infer the total mass of spawning fish producing them. The data consist of egg counts from samples collected by hauling a sampling net through the water column from below the maximum depth at which eggs are found, to the surface, and are shown in figure \ref{mack.fig}.
Along with egg densities, {\tt egg}, the covariates {\tt long}, {\tt lat}, {\tt b.depth}, {\tt c.dist}, {\tt temp.surf} and {\tt temp.20m} were recorded, these being longitude, latitude, depth of sea bed below surface, distance from the 200m sea bed depth contour (a proxy for distance from the continental shelf edge), surface water temperature and water temperature at 20 m depth, respectively. In addition the area of the sampling net was recorded. There are 634 egg counts spread over the survey area. See Borchers et al. (1997) or Bowman and Azzalini (1997) for further details. This survey also formed the basis for the presence absence data in figure \ref{mack.bin.data}, used to illustrate convergence failure in the introduction. Unlike previous methods, the new method successfully fits (\ref{mack.logistic}), identifying a genuine `minimum AIC' model (i.e. the AIC has zero gradient and positive definite Hessian at the optimum). One can make the same point by modelling presence absence over the whole survey area, but given the spatial distribution of presences such a model is not practically defensible. \begin{figure} \eps{-90}{.5}{mack2.eps} \vspace*{-.5cm} \caption{The raw mackerel data (left, symbol area proportional to egg density) and the non-zero estimated terms from the mackerel egg model of section \ref{mack.section}. The central figure shows the spatial smooth over the survey region. The right hand figures show the estimated smooths of square root of sea bed depth and water temperature at 20 metres depth. PQL and performance oriented iteration fail for this example. 
\label{mack.fig}} \end{figure} Turning to the modelling of egg densities (and neglecting any zero inflation problems), a reasonable initial model for the data is \begin{multline*} \log\{E({\tt egg}_i)\} = f_1({\tt long}_i,{\tt lat}_i) + f_2(\sqrt{{\tt b.depth}_i}) + f_3({\tt c.dist}_i) + f_4({\tt temp.surf}_i)\\ + f_5({\tt temp.20m}_i) + \log({\tt net.area}_i) \end{multline*} along with the `quasi-Poisson' assumption ${\rm var}({\tt egg}_i) \propto E({\tt egg}_i)$ and the assumption that the response variable is independent (conditional on the covariates). $f_1, \ldots, f_5$ can be represented using penalized thin plate regression splines with shrinkage (see Wood, 2006), employing basis dimensions of 100 for $f_1$ and 10 for each of the remaining terms. Attempts to fit this model using performance oriented iteration fail, without extra regularization: the iteration cycles without ever converging. PQL is no more successful: it diverges until the routine for estimating the working linear mixed model fails. In contrast the new method fits the model without difficulty. The raw fit shows signs of overfitting (informal significance measures for several terms indicate that they have no real effect on the response, despite having fairly high estimated effective degrees of freedom). For this reason the model was re-fitted with $\gamma=1.4$ in the GCV score (see Kim and Gu, 2004). Two model terms were then estimated to have zero effective degrees of freedom (i.e. were penalized out of the model). The remaining terms are shown in figure \ref{mack.fig}. The difficulties in estimating the model by performance oriented iteration or PQL are again likely to relate to concurvity issues: all the covariates are functions of spatial location, some of them quite smooth functions. In addition the data contain 265 zeroes, and over half the counts are 0 or 1. 
At these very low counts the assumptions underlying PQL are likely to be somewhat poor, while the linearized problem used in performance oriented iteration is unlikely to capture the full model's dependency on smoothing parameters very precisely. \section{Conclusions} Relative to PQL or performance oriented iteration the new method offers two substantial advantages for GAM (or GAMM) estimation and smoothness selection. \begin{enumerate} \item It is more computationally reliable. Since smoothing parameter selection is based on optimizing a properly defined function, fitting does not suffer from the convergence problems that afflict PQL or performance oriented iteration. \item The value of the optimized smoothness selection criterion (GCV/AIC) is useful for model comparisons, since it relates to the model being fitted, rather than to some working approximation as is the case for PQL or POI. \end{enumerate} In addition the new method is much quicker than PQL, and competitive with performance oriented iteration (in simulations the median cost of the new method is higher while the mean cost is lower). Another less obvious benefit of the new approach is that it integrates easily with step reduction procedures for stabilizing the P-IRLS algorithm if it diverges, as it occasionally does, particularly in the early steps of fitting binary data. Since the P-IRLS is run to convergence with fixed smoothing parameters, it is easy to detect divergence --- this is not the case with performance oriented iteration or PQL, where the smoothing parameters change alongside the parameter estimates at each step of the iteration, so that any possible measure of fit may legitimately increase or decrease from one iteration step to the next. The disadvantage of the new method is the complexity of sections \ref{B.deriv}, \ref{trA.deriv} and associated appendices, with little carrying over from the linear problem. However, this disadvantage is a one-off.
Once the method has been implemented, it is hard to imagine circumstances in which performance oriented iteration or a finite differencing based method would be preferable. Relative to finite difference based optimization of GCV/AIC scores, the new method offers much improved computational speed. In difficult modelling situations it also offers enhanced reliability, by elimination of the finite difference approximation error which can lead to false convergence. It is not hard to see why problems might arise in finite differencing. The quantities being differentiated are the converged state of an iterative algorithm, which has to adaptively cope with ill-conditioning problems. Unless very elaborate finite difference schemes are applied there is always a danger that the values that get differenced result from different numbers of steps of the P-IRLS, or have had different levels of truncation applied to cope with ill-conditioning: either case can easily cause the finite difference approximation to fail even to get the sign of the derivative right. The new method eliminates this issue. An obvious alternative to section \ref{fit.details} would be to use auto-differentiation to automatically accumulate derivatives of the smoothness criteria directly from the computer code evaluating the criteria (see Skaug and Fournier, 2006, for a good statistically based introduction). However, `forward mode' auto-differentiation has an operations count of the same order as finite differencing making it uncompetitive here, while the alternative `reverse mode' requires storage of every intermediate result in the algorithm being differentiated, which is impractical in the current context. How far does the proposed method go towards the aim, stated in the introduction, of making GAM fitting with smoothness selection as routine as GLM fitting? 
The aim is the same as that given in Wood (2004), but that paper was restricted to performance oriented iteration, a method for which convergence to any sort of fixed point can not be guaranteed (and may have to be forced by ad hoc regularization). By taking the direct approach the new method is based on optimizing criteria which have well defined optima for any model. This avoids the convergence issue, but replaces it with the problem of how to find the optimum in as efficient and stable a manner as possible, something that is made difficult by the additional non-linearities introduced by the direct approach. The new method succeeds in providing very efficient direct calculation of the derivatives of the smoothness selection criteria, as is evident in the surprising timing results, relative to performance oriented iteration, given in section \ref{easy.sim}. It is unlikely that further substantial improvements are possible in this regard. As highlighted in the introduction, numerical stability is an important and unavoidable issue when working with models as flexible as GAMs, and the methods proposed here directly address the rank deficiency that may cause this. The QR approach to the basic fitting problem is the most stable method known, while the approach taken to rank determination has performance close to the `gold standard' of SVD (see Golub and van Loan, 1996). Again then, there is no obvious alternative that might result in a more stable method. In short, the proposed method achieves the stated aim as closely as is likely to be achievable (which seems to be quite close). The method described here is implemented in R package {\tt mgcv} (\verb+cran.r-project.org+). 
\section*{Acknowledgements} I am grateful to Stefan Lang for a good deal of help in understanding the issues surrounding GAMs and Bayesian MCMC computation and for help getting started with BayesX, and to the R core for providing the statistical computing environment that made this work a feasible undertaking. I would also like to thank two referees for helpful suggestions on the structure of the paper, and for some other comments which I think have improved it. \section*{Appendix A: The derivatives of $\bf z$} The $\bf z$ derivative update referred to in section \ref{section.pirls} is given here. Note that $\mu_i$, $\eta_i$, $z_i$ and $w_i$ are always taken as being evaluated at the converged $\hat {\bm \beta}$. \noindent {\bf Initialization:} ${\bf z}$, $\bf w$, $\bm \mu$ and $\bm \eta$ are fixed at their converged values from the P-IRLS, but all their derivatives w.r.t. $\bm \rho$ are initially set to zero. The initial derivatives of $\hat {\bm \beta}$ w.r.t. $\bm \rho$ are as in section \ref{section.pirls}. At the converged estimate of $\mu_i$, evaluate the constants: $c_{1i}=(y_i - \mu_i) g^{\prime\prime}(\mu_i)/g^\prime(\mu_i)$, $c_{2i} = [(y_i-\mu_i) \{ g^{\prime\prime\prime}(\mu_i)/g^{\prime}(\mu_i)^2 - g^{\prime\prime}(\mu_i)^2/g^{\prime}(\mu_i)^3 \} - g^{\prime\prime}(\mu_i)/g^{\prime}(\mu_i)^2 ]$, $c_{3i} = w_i^3\{V^\prime(\mu_i)g^\prime(\mu_i) + 2 V(\mu_i) g^{\prime\prime}(\mu_i) \}/(2 \omega_i)$ and $c_{4i} = w_i^3\{ V^{\prime\prime}(\mu_i)g^\prime(\mu_i) + 2 g^{\prime \prime\prime}(\mu_i)V(\mu_i) + 3 g^{\prime\prime}(\mu_i) V^{\prime}(\mu_i) \} / \{2 \omega_i{g^\prime(\mu_i)}\}$. \noindent {\bf Update:} The following steps update the $\bf z$ derivatives given the $\hat {\bm \beta}$ derivatives (for all $k,m$, such that $k \ge m$). \begin{enumerate} \item Evaluate $$ \pdif{\bm \eta}{\rho_k} = {\bf X} \pdif{\hat {\bm \beta}}{\rho_k} ~~~{\rm and}~~~ \pddif{\bm \eta} {\rho_k}{\rho_m} = {\bf X} \pddif{\hat {\bm \beta}}{\rho_k}{\rho_m}. 
$$ \item Update the derivatives of ${\bf z}$: $$ \pdif{z_i}{\rho_k} = c_{1i}\pdif{\eta_i}{\rho_k} {\rm ~~and~~} \pddif{z_i}{\rho_k}{\rho_m} = c_{1i} \pddif{\eta_i}{\rho_k}{\rho_m} + c_{2i} \pdif{\eta_i}{\rho_m} \pdif{\eta_i}{\rho_k}. $$ \item Update the derivatives of $w_i = \omega_i^{1/2}V(\mu_i)^{-1/2}/g^{\prime}(\mu_i)$: $$ \pdif{w_i}{\rho_k} = - c_{3i} \pdif{\eta_i}{\rho_k} {\rm ~~and~~} \pddif{w_i}{\rho_k}{\rho_m} = \frac{3}{ w_i} \pdif{w_i}{\rho_k} \pdif{w_i}{\rho_m} - c_{3i} \pddif{\eta_i}{\rho_k}{\rho_m} - c_{4i} \pdif{\eta_i}{\rho_m}\pdif{\eta_i}{\rho_k}. $$ \item The derivatives of ${\bf z}^{\prime}$ are evaluated: $$ \pdif{z_i^\prime}{\rho_k} = \pdif{w_i}{\rho_k}z_i + w_i \pdif{z_i}{\rho_k} {\rm ~~and~~} \pddif{z_i^\prime}{\rho_k}{\rho_m} = \pddif{w_i}{\rho_k}{\rho_m} z_i + \pdif{w_i}{\rho_k}\pdif{z_i}{\rho_m} + \pdif{w_i}{\rho_m}\pdif{z_i}{\rho_k} + w_i\pddif{z_i}{\rho_k}{\rho_m}. $$ \end{enumerate} \section*{Appendix B: Deviance and Pearson statistic derivatives} The derivatives of the deviance can be obtained as follows. $$ \pdif{D}{\rho_k} = \sum_j\pdif{D}{\hat \beta_j}\pdif{\hat \beta_j}{\rho_k} {\rm ~~and~~} \pddif{D}{\rho_k}{\rho_m} = \sum_j \left (\sum_l \pddif{D}{\hat \beta_j}{\hat \beta_l} \pdif{\hat \beta_l}{\rho_m} \pdif{\hat \beta_j}{\rho_k}\right ) + \pdif{D}{\hat \beta_j} \pddif{\hat \beta_j}{\rho_k}{\rho_m}. $$ The required derivatives of the deviance w.r.t. $\hat {\bm \beta}$ are $$ \pdif{D}{\hat \beta_j} = -2 \sum_i \omega_i \frac{y_i - \mu_i}{V(\mu_i)g^\prime(\mu_i)} X_{ij} {\rm ~~and} $$ $$ \pddif{D}{\hat \beta_j}{\hat \beta_l} = 2 \sum_i \omega_i \left [ \frac{1}{V(\mu_i)g^\prime(\mu_i)} \pdif{\mu_i}{\hat \beta_l} + \frac{y_i - \mu_i}{[V(\mu_i)g^\prime(\mu_i)]^2} \left \{ V^\prime(\mu_i) g^\prime(\mu_i) + V(\mu_i) g^{\prime\prime}(\mu_i) \right \} \pdif{\mu_i}{\hat \beta_l} \right ] X_{ij}.
$$ So, defining ${\bf c}$ as the vector with elements $ c_i = -2 \omega_i (y_i - \mu_i)/\{V(\mu_i)g^\prime(\mu_i) \}, $ the vector of first derivatives of $D$ w.r.t. the $\hat \beta_j$ is ${\bf X}\ts {\bf c}$. Now noting that $\ilpdif{\mu_i}{\hat \beta_l} = X_{il}/g^\prime(\mu_i)$ and defining $$ e_i = 2 \omega_i \left [\frac{1}{V(\mu_i)g^\prime(\mu_i)^2} + \frac{y_i - \mu_i}{V(\mu_i)^2g^\prime(\mu_i)^3} \left \{V^\prime(\mu_i)g^\prime(\mu_i) + V(\mu_i) g^{\prime\prime}(\mu_i) \right \} \right ], $$ the second derivative matrix (Hessian) of $D$ is ${\bf X}\ts{\rm diag}(e_i){\bf X}$. The derivatives of the Pearson statistic, $P$, are easily obtained by noting that $$ P = \sum_{i=1}^n \omega_i \frac{(y_i - \hat \mu_i)^2}{V(\hat \mu_i)} = \sum_{i=1}^n w_i^2(z_i - \hat \eta_i)^2. $$ The expression in terms of the iterative weights, pseudodata and linear predictor makes evaluation of the derivatives of $P$ particularly straightforward, since the derivatives of all $w_i$, $z_i$ and $\hat \eta_i$ are available directly from the derivative iteration. \section*{Appendix C: Efficient evaluation of the derivatives of $\tr{\bf A}$} In the following, wherever $\sqrt{{\bf S}_m}$ is written it denotes the $q \times {\rm rank}({\bf S}_m)$ matrix such that $\sqrt{{\bf S}_m}\sqrt{{\bf S}_m}\ts={\bf S}_m$ (pivoted Choleski decomposition can be used to find these: see Golub and van Loan, 1996, and Dongarra et al., 1978). The following list gives the key steps for evaluating each of the different types of term making up the second derivatives of $\tr{\bf A}$ as given on the RHS of equation (\ref{trA.2deriv}). \begin{enumerate} \item For $\tr{{\bf T}_{km} {\bf A}}$ etc. first form and store $\diag{{\bf A}} = \diag{{\bf K}\K\ts}$ and the term follows. \item $\tr{{\bf T}_k{\bf A}{\bf T}_m{\bf A}} = \tr{[{\bf K}\ts{\bf T}_k{\bf K}][{\bf K}\ts{\bf T}_m{\bf K}]} = \tr{{\bf T}_m{\bf A}{\bf T}_k{\bf A}}$ (the second equality follows from transposing the matrix expression in the middle trace).
This requires storage of ${\bf K}\ts{\bf T}_k{\bf K}$ in advance. \item Terms like $\tr{{\bf A}{\bf T}_{km}{\bf A}}$ follow from $\tr{{\bf A}{\bf T}_{km}{\bf A}}=\tr{{\bf T}_{km}{\bf A}\A}$. So, $\diag{{\bf A}\A} = \diag{[{\bf K} {\bf K} \ts {\bf K}][{\bf K} \ts]}$ is evaluated once up front (having first formed ${\bf K}\ts {\bf K}$ and then ${\bf K} {\bf K} \ts {\bf K}$) and the result is then readily computed. \item ${\bf K}\ts{\bf T}_k{\bf K}\K\ts{\bf K}$ is stored up front so that use can be made of\\ $\tr{{\bf A}{\bf T}_k{\bf A}{\bf T}_m{\bf A}}=\tr{[{\bf K}\ts{\bf T}_k{\bf K}][{\bf K}\ts{\bf T}_m{\bf K}\K\ts{\bf K}]}$. \item $\tr{{\bf A}{\bf T}_k{\bf B}\ts{\bf S}_m{\bf B}}= \tr{{\bf T}_k[{\bf K}{\bf P}\ts\sqrt{{\bf S}_m}][\sqrt{{\bf S}_m}\ts{\bf P}{\bf K}\ts{\bf K}\K\ts]}$, so evaluate \\ $\diag{[{\bf K}{\bf P}\ts\sqrt{{\bf S}_m}][\sqrt{{\bf S}_m}\ts{\bf P}{\bf K}\ts{\bf K}\K\ts]}$ and the result is easily obtained. This requires up-front storage of ${\bf K}\K\ts{\bf K}{\bf P}\ts\sqrt{{\bf S}_m}$ and ${\bf K}{\bf P}\ts\sqrt{{\bf S}_m}$. \item Evaluate $\diag{{\bf B}\ts{\bf S}_m {\bf B}} = \diag{[{\bf K}{\bf P}\ts\sqrt{{\bf S}_m}][\sqrt{{\bf S}_m}\ts{\bf P}{\bf K}\ts]}$ and terms like \\ $\tr{{\bf T}_k{\bf B}\ts{\bf S}_m{\bf B}}$ follow easily. \item Finally, if ${\bf P}\ts{\bf S}_m {\bf P}$ and ${\bf P}\ts{\bf S}_m {\bf P} {\bf K} \ts {\bf K}$ are stored up front, then\\ $\tr{{\bf B}\ts {\bf S}_m {\bf G}^{-1} {\bf S}_k{\bf B}} = \tr{[{\bf P}\ts{\bf S}_m {\bf P}][{\bf P}\ts{\bf S}_k{\bf P}{\bf K}\ts{\bf K}]}$ is easily obtained. \end{enumerate} Notice that, if $M$ is the number of smoothing parameters, then by far the most expensive calculation here is the evaluation of the $M$ terms ${\bf K}\ts{\bf T}_k{\bf K}$ in step 2. This has a total cost of $nq^2M/2$ floating point operations, which is still a considerable saving over finite differencing to get second derivatives.
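As a sanity check, the trace identities used in steps 1 and 2 are easy to verify numerically. The following sketch is illustrative only: random matrices stand in for the actual ${\bf K}$, and the ${\bf T}_k$, ${\bf T}_m$ are taken to be diagonal, as in their usage above. It confirms that $\tr{{\bf T}_k{\bf A}}$ needs only $\diag{{\bf A}} = \diag{{\bf K}{\bf K}\ts}$, and that $\tr{{\bf T}_k{\bf A}{\bf T}_m{\bf A}}$ reduces, by cyclic permutation, to a trace of a product of two small $q \times q$ matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 50, 6
K = rng.standard_normal((n, q))         # stand-in for the n x q matrix K
A = K @ K.T                             # so that A = K K'
tk = rng.standard_normal(n)             # diagonals of T_k and T_m
tm = rng.standard_normal(n)
Tk, Tm = np.diag(tk), np.diag(tm)

# Step 1: tr(T_k A) needs only diag(A), i.e. the row-wise sums of K*K
diagA = np.einsum('ij,ij->i', K, K)
lhs1 = np.trace(Tk @ A)
rhs1 = tk @ diagA

# Step 2: tr(T_k A T_m A) = tr([K' T_k K][K' T_m K]) by cyclic permutation
lhs2 = np.trace(Tk @ A @ Tm @ A)
rhs2 = np.trace((K.T @ Tk @ K) @ (K.T @ Tm @ K))

print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))
```

The point of the right-hand forms is that they never build the $n \times n$ matrix ${\bf A}$: everything is assembled from $q \times q$ products, which is what yields the operation counts quoted above.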
Note also that all terms in $\tr{\bf A}$ and its first derivatives are covered in the above list, and have a total leading-order computational cost of $O(nq^2)$, the same as model estimation: this is an $(M+1)$-fold saving over finite differencing. \subsection*{References} \begin{trivlist} \item Anderson, E., Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney \& D. Sorensen (1999) {\em LAPACK Users' Guide} (3rd ed.) SIAM, Philadelphia. \item Akaike, H. (1973) Information theory and an extension of the maximum likelihood principle. In B.N. Petrov \& F. Cs\'aki (Eds.) {\em Second International Symposium on Information Theory}, Akad\'emiai Kiad\'o, Budapest, Hungary, pp. 267-281. \item Borchers, D.L., S.T. Buckland, I.G. Priede \& S. Ahmadi (1997) Improving the precision of the daily egg production method using generalized additive models. {\em Canadian Journal of Fisheries and Aquatic Sciences} 54, 2727-2742. \item Bowman, A.W. \& A. Azzalini (1997) {\em Applied Smoothing Techniques for Data Analysis} Oxford University Press. \item Breslow, N.E. \& D.G. Clayton (1993) Approximate inference in generalized linear mixed models. {\em Journal of the American Statistical Association} 88, 9-25. \item Brezger, A. \& S. Lang (2006) Generalized structured additive regression based on Bayesian P-splines. {\em Computational Statistics and Data Analysis} 50, 967-991. \item Brezger, A., T. Kneib \& S. Lang (April 2007) {\em BayesX} 1.5.0 \verb+http://www.stat.uni-muenchen.de/~bayesx+ \item Cline, A.K., C.B. Moler, G.W. Stewart \& J.H. Wilkinson (1979) An estimate for the condition number of a matrix. {\em SIAM Journal on Numerical Analysis} 13, 293-309. \item Craven, P. \& G. Wahba (1979) Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross validation. {\em Numerische Mathematik} 31, 377-403. \item Dennis, J. E. \& R.B.
Schnabel (1983) {\em Numerical Methods for Unconstrained Optimization and Nonlinear Equations.} Prentice-Hall, Englewood Cliffs, NJ. \item Dongarra, J.J., J.R. Bunch, C.B. Moler \& G.W. Stewart (1978) {\em LINPACK Users' Guide} SIAM, Philadelphia. \item Eilers, P.H.C. \& B.D. Marx (2002) Generalized linear additive smooth structures. {\em Journal of Computational and Graphical Statistics} 11(4), 758-783. \item Fahrmeir, L., T. Kneib \& S. Lang (2004) Penalized structured additive regression for space-time data: a Bayesian perspective. {\em Statistica Sinica} 14, 731-761. \item Fahrmeir, L. \& S. Lang (2001) Bayesian inference for generalized additive mixed models based on Markov random field priors. {\em Applied Statistics} 50, 201-220. \item Figueiras, A., J. Roca-Pardi\~nas \& C. A. Cadarso-Su\'arez (2005) A bootstrap method to avoid the effect of concurvity in generalized additive models in time series studies of air pollution. {\em Journal of Epidemiology and Community Health} 59, 881-884. \item Gill, P.E., W. Murray \& M.H. Wright (1981) {\em Practical Optimization} Academic Press, London. \item Golub, G.H. \& C.F. van Loan (1996) {\em Matrix Computations} (3rd edition). Johns Hopkins University Press, Baltimore. \item Green, P.J. \& B.W. Silverman (1994) {\em Nonparametric Regression and Generalized Linear Models} Chapman \& Hall, London. \item Gu, C. \& G. Wahba (1991) Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method. {\em SIAM Journal on Scientific and Statistical Computing} 12, 383-398. \item Gu, C. (1992) Cross validating non-Gaussian data. {\em Journal of Computational and Graphical Statistics} 1, 169-179. \item Gu, C. (2002) {\em Smoothing Spline ANOVA Models} Springer, New York. \item Gu, C. (2004) {\tt gss}: General Smoothing Splines. R package version 0.9-3. \item Gu, C. \& D. Xiang (2001) Cross-validating non-Gaussian data: generalized approximate cross-validation revisited.
{\em Journal of Computational and Graphical Statistics} 10, 581-591. \item Hastie, T. \& R. Tibshirani (1986) Generalized additive models (with discussion). {\em Statistical Science} 1, 297-318. \item Hastie, T. \& R. Tibshirani (1990) {\em Generalized Additive Models} Chapman \& Hall, London. \item Hastie, T. \& R. Tibshirani (1993) Varying-coefficient models. {\em Journal of the Royal Statistical Society, Series B} 55, 757-796. \item Kim, Y.J. \& C. Gu (2004) Smoothing spline Gaussian regression: more scalable computation via efficient approximation. {\em Journal of the Royal Statistical Society, Series B} 66, 337-356. \item Lang, S. \& A. Brezger (2004) Bayesian P-splines. {\em Journal of Computational and Graphical Statistics} 13, 183-212. \item Lin, X. \& D. Zhang (1999) Inference in generalized additive mixed models using smoothing splines. {\em Journal of the Royal Statistical Society, Series B} 61, 381-400. \item Mallows, C.L. (1973) Some comments on $C_p$. {\em Technometrics} 15, 661-675. \item Marx, B.D. \& P.H.C. Eilers (1998) Direct generalized additive modeling with penalized likelihood. {\em Computational Statistics and Data Analysis} 28, 193-209. \item McCullagh, P. \& J.A. Nelder (1989) {\em Generalized Linear Models} (2nd ed.) Chapman \& Hall, London. \item Nelder, J.A. \& R.W.M. Wedderburn (1972) Generalized linear models. {\em Journal of the Royal Statistical Society, Series A} 135, 370-384. \item O'Sullivan, F. (1986) A statistical perspective on ill-posed inverse problems. {\em Statistical Science} 1, 502-518. \item O'Sullivan, F., B.S. Yandell \& W.J. Raynor (1986) Automatic smoothing of regression functions in generalized linear models. {\em Journal of the American Statistical Association} 81, 96-103. \item Pinheiro, J.C. \& D.M. Bates (2000) {\em Mixed-Effects Models in S and S-PLUS} Springer, New York. \item R Development Core Team (2006) R 2.4.0: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna.
\item Ramsay, T., R. Burnett \& D. Krewski (2003) Exploring bias in a generalized additive model for spatial air pollution data. {\em Environmental Health Perspectives} 111, 1283-1288. \item Ruppert, D., M.P. Wand \& R.J. Carroll (2003) {\em Semiparametric Regression} Cambridge University Press. \item Skaug, H.J. \& D. Fournier (2006) Automatic approximation of the marginal likelihood in non-Gaussian hierarchical models. {\em Computational Statistics and Data Analysis} 51, 699-709. \item Stone, M. (1977) An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. {\em Journal of the Royal Statistical Society, Series B} 39, 44-47. \item Wahba, G. (1990) {\em Spline Models for Observational Data} SIAM, Philadelphia. \item Watkins, D.S. (1991) {\em Fundamentals of Matrix Computations} Wiley, New York. \item Wood, S.N. (2000) Modelling and smoothing parameter estimation with multiple quadratic penalties. {\em Journal of the Royal Statistical Society, Series B} 62, 413-428. \item Wood, S.N. (2003) Thin plate regression splines. {\em Journal of the Royal Statistical Society, Series B} 65, 95-114. \item Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. {\em Journal of the American Statistical Association} 99, 673-686. \item Wood, S.N. (2006) {\em Generalized Additive Models: An Introduction with R} CRC/Chapman \& Hall, Boca Raton, Florida. \item Xiang, D. \& G. Wahba (1996) A generalized approximate cross validation for smoothing splines with non-Gaussian data. {\em Statistica Sinica} 6, 675-692. \end{trivlist} \end{document}
\section{Introduction} \label{sec:intro} Decades of research on core-collapse supernovae (CCSNe) have not yet produced a full understanding of the explosion mechanism. One of the most important issues in supernova theory is the dynamics of the shock wave in the stellar core. It is widely accepted that the prompt shock wave generated by core bounce stagnates at $r \sim 100$km owing to energy losses from the photodissociation of heavy elements and neutrino emission, as well as to the ram pressure of accreting matter. For a successful explosion, the stalled shock wave needs re-invigoration one way or another. The most promising mechanism is heating by neutrinos diffusing out of the proto-neutron star: only about one percent of the total energy that neutrinos carry would be sufficient to account for the canonical explosion energy of a supernova ($\sim 10^{51}$erg). Although it is still unclear whether the energy transfer from neutrinos to the ejecta is indeed large enough, neutrino heating is currently the most favored mechanism of CCSNe (see e.g. \citet{2012arXiv1204.2330K} for a recent review). In the last five years, we have witnessed some successful explosions in advanced multi-dimensional numerical simulations (see e.g. \citet{2006ApJ...640..878B,2008ApJ...685.1069O,2009ApJ...694..664M,2010PASJ...62L..49S,2012ApJ...749...98T,2010arXiv1002.4914B,2012arXiv1202.0815M}). The results of different groups still show some discrepancies, though; in particular, 3D effects remain controversial. Although large-scale numerical simulations are making rapid progress toward sufficient realism, we believe that phenomenological approaches such as the one proposed in this paper can still play an important and complementary role in extracting the key physical elements from the complex and non-linear dynamics of CCSNe. It was \citet{1993ApJ...416L..75B} who took the lead in such an approach.
They examined a sequence of steady accretion flows through a standing shock wave onto a proto-neutron star to discuss the criterion for shock revival. They varied the luminosity of electron-type neutrinos ($L_{\nu_{e}}$) and the mass accretion rate ($\dot{M}$) as free parameters and revealed that there is a critical luminosity for a given mass accretion rate, above which no steady shocked accretion flow obtains. They argued that the critical luminosity marks the trigger point for shock revival. This approach was later extended to rotational configurations by \citet{2005ApJ...623.1000Y} and linear stability analysis was also applied \citep{2007ApJ...656.1019Y}. The latter authors pointed out that the critical luminosity could be smaller if one takes into account hydrodynamical instabilities such as radial overstabilization modes, which were also observed in dynamical simulations (see e.g. \citet{2006ApJ...641.1018O}), as well as non-radial modes. Although the approach certainly has limitations in reproducing the full complexity of the explosion mechanism, the critical luminosity has become one of the most useful measures for shock revival and some new analyses have been published in recent years (see e.g. \citet{2012ApJ...746..106P,2012ApJ...749..142F,2012arXiv1202.2359K} and references therein). The existence of the critical luminosity has also been demonstrated in multi-dimensional numerical simulations with simplified treatments of neutrino transfer such as the gray or light-bulb approximation \citep{2006ApJ...641.1018O,2008ApJ...678.1207I,2008ApJ...688.1159M,2010ApJ...720..694N,2011arXiv1108.4355H}. There is a wide consensus at present that multi-dimensional dynamics such as the standing accretion shock instability (SASI) (see e.g. \citet{2003ApJ...584..971B,2007ApJ...654.1006F}) and neutrino-driven convection (see e.g. \citet{2011ApJ...742...74M,2012arXiv1205.3491M} and references therein) reduce the critical luminosity.
This is mainly because the advection time scale tends to be longer owing to turbulent motions, which leads to longer exposure to neutrino heating and provides favorable conditions for shock revival. In addition, the instabilities push the shock wave outward and expand the gain region. Hence the exploration of the multi-dimensional neutrino-heating mechanism is currently a hot topic in mainstream research on CCSNe. In particular, 3D dynamics is one of the central issues. \citet{2010ApJ...720..694N} found in their 2D and 3D experimental simulations that the critical luminosity is monotonically reduced with increasing spatial dimension, i.e., 3D dynamics provides the conditions that are most favorable for shock revival. On the other hand, \citet{2011arXiv1108.4355H} obtained in their similar computations results that are at odds with those presented by \citet{2010ApJ...720..694N}: they found no significant difference in the critical luminosity between their 2D and 3D models. The reason for the difference is not clear at the moment, since they made different simplifying assumptions and employed different numerical techniques. In this paper, we address the condition for shock revival again by a simplified phenomenological approach. We do not discuss the critical luminosity, however. Instead we propose to introduce a third parameter, {\it fluctuations}, and discuss the condition for shock revival in terms of them. In so doing we employ a semi-dynamical approach instead of dynamical simulations to approximately describe the shock motions that are induced by fluctuations. This is an extension of the previous works that employed only steady states. One of the drawbacks of the latter approach is that by definition it cannot handle the temporal evolution of the shock wave and, consequently, cannot address what will happen to the shock wave after it starts to move again.
The semi-dynamical approach removes these problems. It demonstrates that, for a given combination of neutrino luminosity and mass accretion rate, there is a threshold in the fluctuation amplitude beyond which shock revival occurs and is followed by continuous outward propagation of the revived shock wave; we regard this threshold as a new criterion for successful explosions. The paper is organized as follows. In Section 2, we describe the semi-dynamical model. After confirming, by comparison with dynamical simulations, that it captures the shock dynamics, we apply the model systematically to various initial conditions that are appropriate for the shock-stagnation phase in Section 3. Based on these results we give the critical fluctuation amplitudes for shock revival. Finally, we discuss the possible implications of the new criterion for the multi-dimensional neutrino heating mechanism in supernova theory and give conclusions in Section 4. \section{Semi-dynamical Method} \label{sec:semidynami} We are interested in the phase at several hundred milliseconds after bounce, in which the prompt shock wave is stalled and matter flows through an almost steady shock and accretes onto the proto-neutron star; neutrinos are emitted from the neutrino sphere and heat up the gain region; the location of the quasi-steady stagnated shock wave is determined by the neutrino heating and the ram pressure of accreting matter; the onset of shock revival is determined not by the non-existence of steady accretion flows but by some sort of hydrodynamical instability, e.g., radial over-stabilizing oscillations in spherical symmetry and non-radial SASI in multi dimensions~\citep{2006ApJ...641.1018O,2012ApJ...749..142F}. In this paper we take the standpoint that the onset of shock revival is determined by the neutrino luminosity and mass accretion rate as well as by the fluctuations due to these instabilities.
Our semi-dynamical approach deals with the transition from quasi-steady accretion to the re-expansion of the shock wave, as well as its ensuing outward propagation, in 1D. Before going into detail, we first describe the essence of this new approach. The semi-dynamical model begins with the addition, by hand, of a perturbation to the steady accretion shock. Then the shock wave starts to move. We follow the subsequent shock motions not by hydrodynamical simulations but by the integration of a simplified equation of motion, which is based on a local Riemann problem. By local we mean that our formulation considers only the neighborhood of the shock wave. This enables us to avoid simulations. This local approximation is found to be in reasonable agreement with the results of full dynamical simulations near the shock wave (see Section~\ref{subsec:stage2}, \ref{subsec:sec2rangeofappli} and Appendix~\ref{ape:reliablocalappro}). The model calculations are computationally very cheap, by virtue of which we can investigate long-term ($\gtrsim 1 {\rm s}$) evolutions of the shock wave for a large number of models with different backgrounds, so that we can obtain the critical fluctuation amplitudes for shock revival very efficiently. Another benefit of the semi-dynamical approach is that it can treat finite fluctuation amplitudes, in sharp contrast to linear stability analysis, in which stability is considered only for infinitesimal perturbations. We stress that even if the shock wave is linearly stable, large enough perturbations may trigger shock revival; we will see later that this is indeed the case. \begin{figure*} \vspace{15mm} \epsscale{1.0} \plotone{f1.eps} \caption{Schematic pictures of the individual steps in the semi-dynamical model. The horizontal axis in each panel denotes the radius whereas the vertical axis represents the density. See the text for more detailed explanations of each step.
\label{f1}} \end{figure*} \subsection{Details of the semi-dynamical model} The actual calculations of shock motions in our semi-dynamical approach consist of several steps, which we describe in more detail below (see also Figure~\ref{f1}). \subsubsection{Step 1: preparation of steady shocked accretion flows} \label{subsec:stage1} The first step is the preparation of initial conditions, which are assumed to be steady and spherically symmetric. The procedure to obtain such flows is exactly the same as in \citet{2006ApJ...641.1018O} (see also \citet{2005ApJ...623.1000Y}) and the details are given in Appendix~\ref{ape:steadyshock}. \subsubsection{Step 2: addition of perturbations} \label{subsec:stage2} After setting up the steady shocked accretion flows, we add radial perturbations, which are not necessarily small, to the initial conditions. Although the actual form of the perturbation is rather arbitrary, we choose in this study to give a finite velocity to the standing accretion shock wave by adjusting the post-shock quantities so that the Rankine-Hugoniot relation is satisfied for the given shock velocity, $v_{sh}(t_{0})$. In this expression, $t_{0}$ denotes the initial time. \subsubsection{Step 3: displacement of shock wave} \label{subsec:stage3} The shock motions induced by the initial perturbations are calculated in a finite-difference fashion. Suppose that the shock location $r_{sh}(t_{n})$ and velocity $v_{sh}(t_{n})$ are given at $t_n$. Then the shock location at the next time $t_{n+1}$ is given by \begin{eqnarray} && r_{sh}(t_{n+1}) = r_{sh}(t_{n}) + v_{sh}(t_{n}) \times \Delta t \label{eqshevo}, \end{eqnarray} where $\Delta t = t_{n+1} -t_{n}$ is the interval between the two successive times. In this study we set $\Delta t = 10^{-4}{\rm s}$, which is sufficiently short. The initial time corresponds to $n=0$. If the shock velocity at $t_{n+1}$ is obtained somehow, then the procedure is iterated until a designated time is reached.
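As a concrete illustration of Step 3, the explicit update of equation (\ref{eqshevo}) amounts to the loop sketched below. This is a toy sketch only: the function {\tt toy\_shock\_velocity} is an invented stand-in (a simple relaxation law toward an equilibrium radius) for the shock velocity that the actual model obtains from the local Riemann problem of Steps 4 and 5.

```python
def toy_shock_velocity(r, r_eq=2.0e7, tau=0.02):
    """Invented closure for illustration: relax toward r_eq (cm) on
    time scale tau (s). The real model gets v_sh from a Riemann problem."""
    return -(r - r_eq) / tau

def evolve_shock(r0, v_of_r, dt=1e-4, t_end=0.5):
    """Explicit update r_{n+1} = r_n + v_sh(t_n) * dt, as in Eq. (1)."""
    r, t = r0, 0.0
    traj = [(t, r)]
    while t < t_end:
        r = r + v_of_r(r) * dt   # displacement step
        t += dt
        traj.append((t, r))
    return traj

# start from a standing shock at 1.5e7 cm, as in the fiducial model below
traj = evolve_shock(1.5e7, toy_shock_velocity)
print(traj[-1][1])               # relaxes toward r_eq = 2.0e7 cm
```

With $\Delta t = 10^{-4}$~s much shorter than the relaxation time scale, the explicit update is stable; in the model proper, the only nontrivial ingredient is the closure supplying $v_{sh}(t_{n+1})$, which Steps 4 and 5 describe.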
\subsubsection{Step 4: determination of shock velocity -- setting up Riemann problem --} \label{subsec:stage4} How to obtain the new shock velocity $v_{sh}(t_{n+1})$ at $t_{n+1}$ is the most important part of this model. The idea here is similar to that of the Godunov scheme in numerical hydrodynamics: a Riemann problem is set up at the beginning of each time step and the evolution during the subsequent interval is determined by the solution of the Riemann problem. In our model, we consider a certain Riemann problem at the new shock location $r_{sh}(t_{n+1})$ and solve it to find the velocity of the forward shock that always exists in the solution (see Step~5). In order to set up the Riemann problem, we need to specify the hydrodynamical quantities on both sides of the discontinuity located at $r_{sh}(t_{n+1})$. The upstream quantities are easily obtained from the unperturbed steady accretion flow. The downstream quantities are not so easy to obtain. In this model we assume that the downstream flow between $r_{sh} (t_{n})$ and $r_{sh} (t_{n+1})$ is steady. We solve Eqs. (\ref{eqmasscon})-(\ref{eqelecfrac}) for steady flows from $r_{sh} (t_{n})$ to $r_{sh} (t_{n+1})$ to obtain the hydrodynamical quantities at the latter point and use them for the Riemann problem. In reality, the post-shock flows are not steady. Since they are subsonic, it takes at least the advection time plus the sound crossing time between the shock and the proto-neutron star to reach a steady state. In the shock revival phase, however, this time scale is longer than the typical time scales of variations in the shock radius, which means that the steady state is never realized over the entire post-shock region. On the other hand, we find in hydrodynamical simulations that the post-shock flows are approximately steady in the vicinity of the shock wave (see Appendix~\ref{ape:reliablocalappro}).
By virtue of this approximation, we can avoid hydrodynamical simulations of the entire region and instead treat the shock motion alone. \subsubsection{Step 5: determination of shock velocity -- solving Riemann Problems --} \label{subsec:stage5} Now that the Riemann problem has been constructed, we can obtain the new shock velocity from its solution. The bottom panels in Figure~\ref{f1} show schematic pictures of two possible solutions of the Riemann problem. In the current situation, the solution always contains a forward shock wave, i.e., the right-going shock wave in Figure~\ref{f1}, which we regard as the displaced shock wave. The left-going wave can be either a rarefaction wave (left panel) or a shock wave (right panel). Regardless, we adopt the velocity of the right-going shock wave as the new shock velocity at $t_{n+1}$, i.e., $v_{sh}(t_{n+1})$. This closes a single time step. We go back to Step~3 and repeat the procedure for the next time step. The iteration is terminated when the designated time is reached. \subsection{Miscellaneous} \label{subsec:sec2rangeofappli} In this study we fix the neutrino luminosity, mass accretion rate and mass of the central object during the time evolution for simplicity. It should be stressed, however, that the semi-dynamical model can handle the time dependence of these parameters with no difficulty. It is also important to note that the only difference between our model and full (1D) hydrodynamical simulations is how the hydrodynamical quantities just behind the shock wave are obtained. Hence the accuracy of our model depends entirely on the validity of the ``locally-steady'' approximation. As will be demonstrated in \S\ref{sec:demon} and Appendix~\ref{ape:reliablocalappro}, it looks reasonably good in general. Since the post-shock flows are subsonic, they are always affected by the physical conditions of inner regions in principle.
This will be particularly so if the relaunched shock wave stalls again. On the contrary, if the shock continues to propagate outward briskly, we expect that our approximation will work reasonably well. \section{Results} \label{sec:demon} In this section we apply the semi-dynamical method to shock revival in post-bounce supernova cores. We first study the characteristics of the solutions and demonstrate the existence of critical fluctuations. Then we investigate the dependence of the critical fluctuation amplitude for shock revival on some key parameters, which is our main result in this paper. \subsection{Characteristics of shock evolutions} \label{subsec:validity} \begin{figure*} \vspace{15mm} \epsscale{1.0} \plotone{f2.eps} \caption{The evolutions of the shock wave for the fiducial background model. The left panel shows the shock radii as a function of time whereas the right panel presents the corresponding shock velocities. The three lines with different colors in each panel correspond to different initial shock velocities. See the text for more details. \label{f2}} \end{figure*} We first investigate the evolutions of the shock wave after the addition of perturbations of different amplitudes to the fiducial background model, which is characterized as follows: $L_{52} (\equiv L_{\nu}/({10^{52}}$erg/s))$=5$, $\dot{M}_{sun} (\equiv - \dot{M}/(1 M_{\odot}$/s))$=1$ and $M_{in} = 1.4 M_{\odot}$ (see section~\ref{subsec:stage1}), with $L_{\nu}$, $\dot{M}$ and $M_{in}$ denoting the neutrino luminosity, mass accretion rate and mass of the central object, respectively. In this fiducial model the standing shock wave is located at $r_{sh}=1.5 \times 10^{7}$cm. Figure~\ref{f2} shows three representative evolutions of the shock radius (left panel) and velocity (right panel). As demonstrated in the left panel, the shock wave either settles down to a new position (red line) or sustains its outward propagation (green and blue lines). The green line is in fact the dividing line between the two cases.
These lines correspond to different initial shock velocities, with the blue and green lines having the largest and smallest initial shock velocities, respectively. We refer to the initial perturbation for the green line as the critical shock velocity. Its value is $v_{sh(crit)} = 1.65 \times 10^{9}$cm/s in this case. As is evident from the right panel, the out-going shock wave is decelerated initially in all cases. In the model shown by the red line, which is given half the critical shock velocity initially, the shock wave decelerates continuously until it comes to a halt at $t \sim 40{\rm ms}$. On the contrary, in the models shown by the green and blue lines, the latter of which is given twice the critical shock velocity at the beginning, the shock wave begins to accelerate at some point in time ($t \sim 0.2{\rm s}$ for the green line and $t \sim 0.01{\rm s}$ for the blue line) and maintains its outward motion up to the end of the calculation. It is also clear that, among the models that do not fizzle out, the shock wave evolves faster for larger initial shock velocities. The deceleration is efficient until the shock wave reaches $r_{sh} \sim 10^{8}{\rm cm}$ (see Figure~\ref{f2}). In fact, it seems that the shock waves that cannot cover this distance fail to revive and vice versa. The shock deceleration in the early phase of shock revival is consistent with the fact that the fiducial background model is linearly stable against radial perturbations. Note that the linear stability of spherically symmetric, shocked accretion flows can be judged by Nakayama's criterion \citep{1994MNRAS.270..871N,1996MNRAS.281..226N}, which states that such flows are linearly stable if the post-shock matter is decelerated (see also \citet{2007ApJ...656.1019Y}). We confirm that this is indeed the case for the fiducial model employed here. 
Of the three models shown in Figure~\ref{f2}, the failed case (red line) has relatively small initial perturbations, and linear analysis should be applicable. For the other two cases, which are given larger perturbations, linear analysis may not be applicable. We think, however, that the cause of the shock deceleration can be understood in the same way in these cases as well, at least qualitatively. It is then reassuring that the results of the semi-dynamical method are consistent with the linear stability analyses. \begin{figure*} \vspace{15mm} \epsscale{1.0} \plotone{f3.eps} \caption{Momentum fluxes just ahead of and behind the shock wave if it were moved to the position specified by the horizontal axis. The shock radius is normalized by the original radius, $r_{sh(0)}$. The vertical axis denotes the momentum flux normalized by the value at $r_{sh(0)}$. The red solid lines give the post-shock values whereas the green dotted lines present the pre-shock ones. The left panel shows the momentum flux imbalance for the fiducial background model whereas the other two panels represent different background models with the neutrino luminosity and mass accretion rate displayed in each panel. Both models have larger initial shock radii than the fiducial model: $r_{sh(0)}=9.8 \times 10^{7}{\rm cm}$ for the model in the middle panel whereas $r_{sh(0)}= 6.4 \times 10^{7}{\rm cm}$ for the model in the right panel. \label{f3}} \end{figure*} The shock deceleration may be understood in yet another way. \citet{1994PASJ...46..257N} claimed that the stability of a standing shock wave can be judged by the momentum-flux imbalance between the pre- and post-shock flows. Of course the two fluxes are exactly equal to each other for the initial standing shock wave. If the shock wave is shifted slightly outward from the original position, then the momentum-flux balance is lost and the shock wave will be either pulled back or pushed further out. 
The former occurs if the momentum flux in the pre-shock flow is larger than that in the post-shock flow, and implies the stability of the shock wave. The latter case, on the other hand, corresponds to instability. Note that this stability criterion is consistent with Nakayama's (see e.g. \citet{2008ApJ...689..391N,2009ApJ...696.2026N,2010ApJ...711..222N}). It is also expected that the greater the imbalance of the momentum fluxes, the stronger the deceleration or acceleration will be. We apply this criterion to the shock deceleration at $r_{sh} \lesssim 10^{8}{\rm cm}$. We show in Figure~\ref{f3} the momentum fluxes just ahead of and behind the shock wave if it were moved by the initial perturbation to the specified position. The three panels correspond to different background models (cf. Figure~10 in \citet{1994PASJ...46..257N}). The horizontal axis denotes the radius normalized by the unperturbed shock radius $r_{sh(0)}$, whereas the vertical axis expresses the momentum flux normalized by the value at $r_{sh(0)}$. The red (green) lines give the momentum fluxes just behind (ahead of) the shock wave. The post-shock momentum flux is calculated by extending the steady post-shock flow up to the perturbed shock front. On the other hand, the pre-shock momentum flux is obtained by assuming that the pre-shock flow is unaffected by the perturbation. The left panel of Figure~\ref{f3} corresponds to the fiducial background model. As is clear from the figure, the exact momentum balance is satisfied at the original shock location whereas the pre-shock momentum flux overwhelms the post-shock momentum flux as the shock wave is moved outward. In this case, as already mentioned, the shock wave will tend to be pulled back to the original position, i.e., the shock wave will experience deceleration. We also note that the strength of deceleration differs among the background models. 
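The criterion just described amounts to comparing the momentum flux $\rho v^{2} + p$ on the two sides of the displaced shock front. A minimal numerical sketch of this comparison, with purely illustrative power-law profiles (a cold free-fall pre-shock flow and an assumed steeper post-shock profile) standing in for the actual background models:

```python
import numpy as np

def momentum_flux(rho, v, p):
    # momentum flux rho*v^2 + p entering the jump condition at the shock
    return rho * v**2 + p

# hypothetical profiles versus shock displacement x = r_sh / r_sh(0);
# the exponents are illustrative only, not taken from the actual models
x = np.linspace(1.0, 1.5, 51)
flux_pre = momentum_flux(rho=x**-1.5, v=x**-0.5, p=0.0 * x)  # free fall: rho*v^2 ~ r^-5/2
flux_post = x**-3.5   # steady post-shock flow extended to the displaced front

# the fluxes balance exactly at the original position (x = 1), while for
# x > 1 the pre-shock flux dominates, so the displaced shock is pulled back
pulled_back = flux_pre[1:] > flux_post[1:]
```

Wherever the pre-shock flux exceeds the post-shock one, the displaced shock is pulled back toward its original position, as in the left panel of Figure~\ref{f3}.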
The middle and right panels of Figure~\ref{f3} correspond to cases either with a higher neutrino luminosity or with a lower mass accretion rate than the fiducial case. In both cases, the unperturbed shock waves are located at larger radii than in the fiducial model. As is evident in these panels, the momentum flux imbalance between the pre- and post-shock flows is smaller than in the fiducial model. This indicates that the pull-back force that decelerates the shock wave gets weaker as the original shock radius becomes larger, a fact that is important in analyzing the critical fluctuation for shock revival in the next section. \subsection{The Critical Fluctuation} \label{subsec:critfluctu} In the previous section, we showed the existence of the critical shock velocity for the fiducial model, i.e., if the shock velocity imparted initially is larger than this value, the shock continues to propagate outward even if the neutrino luminosity is smaller than the critical value. In this subsection, we perform a large number of calculations for different background models, determining the critical shock velocity for each model, and then analyze their properties in detail. \begin{figure*} \vspace{15mm} \epsscale{1.0} \plotone{f4.eps} \caption{The critical fluctuations as a function of the initial shock radius. The horizontal axis denotes the original shock radii in the unperturbed steady accretion flows, whereas the vertical axis represents the normalized critical fluctuations, which are defined by Eq.~(\ref{eqfluctu}). In the left panel the red line represents the sequence with a fixed neutrino luminosity ($L_{52}=5$) whereas the green line corresponds to the series with a fixed mass accretion rate ($\dot{M}_{sun} = 1$). The mass of the central object is set to $M_{in}=1.4 M_{\odot}$. The dotted line shows the pressure fluctuations that give the vanishing post-shock flow velocity. 
The star symbol displays the result of the 2D simulation detailed in Appendix~\ref{ape:dynamicalsimulation}. In the right panel, on the other hand, we compare the results for different masses of the central object. The red line represents the model with $M_{in}=1.4 M_{\odot}$ whereas the green line corresponds to the model with $M_{in}=1.2 M_{\odot}$. The mass accretion rate is fixed ($\dot{M}_{sun} = 1$) and the neutrino luminosity is varied in this case. \label{f4}} \end{figure*} We prepare the unperturbed steady accretion flows as follows: we fix the mass of the central object to $M_{in} = 1.4 M_{\odot}$; in one sequence we vary the mass accretion rate with a fixed neutrino luminosity (the red line in the left panel of Figure~\ref{f4}) whereas we fix the mass accretion rate and change the neutrino luminosity in another series (the green line in the same figure). Note that in these sequences of background models, the location of the shock wave ($r_{sh}$) has a one-to-one correspondence either with the mass accretion rate (for the red line) or with the neutrino luminosity (for the green line). The dependence on the mass of the central object will be studied later. Bearing in mind that the initial perturbations are generated by hydrodynamical instabilities in reality, we refer to the dimensionless perturbation in the post-shock pressure, which is defined to be \begin{eqnarray} && f \equiv (p - p_{0})/p_{0} \label{eqfluctu}, \end{eqnarray} as the fluctuation hereafter. In this expression $p$ and $p_{0}$ stand for the post-shock pressures of the perturbed and unperturbed flows, respectively. The value of the fluctuation that corresponds to the critical shock velocity is called the critical fluctuation. Figure~\ref{f4} shows the critical fluctuations $f_{crit}$ as a function of the initial radius of the shock wave ($r_{sh}$) for the two series of background models mentioned above. 
The critical fluctuation for each background model is obtained by many trials of semi-dynamical calculations with various initial shock velocities. It is beyond the scope of this paper to address the nature of the fluctuations, e.g. kinematic or thermal. We infer, however, that the shock wave could be revived equally well by an increase in temperature (or internal energy). It is noted that an increase in the post-shock temperature by $\sim 20 \%$ corresponds to the rise in the internal energy that we see for the critical shock velocity. As we can see from the left panel of Figure~\ref{f4}, the critical fluctuation becomes smaller with increasing shock radius. It is interesting that the two lines are almost identical in the range of $10^{7}{\rm cm} \lesssim r \lesssim 6 \times 10^{7}{\rm cm}$. This is the most important region in discussing shock revival, since it roughly corresponds to the typical location of the stagnated shock wave. The above results suggest that the critical fluctuation is mainly determined by the shock location and that the neutrino luminosity and mass accretion rate do not directly dictate shock revival; instead, they are indirectly important in the sense that they determine the initial location of the standing shock wave. The reason why large radii of the standing shock wave are advantageous for shock revival can be understood from the momentum-flux imbalance. As explained in \S~\ref{subsec:validity} and Figure~\ref{f3}, the pull-back force exerted on the shock wave weakens with the radius of the standing shock wave and hence smaller fluctuations are sufficient for shock revival at larger distances. It is interesting to note that \citet{2005ApJ...623.1000Y} observed that the shock radius at the critical point does not differ much among various models, which suggests that the shock location is the important quantity in determining the critical point. 
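The many trials mentioned above amount, schematically, to a bisection on the revived/stalled outcome of each run. In the sketch below the function standing in for a full semi-dynamical calculation is a toy, with its threshold set to the fiducial critical value quoted earlier purely for illustration:

```python
def shock_revives(v_init, v_thresh=1.65e9):
    # toy stand-in for a full semi-dynamical run: in reality this integrates
    # the shock motion and reports whether outward propagation is sustained
    return v_init >= v_thresh

def critical_velocity(lo=0.0, hi=1.0e10, tol=1.0e5):
    # bisection on the (monotonic) revived/stalled outcome, in cm/s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shock_revives(mid):
            hi = mid   # revived: the critical value lies at or below mid
        else:
            lo = mid   # stalled: the critical value lies above mid
    return 0.5 * (lo + hi)
```

With the toy outcome function, the bisection recovers the assumed threshold to within the requested tolerance.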
\citet{2008ApJ...688.1159M} conducted experimental simulations in multi-dimensions and found that a larger neutrino luminosity induces a greater amplitude of shock oscillations, which is also consistent with our interpretation, since high neutrino luminosities result in large shock radii and the pull-back force acting on the shock wave is then weak. We next vary the mass of the central object to see how our findings depend on it. Fixing the PNS mass is admittedly inconsistent with the accretion flows with the substantial mass accretion rates adopted in this paper. The semi-dynamical method considers only the vicinity of the shock wave. This means that we cannot trace the evolution of the PNS mass precisely, since the mass accretion rate ahead of the shock is different from that at the PNS (note that the post-shock flow is no longer steady). Although only the enclosed mass is important in spherical symmetry, and we could evaluate it by time-integrating the mass accretion rate, we do not think even this is necessary at the current level of approximation. Instead we have chosen to change the assumed (constant) PNS mass and study the dependence of the critical fluctuations on it. The results are shown in the right panel of Figure~\ref{f4}. This time we investigate the sequence of steady accretion flows in which the mass accretion rate is fixed ($\dot{M}_{sun} = 1$) and the neutrino luminosity is varied. The result would not be changed if the sequence with the fixed neutrino luminosity were studied. The red line in the figure represents the critical fluctuations for $M_{in} = 1.4 M_{\odot}$ (identical to the green line in the left panel of Figure~\ref{f4}) whereas the green line denotes the critical fluctuations for $M_{in} = 1.2 M_{\odot}$. 
Although the difference between the two cases is not so large, the critical fluctuation for $M_{in} = 1.4 M_{\odot}$ is systematically larger by several percentage points than that for $M_{in} = 1.2 M_{\odot}$ at the same initial shock radius. This is mainly attributed to the fact that gravity is stronger for the heavier central object, which then leads to the greater ram pressure. The result clearly demonstrates that larger masses of the central core are unfavorable for shock revival, which is consistent with the analysis based on the critical luminosity by \citet{2012arXiv1202.2359K}. We find that the results given in Figure~\ref{f4} can be approximated by \begin{eqnarray} && f_{crit} \sim 0.8 \times \left(\frac{M_{in}}{1.4M_{\odot}}\right) \times \left\{ 1 - \left(\frac{r_{sh}}{10^{8} {\rm cm}}\right) \right\}. \label{eqfluctucrit} \end{eqnarray} This simple analytic expression will be useful in analyzing the onset of explosion in full dynamical simulations, since the fluctuation $f$ can be easily estimated at each time step. Note that $p_{0}$ is obtained from the values of the hydrodynamical quantities just ahead of the stalled shock wave by using the Rankine-Hugoniot relation for $v_{sh}=0$. We expect that if the fluctuation $f$ at a certain time step in a simulation does not exceed $f_{crit}$ estimated this way, the shock wave does nothing but oscillate around its average position. It is interesting to compare our results with those of \citet{2012ApJ...749..142F}, who proposed a sufficient condition for shock revival in spherical symmetry. According to their analysis, the shock wave starts a runaway expansion when a portion of the fluid in the post-shock flow achieves positive energy. They also find that this happens when the post-shock velocity becomes positive. Motivated by these findings, we calculate for various background models the fluctuations that give the vanishing fluid velocity just behind the shock wave. 
If the unperturbed state is unstable to a radial overstabilizing mode and the criterion for shock revival by \citet{2012ApJ...749..142F} holds, the fluctuation obtained this way should be the true critical fluctuation. We find that these fluctuations are smaller by a factor of 2--3 than the critical fluctuations obtained by the semi-dynamical method (see Figure~\ref{f4}). This difference may be attributed to the approximation employed in the semi-dynamical approach, i.e. the assumption that the post-shock flows are determined locally, which seems indeed to be a rather poor approximation for shock revival by the overstabilization mode. In this sense, we may claim that our estimate of the critical fluctuation (Eq.~(\ref{eqfluctucrit})) is conservative. It is worth noting that shock revival may in reality be induced not by the overstabilization but by multi-dimensional instabilities. We confirm by 1D and 2D hydrodynamical simulations (see Appendix D) that the fiducial model in this paper is stable to the overstabilizing mode and does not produce shock revival in 1D, whereas large-amplitude fluctuations generated by SASI and/or neutrino-driven convection revive the stalled shock wave in 2D. This is consistent with the previous papers \citep{2006ApJ...641.1018O,2007ApJ...656.1019Y}. The 2D model is also used to demonstrate a possible application of the critical fluctuation to the analysis of multi-D shock revival. In Figure~\ref{f10}, we show the pressure distributions along a certain radial ray around the time of shock revival in the 2D simulation. It is interesting that the pressure fluctuation obtained from this figure is comparable to the critical fluctuation given by Eq.~(\ref{eqfluctucrit}), as shown in Figure~\ref{f4}. Note, however, that this is just a single demonstration and a systematic comparison with full 2D and 3D simulations is needed before we can make any quantitative assertion. Nonetheless the result certainly warrants further investigations. 
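Since the fit of Eq.~(\ref{eqfluctucrit}) is meant to be evaluated at each time step of a simulation, a minimal sketch of that bookkeeping may be useful (the function and variable names are ours, for illustration only; masses in solar units, radii in cm):

```python
def f_crit(m_in, r_sh):
    # critical pressure fluctuation from the fitting formula:
    # f_crit ~ 0.8 * (M_in / 1.4 Msun) * {1 - r_sh / (1e8 cm)}
    return 0.8 * (m_in / 1.4) * (1.0 - r_sh / 1.0e8)

# fiducial model: M_in = 1.4 Msun, shock stalled at r_sh = 1.5e7 cm,
# so f_crit = 0.8 * 1.0 * 0.85 = 0.68
f = f_crit(1.4, 1.5e7)
```

Comparing the measured fluctuation $f = (p - p_{0})/p_{0}$ at a given time step with this value then indicates whether the shock is expected to run away or merely oscillate.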
It is also important to note that the critical fluctuation given by Eq.~(\ref{eqfluctucrit}) is a conservative estimate, since we ignore the time evolution of the mass accretion rate in this analysis. Since the mass accretion rate becomes smaller with time in reality, the ram pressure at the shock front, which is one of the main obstacles to shock propagation, should be smaller, which would likely help the shock wave move outward. In fact, we confirm in another series of semi-dynamical calculations that a decrease in the mass accretion rate can induce shock revival (see Appendix~\ref{ape:decreaseacrate}). \section{Summary and Discussions} \label{sec:discussion} \begin{figure*} \vspace{15mm} \epsscale{1.0} \plotone{f5.eps} \caption{Radii of standing shock waves in unperturbed, steady accretion flows. In the left panel, we vary the mass accretion rate for fixed neutrino luminosities whereas in the right panel, we fix the mass accretion rate and change the neutrino luminosity for each line. \label{f5}} \end{figure*} In this paper, we have developed a new approach to shock revival in the neutrino heating mechanism of core-collapse supernovae. The semi-dynamical model takes into account the temporal evolution of the shock wave induced by fluctuations that are possibly generated by instabilities. The equation of motion of the shock wave is modeled in a finite-difference manner, with a Riemann problem being set up at each time step. In so doing the post-shock flows are approximated by locally steady flows, which are indeed observed in hydrodynamical simulations. The method is very efficient for conducting a large number of model calculations of the long-term shock evolution. As a result, we have demonstrated that there is a critical fluctuation for a given steady accretion flow, above which the shock can sustain outward propagation, leading hopefully to a successful supernova explosion. 
Varying the neutrino luminosity or mass accretion rate for different masses of the central object, we have obtained a simple fitting formula for the critical fluctuation as a function of the initial shock radius and the mass of the central object (see Eq.~(\ref{eqfluctucrit})). According to our results, the critical fluctuation decreases with an increase in the shock radius and/or a decrease in the mass of the central object. In particular, we have found that the initial shock radius is the key parameter for shock revival. It should be noted that in our models the neutrino luminosities are all sub-critical and there is a steady shocked accretion flow for a given pair of neutrino luminosity and mass accretion rate. In fact, even if the shock wave is stagnated around $r_{sh} \sim 10^{7}{\rm cm}$, in which case the neutrino luminosity is much smaller than the critical value, a large enough fluctuation can put the shock wave into sustained outward propagation. We have hence concluded that although the neutrino luminosity and mass accretion rate are important in determining the initial location of the stagnated shock wave, it is the dynamical fluctuations that have direct leverage on shock revival. Based on our findings in this paper, we now discuss possible implications for the neutrino heating mechanism in multi-dimensions. According to the recent results of 3D core-collapse simulations by \citet{2012ApJ...749...98T,2012arXiv1210.5241D}, the maximum residency time of accreting matter in the gain region is longer in 3D than in 2D owing to the extra degree of freedom in motion. This leads to longer neutrino heating, which then results in larger average radii of the stagnated shock wave in 3D than in 2D. According to our findings in this paper, a larger shock radius means a smaller critical fluctuation for shock revival. In this sense, at least, 3D models are more favorable for successful explosions. 
However, the fluctuations generated by SASI or neutrino-driven convection are smaller in 3D than in 2D, since the free energy of turbulence can be distributed over a larger number of oscillation modes in 3D \citep{2010ApJ...720..694N,2008ApJ...678.1207I}. The inverse-cascading nature of 2D turbulence may also contribute \citep{2012ApJ...759....5B}. In fact, the sloshing modes, which are always observed markedly in 2D axisymmetric simulations, are not so remarkable in 3D, where non-axisymmetric modes are dominant. These two factors (the initial shock location and the amplitude of fluctuations in hydrodynamical instabilities) compete with each other and make it difficult to unravel the net effect of 3D hydrodynamics, a fact that may lie behind the current controversy between different groups~\citep{2010ApJ...720..694N,2011arXiv1108.4355H,2012ApJ...749...98T}. Although it is beyond the scope of this paper to settle the dispute on the 3D effect, we can add some considerations that may be useful. Figure~\ref{f5} shows the locations of the standing shock wave in unperturbed steady accretion flows as a function of the mass accretion rate (left panel) or neutrino luminosity (right panel). In the former the neutrino luminosity is fixed whereas in the latter the mass accretion rate is fixed. As shown clearly for both cases in this figure, the location of the standing shock varies very rapidly as the mass accretion rate (left panel) or neutrino luminosity (right panel) exceeds a certain value. For instance, the blue line in the left panel, for which the neutrino luminosity is fixed at $L_{52}=5$, shows that $r_{sh}$ is almost constant as long as $|\dot{M}_{sun}| \gtrsim 1$ whereas it grows rapidly for $|\dot{M}_{sun}| \lesssim 0.5$. A similar trend is evident in the right panel if one replaces the mass accretion rate with the neutrino luminosity. 
This may imply that if the average shock radius is already in this rapidly changing regime, the shock radius will be the more important of the two competing factors and 3D may be more advantageous for shock revival. If the shock wave is stagnated at rather small radii, on the other hand, the larger fluctuations in 2D will be more important in providing favorable conditions for shock revival. In spite of its simplicity, the semi-dynamical approach we have employed in this paper can capture the essential features of the shock dynamics, such as linear stability. All the results we have obtained are of a qualitative nature, and the critical fluctuations will be somewhat changed if more realistic treatments of hydrodynamics and microphysics are incorporated. We believe, however, that the existence of the critical fluctuation and its qualitative dependence on the shock radius and the mass of the central object will not be changed. It is true that there are some limitations to the semi-dynamical approach in this paper. In addition to the fact that it is 1D, some features of shock dynamics, such as re-stagnations, certainly cannot be described, since the locally-steady approximation will not be valid. More detailed comparison with multi-D hydrodynamical simulations will reveal both the merits and demerits of the approach, which will be the issue of our forthcoming paper. \acknowledgements This work was supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan (21540281, 24244036, 22540297, 24740165) and the HPCI Strategic Program of Japanese MEXT.
\section{Introduction} Graphene \cite{Novoselov05,CastroNeto07}, a two-dimensional carbon-based material forming a honeycomb lattice, has attracted a lot of attention since its experimental isolation was shown to be possible \cite{Novoselov04,Berger04}. It is a gapless semiconductor in which, near half filling, electrons behave like massless Dirac particles, obeying a linear dispersion relation. Among the unusual properties of this two-dimensional carbon material stand out very distinctive quantum Hall properties, in particular the $\sqrt{n}$ dependence of the energy on the Landau level number $n$ and the existence of a Landau level with zero energy, which is associated with the presence of a Berry phase \cite{Novoselov05,Zhang05}. The existence of this Berry phase and its implications for the Landau levels have been discussed in many places in different contexts (see e.g.\ \cite{Ando98,Mikitik98,Zhang05}). The direct connection between the Berry phase and the observable quantities under discussion is, however, not always as transparent as one may wish, and situations where a position-dependent electrostatic potential or mass term is introduced, either because of disorder or because one would like to confine the electrons into a finite region of space, are usually not addressed. The aim of this paper is to revisit this question of the Berry phase in graphene from a semiclassical, and more specifically semiclassical Green's function, perspective. For the sake of clarity, our emphasis in the present work will be more on providing this new point of view, and we shall therefore mainly illustrate it with the discussion of the standard problem of the Landau levels of electrons in a perpendicular and uniform magnetic field. 
Even in this familiar framework, we shall see, however, that our semiclassical approach makes it possible to address some non-trivial questions, such as the role of the Berry phase in situations for which a small mass term has to be included, opening in this way a gap at the Dirac point. This article is organized as follows. In section~\ref{sec:deriv}, we derive, following closely the formalism of Bolte and Keppeler \cite{Bolte99}, the expression for the semiclassical Green's function in graphene. In particular we discuss in detail the origin of the term corresponding to the Berry phase. These results are extended in section~\ref{sec:bilay&GTF} to a bilayer of graphene. We furthermore provide, both for the monolayer and the bilayer cases, the expression of the Gutzwiller trace formula for the semiclassical density of states, valid when classical periodic orbits are isolated. As an illustration of the Green's function formalism, we then apply it in section~\ref{sec:landau} to the computation of Landau levels for a graphene sheet in a constant magnetic field. We will see in particular that the modifications brought in by, for instance, trigonal warping are easily included within our semiclassical formalism. We then come back in section~\ref{sec:1/2classvsquant} to the discussion of the relationship between the semiclassical ``Berry-like'' phase obtained in our approach and the adiabatic Berry phase \cite{Berry84} usually discussed in this context. \section{Semiclassical Green's function for graphene} \label{sec:deriv} Starting from a tight-binding nearest-neighbor model, the graphene Hamiltonian at low energies can be obtained by expanding the momentum near the Dirac points ${\bf K}$ and ${\bf K}'$ of the Brillouin zone. 
For pure graphene, one obtains in this way in momentum representation \cite{Wallace47,Slonczewski58,McClure56,Goerbig06} \begin{equation} \label{eq:Hog} {\cal H}^0_g = v_{\scriptscriptstyle F}(\alpha \sigma_x p_x + \sigma_y p_y) = v_{\scriptscriptstyle F}\begin{pmatrix} 0 & {\alpha} p_x - i p_y \\ {\alpha} p_x + i p_y & 0 \\ \end{pmatrix} \; , \end{equation} where the matrix structure originates from the existence of two sub-lattices (denoted $A$ and $B$ below) in the graphene honeycomb structure. In this equation, $v_{\scriptscriptstyle F} = {3ta}/({2\hbar})$ is the Fermi velocity, with $t$ the hopping parameter and $a$ the lattice constant, $\alpha$ is the valley index ($\alpha = \pm 1$) labelling the two inequivalent points ${\bf K}$ and ${\bf K}'$ in the Brillouin zone (not to be confused with the sub-lattice index), ${\bf p}$ is the momentum measured from these points, and $\sigma_{x,y}$ are Pauli matrices. This linear approximation to the graphene Hamiltonian will be valid as long as the condition $|{\bf p}| \ll {\hbar}/{a}$ is fulfilled. We are interested here in a more general situation than the one of pure graphene, and would like to consider the case where, because of either disorder or the need to confine the electrons in some part of the graphene sheet, an electrostatic potential $U({\bf r})$ and/or a [possibly position dependent] mass $m({\bf r})$ have to be taken into account. We will not consider however tunneling contributions related to the Klein paradox, or boundary effects that may occur at the (zigzag, armchair, or generic) edges of the graphene sample. 
The graphene Hamiltonian then takes the more general form \begin{equation} \label{eq:hamgen} {\cal H}_g = v_{\scriptscriptstyle F}(\alpha \sigma_x \hat \Pi_x + \sigma_y \hat \Pi_y) + U({\bf r}) \mbox{.\bfseries \large 1}_2 + m({\bf r})v_{\scriptscriptstyle F}^2\sigma_z \; , \end{equation} in which the magnetic field ${\bf B}({\bf r}) = {\bf \nabla} \times {\bf A}({\bf r})$ (if any) is taken into account by the Peierls substitution \begin{equation} \hat {\bf p} \to \hat {\bf \Pi} = \hat {\bf p} + e {\bf A} ({\bf r}) \; , \end{equation} with ${\bf A}({\bf r})$ the vector potential and $\hat {\bf p} \equiv -i\hbar\frac{\partial}{\partial {\bf r}}$. For this problem, the Green's function $G({\bf r}'',{\bf r}')$ is actually a $2 \times 2$ matrix defined by the differential equation \begin{equation} \label{eq:diff} (E\mbox{.\bfseries \large 1}_2 - {\cal H}_g)G({\bf r}'',{\bf r}';E) = \delta({\bf r}'' - {\bf r}')\mbox{.\bfseries \large 1}_2 \end{equation} (where ${\cal H}_g$ is applied to the variable ${\bf r}''$). To obtain a semiclassical solution of this equation, we shall proceed in two steps. First, assuming ${\bf r}''$ is far from the source location ${\bf r}'$, we solve semiclassically (i.e. in the WKB approximation) the Schr\"odinger equation \begin{equation} \label{eq:schroe} (E\mbox{.\bfseries \large 1}_2 -{\cal H}_g)G=0 \; . \end{equation} In a second stage we match this general solution to the exact Green's function of the ``free'' (i.e.\ with constant potential and mass) problem, valid near the singularity ${\bf r}'$. We proceed now with this derivation. \subsection{Far from the singularity: the WKB approximation} Following \cite{Bolte99}, we seek a semiclassical solution of eq.~(\ref{eq:diff}) with $G$ of the form \begin{equation} \label{eq:semiGr} G({\bf r}'',{\bf r}';E) = \Gamma({\bf r}'',{\bf r}') \exp\left[ {\frac{i}{\hbar}S({\bf r}'',{\bf r}')} \right], \end{equation} where $\Gamma$ is a $2 \times 2$ matrix. 
To lighten the notation, we drop for now the explicit dependence in the source position ${\bf r}'$. Inserting (\ref{eq:semiGr}) into (\ref{eq:schroe}) and expanding in $\hbar$ the resulting expression, we obtain at order $O(\hbar^0)$ \begin{equation} \left( E\mbox{.\bfseries \large 1}_2 - H(\frac{\partial S}{\partial {\bf r}''},{\bf r}'') \right) \Gamma({\bf r}'') = 0 \; , \label{eq:order0} \end{equation} and at order $O(\hbar^1)$ \begin{equation} \frac{\partial H}{\partial {\bf p}} \cdot \frac{\partial }{\partial {\bf r}''}\Gamma({\bf r}'') = v_{\scriptscriptstyle F} (\alpha\sigma_x\frac{\partial}{\partial x''} + \sigma_y\frac{\partial}{\partial y''})\Gamma({\bf r}'') = 0 \; , \label{eq:order1} \end{equation} where $H({\bf p},{\bf r})$ is the classical symbol associated with the quantum Hamiltonian ${\cal H}_g$. This classical Hamiltonian can be diagonalized, with the eigenvalues \begin{equation} \label{eq:Hclass} H^{\pm}({\bf p},{\bf r})=U({\bf r}) \pm\sqrt{m^2({\bf r})v_{\scriptscriptstyle F}^4+v_{\scriptscriptstyle F}^2{\bf \Pi}^2} \end{equation} and the corresponding normalized eigenvectors $V^{\pm}({\bf p},{\bf r})$ (whose explicit expressions are given in appendix~\ref{sec:appA}). Writing the matrix $\Gamma({\bf r}'')$ as $[V^{\pm}(\frac{\partial S}{\partial {\bf r}''},{\bf r}'') \cdot{\tilde \Gamma}^{\pm}({\bf r}'') ]$, with ${\tilde \Gamma}^{\pm}$ a $1 \times 2$ matrix, the order $\hbar^0$ equation becomes \begin{equation} \label{eq:HJchar} E-H^{\pm}(\frac{\partial S}{\partial {\bf r}''},{\bf r}'')=0 \; , \end{equation} where the $\pm$ sign must be taken according to the sign of $E-U({\bf r}'')$. Eq.~(\ref{eq:HJchar}) is the usual scalar Hamilton-Jacobi equation, which can be solved by the method of characteristics \cite{Maslov81}. 
This amounts to constructing a 2-dimensional Lagrangian manifold ${\cal L}$ (in the 3-dimensional energy surface in phase space) built as a 1-parameter family of trajectories following the classical equations of motion \begin{eqnarray*} \dot { {\bf r}} & = & \frac{\partial H^{\pm}}{\partial {\bf p}}({\bf p},{\bf r}) \; , \\ \dot { {\bf p}} & = & -\frac{\partial H^{\pm}}{\partial {\bf r}}({\bf p},{\bf r}) \; . \end{eqnarray*} Given any such manifold, the action $S({\bf r}'') = \int^{{\bf r}''} {\bf p} d {\bf r}$, where the integral is taken on an arbitrary path on ${\cal L}$, is a solution of (\ref{eq:HJchar}). The specific Lagrangian manifold that will correspond to the proper boundary conditions for $G({\bf r}'',{\bf r}')$ near the source ${\bf r}'$ is the one obtained from the trajectories leaving ${\bf r}'$ with an arbitrary initial momentum ${\bf p}'$ at energy $E$: \begin{equation} \label{eq:manif} \begin{split} {\cal L}^{\pm} = & \{ ( {\bf p}(t), {\bf r}(t)), t \in [0,\infty), \\ & \mbox{such that } {\bf r}(0) = {\bf r}', \, H^{\pm}({\bf p}(0), {\bf r}(0))=E \} \end{split} \end{equation} (each point on the manifold is therefore parameterized by the time $t$ and the initial momentum $ {\bf p}(0)$). The corresponding action can then be expressed as \begin{equation} \label{eq:Sgreen} S^{\pm}({\bf r}'',{\bf r}') = \int^{{\bf r}''}_{{\bf r}'} {\bf p} \cdot \dot { {\bf r}} \, dt \end{equation} along a trajectory $({\bf p}(t),{\bf r}(t))$ joining ${\bf r}'$ to ${\bf r}''$ at energy $E$. 
Having obtained a solution of the $O(\hbar^0)$ equation, the prefactor $\tilde \Gamma$ is then determined by the $O(\hbar^1)$ equation~(\ref{eq:order1}), which, after multiplication on the left by $V^{\pm\dagger}(\frac{\partial S^{\pm}}{\partial {\bf r}''},{\bf r}'')$, can be expressed as $\Box {\tilde \Gamma}^{\pm} = 0$, where \begin{equation*} \Box \equiv \left( V^{\pm\dagger}(\frac{\partial S^{\pm}}{\partial {\bf r}''},{\bf r}'') \frac{\partial H}{\partial {\bf p}}.\frac{\partial }{\partial {\bf r}''} \right) V^{\pm}(\frac{\partial S^{\pm}}{\partial {\bf r}''},{\bf r}'') \; . \end{equation*} The operator $\Box$ can be decomposed as $\Box = \Box_{(1)} + \Box_{(2)}$ with \begin{equation} \label{eq:Box1} \Box_{(1)} = \left( V^{\pm\dagger}\frac{\partial H}{\partial {\bf p}}V^{\pm} \right) .\frac{\partial }{\partial {\bf r}''} \end{equation} and \begin{equation} \label{eq:Box2} \Box_{(2)} = V^{\pm\dagger}\frac{\partial H}{\partial {\bf p}}. \left( \frac{\partial V^{\pm}}{\partial {\bf r}''} \right) \; . \end{equation} Noting that first order perturbation theory implies $V^{\pm\dagger} ({\partial H}/{\partial {\bf p}}) V^{\pm} = ({\partial H^{\pm}}/{\partial {\bf p}}) $, one has straightforwardly that \begin{equation} \Box_{(1)} = \frac{\partial H^{\pm}}{\partial {\bf p}}.\frac{\partial }{\partial {\bf r}''} \end{equation} and that \begin{equation} {\rm Re}(\Box_{(2)}) = \frac{1}{2}\frac{\partial }{\partial {\bf r}''}. \left( V^{\pm\dagger}\frac{\partial H} {\partial {\bf p}}V^{\pm} \right) = \frac{1}{2}\frac{\partial }{\partial {\bf r}''}.\frac{\partial H^{\pm}}{\partial {\bf p}} \; . \end{equation} (Note here that with respect to spatial derivation, $H^\pm \equiv H^\pm({\bf r}'') = H^\pm((\partial S^{\pm}/\partial {\bf r}'') ,{\bf r}'')$). One recovers in this way, for the real part of $\Box$, the usual expression valid for a scalar quantum system \cite{Maslov81}, which is expected since it basically expresses the conservation of probability. 
The imaginary part of $\Box_{(2)}$, however, is not constrained by such a conservation law, since it affects only the phase of $\tilde \Gamma$; it encodes instead information about the adiabatic variation of the eigenvector $V^\pm$ along the followed trajectory. It therefore needs to be computed from the explicit expressions of the eigenvectors and eigenvalues of $H({\bf p},{\bf r})$. The details of the algebra are given in appendix~\ref{sec:appA}. One obtains \begin{equation} \label{eq:BK} \left( \frac{\partial H^{\pm}}{\partial {\bf p}}.\frac{\partial}{\partial {\bf r}''} + \frac{1}{2}\frac{\partial}{\partial {\bf r}''}.\frac{\partial H^{\pm}}{\partial {\bf p}} + iM^{\pm} \right) \tilde{\Gamma}^{\pm} = 0 \end{equation} with \begin{equation} \label{eq:Mmono} \begin{split} M^{\pm}= & \frac{ \alpha v_{\scriptscriptstyle F}^2}{2(E-U({\bf r}''))} \Big( e{\bf B}+ \\ & \frac{{\bf \Pi}\times \frac{\partial}{\partial {\bf r}''}(m({\bf r}'')v_{\scriptscriptstyle F}^2-U({\bf r}''))}{m({\bf r}'')v_{\scriptscriptstyle F}^2+E-U({\bf r}'')} \Big) .{\bf e}_z \end{split} \end{equation} (${\bf e}_z$ is the unit vector in the direction perpendicular to the graphene sheet). 
In the absence of the complex term $iM$, the scalar transport equation $\left( \frac{\partial H^{\pm}}{\partial {\bf p}}.\frac{\partial}{\partial {\bf r}''} + \frac{1}{2}\frac{\partial}{\partial {\bf r}''}.\frac{\partial H^{\pm}}{\partial {\bf p}} \right) {\gamma}^{\pm} = 0$ has the usual solution \cite{Maslov81} \begin{eqnarray} \gamma^{\pm} & = & C\frac{\exp({-i\frac{\pi}{2}\mu^{\pm}})}{\sqrt{|{\text J}^{\pm}({\bf r}'',{\bf r}')|}} \\ J^{\pm}({\bf r}'',{\bf r}') & = & -{\dot r''}_{\|} {\dot r'}_{\|} \left( \frac{\partial^2 S^{\pm}}{\partial r''_{\bot} \partial r'_{\bot}} \right)^{-1} \nonumber \\ & = & {\dot r''}_{\|} {\dot r'}_{\|} \left( \frac{\partial r''_\bot }{\partial p'_{\bot}} \right) \label{eq:J} \end{eqnarray} where $r_{\|}$ and $r_{\bot}$ are the coordinates parallel and transverse to the trajectory (actually Eq.~(\ref{eq:J}) remains valid for any system of coordinates) and $\mu^{\pm}$ is the Maslov index counting the (algebraic) number of caustic points. Writing \begin{equation*} \tilde{\Gamma}^{\pm} = \gamma^{\pm}\Sigma^{\pm} \end{equation*} we obtain that \begin{equation*} (\frac{\partial H^{\pm}}{\partial {\bf p}}.\frac{\partial}{\partial {\bf r}''} + iM^{\pm})\Sigma^{\pm} \equiv (\frac{d}{dt} + iM^{\pm})\Sigma^{\pm} = 0 \end{equation*} and therefore $\Sigma^{\pm}(t) = \exp\left(i \xi_{\rm sc} \right) \, \Sigma^{\pm}(t=0)$, with \begin{equation} \label{eq:gammasc} \xi_{\rm sc} = - \int_{0}^{t}M^{\pm}( {\bf p}(t') \; , {\bf r}(t'))dt' \; . 
\end{equation} Summing the contributions corresponding to different orbits $j$ joining ${\bf r}'$ to ${\bf r}''$ we get \begin{equation} \label{eq:Greensemi} \begin{split} G({\bf r}'',& {\bf r}';E) = \sum_{j:{\bf r}' \to {\bf r}''} \gamma^{\pm}_j V^{\pm}_j({\bf r}'') \Sigma_j^{\pm}(t=0) \\ & \exp\left( \frac{i}{\hbar}S^{\pm}_j({\bf r}'',{\bf r}') - i \int_{0}^{t_j} M^{\pm}_j( {\bf p}(t') \; , {\bf r}(t'))dt' \right) \; , \end{split} \end{equation} where $V^{\pm}_j({\bf r}'') \equiv V^{\pm}( {\partial S^{\pm}_j}/{\partial {\bf r}''},{\bf r}'')$ (and therefore depends not only on ${\bf r}''$ but also on the final momentum ${\bf p}''_j$ of the trajectory $j$). The semiclassical phase $\xi_{\rm sc}$ Eq.~(\ref{eq:gammasc}) is the analog, in our context, of a Berry phase \cite{Berry84}. In the same way, it has its origin in the adiabatic change of the eigenvectors of the ``internal degree of freedom'' Hamiltonian $H({\bf p}({\bf r}),{\bf r})$ along the classical paths contributing to the semiclassical Green's function. Furthermore, in some circumstances, $\xi_{\rm sc}$ {\em exactly} corresponds to the genuine Berry phase $\xi_{\rm ad}$ defined for the adiabatic motion along the trajectory. This will be the case in particular for ``pure'' (i.e. without mass term) graphene. In general, however, $\xi_{\rm sc}$ and $\xi_{\rm ad}$ differ \cite{Littlejohn91prl,Littlejohn91pra}. We will come back to this point in section~\ref{sec:1/2classvsquant}, and in particular clarify the question of which of the two phases is relevant for the Landau levels. \subsection{Matching to the exact solution near the source} Sufficiently close, on the classical scale, to the source ${\bf r}'$, we can neglect the variation of the various potentials and of the mass, i.e. assume $U({\bf r})=U_0, m({\bf r})=m_0$ and ${\bf A}({\bf r})=0$. 
In this case we have the expression for the exact retarded Green's function: \begin{equation*} G = \begin{pmatrix} G_{AA} & G_{AB} \\ G_{BA} & G_{BB} \end{pmatrix} \end{equation*} with \begin{widetext} \begin{eqnarray} G_{AA}({\bf r}'',{\bf r}',E+i\epsilon) & = & \left( -i\frac{m_0v_{\scriptscriptstyle F}^2 + \sqrt{\zeta^2 + m_0^2v_{\scriptscriptstyle F}^4}}{4({\hbar}v_{\scriptscriptstyle F})^2} \right) H_0(\frac{\zeta}{{\hbar}v_{\scriptscriptstyle F}}|{\bf r}'' - {\bf r}'|) \; , \\ G_{AB}({\bf r}'',{\bf r}',E+i\epsilon) &= &\left( \alpha\frac{\zeta e^{-i\alpha\phi}}{4({\hbar}v_{\scriptscriptstyle F})^2} \right) H_1(\frac{\zeta}{{\hbar}v_{\scriptscriptstyle F}}|{\bf r}'' - {\bf r}'|) \end{eqnarray} \end{widetext} and $G_{BB} = G_{AA}(m_0 \to -m_0)$, $G_{BA} = G_{AB}(\phi \to -\phi)$. Here $\zeta=\sqrt{(E+i\epsilon-U_0)^2-m_0^2v_{\scriptscriptstyle F}^4}$, $\phi$ is the phase of $p_x + ip_y$ and $H_0$ and $H_1$ are Hankel functions (of the first kind) of order 0 and 1. Asymptotically, as $|{\bf r}'' - {\bf r}'| \to +\infty$, $G_{AA}$ and $G_{AB}$ take the form \begin{equation} \label{eq:GaaAss} G_{AA} \simeq -i\frac{m_0v_{\scriptscriptstyle F}^2 + E - U_0}{4({\hbar}v_{\scriptscriptstyle F})^2}\sqrt{\frac{2}{\pi}}\frac{e^{i(k|{\bf r}'' - {\bf r}'|-\frac{\pi}{4})}}{\sqrt{k|{\bf r}'' - {\bf r}'|}} \end{equation} \begin{equation}\label{eq:GabAss} G_{AB} \simeq -i\alpha{e^{-i\alpha\phi}}\frac{\sqrt{(E - U_0)^2 - m_0^2v_{\scriptscriptstyle F}^4}}{4({\hbar}v_{\scriptscriptstyle F})^2}\sqrt{\frac{2}{\pi}}\frac{e^{i(k|{\bf r}'' - {\bf r}'|-\frac{\pi}{4})}}{\sqrt{k|{\bf r}'' - {\bf r}'|}} \end{equation} with $\hbar k = \frac{1}{v_{\scriptscriptstyle F}}\sqrt{(E-U_0)^2-m_0^2v_{\scriptscriptstyle F}^4} = |{\bf p}|$. Let us assume $E-U_0 \ge 0$, so that semiclassically we consider the positive eigenspace $H^{+}$. 
We note first that, in the free case considered here, the choice of the Lagrangian manifold ${\cal L}^+$ given by (\ref{eq:manif}) corresponds to the action $S^{+}({\bf r}'',{\bf r}') = |{\bf p}| \cdot |{\bf r}''-{\bf r}'|$ and to \begin{equation*} J^{+}({\bf r}'',{\bf r}' ) = \frac{v_{\scriptscriptstyle F}^4}{(E-U({\bf r}'))^2}{|{\bf p}|}.|{\bf r}''-{\bf r}'| \; , \end{equation*} so that, as anticipated, the expression (\ref{eq:Greensemi}) matches the asymptotic expressions (\ref{eq:GaaAss})-(\ref{eq:GabAss}), provided one chooses $C =\frac{1}{\sqrt{2i\pi\hbar}} \frac{1}{i\hbar}$ and \begin{equation} \Sigma^{+}(t=0) = V^{+\dagger}(\frac{\partial S^{+}}{\partial {\bf r}'},{\bf r}') \; . \end{equation} The asymptotic expressions (\ref{eq:GaaAss})-(\ref{eq:GabAss}) are valid as soon as $|{\bf r}''-{\bf r}'|$ is larger than a few Fermi wavelengths, which can still correspond to a distance short on the classical scale, and therefore such that the free Green's function is a good approximation. We can therefore use this matching condition to fix the prefactors $\Sigma^{+}(t=0)$ and $C$ in the generic case, obtaining finally \begin{equation} \label{eq:Green} \begin{split} & G({\bf r}'',{\bf r}';E) = \frac{1}{\sqrt{2i\pi\hbar}} \frac{1}{i\hbar} \\ & \sum_{j} \frac{\exp \left(\frac{i}{\hbar}S_j^{\pm} - i\int_{0}^{t_j} M_j^{\pm} dt' - i\frac{\pi}{2}\mu_j^{\pm} \right)}{\sqrt{|J_j^{\pm}|}} \, V_j^{\pm}({\bf r}'')\cdot V_j^{\pm\dagger}({\bf r}') \; . \end{split} \end{equation} \section{Bilayer graphene and Gutzwiller trace formulae} \label{sec:bilay&GTF} We turn now to a few extensions of the result derived in section~\ref{sec:deriv}. We start with a generalization to the bilayer graphene case, and then briefly discuss the resulting Gutzwiller trace formulae for the density of states, valid when classical periodic orbits are isolated in phase space (i.e., generically, for chaotic systems). 
\subsection{Semiclassical Green's function for the bilayer case} The bilayer graphene Hamiltonian can be written at low energy as \cite{McCann06} \begin{equation} \label{eq:Hobi} {\cal H}^{0}_{bi} = -\frac{1}{2m^{*}}\begin{pmatrix} 0 & (p_x - ip_y)^2 \\ (p_x + ip_y)^2 & 0 \\ \end{pmatrix} \end{equation} with $m^{*}={\gamma_1}/({2v_{\scriptscriptstyle F}^2})$, where $\gamma_1$ is the inter-layer coupling parameter. As before, we would like to include electric or magnetic fields, as well as a possibly position dependent mass term. We therefore consider the more general Hamiltonian \begin{equation} \label{eq:hambi} {\cal H}_{bi} = U({\bf r})\mbox{.\bfseries \large 1}_2 + m({\bf r})v_{\scriptscriptstyle F}^2\sigma_z + {\cal H}^{0}_{bi}({\bf p} \to {\bf \Pi}) \; . \end{equation} Following the same approach as above, one obtains the semiclassical Green's function in the form Eqs.~(\ref{eq:semiGr})-(\ref{eq:Greensemi}) except for a different expression of the classical Hamiltonian eigenenergies \begin{equation*} H^{\pm} = U({\bf r}) \pm \sqrt{m({\bf r})^2v_{\scriptscriptstyle F}^4 + \left( \frac{{\bf \Pi}^2}{2m^{*}} \right)^2} \end{equation*} and of the semiclassical (``Berry-like'') phase term \begin{equation} \label{eq:BerryBi} \begin{split} M^{\pm} & = \frac{1}{m^{*}}\sqrt{1-\frac{m({\bf r}'')^2v_{\scriptscriptstyle F}^4}{(E-U({\bf r}''))^2}}\\ &\left( \pm e{\bf B} + \frac{1}{2}\frac{{\bf \Pi}\times {\partial [m({\bf r}'')v_{\scriptscriptstyle F}^2-U({\bf r}'')]}/{\partial {\bf r}''}}{m({\bf r}'')v_{\scriptscriptstyle F}^2+E-U({\bf r}'')} \right) .{\bf e}_z \; . 
\end{split} \end{equation} In the free case ($m({\bf r}) \equiv m_0$, $U({\bf r}) \equiv U_0$), the exact Green's function can be shown to behave asymptotically as $|{\bf r}'' - {\bf r}'| \to +\infty$ as \begin{eqnarray*} G_{AA} & \simeq & \frac{-im^{*}}{4{\hbar}^2} \sqrt{\frac{m_0v_{\scriptscriptstyle F}^2+E-U_0} {-m_0v_{\scriptscriptstyle F}^2+E-U_0}} \sqrt{\frac{2}{\pi}}\frac{e^{i(k|{\bf r}''- {\bf r}'| - \frac{\pi}{4})}}{\sqrt{k|{\bf r}'' - {\bf r}'|}} \\ G_{\tilde{B}\tilde{B}} & = & G_{AA}(m_0 \to -m_0) \\ G_{A\tilde{B}} & \simeq & \frac{im^{*}}{4{\hbar}^2}e^{-2i\phi} \sqrt{\frac{2}{\pi}}\frac{e^{i(k|{\bf r}'' - {\bf r}'| - \frac{\pi}{4})}}{\sqrt{k|{\bf r}'' - {\bf r}'|}} \\ G_{\tilde{B}A} & = & G_{A\tilde{B}}(\phi \to -\phi) \end{eqnarray*} with $\phi$ the phase of $p_x + ip_y$. Matching the exact solution near the source to the semiclassical expression far from the source eventually gives the semiclassical Green's function as a sum over all trajectories $j$ joining ${\bf r}'$ to ${\bf r}''$ under the classical Hamiltonian $H^+$ or $H^-$ (depending on the sign of $(E-U({\bf r}'))$) \begin{equation} \label{eq:Greenbi} \begin{split} G({\bf r}'',&{\bf r}';E) = \frac{1}{\sqrt{2i\pi\hbar}} \frac{1}{i\hbar} \sum_{j} V_j^{\pm}({\bf r}'') V_j^{\pm\dagger} ({\bf r}') \\ &\frac{\exp\left( {\frac{i}{\hbar}S_j^{\pm} - i\int_{0}^{t_j}M_j^{\pm}dt' - i\frac{\pi}{2}\mu_j^{\pm}} \right)}{\sqrt{|J_j^{\pm}|}} \; , \end{split} \end{equation} with $J^\pm$ given by Eq.~(\ref{eq:J}). \subsection{Trace formulae for isolated orbits} \label{sec:trace} One important application of the semiclassical expressions for the Green's functions is that, by taking their trace, one obtains a semiclassical approximation for the density of states $\rho(E) = \sum_i \delta(E-E_i)$. 
We have in mind here a quantum dot defined in a finite region of a graphene sheet (with the confinement imposed for instance through the mass term), and the $E_i$ are the corresponding discrete energies of the confined system. We will furthermore assume in this subsection that the classical motion within the dot is fully chaotic, so that all periodic orbits are isolated. Starting from Eqs.~(\ref{eq:Green}) or (\ref{eq:Greenbi}), the semiclassical density of states can be obtained as the trace \begin{equation} \label{eq:trGr} \rho(E) \equiv -\frac{1}{\pi} {\rm Im} \int d{\bf r} {\rm Tr}[G({\bf r},{\bf r};E)] \; , \end{equation} (where $\rm Tr$ is the trace on the internal structure of the Green's function). The smooth (Weyl) part of the density of states, which is associated with ``zero length'' orbits, has the usual expression $\rho_{\rm Weyl}(E) = \rho^+_{\rm Weyl}(E) + \rho^-_{\rm Weyl}(E) $ with \begin{equation*} \rho^\pm_{\rm Weyl}(E) = \int \frac{d{\bf p}d{\bf r}}{(2\pi\hbar)^2} \delta(E - H^{\pm}({\bf p},{\bf r})) \; . \end{equation*} When potential and mass terms are constant this gives \begin{equation} \label{eq:weyl} \rho^\pm_{\rm Weyl}(E)=\frac{|E-U_0|{\cal A}}{2\pi(\hbar v_F)^2} \, \Theta \left( \pm(E-U_0) - m_{0}v_F^2 \right) \; , \end{equation} with ${\cal A}$ the area of the graphene sheet and $\Theta$ the Heaviside step function. The oscillating part $\rho_{\rm osc}(E)$ of the density of states can then be obtained by inserting the semiclassical expression for the Green's function in Eq.~(\ref{eq:trGr}). Performing the integral on ${\bf r}$ in the stationary phase approximation imposes that, in the semiclassical sums Eqs.~(\ref{eq:Green}) or (\ref{eq:Greenbi}), only the trajectories with identical initial and final momentum should be kept. As a consequence, the sum over the index $j$ becomes a sum over periodic orbits. 
In particular, in Eqs.~(\ref{eq:Green}) or (\ref{eq:Greenbi}), $V^\pm_j({\bf r}'') = V^\pm_j({\bf r}')$ since ${\bf r}''={\bf r}'={\bf r}$ {\em and} ${\bf p}''_j = {\bf p}'_j$ (remember that $V^\pm_j({\bf r}) \equiv V^\pm_j({\bf p}_j({\bf r}),{\bf r})$, so the second condition is necessary here). Therefore ${\rm Tr} [V^{\pm }_j({\bf r}'') \cdot V^{\pm\dagger}_j({\bf r}')]_{|{\bf r}''={\bf r}'={\bf r}} = 1$. Once this point is recognized, the calculation of $\rho_{\rm osc}$ from the semiclassical Green's functions is, up to the inclusion of the semiclassical ``Berry-like'' phase term $\oint_j M^\pm(t) dt$, essentially the same as in the scalar case \cite{Gutzwiller71,Gutzwiller90} (see also the particularly clear discussion in \cite{Creagh90}). We thus just quote the final results: $\rho(E) = \rho^+(E) + \rho^-(E)$; $\rho^\pm(E) = \rho^\pm_{\rm Weyl}(E) + \rho^\pm_{\rm osc}(E)$, with \begin{equation} \label{eq:traceGr} \begin{split} \rho^\pm_{\rm osc}(E) = & \frac{1}{\pi\hbar}\sum_{\rm p.o.}\frac{T_{\rm ppo}}{\sqrt{|{\rm det} (\tilde{M_{\rm po}}-1)|}} \\ & \cos{\left( \frac{S_{\rm po}^{\pm}}{\hbar}-\frac{\pi}{2} \sigma_{\rm po}^{\pm}-\int_{0}^{T_{\rm po}} M^{\pm}dt' \right) } \; . \end{split} \end{equation} Here $\tilde M = \frac{\partial (p''_\bot,r''_\bot)}{\partial (p'_\bot,r'_\bot)} $ is the monodromy matrix, $\sigma^{\pm} = \mu^{\pm} + \nu^{\pm}$ is the topologically invariant Maslov index ($\nu = 0$ or $1$, depending on the sign of $d^2 S_j / dr_\bot^2$, see the discussion in \cite{Creagh90}), and $T_{\rm ppo}$ is the period of the primitive orbit ($T_{\rm po} = n T_{\rm ppo}$ if the orbit consists of $n$ repetitions of the same path). 
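To make the structure of Eq.~(\ref{eq:traceGr}) concrete, the sketch below (plain Python) evaluates the oscillating part of the density of states from a small table of periodic-orbit data. All numbers in the table (actions, periods, stability determinants, Maslov indices, Berry-like phases) are invented for illustration and do not correspond to any particular dot; in an actual application they would come from a numerical search for the periodic orbits of $H^\pm$.

```python
import math

# Hedged sketch: evaluate the oscillating density of states of the
# Gutzwiller-type trace formula from a (hypothetical) table of
# periodic-orbit data.  Every number below is made up for illustration.
hbar = 1.0
orbits = [
    # (action S, primitive period T_ppo, |det(M_po - 1)|,
    #  Maslov index sigma, Berry-like phase  int M dt)
    (6.28, 1.0, 4.0, 2, math.pi),
    (9.42, 1.5, 9.0, 3, math.pi),
]

def rho_osc():
    total = 0.0
    for S, Tppo, detM1, sigma, xi in orbits:
        # each orbit: stability-weighted amplitude times a cosine whose
        # phase combines action, Maslov index and Berry-like phase
        total += (Tppo / math.sqrt(detM1)
                  * math.cos(S / hbar - math.pi / 2 * sigma - xi))
    return total / (math.pi * hbar)

print(rho_osc())
```

The point of the sketch is only the bookkeeping: the three phase contributions (action over $\hbar$, Maslov index, Berry-like phase) enter a single cosine per orbit, weighted by the primitive period and the stability of the orbit.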
\section{Graphene in a constant magnetic field} \label{sec:landau} As an illustration of the semiclassical Green's function formalism, we consider in this section the simple (but useful) case of a graphene sheet immersed in a constant magnetic field, and show how some standard (and less standard) expressions can be easily re-obtained in this way. We start with the Landau levels in the monolayer and the bilayer, without potential or mass term ($U({\bf r})=m({\bf r})=0$), and assuming the low-energy approximations Eqs.~(\ref{eq:Hog})-(\ref{eq:Hobi}) of the Hamiltonian apply. We then study the influence of higher order corrections (e.g. trigonal warping) to this low-energy Hamiltonian. We finally consider the case where a finite mass term $m({\bf r})= m_0 = {\rm const.}$, is introduced. This last example will be used to introduce the discussion on the distinction between the semiclassical and adiabatic Berry phases, with which we shall end this paper in the next section. \subsection{Landau levels in monolayer graphene} \label{sec:LandauMono} In the absence of confining potential or mass term, and with a constant magnetic field, the classical equations of motion in graphene are integrable and lead to cyclotronic motion, i.e. circular periodic orbits with period $T$ and radius $R$ given in the monolayer case by \begin{eqnarray} T & = & \frac{2 \pi}{v_{\scriptscriptstyle F}^2}\frac{E}{eB} \\ R & = & \frac{v_{\scriptscriptstyle F}}{2\pi} T \; . \end{eqnarray} Since the periodic orbits are not isolated, we cannot use the Gutzwiller trace formula derived in the previous section and we have to obtain the density of states directly from inserting the semiclassical expression Eq.~(\ref{eq:Green}) in Eq.~(\ref{eq:trGr}). Here however the classical dynamics is extremely simple: there is only one primitive orbit, and the sum over $j$ is actually a sum over the number of repetitions of this primitive circular orbit. We therefore have $S_j^{\pm} = {E}t_j/2$, with $t_j=jT$. 
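The period and action just quoted can be checked by direct numerical integration of the classical equations of motion for $H^+ = v_{\scriptscriptstyle F}|{\bf \Pi}|$ in the symmetric gauge, accumulating the canonical action $\int {\bf p}\cdot\dot{\bf r}\,dt$ with ${\bf p} = {\bf \Pi} - e{\bf A}$. A minimal sketch (illustrative units $v_F = e = B = 1$ with $|{\bf \Pi}(0)| = 1$, so $E = 1$, $T = 2\pi$ and the predicted action per turn is $ET/2 = \pi$; all function names are ours):

```python
import math

# Cyclotron orbit of H^+ = v_F|Pi| in a constant field B,
# symmetric gauge A = (B/2)(-y, x).  Units: v_F = e = B = 1, E = 1.
vF = e = B = 1.0
E = 1.0

def deriv(s):
    x, y, px, py, S = s                 # position, kinetic momentum Pi, action
    p = math.hypot(px, py)
    xd, yd = vF * px / p, vF * py / p   # r-dot = dH/dPi
    pxd, pyd = -e * B * yd, e * B * xd  # Pi-dot (Lorentz force)
    # canonical momentum is p = Pi - eA, hence S-dot = (Pi - eA).r-dot
    Sd = px * xd + py * yd - e * (B / 2) * (-y * xd + x * yd)
    return (xd, yd, pxd, pyd, Sd)

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(a + dt / 2 * b for a, b in zip(s, k1)))
    k3 = deriv(tuple(a + dt / 2 * b for a, b in zip(s, k2)))
    k4 = deriv(tuple(a + dt * b for a, b in zip(s, k3)))
    return tuple(a + dt / 6 * (b + 2 * c + 2 * d + f)
                 for a, b, c, d, f in zip(s, k1, k2, k3, k4))

T = 2 * math.pi * E / (vF ** 2 * e * B)   # predicted cyclotron period
s = (0.0, 0.0, 1.0, 0.0, 0.0)             # start at origin, Pi along x
N = 20000
for _ in range(N):
    s = rk4_step(s, T / N)

print(s[4], E * T / 2)                    # accumulated vs predicted action
```

The diamagnetic piece $-e\oint {\bf A}\cdot d{\bf r}$ removes exactly half of $\oint {\bf \Pi}\cdot\dot{\bf r}\,dt = ET$, which is one way to see where the factor $1/2$ in $S_j^{\pm} = Et_j/2$ comes from.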
Two caustics are furthermore traversed for each iteration of the orbit, one midway through the circle, the other when the orbit comes back to its starting point, and the Maslov index is thus $\mu_j^{\pm} = 2j$ (note that, as discussed below, the last caustic should be included). Finally, the semiclassical ``Berry-like'' phase term Eq.~(\ref{eq:Mmono}) reduces here to $M_j^{\pm}({\bf r}(t)) = {{\alpha}v_{\scriptscriptstyle F}^2 e B}/({2E}) = {\rm const.}$ so that \begin{equation} \label{eq:BerryMono} \oint_0^{t_j} M^\pm_j(t) dt = \alpha j \pi \; . \end{equation} The only technical point in this calculation is therefore that since, whatever the initial momentum, all trajectories initiated in ${\bf r}' = {\bf r}$ eventually return there, the final point ${\bf r}'' = {\bf r}$ is a caustic ($\partial r''_\bot / \partial p'_\bot = 0 $) and the prefactor $1/\sqrt{|J_j|}$ diverges. As discussed in the appendix~E of \cite{Richter96}, this divergence can be cured using a mixed representation of the Green's function, i.e. by expressing the Green's function $G({\bf r}'',{\bf r}')$ in terms of its Fourier transform $\tilde G(p_x'',y'';x',y')$ as \begin{equation} \label{eq:FT} \begin{split} G(&x'',y''; x',y') = \\ & \frac{1}{\sqrt{-2i\pi\hbar}} \int dp''_x \tilde G(p_x'',y'';x',y') \exp(\frac{i}{\hbar} x'' p''_x) \; . \end{split} \end{equation} A semiclassical expression for $\tilde G$ can be derived in exactly the same way as for $G$, and leads to the same expression except for the transformations $S_j \to \tilde S_j = S_j - p''_x x''$ and $J_j= - {\dot y}'' {\dot y}' (\frac{\partial^2 S_j}{\partial x'' \partial x'})^{-1} \to \tilde J_j = - {\dot y}'' {\dot y}' (\frac{\partial^2 \tilde S_j}{\partial p''_x \partial x'})^{-1}$. Thus \begin{equation} \tilde J_j = {\dot y}'' {\dot y}' (\frac{\partial p''_x}{\partial p_x'}) \; , \end{equation} which is not diverging since for the cyclotron motion ${\partial p''_x}/{\partial p_x'} = 1$. 
The integral over $p''_x$ in (\ref{eq:FT}) becomes then straightforward (noting that $dp''_x/{\dot y''} = d\theta$, with $\theta$ the angle made by the initial velocity with the $x$ axis, this integral basically provides a factor $\int_0^{2\pi} d \theta = 2 \pi$). Furthermore the integration over position in Eq.~(\ref{eq:trGr}) amounts to a multiplication by the area ${\cal A}$ of the graphene sheet, and as in section~\ref{sec:trace}, ${\rm Tr} [V^{\pm}_j({\bf r}'') \cdot V^{\pm \dagger}_j({\bf r}')]_{|{\bf r}''={\bf r}'={\bf r}} = 1$ since the final and initial momenta are identical. One therefore obtains \begin{equation} \label{eq:dos} \rho^{\rm osc}(E) = \frac{|E|{\cal A} }{\pi({\hbar}v_{\scriptscriptstyle F})^2}\sum_{j=1}^{+\infty}\cos{2{\pi}j\frac{E^2}{2{\hbar}eBv_{\scriptscriptstyle F}^2}} \; . \end{equation} The total density of states is then $\rho(E) = \rho_{\rm Weyl}(E) + \rho^{\rm osc}(E)$ with $\rho_{\rm Weyl}(E)$ the smooth density of states (which is identical to the one without magnetic field) given by Eq.~(\ref{eq:weyl}). Using the Poisson formula, we therefore have \begin{equation} \label{eq:LL} \rho(E) = \frac{{\cal A} }{2{\pi}l_B^2}\sum_{n=-\infty}^{+\infty}\delta(E - E_n) \end{equation} with $l_B=\sqrt{{\hbar}/({eB})}$ and \begin{equation} \label{eq:EnMono} E_n={\rm sign}\,(n) v_{\scriptscriptstyle F}\sqrt{|2n{\hbar}eB|} \; . \end{equation} We recover in this way the expression of the Landau levels as obtained in a fully quantal derivation \cite{McClure56}. This approach furthermore provides a direct link between the phase $\oint_0^{t_j} M_j(t) dt = \alpha j \pi$ and the existence of a zero energy level, as it cancels out the phase associated with the Maslov indices (another example of such a cancellation can be found in \cite{Keppeler02}). 
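Numerically, Eq.~(\ref{eq:EnMono}) gives the familiar $\sqrt{n}$-spaced spectrum. A short evaluation (with the standard $v_F \simeq 10^6$~m/s and an illustrative $B = 1$~T) reads:

```python
import math

# Monolayer Landau levels  E_n = sign(n) v_F sqrt(2|n| hbar e B).
# v_F = 1e6 m/s is the usual graphene value; B = 1 T is illustrative.
hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
vF = 1.0e6               # m/s
B = 1.0                  # T

def E_n(n):
    return (math.copysign(1.0, n) * vF * math.sqrt(2 * abs(n) * hbar * e * B)
            if n else 0.0)

levels_meV = [E_n(n) / e * 1e3 for n in range(4)]
print(levels_meV)   # zero mode, then sqrt(n)-spaced levels in meV
```

At $B = 1$~T the first level comes out near 36~meV, and the spacing decreases as $\sqrt{n}$, in contrast with the equidistant non-relativistic Landau ladder.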
An alternative semiclassical derivation of the graphene Landau levels can be obtained starting from the Dirac oscillator \cite{Keppeler03}, in the limit of massless carriers, provided the frequency of the oscillator is taken to be the cyclotronic one. \subsection{Landau levels in bilayer graphene} Considering now the bilayer case, we can proceed in exactly the same way as above except for two differences. First, the period $T$ and radius $R$ are now given by \begin{eqnarray} T & = & \frac{2\pi}{\omega} = {2 \pi} \frac{m^{*}}{eB}\\ R & = & \sqrt{\frac{|E|}{2\pi^2 m^{*}}} T \; . \end{eqnarray} Second, the semiclassical ``Berry-like'' phase term Eq.~(\ref{eq:BerryBi}) now reduces to $M_j^{\pm}({\bf r}(t)) = \pm{eB}/{m^{*}} = {\rm const.}$, so that \begin{equation} \oint_0^{t_j} M_j(t) \,dt = 2 j \pi \; . \end{equation} The Berry-like phase does not in this case compensate the phase associated with the Maslov index. Noting furthermore that, for the bilayer graphene, $\rho_{\rm Weyl}(E) = {m^{*}{\cal A} }/({2\pi{\hbar}^2})$, we obtain \begin{equation} \label{eq:LLbi} \rho(E) = \frac{{\cal A} }{2{\pi}l_B^2}\sum_{n=-\infty}^{+\infty}\delta(E - E_n^{\rm sc}) \, \end{equation} where \begin{equation} E_n^{\rm sc} = \hbar \omega (n-\frac{1}{2}) \end{equation} is the semiclassical approximation to the exact quantum values of the Landau levels, ${E_n}^{\rm quant} = \hbar \omega \sqrt{n(n-1)} = \hbar \omega (n-\frac{1}{2})+O(\frac{1}{n})$. The semiclassical calculation fails here to account for the $O(\frac{1}{n})$ term. The $n=0$ and $n=1$ Landau levels, which both have zero energy, are therefore not correctly described within this semiclassical approach. However, for $n \ge 2$, the agreement between the semiclassical approximation and the exact result is quantitatively very good. 
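The quality of the semiclassical bilayer spectrum can be checked directly, since both spectra are known in closed form (in units of $\hbar\omega$):

```python
import math

# Compare the semiclassical bilayer Landau levels  E_n^sc = n - 1/2
# with the exact ones  E_n = sqrt(n(n-1)), both in units of hbar*omega.
for n in (2, 5, 10, 50):
    exact = math.sqrt(n * (n - 1))
    sc = n - 0.5
    print(n, sc - exact)   # discrepancy shrinks with n
```

Expanding the square root shows that the discrepancy is $(n-\tfrac12) - \sqrt{n(n-1)} = 1/(8n) + O(1/n^2)$, which makes the $O(1/n)$ statement above quantitative.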
\subsection{Influence of higher order corrections (in the parameter $(a |{\bf p}| /\hbar)$)} \label{sec:trigonal} The next example to which we shall apply our semiclassical formalism is the shift of the Landau levels associated with deviations, at large momenta, from the linear approximation of the graphene dispersion relation Eq.~(\ref{eq:Hog}) \cite{Plochocka07}. Starting from a tight-binding description of the graphene monolayer in which the effect of the next-to-nearest neighbor hopping is taken into account via the parameter $t' \ll t$, and expanding the resulting dispersion relation near the ${\bf K}$ and ${\bf K}'$ points up to third order in $(a |{\bf p}| /\hbar)$ (the reason for expanding up to third order will become clear below), the resulting Hamiltonian reads (in the absence of electric or magnetic fields) \cite{Plochocka07} \begin{equation} \label{eq:ham2} {\cal H}'_g = {\cal H}_g^0 + \begin{pmatrix} h'({\bf p}) & h({\bf p})^{*} \vspace{0.2cm} \\ h({\bf p}) & h'({\bf p}) \\ \end{pmatrix} \end{equation} with ${\cal H}_g^0$ given by Eq.~(\ref{eq:Hog}) and \begin{eqnarray*} h'({\bf p}) & = & -3t' + 6\frac{t'}{t}v_{\scriptscriptstyle F}|{\bf p}| \left( \frac{v_{\scriptscriptstyle F}}{6t}|{\bf p}| - 2\alpha(\frac{v_{\scriptscriptstyle F}}{6t})^2 {{\bf p}}^2 \cos{3\phi_{{\bf p}}} \right) \\ h({\bf p}) & = & -v_{\scriptscriptstyle F} \left( \frac{v_{\scriptscriptstyle F}}{6t}({\alpha}p_x - ip_y)^2 + 2(\frac{v_{\scriptscriptstyle F}}{6t})^2{{\bf p}}^2({\alpha}p_x+ip_y) \right) \; . \end{eqnarray*} Keeping only terms no greater than third order in momentum, the eigenvalues of the associated classical Hamiltonian can be expressed as \begin{equation} \begin{split} H^{\pm}= h'({\bf p}) \; {\pm} \; v_{\scriptscriptstyle F}|{\bf p}|& \left(1-\alpha\frac{v_{\scriptscriptstyle F}}{6t}|{\bf p}| \cos{3\phi_{{\bf p}}} \right.\\ - & \left. 
\frac{1}{2} (\frac{v_{\scriptscriptstyle F}}{6t})^2 {{\bf p}}^2(3+\cos^2{3\phi_{{\bf p}}}) \right) \; , \end{split} \end{equation} with $\phi_{{\bf p}}=\arctan({p_y}/{p_x})$. The anisotropic terms, proportional to $\cos{3\phi_{{\bf p}}}$, are often referred to as trigonal warping. Recall now that this expansion is valid only if the condition $|{\bf p}| \ll {\hbar}/{a}$ is fulfilled. Rewriting the expression $({v_{\scriptscriptstyle F}}/{6t})|{\bf p}| = {|{\bf p}|a}/({4\hbar})$, higher order terms in $H^{\pm}$ can thus be viewed as a perturbation of the original eigenvalue $H^{\pm}={\pm}v_{\scriptscriptstyle F}|{\bf p}|$ in the small parameter $(\lambda v_{\scriptscriptstyle F} |{\bf p}|)$, where $\lambda \equiv {\alpha}/{6t}$ will be used below to identify the order in the perturbation. In the semiclassical limit ($\hbar \to 0$), only the modification of the action needs to be taken into account since the latter is multiplied by the large parameter $1/\hbar$. Our aim is therefore to compute the (first and second order here) corrections to the action in an expansion in $\lambda$ \begin{equation} \label{eq:action} S = S_0 + \lambda\delta^{(1)}S + {\lambda}^2\delta^{(2)}S \; . \end{equation} In the presence of a constant magnetic field, the classical equations of motion derived from the first order approximation $H^{\pm}=\pm v_{\scriptscriptstyle F}|{\bf \Pi}|$ are integrable, and this property is not modified by the addition of terms in $H^\pm$ depending only on $|{\bf \Pi}|$. This can be easily shown by performing a canonical transformation to the guiding center coordinates. For the sake of completeness, this canonical change of variables is detailed in appendix~\ref{sec:appB}. The new coordinates read \begin{eqnarray*} {\bf R} & = & (\frac{1}{eB}\Pi_y,x_0) \\ {\bf P} & = & (\Pi_x,eBy_0) \end{eqnarray*} with ${\bf r}_0$ the center of the cyclotron orbit, so that $|{\bf \Pi}|=\sqrt{P_x^2+(eBX)^2}$ and $\tan{\phi_{\bf \Pi}}={eBX}/{P_x}$. 
We thus have \begin{equation*} H^{+} = -3t' + \rho -\mu_2\lambda\rho^2 -\mu_1{\lambda}^2\rho^3 \end{equation*} with $v_{\scriptscriptstyle F}(P_x+ieBX)=\rho e^{i\phi}$, $\mu_2=(\cos{3\phi}-6\alpha\frac{t'}{t})$, and $\mu_1=\frac{1}{2}(3+\cos^2{3\phi}+6\alpha\frac{t'}{t}\cos{3\phi})$. In this new system of coordinates, the action is easily calculated as \begin{equation*} S = \int {\bf P}d{\bf R} = \int P_xdX = \frac{1}{2v_{\scriptscriptstyle F}^2eB}\int_{0}^{2\pi}{\rho^2}(\phi)d\phi \end{equation*} with the constraint $E=H^{+}$. Therefore, to order ${\lambda}^2$, and with $E'=E+3t'$ \begin{equation*} \rho^2 = E'^2 + 2\mu_1{\lambda}E'^3 + (5{\mu_1}^2+2\mu_2){\lambda}^2E'^4 \end{equation*} which gives for the action \begin{equation*} \begin{split} S = \frac{1}{2v_{\scriptscriptstyle F}^2eB} & \left( 2\pi E'^2 - 24\pi\alpha\frac{t'}{t}{\lambda}E'^3 \right. \\ + & \left. 12\pi(1+30(\frac{t'}{t})^2){\lambda}^2E'^4 \right) \; . \end{split} \end{equation*} The third order terms had to be taken into account in the low-energy expansion, since their contribution in the second order correction of the action is of the same magnitude as that of second order terms. The third order term in the next-to-nearest neighbor contribution however cancels out in the calculation of $S$ and thus a second order expansion in $h'({\bf p})$ would have been sufficient. Introducing this shift in the action in the Landau-levels calculation of section~\ref{sec:LandauMono} finally gives \begin{equation} \label{eq:shift} \begin{split} E'_n = E_n \left( 1 \pm 6\alpha\frac{t'}{t}{\lambda}E_n - 3{\lambda}^2E_n^2 \right) \\ =E_n \left( 1 \pm \frac{3t'}{\sqrt{2}t}\frac{a}{l_B}\sqrt{n} - \frac{3}{8}(\frac{a}{l_B})^2n \right) \end{split} \end{equation} ($l_B=\sqrt{{\hbar}/({eB})}$ is the magnetic length). This result is identical to the one obtained purely quantum mechanically in [\onlinecite{Plochocka07}]. 
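To gauge the size of the shift in Eq.~(\ref{eq:shift}), one can plug in representative numbers: $a = 0.246$~nm for the graphene lattice constant, while $t = 3$~eV, $t' = 0.1$~eV and $B = 1$~T are illustrative values chosen here, not taken from the reference.

```python
import math

# Size of the two relative corrections in the shifted Landau levels
# E'_n = E_n (1 +/- (3t'/(sqrt(2)t))(a/l_B) sqrt(n) - (3/8)(a/l_B)^2 n).
hbar = 1.054571817e-34
e = 1.602176634e-19
a = 0.246e-9             # graphene lattice constant (m)
t, tp = 3.0, 0.1         # illustrative hoppings (eV); only the ratio enters
B = 1.0                  # T
lB = math.sqrt(hbar / (e * B))   # magnetic length, ~25.7 nm at 1 T

def rel_shift(n, alpha=+1):
    first = alpha * 3 * tp / (math.sqrt(2) * t) * (a / lB) * math.sqrt(n)
    second = -3.0 / 8.0 * (a / lB) ** 2 * n
    return first, second

print(lB * 1e9)          # magnetic length in nm
print(rel_shift(1))      # both corrections, as fractions of E_n
```

At laboratory fields both relative corrections are far below the percent level for small $n$, since they are controlled by powers of the small ratio $a/l_B \sim 10^{-2}$.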
As discussed in that reference, the resulting effect is however too small to account for the shift in the Landau levels observed experimentally by Plochocka et al.\ \cite{Plochocka07}. \subsection{Effect of a mass term} To end this section, let us consider the effect of a constant mass term $m_0 v_{\scriptscriptstyle F}^2 \sigma_z$ in the graphene Hamiltonian, so that \begin{equation} H^\pm = \pm \sqrt{m_0^2 v_{\scriptscriptstyle F}^4 + v_{\scriptscriptstyle F}^2 {\bf \Pi}^2} \; . \end{equation} Interestingly, a constant mass term does not modify the time derivative $M(t)$ of the semiclassical Berry-like phase since (see Eq.~(\ref{eq:Mmono})) it depends only on the gradient of $m({\bf r})$. Furthermore, as shown by a direct calculation, the energy dependence of the Landau frequency is not affected either by the mass term. Therefore \begin{eqnarray} T & = & \frac{2\pi}{\omega} = \frac{2 \pi}{v_{\scriptscriptstyle F}^2} \frac{E}{eB}\\ M({\bf r}(t)) & = & \alpha v_{\scriptscriptstyle F}^2 \frac{eB}{2E} = {\rm const.} \; , \end{eqnarray} and the semiclassical phase \begin{equation} \label{eq:BerryGap} \oint_0^{t_j} M_j(t) dt = j \alpha \pi \end{equation} is the same as without the mass term. The $m_0$ dependence of the Landau level position is therefore entirely due to the $m_0$ variation of the action \begin{equation} S_j = j\pi \frac{E^2}{eB v_{\scriptscriptstyle F}^2} \left[1 - \left(\frac{m_0 v_{\scriptscriptstyle F}^2}{E} \right)^2 \right] \; , \end{equation} which, following the same steps as in section~\ref{sec:LandauMono}, gives $ \rho(E) = ({{\cal A}}/({2\pi l_B^2})) \sum_{n=0}^{+\infty} \delta(E \pm E_n)$, with \begin{eqnarray} E_n & = & \sqrt{E^2_n(0) + m_0^2 v_{\scriptscriptstyle F}^4} \\ & \simeq & E_n(0) \left( 1 + \frac{1}{4n} \frac{(m_0 v_{\scriptscriptstyle F})^2}{e\hbar B} \right) \end{eqnarray} ($E_n(0)$ is the value of $E_n$ at $m_0=0$ given by Eq.~(\ref{eq:EnMono})). One recovers semiclassically in this way the result originally derived by Haldane \cite{Haldane88}. 
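The small-gap expansion of the massive Landau levels can be verified numerically; the mass value below is hypothetical, chosen only so that $m_0 v_F^2 \ll E_1(0)$.

```python
import math

# Exact massive Landau levels  E_n = sqrt(E_n(0)^2 + (m0 vF^2)^2)
# versus the expansion  E_n(0) (1 + (1/4n)(m0 vF)^2/(e hbar B)).
hbar = 1.054571817e-34
e = 1.602176634e-19
vF = 1.0e6
B = 1.0
m0 = 0.001 * 9.109e-31   # hypothetical small mass (kg), m0 vF^2 ~ 6 meV

def E0(n):
    return vF * math.sqrt(2 * n * hbar * e * B)   # massless level

def exact(n):
    return math.sqrt(E0(n) ** 2 + (m0 * vF ** 2) ** 2)

def expanded(n):
    return E0(n) * (1 + (m0 * vF) ** 2 / (4 * n * e * hbar * B))

for n in (1, 2, 5):
    print(n, exact(n) / expanded(n))   # ratio tends to 1 as n grows
```

The expansion is simply the first-order Taylor development of the square root in $(m_0 v_F^2 / E_n(0))^2$, so its accuracy improves both with $n$ and with decreasing gap.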
\section{Semiclassical versus adiabatic Berry phase} \label{sec:1/2classvsquant} We would like to finish this paper with some general discussion concerning the semiclassical phase \begin{eqnarray} \xi_{\rm sc} & \equiv & - \oint_0^T M_{\rm sc}({\bf p}(t),{\bf r}(t)) dt \\ M_{\rm sc}({\bf p}(t),{\bf r}(t)) & = & {\rm Im} \left[ V^{\pm\dagger}\frac{\partial H}{\partial {\bf p}}. \left( \frac{\partial V^{\pm}}{\partial {\bf r}} \right) \right] \label{eq:Msc} \end{eqnarray} (see Eq.~(\ref{eq:Box2})) computed on a periodic orbit $({\bf p}(t),{\bf r}(t))$ (of period $T$). The fact that, for a clean graphene monolayer without a mass term, $\xi_{\rm sc} = \mp \pi$ (as expressed by Eq.~(\ref{eq:BerryMono}), with $j=1$) is usually said to be expected, since the corresponding configuration is exactly the one discussed in detail by Berry in his 1984 paper \cite{Berry84}: the path of integration corresponds to encircling once the Dirac point, where the $H^+$ and $H^-$ manifolds intersect. This argument however relies on an exact intersection between the two manifolds, and should a priori not apply when a mass term $m_0$ introduces a gap. From this perspective, one does not expect the Berry phase to be equal to $\pm\pi$ when $m_0 \neq 0$, and Eq.~(\ref{eq:BerryGap}) may come as a surprise. (Note though that this was already observed in \cite{Gusynin06}.) The resolution of this apparent paradox is that, as discussed in [\onlinecite{Littlejohn91prl,Littlejohn91pra}], the semiclassical phase $\xi_{\rm sc}$ defined by Eq.~(\ref{eq:gammasc}) and the adiabatic phase introduced by Berry are closely related, but ultimately different, quantities. Both of them are induced by the adiabatic variation of the eigenstates $V^+$ and $V^-$ along the trajectory. However, the point of view taken in the semiclassical approach is that both the internal space (associated here with the sub-lattices $(A,B)$) and the external space (position ${\bf r}$) are {\em coupled} dynamical variables. 
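The comparison carried out in the remainder of this section for the massless monolayer can also be checked symbolically. The sketch below is illustrative and not part of the paper: it assumes the one-valley convention $H=v_{\scriptscriptstyle F}(p_x\sigma_x+p_y\sigma_y)$ (i.e.\ $\alpha=+1$), lets the position dependence enter only through the phase $\phi({\bf r})$, and treats $\partial_x\phi$, $\partial_y\phi$ as free real parameters:

```python
# Symbolic check (illustration, not from the paper) that for the massless
# monolayer the cross term <+|dH/dp|-> . <-|d/dr|+> is purely real, so that
# the adiabatic and semiclassical phases coincide; convention H = vF(px*sx+py*sy).
import sympy as sp

phi, phix, phiy, vF = sp.symbols("phi phi_x phi_y v_F", real=True)
plus  = sp.Matrix([1, sp.exp(sp.I*phi)]) / sp.sqrt(2)    # |+> for alpha = +1
minus = sp.Matrix([sp.exp(-sp.I*phi), -1]) / sp.sqrt(2)  # |->
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
dHdp = [vF*sx, vF*sy]                                    # dH/dp_x, dH/dp_y
# r-dependence enters only through phi(r): d|+>/dr_j = (d|+>/dphi) * d_j phi
dplus = [plus.diff(phi)*phix, plus.diff(phi)*phiy]

cross = sp.simplify(sp.expand_complex(sum(
    (plus.H*dHdp[j]*minus)[0]*(minus.H*dplus[j])[0] for j in range(2))))
diag = sp.simplify(sp.expand_complex(sum(
    (plus.H*dHdp[j]*plus)[0]*(plus.H*dplus[j])[0] for j in range(2))))

# the cross term is purely real, hence xi_ad = xi_sc when m = 0
assert sp.simplify(sp.im(cross)) == 0
assert sp.simplify(cross - vF/2*(-sp.sin(phi)*phix + sp.cos(phi)*phiy)) == 0
# diagonal term: i (vF/2) (cos(phi) d_x phi + sin(phi) d_y phi), for alpha = +1
assert sp.simplify(diag - sp.I*vF/2*(sp.cos(phi)*phix + sp.sin(phi)*phiy)) == 0
```

The two assertions on `cross` and `diag` reproduce the identities obtained by direct calculation below for $\alpha=+1$.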
Treating the coupling between these variables in the semiclassical approximation (which indeed implicitly assumes that the ``external'' variable is slow and the internal variable fast) leads to the semiclassical expression (\ref{eq:Msc}). The problem Berry was considering in his seminal article \cite{Berry84} is however different: in that case, only the internal degree of freedom is considered a dynamical variable, and the external degrees of freedom are actually a space of parameters assumed to be entirely controlled by the experimentalist. One may of course choose for this path the classical trajectory $({\bf r}(t))$ (with $H({\bf r}) \equiv H({\bf p}({\bf r}),{\bf r})$) determined by the dynamics in the semiclassical approach. In that case however the corresponding phase is given by \cite{Berry84} \begin{eqnarray} \xi_{\rm ad} & \equiv & \oint_0^T M_{\rm ad}({\bf r}(t)) dt \\ M_{\rm ad}({\bf r}(t)) & = & i V^{\pm\dagger} \frac{\partial V^{\pm}}{\partial {\bf r}} \cdot \dot{\bf r} \label{eq:Mad} \end{eqnarray} (the normalization of $V^\pm$ ensures that $M_{\rm ad}$ is real). Let us assume, for this discussion, that we are interested in the evolution of the eigenstate $V^+$ associated with the positive eigenvalue $H^+$. Furthermore, let us switch to the bra/ket notation for the eigenvector and write $V^\pm \equiv | \pm \rangle$, $V^{\pm \dagger} \equiv \langle \pm |$. First order perturbation theory implies $\dot {\bf r} = \partial H^+ / \partial {\bf p} = \langle \! + \! |(\partial H / \partial {\bf p}) | \! + \! \rangle$, and therefore Eq.~(\ref{eq:Mad}) can be rewritten as \begin{equation} M_{\rm ad}({\bf r}(t)) = i \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! + \! \rangle \cdot \langle \! + \! | \frac{\partial }{\partial {\bf r}}\! + \! \rangle \; . \end{equation} On the other hand, inserting the identity $ \mbox{\bfseries \large 1}_2 = | \! + \! \rangle \langle \! + \! | + | \! - \! \rangle \langle \! - \! 
|$ in Eq.~(\ref{eq:Msc}), \begin{equation} M_{\rm sc}({\bf r}(t)) = {\rm Im} \left[ \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! + \! \rangle \cdot \langle \! + \! | \frac{\partial }{\partial {\bf r}}\! + \! \rangle + \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! - \! \rangle \cdot \langle \! - \! | \frac{\partial }{\partial {\bf r}}\! + \! \rangle \right] \; . \end{equation} Thus, the adiabatic and semiclassical phases actually differ by the quantity \begin{equation} \xi_{\rm ad} - \xi_{\rm sc} = \oint_0^T {\rm Im} \left[ \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! - \! \rangle \cdot \langle \! - \! | \frac{\partial }{\partial {\bf r}}\! + \! \rangle \right] dt \; . \end{equation} From this expression, we can see that {\em in the absence of a mass term}, but for an arbitrary electrostatic potential $U({\bf r})$, the semiclassical and Berry phases are identical. Indeed, for $m({\bf r}) \equiv 0$, the expressions Eqs.~(\ref{eq:V+})-(\ref{eq:V-}) for the eigenvectors of $H({\bf p},{\bf r})$ take the simple form \begin{eqnarray} | \! + \!\rangle & = & \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ \alpha e^{i\alpha\phi} \end{pmatrix} \label{eq:V0+}\\ | \! - \! \rangle & = & \frac{1}{\sqrt{2}} \begin{pmatrix} \alpha e^{-i\alpha\phi} \\ -1 \end{pmatrix} \; , \label{eq:V0-} \end{eqnarray} with $\phi$ the phase of $\Pi_x + i \Pi_y$. As a consequence \begin{eqnarray} \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! - \! \rangle \cdot \langle \! - \! | \frac{\partial }{\partial {\bf r}}\! + \! \rangle & = & \frac{v_{\scriptscriptstyle F}}{2} (- \sin \phi \partial_x \phi + \cos \phi \partial_y \phi) \label{eq:pm}\\ \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! + \! \rangle \cdot \langle \! + \! | \frac{\partial }{\partial {\bf r}}\! + \! 
\rangle & = & i \frac{\alpha v_{\scriptscriptstyle F}}{2} ( \cos \phi \partial_x \phi + \sin \phi \partial_y \phi) \nonumber \\ & = & \frac{i\alpha}{2} \frac{d \phi}{dt} \; .\label{eq:pp} \end{eqnarray} The right-hand side of Eq.~(\ref{eq:pm}) is purely real, implying that, in the simple case $m=0$ considered here, $\xi_{\rm ad} - \xi_{\rm sc} = 0$. Eq.~(\ref{eq:pp}) then expresses that, independently of the nature of the electrostatic potential $U({\bf r})$, the -- here identical -- Berry phase and semiclassical phase are just given by plus or minus (depending on $\alpha$) half the angle of rotation of the velocity vector. In particular, as demonstrated by Berry from geometric arguments \cite{Berry84}, we see here from a direct calculation that for a periodic orbit, $\xi_{\rm ad} = \xi_{\rm sc} = -\alpha j \pi$, with $j$ the number of windings of the trajectory. This makes the inclusion of the semiclassical phase in the Gutzwiller trace formula Eq.~(\ref{eq:traceGr}) particularly simple when $m=0$. Similarly, for the bilayer Hamiltonian Eq.~(\ref{eq:hambi}) with $m({\bf r}) \equiv 0$, we have \begin{eqnarray} | \! + \!\rangle & = & \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\- e^{i2\phi} \end{pmatrix} \label{eq:V0+bi}\\ | \! - \! \rangle & = & \frac{1}{\sqrt{2}} \begin{pmatrix} e^{-i2\phi} \\ 1 \end{pmatrix} \; , \label{eq:V0-bi} \end{eqnarray} with $\phi$ the phase of $\Pi_x + i \Pi_y$, and \begin{eqnarray} \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! - \! \rangle \cdot \langle \! - \! | \frac{\partial }{\partial {\bf r}}\! + \! \rangle & = & \frac{|\Pi|}{m^*} (- \sin \phi \partial_x \phi + \cos \phi \partial_y \phi) \label{eq:pmbi}\\ \langle \! + \! | \frac{\partial H}{\partial {\bf p}}| \! + \! \rangle \cdot \langle \! + \! | \frac{\partial }{\partial {\bf r}}\! + \! 
\rangle & = & i \frac{|\Pi|}{m^*} ( \cos \phi \partial_x \phi + \sin \phi \partial_y \phi) \nonumber \\ & = & i \frac{d \phi}{dt} \; .\label{eq:ppbi} \end{eqnarray} Again, the Berry phase and semiclassical phase are identical if $m({\bf r}) \equiv 0$ (as Eq.~(\ref{eq:pmbi}) is purely real), and both phases are given by the angle of rotation of the velocity vector. For both bilayer and monolayer graphene, it has to be borne in mind however that in the generic case $m({\bf r}) \neq 0$, the semiclassical phase $\xi_{\rm sc}$ should in general differ from the Berry phase $\xi_{\rm ad}$. Furthermore, we do not have a general argument constraining either of the two phases to be directly related to the winding of the velocity vector (beyond the case where both the mass and the electrostatic potential are constant). \section{Conclusion} To conclude, we have derived an expression for the semiclassical Green's function in graphene and discussed in particular the semiclassical phase associated with the internal pseudo-spin structure. If no mass term is included in the graphene Hamiltonian, this semiclassical phase is identical to the corresponding (adiabatic) Berry phase. In that case both phases are, up to a sign, given by half the angle of rotation of the velocity vector. For a bilayer of graphene, the same result holds but with a phase which is twice as large. When a mass term is introduced however, the semiclassical and Berry phases in general differ. In particular, for a clean graphene sheet in a constant magnetic field, we have shown that the semiclassical phase remains unmodified upon the inclusion of a constant mass term $m ({\bf r}) = m_0$, while the corresponding Berry phase $\xi_{\rm ad} = [m_0 v_{\scriptscriptstyle F}^2/(E-U_0)-1] \alpha j \pi$ shows some dependence on $m_0$. We have shown furthermore that in this case, what is relevant to the calculation of the Landau levels is the semiclassical, rather than the Berry, phase. 
Other applications of our semiclassical formalism were also discussed, including the effect of higher order terms of the graphene Hamiltonian -- e.g.\ trigonal warping -- on the position of the Landau levels. The semiclassical approximation to the graphene Green's function should prove a useful tool when considering confined electron systems in graphene, such as graphene nanoribbons, or more complicated geometries. We have benefited from helpful discussions with E. Bogomolny, J.-N. Fuchs, M.-O. Goerbig, G. Montambaux and F. Pi\'echon, and thank as well all active participants of the weekly graphene ``journal-club'' held in LPS, Orsay.
\end{document}
\def\mysection#1{\section{#1} \setcounter{equation}{0}} \begin{document} \title[Bellman's equations with VMO coefficients] {On Bellman's equations with VMO coefficients} \author{N.V. Krylov} \thanks{The work was partially supported by NSF Grant DMS-0653121} \email{[email protected]} \address{127 Vincent Hall, University of Minnesota, Minneapolis, MN, 55455} \keywords{Vanishing mean oscillation, fully nonlinear equations, Bellman's equations} \subjclass[2000]{35J60} \begin{abstract} We present a result about solvability in $W^{2}_{p}$, $p>d$, in the whole space $\bR^{d}$ of Bellman's equations with VMO ``coefficients''. Parabolic equations are touched upon as well. \end{abstract} \maketitle \mysection{Main result} Let $\bR^{d}$ be the Euclidean space of points $x=(x^{1},...,x^{d})$, $x^{i}\in\bR=(-\infty,\infty)$. Fix a $\delta\in(0,1)$ and denote by $\cS_{\delta}$ the set of symmetric $d\times d$-matrices $a=(a^{ij})$ satisfying $$ \delta|\xi|^{2}\leq a^{ij}\xi^{i}\xi^{j}\leq\delta^{-1}|\xi|^{2}, \quad\forall \xi\in\bR^{d}. $$ Let $\Omega$ be a separable metric space and assume that for any $\omega\in \Omega$ and $x\in\bR^{d}$ we are given $a(\omega,x)\in\cS_{\delta}$, $b(\omega,x)\in\bR^{d}$, and $c(\omega,x), f(\omega,x)\in\bR$. We assume that these functions are measurable in $x$ for each $\omega$, continuous in $\omega$ for each $x$, and $$ |b(\omega,x)|+c(\omega,x)\leq K,\quad c(\omega,x)\geq0, \quad\forall \omega,x, $$ $$ \bar{f}(x):=\sup_{\omega\in \Omega}|f(\omega,x)|<\infty \quad\forall x, $$ where $K$ is a fixed constant. Observe that, owing to the continuity of $f$ in $\omega$ and separability of $\Omega$, the function $\bar{f}$ is measurable. For $r>0$ and $x\in\bR^{d}$ set $$ B_{r}(x)=\{y\in\bR^{d}:|x-y|<r\},\quad B_{r}=B_{r}(0). $$ For a measurable set $\Gamma\subset\bR^{d}$ by $|\Gamma|$ we denote its volume. In Section \ref{section 12.23.1} we will use the same notation for measurable $\Gamma\subset\bR^{d+1}$. 
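Before stating the main assumption, here is a rough one-dimensional numerical illustration (not part of the paper) of the mean-oscillation quantity $\dashint_{B_{r}(x)}|a-(a)_{B_{r}(x)}|\,dy$ that Assumption~\ref{assumption 12.18.1} below requires to be small:

```python
# Rough 1-D illustration (not from the paper) of the mean-oscillation
# quantity dashint_{B_r(x0)} |a - (a)_{B_r(x0)}| entering the VMO assumption.
import numpy as np

def mean_osc(a, x0, r, n=4001):
    y = np.linspace(x0 - r, x0 + r, n)
    v = a(y)
    return np.mean(np.abs(v - v.mean()))

jump = lambda y: np.where(y < 0.0, 1.0, 2.0)  # discontinuous: not VMO
cont = lambda y: np.sin(y)                    # uniformly continuous: VMO

for r in (1.0, 0.1, 0.01):
    print(r, mean_osc(jump, 0.0, r), mean_osc(cont, 0.0, r))
# the jump keeps oscillation near 1/2 at every scale, while for the
# continuous coefficient it tends to zero with r
```

In this one-dimensional caricature, the first coefficient violates the smallness requirement at every scale, while the second satisfies it for $r$ small enough.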
Introduce $$ (u)_{\Gamma}= \dashint_{\Gamma}u(x)\,dx=\frac{1}{|\Gamma|}\int_{\Gamma} u(x)\,dx. $$ In particular, $$ (a)_{B_{r}(x)}(\omega )=\dashint_{B_{r}(x)}a(\omega,y)\,dy. $$ In the following assumption there is a parameter $\theta \in(0,1]$, whose value will be specified later. \begin{assumption} \label{assumption 12.18.1} There exists an $R_{0}\in(0,\infty)$ such that for any $r\in(0,R_{0}]$ and $x\in\bR^{d}$ we have \begin{equation} \label{12.24.6} \dashint_{B_{r}(x)}\sup_{\omega\in \Omega} |a(\omega,y)-(a)_{B_{r}(x)}(\omega )|\,dy \leq\theta. \end{equation} \end{assumption} Observe that if $a$ is independent of $\omega$ (semilinear equations) and if for any $\theta>0$ there is an $R_{0}>0$ such that \eqref{12.24.6} is satisfied for any $r\in(0,R_{0}]$ and $x\in\bR^{d}$, then $a\in VMO$. For a constant $\lambda>0$ we will be considering the following equation $$ \sup_{\omega\in \Omega}[a^{ij}(\omega,x)D_{ij}u(x) +b^{i}(\omega,x)D_{i}u(x) $$ \begin{equation} \label{12.23.5} -(c(\omega,x)+\lambda)u(x)+f(\omega,x)]=0 \end{equation} in $\bR^{d}$. Of course, $D_{i}=\partial/\partial x^{i}$, $D_{ij}=D_{i}D_{j}$. By $Du$ we denote the gradient of $u$ and $D^{2}u$ the Hessian matrix of $u$. If $D$ is an open set and $p\geq1$, by $\cL_{p}(D)$ we denote the usual Lebesgue space and by $W^{2}_{p}(D)$ the usual Sobolev space. Denote $\cL_{p}=\cL_{p}(\bR^{d})$, $W^{2}_{p}=W^{2}_{p} (\bR^{d})$. We say that a function $u\in W^{2}_{p}$ satisfies \eqref{12.23.5} in $\bR^{d}$ if \eqref{12.23.5} holds almost everywhere in $\bR^{d}$. The main information about the solvability of \eqref{12.23.5} will be obtained while studying its reduced form \begin{equation} \label{12.6.1} \sup_{\omega\in \Omega}[a^{ij}(\omega,x)D_{ij}u(x)+f(\omega,x)]=0. \end{equation} \begin{theorem} \label{theorem 12.23.3} Let $p>d$. 
Then there exists a constant $\theta\in(0,1)$ depending only on $p,d$, and $\delta$ such that, if Assumption \ref{assumption 12.18.1} holds with this $\theta$, then there exists a $\lambda_{0}=\lambda_{0} (\delta,p,K,d)\geq0$ such that for any $\lambda \geq\lambda_{0}$ and $u\in W^{2}_{p}$ satisfying \eqref{12.23.5} we have $$ \lambda\|u\|_{\cL_{p}}+\|D^{2}u\|_{\cL_{p}} \leq N\|\bar{f}\|_{\cL_{p}}, $$ where $N=N(\delta,p, d)$. Moreover, for any $\lambda >\lambda_{0}$ there is a unique solution of \eqref{12.23.5} in $W^{2}_{p}$. \end{theorem} Previously the $W^{2}_{p}$-estimates for a class of fully nonlinear elliptic equations were obtained by Caffarelli in \cite{Ca} (see also \cite{CC}). It seems to the author that the class of equations from \cite{CC} does not include Bellman's equations like \eqref{12.6.1} when $\Omega$ consists of only two points: $$ \max\big(a^{ij}_{1}(x)D_{ij}u(x) +f_{1}(x), a^{ij}_{2}(x)D_{ij}u(x) +f_{2}(x)\big)=0 $$ unless $a_{1}$ and $a_{2}$ are uniformly sufficiently close to continuous functions and $f_{1}-f_{2}$ is uniformly sufficiently close to a uniformly continuous function. Our methods are slightly different from those from \cite{CC}. As in \cite{CC} we start from the result of \cite{FH} but then we follow more closely the general scheme from \cite{Kr08}. We are only dealing with equations in the whole space or interior estimates. One can probably obtain similar results for equations in half spaces and smooth domains by following the recent approach developed in \cite{DK} for {\em linear\/} equations and systems. In \cite{DK} one can also find an extensive list of references on linear equations with VMO coefficients. We prove Theorem \ref{theorem 12.23.3} in Section \ref{section 12.23.4} after we prepare some necessary tools in the next section. In Section \ref{section 12.23.1} we give some comments on how one can start obtaining similar results for parabolic equations. 
In the final Section \ref{section 12.24.3} we prove the necessary facts from Real Analysis. The author is sincerely grateful to Hongjie Dong for pointing out several glitches in the first draft of the paper. \mysection{Equations with constant coefficients} \label{section 12.23.3} In this section we consider equation \eqref{12.6.1} under the additional assumption that $a^{ij} $ are independent of $x$, namely, \begin{equation} \label{12.16.3} \sup_{\omega\in \Omega}\big[ a^{ij} (\omega )D_{ij}u(x)+f(\omega,x)\big]=0. \end{equation} \begin{lemma} \label{lemma 12.6.1} Let $u\in W^{2}_{d,loc}$. Then for any $r\in(0,\infty)$ $$ \sup_{B_{r}}|u(x)-x^{i}(u_{x^{i}})_{B_{r}}-(u )_{B_{r}}|^{d}\leq N(d)r^{2d}\dashint_{B_{r}}|D^{2}u|^{d}\,dx. $$ \end{lemma} Proof. First assume that $r=1$ and set $$ v(x)=u(x)-x^{i}(u_{x^{i}})_{B_{1}}-(u )_{B_{1}}. $$ By Sobolev embedding theorems $$ \sup_{B_{1}}|v|\leq N\|v\|_{W^{2}_{d}(B_{1})} $$ and by Poincar\'e's inequality $$ \|v\|_{W^{2}_{d}(B_{1})}^{d}= \|D^{2}u\|_{\cL_{d}(B_{1})}^{d}+ \|D u-(Du)_{B_{1}}\|_{\cL_{d}(B_{1})}^{d} $$ $$ +\|v\|_{\cL_{d}(B_{1})}^{d}\leq N\|D^{2}u\|_{\cL_{d}(B_{1})}^{d}. $$ For general $r$ it suffices to use dilations. The lemma is proved. \begin{lemma} \label{lemma 12.16.01} Let $r\in(0,\infty)$, $\nu\geq2$ and let $v\in C^{2}(\bar{B}_{\nu r})$ be a solution of \eqref{12.16.3} in $B_{\nu r}$ with $f \equiv0$. Then there are constants $\beta\in(0,1)$ and $N$, depending only on $d$ and $\delta$, such that $$ \dashint_{B_{r}}\dashint_{B_{r}}|D^{2}v(x)-D^{2}v(y)| \,dxdy\leq N\nu^{-2-\beta} r^{-2}\sup_{\partial B_{\nu r}}|v|. $$ \end{lemma} Proof. Dilations show that it suffices to concentrate on $r=1/\nu$. In that case the result follows from Theorem 5.5.8 of \cite{Kr85} which states that $$ |D^{2}v(x)-D^{2}v(y)|\leq N|x-y|^{\beta}\sup_{B_{1}}|v| $$ as long as $x,y\in B_{1/2}$. The lemma is proved. 
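The inequality of Lemma~\ref{lemma 12.6.1} can be sanity-checked numerically in dimension one (so with exponent $d=1$); the following sketch is illustrative only and not part of the paper, taking $u=\sin$ and comparing both sides of the estimate:

```python
# 1-D numerical sanity check (illustrative) of the oscillation estimate:
# sup_{B_r} |u - x (u_x)_{B_r} - (u)_{B_r}|  vs.  r^2 dashint_{B_r} |u''|
import numpy as np

def lemma_sides(r, n=20001):
    # u = sin, so u' = cos and u'' = -sin
    x = np.linspace(-r, r, n)
    u, du, d2u = np.sin(x), np.cos(x), -np.sin(x)
    v = u - x*du.mean() - u.mean()     # u minus its averaged affine part
    return np.abs(v).max(), r**2*np.mean(np.abs(d2u))

for r in (1.0, 0.5, 0.1):
    lhs, rhs = lemma_sides(r)
    assert lhs <= 2.0*rhs              # a modest constant N suffices here
```

Both sides scale like $r^{3}$ as $r\to0$ in this example, consistent with the lemma.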
Introduce $\bL_{\delta}$ as the collection of operators $Lu=a^{ij}D_{ij}u$ with $a=(a^{ij})$ being measurable and $\cS_{\delta}$-valued. \begin{lemma} \label{lemma 12.16.2} Let $r\in(0,\infty)$ and let $w\in C^{2}(\bar{B}_{r})$ be a function such that $w=0$ on $\partial B_{r}$. Then there are constants $\gamma\in(0,1]$ and $N$, depending only on $\delta$ and $d$, such that for any $L \in\bL_{\delta}$ we have $$ \dashint_{B_{r}}|D^{2}w|^{\gamma} \,dx \leq N\big(\dashint_{B_{r}}|Lw|^{d}\,dx\big)^{\gamma/d}. $$ \end{lemma} For $r=1$ the result is proved in \cite{FH} on the basis of an elliptic counterpart of Theorem \ref{theorem 12.21.1}. In the general case it suffices to use dilations. \begin{lemma} \label{lemma 12.16.3} Let $r\in(0,\infty)$, $\nu\geq2$, and let $u\in W^{2}_{d}(B_{\nu r})$ be a solution of \eqref{12.16.3} in $B_{\nu r}$. Then $$ \dashint_{B_{r}} \dashint_{B_{r}}|D^{2}u(x)-D^{2}u(y)|^{\gamma} \,dxdy $$ \begin{equation} \label{12.16.4} \leq N\nu^{d}\big(\dashint_{B_{\nu r}}\bar{f}^{d}\,dx\big)^{\gamma/d} +N\nu^{ -\gamma\beta} \big(\dashint_ {B_{\nu r}} |D^{2}u|^{d}\,dx\big)^{\gamma/d}, \end{equation} where $N=N(\delta,d)$. \end{lemma} Proof. Observe that it suffices to prove the lemma for $u\in C^{\infty}_{b}(\bar{B}_{\nu r})$. Indeed, if we denote by $u^{n}$ a sequence of smooth functions converging to $u$ in $W^{2}_{d}(B_{\nu r})$, then $$ \sup_{\omega}\big[a^{ij}(\omega)D_{ij}u^{n}(x) +f^{n}(\omega,x)\big]=0 $$ in $B_{\nu r}$, where $f^{n}(\omega,x)= a^{ij}(\omega)D_{ij}(u-u^{n})+f(\omega,x)$, so that $$ \|\sup_{\omega}|f^{n}(\omega,\cdot)|\|_{\cL_{d}(B_{\nu r})} \leq\|\bar{f}\|_{\cL_{d}(B_{\nu r})}+N \|u-u^{n}\|_{W^{2}_{d}(B_{\nu r})}, $$ where the last term tends to zero as $n\to\infty$. Next, let $v$ be a solution of \eqref{12.16.3} in $B_{\nu r}$ with $f\equiv0$ and the boundary condition $\hat{u}(x):=u(x)-x^{i}(u_{x^{i}})_{B_{\nu r}}-(u)_{B_{\nu r}}$ on $\partial B_{\nu r}$. 
By classical results (see, for instance, \cite{CC}, \cite{Ev}, \cite{GT}, \cite{Kr85}) such a solution of class $C^{2}(\bar{B}_{\nu r})$ exists. Then by Lemmas \ref{lemma 12.6.1} and~\ref{lemma 12.16.01} \begin{equation} \label{12.17.1} \dashint_{B_{r}}\dashint_{B_{r}}|D^{2}v(x)-D^{2}v(y)|^{\gamma} \,dxdy \leq N\nu^{ -\gamma\beta} \big(\dashint_ {B_{\nu r}} |D^{2}u|^{d}\,dx\big)^{\gamma/d}. \end{equation} Furthermore, for $\hat{w}:=\hat{u}-v$ we have that in $B_{\nu r}$ $$ 0=\sup_{\omega}\big[a^{ij}(\omega)D_{ij}\hat{w}(x) +a^{ij}(\omega)D_{ij}v(x)+f(\omega,x)\big] $$ $$ \leq \sup_{\omega}\big[a^{ij}(\omega)D_{ij}\hat{w}(x) +f(\omega,x)\big] $$ and $$ 0=\sup_{\omega}\big[ a^{ij}(\omega)D_{ij}\hat{w}(x) +a^{ij}(\omega)D_{ij}v(x)+f(\omega,x)\big] $$ $$ \geq \inf_{\omega}\big[a^{ij}(\omega)D_{ij}\hat{w}(x) +f(\omega,x)\big]. $$ It follows that there exists an operator $L\in\bL_{\delta}$ and a function $g$ such that $L\hat{w}+g=0$ in $B_{\nu r}$ and $|g|\leq\bar{f}$. Therefore, by Lemma \ref{lemma 12.16.2} and by the fact that $D^{2}\hat{w}=D^{2}w$, where $w=u-v$, we get that $$ \dashint_{B_{r}}|D^{2}w|^{\gamma} \,dx \leq \nu^{d}\dashint_{B_{\nu r}}|D^{2}w|^{\gamma} \,dx \leq N\nu^{d}\big(\dashint_{B_{\nu r}}\bar{f}^{d}\,dx\big)^{\gamma/d}, $$ $$ \dashint_{B_{r}}\dashint_{B_{r}}|D^{2}w(x)-D^{2}w(y)|^{\gamma} \,dxdy \leq N\nu^{d}\big(\dashint_{B_{\nu r}}\bar{f}^{d}\,dx\big)^{\gamma/d}. $$ By combining this with \eqref{12.17.1} we get \eqref{12.16.4} and the lemma is proved. Lemma \ref{lemma 12.16.3} plays the main role in the proof of Theorem \ref{theorem 12.23.3}, a particular case of which is the following theorem about a priori estimates and the solvability of Bellman's equations with constant coefficients. \begin{theorem} \label{theorem 12.19.1} Let $p>d$. (i) Let $u\in W^{2}_{p}$ satisfy \eqref{12.16.3} in $\bR^{d}$. Then there is a constant $N=N(\delta,p,d)$ such that \begin{equation} \label{12.19.5} \|D^{2}u\|_{\cL_{p}}\leq N\|\bar{f}\|_{\cL_{p}}. 
\end{equation} (ii) There exists a $\lambda_{0}=\lambda_{0} (\delta,p,K,d)\geq0$ such that for any $\lambda \geq\lambda_{0}$ and $u\in W^{2}_{p}$ satisfying $$ \sup_{\omega\in \Omega}\big[a^{ij}(\omega )D_{ij}u(x) +b^{i}(\omega,x)D_{i}u(x) $$ \begin{equation} \label{12.19.2} -(c(\omega,x)+\lambda)u(x) +f(\omega,x)\big]=0 \end{equation} in $\bR^{d}$ we have \begin{equation} \label{12.19.3} \lambda\|u\|_{\cL_{p}}+\|D^{2}u\|_{\cL_{p}} \leq N\|\bar{f}\|_{\cL_{p}}, \end{equation} where $N=N(\delta,p, d)$. Moreover, for any $\lambda >\lambda_{0}$ there is a unique solution of \eqref{12.19.2} in $W^{2}_{p}$. Finally, if $K=0$, one can take $\lambda_{0}=0$. \end{theorem} Proof. (i) We use notation \eqref{12.19.4} for the filtration of dyadic cubes and $\mu(dx)=dx$. We also note that one can apply Lemma \ref{lemma 12.16.3} after shifting the origin. Then we easily get that for any $\nu\geq2$ $$ (D^{2}u)_{\gamma}^{\sharp} \leq N\nu^{d/\gamma}\bM^{1/d}(\bar{f}^{d})+ N\nu^{- \beta}\bM^{1/d}(|D^{2}u|^{d}) $$ on $\bR^{d}$, where $\bM g$ is the classical maximal function of $g$ and $N=N(\delta,d)$. It follows by Theorem \ref{theorem 12.16.1} and the Hardy-Littlewood theorem on maximal functions that (recall that $p>d$) \begin{equation} \label{12.16.9} \|D^{2}u\|_{\cL_{p}}\leq N\nu^{d/\gamma}\|\bar{f}\|_{\cL_{p}} +N\nu^{- \beta}\|D^{2}u\|_{\cL_{p}}, \end{equation} where $N=N(\delta,p,d)$. The arbitrariness of $\nu\geq2$ now leads to \eqref{12.19.5}. (ii) By standard arguments all assertions in (ii) follow from its last one, to prove which it suffices to prove the a priori estimate \eqref{12.19.3}. Observe that by \eqref{12.19.5} we have (recall that $b=c=0$) $$ \|D^{2}u\|_{\cL_{p}}\leq N\|\bar{f}\|_{\cL_{p}} +N\lambda\|u\|_{\cL_{p}}, $$ which shows that it suffices to prove that \begin{equation} \label{12.19.6} \lambda\|u\|_{\cL_{p}} \leq N\|\bar{f}\|_{\cL_{p}}. 
\end{equation} Of course, we may assume that $\lambda>0$ and, as in the proof of Lemma \ref{lemma 12.16.3}, we may assume that $u\in C^{\infty}_{0}$. Then we take an operator $L\in\bL_{\delta}$ and a function $g$ such that $|g|\le\bar{f}$ and $Lu-\lambda u+g=0$. By Theorem 3.5.15 and Lemma 3.5.5 of \cite{Kr85} we have \begin{equation} \label{12.19.7} \lambda\|u\|_{\cL_{p}} \leq N\|g\|_{\cL_{p}} \leq N\|\bar{f}\|_{\cL_{p}}, \end{equation} with $N=N(\delta,d,p)$ provided that $\lambda\geq N$. Dilations show that the latter requirement can be reduced to $\lambda>0$. Actually, Theorem 3.5.15 of \cite{Kr85} is only proved there for $p=d$, but the proof is based on the results of Section 3.2 of \cite{Kr85}, which treats general $p\geq d$ (see Theorem 3.2.3 there). Therefore, we can repeat the corresponding arguments in the proofs of Theorem 3.5.15 and Lemma 3.5.5 of \cite{Kr85} almost word for word and get what we need. A different way to obtain the first estimate in \eqref{12.19.7} is to refer directly to a rather old result of \cite{Kr72} (see Lemma 7 there), but this would require the reader to know It\^o's formula from the theory of It\^o integrals. This way even has an advantage because it shows that $N$ in \eqref{12.19.6} is independent of $p$. The theorem is proved. \mysection{Proof of Theorem \protect\ref{theorem 12.23.3}} \label{section 12.23.4} We start with a few observations regarding equations with variable~$a^{ij}$. \begin{lemma} \label{lemma 12.19.1} Let $\kappa\in(1,\infty)$, $r\in(0,\infty)$, $\nu\geq2$, and let $u\in W^{2}_{d} $ be a solution of \eqref{12.6.1} in $\bR^{d}$. Assume that $u=0$ outside $B_{R_{0}}(x_{0})$ for some $x_{0} \in\bR^{d}$. 
Then $$ \dashint_{B_{r}} \dashint_{B_{r}}|D^{2}u(x)-D^{2}u(y)|^{\gamma} \,dxdy \leq N\nu^{d}\big(\dashint_{B_{\nu r}}\bar{f}^{d}\,dx\big)^{\gamma/d} $$ \begin{equation} \label{12.16.7} + N\nu^{d}\big(\dashint_{B_{\nu r}}|D^{2}u|^{\kappa d}\,dx \big)^{\gamma/(\kappa d)}\theta^{(1-1/\kappa)\gamma/d} +N\nu^{ -\gamma\beta} \big(\dashint_ {B_{\nu r}} |D^{2}u|^{d}\,dx\big)^{\gamma/d}, \end{equation} where $N=N(\delta,d)$. \end{lemma} Proof. We basically repeat the proof of Lemma 6.2.2 of \cite{Kr08}. Fix a $\nu\geq 2$ and an $r\in(0,\infty)$ and introduce $$ \bar{a}^{ij}(\omega)=(a^{ij})_{B_{R_{0}}}\quad\text{if}\quad \nu r\geq R_{0},\quad \bar{a}^{ij}(\omega)=(a^{ij})_{B_{\nu r}}\quad\text{if}\quad \nu r< R_{0}. $$ Observe that $$ \sup_{\omega\in \Omega}\big[\bar{a}^{ij}(\omega)D_{ij}u +\tilde{f}(\omega,x)\big]=0, $$ where $$ \tilde{f}(\omega,x) = (a^{ij}-\bar{a}^{ij})(\omega,x)D_{ij}u +f(\omega,x). $$ Denote $$ \tilde{a}(x)=\sup_{\omega\in \Omega}|a(\omega,x)-\bar{a}(\omega)|. $$ By Lemma \ref{lemma 12.16.3} $$ \dashint_{B_{r}} \dashint_{B_{r}}|D^{2}u(x)-D^{2}u(y)|^{\gamma} \,dxdy \leq N\nu^{d} I^{\gamma/d} $$ \begin{equation} \label{12.16.8} + N\nu^{d}\big(\dashint_{B_{\nu r}}\bar{f}^{d}\,dx\big)^{\gamma/d} +N\nu^{ -\gamma\beta} \big(\dashint_ {B_{\nu r}} |D^{2}u|^{d}\,dx\big)^{\gamma/d}, \end{equation} where $$ I= \dashint_{B_{\nu r}}\tilde{a}^{d}|D^{2}u|^{d}\,dx = \dashint_{B_{\nu r}}\tilde{a}^{d}|D^{2}u|^{d}I_{B_{R_{0}}}\,dx\leq J^{1/\kappa}_{1}J^{1-1/\kappa}_{2} $$ with $$ J_{1}=\dashint_{B_{\nu r}}|D^{2}u|^{\kappa d}\,dx, \quad J_{2}=\dashint_{B_{\nu r}}\tilde{a}^{\kappa d/(\kappa-1)} I_{B_{R_{0}}}\,dx\leq N \dashint_{B_{\nu r}}\tilde{a} I_{B_{R_{0}}}\,dx. $$ If $\nu r\geq R_{0}$, then $$ J_{2}\leq N(\nu r)^{-d}\int_{B_{R_{0}}}\tilde{a} \,dx \leq N(\nu r)^{-d}R^{d}_{0}\dashint_{B_{R_{0}}}\tilde{a} \,dx \leq N\theta. $$ In case $\nu r<R_{0}$ we have $$ J_{2}\leq N \dashint_{B_{\nu r}}\tilde{a} \,dx \leq N\theta. 
$$ Hence, $$ I\leq N\big(\dashint_{B_{\nu r}}|D^{2}u|^{\kappa d}\,dx \big)^{1/\kappa}\theta^{1-1/\kappa} $$ and by combining this with \eqref{12.16.8} we come to \eqref{12.16.7}. The lemma is proved. \begin{corollary} \label{corollary 12.16.01} Under the assumptions of Lemma \ref{lemma 12.19.1} let $p>\kappa d$. Then there is a constant $N_{1}=N_{1}(\delta,p,\kappa,d)$ such that $$ \|D^{2}u\|_{\cL_{p}}\leq N_{1} \nu^{ d/\gamma}\|\bar{f}\|_{\cL_{p}} +N_{1}(\nu^{d/\gamma}\theta^{(1-1/\kappa) /d}+\nu^{-\beta}) \|D^{2}u\|_{\cL_{p}} $$ \end{corollary} This is obtained in the same way as \eqref{12.16.9}. \begin{corollary} \label{corollary 12.16.2} Let $p>d$ and let $u\in W^{2}_{p} $ be a solution of \eqref{12.6.1} in $\bR^{d}$. Assume that $u=0$ outside $B_{R_{0}}(x_{0})$ for some $x_{0} \in\bR^{d}$. Then there exist $\theta=\theta(p,d,\delta)\in(0,1]$ and $N=N(p,d,\delta)$ such that if Assumption \ref{assumption 12.18.1} is satisfied with this $\theta$, then $\|D^{2}u\|_{\cL_{p}}\leq N\|\bar{f}\|_{\cL_{p}}$. \end{corollary} Indeed, it suffices to set $2\kappa=1+p/d$ and choose first $\nu$ and then $\theta$ in such a way that $$ N_{1}(\nu^{ d/\gamma} \theta^{(1-1/\kappa) /d}+\nu^{-\beta})\leq 1/2. $$ {\bf Proof of Theorem \ref{theorem 12.23.3}}. We suppose that Assumption \ref{assumption 12.18.1} holds with $\theta$ from Corollary \ref{corollary 12.16.2}. First assume that we are given a function $u\in W^{2}_{p}$, which satisfies \eqref{12.6.1} in $\bR^{d}$. Take a nonnegative $\zeta\in C^{\infty}_{0}$ which has support in $B_{R_{0}}$ and is such that $\zeta^{p}$ integrates to one. For the parameter $x_{0}\in\bR^{d}$ define $$ u_{x_{0}}(x)=u(x)\zeta(x-x_{0}) $$ and observe that $$ \sup_{\omega\in \Omega}\big[a^{ij}(\omega,x)D_{ij}u_{x_{0}}(x) +f_{x_{0}}(\omega,x)\big]=0, $$ where $$ f_{x_{0}}(\omega,x)=f (\omega,x)\zeta(x-x_{0}) -u(x)a^{ij}(\omega,x)D_{ij}\zeta(x-x_{0}) $$ $$ - 2a^{ij}(\omega,x)(D_{i}u(x))D_{j}\zeta(x-x_{0}). 
$$ By Corollary \ref{corollary 12.16.2} $$ \|\zeta(\cdot-x_{0})|D^{2}u|\|_{\cL_{p}}^{p} \leq N\|\zeta(\cdot-x_{0})\bar{f}\|_{\cL_{p}}^{p} $$ $$ +\||D\zeta(\cdot-x_{0})|\,|Du|\|_{\cL_{p}}^{p} +\||D^{2}\zeta(\cdot-x_{0})|u\|_{\cL_{p}}^{p}. $$ Upon integrating through this estimate we get \begin{equation} \label{12.16.10} \| D^{2}u \|_{\cL_{p}}^{p} \leq N_{1}\| \bar{f}\|_{\cL_{p}}^{p} +N_{2}(\| Du \|_{\cL_{p}}^{p} +\| u\|_{\cL_{p}}^{p}), \end{equation} where $N_{1}=N_{1}(p,d,\delta)$ and $N_{2}=N_{2} (p,d,\delta, R_{0})$. If $u\in W^{2}_{p}$ is a solution of \eqref{12.23.5}, then by absorbing the first- and zeroth-order terms into $f$ we see that $$ \| D^{2}u \|_{\cL_{p}} \leq N_{1}(\| \bar{f}\|_{\cL_{p}} +\lambda\| u\|_{\cL_{p}}) +N_{2}(\| Du \|_{\cL_{p}} +\| u\|_{\cL_{p}} ) $$ and if $\lambda\geq\lambda_{0} (p,d,\delta, R_{0})$, then multiplicative inequalities yield $$ \| D^{2}u \|_{\cL_{p}} \leq N_{1} (\| \bar{f}\|_{\cL_{p}} +\lambda\| u\|_{\cL_{p}}), $$ where $N_{1}$ still depends only on $ p,d,\delta $. Applying the results of \cite{Kr85} or \cite{Kr72} as in the proof of Theorem \ref{theorem 12.19.1}, we obtain that $$ \lambda\| u\|_{\cL_{p}}+ \| D^{2}u \|_{\cL_{p}} \leq N_{1} \| \bar{f}\|_{\cL_{p}}. $$ After that it suffices to repeat the corresponding argument from the proof of Theorem \ref{theorem 12.19.1}. \mysection{Comments on parabolic equations} \label{section 12.23.1} Denote by $\frL$ the set of operators $L$ of the form $$ L=\partial_{t}+a^{ij}(t,x)D_{ij}+b^{i}(t,x)D_{i}-c(t,x),\quad \partial_{t}=\frac{\partial}{\partial t}, $$ where $a(t,x)=(a^{ij}(t,x))$, $b(t,x)=(b^{i}(t,x))$, and $c(t,x)$ are, respectively, $\cS_{\delta}$-valued, $\bR^{d}$-valued, and real-valued measurable functions defined on $\bR^{d+1}=\{(t,x):t\in\bR,x\in\bR^{d}\}$ satisfying $$ |b |+c \leq K,\quad c \geq0. $$ Let $\frL_{0}$ be a subset of $\frL$ consisting of operators with infinitely differentiable coefficients. 
For $\rho,r>0$ introduce $$ C_{\rho,r}=(0,\rho)\times B_{r},\quad \partial'C_{\rho,r}=\big([0,\rho]\times\partial B_{r}\big) \cup\big(\{\rho\}\times B_{r}\big), $$ $$ C_{\rho,r}(t,x)=(t,x)+C_{\rho,r},\quad \partial'C_{\rho,r}(t,x)=(t,x)+\partial'C_{\rho,r}. $$ \begin{theorem} \label{theorem 12.21.1} Let $u\in W^{1,2}_{d+1}(C_{2,1})$ and assume that $u\geq0$ on $\partial'C_{2,1}$ and there exists an operator $L\in\frL$ such that $Lu\leq0$ in $C_{2,1}$. Then there exist constants $\gamma=\gamma(\delta,d,K)\in(0,1)$ and $N =N (\delta,d,K)$ such that for any $\lambda>0$ \begin{equation} \label{12.21.2} |C_{1,1}(1,0)\cap\{-Lu\geq\lambda\}| \leq N \lambda^{-\gamma}u^{\gamma}(0,0). \end{equation} \end{theorem} Here is a consequence of this theorem, which can be used in constructing the theory of parabolic Bellman's equations along the lines in Sections \ref{section 12.23.3} and \ref{section 12.23.4}. \begin{corollary} \label{corollary 12.23.1} Let $w\in C^{1,2}(\bar{C}_{2,1})$ be a function such that $w=0$ on $\partial'C_{2,1}$. Then there are constants $\gamma\in(0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that for any $L \in\frL$ we have $$ \int_{C_{1,1}}|D^{2}w|^{\gamma} \,dxdt \leq N\big(\int_{C_{2,1}}|Lw|^{d+1}\,dxdt\big)^{\gamma/(d+1)}. $$ \end{corollary} This corollary is deduced from Theorem \ref{theorem 12.21.1} in the same way as the theorem in \cite{FH} is deduced from estimate (2.1) of \cite{FH}, the only difference being that instead of the elliptic Alexandrov estimate one uses the parabolic one. To prove Theorem \ref{theorem 12.21.1} we need an auxiliary construction, but first we observe that replacing $u$ with $u/\lambda$ reduces the general case to the one with $\lambda=1$. 
For $q\in[0,1]$ denote by $\frU_{q}$ the set of functions $u$ which are bounded and continuous along with $\partial_{t}u$, $Du$, $D^{2}u$ in $\bar{C}_{2,1}$ and such that (i) $u=0$ on $\partial'C_{2,1}$; (ii) there exists an operator $L\in \frL_{0}$ such that $Lu\leq0$ in $C_{2,1}$ and $$ |C_{1,1}(1,0)\cap\{Lu\leq-1\}|\geq q|C_{1,1}|. $$ Finally introduce $$ m(t,x,q)=\inf\{u(t,x):u\in\frU_{q}\},\quad (t,x)\in \bar{C}_{2,1}. $$ \begin{remark} \label{remark 12.21.1} Denote by $m'(t,x,q)$ the function called $m(t,x,q)$ in Section 4.1 of \cite{Kr85}. Then obviously $m'(t,x,q)=m(1+t,x,q)$. In particular, by Lemmas 4.1.3 and 4.1.4 of \cite{Kr85} for any $\kappa\in(0,1)$ there exist $q_{0}\in(0,1)$ (close to 1) and $m_{0}>0$, depending only on $\delta,d,K$, and $\kappa$, such that $$ m(t,x,q)\geq m_{0} $$ for $q\in[q_{0},1]$ and $(t,x)\in\bar{C}_{\kappa^{2},\kappa}$. \end{remark} \begin{remark} \label{remark 12.21.2} By Lemma 4.1.1 of \cite{Kr85} we know that for $u$ from Theorem \ref{theorem 12.21.1} it holds that $$ u(0,0)\geq m(0,0,q) $$ if $q$ satisfies $$ q|C_{1,1}|=|C_{1,1}(1,0)\cap\{Lu\leq-1\}|. $$ Therefore, to prove the theorem, we only need to prove that \begin{equation} \label{12.21.3} N m(0,0,q) \geq q^{1/\gamma} \end{equation} for $N$ and $\gamma$ depending only on $d,\delta,K$. \end{remark} We also need to use Lemma 4.1.6 of \cite{Kr85}. Before stating it we introduce some notation. Let $\Gamma$ be a measurable subset of $C_{1,1}(1,0)$ and let $q,\eta,\zeta\in(0,1)$ be some numbers. Denote by $\frB=\frB(\Gamma,q)$ the collection of cylinders $Q=C_{\rho^{2},\rho}(t_{0},x_{0})$ such that $Q \subset C_{1,1}(1,0)$ and $$ |Q\cap\Gamma|\geq q|Q|. $$ If $Q=C_{\rho^{2},\rho}(t_{0},x_{0})\in\frB$ we set $$ Q'=(t_{0},x_{0})-C_{\eta^{-1}\rho^{2},\rho},\quad Q''=(t_{0}-\eta^{-1}\rho^{2},x_{0})+C_{\eta^{-1}\rho^{2} \zeta^{2},\rho\zeta}. $$ Imagine that the $t$-axis is pointed up vertically. 
Then $Q'$ is immediately adjacent to $Q$ from below, the two cylinders have a common base, and along the $t$-axis $Q'$ is $\eta^{-1}$ times longer than $Q$. It is quite possible that part of $Q'$ comes out of $C_{1,1}(1,0)$ or, for that matter, out of $C_{2,1}$. The cylinder $Q''$ is obtained from $ Q'$ by parabolic compression centered at the center of the lower base of $Q'$, the compression coefficient being $\zeta^{-1}$. Finally, denote $$ \Gamma''=\bigcup_{Q\in\frB}Q''. $$ Here is Lemma 4.1.6 of \cite{Kr85}. \begin{lemma} \label{lemma 12.21.3} If $|\Gamma|\leq q|C_{1,1}|$, then $$ |\Gamma''|\geq \big(1-(1-q)3^{-d-1}\big)^{-1}(1+\eta)^{-1} \zeta^{d+2}|\Gamma|. $$ \end{lemma} {\bf Proof of Theorem \ref{theorem 12.21.1}}. We are going to slightly modify the proof of Theorem 4.1.2 of \cite{Kr85} while concentrating on \eqref{12.21.3}. Fix some $\eta,\zeta\in(0,1)$ to be specified later and such that \begin{equation} \label{12.21.7} \zeta^{2}\leq1-2\eta. \end{equation} Next, fix a $\kappa\in(0,1)$ such that $\kappa^{2}\geq1/2$ and set $$ \mu(q)=\inf\{m(0,x,q):|x|\leq\kappa\}. $$ By Remark \ref{remark 12.21.1} there is a $q_{0}\in(0,1)$ and $m_{0}>0$ such that $$ \mu(q_{0})\geq m_{0}. $$ Next, we take some $0<q'<q''<1$ and try to relate $\mu(q')$ to $\mu(q'')$. To this end take a $u\in \frU_{q'}$ and let $$ \Gamma= C_{1,1}(1,0)\cap\{Lu\leq-1\} $$ where $L$ is the operator associated with $u$. From chosen $\Gamma,q_{0},\eta$, and $\zeta$ we construct the set $\Gamma''$ as before Lemma \ref{lemma 12.21.3} by taking there $q_{0}$ in place of $q$ and consider two cases: (i) $|\Gamma''\setminus C_{1,1}(1,0)|\leq (q''-q')|C_{1,1}|$, (ii) $|\Gamma''\setminus C_{1,1}(1,0)|> (q''-q')|C_{1,1}|$. {\em Case (i)\/}. Denote by $\tilde{u}$ and $\tilde{v}$ the $W^{1,2}_{d+1}(C_{2,1})$-solutions of $$ L\tilde{u}=-I_{\Gamma},\quad L\tilde{v}=-I_{\Gamma_{0}} $$ vanishing on $\partial'C_{2,1}$, where $\Gamma_{0}=\Gamma''\cap C_{1,1}(1,0)$. 
Since the coefficients of $L$ are infinitely differentiable, such solutions exist. There are two possibilities: either (a) $|\Gamma|\geq q_{0}|C_{1,1}|$, \noindent or (b) $|\Gamma|< q_{0}|C_{1,1}|$. Under condition (a) by definition \begin{equation} \label{12.24.1} u(0,x)\geq \mu(q_{0}),\quad|x|\leq\kappa. \end{equation} In case (b) by definition and Lemma \ref{lemma 12.21.3} $$ q'|C_{1,1}|\leq|\Gamma|\leq \big(1-(1-q_{0})3^{-d-1}\big)(1+\eta) \zeta^{-d-2}|\Gamma''|. $$ Moreover, by assumption $$ |\Gamma''|=|\Gamma''\setminus C_{1,1}(1,0)|+|\Gamma_{0}| \leq (q''-q')|C_{1,1}|+|\Gamma_{0}|. $$ It follows that $$ |\Gamma_{0}|\geq q''|C_{1,1}|, $$ if \begin{equation} \label{1.5.1} (1+\xi) q'\geq 2q'', \end{equation} where $$ \xi:=\big(1-(1-q_{0})3^{-d-1}\big)^{-1}(1+\eta)^{-1} \zeta^{d+2} . $$ Obviously there exist $\eta=\eta(q_{0})\in(0,1)$ and $\zeta=\zeta(q_{0})\in(0,1)$ such that \eqref{12.21.7} is satisfied and $\xi=\xi(q_{0})>1$, so that \eqref{1.5.1} holds for some $q'<q''$. Since $q_{0}$ depends only on $\delta,K,d$, and $\kappa$, so do $\eta$, $\zeta$, and $\xi$. We fix such $\eta$ and $\zeta$ from this moment on. Then by definition $$ \tilde{v}(0,x)\geq\mu(q'') ,\quad|x|\leq\kappa. $$ We thus have estimated $\tilde{v}$ from below in case (i), (b) for $q'<q''$ satisfying \eqref{1.5.1}. By the maximum principle $u\geq\tilde{u}$ and to estimate $u$ from below it suffices to estimate $\tilde{u}$ from below in terms of $\tilde{v}$. This will be done by use of Lemma 4.1.5 of \cite{Kr85}. If $(t_{0},x_{0})\in\Gamma_{0}$, then there exists a cylinder $Q\in\frB$ such that $(t_{0},x_{0})\in Q''$. Define $(t_{1},x_{1})$ and $\tau$, $\rho$ from the equation $$ C_{\tau,\rho}(t_{1},x_{1})=Q_{1}:=(Q'\cup Q)\cap C_{2,1} , $$ so that $$ Q=C_{ \rho^{2},\rho}(\tau+t_{1}-\rho^{2},x_{1}). $$ Furthermore, owing to \eqref{12.21.7}, the distance from $(t_{0},x_{0})$ to the bottom of $Q$ is bigger than $$ \eta^{-1}\rho^{2}-\eta^{-1}\rho^{2}\zeta^{2}\geq 2\rho^{2}. 
$$ In particular, \begin{equation} \label{12.21.8} 1<t_{0}\leq (\tau+t_{1}-\rho^{2})-2\rho^{2}. \end{equation} Next let $\bar{u}$ and $\bar{v}$ be the $W^{1,2}_{d+1}(Q_{1})$-solutions of $$ L\bar{u}=-N_{0}I_{\Gamma},\quad L\bar{v}=-I_{\Gamma_{0}}, $$ vanishing on $\partial'Q_{1}$, where the constant $N_{0}$ will be specified later in such a way that \begin{equation} \label{12.21.5} \bar{u}(t_{0},x_{0})\geq \bar{v}(t_{0},x_{0}). \end{equation} If we can do this, then by Lemma 4.1.5 of \cite{Kr85} we have $N_{0}\tilde{u}\geq\tilde{v}$ on $\bar{C}_{2,1}$ and for $q'<q''$ satisfying \eqref{1.5.1} \begin{equation} \label{12.21.6} u(0,x)\geq\tilde{u}(0,x)\geq N_{0}^{-1}\tilde{v}(0,x) \geq N_{0}^{-1}\mu(q''),\quad|x|\leq\kappa. \end{equation} To prove \eqref{12.21.5}, observe that by the maximum principle $\bar{v}(t,x)\leq t_{1}+\tau-t$, so that \begin{equation} \label{12.22.1} \bar{v}(t_{0},x_{0})\leq(1+\eta^{-1})\rho^{2}. \end{equation} On the other hand, by the choice of $Q$ we have $q_{0}|Q|\leq|\Gamma\cap Q|$. This inequality is preserved under the parabolic dilation $x\to\rho^{-1}x$, $t\to\rho^{-2}t$ which transforms $\bar{u}$ into a function $\hat{u}$, which satisfies $\hat{L}\hat{u}=-N_{0}\rho^{2}I_{\hat{\Gamma}}$ with an $\hat{L}\in\frL_{0}$ and $\hat{\Gamma}$ being the image of $\Gamma$. Observe that the hyperplane $t=\tau+t_{1}-2\rho^{2}$ passes at a distance $\rho^{2}$ from $Q$ and it intersects $Q_{1}$ above $t=t_{0}$. Hence, by definition $$ \bar{u}(\tau+t_{1}-2\rho^{2},x) \geq N_{0}\rho^{2}\mu(q_{0}),\quad|x-x_{1}|\leq\kappa\rho. $$ Since $|x_{0}-x_{1}|\leq (1-\zeta)\rho$ and the distance between the hyperplanes $t=\tau+t_{1}-2\rho^{2}$ and $t=t_{0}$ is bigger than $\rho^{2}$ and less than $\eta^{-1}\rho^{2}$, it follows by Lemma 4.1.3 of \cite{Kr85} that $$ \bar{u}(t_{0},x_{0}) \geq \alpha N_{0}\rho^{2}\mu(q_{0}), $$ where $\alpha>0$ depends only on $d,\delta,K$, and $\kappa$. 
By taking $$ N_{0}=\alpha^{-1}(1+\eta^{-1})\mu^{-1}(q_{0}) $$ and recalling \eqref{12.22.1} we come to \eqref{12.21.5} and thus \eqref{12.21.6} is established in case (i), (b) for $q'<q''$ satisfying \eqref{1.5.1}, so that generally in case (i) (recall \eqref{12.24.1}) for those $q',q''$ $$ u(0,x)\geq\min(\mu(q_{0}), N^{-1}_{0}\mu(q'')),\quad|x|\leq\kappa. $$ The arbitrariness in the choice of $u$ implies that $$ \mu(q')\geq\min(\mu(q_{0}), N^{-1}_{0}\mu(q'')), $$ which after introducing $$ \hat{\mu}(q)=\min(\mu(q_{0}),\mu(q)) $$ yields \begin{equation} \label{12.24.2} \hat{\mu}(q')\geq N^{-1}_{0}\hat{\mu}(q'') \quad\text{if}\quad (1+\xi)q'\geq2q''. \end{equation} {\em Case (ii)\/}. First we claim that for some $(t,x)\in\Gamma''$ it holds that $t<q'-q''+1$. Indeed, otherwise we would have $\Gamma''\setminus C_{1,1}(1,0)\subset \bar{C}_{q''-q',1}(q'-q''+1,0)$, so that $|\Gamma''\setminus C_{1,1}(1,0)|\leq (q''-q')|C_{1,1}|$, contrary to the assumption of case (ii). It follows that there is a cylinder $$ Q=C_{\rho^{2},\rho}(t_{0},x_{0})\in\frB $$ such that $Q'$ contains points in the half-space $t<q'-q''+1$. Since $q'<q''$, $q'-q''+1<1$ and $Q'$ is adjacent to $Q\subset C_{1,1}(1,0)$, this implies that the height of $Q'$ is at least $q''-q'$, that is, \begin{equation} \label{12.22.7} \rho^{2}\eta^{-1}\geq q''-q',\quad\rho^{2}\geq\eta(q''-q'). \end{equation} Moreover, by definition $|\Gamma\cap Q|\geq q_{0}|Q|$ and by using dilations and the maximum principle we see that \begin{equation} \label{12.22.4} u(t_{0}-\rho^{2},x)\geq \mu(q_{0})\rho^{2},\quad |x-x_{0}|\leq \kappa\rho. \end{equation} If $t_{0}-\rho^{2}\geq1/4$, then by Lemma 4.1.3 of \cite{Kr85} estimate \eqref{12.22.4} implies that $$ u(0,x)\geq \mu(q_{0})\varepsilon\rho^{n} ,\quad |x|\leq \kappa , $$ where $\varepsilon>0$ and $n\geq1$ depend only on $\delta,K,d$, and $\kappa$. In particular (see \eqref{12.22.7}), \begin{equation} \label{12.22.6} u(0,x)\geq \mu(q_{0})\varepsilon\eta^{n} (q''-q')^{n},\quad |x|\leq \kappa . 
\end{equation} On the other hand, by Remark \ref{remark 12.21.1} for $t_{0}-\rho^{2} \leq t\leq t_{0}-\rho^{2}+\kappa^{2}\rho^{2}$ and $|x-x_{0}|\leq\kappa \rho$ it holds that $$ u(t,x)\geq m_{0}\rho^{2}. $$ In addition, if $t_{0}-\rho^{2}<1/4$, then in the above inequality one can take $t =\kappa^{2}/2$ since $t_{0}\geq1$, $\rho^{2}>3/4$, $\rho^{2}\leq1$ (also recall that $\kappa^{2}\geq1/2$) and $$ t_{0}-\rho^{2}<1/4\leq\kappa^{2}/2\leq t_{0}-\rho^{2}+\kappa^{2}/2 \leq t_{0}-\rho^{2}+\kappa^{2}\rho^{2}. $$ Hence, $u(\kappa^{2}/2,x)\geq m_{0}\rho^{2}$ for $|x-x_{0}|\leq \kappa\rho$ and as above $$ u(0,x)\geq m_{0}\varepsilon\eta^{n}(q''-q')^{n}, \quad |x|\leq\kappa, $$ with $\varepsilon$ and $n$ of the same kind as in \eqref{12.22.6} or just the same if we choose the minimum of $\varepsilon$'s and the maximum of $n$'s. Again the arbitrariness of $u$ yields $\mu(q')\geq m_{0}\varepsilon\eta^{n}(q''-q')^{n}$, which after reducing $\varepsilon$ if necessary, so that $\mu(q_{0})\geq m_{0}\varepsilon\eta^{n}$, leads to $$ \hat{\mu}(q')\geq m_{0}\varepsilon\eta^{n}(q''-q')^{n}. $$ As a result of considering the two cases (i) and (ii) we get that there exist $\varepsilon_{0}\in(0,1)$ and $n_{0}\geq1$ depending only on $\delta,K,d$, and $\kappa$, such that for any $0<q'<q''<1$ such that $(1+\xi)q'\geq2q''$ we have \begin{equation} \label{12.23.2} \hat{\mu}(q')\geq\varepsilon_{0}\min\big( (q''-q')^{n_{0}},\hat{\mu}(q'')\big). \end{equation} We also know that $\hat{\mu}(q)\geq m_{0}>0$ for $q\geq q_{0}$. We may certainly assume that $\varepsilon_{0} \leq\bar{\varepsilon}:=2/(1+\xi)$ (recall that $\xi>1$) and we claim that for $q_{k}= \bar{\varepsilon}^{k} q_{0}$, $k=0,1,2,...$, we have \begin{equation} \label{12.23.1} \hat{\mu}(q_{k})\geq\varepsilon^{kn_{0}}_{0}\chi,\quad \chi:=\min\big(\mu(q_{0}),q_{0}^{n_{0}}(1-\bar{\varepsilon})^{n_{0}}\big). \end{equation} To prove the claim we use induction. If $k=0$, \eqref{12.23.1} is obvious. 
If it is true for a $k$, then $q_{k}-q_{k+1}= \bar{\varepsilon}^{k} q_{0}(1- \bar{\varepsilon})$ and $$ (q_{k}-q_{k+1})^{n_{0}}= \bar{\varepsilon}^{kn_{0}} q_{0}^{n_{0}}(1- \bar{\varepsilon})^{n_{0}} \geq \varepsilon_{0}^{kn_{0}}\chi, $$ so that by \eqref{12.23.2} and the fact that $(1+\xi)q_{k+1}=2q_{k}$ $$ \hat{\mu}(q_{k+1})\geq\varepsilon_{0}\min\big(\varepsilon_{0}^{kn_{0}} \chi,\hat{\mu}(q_{k})\big)\geq\varepsilon_{0}\varepsilon_{0}^{kn_{0}} \chi\geq \varepsilon_{0}^{(k+1)n_{0}} \chi. $$ This proves \eqref{12.23.1} and shows that, if we define $r>1$ so that $\varepsilon_{0}^{n_{0}} =\bar{\varepsilon}^{r}$, then $ \hat{\mu}(q_{k})\geq Nq_{k}^{r}$ with $r,N>0$ depending only on $\delta,K,d$, and $\kappa$. By observing that $\hat{\mu}$ is an increasing function we obtain that $\hat{\mu}(q) \geq Nq^{r} $, $\mu(q)\geq Nq^{r}$ for $q\leq1$. Finally, since $m(0,0,q)\geq\mu(q)$ if in the construction of $\mu$ we take any $\kappa>0$, say $\kappa^{2}=1/2$, we come to \eqref{12.21.3} with $\gamma=1/r$, which, as explained in Remark \ref{remark 12.21.2}, proves the theorem. \mysection{Appendix} \label{section 12.24.3} We will be working in the setting of Chapter 3 of \cite{Kr08} using notation different from the previous sections of the present article. Thus, $(\Omega,\cF,\mu)$ is a measure space with a $\sigma$-finite measure $\mu$ satisfying $\mu(\Omega)=\infty$. For $\Gamma\in\cF$ we use the notation $|\Gamma|=\mu(\Gamma)$ and $$ f_{\Gamma}=\dashint_{\Gamma}f\,\mu(dx)=\frac{1}{|\Gamma|} \int_{\Gamma}f\,\mu(dx). $$ Next we take a filtration $\{\mathbb{C}_{n}:n\in\mathbb{Z}\}$ of partitions of $\Omega$ as in Section 3.1 of \cite{Kr08} and recall that for any $n\in\mathbb{Z}$ and $C\in \mathbb{C}_{n}$ there exists a unique ``parent'' $C'\in\mathbb{C}_{n-1}$ such that $C\subset C'$. It is assumed that whenever $C$ and $C'$ are related in the manner described above we have $|C'|\leq N_{0}|C|$, where $N_{0}$ is a constant independent of $n$, $C$, and $C'$. 
For functions $g$, for which it makes sense, denote $$ g_{|n}(x)=\dashint_{C_{n}(x)}g(y)\,\mu(dy), $$ where $C_{n}(x)$ is the element of the family $\mathbb{C}_{n}$ containing $x$. Also $$ \cM g(x):=\sup_{n<\infty}|g|_{|n}(x). $$ The most relevant filtration of partitions in this paper is the dyadic cube filtration of partitions of $\bR^{d}$ with Lebesgue measure when $$ \mathbb{C}_{n}=\{C_{n}(i_{1},...,i_{d}),i_{1},...,i_{d}\in\mathbb{Z}\}, $$ $$ C_{n}(i_{1},...,i_{d})=[i_{1}2^{-n},(i_{1}+1)2^{-n})\times...\times [i_{d}2^{-n},(i_{d}+1)2^{-n}). $$ In the remaining part of the section we consider two functions $u,v\in\cL_{1}(\Omega)$ and a nonnegative measurable function $g$ on $\Omega$. Below by $I_{\cM v(x)>\alpha \lambda}$ we mean the indicator function of the set $\{x:\cM v(x)>\alpha \lambda\}$. The most relevant case of Lemma \ref{lemma 12.16.1} for the purposes of the present article is when $u^{C}=|u|$. In the form it is stated and for $\gamma=1$ the lemma was used in \cite{Kr09} while treating linear elliptic equations with rather rough coefficients. \begin{lemma} \label{lemma 12.16.1} Let $\gamma\in(0,1]$. Assume that $|u|\leq v$ and for any $n\in\mathbb{Z}$ and $C\in\mathbb{C}_{n}$ there exists a measurable function $u^{C}$ given on $C$ such that $|u|\leq u^{C}\leq v$ on $C$ and \begin{equation} \label{6.29.3} \dashint_{C} \dashint_{C}|u^{C}(x)-u^{C}(y)|^{\gamma}\,\mu(dx)\mu(dy)\leq \dashint_{C}g^{\gamma}(x)\,\mu(dx) . \end{equation} Then for any $\lambda>0$ we have \begin{equation} \label{6.29.1} |\{x:|u(x)|\geq\lambda\}|\leq \nu^{-1}\lambda^{-\gamma} \int_{\Omega}g^{\gamma}(x)I_{\cM v(x)>\alpha \lambda}\,\mu(dx), \end{equation} where $\alpha=(2N_{0})^{-1}$ and $\nu= 1-2^{-\gamma}$. \end{lemma} Proof. Obviously we may assume that $u\geq0$. Fix a $\lambda>0$ and define $$ \tau(x)=\inf\{n\in\mathbb{Z}:v_{|n}(x)>\alpha\lambda\}. $$ We know that $\tau$ is a stopping time and if $\tau(x)<\infty$, then $$ v_{|n}(x)\leq \lambda/2,\quad\forall n\leq\tau(x). 
$$ We also know that $v_{|n}\to v\geq u$ (a.e.) as $n\to\infty$. It follows that (a.e.) $$ \{x:u(x)\geq\lambda\}=\{x:u(x)\geq\lambda,\tau(x)<\infty\} $$ $$ =\{x:u(x)\geq\lambda, v_{|\tau}(x)\leq \lambda/2\} =\bigcup_{n\in\mathbb{Z}}\bigcup_{C\in \mathbb{C}^{\tau}_{n}}A_{n}(C), $$ where $$ A_{n}(C):=\{x\in C:u(x)\geq\lambda, v_{|n}(x)\leq \lambda/2\}, $$ and $\mathbb{C}^{\tau}_{n}$ is the family of disjoint elements of $\mathbb{C}_{n}$ such that $$ \{x:\tau(x)=n\}=\bigcup_{C\in \mathbb{C}^{\tau}_{n}}C. $$ Next, for each $n\in\mathbb{Z}$ and $C\in\mathbb{C}_{n}$ on the set $A_{n}(C)$, if it is not empty, we have $v_{|n}=v_{C}$ and on $A_{n}(C)$ $$ u^{\gamma}-(v_{C})^{\gamma}\geq\lambda^{\gamma}(1-2^{-\gamma}) =\nu\lambda^{\gamma} . $$ We use this and the inequality $|a-b|^{\gamma}\geq|a|^{\gamma}-|b|^{\gamma}$ and conclude that for $x\in A_{n}(C)$ $$ \dashint_{C}|u^{C}(x)-u^{C}(y)|^{\gamma}\,\mu(dy) \geq(u^{C}(x))^{\gamma}-\dashint_{C}(u^{C}(y))^{\gamma}\,\mu(dy) $$ $$ \geq u^{\gamma}(x)-\dashint_{C}v^{\gamma}(y)\,\mu(dy) \geq u^{\gamma}(x)-(v_{C}(x))^{\gamma} \geq\nu\lambda^{\gamma}, $$ so that by Chebyshev's inequality $$ |A_{n}(C)|\leq \nu^{-1}\lambda^{-\gamma}\int_{C} \dashint_{C}|u^{C}(x)-u^{C}(y)|^{\gamma}\,\mu(dy)\mu(dx). $$ It follows by assumption \eqref{6.29.3} that $$ |A_{n}(C)|\leq \nu^{-1}\lambda^{-\gamma}\int_{C}g^{\gamma} \,\mu(dx), $$ $$ |\{x:u(x)\geq\lambda\}|\leq\nu^{-1}\lambda^{-\gamma} \sum_{n\in\mathbb{Z}}\sum_{C\in \mathbb{C}^{\tau}_{n}}\int_{C}g ^{\gamma}\,\mu(dx) $$ $$ =\nu^{-1}\lambda^{-\gamma}\int_{\Omega}g^{\gamma}I_{\tau<\infty}\,\mu(dx). $$ It only remains to observe that $\{\tau<\infty\}=\{\cM v>\alpha\lambda\}$. The lemma is proved. 
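To make the objects in the proof concrete, here is a minimal numerical sketch (ours, not from the text): it replaces the infinite-measure $\Omega$ by $[0,1)$ with Lebesgue measure and the dyadic filtration truncated to the levels $n=0,\dots,K$, and computes the averages $g_{|n}$ and the (truncated) maximal function $\cM g$.

```python
# Toy illustration (assumption: Omega = [0,1) with Lebesgue measure and
# dyadic intervals of length 2^{-n} for n = 0..K only; the paper's Omega
# has infinite measure and n ranges over all of Z).
K = 4
g = [float(i % 3) for i in range(2 ** K)]   # samples of g at the finest level

def avg_level(g, n, K):
    """g_{|n}: average of g over the level-n dyadic interval containing
    each finest-level sample point."""
    block = 2 ** (K - n)
    out = []
    for i in range(len(g)):
        j0 = (i // block) * block
        out.append(sum(g[j0:j0 + block]) / block)
    return out

levels = [avg_level([abs(x) for x in g], n, K) for n in range(K + 1)]
maximal = [max(levels[n][i] for n in range(K + 1)) for i in range(len(g))]

# M g dominates |g| pointwise (the finest-level "average" is |g| itself) ...
assert all(maximal[i] >= abs(g[i]) for i in range(len(g)))
# ... and it dominates every coarser average as well
assert all(maximal[i] >= levels[0][i] for i in range(len(g)))
```

In this toy setting the stopping time $\tau$ of the proof is just the first level $n$ at which the running average exceeds $\alpha\lambda$.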
\begin{corollary} \label{corollary 12.16.1} Under the assumptions of Lemma \ref{lemma 12.16.1} for any $p>\gamma$ we have $$ \int_{\Omega}|u|^{p}\,\mu(dx)\leq\beta \big(\int_{\Omega}(\cM v)^{p}\,\mu(dx)\big)^{1-\gamma/p} \big(\int_{\Omega}g^{p}\,\mu(dx)\big)^{ \gamma/p}, $$ where $\beta=\nu^{-1}(1-\gamma/p)^{-1}\alpha^{\gamma-p}$. \end{corollary} Indeed, $$ \int_{\Omega}|u|^{p}\,\mu(dx)= \int_{0}^{\infty}|\{x:|u(x)|\geq\lambda^{1/p}\}|\,d\lambda $$ $$ \leq\nu^{-1}\int_{\Omega}g^{\gamma}\int_{0}^{\infty} \lambda^{-\gamma/p}I_{\cM v>\alpha\lambda^{1/p}}\,d\lambda \,\mu(dx) =\beta\int_{\Omega}g^{\gamma}(\cM v)^{p-\gamma}\,\mu(dx) $$ and it only remains to use H\"older's inequality. The second statement of the following theorem for $\gamma=1$ is, actually, the Fefferman-Stein theorem. \begin{theorem} \label{theorem 12.16.1} For $\gamma\in(0,1]$ define \begin{equation} \label{12.19.4} u_{\gamma}^{\sharp}(x)=\sup_{n}\sup_{C\in\mathbb{C}_{n}:x\in C} \big( \dashint_{C} \dashint_{C}|u (z)-u (y)|^{\gamma}\, \mu(dz)\mu(dy)\big)^{1/\gamma}. \end{equation} Then for $p>\gamma$ $$ \int_{\Omega}|u|^{p}\,\mu(dx)\leq\beta \big(\int_{\Omega}(\cM u )^{p}\,\mu(dx)\big)^{1-\gamma/p} \big(\int_{\Omega}(u_{\gamma}^{\sharp})^{p}\,\mu(dx)\big)^{ \gamma/p}. $$ In particular, by the Hardy-Littlewood maximal theorem, for $p>1$ and $u\in\cL_{p}$ we have $$ \|u\|_{\cL_{p}}\leq N\|u_{\gamma}^{\sharp}\|_{\cL_{p}} , $$ where $N=\beta^{1/\gamma}q^{(p-\gamma)/\gamma}$, $q=p/(p-1)$. 
\end{theorem} This is a simple consequence of Corollary \ref{corollary 12.16.1} since, obviously, for $n\in\mathbb{Z}$, $ C\in\mathbb{C}_{n}$, and $v=u^{C}=|u|$ we have $|u|\leq u^{C}\leq v$, and for any $x\in C$ $$ \dashint_{C} \dashint_{C}|u^{C}(z)-u^{C}(y)|^{\gamma}\, \mu(dz)\mu(dy) $$ $$ \leq \dashint_{C} \dashint_{C}|u (z)-u (y)|^{\gamma}\, \mu(dz)\mu(dy) \leq(u_{\gamma}^{\sharp}(x))^{\gamma}, $$ so that $$ \dashint_{C} \dashint_{C}|u^{C}(z)-u^{C}(y)|^{\gamma}\, \mu(dz)\mu(dy) \leq\dashint_{C}(u_{\gamma}^{\sharp} )^{\gamma} \,\mu(dx). $$
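As a quick sanity check of definition \eqref{12.19.4} (a toy computation of ours, again on $[0,1)$ with only the dyadic levels $n\geq0$): for $u=I_{[0,1/2)}$ and $\gamma=1$ the only nonvanishing double average comes from $C=[0,1)$ and equals $1/2$, so $u^{\sharp}_{1}\equiv1/2$ there.

```python
# Toy computation of the sharp function of eq. (12.19.4) for u = I_[0,1/2)
# on [0,1), with the dyadic levels n = 0..K only (illustrative assumption;
# the paper's filtration runs over all n in Z).
K = 4
N = 2 ** K
u = [1.0 if i < N // 2 else 0.0 for i in range(N)]

def sharp(u, K, gamma=1.0):
    """u^sharp_gamma on the finest grid: sup over dyadic levels of the
    double average of |u(z)-u(y)|^gamma, raised to 1/gamma."""
    n_pts = len(u)
    out = [0.0] * n_pts
    for n in range(K + 1):
        block = 2 ** (K - n)
        for start in range(0, n_pts, block):
            C = u[start:start + block]
            d = sum(abs(z - y) ** gamma for z in C for y in C) / block ** 2
            val = d ** (1.0 / gamma)
            for i in range(start, start + block):
                out[i] = max(out[i], val)
    return out

s = sharp(u, K)
# On C = [0,1) the double average of |u(z)-u(y)| is 2*(1/2)*(1/2) = 1/2;
# on all smaller dyadic intervals u is constant, so the average vanishes.
assert all(abs(v - 0.5) < 1e-12 for v in s)
```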
\section{Introduction} Geodesic motion in a Schwarzschild field is one of the introductory exercises in general relativity. The motion being completely integrable and planar (usually laid at $\theta=\pi/2$), its exact solution is mostly expressed as a $\phi(r)$ dependence. However, this involves elliptic integrals \citep{Hagihara-31,Darwin-59,MielnikP-62,Chandrasekhar-83} and may be quite uncomfortable if one needs to invert the result for $r(\phi)$ or for some parameter (usually the impact parameter). The solution for the radius (or its reciprocal) as a function of angle was only given later \citep{Rodriguez-87,KraniotisW-03,HackmannL-08,Scharf-11,Kostic-12,GibbonsV-12} using elliptic functions. In particular, null geodesics (photon world-lines) have been treated notably by \citet{CadezK-05} and \citet{Munoz-14}. However, a {\em simple and easily invertible} approximation of the relativistic photon trajectories does not seem to have been suggested yet. Although light bending has been treated in many places, usually the {\em total bending angle} is the aim (given by directions at the source and observer locations, or just by radial asymptotics), especially when the exercise is treated in connection with gravitational lensing --- see e.g. \cite{VirbhadraE-00}, \cite{MutkaM-02}, \cite{Amore-etal-07}, \cite{ConnellF-08}, \cite{Virbhadra-09}, or \cite{Bozza-10} (recently the discussion has been mainly focused on the effect of the cosmological constant, e.g. \citealt{BhadraBS-10}, \citealt{BiressaF-11}, and \citealt{ArakidaK-12}). Instead, we would like to reasonably approximate {\em whole trajectories}, which is of course more delicate. (An even higher level would also incorporate proper {\em timing}, which is also important, but we restrict this study to spatial trajectories.) 
Actually, one learns quickly that formulas obtained by linearization in some parameter do not reproduce the strong-field behaviour well, while, on the contrary, ``improving" this by hand tends to spoil their weak-field limit. Low-order prescriptions typically do not provide sufficient bending in the centre's vicinity and sufficiently quick straightening at larger distances, so even if they may be adjusted to have a correct pericentre radius as well as correct asymptotic directions, their overall shape is often far from satisfactory. Below, we first recall the basic equations and fix the parameterization of the problem (section \ref{Schwarzschild}). In section \ref{approximation}, several simple approximations of light rays, resulting from quite different approaches, are listed and their basic properties reviewed. Their performance at different radii is then illustrated and compared numerically with that of the simple suggestion we present here (section \ref{examples}), showing that the latter is very accurate, even for trajectories with pericentres slightly below $r=4M$. Although we primarily focus on the behaviour of the approximations at finite radii from the central black hole, section \ref{asymptotic-angle} shows what answers the best of them give for the asymptotic angle along which the photons approach radial infinity. In section \ref{lensing-exercise} we illustrate the practical usage of our formula by solving the ray-deflection exercise, again with results almost identical to those obtained (purely numerically) from the exact treatment. The observations made are briefly summarized in the concluding remarks. \section{Null geodesics in the Schwarzschild space-time} \label{Schwarzschild} Mainly to fix notation, let us recall basic equations of the exact problem. 
Using Schwarzschild coordinates $(t,r,\theta,\phi)$ and geometrized units (in which $c=1$, $G=1$), we consider metric in the standard form \[{\rm d}s^2=-N^2{\rm d}t^2+\frac{{\rm d}r^2}{N^2}+r^2({\rm d}\theta^2+\sin^2\theta\,{\rm d}\phi^2)\] with $N^2:=1-2M/r$. Geodesic motion in the spherically symmetric field being planar, let us choose the orbital plane to be the equatorial one ($\theta=\pi/2$). In such a case, the photon four-momentum has non-zero components \begin{equation} \label{photon-momentum} p^t=\frac{E}{N^2} \,, \quad p^r=\epsilon^r\,\frac{E}{r}\,\sqrt{r^2-N^2 b^2} \;, \quad p^\phi=\frac{L}{r^2} \,, \end{equation} where $E:=-p_t$, $L:=p_\phi$ and $b:=|L|/E$ are the photon's energy, angular momentum and impact parameter, respectively, all remaining conserved along the ray; the sign $\epsilon^r\equiv\pm 1$ fixes the orientation of radial motion (while that of azimuthal motion is determined by the sign of $L$). Let us focus on photons with $b>3\sqrt{3}\,M$ which have a (one) turning point of radial motion, either pericentre (which is always above $r=3M$) or apocentre (which is always below $r=3M$).\footnote {Photons with $b<3\sqrt{3}\,M$ move from infinity to the centre or vice versa without any radial turning.} Let us then adjust, without loss of generality, the coordinates so that a given photon reaches this turning point at $\phi=0$. The ray thus gets symmetric with respect to the meridional plane $\{\phi=0,\pi\}$ and it is sufficient to only consider its half starting from that plane. The vanishing of radial momentum at $\phi=0$ constrains the constants of motion by the condition \begin{equation} \label{EL-constraint} r_0^2-N_0^2 b^2=0 \quad \Longrightarrow \quad b=\frac{r_0}{\sqrt{1-\frac{2M}{r_0}}} \;, \end{equation} where $r_0:=r(\phi=0)$ indicates the radius at the turning point of radial motion and $N_0:=N(r=r_0)$. 
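The components \eqref{photon-momentum} satisfy the null condition $g_{\mu\nu}p^{\mu}p^{\nu}=0$ by construction; a small numerical sketch (ours; the values of $M$, $E$, $r_0$ are chosen purely for illustration) confirms this directly:

```python
import math

M = 1.0            # geometrized units (c = G = 1)
E = 1.0
r0 = 8.0           # turning-point radius; illustrative value only
b = r0 / math.sqrt(1.0 - 2.0 * M / r0)    # eq. (EL-constraint)
L = b * E          # we take L > 0

def null_residual(r, eps_r=+1):
    """g_{mu nu} p^mu p^nu for the components (photon-momentum);
    it must vanish for a photon."""
    N2 = 1.0 - 2.0 * M / r
    pt = E / N2
    # clamp tiny negative round-off right at the turning point r = r0
    pr = eps_r * (E / r) * math.sqrt(max(0.0, r * r - N2 * b * b))
    pphi = L / (r * r)
    return -N2 * pt**2 + pr**2 / N2 + r**2 * pphi**2

for r in (r0, 10.0, 30.0, 1000.0):
    assert abs(null_residual(r, +1)) < 1e-9
    assert abs(null_residual(r, -1)) < 1e-9
```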
Equations for $p^\phi$ and $p^r$ give \begin{equation} \label{dphi/dr} \frac{{\rm d}\phi}{{\rm d}r} =\frac{\epsilon^r}{r}\,\frac{1}{\sqrt{\frac{r^2}{b^2}-N^2}} \end{equation} which can further be expressed in terms of the extremal radius $r_0$, \begin{align} \label{dphi/dr,r0} \frac{{\rm d}\phi}{{\rm d}r} &= \frac{\epsilon^r\,\frac{r_0}{r}} {\sqrt{N_0^2 r^2-N^2 r_0^2}} \nonumber \\ &= \frac{\epsilon^r\,r_0^{3/2}} {\sqrt{r(r-r_0)}\; \sqrt{r(r+r_0)(r_0-2M)-2Mr_0^2}} \;. \end{align} This second form is more complicated, but it turns out to be much more suitable for integration. Assuming, without loss of generality, that we focus on the half of the trajectory which starts toward positive $\phi$ from $\phi=0$ (briefly, we assume $L>0$), the integration gives (see \citet{Darwin-59} or \citet{Chandrasekhar-83}, formula (260) in Chapter 3) \begin{align} \phi(r) &= \frac{2\,\sqrt{r_0}}{[(r_0-2M)(r_0+6M)]^{1/4}} \left[K(k)-F(\chi,k)\right] \label{phi(r)} \\ &= \frac{2\,\sqrt{r_0}}{[(r_0-2M)(r_0+6M)]^{1/4}}\; F(\chi',k) \,, \label{phi(r)'} \end{align} where $F(\chi,k):=\int_0^\chi\frac{{\rm d}\alpha}{\sqrt{1-k^2\sin^2\alpha}}$ is the elliptic integral of the 1st kind, with amplitude $\chi$ and modulus $k$ given by \begin{align} \sin^2\chi &:= 1-\frac{1}{k^2}\, \frac{2M\left(1-\frac{r_0}{r}\right)}{\sqrt{(r_0-2M)(r_0+6M)}} \;, \\ 2k^2 &:= 1-\frac{r_0-6M}{\sqrt{(r_0-2M)(r_0+6M)}} \;, \label{2k2} \end{align} and $K(k):=F(\pi/2,k)$ is its complete version. One can check immediately that $F(\chi,k)$ only reduces to $K(k)$ at the turning point, where $r=r_0$ and so $\chi=\pi/2$, which correctly yields $\phi(r\!=\!r_0)=0$. The second expression (\ref{phi(r)'}) contains a different amplitude $\chi'$ which is related to $\chi$ by \begin{align} \sin^2\chi' &= \frac{1-\sin^2\chi}{1-k^2\sin^2\chi} \nonumber \\ &= \frac{4Mk^{-2}\left(1-\frac{r_0}{r}\right)} {\sqrt{(r_0-2M)(r_0+6M)}+r_0-2M-4M\frac{r_0}{r}} \end{align} (while $k$ remains unprimed in both expressions). 
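Formula (\ref{phi(r)}) is straightforward to evaluate numerically. The following sketch (ours, not from the above references) computes $F(\chi,k)$ by simple Simpson quadrature and cross-checks the result against direct integration of (\ref{dphi/dr,r0}), where the substitution $r=r_0+t^2$ removes the integrable pericentre singularity; the values $M=1$, $r_0=8M$ are illustrative only.

```python
import math

def ellip_F(chi, k, n=2000):
    """Incomplete elliptic integral F(chi,k) by composite Simpson's rule
    (n must be even)."""
    h = chi / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        s += w / math.sqrt(1.0 - (k * math.sin(i * h)) ** 2)
    return s * h / 3.0

def phi_darwin(r, r0, M=1.0):
    """phi(r) from eq. (phi(r)) for the half-ray starting at pericentre r0."""
    root = math.sqrt((r0 - 2*M) * (r0 + 6*M))
    k2 = 0.5 * (1.0 - (r0 - 6*M) / root)            # eq. (2k2)
    k = math.sqrt(k2)
    sin2chi = 1.0 - 2*M * (1.0 - r0 / r) / (k2 * root)
    chi = math.asin(math.sqrt(sin2chi))
    pref = 2.0 * math.sqrt(r0) / math.sqrt(root)    # = 2 sqrt(r0)/[...]^{1/4}
    return pref * (ellip_F(math.pi / 2, k) - ellip_F(chi, k))

def phi_direct(r, r0, M=1.0, n=4000):
    """Direct Simpson quadrature of eq. (dphi/dr,r0); the substitution
    r = r0 + t^2 removes the 1/sqrt(r-r0) singularity at the pericentre."""
    h = math.sqrt(r - r0) / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        rr = r0 + t * t
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        s += w * 2.0 * r0**1.5 / (math.sqrt(rr) *
             math.sqrt(rr * (rr + r0) * (r0 - 2*M) - 2*M * r0**2))
    return s * h / 3.0

# the two evaluations agree, and phi(r0) = 0 as it should
assert abs(phi_darwin(8.0, 8.0)) < 1e-12
assert abs(phi_darwin(30.0, 8.0) - phi_direct(30.0, 8.0)) < 1e-6
```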
We add that the complementary modulus $k'$, which is related to $k$ by $k'^2=1-k^2$, is given by the same expression (\ref{2k2}) as $k$, just with {\em plus} after 1; their product is therefore quite short, \begin{equation} k^2 k'^2=\frac{4M\,(r_0-3M)}{(r_0-2M)(r_0+6M)} \;. \end{equation} With our parametrization, the azimuth $\phi$ of a photon increases monotonically from zero. For a photon starting (from $\phi=0$) from $r_0\simeq 3M$ the azimuth can finally reach very large values; with $r_0$ growing from $3M$ to infinity the asymptotic azimuth decreases from infinity to $\pi/2$, while with $r_0$ shrinking from $r=3M$ toward $2M$ the photon falls through the horizon at an azimuth $\phi$ which quickly decreases from infinity toward zero. \section{Approximating the light rays} \label{approximation} Darwin's exact formula (\ref{phi(r)}) is quite simple, yet it is not easy to invert it for $r(\phi)$ and mainly for $r_0$ (which would in turn yield the constants of the motion as functions of $r$ and $\phi$). Such a problem is typically encountered when asking ``What are the parameters of the light/photons that arrive at a given location from some (which?) points of a given source?" (in the Schwarzschild field). Namely, the source generally emits photons with various different parameters (from different starting points, with different energies, in different directions, etc.), each of which follows a different world-line. When studying some radiation-influenced process at a given location outside the source, it is first necessary to find {\em which} of the photons get to that location (which means finding from where in the source they started) and then to infer what their properties are (in order to be able to say what their effect will be). 
An important system that raises such questions is an accretion disc around a black hole, because i) its inner part is a powerful source of radiation which strongly affects the surrounding matter (on the other hand, the disc can be significantly irradiated by a hot ``corona", and possibly even self-irradiated), and ii) the inner part of the disc lies in a very strong gravitational field where the propagation of light is not trivial (linear). To give a specific example, we encountered the need to invert Darwin's formula for $r_0$ when studying, in Schwarzschild space-time, the motion of test particles influenced by a radiation field emitted from the equatorial plane in a perpendicular direction (such a pattern was considered to approximate the radiation generated by an equatorial thin disc) --- see \cite{BiniGJS-15}. In such an arrangement, one needs to find the impact parameters of the (two) ``vertical" photons interacting with the particle at a given location, which precisely requires one to find the $r_0$'s from where those photons started. Therefore, it is worthwhile to look for an approximation of photon trajectories which would be simple, invertible for $r_0$, yet reasonably accurate. In particular, it should work well for as small a radius ($r_0$) as possible, because in real astrophysical situations there is often much radiation quite close to the horizon. This is natural since matter flows into these regions with extremely high speeds, so collisions of its streams dissipate huge amounts of energy which then flow out intensively as radiation. In particular, real accretion discs are supposed to radiate most from their innermost regions, which probably reach down to the innermost stable circular geodesic at $r=6M$ or even below. A natural first attempt to simplify the inversion problem is to linearize the exact formula in some small parameter. We have either $2M<r\leq r_0<3M$ or $r\geq r_0>3M$, with $b>r_0$ anyway. 
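Indeed, $b=r_0(1-2M/r_0)^{-1/2}$ of equation (\ref{EL-constraint}) satisfies $b>r_0$ for every admissible turning point and attains its minimum $b=3\sqrt{3}\,M$ at the circular photon orbit $r_0=3M$; a quick numerical confirmation (an illustrative sketch of ours):

```python
import math

M = 1.0   # geometrized units

def impact(r0):
    """Impact parameter b(r0) of eq. (EL-constraint)."""
    return r0 / math.sqrt(1.0 - 2.0 * M / r0)

# b exceeds r0 for every admissible turning point ...
for r0 in (2.5, 3.0, 5.0, 8.0, 50.0):
    assert impact(r0) > r0
# ... and its minimum 3*sqrt(3)*M sits at the photon circular orbit r0 = 3M
assert abs(impact(3.0) - 3.0 * math.sqrt(3.0) * M) < 1e-12
assert impact(2.999) > impact(3.0) and impact(3.001) > impact(3.0)
```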
Clearly the $r_0>3M$ case is more astrophysically interesting. Then the relation between $r$ and $b$ changes in time: the photon starts from $r\equiv r_0<b$, but quickly gets to $r>b$ (and then even to $r\gg b$). Surely $M$ is the smallest of all the parameters, hence the usual choice of linearizing in it. However, the weak-field approximations --- like the one obtained by linearization in $M$ --- generally yield trajectories ``less bent about" the central gravitating body, since the centre's field is effectively weakened. If such an approximation is employed for the inversion of the $\phi(r;r_0)$ formula for $r_0$, it may lead to errors when applied to strongly bent rays. Actually: adopt our parametrization, i.e., adjust the plane $\phi=0$ so that the ray crosses it (``starts from it") in a perpendicular direction (it is purely azimuthal there). Now, imagine a photon approaching $\phi=\pi$ (from $\phi<\pi$) while having a small radius: it must have started (purely tangentially) from $\phi=0$ from a {\em very small} radius $r_0$ in order to have been bent about the centre sufficiently. For such photons, weak-field approximations may easily yield a starting radius $r_0$ even lying {\em below the horizon} (note that there is actually no horizon in most such approximations), which may then bring errors if substituted into the Schwarzschild-metric lapse function $N\equiv\sqrt{1-2M/r}$. One could surely improve the approximation by taking into account higher order(s) of the small-parameter expansion, but higher-order formulas mostly can{\em not} be inverted easily. Restricting oneself to low-order formulas, one can resort to some more ``pragmatic" construct instead. First, ``pseudo-Newtonian" descriptions are often employed in the astrophysical literature, based on the Newtonian potential suitably modified to mimic certain features of the actual relativistic field. 
Second, in our particular problem, the scattering-type light trajectories can be approximated by a hyperbola adjusted to the desired asymptotic directions. And third, one can try to design a suitable formula ``by hand", simply observing the main properties it should reproduce. However, all such ad hoc formulas, not relying on any clearly justified procedure, must be handled with care; in particular, even if they were successful close to the black hole (or rather {\em the more so if} they were successful there), their weak-field (large-radius) behaviour may not be satisfactory. They can also hardly yield trustworthy answers for tiny effects (velocity-dependent/dragging, non-stationary, radiation-related etc.), but in the simple case of static space-times, they have mostly proved quite practical. Let us compare the above possibilities, including mainly several successful suggestions from the literature. \begin{figure*} \epsscale{.85} \plotone{fig1.eps} \caption {Azimuthal description of a photon trajectory used by Beloborodov \citep{Beloborodov-02} ($\psi$, {\it left}) and by us in this note ($\phi$, {\em right}); $\beta$ is the total deflection angle. Axes $r\cos\phi$, $r\sin\phi$ are also shown (with values in units of $M$), the black hole is gray.} \label{ray-description} \end{figure*} \subsection{Linearization of Darwin's formula in $M$} \label{linearization-in-M} Darwin's formula (\ref{phi(r)}) can be linearized in $M$ to obtain \begin{equation} \label{Darwin,lin} \cos\phi= \frac{r_0}{r}-\frac{M}{r_0}\,\frac{(r-r_0)(2r+r_0)}{r^2} +O(M^2) \,. \end{equation} Clearly $\phi=0$ correctly corresponds to $r=r_0$, while for $r\rightarrow\infty$ one has the asymptotics $\cos\phi_\infty=-2M/r_0$, which is always $>-1$ (so $\phi_\infty<\pi$, which means that the ray's half-bending is less than $90^\circ$). The linear part can easily be inverted for $r_0$.
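Because (\ref{Darwin,lin}) is linear in $M$, this inversion is explicit: multiplying through by $r_0 r^2$ turns it into a quadratic in $r_0$, $(r+M)\,r_0^2+(Mr-r^2\cos\phi)\,r_0-2Mr^2=0$, whose positive root gives the pericentre. A minimal numerical sketch (ours; function names and sample values are purely illustrative):

```python
import math

def cos_phi_linear(r, r0, M=1.0):
    """Darwin's formula linearized in M, eq. (Darwin,lin)."""
    return r0 / r - (M / r0) * (r - r0) * (2 * r + r0) / r**2

def r0_from_linear(r, cos_phi, M=1.0):
    """Invert the linear formula: it is quadratic in r0,
    (r + M) r0^2 + (M r - r^2 cos_phi) r0 - 2 M r^2 = 0."""
    a = r + M
    b = M * r - r**2 * cos_phi
    c = -2.0 * M * r**2
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# round trip: a ray with pericentre r0 = 10M, sampled at r = 20M
c = cos_phi_linear(20.0, 10.0)
print(c, r0_from_linear(20.0, c))   # ~0.375, ~10.0
```

(The product of the two roots is negative, so one root is always negative and the $+$ sign of the square root picks the physical one.)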
\subsection{First $M$-orders from the Binet formula} Although the treatments on lensing mostly aim at the total deflection angle, some of them also provide a prescription for the photon trajectory. Both are usually obtained by perturbative solution of the well-known Binet formula which in the null case reads \begin{equation} \frac{{\rm d}^2 u}{{\rm d}\phi^2}+u=3Mu^2, \qquad {\rm where} \quad u:=\frac{1}{r} \,. \end{equation} Let us mention three of them. \cite{BiressaF-11} solved the equation up to linear order by\footnote {Papers on lensing usually adjust the azimuth so that the pericentre lies at $\phi=\pi/2$, so we must shift their formulas by $\pi/2$.} \begin{align} \frac{r_0}{r} &= \cos\phi+\frac{M}{r_0}\,(1+\sin^2\phi-\cos\phi) \nonumber \\ &= \cos\phi+\frac{M}{r_0}\,(2+\cos\phi)(1-\cos\phi) \label{BiressaF,lin} \end{align} in their equation (14). Clearly $r(\phi=0)=r_0$ as it should be, and solving for $\cos\phi$ yields (\ref{Darwin,lin}) in linear order. A slightly different linear-order solution was given by \cite{BhadraBS-10} (their equation (5)), \begin{equation} \frac{R}{r}=\cos\phi+\frac{M}{R}\,(1+\sin^2\phi), \end{equation} where $R$ is a length whose meaning follows by setting $\phi=0$ (and $r=r_0$): \begin{equation} \frac{R}{r_0}=1+\frac{M}{R} \,. \end{equation} Solving this for $R$, substituting above and linearizing in $M$, one has \begin{equation} \label{BhadraBS,lin} \frac{r_0+M}{r}=\cos\phi+\frac{M}{r_0}\,(1+\sin^2\phi). \end{equation} Although this differs from (\ref{BiressaF,lin}), its solution for $\cos\phi$ again agrees with (\ref{Darwin,lin}) in linear order. Finally, let us turn to \cite{ArakidaK-12}, who presented a second-order solution in their equation (7), \begin{align} \frac{b}{r} =\cos\phi &+\frac{M}{b}\,(1+\sin^2\phi) \nonumber \\ &+\frac{M^2}{4b^2}\,(7\cos\phi+15\,\phi\,\sin\phi+3\cos^3\phi) \,. 
\end{align} The corresponding equation expressed in terms of $r_0$ follows by substituting (\ref{EL-constraint}) for $b$ and expanding in $M$ accordingly, \begin{align} & \frac{r_0+M}{r}+\frac{3M^2}{2r_0 r} =\cos\phi+\frac{M}{r_0}\,(1+\sin^2\phi) \nonumber \\ & +\frac{M^2}{4r_0^2}\,(7\cos\phi+15\,\phi\,\sin\phi+3\cos^3\phi-4-4\sin^2\phi) \,. \label{ArakidaK,lin} \end{align} In linear order this reduces to (\ref{BhadraBS,lin}) and its $\phi=0$ form \[\frac{r_0+M}{r}+\frac{3M^2}{2r_0 r}=1+\frac{M}{r_0}+\frac{3M^2}{2r_0^2}\] properly yields $r=r_0$. Inversion for $b$ or $r_0$ is of course more awkward if the second order in $M$ is kept. \subsection{Using a suitable pseudo-Newtonian potential} \label{pseudo} In Newton's theory, the motion of test particles in the velocity-independent spherical potential $V(r)$ is also confined to a plane (which we again identify with $\theta=\pi/2$) and described by \begin{equation} \label{ddot:r,phi} \ddot{r}=-V_{,r}+r\dot{\phi}^2, \qquad r\ddot{\phi}=-2\dot{\phi}\,\dot{r} \,. \end{equation} These equations have the usual integrals of energy and angular momentum \begin{equation} E=\frac{m}{2}\,(\dot{r}^2+r^2\dot{\phi}^2)+mV, \qquad L=m r^2\dot{\phi} \,, \end{equation} which can be inverted for the velocities as \begin{equation} \label{velocities} r\dot{\phi}=\frac{L}{mr} \,, \qquad \dot{r}^2=\frac{2mr^2(E-mV)-L^2}{m^2 r^2} \;. \end{equation} The ratio of the velocities gives an equivalent of the relativistic equation (\ref{dphi/dr}), \begin{equation} \label{dphi/dr,pseudo} \frac{{\rm d}\phi}{{\rm d}r}= \frac{\epsilon^r}{r}\left[\frac{2mr^2}{L^2}(E-mV)-1\right]^{-1/2}, \end{equation} where $\epsilon^r=+1$ will again be chosen below. Should the trajectory be strictly tangential (azimuthal) at $\phi=0$ ($\dot{r}=0$ at $r=r_0$), it has to satisfy the condition \begin{equation} 2mr_0^2(E-mV_0)=L^2 \,, \quad {\rm where} \;\; V_0:=V(r=r_0). 
\end{equation} Besides the overall caution required in following the pseudo-Newtonian approach, it is also necessary to distinguish between various potentials proposed in the literature, because different ones are suitable for different purposes (see e.g. \citealt{Crispino-etal-11}). The Paczy\'nski-Wiita potential $V=-M/(r-2M)$ is the simplest and most efficient mimicker of the Schwarzschild field; its main advantage is that it yields correct values of the important circular geodesics, so it is mainly suitable for modeling accretion discs. However, if plugged into the above, the resulting equation for ${\rm d}\phi/{\rm d}r$ is not easier to integrate than its relativistic counterpart. The same also applies to the Nowak-Wagoner quadratic-expansion potential \[V=-\frac{M}{r}\left(1-\frac{3M}{r}+\frac{12M^2}{r^2}\right)\] which has often proved the best of the ``simple" suggestions.\footnote {A more advanced yet elegant possibility, namely a potential suitably dependent on velocity, was suggested by \citet{TejedaR-13}. It would be worth checking whether it is also useful for null geodesics.} It is clear from equation (\ref{dphi/dr,pseudo}) that for its integration to be simpler than that of the relativistic case (\ref{dphi/dr}), the resulting polynomial under the square root has to be exactly quadratic in $r$. This is the case if the potential is of the form \begin{equation} \label{potential-Wegg} V=-\frac{M}{r}\left(1+\frac{\alpha M}{r}\right), \end{equation} where $\alpha$ is some constant. Such a form has been advocated by \citet{Wegg-12}, specifically with $\alpha=3$.
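The quadratic-radicand property is easy to check numerically (a sketch of ours; the sample values of $E$, $L$, $m$ and the sampling radii are arbitrary): with the potential (\ref{potential-Wegg}), the bracket $2mr^2(E-mV)/L^2-1$ in (\ref{dphi/dr,pseudo}) is an exact quadratic polynomial in $r$, so its third finite differences vanish identically.

```python
def radicand(r, M=1.0, alpha=3.0, E=0.4, L=5.0, m=1.0):
    """Bracket under the square root in eq. (dphi/dr,pseudo)
    for the potential V = -(M/r)(1 + alpha*M/r)."""
    V = -(M / r) * (1.0 + alpha * M / r)
    return 2.0 * m * r**2 / L**2 * (E - m * V) - 1.0

# third finite differences of an exact quadratic are zero:
h = 0.5
vals = [radicand(5.0 + i * h) for i in range(5)]
third = [vals[i+3] - 3*vals[i+2] + 3*vals[i+1] - vals[i] for i in range(2)]
print(third)   # both entries ~0 (up to rounding)
```

Indeed, the $1/r$ and $1/r^2$ terms of the potential are exactly cancelled by the $r^2$ prefactor, leaving $(2mE/L^2)\,r^2+(2m^2M/L^2)\,r+(2m^2\alpha M^2/L^2-1)$.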
Equation (\ref{dphi/dr,pseudo}) supplied with the initial condition $\phi_0:=\phi(\dot{r}=0)=0$ is then (with the above form of the potential) solved by \begin{equation} \phi(r)= \frac{1}{k}\; {\rm arccot}\frac{k-\frac{m^2 Mr}{kL^2}} {\sqrt{\frac{2mr^2}{L^2}(E-mV)-1}} \;, \end{equation} where we introduced the dimensionless constant \[k^2:=1-\frac{2m^2 \alpha M^2}{L^2} \;.\] The pseudo-Newtonian picture is doubly problematic when describing photons; in particular, their speed cannot be considered constant. However, choosing their {\em initial} linear speed to be equal to one, which means $r_0\dot\phi_0=1$ in our case when photons start in a pure azimuthal direction, one has \[E=\frac{m}{2}+mV_0, \quad L=m r_0, \quad k^2=1-\frac{2\alpha M^2}{r_0^2} \,,\] so\footnote {Note that the photon has enough energy to escape to infinity, $E>0$, if $V_0>-1/2$, which holds for $r_0\!>\!(1\!+\!\sqrt{1+2\alpha})M$; for $\alpha\!=\!3$ this means $r_0>3.65M$, which is not so bad.} the equatorial trajectory can be rewritten as \begin{align} \phi(r) &= \frac{r_0}{\sqrt{r_0^2-2\alpha M^2}} \,\times \nonumber \\ & \quad \times \; {\rm arccot} \frac{r_0^2-2\alpha M^2-Mr} {\sqrt{r_0^2-2\alpha M^2}\;\sqrt{r^2-r_0^2+2r^2(V_0-V)}} \;. \label{Wegg-trajectory} \end{align} Clearly $\phi(r=r_0)=0$ correctly and the asymptotic value at radial infinity amounts to \begin{align} \phi_\infty &= \frac{r_0}{\sqrt{r_0^2-2\alpha M^2}} \,\times \nonumber \\ & \quad \times \left(\pi - {\rm arccot}\frac{M}{\sqrt{r_0^2-2\alpha M^2}\;\sqrt{1+2V_0}}\right). \label{Wegg,phi_infty} \end{align} When speaking about asymptotics, it should be stressed that there exist photons which remain bound on ``elliptic-type" orbits. Actually, recalling equations (\ref{velocities}), we see that $\dot{r}=0$ if $2mr^2(E-mV)=L^2$, which after substitution of our $E=m/2+mV_0$, $L=m r_0$ implies $r^2-r_0^2=2r^2(V-V_0)$, i.e., with $\alpha=3$ and expanded, \begin{equation} (r-r_0)\left[(r_0^2-6M^2)(r+r_0)-2Mrr_0\right]=0. 
\end{equation} Besides the automatic zero at $r=r_0$, this also has a second root at \[r=-r_0\,\frac{r_0^2-6M^2}{r_0^2-2Mr_0-6M^2} \;.\] However, this root is only relevant in a narrow interval $r_0\in(2.8922,3.6458)M$ (it grows from $2M$ to infinity very fast within this range). There is one case of particular interest within the above range of $r_0$: using $L\equiv m r^2\dot{\phi}=m r_0$ back in the first of equations (\ref{ddot:r,phi}), we have \begin{equation} r^3\ddot{r}=-r^3 V_{,r}+r_0^2 =r_0^2-Mr-2\alpha M^2 \,, \end{equation} which specifically for $\alpha=3$ reads \begin{align*} r^3\ddot{r} &= r_0^2-Mr_0-6M^2+M(r_0-r) \\ &= (r_0-3M)(r_0+2M)+M(r_0-r) \end{align*} and thus implies that $\ddot{r}=0$ at $r=r_0=3M$. Therefore, the potential (\ref{potential-Wegg}) with $\alpha=3$ correctly reproduces the Schwarzschild circular photon geodesic (which indicates that it may be successful in simulating photon motion in the innermost region). \subsection{Approximation by a hyperbola} \label{hyperbola-approximation} Another possibility is to approximate the photon trajectory by a suitable hyperbola. Placing its focus at $r=0$ and its vertex at ($\phi=0$, $r=r_0$), and prescribing some asymptotic azimuth $\phi_\infty$, it is given by the equation \begin{equation} \label{hyperbola} r\cos\phi = r_0+(r-r_0)\,\cos\phi_\infty \,. \end{equation} Now $\phi_\infty$ can be prescribed somehow, for example chosen according to the exact formula. However, since the latter makes the above expression uncomfortably long, let us instead illustrate it with $\phi_\infty$ provided by the Beloborodov formula (see next subsection), i.e. with $\cos\phi_\infty=-\frac{2M}{r_0-2M}$ (which proved quite accurate): \begin{equation} r\cos\phi = r_0-2M\,\frac{r-r_0}{r_0-2M} \,. 
\end{equation} The limiting possibility is to choose $\phi_\infty=\pi$, which yields the parabola \[r\cos\phi = 2r_0-r.\] \subsection{Beloborodov's approximation} A simple approximation of photon trajectories has been provided by \citet{Beloborodov-02} in his formula (3), \begin{align} r(\psi)&= \sqrt{M^2\,\frac{(1-\cos\psi)^2}{(1+\cos\psi)^2}+\frac{b^2}{\sin^2\psi}} -M\,\frac{1-\cos\psi}{1+\cos\psi} \nonumber \\ &= \sqrt{M^2\,\tan^2\frac{\psi}{2}+\frac{b^2}{\sin^2\psi}} -M\,\tan\frac{\psi}{2} \;, \end{align} where the position angle $\psi$ is measured (from the centre) so that $\psi=0$ fixes the asymptotic escape direction (with the whole orbit having $\psi>0$, so the photon is considered to move {\em against} the $\psi$ orientation). The value of $\psi$ at pericentre ($\psi_0$) is determined by \begin{equation} \frac{{\rm d}r}{{\rm d}\psi}=0 \quad \Longrightarrow \quad \cos\psi=-\frac{2M}{r_0-2M}=:\cos\psi_0 \end{equation} and lies between $\pi/2$ and $\pi$. The angle $\beta=2\psi_0-\pi$ represents the total bending angle; $2\psi_0$ is the asymptotic {\em ingoing} direction if the trajectory is extended to infinity in both directions (see figure \ref{ray-description}). Beloborodov derived the above result from the elegant equation \begin{equation} \label{alpha-r,psi} 1-\cos\alpha=(1-\cos\psi)\,N^2 \end{equation} which quite accurately approximates the relation between the photon's momentary radius $r$, position angle $\psi$ and the local direction of flight measured by the angular deflection from the radial direction in a local static frame, $\alpha$. (How to understand the success of this ``cosine relation" is explained in section 2 of \citealt{Beloborodov-02}.)
The local direction $\alpha$ is given by the photon momentum, \begin{align} \tan\alpha &= \frac{\sqrt{g_{\phi\phi}}\,|p^\phi|}{\sqrt{g_{rr}}\,|p^r|} = \frac{bN}{\sqrt{r^2-b^2 N^2}} \nonumber \\ &\Rightarrow \quad \sin\alpha=\frac{bN}{r} \,, \;\; \cos\alpha=\sqrt{1-\frac{b^2 N^2}{r^2}} \,, \end{align} so one has, by solving equation (\ref{alpha-r,psi}) for $\cos\psi$ and then substituting for $\cos\alpha$, \begin{equation} \label{cos(psi)} \cos\psi=\frac{r\cos\alpha-2M}{r-2M} =\frac{\sqrt{r^2-b^2 N^2}-2M}{r-2M} \;. \end{equation} This increases monotonically from $-\frac{2M}{r_0-2M}\equiv\cos\psi_0$ at pericentre through zero to positive values and approaches unity at $r\rightarrow\infty$. In our parametrization the photon has pericentre at $\phi=0$ and moves in a positive $\phi$ direction (to some asymptotic $\phi_\infty\equiv\psi_0$), and hence the angles are related by $\psi=\psi_0-\phi=\phi_\infty-\phi$ (see figure \ref{ray-description}), which means \begin{align} \cos\psi&= \cos(\psi_0-\phi) = \cos\psi_0\cos\phi+\sin\psi_0\sin\phi \nonumber \\ \quad &= -\frac{2M}{r_0-2M}\,\cos\phi +\frac{\sqrt{r_0(r_0-4M)}}{r_0-2M}\,\sin\phi \,. \label{cos(psi),phi} \end{align} Hence, equating (\ref{cos(psi),phi}) with (\ref{cos(psi)}) gives the implicit relation \begin{equation} \label{Beloborodov-approx} \frac{N_0^2}{N^2}\left(\!\sqrt{1\!-\!\frac{b^2 N^2}{r^2}}-\frac{2M}{r}\!\right) =\sqrt{1\!-\!\frac{4M}{r_0}}\,\sin\phi-\frac{2M}{r_0}\cos\phi \end{equation} for the $(r,\phi)$ trajectory. Beloborodov's formula is only applicable at $r_0>4M$ (it yields $\phi_\infty=\pi$ for $r_0=4M$), but it really provides a very good approximation almost all the way down to there. It is less suitable for finding $r_0$ as a function of $r$ and $\phi$, because the above relation yields for $r_0$ an equation of the 16th (or at least the 8th) degree.
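The resulting trajectory can be traced in a few lines (a sketch of ours). We assume the standard Schwarzschild pericentre relation $b=r_0/\sqrt{1-2M/r_0}$ for the impact parameter (an assumption of this sketch); $\phi$ then follows as $\psi_0-\psi$ with $\cos\psi$ taken from (\ref{cos(psi)}):

```python
import math

def belo_phi(r, r0, M=1.0):
    """phi(r) along a ray with pericentre r0 > 4M, from Beloborodov's
    cosine relation; b = r0/sqrt(1 - 2M/r0) is assumed (pericentre
    value of the impact parameter)."""
    b = r0 / math.sqrt(1.0 - 2.0 * M / r0)
    N2 = 1.0 - 2.0 * M / r                       # lapse squared
    cos_psi = (math.sqrt(max(r * r - b * b * N2, 0.0)) - 2.0 * M) / (r - 2.0 * M)
    psi0 = math.acos(-2.0 * M / (r0 - 2.0 * M))  # pericentre position angle
    return psi0 - math.acos(cos_psi)             # phi = psi0 - psi

# a ray starting tangentially from r0 = 6M:
print(belo_phi(6.0, 6.0))    # ~0 (pericentre)
print(belo_phi(1e8, 6.0))    # approaches phi_inf = psi0 = acos(-1/2)
```

The `max(..., 0.0)` guard only absorbs rounding at the pericentre, where $r^2-b^2N^2$ vanishes exactly.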
\subsection{New suggestion} \label{new-suggestion} The main purpose of this note is to suggest and test another ray-approximating formula, \begin{equation} \label{our-approximation} \cos\phi=\frac{r_0}{r}- \frac{M}{r_0-\alpha M}\, \frac{(r-r_0)(2r+r_0)}{(r-\omega M)^2} \;, \end{equation} where $\alpha$ and $\omega$ are real constants. It correctly gives $\cos\phi=1$ at $r=r_0$, its radial asymptotics reads \begin{equation} \label{our-approximation,infty} \cos\phi_\infty=-\frac{2M}{r_0-\alpha M} \end{equation} and it can be inverted for $r_0$ as \begin{equation} \label{our-approximation,r0} r_0=\frac{{\cal R} +\sqrt{{\cal R}^2+4Mr{\cal A}{\cal B}}}{2{\cal A}} \;, \end{equation} where \begin{align} {\cal R} &\equiv (r-\omega M)^2 (r\cos\phi+\alpha M)-Mr^2 \,, \nonumber \\ {\cal A} &\equiv (r-\omega M)^2+Mr, \nonumber \\ {\cal B} &\equiv 2r^2-(r-\omega M)^2\alpha\cos\phi \,. \nonumber \end{align} The formula works reasonably well within a certain range of the parameters $\alpha$ and $\omega$, and it is hard to say which particular combination is the best, because accuracy at small radii favours somewhat different values than accuracy farther away. We will specifically show that very good results are obtained with $\alpha=1.77$ and $\omega=1.45$, for example. As in the case of Wegg's pseudo-Newtonian potential, ``elliptic-type" bound orbits do exist. Since \[\frac{{\rm d}r}{{\rm d}\phi}=-\frac{\sin\phi}{{\rm d}(\cos\phi)/{\rm d}r} \;,\] this would require either $\sin\phi=0$, or ${\rm d}(\cos\phi)/{\rm d}r\rightarrow\infty$. The latter would only hold for $r_0=\alpha M$ or $r=\omega M$, neither of which applies with our choice of $\alpha$ and $\omega$ (namely, we always have $r_0>\alpha M$ and $r>\omega M$). Hence, purely tangential motion can only happen at $\sin\phi=0$. This holds automatically at $r=r_0$ (where $\cos\phi=+1$), and it can also hold on the opposite side of the centre, at $\cos\phi=-1$.
Solving this equation for $r$, one finds that for our constants $\alpha=1.77$, $\omega=1.45$ the solution grows very quickly from $2.025M$ to infinity as $r_0$ increases from $2M$ to $(2+\alpha)M$ (this last value holds generally and does not depend on $\omega$). In other words, all photons launched from $r_0>(2+\alpha)M$ escape to infinity and have $\phi_\infty<\pi$ there, consistent with the asymptotic formula (\ref{our-approximation,infty}). \subsection{Comparison of expansions in $M$} \label{expansions} Before embarking on a numerical comparison of the approximations with the exact formula, let us further check their algebraic properties by expanding them in powers of $M$. To be more specific, let us thus expand $\cos\phi(r)$. First, the cosine of the exact formula (\ref{phi(r)}) expands as \begin{align} \cos\phi = &\,\frac{r_0}{r} -\frac{M}{r_0}\,\frac{(r-r_0)(2r+r_0)}{r^2} \nonumber \\ &-\frac{M^2}{r_0^2}\,\frac{r-r_0}{r}\,\frac{{\cal D}}{4r^2} +O(M^3) \,, \end{align} where we have used the abbreviation \[{\cal D}:= 30r^2\sqrt{\frac{r+r_0}{r-r_0}}\;{\rm arccos}\sqrt{\frac{r+r_0}{2r}} -8r^2+9rr_0+5r_0^2 \,.\] Now to the approximations. Of the prescriptions obtained from perturbative solution of the Binet formula, we choose the result (\ref{BiressaF,lin}) by \cite{BiressaF-11}. When expressed in terms of $\cos\phi$, it expands as \begin{align} \cos\phi = &\,\frac{r_0}{r} -\frac{M}{r_0}\,\frac{(r-r_0)(2r+r_0)}{r^2} \nonumber \\ &-\frac{M^2}{r_0^2}\,\frac{(r-r_0)(2r+r_0)(r+2r_0)}{r^3} +O(M^3) \,. \label{Biressa,expansion} \end{align} Performing the same expansion with the pseudo-Newtonian result (\ref{Wegg-trajectory}) involving Wegg's potential $V=-(M/r)(1+3M/r)$, one obtains \begin{align} \cos\phi = &\,\frac{r_0}{r} -\frac{M}{r_0}\,\frac{r-r_0}{r} \nonumber \\ &-\frac{M^2}{r_0^2}\,\frac{r-r_0+3\sqrt{r^2-r_0^2}\;{\rm arccos}\,\frac{r_0}{r}}{r} +O(M^3) \,. 
\end{align} The hyperbola (\ref{hyperbola}), if ``endowed with" the exact asymptotic angle (given by Darwin's formula), expands to \begin{align} \cos\phi = &\,\frac{r_0}{r} -\frac{2M}{r_0}\,\frac{r-r_0}{r} \nonumber \\ &-\frac{M^2}{r_0^2}\,\frac{(15\pi-16)(r-r_0)}{8r} +O(M^3) \,. \label{hyperbola,expansion} \end{align} The next approximation is the one following from Beloborodov's relation (\ref{Beloborodov-approx}). Solving the latter for $\cos\phi$ and expanding as above, we get \begin{align} \cos\phi = &\,\frac{r_0}{r} -\frac{M}{r_0}\,\frac{(r-r_0)(2r+r_0)}{r^2} \nonumber \\ &-\frac{M^2}{r_0^2}\,\frac{r-r_0}{r_0}\, \frac{{\cal B}}{2r^3} +O(M^3) \,, \end{align} where \[{\cal B}:= 8r^3+4r^2 r_0+5rr_0^2+3r_0^3-4r(2r-r_0)\sqrt{r^2-r_0^2} \;.\] Finally, our formula (\ref{our-approximation}) expands as \begin{align} \cos\phi = &\,\frac{r_0}{r} -\frac{M}{r_0}\,\frac{(r-r_0)(2r+r_0)}{r^2} \nonumber \\ &-\frac{M^2}{r_0^2}\,\frac{(r-r_0)(2r+r_0)(\alpha r+2\omega r_0)}{r^3} +O(M^3) \,. \label{our,expansion} \end{align} As expected, the absolute term (corresponding to a straight line) is the same for all of the formulas. The linear terms are also common (and correspond to the expansion of the exact formula), except for the pseudo-Newtonian formula and the hyperbola. The quadratic terms are somewhat longer; only in the cases (\ref{Biressa,expansion}), (\ref{hyperbola,expansion}) and (\ref{our,expansion}) do they remain rather simple. In particular, notice that the expansion of our suggestion (\ref{our,expansion}) is a generalization of the expansion (\ref{Biressa,expansion}) obtained from the approximate solution by \cite{BiressaF-11}. It is also seen that if our constants $\alpha$ and $\omega$ are larger than 1 (recall that we are actually suggesting the values $\alpha=1.77$, $\omega=1.45$), then our quadratic term is bigger (more negative) and thus results in a trajectory more bent than the one provided by (\ref{BiressaF,lin}).
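As a numerical sanity check (ours, not part of the original comparison), the suggested formula (\ref{our-approximation}) and its closed-form inversion (\ref{our-approximation,r0}) can be verified against each other by a round trip, with the constants $\alpha=1.77$, $\omega=1.45$ recommended above:

```python
import math

ALPHA, OMEGA = 1.77, 1.45   # constants suggested in the text

def cos_phi_new(r, r0, M=1.0):
    """The suggested approximation, eq. (our-approximation)."""
    return (r0 / r - M / (r0 - ALPHA * M)
            * (r - r0) * (2 * r + r0) / (r - OMEGA * M)**2)

def r0_new(r, cos_phi, M=1.0):
    """Closed-form inversion for the pericentre, eq. (our-approximation,r0)."""
    W = (r - OMEGA * M)**2
    R = W * (r * cos_phi + ALPHA * M) - M * r**2      # script-R
    A = W + M * r                                     # script-A
    B = 2 * r**2 - W * ALPHA * cos_phi                # script-B
    return (R + math.sqrt(R * R + 4.0 * M * r * A * B)) / (2.0 * A)

# round trip: pericentre r0 = 8M, sampled at r = 20M
c = cos_phi_new(20.0, 8.0)
print(r0_new(20.0, c))   # ~8.0
```

Since the inversion is exact algebra of the forward formula, the round trip recovers $r_0$ to machine precision.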
\begin{figure*} \includegraphics[width=\textwidth]{fig2.eps} \caption {Comparison of the light-ray approximations in the strong field near the black hole. Arrangement and parametrization of the photon trajectories in the equatorial $(r\cos\phi,r\sin\phi)$ plane corresponds to the right plot of figure \ref{ray-description}. The exact trajectory is shown in {\it solid red}, the linearization of Darwin's formula (\ref{Darwin,lin}) yields the {\it dot-dashed black} line, the linearized solution of the Binet formula by \cite{BhadraBS-10} (our equation \ref{BhadraBS,lin}) is {\it dotted cyan}, the one by \cite{BiressaF-11} (our equation \ref{BiressaF,lin}) is the {\it dashed cyan} line, the quadratic-order solution following from \cite{ArakidaK-12} (our equation \ref{ArakidaK,lin}) yields the {\it solid cyan} line, the approximation due to Beloborodov \cite{Beloborodov-02} (our equation \ref{Beloborodov-approx}) is the {\it dotted black} line, a hyperbola (\ref{hyperbola}) adjusted to the given pericentre and to correct asymptotics is the {\it dashed brown} line, the approximation using the pseudo-Newtonian potential recommended by \cite{Wegg-12} (our equation \ref{Wegg-trajectory}) is the {\it dashed (dark) blue} line and our newly suggested approximation (\ref{our-approximation}) is the {\it solid light green} line. From top left to bottom right, the plots show trajectories of photons starting tangentially (from $\phi_0=0$) from radii $r_0=2M$, $r_0=2.3M$, $r_0=3M$, $r_0=3.22M$, $r_0=3.4M$, $r_0=3.6M$, $r_0=4M$, $r_0=5M$ and $r_0=8M$. Beloborodov's prescription (dotted black) can only be employed for $r_0\geq 4M$ and the hyperbola-approximation (dashed brown line) for $\phi_\infty<\pi$. See the main text for further commentary.} \label{comparison-near} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{fig3.eps} \caption {Same curves as those in figure \ref{comparison-near}, i.e. 
again showing rays with pericentres at $r_0=2M$, $r_0=2.3M$, $r_0=3M$, $r_0=3.22M$, $r_0=3.4M$, $r_0=3.6M$, $r_0=4M$, $r_0=5M$ and $r_0=8M$, but now grouped into plots by approximations (rather than by pericentre radii), so that it is easier to see how each of them depends on radius in comparison with the exact result: from top left to bottom right and with the same colouring as in previous figure, one can see exact rays (solid red line), our approximation (solid light green line), pseudo-Newtonian result with Wegg's potential (dashed dark blue line), Beloborodov's approximation (dotted black line; only applicable to the last three trajectories), suitably adjusted hyperbolas (dashed brown line; only applicable to the last four trajectories), the linear approximation by \cite{BiressaF-11} (dashed light blue line), linearized Darwin's formula (dot-dashed black line), the quadratic approximation by \cite{ArakidaK-12} (solid light blue line) and the linear approximation by \cite{BhadraBS-10} (dotted light blue line). The figure reveals that mainly the first-row prescriptions behave satisfactorily down to the very horizon; they are not very accurate there, but follow the actual rays qualitatively. Their overall dependence on radius also proves to be very close to that of the exact pattern.} \label{near-approximations} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{fig4.eps} \caption {Counterpart of figure \ref{comparison-near} with equatorial $(r\cos\phi,r\sin\phi)$ plots extended to a larger radial region, in order to also compare the light-ray approximations in weaker fields farther from the black hole. 
Again, the exact trajectory is the {\it solid red} line, the linearization of Darwin's formula yields the {\it dot-dashed black} line, the linearized solution of the Binet formula by \cite{BhadraBS-10} is the {\it dotted cyan} line, the one by \cite{BiressaF-11} is the {\it dashed cyan} line, the quadratic-order solution following from \cite{ArakidaK-12} yields the {\it solid cyan} line, the approximation due to \cite{Beloborodov-02} is the {\it dotted black} line, a hyperbola adjusted to the given pericentre and to correct asymptotics is the {\it dashed brown} line, the approximation using the pseudo-Newtonian potential by \cite{Wegg-12} is the {\it dashed (dark) blue} line and our newly suggested approximation is the {\it solid light green} line. From top left to bottom right, the plots show trajectories of photons starting tangentially (from $\phi_0=0$) from radii $r_0=3.5M$, $r_0=3.75M$, $r_0=4M$, $r_0=4.3M$, $r_0=5.5M$, $r_0=8M$ and $r_0=30M$. See the main text for interpretation.} \label{comparison-far} \end{figure*} \section{Comparison on ray examples} \label{examples} Let us compare the above approximations of light rays passing at different distances from the centre; we are even including those approaching the horizon very closely. Since the approximate formulas are primarily required to excel in weak-field regions, they cannot be expected to perform well down there. However, it is good to know where and how much they fail. In particular, when using such a formula in a code, it matters whether it is just very inaccurate, or rather yields complete nonsense that has to be discarded.
Numerical results are presented in four figures: figure \ref{comparison-near} compares the approximations at small radii (down to the very horizon), figure \ref{near-approximations} illustrates the radius-dependence of all of them, figure \ref{comparison-far} shows the behaviour at larger radii, and figure \ref{total-deflection} accompanies the remark on asymptotics (hence on the total deflection angle) added in section \ref{asymptotic-angle} below. The figures demonstrate that the approximations obtained by low-order expansions of the exact formulas (namely by linearization or quadratic expansion in $M$) are only usable for rays whose pericentres are above $6M$--$8M$, say. Below this radius, all the ``ad hoc" prescriptions provide much better results, including the pseudo-Newtonian one using the potential suggested by Wegg. Actually, the latter even provides {\em the best} results in a narrow region around the circular photon geodesic at $r=3M$ since it reproduces its location exactly; on the other hand, for pericentres at larger radii it is not as precise as other approximations, though it also falls off to zero deflection correctly. When the pericentre shifts below $4M$, even good approximations become rather problematic. The one by Beloborodov is only usable above this radius, being not very accurate up to some $r_0\simeq 5M$. Our newly suggested formula is very accurate almost everywhere, including the rays with pericentres slightly below $4M$, but below $r_0=3.77M$ it also deviates from the correct behaviour, switching to ``elliptic" behaviour (not present at all in the relativistic treatment, apart from the circular photon orbit at $r=3M$) which can only mimic the relativistic trajectories locally. Note that the decisive value $r_0=3.77M$ comes from $r_0=(2+\alpha)M$, so it might be improved (shifted down) by choosing for our constant $\alpha$ a value lower than $1.77$, but this would almost certainly spoil the behaviour farther away.
As for the approximation by a hyperbola: rough as this idea may have seemed, the figures show that it reproduces the large-scale features of the rays very well when tied to the correct asymptotics. However, its bending about the black hole is not sharp enough and, moreover, precisely when endowed with the {\em correct} asymptotics, it gets quite complicated and cannot be inverted for $r_0$. Finally, it is important that the two approximations which are satisfactory in general and can be used also below $r_0=4M$, namely our newly suggested formula (\ref{our-approximation}) and the pseudo-Newtonian result using Wegg's potential, serve acceptably even there (at $4M>r_0>2M$); most importantly, they nowhere return an {\em error}. Above $r_0\simeq 5M$, the approximation by Beloborodov remains a benchmark whose main advantage is the extraordinarily simple relation (\ref{alpha-r,psi}) between the angular position on the trajectory and the latter's local direction. \begin{figure*} \includegraphics[width=\textwidth]{fig5.eps} \caption {Cosine of the azimuth $\phi$ along which the ray approaches radial infinity, $\cos\phi_\infty$, as given by the exact formula (solid red), by the pseudo-Newtonian treatment using Wegg's potential (dashed blue), by Beloborodov's approximation (dotted black) and by the formula we suggest here (solid green). The right plot provides a more detailed view of the small-pericentre behaviour (the pseudo-Newtonian result is not shown there). Just for the sake of interest, we add a curve (dotted gold) given by the expression (\ref{deflection-formula}) in order to illustrate that it is not difficult to suggest a very good formula solely for the asymptotic angle.} \label{total-deflection} \end{figure*} \section{Asymptotic angle} \label{asymptotic-angle} Asymptotic behaviour of the approximations studied here is clearly visible in figure \ref{comparison-far}, but let us add a special remark (and figure) comparing the $\phi_\infty$ values.
It is of course useless to include the approximation by the hyperbola here, because, where applicable, its asymptotic angle has been {\em prescribed} by the correct (exact) value. Therefore, we are left with the results following from the expansion of the exact formula or approximate solution of the Binet formula, with the pseudo-Newtonian result (\ref{Wegg,phi_infty}) using Wegg's potential, with the asymptotics $\cos\phi_\infty=-2M/(r_0-2M)$ of Beloborodov and with $\cos\phi_\infty=-2M/(r_0-\alpha M)$ following from our formula suggested above. These asymptotic forms can naturally be inverted for $r_0$ more easily than the general formulas describing the whole trajectory. Inversion is especially simple for Beloborodov's and our approximations, and also for the linearized solution of Binet's formula by \cite{BiressaF-11} given in equation (\ref{BiressaF,lin}): \begin{align} \frac{r_0}{M} &= -\frac{(2+\cos\phi_\infty)(1-\cos\phi_\infty)}{\cos\phi_\infty} & & {\rm Biressa} \\ &= -\frac{2-2\cos\phi_\infty}{\cos\phi_\infty} & & {\rm Beloborodov} \\ &= -\frac{2-\alpha\,\cos\phi_\infty}{\cos\phi_\infty} & & {\rm our~formula} \,. \end{align} The $\cos\phi_\infty$ plots given in figure \ref{total-deflection} confirm that the exact behaviour (solid red curve) is best reproduced by our formula (green curve), followed by the formula of Beloborodov (dotted black). Note that only the results given by the best of the approximations are included there. In particular, we omit those provided by linearizations in $M$ as well as by the quadratic formula.
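These three inversions amount to one line each; a minimal numerical illustration (ours; the sample value $\cos\phi_\infty=-0.2$ is arbitrary):

```python
def r0_biressa(c, M=1.0):
    """Pericentre from cos(phi_inf), linearized Binet solution."""
    return -M * (2.0 + c) * (1.0 - c) / c

def r0_belo(c, M=1.0):
    """Pericentre from cos(phi_inf) = -2M/(r0 - 2M)."""
    return -M * (2.0 - 2.0 * c) / c

def r0_ours(c, M=1.0, alpha=1.77):
    """Pericentre from cos(phi_inf) = -2M/(r0 - alpha*M)."""
    return -M * (2.0 - alpha * c) / c

# for a ray deflected so that cos(phi_inf) = -0.2:
for f in (r0_biressa, r0_belo, r0_ours):
    print(f.__name__, f(-0.2))   # ~10.8, ~12.0, ~11.77 (in units of M)
```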
We can supplement the figure by the $r\rightarrow\infty$ limits of the expansions given in section \ref{expansions}, to obtain, up to second order in $M$, \begin{align} \cos\phi_\infty &=-\frac{2M}{r_0}-\frac{M^2}{r_0^2}\left(\frac{15\pi}{8}-2\right) && {\rm exact} \\ &=-\frac{2M}{r_0}-\frac{2M^2}{r_0^2} && {\rm Biressa} \\ &=-\frac{M}{r_0}-\frac{M^2}{r_0^2}\left(\frac{3\pi}{2}+1\right) && {\rm Wegg} \\ &=-\frac{2M}{r_0}-\frac{4M^2}{r_0^2} && \hspace{-5mm} {\rm Beloborodov} \\ &=-\frac{2M}{r_0}-\frac{2\alpha M^2}{r_0^2} && \hspace{-5mm} {\rm our~formula} \,. \end{align} The linear terms are equal, except that of Wegg's pseudo-Newtonian formula; and the quadratic-term coefficient of the exact value, $-(15\pi/8-2)\doteq -3.89$, is most closely reproduced by Beloborodov ($-4$) and by our formula ($-3.54$ if choosing $\alpha=1.77$ as in the figures). Let us add that it is not difficult to come up with a still better proposal {\em solely for the asymptotic angle}: in figure \ref{total-deflection}, look at the dotted gold curve that follows from a slight modification of our asymptotics, \begin{equation} \label{deflection-formula} \cos\phi_\infty=-\frac{2M}{r_0-\alpha M}\left[1-\frac{M^6}{(r_0-2.1M)^6}\right]. \end{equation} It mirrors the correct curve very accurately down to some $r_0=3.02M$, where it reaches $\cos\phi_\infty=+1$, i.e. the photon makes a full $360^\circ$ angular revolution when starting from there. We stress that this has been just an ad hoc example; without doubt, still more accurate asymptotic formulas can be found. \begin{figure*} \epsscale{.8} \plotone{fig6.eps}\caption {Finding the connecting ray for a given configuration of source ($r=r_{\rm src}$) and observer ($r=r_{\rm obs}$), when only their angular separation $\Delta\phi$ is known (not their absolute angular positions). 
The solution is given in terms of the pericentre radius $r_0$ as a function of $\Delta\phi$, for $r_{\rm obs}=30M$; each curve corresponds to one particular value of $r_{\rm src}$, specifically, (from bottom to top) $r_{\rm src}=6M$, $8M$, $10M$, \dots, $30M$. Green curves have been obtained using our approximation (\ref{our-approximation,r0}), while red curves (drawn ``under'' the green ones) follow by numerical solution from the exact formula. The green approximation is clearly very accurate; it only fails at $\Delta\phi\rightarrow 2\pi$ ($\Leftrightarrow$ bending by $\sim\pi$) where our formula switches to bound-orbit mode (this happens for $r_0<(2+\alpha)M=3.77M$, specifically). } \label{exercise} \end{figure*} \section{Finding the ray parameters for generic boundary conditions} \label{lensing-exercise} We have assumed that the azimuthal angle $\phi$ is adjusted so that $r(\phi=0)=r_0$, but in a real lensing situation one can only suppose to know the positions of the source, of the lensing body and of the observer. Placing the coordinate origin at the lensing body and choosing the azimuthal (``equatorial") plane as that defined by the connecting ray, this means knowing the radii of the source and of the observer, plus the angular {\em difference} between the source and the observer, $\Delta\phi$ say. One a priori does not know the angular position of the ray pericentre, so can{\em not} fix the azimuth $\phi$ absolutely prior to solving our inversion exercise $(r,\phi)\rightarrow r_0$. Consider now whether the exercise is still solvable, i.e. whether it is possible to find $r_0$ from this accessible information.\footnote {I thank the referee for drawing my attention to this point.} In order for the pericentre $r_0$ to be well defined, let us assume that it lies between the source and the observer.
As we wish to {\em eventually} (in solving the problem) adjust the azimuth $\phi$ so that $r(\phi=0)=r_0$, we can assume (for example) that the source is at $\phi<0$ and the observer is at $\phi>0$. Let us thus denote their positions by $(r_{\rm src}\!>\!r_0,-\pi<\phi_{\rm src}\!<\!0)$ and $(r_{\rm obs}\!>\!r_0,\pi>\phi_{\rm obs}\!>\!0)$, respectively. Imagine solving the inversion for $r_0$ ``from both sides", i.e. looking for $(r_{\rm src},\cos\phi_{\rm src})\rightarrow r_0$ and $(r_{\rm obs},\cos\phi_{\rm obs})\rightarrow r_0$ (given that $\phi_{\rm src}\!<\!0$ whereas $\phi_{\rm obs}\!>\!0$, it is better to write the angular data in terms of the cosine which is independent of the sign; the exercise can then be treated using the same formulas ``from both sides" of the $\phi=0$ plane). Both must lead to the same pericentre $r_0$, and we also know that $\phi_{\rm obs}\!-\!\phi_{\rm src}$ gives the total angular distance travelled, $\Delta\phi$ (assumed to be $<2\pi$), so we have two constraints \begin{equation} r_0(r_{\rm src},\phi_{\rm src})=r_0(r_{\rm obs},\phi_{\rm obs}), \quad \phi_{\rm obs}\!-\!\phi_{\rm src}=\Delta\phi. \end{equation} Since $r_{\rm src}$, $r_{\rm obs}$ and $\Delta\phi$ are known, we have two equations for two unknowns, $\phi_{\rm src}$ and $\phi_{\rm obs}$. Let us check whether one can really solve the exercise in such a way. Suppose that we employ our approximation according to which $r_0$ is given in terms of $r$ and $\cos\phi$ by equation (\ref{our-approximation,r0}).
The constraints thus yield \begin{equation} \frac{\left({\cal R}+\sqrt{{\cal R}^2+4Mr{\cal A}{\cal B}}\right)_{\rm src}} {2{\cal A}_{\rm src}} = \frac{\left({\cal R}+\sqrt{{\cal R}^2+4Mr{\cal A}{\cal B}}\right)_{\rm obs}} {2{\cal A}_{\rm obs}}, \end{equation} where \begin{align*} {\cal R}_{\rm src} &:= {\cal R}(r\!=\!r_{\rm src},\phi\!=\!\phi_{\rm src}), \\ {\cal R}_{\rm obs} &:= {\cal R}(r\!=\!r_{\rm obs},\phi\!=\!\phi_{\rm src}\!+\!\Delta\phi), \\ {\cal A}_{\rm src,obs} &:= {\cal A}(r\!=\!r_{\rm src,obs}), \\ {\cal B}_{\rm src} &:= {\cal B}(r\!=\!r_{\rm src},\phi\!=\!\phi_{\rm src}), \\ {\cal B}_{\rm obs} &:= {\cal B}(r\!=\!r_{\rm obs},\phi\!=\!\phi_{\rm src}\!+\!\Delta\phi), \end{align*} with ${\cal R}$, ${\cal A}$, ${\cal B}$ introduced below equation (\ref{our-approximation,r0}). The above equation is to be solved for $\phi_{\rm src}$ which in turn implies $\phi_{\rm obs}(=\!\phi_{\rm src}\!+\!\Delta\phi)$ and $r_0$, all as functions of $r_{\rm src}$, $r_{\rm obs}$ and $\Delta\phi$. As an illustration (figure \ref{exercise}), let us choose $r_{\rm obs}\geq r_{\rm src}$ (without loss of generality) and monitor how the solution of the exercise changes with $\Delta\phi$ increasing from zero to $2\pi$ for a series of source radii $r_{\rm src}$ increasing from some small value to $r_{\rm obs}$. 
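The matching condition above is a one-dimensional root-finding problem for $\phi_{\rm src}$. The sketch below illustrates the scheme with plain bisection; since the ingredients ${\cal R}$, ${\cal A}$, ${\cal B}$ of equation (\ref{our-approximation,r0}) are not reproduced in this section, a toy flat-space pericentre function $r_0(r,\phi)=r\cos\phi$ (a straight ray, i.e.\ the $M=0$ limit) is used as a stand-in, and any $r_0(r,\phi)$ inversion can be passed in its place:

```python
import math

def pericentre_flat(r, phi):
    # Toy stand-in for an r0(r, cos(phi)) inversion formula: for a straight
    # ray in flat space (the M = 0 limit), r(phi) = r0/cos(phi), so
    # r0 = r*cos(phi).
    return r * math.cos(phi)

def solve_connecting_ray(r_src, r_obs, dphi, r0_of=pericentre_flat,
                         tol=1e-12, max_iter=200):
    # Find phi_src in (-dphi, 0) such that the pericentre computed from the
    # source side equals the one computed from the observer side:
    #     r0(r_src, phi_src) = r0(r_obs, phi_src + dphi).
    # Plain bisection; assumes the mismatch changes sign on the bracket.
    mismatch = lambda phi: r0_of(r_src, phi) - r0_of(r_obs, phi + dphi)
    lo, hi = -dphi + 1e-9, -1e-9
    flo, fhi = mismatch(lo), mismatch(hi)
    if flo * fhi > 0:
        raise ValueError("no sign change: no connecting ray in this bracket")
    mid = 0.5 * (lo + hi)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fm = mismatch(mid)
        if abs(fm) < tol:
            break
        if flo * fm < 0:
            hi = mid
        else:
            lo, flo = mid, fm
    return mid, r0_of(r_src, mid)
```

As a sanity check, for the symmetric configuration $r_{\rm src}=r_{\rm obs}$ the solution must be $\phi_{\rm src}=-\Delta\phi/2$, which the sketch reproduces.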
A simple chart with the source lying somewhere on the $r=r_{\rm src}$, $-\pi<\phi<0$ half-circle, the observer lying on the $r=r_{\rm obs}\geq r_{\rm src}$, $\pi>\phi>0$ half-circle, and their angular separation $\Delta\phi$ gradually growing, reveals that i) when $r_{\rm src}$ and $r_{\rm obs}$ are sufficiently different, the solution does not exist for too-small $\Delta\phi$ (the connecting ray is nowhere purely tangential to the centre then); ii) the solution only starts to exist when $\Delta\phi$ is large enough for $r_{\rm src}$ to just coincide with $r_0$; iii) within the interval of $\Delta\phi$ where the solution does exist, the pericentre $r_0$ typically decreases from $r_{\rm src}$ with increasing $\Delta\phi$, because increasing the angular separation corresponds to a connecting ray increasingly bent around the centre; iv) for $\Delta\phi\rightarrow 2\pi$ the pericentre radius falls almost to $3M$ and the approximations more or less cease to provide reasonable answers; specifically, we learned at the end of section \ref{new-suggestion} that according to our approximation the minimal possible pericentre of an unbound trajectory lies at $(2+\alpha)M=3.77M$. The above estimates are confirmed by figure \ref{exercise} where the pericentre radii are plotted, in dependence on $\Delta\phi$, for $r_{\rm obs}=30M$ and $r_{\rm src}=6M$, $8M$, $10M$, \dots, $30M$. Besides the curves $r_0(r_{\rm obs},r_{\rm src};\Delta\phi)$ obtained from our approximation (green curves), the figure also shows analogous results obtained, purely numerically, from the {\em exact} formula (red curves, drawn ``under" the green ones). Clearly the two series of curves almost coincide, even at quite small $r_0$; our approximation is unusable only for very large $\Delta\phi$ (approaching $2\pi$), because this corresponds to very large bending (by $\sim\pi$) where our formula goes over to the elliptic-type bound-orbit regime.
\section{Concluding remarks} We have suggested a new formula approximating light rays in the Schwarzschild space, compared it with other formulas from the literature and shown that it performs very well while being quite simple and easily invertible for the pericentre radius $r_0$. Besides analytical estimates, the main focus has been on numerical testing of various plausible approximations against exact results in a very strong as well as a weaker-field regime. We have shown that our formula also yields very good results for the asymptotic angle of photons, as well as in searching for a connecting ray in a given source--gravitating body--observer configuration. On a more general level, we can conclude that in spite of the legitimate vigilance toward ad hoc prescriptions that do not follow from an exact result by any sound procedure, in our comparison such formulas proved considerably better than low-order expansions of the exact formula, some of them providing acceptable results even in close vicinity to the horizon. \acknowledgments I am thankful for support from Czech grant GACR-14-10625S.
\section{Introduction}\label{sec1} In recent years, numerous studies have applied factor models combined with the Bayesian framework to analyze gene expression data, and their results often show an improvement in the identification and estimation of metagene groups and patterns related to the underlying biology; see, for example, \citet{West2003}, \citet{LucasEtAl2006} and \citet {CarvalhoEtAl2008}. The usual formulation for factor models assumes additive effects of latent factors across the samples. This assumption leads to very tractable model fitting and computation, but may not represent the reality in some applications. The structure of dependence between genes in biological pathways motivates the idea of a model with nonlinear interactions between latent factors. The presence of interactions can have important implications for the interpretation of the underlying biology. The study of nonlinear interactions between observed variables has been the focus of many publications in the context of regression problems. In many cases, the proposed model introduces the nonlinearity through the specification of Gaussian Process (GP) priors. \citet {HenaoWinther2010} consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. The framework consists of a fully Bayesian hierarchy for sparse models using spike and slab priors, non-Gaussian latent factors and a stochastic search over the ordering of the variables. The authors argue that the model is flexible in the sense that it can be extended by only changing the prior distribution of a set of latent variables to allow for nonlinearities between observed variables through GP priors. The nonlinear relationship between a set of observed variables is also the topic of \citet{HoyerEtAl2009} in the context of Directed Acyclic Graphs (DAG). 
Each observed variable (node in a DAG) is obtained as a function of its parents plus independent additive noise. An arbitrary function is chosen to define linear/nonlinear relationships between the observed values. The paper evaluates whether a DAG is consistent with the data by constructing a nonlinear regression of each variable on its parents, and subsequently testing whether the resulting residuals are mutually independent. GP regression and kernelized independence tests are used in the paper. Associations between observed and latent variables are another interesting topic. \citet{ArmingerMuthen1998} consider latent variable models including polynomial terms and interactions of latent regressor variables. Two groups of observed variables are used: the response vector \textbf{y}, and the vector of covariates \textbf{x}. Their model specifies two equations; the first one expresses \textbf{y} as a linear combination of polynomial terms and/or interactions of elements in the latent vector $\xi$. The second equation defines a factor model without interaction terms, where $\xi$ is the factor score and \textbf{x} is the target data. Because the model includes components representing functions of latent variables in the first equation, the authors denote the formulation as nonlinear. They use the Bayesian framework with conjugate priors to estimate the parameters; sparsity priors are not considered in their analysis. In the spirit of factor analysis, \citet {TehEtAl2005} model the relationships among components of a response vector \textbf{y} using linear (or generalized linear) mixing of underlying latent variables indexed by a covariate vector \textbf{x} (observed values). The authors assume that each latent variable is conditionally independently distributed according to a GP, with \textbf {x} being the (common) index set. The mean of the response \textbf{y} is then a function of a linear combination of the conditionally independent GPs.
Most applications of GP models involve learning tasks where both output and input data are assumed to be given at training time. \citet {Lawrence2004} and \citet{Lawrence2005} have proposed a multiple-output GP regression model assuming observed output data and latent variables as inputs. The approach explores nonlinear interactions between the latent factors. The authors introduce a probabilistic interpretation of principal component analysis (PCA) named dual probabilistic PCA (DPPCA). The DPPCA model has the advantage that the linear mappings from the latent-space to the data-space can be easily nonlinearized through Gaussian processes (DPPCA with a GP introducing nonlinearity is then called GP Latent Variable Model or GP-LVM). The GP (assumed for latent variables) with an inner product kernel in the covariance function defines a linear association, and it has an interpretation as a probabilistic PCA model. GP-LVM can be obtained by replacing this inner product kernel with a nonlinear covariance function. The nonlinear mappings are designed to address the weaknesses in visualizing data sets that arise when using statistical tools that rely on linear mappings, such as PCA and standard factor models. The analyses are based on optimization via maximum likelihood estimation; no MCMC algorithm is applied and no sparsity prior is assumed. In GP models, inference is analytically tractable for regression problems, and deterministic approximate inference algorithms are widely used for classification problems. The use of MCMC methods to sample from posterior distributions in a model assuming GP prior has been explored in the literature only for cases with observed input data. As an example, \citet{TitsiasEtAl2009} describe an MCMC algorithm which constructs proposal distributions by utilizing the GP prior. At each iteration, the algorithm generates control variables and samples the target function from the conditional GP prior. 
The control variables are auxiliary points associated with observed input variables defined in the model. An advantage of MCMC over deterministic approximate inference is that the sampling scheme will often not depend on details of the likelihood function, and is therefore very generally applicable. In addition, the development of deterministic approximations is difficult since the likelihood can be highly complex. \citet {ChenEtAl2010} have considered inference based on Variational Bayesian (VB) approximation and Gibbs sampling to examine distinct ways of inferring the number of factors in factor models applied to gene expression data. The study indicates that while the cost of each VB iteration is larger than that of MCMC, the total number of VB iterations is much smaller. However, the CPU cost of MCMC appears to be worthwhile, as they found that for a large-scale data set the MCMC results were significantly more reliable than VB; the VB approximation has difficulties with local-optimal solutions, and the factorized form of the VB posterior may not be as accurate for large-scale problems. Different latent class models have been proposed in the literature to analyze the DNA Copy Number Alteration (CNA) problem. For example, \citet {LucasEtAl2010} use sparse latent factor analysis to identify CNA associated with the hypoxia and lactic acidosis response in human cancers. Specifically, they fit a latent factor model of the gene signatures in one data set of 251 breast tumors [\citet {MillerEtAl2005}] to generate 56 latent factors. These factors then allow for connections to be made between a number of different data sets, which can be used to generate biological hypotheses regarding the basis for the variation in the gene signatures. They have identified variation in the expression of several factors which are highly associated with CNAs in similar or distinct chromosomal regions.
\citet {DeSantisEtAl2009} developed a supervised Bayesian latent class approach to evaluate CNA on array CGH data. The authors assume that tumors arise from subpopulations (latent classes) sharing similar patterns of alteration across the genome. The methodology relies on a Hidden Markov Model (HMM) to account for the dependence structure involving neighboring clones within each latent class. In particular, the approach provides posterior distributions that are used to make inferences about gains and losses in copy number. \citet {FridlyandEtAl2004} proposed a discrete-state homogeneous HMM where underlying states are considered segments of a common mean. One of the goals of the procedure is to identify copy number transitions. \citet {MarioniEtAl2006} extended this approach by developing the method BioHMM for segmenting array CGH data into states with the same underlying copy number. They use a heterogeneous HMM with probability of transitioning between states depending on the distance between adjacent clones. We are interested in the study of multi-factor models developed for the analysis of matrices representing gene expression patterns. Our goal is to investigate the existence of interaction effects involving latent factors. In order to test the significance of the interaction terms, the mixture prior with a point mass at zero and a Gaussian component (sometimes referred to as the ``spike and slab'' prior) is assumed. This type of prior has been used effectively to define the sparse structure in \citet{West2003}, \citet{LucasEtAl2006}, \citet {CarvalhoEtAl2008} and others. The outline of this paper is as follows. In Section \ref{sec2} we propose a factor model with multiplicative interactions between latent factors. Our approach for this problem has not yet been considered in the literature. Two strategies are used to introduce the interactions. Section \ref{sec3} explores nonlinear structure of interactions between factors; the formulation is more general. 
In short, we introduce nonlinearities through the specification of a GP prior for a set of latent variables. Five different versions of the model are investigated; they differ in terms of prior formulations for probability parameters and the assumption regarding the similarity of the interaction effects for distinct features. In Section \ref{sec4} a simulated study is developed to evaluate and compare the models from Sections \ref{sec2} and \ref{sec3}. Additional synthetic data analyses to assess the performance of the models are presented in \citet{MayrinkLucas2012}. Sections \ref{sec5} and \ref{sec6} show real data applications where we examine interaction effects related to chromosomal regions detected with CNA in breast cancer data. Finally, Section \ref{sec7} indicates the main conclusions and future work. The algorithms required to fit the proposed models are implemented using the MATLAB programming language (\url{http://www.mathworks.com}). \section{Factor model with multiplicative interactions}\label{sec2} Assume $X$ is an ($m \times n$) matrix with $X_{ij}$ representing gene $i$ and sample $j$. We propose the model \begin{equation}\label{Eq01} X = \alpha\lambda+ \theta\eta+ \varepsilon, \end{equation} where $\alpha$ is an ($m \times L$) matrix of loadings, $\lambda$ is an ($L \times n$) matrix of factor scores, $\theta$ is an ($m \times T$) matrix of loadings, $\eta$ is a ($T \times n$) matrix of interaction effects, and $\varepsilon$ is an ($m \times n$) noise matrix with $\varepsilon_{ij} \sim N(0,\sigma_i^2)$; let $\sigma^2 = (\sigma_1^2,\ldots,\sigma_m^2)'$. With this formulation, we are separating the linear and nonlinear effects. One could add the term $\mu1_n$ in this model to estimate the mean expression of the genes; $\mu$ is an $m$-dimensional column vector and $1_n$ is an $n$-dimensional row vector of ones. We prefer the parsimonious version where the rows of $X$ are standardized and $\mu= \mathbf{0}$ is assumed. 
The multiplicative interactions are defined in $\eta$ with the following assumption: $\eta_{1j} = \lambda_{1j} \lambda_{2j}$, $\eta_{2j} = \lambda_{1j} \lambda_{3j},\ldots,\eta_{Tj} = \lambda _{(L-1)j} \lambda_{Lj}$. Note that $T = L!/[(L-2)! 2!]$. In terms of prior distributions, we consider the conjugate specifications $\lambda_{lj} \sim N(0,1)$ and $\sigma_i^2 \sim \mathit{IG}(a,b)$. In our study, the bimodal sparsity promoting priors are key elements in the structure of the model. This form of prior originated in the context of Bayesian variable selection, and it has been the subject of substantial research; see George and McCulloch (\citeyear{GeorgeMcCulloch1993,GeorgeMcCulloch1997}) and \citet{Geweke1996}. The spike and slab mixture prior is defined for the factor loadings to allow for sparsity and to test whether the factors/interactions have significant effect on each gene. Assume \begin{eqnarray} \label{Eq02} \alpha_{il} &\sim& (1-h_{il}) \delta_0( \alpha_{il}) + h_{il} N(0,\omega_{\alpha}), \nonumber\\[-8pt]\\[-8pt] h_{il} &\sim& \operatorname{Bernoulli}(q_{il}) \quad\mbox{and}\quad q_{il} \sim\operatorname{Beta}(\gamma_1,\gamma_2), \nonumber \\ \label{Eq03} \theta_{it} &\sim& (1-z_{it}) \delta_0( \theta_{it}) + z_{it} N(0,\omega_{\theta}), \nonumber\\[-8pt]\\[-8pt] z_{it} &\sim& \operatorname{Bernoulli}(\rho_{it}) \quad\mbox{and}\quad \rho_{it} \sim\operatorname{Beta}(\beta_1,\beta_2).\nonumber \end{eqnarray} We consider two approaches to introduce the corresponding multiplicative interaction term; they are enumerated below: \begin{longlist}[(2)] \item[(1)] Introduce the interaction via Gaussian prior: $\eta_{tj} \sim N(\lambda_{l_1 j} \lambda_{l_2 j}, \nu)$. \item[(2)] Assume the product with probability 1: $\eta_{tj} = \lambda_{l_1 j} \lambda_{l_2 j}$. \end{longlist} In the cases above, let $l_1 < l_2 \in\{1,\ldots,L\}$ be the indices of factors involved in the product term related to $\eta_{tj}$ where $t \in\{1,\ldots,T\}$. 
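As an illustration, the $T = L!/[(L-2)!\,2!]$ interaction rows can be built from the factor scores as sketched below (pure Python, for illustration only); the sketch covers both approaches, with a strictly deterministic product for approach (2) and a Gaussian draw centred at the product with small variance $\nu$ for approach (1):

```python
import itertools
import math
import random

def interaction_factors(lam, nu=0.0, rng=random):
    # Build the T = C(L,2) interaction rows eta from the (L x n) factor
    # scores lam.  With nu = 0 this is approach (2): eta_tj equals the
    # product lam_{l1 j} * lam_{l2 j} with probability 1.  With nu > 0 it
    # is approach (1): a Gaussian draw centred at the product, variance nu.
    L, n = len(lam), len(lam[0])
    eta = []
    for l1, l2 in itertools.combinations(range(L), 2):
        row = [lam[l1][j] * lam[l2][j] for j in range(n)]
        if nu > 0.0:
            row = [rng.gauss(mean, math.sqrt(nu)) for mean in row]
        eta.append(row)
    return eta
```

For instance, with $L=3$ the function returns the $T=3$ rows $(\lambda_{1\cdot}\lambda_{2\cdot}, \lambda_{1\cdot}\lambda_{3\cdot}, \lambda_{2\cdot}\lambda_{3\cdot})$.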
In the first version, we specify the product $\lambda_{l_1 j} \lambda_{l_2 j}$ as the mean parameter of the Gaussian distribution. This approach can be generalized with the specification of any function $f(\lambda_{l_1 j},\lambda_{l_2 j})$, which makes it possible to investigate other types of relationships between factors. The variance $\nu$ must have a small value; otherwise, we are indicating a weak association between $\eta_{tj}$ and $\lambda_{l_1 j} \lambda_{l_2 j}$. In this case, the multiplicative effect is lost and the interaction factor is just another factor in the model. If the number of genes is large, the variability in the posterior distribution can be very small due to the large amount of data. In this case, $\nu$ is difficult to set and only extremely small values will ensure that $\eta_{tj}$ is associated with $\lambda_{l_1 j} \lambda_{l_2 j}$. The target posterior in approach 1 is $p(\alpha,\lambda,\theta,\eta,\sigma^2|X)$. In the second approach, we force the perfect association between the interaction factor and the corresponding product term; this strategy is convenient for dealing with large data sets. Here, $p(\alpha,\lambda,\theta,\sigma^2|X)$ is the target posterior distribution. Note that the $\eta_{tj}$ are regarded as fixed quantities; $\eta_{tj} = \lambda_{l_1 j} \lambda_{l_2 j}$. A Gibbs Sampler algorithm is implemented to generate observations from the target posterior distributions; see Section A in \citet {MayrinkLucas2012} to identify the full conditional distributions. A simulated study has been developed to investigate the performance of the proposed model; Section B in \citet{MayrinkLucas2012} shows the results and the associated discussion.
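To make the sparsity mechanism concrete, a single hierarchical draw from the spike and slab prior (\ref{Eq02}) can be sketched as below; the hyperparameter values $\omega_\alpha=\gamma_1=\gamma_2=1$ are illustrative only:

```python
import math
import random

def draw_loading(omega=1.0, gamma1=1.0, gamma2=1.0, rng=random):
    # One hierarchical draw from the prior of equation (2):
    #   q ~ Beta(gamma1, gamma2);  h ~ Bernoulli(q);
    #   alpha = 0 (the spike) when h = 0, else alpha ~ N(0, omega) (the slab).
    q = rng.betavariate(gamma1, gamma2)
    h = rng.random() < q
    return rng.gauss(0.0, math.sqrt(omega)) if h else 0.0
```

Marginally, $P(h_{il}=1)=\gamma_1/(\gamma_1+\gamma_2)$, so with $\gamma_1=\gamma_2$ roughly half of the prior draws fall in the point mass at zero.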
\section{Factor model with general nonlinear interactions}\label{sec3} Assume the model \begin{equation}\label{Eq04} X = \alpha\lambda+ F + \varepsilon, \end{equation} where $\alpha$ is an ($m \times L$) matrix of loadings, $\lambda$ is an ($L \times n$) matrix of factor scores, and $\varepsilon$ is an ($m \times n$) matrix with idiosyncratic noise terms $\varepsilon_{ij} \sim N(0,\sigma_i^2)$. Here, we replace the term $\theta\eta$ with $F$, which is an ($m \times n$) matrix of interaction effects. This model is defined with $L$ factors, $m$ features and $n$ samples. Again, we chose to work without the genes' mean expression parameter $\mu$. This parsimonious configuration reduces the computational cost to fit large real data sets. In all applications, the rows of $X$ are standardized to define $\mu= \mathbf{0}$. If no constraint is imposed on $\alpha\lambda$ and $F$, the model will experience identifiability issues. As an example, consider the $i$th row of $\alpha\lambda+ F$ and note that $\alpha_{i \cdot} \lambda+ F_{i \cdot} = C \alpha_{i \cdot} \lambda+ F^*_{i \cdot}$, where $F^*_{i \cdot} = (1-C) \alpha_{i \cdot} \lambda+ F_{i \cdot}$ and $C$ is any real number. This paper is focused on the analysis of gene expression data; however, one should not restrict the application to this context only. The methodology can be applied to any data set satisfying the following aspects: (i) the data matrix $X$ can be specified with rows${}={}$features/variables and columns${}={}$samples, (ii) at least two factors can be well defined, (iii) for each factor ``$l$'' there is a subset of features $G_l$ in $X$ which are linearly related to that factor with no interaction effects. Our goal is to identify interactions between factors and identify the features in the data that are affected by such interactions.
We take advantage of the known feature--factor relationship involving the elements in $G_l$ to impose, via prior distributions, a specific configuration for $\alpha$ and $F$ in (\ref{Eq04}); see Section D in \citet{MayrinkLucas2012}. In particular, we assume that most features are not affected by interactions; therefore, prior distributions favoring $F_{i \cdot} = \mathbf{0}$ can be applied. According to this assumption, $F_{i \cdot} = \mathbf{0}$ for most rows $i$. Different versions of the factor model will be explored in our analysis. These versions differ in terms of prior formulations for $\alpha_{il}$ and $F_{i \cdot}$. In all cases, we set the specifications $\sigma_i^2 \sim \mathit{IG}(a,b)$ and $\lambda_{\cdot j} \sim N_L(\mathbf{0}, I_L)$. Consider $\alpha_{il} \sim(1-h_{il}) \delta _0(\alpha_{il}) + h_{il} N(0, \omega)$ where $h_{il}$ is a binary indicator variable. We explore two different forms of expressing our prior uncertainty for the probability that $h_{il} = 1$: \begin{longlist}[(2)] \item[(1)] $h_{il} \sim\operatorname{Bernoulli}(q_{il})$ and $q_{il} \sim \operatorname{Beta}(\gamma_1, \gamma_2)$; \item[(2)] $h_{il} \sim\operatorname{Bernoulli}(q_R)$, $R \in\{R_1, R_2, R_3\}$, and $q_R \sim\operatorname{Beta}(\gamma_{1,R}, \gamma_{2,R})$. Let $R = R_1$ if we suspect that feature $i$ and factor $l$ are associated, $R = R_2$ if no association is expected, and $R = R_3$ if the relationship is unknown. \end{longlist} According to specification (1), $q_{il}$ is updated using a single observation $h_{il}$, and this strategy can be useful in applications involving large data sets. In specification~(2), $q_{R}$ is updated based on the group of $h_{il}$ such that $(i,l) \in R$. If the group of indices $R_3$ contains a large number of elements and $\alpha _{il} \neq0$ for most $(i,l) \in R_3$, the probability $q_{R_3}$ tends to be large which favors $h_{il} = 1$. 
As a result, very few or none of the $\alpha_{il}$ related to $R_3$ will be zero, that is, the level of sparsity is lower than it should be. If $m$ is small, the model performs well with both specifications for~$h_{il}$; see Section D in \citet{MayrinkLucas2012} which presents a simulated study to evaluate the performance of the models proposed in this section. Assume a mixture prior with two components for the interaction effect vector~$F_{i \cdot}$. One of the components is the degenerate distribution at 0, which allows for the possibility of having $F_{i \cdot} = \mathbf{0}$, that is, no interaction effect for feature $i$. We will explore two versions of this mixture distribution. The first one assumes that $F_{i \cdot}$ can differ across affected features, whereas the second version assumes that $F_{i \cdot}$ is the same for all affected features. In the context of gene expression analysis (feature${}={}$gene), version 2 would be less realistic: \begin{longlist}[(2)] \item[(1)] $(F_{i \cdot}' \mid\lambda) \sim(1-z_i) \delta_0(F_{i \cdot}') + z_i N_n[\mathbf{0}, K(\lambda)]$, \item[(2)] $(F_{i \cdot}' \mid F^*) \sim(1-z_i) \delta_0(F_{i \cdot}') + z_i \delta_{F^*}(F_{i \cdot}')$ and $(F^* \mid\lambda) \sim N_n[\mathbf{0},K(\lambda)]$, \end{longlist} where $z_i$ is an indicator variable and $K(\lambda)$ is the covariance matrix obtained from the Squared Exponential covariance function depending on $\lambda$, \begin{equation}\label{Eq05} K(\lambda)_{j_1, j_2} = \exp\biggl\{ -\frac{1}{2 l_s^2} \| \lambda_{\cdot j_1} - \lambda_{\cdot j_2}\|^2 \biggr\}, \end{equation} where $(j_1, j_2) \in\{1,2,\ldots,n\}$, $l_s$ is the characteristic length-scale and $\|\mathbf{y}\|$ represents the Euclidean norm of the vector $\mathbf{y}$. The covariance function is a crucial ingredient in the model, as it encodes our assumptions about the function we wish to learn.
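A direct (pure Python, purely illustrative) implementation of the covariance matrix (\ref{Eq05}) can be sketched as:

```python
import math

def squared_exp_kernel(lam, length_scale=0.2):
    # Covariance matrix of equation (5): K[j1][j2] =
    # exp(-||lam_{.j1} - lam_{.j2}||^2 / (2 l_s^2)), where the columns of
    # the (L x n) score matrix lam index the samples.
    L, n = len(lam), len(lam[0])
    K = [[0.0] * n for _ in range(n)]
    for j1 in range(n):
        for j2 in range(n):
            d2 = sum((lam[l][j1] - lam[l][j2]) ** 2 for l in range(L))
            K[j1][j2] = math.exp(-d2 / (2.0 * length_scale ** 2))
    return K
```

Increasing the length-scale makes more distant samples count as neighbours, which populates the off-diagonal entries and smooths the resulting interaction surface.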
The Squared Exponential is stationary, isotropic and probably the most widely-used kernel in the literature. Furthermore, it is infinitely differentiable, which means that a Gaussian Process with this choice has mean square derivatives of all orders, and is thus smooth; see \citet{RasmussenWilliams2006}. Note that if the points $\lambda_{\cdot j_1}$ and $\lambda_{\cdot j_2}$ are very close in the $\mathbb{R}^L$ space, then the samples $j_1$ and $j_2$ are similar and $K(\lambda)_{j_1, j_2} \approx1$. Conversely, the larger the distance between these points, the higher is the dissimilarity between the samples and the closer to 0 is $K(\lambda )_{j_1, j_2}$. The length-scale $l_s$ is an adjustable parameter that controls how close the points $\lambda_{\cdot j_1}$ and $\lambda_{\cdot j_2}$ should be in order to be considered associated with each other. We explore different strategies to express our prior knowledge about the indicator $z_i$. Assume the following possibilities: \begin{longlist}[(2)] \item[(1)] $z_i \sim\operatorname{Bernoulli}(\rho_i)$ and $\rho_i \sim \operatorname{Beta}(\beta_1, \beta_2)$; \item[(2)] $z_i \sim\operatorname{Bernoulli}(\rho)$ and $\rho\sim\operatorname{Beta}(\beta_1, \beta_2)$; \item[(3)] $z_i \sim\operatorname{Bernoulli}(\rho_R)$, $R \in\{R_1, R_2\}$ and $\rho_R \sim\operatorname{Beta}(\beta_{1,R}, \beta_{2,R})$. Here, $R = R_1$ if we believe that feature $i$ is associated with some factor and is not affected by interactions. Let $R = R_2$ if the association between feature $i$ and any factor is unknown (interaction effect may exist). \end{longlist} Strategy (1) can be more convenient for applications involving large $m$, because it is less influenced by other observations. Strategy (2) assumes a global probability $\rho$ representing the level of features affected by interactions. The updating distribution of $\rho$ takes into account all observations $z_i$. 
We expect few rows of $F$ indicating nonzero effects; therefore, $\rho$ tends to be very small if $m$ is large. This situation favors $z_i = 0$ and, thus, the sparsity level in $F$ can be higher than expected. This same problem can occur with $\rho_{R_2}$ in specification (3); $\rho_R$ is updated with \mbox{$z_i\ \forall i \in R$}. We use the structure of the Gibbs Sampling algorithm to sample from the target distribution $p(\alpha, \lambda, F, \sigma^2 | X)$; the complete conditional posterior distributions are presented in Section C of \citet {MayrinkLucas2012}. In particular, the full conditional of $\lambda _{\cdot j}$ depends on which specification we use for $p(F_{i \cdot }|\lambda)$. An indirect sampling method is required in this case; we apply the Metropolis--Hastings algorithm with a random walk proposal distribution. \begin{table} \tablewidth=171pt \caption{Prior specifications defining different models} \label{Ta01} \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}lccc@{\hspace*{-1.5pt}}} \hline & \multicolumn{3}{c@{}}{\textbf{Prior distributions}} \\[-4pt] & \multicolumn{3}{c@{}}{\hrulefill}\\ \textbf{Model} & $\bolds{h_{il}}$ & $\bolds{F_{i \cdot}}$ & $\bolds{z_i}$ \\ \hline 1 & 1 & 1 & 1 \\ 2 & 1 & 2 & 1 \\ 3 & 1 & 1 & 2 \\ 4 & 1 & 2 & 2 \\ 5 & 2 & 1 & 3 \\ \hline \end{tabular*} \end{table} Table \ref{Ta01} provides an identification number for each configuration of prior distributions defining a factor model. As can be seen, we choose to investigate 5 different configurations. In models~1, 3 and~5, we assume that the interaction effect can differ from row to row in $F$. On the other hand, the same interaction effect is considered for all affected features in models 2 and 4. Note that model 5 is the only one using the specifications $h_{il} \sim \operatorname{Bernoulli}(q_{R})$ and $z_i \sim\operatorname{Bernoulli}(\rho_{R})$. In addition, models 3 and 4 apply the global Bernoulli probability $\rho$.
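The shrinkage behaviour of the global probability $\rho$ discussed above follows from standard Beta--Bernoulli conjugacy; a minimal sketch (the hyperparameters $\beta_1=\beta_2=1$ are illustrative):

```python
def rho_update(z, beta1=1.0, beta2=1.0):
    # Conjugate update of a Bernoulli probability under a Beta prior:
    # rho | z ~ Beta(beta1 + sum(z), beta2 + m - sum(z)), with m = len(z).
    m, s = len(z), sum(z)
    return beta1 + s, beta2 + m - s

def beta_mean(a, b):
    # Posterior mean of a Beta(a, b) distribution.
    return a / (a + b)
```

With $m=1000$ features of which only 10 show a nonzero indicator, the posterior mean of $\rho$ is $11/1002\approx0.011$: the global probability strongly favors $z_i=0$ when $m$ is large, which is the over-sparsity effect noted above.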
\section{Comparison between factor models with interactions}\label{sec4} Here, we compare the results from the factor models proposed in Sections \ref{sec2} and \ref{sec3}. Consider the same data sets simulated for the analysis in Section D of \citet{MayrinkLucas2012}. In that case, we define $F_{ij} = \lambda_{1j} \lambda_{2j}$ as the true interaction term affecting some features in $G_E = (G_1 \cup G_2)^C$. Figure \ref {Fi01}(a) shows the surface plot representing the saddle shape of the true interaction effect. Since we use the same $\lambda$ in all simulations, this is our target interaction effect for all cases. \begin{figure} \includegraphics{607f01.eps} \caption{Panel \textup{(a)}: true interaction effect in all simulations. Panel \textup{(b)}: statistic AAD, \textup{(D.1)} in Section~\textup{D} of Mayrink and Lucas (\citeyear{MayrinkLucas2012}), and the comparison of models 1 and 2 with different choices of~$l_s$ (simulation 1).} \label{Fi01} \end{figure} \begin{figure}[b] \includegraphics{607f02.eps} \caption{3-D surface plot representing the estimated interaction effect.} \label{Fi02} \end{figure} The model with multiplicative interactions (\ref{Eq01}) can be compared with model (\ref{Eq04}) in Section \ref{sec3}. The interaction effect $\theta_{i \cdot} \eta$ corresponds to $F_{i \cdot}$. Note that $\theta_{i \cdot} = \mathbf{0}$ represents $F_{i \cdot} = \mathbf{0}$. In terms of prior specifications, initial values and MCMC configuration, consider the same choices defined in the simulated studies developed in Sections B and D of \citet{MayrinkLucas2012}. In this section, we concentrate on the comparison of surface plots to see how well the saddle shape in Figure \ref{Fi01} is estimated.\setcounter{footnote}{2}\footnote{In order to test whether gene $i$ is affected by interactions, we consider the conditional probability $p(z_i = 1|\ldots)$ related to the mixture posterior distribution of $\theta_i$ or $F_{i\cdot}$, depending on the model. 
If $p(z_i = 1|\ldots) > 0.5$, we will assume a significant interaction effect.} Figure \ref{Fi02} shows the surfaces indicating the estimated interaction effect; we can identify the saddle shape in all cases. As one might expect, the multiplicative model [panels~(c) and~(d)] produces a smoother surface than the nonlinear model [panels (a) and~(b)]. The multiplicative model has an advantage because it assumes the true saddle shape as the target effect. The parameter $l_s$ can be used to control the smoothness of the surface in the nonlinear model (current choice $l_s = 0.2$). If this value is increased, the number of neighbors influencing each point increases; the covariance matrix is then more populated. Figure \ref{Fi03} presents the surfaces related to models 1 and 2 assuming larger choices of $l_s$. As can be seen, the level of irregularity in the middle of the graph seems reduced relative to $l_s = 0.2$; this conclusion is more evident for model 1 with $l_s = 0.5$. \begin{figure} \includegraphics{607f03.eps} \caption{3-D surface plot representing the estimated interaction effect ($l_s = 0.3$ or 0.5).} \label{Fi03} \end{figure} The smooth surfaces, for $l_s = 0.5$ in Figure \ref{Fi03}, seem to be flatter and wider than the other cases. This characteristic can be interpreted as an indication of a worse approximation of the true values by the posterior estimates. The bar plots in Figure~\ref{Fi01}(b) compare the AAD statistic, (D.1) in Section D of \citet {MayrinkLucas2012}, for parameters in models 1 and 2 with different choices of $l_s$. Note that the approximation is indeed worse when $l_s = 0.5$; the largest AAD value is observed for $l_s = 0.5$ in all cases. Applications involving other data sets (simulations 2 and 3) and other models (models 3, 4 and 5) lead to the same conclusions. 
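The role of $l_s$ can be seen directly on the covariance matrix $K(\lambda)$. A minimal sketch, using simulated factor scores rather than the paper's data:

```python
import numpy as np

def squared_exponential(lam, l_s):
    """SE kernel: K_{j1,j2} = exp(-||lam_{.j1} - lam_{.j2}||^2 / (2 l_s^2)),
    where lam is an (L x n) matrix of factor scores (columns = samples)."""
    sq_dist = ((lam[:, :, None] - lam[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq_dist / (2.0 * l_s ** 2))

rng = np.random.default_rng(1)
lam = rng.standard_normal((2, 50))        # L = 2 factors, n = 50 samples
K_02 = squared_exponential(lam, 0.2)
K_05 = squared_exponential(lam, 0.5)

# With a larger length-scale the off-diagonal entries grow: each sample is
# associated with more neighbours, so the implied surface is smoother.
off = ~np.eye(50, dtype=bool)
```

Since every off-diagonal entry increases monotonically with $l_s$, the covariance matrix for $l_s = 0.5$ is strictly "more populated" than for $l_s = 0.2$, matching the smoothing behavior observed in the figures.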
\section{Real application: CNA and multiplicative interactions}\label{sec5} The number of copies of a gene in a chromosome can be modified as a consequence of problems during cell division and these alterations are known to play an important role in human cancer. We wish to examine the possibility that there are genes that are synergistically affected by copy number alteration in multiple genomic locations. In order to assess this, we will build factor models in which we seed each latent factor with a set of genes that is known to be in a single region of copy number alteration (CNA). We accomplish the seeding with the prior assumption that they have nonzero factor loadings on the factor with very high probability. We then utilize our interaction model to assess all genes for interaction effects between two copy number alteration factors. Positive results will indicate genes that are synergistically differentially expressed in the presence of multiple CNAs and may lead to insights about the mechanism of action of the CNAs. Many studies have detected CNA in breast cancer data, for example, \citet {PollackEtAl2002}, \citet{PrzybytkowskiEtAl2011} and \citet {LucasEtAl2010}. In our analyses, different regions of CNA are drawn from \citet{LucasEtAl2010}. Each region is an interval, involving a collection of genes, located in the human genome sequence. The locations suggesting CNA are known, and an annotation file identifying the chromosome position for each probe set can be obtained from the Affymetrix website. In order to identify our seed genes, we consider a range (2,000,000 to the left and right) around the central position\footnote{In \citet{LucasEtAl2010} the expression scores of 56 latent factors were assessed on both the breast cancer data set as well as breast tumor cell lines. These scores were then compared with CGH clones in the corresponding tumor and cell line samples using Pearson correlation. 
Approximately, $1/3$ of the factors show a significant degree of association with the CGH clones in small chromosomal regions in both tumor and cell line. The mentioned ``central position'' represents the central point of the chromosomal region where the indicated correlations are significant. The analyst is free to apply the factor model to evaluate interactions together with any method for identification of genome regions with CNA.} where the CNA seems to occur. We explore four different breast cancer data sets: \citet{ChinEtAl2006}, \citet{MillerEtAl2005}, \citet {SotiriouEtAl2006} and \citet{WangEtAl2005}. We investigate the results for two groups of over-expressed genes. The first one has central position 35,152,961 in chromosome 22; we denote this group as $G_1$. The second collection of genes is located around the central point 68,771,985 in chromosome 16; let $G_2$ represent this group. We will fit a factor model with $L = 2$ latent factors describing the expression pattern of the genes in $G_1$ and $G_2$. The model includes a third factor representing the multiplicative interaction between the first two. Our goal is to identify the genes affected by the interaction factor. The group $G_1$ has 50 genes, and $G_2$ contains 42 elements. As described above, the selection of these genes is based on an interval specified around a position in the genome. This strategy can lead to the inclusion of cases unrelated to the CNA detected for the studied region. In order to remove the unrelated cases from the current gene lists, we fit a two-factor model (without interaction terms) to the ($92 \times n$) matrix $X$. The following configuration is expected for the estimated $\alpha\dvtx\{\alpha_{i1}\dvtx i \in G_1\}$ with the same sign, $\{\alpha_{i2}\dvtx i \in G_2\}$ with the same sign, and $\alpha_{il} = 0$ for all other cases. The genes in $(G_1 \cup G_2)$ violating this assumption are considered problematic, and thus removed from the analysis. 
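The sign-consistency rule just described can be sketched as a simple filter. The loading values and gene names below are purely hypothetical, and the majority-sign convention is an assumption of this sketch, not necessarily the paper's exact implementation:

```python
import numpy as np

def sign_consistent(loadings):
    """Keep genes whose estimated loading on the seed factor is nonzero and
    agrees in sign with the majority of the group (hypothetical filter
    mirroring the cleaning rule described in the text)."""
    values = np.array(list(loadings.values()))
    majority = 1.0 if (values > 0).sum() >= (values < 0).sum() else -1.0
    return [g for g, a in loadings.items() if np.sign(a) == majority]

# Hypothetical estimates for G_1: g3 has the wrong sign and g5 is zero.
G1_loadings = {"g1": 0.8, "g2": 1.1, "g3": -0.4, "g4": 0.9, "g5": 0.0}
kept = sign_consistent(G1_loadings)   # g3 and g5 are flagged as problematic
```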
This cleaning procedure involving $G_1$ and $G_2$ is described in more detail in Section E of \citet{MayrinkLucas2012}. The procedure retains 22 genes in $G_1$ and 18 in $G_2$. Let $G_E$ represent a group of extra genes to be included in the analysis; $G_1$, $G_2$ and $G_E$ are disjoint sets. The microarrays selected for this application have 22,283 genes, and each breast cancer data set has more than 100 samples available for analysis. As a result, the MCMC algorithm can be rather slow to handle this large amount of data. To reduce the computational cost, we implement a gene selection procedure to eliminate the cases which might not be affected by interactions. The full description of the selection process is given in Section E of \citet{MayrinkLucas2012}. In short, we fit a two-factor model (without interaction terms) to the ($22\mbox{,}283 \times n$) matrix $X$ assuming 22 genes in $G_1$, 18 genes in $G_2$ and 22,243 genes in $G_E$. The distribution of the conditional probability $p(h_{il}=1|\cdots)$ is evaluated to accept or reject $\alpha_{il} \neq 0$. It seems reasonable to assume that the genes affected by both factors are more likely to be affected by interactions; therefore, the final result includes only the cases satisfying this requirement. This selection process yields 3704 genes in the updated $G_E$. Consider the prior specifications: $\omega_{\alpha} = \omega_{\theta} = 10$ in (\ref{Eq02}) and (\ref{Eq03}), and $\sigma^2_i \sim \mathit{IG}(2.1,\allowbreak1.1)$. Our goal is to fit the factor model with multiplicative interaction effects (using approach 1${}={}$Gaussian prior) to the real data having 22 genes in $G_1$, 18 genes in $G_2$ and 3704 genes in $G_E$. Given the large number of genes, we need to set strong priors for $q_{il}$ to impose our assumptions related to $G_1$ and $G_2$ and ensure the identification of the model. We use the configuration indicated as ``option 2'' in Table B.1. 
Degenerate priors are assumed to impose our assumptions regarding the gene--factor relationship for the cases in $G_1$ and $G_2$. This strategy is important to retain the CNA interpretation of factors 1 and 2; otherwise, the target association can be overwhelmed by the large amount of information in $G_E$. Note that we assume no interaction affecting the genes in ($G_1 \cup G_2$). The Beta($1,10$) is specified to induce sparsity in the loadings ($i \in G_E$) related to the interaction factor. Finally, the $U(0,1)$ is used for all other cases. The MCMC algorithm performs 600 iterations (burn-in period${}={}$400). In terms of initial values of the chains, consider the same choices defined in Section B of \citet{MayrinkLucas2012} for $\alpha _{il}^{(0)}$, $\lambda_{lj}^{(0)}$, $\theta_i^{(0)}$, $\eta_j^{(0)}$ and $(\sigma^2_i)^{(0)}$. The probabilities $q_{il}$ and $\rho_i$ are initialized with the values presented in Table B.1 (option 2); $h_{il}^{(0)} \sim\operatorname{Bernoulli}(q_{il}^{(0)})$ and $z_i^{(0)} \sim \operatorname{Bernoulli}(\rho_i^{(0)})$. The chains seem to converge in all applications of the MCMC algorithm. The model assuming the prior $\eta_j \sim N(\lambda_{1j} \lambda_{2j}, \nu)$ (approach 1) is the focus of the first application in the current section. As previously discussed, the variance parameter $\nu$ must be small to guarantee the target multiplicative effect. The real data set contains a large number of genes and, thus, the posterior variance is expected to be small. In this case, only extremely small values of $\nu$ will ensure that $\eta_j$ and $\lambda_{1j} \lambda_{2j}$ are correlated. Figure \ref{Fi04} shows scatter plots comparing the posterior estimates of $\eta_j$ and the product $\lambda_{1j} \lambda _{2j}$. Here, the factor model is fitted with $\nu= 10^{-5}$. Note that the model fit for the data set ``Sotiriou'' is the only one indicating correlated results. 
In the other applications, the multiplicative effect is lost and the interaction factor becomes just another factor. \begin{figure} \includegraphics{607f04.eps} \caption{Scatter plots comparing the posterior estimates of $\eta_j$ and $\lambda_{1j} \lambda_{2j}$ (approach 1${}={}$Gaussian prior). Each panel represents a different breast cancer data set.} \label{Fi04} \end{figure} Given the difficulty of setting $\nu$, no further real data analysis is developed for the factor model with approach 1. Our next step is to investigate the model defined as approach 2, where we force the perfect association $\eta_j = \lambda_{1j} \lambda_{2j}$. Consider the same breast cancer data sets, configuration of prior distributions, initial values and MCMC setup defined in the previous application. Because we impose the equality between $\eta_j$ and $\lambda_{1j} \lambda_{2j}$, the scatter plots comparing their values indicate correlation 1. Figure \ref{Fi05} shows the 95\% credible interval and the posterior mean for $\alpha_{il}$ and $\theta_i$ such that $i \in(G_1 \cup G_2)$. Note that most nonzero loadings related to the same factor indicate posterior estimates with the same sign. This fact is observed for all data sets, and it supports the CNA interpretation for factors 1 and 2. Recall that the zero estimates are imposed via the prior distribution to satisfy our assumptions for this group of genes. \begin{figure} \includegraphics{607f05.eps} \caption{Posterior mean (x mark) and the 95\% credible interval (bar) for the loadings with $i \in(G_1 \cup G_2)$ (approach 2${}={}$perfect product). Intervals are computed for the component with highest posterior probability weight. Dashed lines separate the factors.} \label{Fi05} \end{figure} Table \ref{Ta02} indicates (main diagonal) the number of genes affected by multiplicative interactions in each real data application. Note that the majority of features are free from interaction effects. 
The elements off diagonal are the number of common genes belonging to the intersection between the groups of affected genes. As can be seen, at least 14 genes can be found in the intersections involving different data sets. This result may be used as an argument against the idea that the model might be identifying interactions for a random set of genes. The intersections involving three data sets have 2--6 elements. Only 1 gene belongs to the intersection of all four data sets; its official full name is ``GTP binding protein 4,'' and it is located in chromosome~10. \begin{table}[b] \tablewidth=200pt \caption{Pairwise intersections between data sets; number of common genes affected by the multiplicative interaction} \label{Ta02} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccc@{}} \hline & \textbf{Chin} & \textbf{Miller} & \textbf{Sotiriou} & \textbf{Wang} \\ \hline Chin & 314 & \hphantom{0}30 & \hphantom{0}24 & \hphantom{0}20 \\ Miller & \hphantom{0}30 & 170 & \hphantom{0}14 & \hphantom{0}24 \\ Sotiriou & \hphantom{0}24 & \hphantom{0}14 & 244 & \hphantom{0}24 \\ Wang & \hphantom{0}20 & \hphantom{0}24 & \hphantom{0}24 & 255 \\ \hline \end{tabular*} \end{table} We apply a hypothesis test to investigate whether the configuration in Table \ref{Ta02} can be considered a result of an independent random sample of genes, from the population of 3704 cases in $G_E$, for each breast cancer data set. First, we select genes, uniformly at random, using the numbers in the main diagonal of Table \ref{Ta02} as the sample sizes. In the next step, we consider the pairwise intersections between the random selections and obtain the sum of elements in all intersections; this number $n_k$ represents the level of overlaps. We repeat this procedure 100,000 times to generate $\{ n_k\dvtx k = 1, 2, \ldots, 100\mbox{,}000\}$. Finally, we calculate the number of cases such that $n_k \geq n_{\mathrm{o}}$, where $n_{\mathrm{o}}$ is the overlap level observed in Table \ref{Ta02}. 
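The resampling scheme just described can be sketched as follows. The pool size, sample sizes and observed overlap below are hypothetical small numbers chosen for illustration, not the values of Table 2 (which uses the pool of 3704 candidate genes, the diagonal counts as sample sizes, and 100,000 replicates):

```python
import numpy as np

rng = np.random.default_rng(2)

def overlap(samples):
    """Sum of pairwise intersection sizes between the gene selections."""
    sets = [set(s) for s in samples]
    return sum(len(a & b) for i, a in enumerate(sets) for b in sets[i + 1:])

def overlap_pvalue(pool_size, sizes, observed, n_rep=5000):
    count = 0
    for _ in range(n_rep):
        # One independent uniform gene sample per data set.
        draws = [rng.choice(pool_size, size=s, replace=False) for s in sizes]
        count += overlap(draws) >= observed
    return count / n_rep

# Hypothetical configuration: 500 candidate genes, four "data sets".
p = overlap_pvalue(pool_size=500, sizes=[60, 40, 50, 55], observed=60)
```

Under independent uniform sampling the expected total overlap here is about 31, so an observed value of 60 yields a very small Monte Carlo $p$-value, mirroring the logic of the test in the text.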
This result is then divided by 100,000 to provide the $p$-value 0.00003. In conclusion, we reject the hypothesis that the genes are independently selected for each data set. \begin{figure} \includegraphics{607f06.eps} \caption{3-D surface plots representing the multiplicative interaction effect $\theta_i \eta_j$ (approach 2${}={}$perfect product). In each panel, left${}={}$the smallest negative loading, and right${}={}$the largest positive loading.} \label{Fi06} \end{figure} Figure \ref{Fi06} shows the three-dimensional surface plot representing the multiplicative effect associated with the genes with the highest interaction effects. As can be seen, this type of interaction has a saddle shape. Each point in the surface corresponds to a different sample $j$. In the x and y axes we have $\lambda_{1j}$ and $\lambda _{2j}$; the z axis represents $\theta_i \eta_j$. The loading $\theta_i$ controls how strong the interaction effect is; values close to zero define flatter surfaces. The sign of $\theta_i$ determines the orientation of the saddle. In each panel, the graph on the left is related to the smallest negative~$\theta_i$, while the graph on the right represents the largest positive $\theta_i$. \section{Real application: CNA and nonlinear interactions}\label{sec6} Consider again the CNA problem investigated in the previous section using the four breast cancer data sets: \citet{ChinEtAl2006}, \citet {MillerEtAl2005}, \citet{SotiriouEtAl2006} and \citet {WangEtAl2005}. Two latent factors are defined in our model for this type of application. In other words, $\lambda$ has two rows of factor scores, and each row describes the expression pattern across samples for the genes associated with a region where the CNA was detected. We will evaluate the model fit assuming three different pairs of chromosome locations. Table \ref{Ta03} identifies the position and chromosome number for each region. 
Denote by $G_1$ the group of genes around the first location in the pair; $G_2$ represents the collection of features around the second location. The cleaning procedure, described in Section E of \citet{MayrinkLucas2012}, is applied to remove problematic genes from $G_1$ and $G_2$. Table \ref{Ta03} indicates the number of genes before and after the removal procedure. \begin{table} \tablewidth=270pt \caption{Regions detected with CNA. We apply a procedure to remove genes unrelated to the CNA factors. The number of genes before and after this removal is presented} \label{Ta03} \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}lcccc@{}} \hline & & & \multicolumn{2}{c@{}}{\textbf{Number of genes}} \\[-4pt] & & & \multicolumn{2}{c@{}}{\hrulefill}\\ \textbf{Region} & \textbf{Chr.} & \textbf{Position} & \textbf{Before} & \textbf{After} \\ \hline 1 & 11 & 117,844,879 & 38 & 13 \\ 2 & 22 & \hphantom{0}35,152,961 & 50 & 22 \\ 3 & \hphantom{0}7 & 101,400,207 & 45 & 24 \\ 4 & 16 & \hphantom{0}68,771,985 & 42 & 18 \\ \hline \end{tabular*} \end{table} The microarrays have 22,283 genes and each data set contains at least 118 samples. In order to reduce the computational cost, consider again the gene selection procedure described in Section E of \citet {MayrinkLucas2012}. The method is based on the data set in \citet {ChinEtAl2006}, and we evaluate the pairs of regions $(1,4)$, $(2,4)$ and $(3,4)$; see Table \ref{Ta03}. The selection indicates 3717, 3704 and 3708 elements in $G_E$ for the pairs $(1,4)$, $(2,4)$ and $(3,4)$. For the purpose of comparison, this configuration of $G_E$ is used to study all data sets. Our goal is to identify features in $G_E$ affected by interactions. Model 1 in Table \ref{Ta01} is more convenient for applications with large $m$. In this case, we assume a particular Bernoulli probability for each indicator $h_{il}$ and~$z_i$, which makes these variables less dependent on other observations. 
If a large number of $h_{il}$ share the same Bernoulli probability $q_R$, the level of sparsity in $\alpha$ can be incorrectly determined. If most loadings are nonzero, $q_R$ tends to be large, which favors $h_{il} = 1$ for all $(i,l)$ related to $q_R$. Similarly, if a large number of $z_i$ share the same probability $\rho$ (models 3, 4) or $\rho_R$ (model 5), and if $F_{i \cdot} = \mathbf{0}$ for most genes, then $\rho$ or $\rho_R$ tends to be small, which favors $z_i = 0$ for all involved features. Here, the level of sparsity is too high and some interaction effects are neglected. In a real application, it seems more realistic to assume different interaction effects for different affected genes; for this reason, model 1 is preferred to model 2. Assume $\omega= 10$ in the mixture prior for $\alpha_{il}$, $\sigma^2_i \sim \mathit{IG}(2.1,1.1)$, and set $l_s = 0.2$ in (\ref{Eq05}). The specifications in Table D.1 (option 2) are defined for $q_{il}$ and $\rho_i$ to impose our assumptions regarding the gene--factor relationship and provide the identification of the model. We do not expect interaction effects related to the genes in $G_1$ and $G_2$; these groups have a strong relationship with one latent factor and no association with the other. In addition, recall that most rows of $F$ should be null vectors to ensure the identification between $\alpha \lambda$ and $F$. It is reasonable to expect few genes affected by interactions; as a result, one might choose a Beta distribution with higher probability mass below 0.5 for $\rho_i$ with $i \in G_E$. The choice $\rho_i \sim\operatorname{Beta}(1,1)$ works well in the applications of this section. In terms of initial values of the chains, let $F_{ij}^{(0)} = 0$ for all $(i,j)$, and consider the usual choices $\alpha_{il}^{(0)} = 0$, $(\sigma^2_i)^{(0)} = 1$, and $\lambda_{lj}^{(0)} \sim N(0,1)$. 
We initialize $h_{il}^{(0)} \sim\operatorname{Bernoulli}(q_{il}^{(0)})$ and $z_i^{(0)} \sim\operatorname{Bernoulli}(\rho_i^{(0)})$, where $q_{il}^{(0)}$ and $\rho_i^{(0)}$ are indicated in Table D.1 (option 2). The MCMC algorithm is set to perform 600 iterations (burn-in period${}={}$300); the chains seem to converge in all applications. The Metropolis--Hastings algorithm, used to sample from the full conditional posterior distribution of $\lambda_{\cdot j}$, has acceptance rates around 31--40\%, 15--65\%, 26--53\% and 67--84\% in the applications related to the data sets [\citet{ChinEtAl2006}, \citet{MillerEtAl2005}, \citet{SotiriouEtAl2006} and \citet{WangEtAl2005}, respectively]. \begin{figure} \includegraphics{607f07.eps} \caption{Results related to the pair of locations $(2,4)$. First four panels: posterior mean (x mark) and 95\% credible interval (bar) for $\alpha_{il}$ with $i \in(G_1 \cup G_2)$; the dashed line separates the two factors. Fifth panel: left-hand side${}={}$full matrix $F$ (3744 genes), right-hand side${}={}$cases $F_{i \cdot} \neq0$ (rows and columns are sorted so that the 1st principal components are monotone).} \label{Fi07} \end{figure} The fifth panel in Figure \ref{Fi07} shows images of the interaction effects in $F$. The image on the left represents the full matrix with 3744 rows and 118 columns; the color bar is constrained to $(-1,1)$ for higher contrast. The second heat map exhibits the cases $F_{i \cdot} \neq0$. Note that we identify 275 genes affected by nonlinear interactions involving the factors. Further, the second image suggests a coherent pattern for groups of features; several rows have a similar decreasing or increasing effect as we move across samples. This result supports the idea of $F_{i \cdot}$ as a representation of interactions; otherwise, a random pattern would be observed for most rows. Figure \ref{Fi07} also presents the posterior estimates and 95\% credible intervals for the loadings related to genes in $G_1$ and $G_2$. 
These results are computed for the component in the posterior mixture with the highest probability weight. As can be seen, most intervals in $G_l$, $l = 1$ or 2, suggest loadings with the same sign. This result supports the association between factors 1--2 and the CNA detected for $G_1$ and $G_2$. In other words, the estimated interactions seem to be a result of the CNA in regions 2 and 4. \begin{figure} \includegraphics{607f08.eps} \caption{3-D surface plot of the estimated interaction effect $F_{1524 \cdot}$ \textup{(a)} and $F_{1945 \cdot}$ \textup{(c)}. Panels \textup{(b)} and \textup{(d)} contain the posterior mean (x mark) and the 95\% credible interval (bar). This result is related to the data set Chin et~al. (\citeyear{ChinEtAl2006}) and the pair of locations~$(2,4)$.} \label{Fi08} \end{figure} Figure \ref{Fi08} shows, in panels (a) and (c), the three-dimensional surface plot representing the shape of the estimated interaction effect for two genes. The x and y axes contain the estimated $\lambda_{1j}$ and $\lambda_{2j}$; therefore, each point in the x--y plane is related to a sample (microarray). These shapes are different, suggesting distinct interaction effects for those genes. Panels (b) and (d) present the posterior mean used in the z axis of the graph and the corresponding 95\% credible interval indicating our posterior uncertainty about the estimated surface. Table \ref{Ta04} compares the lists of affected genes related to different breast cancer data sets. The table is divided into three sections representing the pairs of regions with CNA. The main diagonal in each section indicates the number of affected genes. Note that all intersections are nonempty sets, that is, different data sets indicate the same group of genes as affected by interactions. 
Given the large number of genes in $G_E$ and the relatively small list of affected cases determined in each application, the identification of elements in the intersections is an important result suggesting a plausible model. Most intersections involving three data sets have 1 or 2 elements for any pair of regions. \begin{table} \tablewidth=259pt \caption{Intersections between data sets; common genes affected by interactions} \label{Ta04} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccc@{}} \hline & \textbf{Chin} & \textbf{Miller} & \textbf{Sotiriou} & \textbf{Wang} \\ \hline \multicolumn{5}{@{}c@{}}{Pair $(1,4)$}\\[4pt] Chin & 139 & \hphantom{00}6 & \hphantom{00}8 & \hphantom{00}9 \\ Miller & \hphantom{00}6 & \hphantom{0}81 & \hphantom{00}6 & \hphantom{00}3 \\ Sotiriou & \hphantom{00}8 & \hphantom{00}6 & 121 & \hphantom{00}1 \\ Wang & \hphantom{00}9 & \hphantom{00}3 & \hphantom{00}1 & \hphantom{0}46 \\ [4pt] \multicolumn{5}{@{}c@{}}{Pair $(2,4)$}\\[4pt] Chin & 275 & \hphantom{0}14 & \hphantom{0}13 & \hphantom{0}19 \\ Miller & \hphantom{0}14 & 111 & \hphantom{00}7 & \hphantom{00}7 \\ Sotiriou & \hphantom{0}13 & \hphantom{00}7 & 143 & \hphantom{00}8 \\ Wang & \hphantom{0}19 & \hphantom{00}7 & \hphantom{00}8 & 111 \\ [4pt] \multicolumn{5}{@{}c@{}}{Pair $(3,4)$}\\[4pt] Chin & 235 & \hphantom{0}10 & \hphantom{0}11 & \hphantom{00}7 \\ Miller & \hphantom{0}10 & \hphantom{0}91 & \hphantom{00}4 & \hphantom{00}9 \\ Sotiriou & \hphantom{0}11 & \hphantom{00}4 & 115 & \hphantom{00}2 \\ Wang & \hphantom{00}7 & \hphantom{00}9 & \hphantom{00}2 & \hphantom{0}75 \\ \hline \end{tabular*} \end{table} We evaluate the results of Table \ref{Ta04} to test the hypothesis of independent random samples of genes for each data set. This same test was used in Section \ref{sec5} to examine Table \ref{Ta02}. The configuration of Table \ref{Ta04} provides the $p$-values: 0.00002 for the pair $(1,4)$, 0.00001 for $(2,4)$ and 0.00044 for $(3,4)$. 
Assuming a significance level of 0.05, we reject the indicated null hypothesis. In our final comparison analysis, both frameworks, approach 1 (Section \ref{sec2}) and model 1 (Section \ref{sec3}), have been used to fit the data sets [\citet{ChinEtAl2006}, \citet{MillerEtAl2005}, \citet{SotiriouEtAl2006} and \citet{WangEtAl2005}]; consider the pair of regions $(2,4)$ in Table \ref{Ta03}. Each model provides a list of genes affected by interactions; we have found 22 (Chin), 7 (Miller), 13 (Sotiriou) and 7 (Wang) genes in the intersection of the lists generated for the same data set. This type of result reinforces the idea that the proposed models can be valid for studying interactions. \section{Conclusions}\label{sec7} In an ordinary factor analysis, the involvement of any feature with the factors is always additive. Biological pathways establishing complex structures of dependencies between genes motivate the idea of a multi-factor model with interaction terms. We study the expression pattern across samples using Affymetrix GeneChip\regtm\ microarrays. The matrix $X$ contains the preprocessed data (RMA outputs) with rows representing genes and columns representing microarrays. Each column is a different individual, but all samples are related to the same type of cancer cell. We formulate the factor models with spike and slab prior distributions to allow for sparsity and then test whether the effect of factors/interactions on the features is significant or not. Simulated studies have been developed to verify the performance of the proposed models; the posterior estimates approximate the true values well. In Section \ref{sec2} we have proposed a model with pairwise multiplicative interactions, but any function defining a relationship between a pair of factors can be used. Two approaches were considered to introduce the interaction effect: (1) the product is inserted as the mean of a Gaussian prior, (2) we assume the perfect product between factors in a deterministic setup. 
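The contrast between the two approaches can be seen in a small simulation; the values of $\nu$, the sample size and the standard normal factor scores below are illustrative assumptions, not the settings of the applications:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
lam1, lam2 = rng.standard_normal(n), rng.standard_normal(n)
product = lam1 * lam2

# Approach 1: the product enters as the mean of a Gaussian prior; only a
# very small variance nu keeps eta_j tied to lambda_{1j} * lambda_{2j}.
eta_loose = product + np.sqrt(1.0) * rng.standard_normal(n)     # nu = 1
eta_tight = product + np.sqrt(1e-5) * rng.standard_normal(n)    # nu = 1e-5

# Approach 2: the perfect, deterministic product.
eta_exact = product.copy()

r_loose = np.corrcoef(eta_loose, product)[0, 1]
r_tight = np.corrcoef(eta_tight, product)[0, 1]
```

With $\nu = 1$ the correlation between $\eta_j$ and the product degrades substantially, whereas $\nu = 10^{-5}$ keeps it essentially at 1, which is the behavior seen in the scatter plots of Section 5.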
In the real data application we have studied four breast cancer data sets. Two factors were defined in the model, and each one is directly associated with the genes located in a particular region (detected with CNA) of the human genome. The main aim was to identify other genes affected by the product interaction of the two factors. A selection process was implemented to choose the most interesting genes for this study; nevertheless, the matrix $X$ represents a large number of features. In this case, approach 1 requires a Gaussian prior with extremely small variance to ensure the multiplicative effect. On the other hand, approach 2 does not suffer from the same problem given its deterministic formulation. Depending on the data set, we have observed 170--314 genes affected by interactions, and the pairwise intersections of these groups have at least 14 elements. In Section \ref{sec3} we have developed a multi-factor model with a nonlinear structure of interactions; this version is more general. The nonlinearities involving the latent factors were introduced through the Squared Exponential kernel, which defines the covariance matrix in the Gaussian component of a mixture prior specified for the parameter representing interaction effects. One version of this prior assumes that the effect can differ across affected genes; the less realistic assumption of the same effect for any pair of affected features was also studied. In addition, different prior formulations were considered for the probability parameters in the mixture priors specified for the interaction effects and for the factor loadings. As a result, five versions of the model were defined for investigation. Assumptions related to the intended type of application were used to choose the priors and induce a specific configuration in the matrices of factor loadings and interaction effects, which provides the identification of the model. 
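A prior draw from this sparse nonlinear specification, $F_{i\cdot} = \mathbf{0}$ with probability $1-\rho_i$ and $F_{i\cdot} \sim N_n[\mathbf{0}, K(\lambda)]$ otherwise, can be sketched as follows; the dimensions, $\rho$, and the diagonal jitter are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def se_kernel(lam, l_s=0.2, jitter=1e-8):
    sq_dist = ((lam[:, :, None] - lam[:, None, :]) ** 2).sum(axis=0)
    # A small jitter keeps the matrix numerically positive definite.
    return np.exp(-sq_dist / (2.0 * l_s ** 2)) + jitter * np.eye(lam.shape[1])

def draw_F(m, lam, rho=0.1):
    """Spike-and-slab prior draw: F_i. = 0 with probability 1 - rho,
    otherwise a Gaussian Process row with covariance K(lambda)."""
    n = lam.shape[1]
    K = se_kernel(lam)
    z = rng.uniform(size=m) < rho          # interaction indicators z_i
    F = np.zeros((m, n))
    F[z] = rng.multivariate_normal(np.zeros(n), K, size=int(z.sum()))
    return F, z

lam = rng.standard_normal((2, 30))         # L = 2 factors, n = 30 samples
F, z = draw_F(m=100, lam=lam)
```

The rows with $z_i = 0$ are exactly null vectors, which is the configuration that identifies $\alpha\lambda$ against $F$; the nonzero rows vary smoothly across samples because they share the kernel $K(\lambda)$.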
In the real data application, we have revisited the two-factor analysis based on regions with CNA. Four breast cancer data sets were explored, and interactions can be identified in all evaluations. The intersections of results from the four data sets are nonempty, which suggests a plausible model. The use of a different covariance function can be an alternative to better combine smoothness and good posterior estimation. Of particular interest in this regard is the Matern class of covariance functions $K(r) = [2^{1-\upsilon}/\Gamma(\upsilon)] (r\sqrt{2\upsilon}/l_s)^{\upsilon} \*K_{\upsilon}(r\sqrt{2\upsilon}/l_s)$ with positive parameters $\upsilon$ and $l_s$, where $K_\upsilon$ is a modified Bessel function [see \citet{AbramowitzStegun1965}, Section 9.6] and $r$ is the Euclidean distance between points. The parameter $\upsilon$ is, in fact, a smoothness parameter. The Squared Exponential covariance function $\exp\{ -r^2/(2 l_s^2) \}$ is obtained for $\upsilon= \infty$ [see \citet{RasmussenWilliams2006}, page 204]. The process is $k$ times mean square differentiable if and only if $\upsilon> k$. In summary, we currently control the range of influence between points using the parameter $l_s$. In order to improve smoothness and retain a good posterior approximation, one could try to balance the choices of $l_s$ and $\upsilon< \infty$. In Section \ref{sec3} we have studied two mixture priors for $F_{i \cdot}$ specifying extreme cases, that is, the effects are all different or all the same. It would be reasonable to consider the intermediate situation, where we identify groups of genes such that the nonlinear interaction is the same within each group but differs between groups. In order to implement this assumption, we can use the clustering properties of the Dirichlet Process (DP) [Ferguson (\citeyear{Ferguson1973,Ferguson1974})]. 
The following result is implied by the Polya urn scheme in \citet {BlackwellMacQueen1973}, and it leads to the so-called ``Chinese Restaurant Process'' [see \citet{Aldous1985}, page~92]:\vspace*{1pt} $(\psi_i |\psi_1,\ldots,\psi_{i-1}) \sim[\zeta/(\zeta+i-1)] P_0 + \sum _{j=1}^{i-1} [1/(\zeta+i-1)] \delta_{\psi_j}$, where $\zeta$ is the concentration parameter and $P_0$ is the base distribution in the DP. This implies that the $i$th feature is drawn from a new cluster with probability proportional to $\zeta$ or is allocated to an existing cluster with probability proportional to the number of features in that cluster. As a result, we can consider the prior $(F_{i \cdot}' \mid\lambda) \sim(1-\rho_i) \delta_0(F_{i \cdot}) + \rho_i DP(\zeta, P_0)$ with $P_0 = N_n[\mathbf{0},K(\lambda)]$, where $K(\lambda)$ is the covariance matrix depending on $\lambda$. \section*{Acknowledgments} The authors would like to thank Mike West, Sayan Mukherjee and the anonymous referees for constructive comments. \begin{supplement \stitle{Sparse latent factor models with interactions: Posterior computation, simulated studies and gene selection procedure\\} \slink[doi]{10.1214/12-AOAS607SUPP} \sdatatype{.pdf} \sfilename{aoas607\_supp.pdf} \sdescription{Additional material containing the following: formulations of the complete conditional posterior distributions for parameters in the proposed models, simulated studies to evaluate the performance of the models, and the description of the procedure used to select genes for the real applications.} \end{supplement}
\section{Introduction} Anomaly-free matter coupled supergravities in six dimensions naturally arise in $K3$ compactification of Type I and heterotic string theories \cite{Green:1984bx}. Owing to the fact that $K3$ has no isometries, all of the resulting $6D$ models are ungauged in the sense that the $R$-symmetry group $Sp(1)_R$, or its $U(1)_R$ subgroup, is only a global symmetry. The $R$-symmetry gauged general matter coupled models, on the other hand, have been constructed directly in six dimensions long ago \cite{Nishino:1986dc,Nishino:1997ff}. These theories harbor gravitational, gauge and mixed anomalies which can be encoded in an $8$-form anomaly polynomial, and the Green-Schwarz anomaly cancellation mechanism requires its factorization. It turns out that the $R$-symmetry gauging reduces drastically the space of solutions to this requirement. At present, the only known ``naturally'' anomaly-free gauged supergravities in $6D$ are: \begin{itemize} \item the $E_7\times E_6 \times U(1)_R$ invariant model in which the hyperfermions are in the $(912,1,1)$ representation of the gauge group \cite{Randjbar-Daemi:1985wc}, \item the $E_7\times G_2\times U(1)_R$ invariant model with hyperfermions in the $(56,14,1)$ representation of the gauge group \cite{Avramis:2005qt}, and \item the $F_4\times Sp(9)\times U(1)_R$ invariant model with hyperfermions in the $(52,18,1)$ representation of the gauge group \cite{Avramis:2005hc}. \end{itemize} The anomaly freedom of these models is highly nontrivial, and they are natural in the sense that they do not contain any gauge-singlet hyperfermions. If one considers a large number of $U(1)$ factors and tunes their charges in a rather ad-hoc way \cite{Avramis:2005hc}, or considers only products of $SU(2)$ and $U(1)$ factors with a large number of hyperfermions and again tunes their $U(1)$ charges in an ad-hoc way, infinitely many possible anomaly-free combinations arise \cite{Suzuki:2005vu}.
These models appear to be ``unnatural'' at this time. In fact, none of the above mentioned models, natural or not, have any known string/M-theory origin so far, though progress has been made in embedding \cite{Cvetic:2003xr} a minimal sub-sector with $U(1)_R$ symmetry and no hyperfermions \cite{Salam:1984cj} in string/M theory. An apparently inconclusive effort has also been made in \cite{Avramis:2004cn} in which the $6D$ theory is considered to live on the boundary of a $7D$ theory, which, in turn, is to be obtained from string/M-theory. Finding the string/M-theory origin of the anomaly free models mentioned above is likely to uncover some interesting mechanisms for descending to lower dimensions starting from string/M-theory. Moreover, models of this type have increasingly been finding remarkable applications in cosmology and braneworld scenarios \cite{Halliwell:1986bs,Maeda:1985es,Aghababaie:2003wz,Gibbons:2003di,Nair:2004yu,Carter:2006uk}. In this paper, we will not address the string/M-theory origin of the $6D$ theories at hand but rather investigate the general form of their supersymmetric solutions, and present, in particular, a dyonic string solution in which the hyperscalar fields have been activated. Our aims are: \begin{itemize} \item to lay out the framework for finding further solutions which, in turn, may lead to new solutions in other theories of interest that live in diverse dimensions, \item to establish the fact that (dyonic) string solutions exist in a more general setting than has been known so far, in the sense that a new type of field, to wit, hyperscalars, has been activated, and \item to open new avenues in the compactification schemes in which the sigma model sector of supergravity theories is exploited. \end{itemize} These aims call for a modest summary of what has been done in these areas so far.
To begin with, the general form of supersymmetric solutions in $6D$ has been studied in \cite{Gutowski:2003rg,Cariglia:2004kk}, though in the absence of hypermultiplets. We will fill this gap here. We will extend the analysis for the existence of Killing spinors, determine the resulting integrability conditions and the necessary and sufficient equations for finding exact solutions, without having to directly solve all the field equations. Second, various dyonic string solutions of $6D$ supergravities exist in the literature \cite{Duff:1995yh,Duff:1996cf,Guven:2003uw,Randjbar-Daemi:2004qr}, though again, none of them employ hypermatter. We will find some novel features here such as the necessity to switch on the magnetic charge of the dyonic string. Third, concerning the use of (higher than one dimensional) sigma model sectors of supergravity theories in finding exact solutions, in the case of ungauged supergravities the oldest result is due to Gell-Mann-Zwiebach \cite{Gell-Mann:1984mu} who found the half-supersymmetry breaking tear-drop solution of Type IIB supergravity, by exploiting its $SU(1,1)/U(1)$ sigma model sector. The tear-drop represents the two-dimensional internal space which is non-compact with finite volume. The sigma model sector of Type IIB supergravity has also been utilized in finding an instanton solution dual to a $7$-brane \cite{Gibbons:1995vg}. Supersymmetric two dimensional tear-drop solutions in ungauged $D<10$ supergravities are also known \cite{Gell-Mann:1984mu,Izquierdo:1994jz,Gibbons:2003di,Nair:2004yu,Kehagias:2005dp}. More recently, the general form of the supersymmetric solutions in ungauged $4D$ supergravities, including their coupling to hypermatter, has been provided in \cite{Hubscher:2006mr}. In the case of gauged supergravities, a solution of the matter coupled $N=(1,0)$ gauged supergravity in $6D$ called ``the superswirl'' has been found in \cite{Parameswaran:2005mm} where two hyperscalars are activated.
One of these scalars is dilatonic and the other one is axionic. Supersymmetric domain-wall solutions of maximal gauged supergravities in diverse dimensions where only the dilatonic scalars of the sigma model are activated have appeared in \cite{Bergshoeff:2004nq}. Supersymmetric black string solutions of matter coupled $N=2, D=3$ gauged supergravity exist in which only a single dilaton is activated in the Kahler sigma model sector \cite{Deger:1999st}. In such models, supersymmetric solutions with the additional axionic scalars activated have also been found \cite{Abou-Zeid:2001tu,Deger:2004mw,Deger:2006uc}. Finally, the conditions for Killing spinors and the general form of the supersymmetric solutions in matter coupled $N=2, D=5$ gauged supergravities have also been investigated \cite{Cacciatori:2002qx}, but no specific solutions with multi-hyperscalars activated seem to have appeared. To summarize, we see that there exist only a few scattered results on the nontrivial use of gauged sigma models in supergravity theories in finding exact supersymmetric solutions. As stated earlier, one of our goals in this paper is to take a step towards a systematic approach to this problem. We shall come back to this point in the Conclusions. Turning to the tear-drop solutions, a key feature in these backgrounds is the identity map by which the scalars of the sigma model manifold are identified with those of the internal part of the spacetime. The brief summary of literature above only dealt with solutions that have supersymmetry. The idea of the identity map, on the other hand, was first proposed by Omero and Percacci \cite{Omero:1980vx} long ago in the context of bosonic sigma models coupled to gravity. This work was generalized later in \cite{Ianus:1987xa}.
Several more papers may well exist that deal with the solutions of sigma model coupled ordinary gravities, as opposed to supergravities, but we shall not attempt to survey them since our emphasis is on gauged supergravities with sigma model sectors in this paper. After the description of the matter coupled $6D$ supergravity in the next section, the conditions for the existence of Killing spinors and their integrability conditions will be presented in Sections 3 and 4, respectively. The new dyonic string solution and its properties are then described in Sections 5 and 6, respectively. A summary of our results emphasizing the key points, together with selected open problems, is given in the Conclusions. Three appendices that contain our conventions and useful formulae are also presented. \section{The Model} \subsection{ Field Content and the Quaternionic Kahler Scalar Manifold} \bigskip The six-dimensional gauged supergravity model we shall study involves the combined $N=(1,0)$ supergravity plus anti-selfdual supermultiplet $(g_{\mu\nu}, B_{\mu\nu}, \varphi , \psi_{\mu +}^A, \chi_{-}^A)$, Yang-Mills multiplet $(A_\mu, \lambda_{+}^A)$ and hypermultiplet $(\phi^\alpha, \psi_{-}^a)$. All the spinors are symplectic Majorana-Weyl, $A=1,2$ labels the doublet of the $R$-symmetry group $Sp(1)_R$ and $a=1,...,2n_H$ labels the fundamental representation of $Sp(n_H)$. The chiralities of the fermions are denoted by $\pm$. The hyperscalars $\phi^\alpha,\ \alpha=1,..., 4n_H$ parameterize the coset $Sp(n_H,1)/Sp(n_H)\otimes Sp(1)_R$. This choice is due to its notational simplicity. Our formulae can straightforwardly be adapted to more general quaternionic coset spaces $G/H$, whose list can be found, for example, in \cite{Bagger:1983tt}.
In this paper, we gauge the group \begin{equation} K\times Sp(1)_R \subset Sp(n_H,1)\ ,\quad\quad K\subseteq Sp(n_H)\ .\nonumber\end{equation} The group $K$ is taken to be semi-simple, and the $Sp(1)_R$ part of the gauge group can easily be replaced by its $U(1)_R$ subgroup. We proceed by defining the basic building blocks of the model constructed in \cite{Nishino:1986dc} in an alternative notation. The vielbein $V_\alpha^{aA}$, the $Sp(n_H)$ composite connection $Q_\alpha^{ab}$ and the $Sp(1)_R$ composite connection $Q_\alpha^{AB}$ on the coset are defined via the Maurer-Cartan form as \begin{equation} L^{-1} \partial_\alpha L= V_\alpha^{aA}T_{aA} + \ft12\, Q_\alpha^{ab}T_{ab}+ \ft12\,Q_\alpha^{AB}T_{AB}\ , \la{mc1} \end{equation} where $L$ is the coset representative, $(T_{ab},T_{AB}, iT_{aA})\equiv T_{{\widehat A}{\widehat B}}\ $ obey the $Sp(n_H,1)$ algebra \begin{eqnarray} &&[T_{{\widehat A}{\widehat B}},T_{{\widehat C}{\widehat D}}]= -\Omega_{{\widehat B}{\widehat C}} T_{{\widehat A}{\widehat D}} -\Omega_{{\widehat A}{\widehat C}} T_{{\widehat B}{\widehat D}} -\Omega_{{\widehat B}{\widehat D}} T_{{\widehat A}{\widehat C}} -\Omega_{{\widehat A}{\widehat D}} T_{{\widehat B}{\widehat C}} \ ,\nonumber\w3 &&\Omega_{{\widehat A}{\widehat B}}= \left( \begin{array}{cc} \epsilon_{AB} & 0 \\ 0 & \Omega_{ab} \\ \end{array} \right) \la{alg} \, .\end{eqnarray} The generator $T_{aA}$ is hermitian and $(T_{AB}, T_{ab})$ are anti-hermitian. The vielbeins obey the following relations: \begin{equation} g_{\alpha\beta}V^\alpha_{a A} V^\beta_{bB}=\Omega_{ab}\epsilon_{AB}\ ,\quad\quad V^\alpha_{aA} V^{\beta aB} +\alpha \leftrightarrow \beta = g^{\alpha\beta} \delta_A^B\ , \label{vg}\end{equation} where $g_{\alpha\beta}$ is the metric on the coset.
Another useful definition is that of the three quaternionic Kahler structures given by \begin{equation} V^A_{\alpha a} V_\beta^{aB} - A\leftrightarrow B = 2J^{AB}_{\alpha\beta} \ . \la{j}\end{equation} Next, we define the components of the gauged Maurer-Cartan form as \begin{equation} L^{-1} D_\mu L = P_\mu^{aA}T_{aA} + \ft12\, Q_\mu^{ab}T_{ab}+ \ft12\,Q_\mu^{AB}T_{AB} \ ,\la{mc2} \end{equation} where \begin{equation} D_\mu L=\left( \partial_\mu - A_\mu^I T^I \right) L\ ,\la{mc3}\end{equation} $A_\mu^I$ are the gauge fields of $K \times Sp(1)_R$. All gauge coupling constants are set equal to unity for simplicity in notation. They can straightforwardly be re-instated. We also use the notation \begin{equation} T^I= (T^{I'}, T^r)\ ,\qquad T_r= 2 T_r^{AB}\,T_{AB}\ ,\ \qquad T^r_{AB}= -\ft{i}2\,\sigma^r_{AB}\ , \quad r=1,2,3\ .\end{equation} The components of the Maurer-Cartan form can be expressed in terms of the covariant derivative of the scalar fields as follows \cite{Percacci:1998ag} \begin{equation} P_\mu^{aA}= (D_\mu\phi^\alpha ) V_\alpha^{aA} \ ,\quad\quad Q_\mu^{ab}=(D_\mu\phi^\alpha ) Q_\alpha^{ab}-A_\mu^{ab}\ ,\quad\quad Q_\mu^{AB}=(D_\mu\phi^\alpha) Q_\alpha^{AB}-A_\mu^{AB}\ ,\la{df}\end{equation} where \begin{equation} D_\mu \phi^\alpha = \partial_\mu \phi^\alpha -A_\mu^I K^{I\alpha}\ ,\la{df2}\end{equation} and $K^I(\phi)$ are the Killing vectors that generate the $K\times Sp(1)_R$ transformations on $G/H$. Other building blocks to define the model are certain $C$-functions on the coset. These were defined in \cite{Nishino:1997ff}, and studied further in \cite{Percacci:1998ag} where it was shown that they can be expressed as \begin{equation} L^{-1} T^I L \equiv C^I = C^{IaA}T_{aA}+\ft12 C^{IAB}T_{AB}+\ft12C^{Iab}T_{ab}\, . 
\end{equation} Differentiating and using the algebra \eq{alg} gives the useful relation \begin{equation} D_\mu C^I = \left(P_\mu^a{}_B C^{IAB}+P_{\mu b}{}^AC^{Iab}\right)\,T_{aA} + P_\mu^{aA}C^I_a{}^B\, T_{AB} +P_\mu^{aA}C^{Ib}{}_A\,T_{ab}\ . \la{dc}\end{equation} Moreover, using \eq{mc2} and \eq{df} we learn that \begin{equation} K^{I\alpha}V_\alpha^{aA} = C^{IaA}\ ,\qquad K^{I\alpha}Q_\alpha^{ab}= C^{Iab}-\delta^{II'}T_{I'}^{ab}\ , \qquad K^{I\alpha} Q_\alpha^{AB}= C^{IAB}-\delta^{Ir}\,T_r^{AB}\ .\la{kv}\\ \end{equation} Finally, it is straightforward and useful to derive the identities \begin{eqnarray} D_{[\mu} P_{\nu]}^{aA} &=& -\ft12\, F_{\mu\nu}^I C^{IaA}\ ,\la{id1}\w2 P_{[\mu}^{aA} P_{\nu]}^b{}_A &=& \ft12\, Q_{\mu\nu}^{ab} +\ft12 F_{\mu\nu}^I C^{Iab}\ ,\la{id2}\w2 P_{[\mu}^{aA} P_{\nu] a}{}^B &=& \ft12\,Q_{\mu\nu}^{AB} +\ft12 F_{\mu\nu}^I C^{IAB}\ . \la{id3}\end{eqnarray} \subsection{ Field Equations and Supersymmetry Transformation Rules} \bigskip The Lagrangian for the anomaly free model we are studying can be obtained from \cite{Nishino:1986dc} or \cite{Nishino:1997ff}. We shall use the latter in the absence of Lorentz Chern-Simons terms and Green-Schwarz anomaly counterterms. 
Thus, the bosonic sector of the Lagrangian is given by \cite{Nishino:1997ff} \begin{equation} e^{-1}{\cal L} =R\,- \ft14 (\partial\varphi)^2- \ft1{12} e^\varphi\, G_{\mu\nu\rho}G^{\mu\nu\rho}- \ft14\,e^{\ft12\varphi}\, F^I_{\mu\nu}\,F^{I\mu\nu}\, -2P_\mu^{aA}\,P^\mu_{aA}- 4 \, e^{-\ft12\varphi}\,C^I_{AB} C^{IAB}\ ,\end{equation} where the Yang-Mills field strength is defined by $F^I=dA^I +\ft12 f^{IJK} A^J\wedge A^K$ and $G$ obeys the Bianchi identity \begin{equation} dG = \ft12 F^I\wedge F^I\ .\nonumber \label{bianchi}\end{equation} The bosonic field equations following from the above Lagrangian are \cite{Nishino:1997ff} \begin{eqnarray} R_{\mu\nu} &=& \ft14 \partial_\mu\varphi\, \partial_\nu\varphi + \ft12 e^{\ft12\varphi}\, (F^2_{\mu\nu} - \ft18 F^2\,g_{\mu\nu}) + \ft14 e^\varphi\, (G^2_{\mu\nu} - \ft16 G^2\, g_{\mu\nu}) \nonumber\w2 && -2 P_\mu^{aA} P_{\nu aA} + e^{-\ft12\varphi}(C^I_{AB}C^{IAB}) \ g_{\mu\nu}\ , \nonumber\w2 \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}\, \varphi&=& \ft14 e^{\ft12\varphi}\,F^2 + \ft16 e^\varphi\, G^2 -4\, e^{-\ft12\varphi}\,C^I_{AB} C^{IAB} \nonumber\w2 D_\rho\big(e^{\ft12\varphi}\,F^{I\rho}{}_{\mu}\big) &=& \ft12e^\varphi\, F^{I\rho\sigma}G_{\rho\sigma\mu} +4 P^{aA}_\mu C^I_{aA}\ ,\nonumber\w2 \nabla_\rho\left(e^\varphi\, G^{\rho}{}_{\mu\nu}\right) &=& 0\ ,\nonumber\w2 D_\mu P^{\mu aA} &=& 4 e^{-\ft12\varphi} C^{IAB} C^{Ia}{}_{B}\ , \la{e7} \end{eqnarray} where we have used the notation $V^2_{\mu\nu}=V_{\mu\lambda_2...\lambda_p} V_\nu{}^{\lambda_2...\lambda_p}$ and $V^2=g^{\mu\nu}V_{\mu\nu}$
for a $p$-form $V$, and $F^2=F_{\mu\nu}^I F^{\mu\nu I}$. The local supersymmetry transformations of the fermions, up to cubic fermion terms that will not affect our results for the Killing spinors, are given by \cite{Nishino:1997ff} \begin{eqnarray} \delta \psi_\mu &=& D_\mu \varepsilon + \ft1{48} e^{\ft12\varphi} G_{\nu\sigma\rho}^+\,\Gamma^{\nu\sigma\rho}\, \Gamma_\mu\,\varepsilon \ ,\la{s1}\w2 \delta\chi &=&\ft14\left( \Gamma^\mu\partial_\mu \varphi -\ft16 e^{\ft12\varphi} G_{\mu\nu\rho}^-\,\Gamma^{\mu\nu\rho} \right)\varepsilon\ , \la{s2}\w2 \delta \lambda_A^I &=& -\ft18 F_{\mu\nu}^I\Gamma^{\mu\nu}\varepsilon_A -e^{-\ft12\varphi} C^I_{AB}~\varepsilon^B \ , \la{s3}\w2 \delta\psi^a &=& P_\mu^{a A} \Gamma^\mu \varepsilon_A \ , \la{s4} \end{eqnarray} where $D_\mu\varepsilon_A = \nabla_\mu\varepsilon_A + Q_{\mu A}{}^B \varepsilon_B$, with $\nabla_\mu$ containing the standard torsion-free Lorentz connection only. The transformation rules for the gauge fermions differ from those in \cite{Nishino:1986dc} by a field redefinition. \section{ Killing Spinor Conditions} The Killing spinor in the present context is defined to be the spinor of the supersymmetry transformations which satisfies the vanishing of the supersymmetric variations of all the spinors in the model. The well known advantage of seeking such spinors is that the necessary and sufficient conditions for their existence are first order equations which are much easier to solve than the second order field equations, and moreover, once they are solved, the integrability conditions for their existence can be shown to imply most of the field equations automatically. In deriving the necessary and sufficient conditions for the existence of Killing spinors, it is convenient to begin with the construction of the nonvanishing fermionic bilinears, which provide a convenient tool for analyzing these conditions.
In this section, firstly the construction and analysis of the fermionic bilinears are given, and then all the necessary and sufficient conditions for the existence of Killing spinor are derived. \subsection{Fermionic Bilinears and Their Algebraic Properties} There are only two nonvanishing fermionic bilinears that can be constructed from {\it commuting} symplectic-Majorana spinor $\epsilon^A$. These are: \begin{eqnarray} {\bar\epsilon}^A\Gamma_\mu\epsilon^B &\equiv& V_\mu^{AB}\ ,\nonumber} \def\begin{document}{\begin{document}} \def\end{document}{\end{document}\\ {\bar\epsilon}^A\Gamma_{\mu\nu\rho}\epsilon^B &\equiv& X^r_{\mu\nu\rho} T_r^{AB}\ . \end{eqnarray} Note that $X^r$ is a self-dual three-form due to chirality properties. From the Fierz identity ${\Gamma_\mu}_{(\alpha\beta}\Gamma^\mu_{\gamma)\delta}=0$, it follows that \begin{eqnarray} V^\mu V_\mu=0\ ,\qquad i_V X^r =0\ . \end{eqnarray} Introducing the orthonormal basis \begin{equation} ds^2=2e^+e^- + e^ie^i\ , \end{equation} and identifying \begin{equation} e^+=V\ , \end{equation} the equation $i_V X^r=0$ and self-duality of $X^r$ yield \begin{equation} X^r= 2 V\wedge I^r\ , \la{xv}\end{equation} where \begin{equation} I^r=\ft12 I^r_{ij}\,e^i\wedge e^j\ \end{equation} is anti-self dual in the 4-dimensional metric $ds_4^2=e^ie^i$. Straightforward manipulations involving Fierz identities imply that $I^r$ are quaternionic structures obeying the defining relation \begin{equation} (I^r)^i{}_k\,(I^s)^k{}_j = \epsilon^{rst} (I^t)^i{}_j - \delta^{rs}\delta^i_j\ .\label{hks} \end{equation} Finally, using the Fierz identity ${\Gamma_\mu}_{(\alpha\beta}\Gamma^\mu_{\gamma)\delta}=0$ once more, one finds that \begin{equation} V_\mu\Gamma^\mu \epsilon=\Gamma^+\epsilon=0\ .\la{cplus} \label{susy1}\end{equation} If there exists more than one linearly independent Killing spinor, one can construct as many linearly independent null vectors. 
In this case \eq{cplus} is obeyed by each Killing spinor and the corresponding null vector, i.e. $V_\mu^1\Gamma^\mu \epsilon_1=0,\ V_\mu^2\Gamma^\mu \epsilon_2=0$, but it may be that $V_\mu^1\Gamma^\mu \epsilon_2 \ne 0$ and/or $V_\mu^2\Gamma^\mu \epsilon_1\ne 0$. In that case, \eq{cplus} should be relaxed since $\epsilon$ should be considered as a linear combination of $\epsilon_1$ and $\epsilon_2$. \subsection{Conditions From $\delta\lambda^I=0$} Multiplying \eq{s3} with ${\bar\epsilon}^B \Gamma^\rho$, we obtain \begin{eqnarray} i_V F^I &=&0\ ,\la{f1}\\ F^{Iij} I^{r}_{ij}&=&4 e^{-\ft12\varphi}\,C^{Ir}\ .\la{f2} \end{eqnarray} The second has been simplified by making use of \eq{f1} and \eq{xv}. Multiplying \eq{s3} with ${\bar\epsilon}^B\Gamma_{\lambda\tau\rho}$, on the other hand, gives \begin{eqnarray} && F^I\wedge V + \star (F^I\wedge V)+2 e^{\ft12\varphi}\,C^{Ir} X^r=0\ , \la{f3}\\ && \ft34 F^{I\sigma}{}_{[\mu}X^{r}_{\nu\rho]\sigma}+\ft12\epsilon^{rst} e^{-\ft12 \varphi}C^{Is}X^{t}_{\mu\nu\rho}=0\ .\la{f4}\end{eqnarray} One can show that these two equations are identically satisfied upon the use of \eq{f1} and \eq{f2}, which, in turn imply that $F$ must take the form \begin{equation} F^I=-e^{-\ft12\varphi}\,C^{Ir} I^r+{\widetilde F}^I+V\wedge \omega^I\ ,\la{fi} \end{equation} where ${\widetilde F}^I=\ft12 {\widetilde F}^I_{ij}\,e^i\wedge e^j$ is self-dual, and $\omega^I=\omega^I_i\, e^i$. Reinstating the gauge coupling constants, we note that the $C$-function dependent term will be absent when the index $I$ points in the direction of a subgroup of $K \subset Sp(2n_H)$ under which all the hyperscalars are neutral. Substituting \eq{fi} into the supersymmetry transformation rule, and recalling \eq{cplus}, one finds that \eq{s3} gives the additional conditions on the Killing spinor \begin{equation} \left(\ft18 I^r_{ij}\Gamma^{ij} \delta^A_B -T^{rA}{}_B\right)\,\epsilon^B=0\ . 
\la{it}\end{equation} The contribution from ${\widetilde F}$ drops out due to the chirality-duality properties involved. Writing this equation as ${\cal O}^r \epsilon=0$, one can check that $[{\cal O}^r,{\cal O}^s] =\epsilon^{rst}{\cal O}^t$. Thus, any two projections imply the third. In summary, the necessary and sufficient conditions for $\delta\lambda^I=0$ are \eq{fi} and \eq{it}. \subsection{Conditions From $\delta\psi^a=0$} This time multiplying \eq{s4} with ${\bar\epsilon}^B$ and ${\bar\epsilon}^B\Gamma_{\lambda\tau}$ gives rise to four equations which can be shown to imply \begin{eqnarray} V^\mu P_\mu^{aA}&=& 0\ ,\la{h1}\w2 P_i^{aA} &=& 2(I^r)_i{}^j\,(T^r)^A{}_B\,P_j^{aB}\ . \la{h2} \end{eqnarray} Using \eq{j} and \eq{df}, we can equivalently reexpress the second equation above as \begin{equation} D_i\phi^\alpha = (I^r)_i{}^j\,(J^r)_\beta{}^\alpha\, D_j\phi^\beta\ . \la{hc} \end{equation} Writing \eq{h2} as $P^a={\cal O} P^a$, we find that $({\cal O}-1)({\cal O}-3)=0$. Thus, \eq{h2} implies that $P^a$ is an eigenvector of ${\cal O}$ with eigenvalue one. Moreover, using \eq{h2} directly in the supersymmetry transformation rule \eq{s4}, and using the projection condition \eq{it}, we find that $\delta\psi^a=3\delta\psi^a$, and hence it vanishes. In summary, the necessary and sufficient conditions for $\delta\psi^a=0$ are \eq{h1}, \eq{h2} (or equivalently \eq{hc}), together with the projection condition \eq{it}. \subsection{Conditions From $\delta\chi=0$} The analysis for this case is identical to that given in \cite{Cariglia:2004kk}, so we will skip the details, referring to this paper.
Multiplying \eq{s2} with ${\bar\epsilon}^B$ and ${\bar\epsilon}^B\Gamma_{\lambda\tau}$ gives four equations which can be satisfied by \begin{equation} V^\mu \partial_\mu \varphi =0\ , \la{c1}\end{equation} and parametrizing $G^-$ as \begin{equation} e^{\ft12\varphi}\,G^- = \ft12(1- \star) \left[V\wedge e^-\wedge d\varphi +V\wedge K\right]\ ,\la{c2} \end{equation} where $\star$ is the Hodge dual and $K=\ft12 K_{ij}\, e^i\wedge e^j$ is self-dual. In fact, these two conditions are the necessary and sufficient conditions for satisfying $\delta\chi=0$. \subsection{Conditions From $\delta\psi_\mu=0$} Multiplying \eq{s1} with ${\bar \epsilon}\Gamma_\nu$, we find \begin{equation} \nabla_\mu V_\nu = -\ft12 e^{\ft12\varphi}\,G^+_{\mu\nu\rho}V^\rho\ , \la{killing} \end{equation} which implies that $V^\mu$ is a Killing vector. Similarly, multiplying \eq{s1} with ${\bar \epsilon}\Gamma_{\nu\rho\sigma}$ gives an expression for $\nabla_\sigma X^r_{\mu\nu\rho}$. Using \eq{killing} one finds that this expression is equivalent to \begin{equation} D_\mu I^r_{ij} = e^{\ft12\varphi} G^{+k}{}_{\mu [i}\,I^r_{j]k}\ ,\la{di} \end{equation} where $D_\mu I^r \equiv \nabla_\mu I^r+\epsilon^{rst} Q_\mu^s I^t$. One can use \eq{di} to fix the composite $Sp(1)_R$ connection as follows \begin{equation} Q_\mu^r= \ft14 e^{\varphi}G^{(+)}_{\mu ij} I^{rij} -\ft18 \epsilon^{rst} I^{sij} \nabla_\mu I^t_{ij}\ .\la{cc} \end{equation} Manipulations similar to those in \cite{Cariglia:2004kk} show that, using \eq{it} and \eq{killing}, the variation $\delta\psi_\mu=0$ is directly satisfied, with $\epsilon$ constant, in a frame where $I^r_{ij}$ are constants. In summary, the necessary and sufficient conditions for $\delta\psi_\mu=0$ are \eq{killing}, \eq{di}, together with the projection condition \eq{it}.
\section{Integrability Conditions for the Existence of a Killing Spinor} Assuming the Killing spinor conditions derived in the previous section, the attendant integrability conditions can be used to show that certain field equations are automatically satisfied. Since the field equations are complicated second order equations, it is convenient to determine those which follow from the integrability, and identify the remaining equations that need to be satisfied over and above the Killing spinor conditions. Let us begin by introducing the notation \begin{equation} \delta\psi_\mu={\widetilde D}_\mu\epsilon\ ,\qquad \delta\chi=\ft14 \Delta\epsilon\ ,\qquad \delta\lambda^I=e^{-\ft12\varphi} \Delta^I\epsilon\ ,\qquad \delta\psi^a=\Delta^{aA}\epsilon_A\ , \end{equation} for the supersymmetry variations and \begin{equation} R_{\mu\nu}= J_{\mu\nu}\ ,\quad \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}\varphi=J\ ,\quad D_\mu(e^{\ft12\varphi}F^{I\mu\nu})=J^{I\nu}\ ,\quad D_\mu P^{\mu aA} =J^{aA}\ ,\end{equation} for the bosonic field equations.
Then we find that \begin{eqnarray} \Gamma^\mu[{\widetilde D}_\mu, \Delta^I]\epsilon^A &=& 2 \left[ D_\mu(e^{\ft12\varphi}F^{I\mu\nu})-J^{I\nu}\right]\Gamma_\nu\epsilon^A\nonumber\w2 && + e^{\ft12\varphi} \left(D_\mu F^I_{\nu\rho}\right)\Gamma^{\mu\nu\rho}\epsilon^A -8\Gamma^\mu \left(D_\mu C^{IAB}+2C^{Ia(A} P_{\mu a}{}^{B)}\right)\epsilon_B \nonumber\w2 &&-2[\Delta,\Delta^I]\epsilon^A + 2e^{\ft12\varphi} F^I_{\mu\nu}\Gamma^{\mu\nu}\, (\delta\chi^A) +16C^{IaA}\,(\delta\psi_a)\nonumber\w2 &&+8e^{\ft12\varphi} f^{IJK}A_\mu^J \Gamma^\mu (\delta\lambda^{KA}) \ , \la{e1}\w2 \Gamma^\mu [{\widetilde D}_\mu,\Delta^{aA} ] \epsilon_A &=& \left(D_\mu P^{\mu aA} -J^{aA}\right)\epsilon_A \nonumber\w2 && +\Gamma^{\mu\nu}\left(D_\mu P_\nu^{aA}-\ft12 F_{\mu\nu}^IC^{IaA}\right) \epsilon_A \nonumber\w2 &&-4C^{IaA} (\delta\lambda^I_A) -\ft1{24} e^{\ft12\varphi}G_{\mu\nu\rho}\Gamma^{\mu\nu\rho}\, (\delta\psi^a)\ , \la{e2}\w2 \Gamma^\mu[{\widetilde D}_\mu,\Delta]\epsilon_A &=& \left(\mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}\varphi -J\right)\epsilon_A -\ft12 e^{-\ft12\varphi} D_\mu (e^{\varphi} G^\mu{}_{\nu\rho})\, \Gamma^{\nu\rho} \epsilon_A\nonumber\w2 && -\ft16 e^{\ft12\varphi} \Gamma^{\mu\nu\rho\sigma} \left(\nabla_\mu G_{\nu\rho\sigma} -\ft34 F^I_{\mu\nu} F^I_{\rho\sigma}\right) \epsilon_A\nonumber\w2 && -\left(e^{\ft12\varphi}F^I_{\mu\nu}\Gamma^{\mu\nu}\epsilon_{AB}+8C^I_{AB}\right)\, \delta\lambda^{IB} +\ft16
e^{\ft12\varphi}G_{\mu\nu\rho}\Gamma^{\mu\nu\rho}\,(\delta\chi_A)\ ,\la{e3}\w2 \Gamma^\nu [{\widetilde D}_\mu,{\widetilde D}_\nu]\epsilon^A &=& \ft12 \left( R_{\mu\nu} -J_{\mu\nu}\right)\Gamma^\nu \epsilon^A +\ft1{16} e^{-\ft12\varphi}\nabla^\nu (e^{\varphi} G_{\nu\rho\sigma})\,\Gamma^{\rho\sigma}\Gamma_\mu \epsilon^A\nonumber\w2 && +\ft1{48} e^{\ft12\varphi}\Gamma^{\rho\sigma\lambda\tau}\Gamma_\mu \left(\nabla_\rho G_{\sigma\lambda\tau} -\ft34 F^I_{\rho\sigma}F^I_{\lambda\tau} \right)\epsilon^A\nonumber\w2 && + \left( Q_{\mu\nu}^{AB}+ F_{\mu\nu}^I C^{IAB} -2 P_{[\mu}^{aA} P_{\nu] a}{}^B\right)\Gamma^\nu \epsilon_B\nonumber\w2 &&+\ft12\left[\partial_\mu\varphi +\ft1{12} e^{\ft12\varphi} G_{\nu\rho\sigma}\Gamma^{\nu\rho\sigma}\Gamma_\mu\right]\,\delta \chi^A +2 P_\mu^{aA} (\delta \psi_a )\nonumber\w2 && -\ft18 e^{\ft12\varphi}\left[ (\Gamma^{\nu\rho}\Gamma_\mu-4\delta_\mu^\nu \Gamma^\rho)F_{\nu\rho}^I \epsilon^{AB}- \Gamma_\mu C^{IAB}\right] \,\delta\lambda^I_B\la{e4} \ .\end{eqnarray} If one makes the ansatz for the potentials directly, then the Bianchi identities and the relations \eq{dc} and \eq{id1}--\eq{id3} are automatically satisfied. Otherwise, all of these equations must be checked. Assuming that these are satisfied, from \eq{e1} it follows that the Yang-Mills field equation $K_\mu=0$, {\it except for $K_+=0$}, is automatically satisfied, as can be seen by multiplying $K_\mu\Gamma^\mu\epsilon^A=0$ by ${\bar\epsilon}^B$ and $K_\nu\Gamma^\nu$, recalling $\Gamma^+\epsilon=0$ and further simple manipulations.
Similarly, from \eq{e2} it follows that the hyperscalar field equation $K^{aA}=0$ is automatically satisfied as can be seen by multiplying $K^{aA}\epsilon_A=0$ by ${\bar\epsilon}_B\Gamma^\mu$. Finally, from \eq{e3} and \eq{e4}, it follows that the dilaton equation and the Einstein equation $E_{\mu\nu}=0$, {\it except $E_{++}=0$}, are automatically satisfied, provided that we also impose the $G$-field equation. This can be seen by multiplying $E_{\mu\nu}\Gamma^\nu\epsilon_A=0$ with ${\bar\epsilon}_B$ and $E_{\mu\rho}\Gamma^\rho$ and simple manipulations that make use of $\Gamma^+\epsilon=0$. In summary, once the Killing spinor conditions are obeyed, all the field equations are automatically satisfied as well, except the following, \begin{equation} R_{++}=J_{++}\ ,\qquad D_\mu(e^{\ft12\varphi} F^{I\mu}{}_{+})=J^I_+\ ,\quad\quad D_\mu(e^{\varphi}G^{\mu\nu\rho})=0\ , \label{remaining} \end{equation} and the Bianchi identities $DF^I=0$ and $dG=\ft12 F^I\wedge F^I$. It is useful to note that in the case of gravity coupled to a non-linear sigma model, the scalar field equation follows from the Einstein equation and the contracted Bianchi identity only when the scalar map is a submersion (i.e. when the rank of the matrix $\partial_\mu\phi^\alpha$ is equal to the dimension of the scalar manifold). In our model, however, the scalar field equation is automatically satisfied as a consequence of the Killing spinor integrability conditions, without having to impose such requirements. This is all the more remarkable given the fact that there are contributions to the energy-momentum tensor from fields other than the scalars.
Finally, in analyzing the set of equations summarized above for finding a supersymmetric solution, it is convenient to parametrize the metric, which admits a null Killing vector, in general as \cite{Gutowski:2003rg} \begin{equation} ds^2= 2H^{-1}(du+\beta)\left(dv+\omega+{{\cal F}\over 2}(du+\beta)\right) +H ds_B^2\ , \label{gm}\end{equation} with \begin{eqnarray} e^+ &=&H^{-1}(du+\beta)\ ,\nonumber\w2 e^- &=& dv+\omega+\ft12 {\cal F} H e^+\ ,\nonumber\w2 e^i &=& H^{1/2} {\tilde e}_\alpha{}^i dy^\alpha\ , \end{eqnarray} where $ds_B^2 =h_{\alpha\beta}dy^\alpha dy^\beta$ is the metric on the base space ${\cal B}$, and we have $\beta=\beta_\alpha dy^\alpha$ and $\omega=\omega_\alpha dy^\alpha$ as $1$-forms on ${\cal B}$. These quantities as well as the functions $H$ and ${\cal F}$ depend on $u$ and $y$ but not on $v$. Now, as in \cite{Gutowski:2003rg}, defining the $2$-forms on ${\cal B}$ by \begin{equation} {\tilde J}^r = H^{-1}I^r\ ,\end{equation} these obey \begin{equation} ({\tilde J}^r)^\alpha{}_\gamma\,({\tilde J}^s)^\gamma{}_\beta = \epsilon^{rst} ({\tilde J}^t)^\alpha{}_\beta - \delta^{rs}\delta^\alpha_\beta\ ,\label{hks2} \end{equation} where raising and lowering of the indices is understood to be made with $h_{\alpha\beta}$. Note that the index $\alpha=1,...,4$ labels the coordinates $y^\alpha$ on the base space ${\cal B}$. This should not be confused with the index $\alpha=1,...,n_H$ that labels the coordinates $\phi^\alpha$ of the scalar manifold!
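The algebra \eq{hks2} can be checked against an explicit realization. The sketch below (our own verification, not part of the derivation) builds an anti-self-dual triplet of almost complex structures from a standard set of 't Hooft matrices; the overall sign conventions in this realization are our assumption:

```python
import numpy as np
from itertools import permutations

# 3d Levi-Civita symbol
eps3 = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps3[i, j, k] = 1.0
    eps3[i, k, j] = -1.0

# Anti-self-dual 't Hooft matrices eta[a] (4x4, a standard explicit choice)
eta = np.zeros((3, 4, 4))
for a in range(3):
    eta[a, :3, :3] = eps3[a]        # eta^a_{ij} = eps_{aij}
    eta[a, :3, 3] = -np.eye(3)[a]   # eta^a_{i4} = -delta_{ai}
    eta[a, 3, :3] = np.eye(3)[a]    # eta^a_{4i} = +delta_{ai}

# With this sign, J^r J^s = eps^{rst} J^t - delta^{rs} 1, as in (hks2)
J = -eta

# 4d Levi-Civita symbol for the anti-self-duality check
def perm_sign(p):
    s, p = 1, list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps4[p] = perm_sign(p)

for r in range(3):
    # anti-self-duality: (1/2) eps_{mnpq} J^r_{pq} = -J^r_{mn}
    dual = 0.5 * np.einsum('mnpq,pq->mn', eps4, J[r])
    assert np.allclose(dual, -J[r])
    for s in range(3):
        lhs = J[r] @ J[s]
        rhs = sum(eps3[r, s, t] * J[t] for t in range(3)) - (r == s) * np.eye(4)
        assert np.allclose(lhs, rhs)
print("quaternion algebra (hks2) and anti-self-duality verified")
```
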
A geometrically significant equation satisfied by ${\tilde J}^r$ can be obtained from \eq{di}, and with the help of \eq{killing} it takes the form \cite{Cariglia:2004kk}, \begin{equation} {\tilde\nabla}_i {\tilde J}^r_{jk} +\epsilon^{rst} Q_i^s {\tilde J}^t_{jk}- \beta_i {\dot {\tilde J}}^r_{jk}-{\dot\beta}_{[j}{\tilde J}^r_{k]i} +\delta_{i[j} {\dot \beta}^m {\tilde J}^r_{k]m}=0\ ,\label{djb} \end{equation} where ${\tilde\nabla}_i$ is the covariant derivative on the base space ${\cal B}$ with the metric $ds_B^2$ and $\dot\beta \equiv \partial_u\beta$. \section{The Dyonic String Solution} For the string solution we shall activate only four hyperscalars, setting all the rest equal to zero. In the quaternionic notation of Appendix B, this means \begin{equation} t= \left( \begin{array}{c} \phi \\ 0 \\ \vdots \\ 0 \\ \end{array} \right) \la{fvec} \end{equation} In what follows we shall use the map \begin{equation} \phi=\phi^{A'A}= \phi^\alpha (\sigma_{\alpha})^{A'A}\ , \la{qdef}\end{equation} where $ \sigma_{\alpha}=(1,-i \vec{\sigma} )$ are the constant van der Waerden symbols for $SO(4)$. Moreover, we shall choose the gauge group $K$ such that \begin{equation} T^{I'}\, t =0\ .\la{tprime} \end{equation} This condition can easily be satisfied by taking $K$ to be a subgroup of $Sp(n_H-1)$, which evidently leaves $t$ given in \eq{fvec} invariant. Finally, we set \begin{equation} A_\mu^{I'}=0\ . \la{aprime}\end{equation} Then, the supersymmetry condition \eq{fi} in the $I'$ direction is satisfied by setting ${\widetilde F}^{I'}=0=\omega^{I'}$ and noting that $C^{I'r}=0$ in view of \eq{tprime} (see \eq{ci}). The supersymmetry condition \eq{h2} is also satisfied along the directions in which the hyperscalars are set to zero. Therefore, the model effectively reduces to one in which the hyperscalars are described by $Sp(1,1)/Sp(1)\times Sp(1)$, which is equivalent to a 4-hyperboloid $H_4=SO(4,1)/SO(4)$.
Using \eq{qdef} in the definition of $D_\mu t$ given in \eq{ddf}, we obtain \begin{equation} D_\mu\phi^{\alpha}= \partial_\mu \phi^{\alpha}-\ft12 A_\mu^r (\rho^r)^{\alpha}{}_{\beta}\,\,\phi^{\beta} \ , \end{equation} where the 't Hooft symbols $\rho^r$ are constant matrices defined as \begin{equation} \rho^r_{\alpha\beta}= {\rm tr}\,(\sigma_{\alpha}\, T^r\,{\bar\sigma}_{\beta}) \ . \end{equation} These are anti-self-dual, and their further properties are given in Appendix A. For the metric we choose \begin{equation} \beta=0\ ,\qquad \omega=0\ ,\qquad {\cal F}=0\ , \qquad h_{\alpha\beta}=\Omega^2 \delta_{\alpha\beta}\ , \end{equation} in the general expression \eq{gm}, so that our ansatz takes the form \begin{equation} ds^2=2 H^{-1} \,du dv + H ds_B^2\ ,\quad\quad ds_B^2=\Omega^2 dy^\alpha dy^\beta\delta_{\alpha\beta}\ , \label{ds4}\end{equation} where $\Omega$ is a function of $y^2\equiv y^\alpha y^\beta\delta_{\alpha\beta}$. We also choose the null basis as \begin{equation} e^+=V= H^{-1} du\ ,\quad\quad e^-=dv\ . \end{equation} Thus, $V^\mu\partial_\mu=\partial /\partial v$. Moreover, in the rest of this section, {\it we shall take all the fields to be independent of $u$ and $v$}. Given that $\beta=0$, it also follows from \eq{djb} that \begin{equation} {\tilde\nabla}_i {\tilde J}^r_{jk} +\epsilon^{rst} Q_i^s {\tilde J}^t_{jk}=0\ . \label{djv} \end{equation} Next, in the general form of $G^{(-)}$ given in \eq{c2}, we choose \begin{equation} K=0 \ . \end{equation} Then, from \eq{c2} and \eq{killing} we can compute all the components of $G^+$ and $G^-$, which yield for $G=G^++G^-$ the result \begin{equation} G=e^{-\varphi/2} \left(e^+\wedge e^-\wedge d\varphi_+ + \star_4\, d\varphi_-\right)\ , \end{equation} where $\star_4$ denotes the Hodge dual on the transverse space with metric \begin{equation} ds_4^2= H ds_B^2\ ,\label{4dm} \end{equation} and we have defined \begin{equation} \varphi_\pm \ := \ \pm \ft12 \varphi +\ln\,H\ .
\end{equation} Next, we turn to the supersymmetry condition \eq{hc} in the hyperscalar sector. With our ansatz described so far, it can now be written as \begin{equation} D_i\phi^{\underline{\alpha}} = ({\tilde J}^r)_i{}^j\,(J^r)_{\underline{\phantom{\alpha}}\!\!\!\beta}{}^{\underline{\alpha}}\, D_j\phi^{\underline{\phantom{\alpha}}\!\!\!\beta}\ , \la{hc4} \end{equation} where \begin{equation} D_i\phi^{\underline{\alpha}} \equiv D_i\phi^\alpha\,V_\alpha{}^{\underline{\alpha}}\ , \end{equation} and $V_\alpha{}^{\underline{\alpha}}$ is the vielbein on $H_4$, and the above equations are in the basis \begin{equation} {\tilde e}^i = \delta_\alpha^i\, \Omega\, dy^\alpha\ ,\label{bb} \end{equation} referring to the base space ${\cal B}$. We also note that \begin{equation} J^r_{\underline{\alpha}\underline{\phantom{\alpha}}\!\!\!\beta}= \rho^r_{\alpha\beta}\,\delta^\alpha_{\underline{\alpha}} \,\delta_{\underline{\phantom{\alpha}}\!\!\!\beta}^\beta\ , \end{equation} which follows from \eq{jr} and \eq{av}. Recall that the 't Hooft matrices $\rho^r_{\alpha\beta}$ are constants. Next, we choose the components of ${\tilde J}^r_{ij}$ to be constants and make the identification \begin{equation} {\tilde J}^r=J^r\ .\label{jtilde}\end{equation} Using the quaternion algebra, we can now rewrite \eq{hc4} as \begin{equation} D_i\phi_{\underline{\phantom{\alpha}}\!\!\!\beta}= \left(\delta_{i\underline{\alpha}}\delta_{j\underline{\phantom{\alpha}}\!\!\!\beta}-\delta_{j\underline{\alpha}}\delta_{i\underline{\phantom{\alpha}}\!\!\!\beta}-\epsilon_{ij\underline{\alpha}\underline{\phantom{\alpha}}\!\!\!\beta}\right)\,D_j\phi_{\underline{\alpha}}\ . \la{dfi}\end{equation} The symmetric and antisymmetric parts in $i$ and $\underline{\phantom{\alpha}}\!\!\!\beta$ give \begin{eqnarray} && D_i \phi^i = 0\ ,\ \ \ \ \ \phi^i\equiv \phi^{\underline{\alpha}}\,\delta_{\underline{\alpha}}^i\ ,\label{ss1}\w2 && D_i\phi_j -D_j\phi_i = -\epsilon_{ijk\ell} D_k \phi_\ell\ .
\label{ss2}\end{eqnarray} To solve these equations, we make the ansatz \begin{equation} \phi^\alpha=f y^\alpha\, ,\quad\quad A^r_\alpha=g\, \rho^r_{\alpha\beta}\,y^\beta\ , \label{af}\end{equation} where $f$ and $g$ are functions of $y^2$. This ansatz, in particular, implies that the function $\omega^r$ arising in the general form of $F^r$ given in \eq{fi} vanishes. Assuming that the map $\phi^\alpha$ is 1-1, one can actually use diffeomorphism invariance to set (at least locally) $f=1$. However, since we have already fixed the form of the metric as in \eq{ds4}, chosen a basis as in \eq{bb}, and identified the components of the quaternionic structures ${\tilde J}^{r}_{ij}$ referring to this orthonormal basis, the reparametrization invariance has been lost. Therefore it is important to keep the freedom of having an arbitrary function in the map \eq{af}. Using \eq{af} we find that \eq{ss2} is identically satisfied and \eq{ss1} implies \begin{equation} g=\fr{4 f' y^2+8f}{3fy^2} \ ,\label{son1}\end{equation} where the prime denotes the derivative with respect to the argument, i.e. $y^2$. Next, the computation of the Yang-Mills field strength from the potential \eq{af} gives the result \begin{eqnarray} &&F^r = F^{r(+)}+F^{r(-)}\ ,\qquad F^{r(\pm)}= \pm \star_4 F^{r(\pm)}\ , \w2 && F^{r(-)}_{\alpha\beta} = (-2g-g'y^2+\ft12g^2y^2)\,\rho^r_{\alpha\beta}\ ,\nonumber\w2 && F^{r(+)}_{\alpha\beta} \equiv {\widetilde F}^r_{\alpha\beta}= (2g'+ g^2)\,\left( 2 y_{[\alpha}y^\delta\,\rho^r_{\beta]\delta} +\ft12 y^2\,\rho^r_{\alpha\beta}\right)\ .\nonumber \end{eqnarray} Comparing these results with the general form of $F^I$ given in \eq{fi}, we obtain \begin{equation} e^{\varphi_-}= {\eta\over \Omega^2}\ , \label{son2}\end{equation} where \begin{equation} \eta \equiv \left(g'y^2+2g-\ft12 g^2y^2\right)(1-f^2y^2)\ .
\label{eta}\end{equation} Here we have used the fact that $C^{r,s}=\delta^{rs}/(1-\phi^2)$, as follows from the formula \eq{crr}. Finally, using the composite connection \eq{qsp1} in \eq{djv}, we obtain \begin{equation} \fr{\O '}{\O}= \fr{(2f^2-g)}{2(1-f^2y^2)}\ . \label{son3}\end{equation} This equation can be integrated with the help of \eq{son1}, yielding \begin{equation} \Omega = {b\over y^2}\left( {1-f^2 y^2\over f^2 y^2}\right)^{1/3}\ ,\label{omega} \end{equation} where $b$ is an integration constant. One can now see that all necessary and sufficient conditions for the existence of a Killing spinor on this background are indeed satisfied. As shown in the previous section, the integrability conditions for the existence of a Killing spinor imply all field equations except \eq{remaining} and the Bianchi identities on $F^I$ and $G$. It is easy to check that \eq{remaining} is identically satisfied by our ansatz, except for the $G$-field equation. Furthermore, the Yang-Mills Bianchi identity is trivial since we have specified the potential directly. Thus, the only remaining equations to be checked are the $G$-Bianchi identity and the $G$-field equation. To this end, it is useful to record the result \begin{equation} {\epsilon^{\alpha\beta\gamma\delta}\over \sqrt {g_4}} F_{\alpha\beta}^r F^r_{\gamma\delta}= {16 Q'\over y^2 H^2\Omega^4}\ , \end{equation} where $g_4$ is the determinant of the metric for the line element $ds_4^2$, and \begin{equation} Q \equiv (gy^2)^2 (gy^2-3) +c\ , \end{equation} where $c$ is an integration constant. Interestingly, this term is proportional to the sum of the $F^2$ and $C^2$ terms that arise in the dilaton field equation, up to an overall constant. We now impose the $G$-field equation $d (e^{\varphi} \star G)=0$ and the $G$-Bianchi identity $dG=\ft12 F^r\wedge F^r$.
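As a quick symbolic check (our own sketch, with $s\equiv y^2$ and $f$ an arbitrary function of $s$), one can verify that \eq{omega} indeed solves \eq{son3} once \eq{son1} is used:

```python
import sympy as sp

s, b = sp.symbols('s b', positive=True)  # s = y^2
f = sp.Function('f')(s)

# eq (son1): g expressed through f (prime = d/ds)
g = (4*sp.diff(f, s)*s + 8*f) / (3*f*s)

# eq (omega): claimed integral of (son3)
Omega = (b/s) * ((1 - f**2*s)/(f**2*s))**sp.Rational(1, 3)

# eq (son3): Omega'/Omega = (2 f^2 - g) / (2 (1 - f^2 s))
lhs = sp.diff(sp.log(Omega), s)
rhs = (2*f**2 - g) / (2*(1 - f**2*s))
assert sp.simplify(lhs - rhs) == 0
print("Omega in (omega) solves (son3), given g from (son1)")
```
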
The $G$-field equation gives \begin{equation} \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_4 \varphi_+ +\ft12 \partial_\alpha\, \varphi \partial^\alpha \varphi_+ \ =\ 0\ , \label{son4} \end{equation} and the $G$-Bianchi identity amounts to \begin{equation} \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}}_4 \varphi_- -\ft12 \partial_\alpha \varphi\, \partial^\alpha \varphi_- \ =\ {-2Q'\over y^2 H^2\Omega^4}\ ,\label{son5} \end{equation} where the Laplacian is defined with respect to the metric \eq{4dm}. These equations can be integrated once to give \begin{equation} \varphi_+'= {\nu e^{-\varphi}\over (y^2)^2 \eta}\ ,\quad\quad \varphi_-'= {(\lambda-\frac12 Q)\over (y^2)^2 \eta}\ , \label{son6}\end{equation} where $\nu$ and $\lambda$ are integration constants, $c$ has been absorbed into the definition of $\lambda$, and \eq{son2} has been used in the form $H\Omega^2=\eta e^{\varphi/2}$. These equations can be rewritten as \begin{eqnarray} \left(e^{\varphi_+}\right)' &=& {\nu\over b^2}\left( {f^2 y^2\over 1-f^2 y^2}\right)^{2/3}\ ,\label{fplus}\w2 \left(e^{\varphi_-}\right)' &=& {\lambda-\frac12 Q \over b^2}\left( {f^2 y^2\over 1-f^2 y^2}\right)^{2/3}\ ,\label{fminus} \end{eqnarray} by recalling $\varphi=\varphi_+ - \varphi_-$, exploiting \eq{son2} and using the solution \eq{omega} for $\Omega$. It is important to observe that the second equation in \eq{son6} has to be consistent with \eq{son2}. Differentiating the latter and comparing the two expressions, we obtain a third-order differential equation for the function $f$: \begin{equation}\eta'- \left({2f^2 -g\over 1-f^2y^2}\right)\eta= {\lambda-\frac12 Q\over (y^2)^2}\ . \label{fin2}\end{equation} In summary, any solution of this equation for $f$ determines also the functions $(\varphi,H,\Omega, g)$, and therefore fixes the solution completely. This is a highly complicated equation, however, and we do not know its general solution at this time.
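Particular solutions of \eq{fin2} can nevertheless be checked symbolically. The following sketch (our own verification, with $s\equiv y^2$ and the constant $c$ set to zero) confirms that the simple ansatz $f=a/s$ discussed below gives $g=4/(3s)$ and solves \eq{fin2} with $\lambda=-4/3$:

```python
import sympy as sp

s, a = sp.symbols('s a', positive=True)  # s = y^2
lam = sp.symbols('lam')

f = a/s
g = (4*sp.diff(f, s)*s + 8*f) / (3*f*s)           # eq (son1)
assert sp.simplify(g - sp.Rational(4, 3)/s) == 0   # g = 4/(3 y^2)

eta = (sp.diff(g, s)*s + 2*g - g**2*s/2) * (1 - f**2*s)   # eq (eta)
Q = (g*s)**2 * (g*s - 3)                                  # with c = 0

# eq (fin2): eta' - (2 f^2 - g)/(1 - f^2 s) * eta = (lam - Q/2)/s^2
residual = sp.diff(eta, s) - (2*f**2 - g)/(1 - f**2*s)*eta - (lam - Q/2)/s**2
sol = sp.solve(sp.simplify(residual*s**2), lam)
assert sol == [sp.Rational(-4, 3)]
print("f = a/y^2 solves (fin2) with lambda = -4/3")
```
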
Nonetheless, it is remarkable that an ansatz of the form \begin{equation} f= {a\over y^2}\ , \end{equation} with $a$ a constant, which gives $g=4/(3y^2)$ from \eq{son1}, does solve \eq{fin2}, and moreover, it fixes the integration constant \begin{equation} \lambda=-\ft43 \ . \end{equation} Furthermore, it follows from \eq{omega}, \eq{son2}, \eq{eta} and \eq{fplus} that \begin{equation} \Omega = {b\over y^2}\, h^{1/3}\ ,\qquad e^{\varphi_-} = \left({2a\over 3b}\right)^2 h^{1/3}\ ,\qquad e^{\varphi_+}= 3\nu \left({a\over b}\right)^2 h^{1/3} +\nu_0\ , \label{defs}\end{equation} where $\nu_0$ is an integration constant and \begin{equation} h \equiv {y^2\over a^2}-1\ . \end{equation} Thus, the full solution takes the form \begin{eqnarray} ds^2 &=& e^{-\ft12\varphi_+}e^{-\ft12\varphi_-}(-dt^2+dx^2) + e^{\ft12\varphi_+}e^{\ft12\varphi_-} \left({b\over y^2}\right)^2\,h^{2/3}\,dy^\alpha dy^\beta\,\delta_{\alpha\beta}\ ,\label{smet}\w2 e^\varphi&=& e^{\varphi_+}/e^{\varphi_-}\ ,\quad\quad \phi^\alpha= {a y^\alpha\over y^2}\ ,\w2 A_\alpha^r &=& {4\over 3 y^2}\,\rho^r_{\alpha\beta} y^\beta\ ,\w2 G_{\alpha\beta\gamma} &=& {8\over 27 (y^2)^2}\,\epsilon_{\alpha\beta\gamma\delta}\,y^\delta\ , \qquad G_{+-\alpha} = -\partial_\alpha e^{-\varphi_+}\ , \end{eqnarray} with $\varphi_\pm$ given in \eq{defs}. The form of $h$ dictates that $a^2 < y^2 <\infty$, covering the region outside a disk of radius $a$. The hyperscalars map this region into $H_4$, which can be viewed as the interior of the disk defined by $\phi^2 < 1$. These scalars are gravitating in the sense that their contribution to the energy-momentum tensor, which takes the form $({\rm tr} P_iP_j-\ft12 g_{ij}{\rm tr} P^2)$, does not vanish, since the solution gives \begin{equation} P_i^{A'A}= {a\over 3 y^2\left(1-\frac{a^2}{y^2}\right)} \left(\delta_i^\alpha - 4 {y_iy^\alpha\over y^2}\right)\, \sigma_\alpha^{A'A}\ .
\end{equation} It is possible to apply a coordinate transformation and map the base space into the disc by defining \begin{equation} z^\alpha\equiv \fr{a y^\alpha}{y^2}. \end{equation} In $z^\alpha$ coordinates the solution becomes \begin{eqnarray} ds^2 &=& e^{-\ft12\varphi_+}e^{-\ft12\varphi_-}(-dt^2+dx^2) + L^2 e^{\ft12\varphi_+}e^{\ft12\varphi_-} \, h^{2/3}\, (dr^2 +r^2 d\Omega_3^2)\ \label{smetz}\w2 e^\varphi&=& e^{\varphi_+}/e^{\varphi_-}\ , \label{zfi}\w2 G &=& \ft{8}{27}\, \Omega_3 - dt\wedge dx \wedge de^{-\varphi_+}\ ,\w2 A^r &=& \ft23\, r^2 \sigma^r_R \ , \w2 \phi^\alpha &=& z^\alpha \ , \label{zha} \end{eqnarray} where \begin{eqnarray} && r= \sqrt{z^\alpha z^\beta \delta_{\alpha\beta}}\ ,\qquad \Omega_3 = \sigma^1_R \wedge \sigma^2_R \wedge \sigma^3_R\ , \qquad h={1\over r^2}-1\ , \w2 && e^{\varphi_+}= {3\nu h^{1/3}\over L^2} +\nu_0\ ,\qquad e^{\varphi_-} = {4 h^{1/3}\over 9L^2}\ , \label{har}\end{eqnarray} and $L\equiv b/a$. Here, $\sigma^r_R$ are the right-invariant one-forms satisfying \begin{equation} d\sigma^r_R = \ft12 \epsilon^{rst}\, \sigma^s_R\wedge \sigma^t_R\ ,\end{equation} and $\Omega_3$ is the volume form on $S^3$. We have also used the definitions \begin{equation} z^\alpha= r\, n^\alpha\ ,\qquad n^\alpha n^\beta\delta_{\alpha\beta}=1\ , \end{equation} where $dn^\alpha$ are orthogonal to the unit vectors $n^\alpha$ on the $3$-sphere, and satisfy \begin{equation} dn^\alpha=\ft12 \rho^{r\alpha}{}_\beta\,\sigma^r_R\,n^\beta\ ,\qquad dn^\alpha dn^\beta \delta_{\alpha\beta} = \ft14 d\Omega_3^2\ . \end{equation} Given the form of $A^r$, it is easy to see that the Yang-Mills $2$-form $F^r=dA^r-\ft12 \epsilon^{rst} A^s\wedge A^t$ is not (anti)self-dual, as it is given by \begin{equation} F^r=\ft43\,rdr\wedge \sigma^r_R +\ft13 r^2 \left(1-\ft23 r^2\right)\,\epsilon^{rst}\sigma^s_R\wedge \sigma^t_R\ . 
\end{equation} The field strength $P_i^{A'A}$, on the other hand, takes the form \begin{equation} P_i^{A'A}= {1\over 1-r^2}\,\left[ (1-\ft23 r^2) \delta_i^\alpha +\ft23 r^2 n_i n^\alpha\right]\, \sigma_\alpha^{A'A}\ . \end{equation} We emphasize that, had we started with the identity map $\phi^\alpha=z^\alpha$ from the beginning, the orthonormal basis in which ${\tilde J}^r_{ij}$ are constants would be more complicated than the one given in \eq{bb}. Consequently, \eq{son3} would change, since it uses \eq{djv}, which requires the computation of the spin connection in the new orthonormal basis. \section{Properties of the Solution} \subsection{Dyonic Charges and Limits} To begin with, we observe that the solution we have presented above is a dyonic string with {\it fixed} magnetic charge given by \begin{equation} Q_m=\int_{S^3} G = \frac{8}{27}\, vol_{S^3}\ . \end{equation} The electric charge, however, turns out to be proportional to the constant parameter $\nu$ as follows: \begin{equation} Q_e=\int_{S^3} \star e^\varphi G = 2\nu\, vol_{S^3}\ . \end{equation} Next, let us compare our solution with that of \cite{Guven:2003uw}, where a dyonic string solution of the $U(1)_R$ gauged model in the absence of hypermatter has been obtained. We shall refer to this solution as the GLPS dyonic string. First, the GLPS solution has two harmonic functions with two arbitrary integration constants, as opposed to our single harmonic function $h$ with a fixed and negative integration constant. In our solution, this is essentially due to the fact that we have employed an identity map between a hyperbolic negative constant curvature scalar manifold and the space transverse to the string worldsheet. Next, the transverse space metric $ds_4^2$ in the GLPS solution is a warped product of a {\it squashed} $3$-sphere with a real line, while in our solution it is conformal to $R^4$.
In the GLPS solution, the deviation from the round $3$-sphere is proportional to the product of the $U(1)_R$ gauge coupling constant and the monopole flux due to the $U(1)_R$ gauge field. Thus, assuming that we are dealing with a gauged theory, the round $3$-sphere limit would require the vanishing of the monopole flux, which is not an allowed value in the GLPS solution. As for the $3$-form charges, the electric charge is arbitrary in the GLPS as well as in our solution. However, while the magnetic charge in the GLPS solution is proportional to $k\xi/g_R$, where $k$ is the monopole flux, $g_R$ is the $U(1)_R$ coupling constant and $\xi$ is the squashing parameter, and is therefore arbitrary, in our solution the magnetic charge is fixed in Planckian units and is therefore necessarily non-vanishing. This is an interesting property of our solution that results from the interplay between the sigma model manifold, whose radius is fixed in units of Planck length, which is typical in supergravities with a sigma model sector, and the four-dimensional space transverse to the string worldsheet. Our solution has $SO(1,1)\times SO(4)$ symmetry, corresponding to Poincar\'e invariance on the string world-sheet and rotational invariance in the transverse space\footnote{It is clear that if one makes an $SO(4)$ rotation in the $z^\alpha$ coordinates, the same transformation should be applied to the hyperscalars and the 't Hooft symbols $\rho^r_{\alpha\beta}$ to preserve the structure of the solution.}. The metric components exhibit singularities at $r=0$ and $r=1$. To see the coordinate-invariant significance of these points, we compute the Ricci scalar as \begin{equation} R= { 48 (\Delta+\mu_0)^2+\mu_0^2 \over r^6 \left({\Delta\over 3\nu}\right)^{\ft{17}{18}}(\Delta+\mu_0)^{\ft52}}\ , \end{equation} where $\Delta\equiv 3\nu ({1\over r^2}-1)$ and $\mu_0\equiv \nu_0 L^2$. We see that, near the boundary $r\to 1$, the Ricci scalar diverges, and there is a genuine singularity there.
Near the origin $r=0$, however, the situation depends on the parameter $\nu$. If $\nu\ne 0$, then as $r\to 0$ the Ricci scalar approaches the constant value $8/\sqrt{3\nu}$. The metric is perfectly regular in this limit, and indeed, we find that it takes the form \begin{equation} ds^2 \to {L^2\over R_0^2}\, r^{2/3} (-dt^2+dx^2) + {R_0^2 dr^2\over r^2} + R_0^2 d\Omega_3^2\ , \end{equation} which is $AdS_3 \times S^3$ with $R_0= \sqrt{4\nu/3}$. This is to be contrasted with the GLPS solution, which approaches the product of $AdS_3$ with a squashed $3$-sphere. The $r=0$ point can be viewed as the horizon, and as is usually the case, our solution also has a factor of two enhancement of supersymmetry near the horizon. This is due to the fact that the condition \eq{susy1}, which reads $H^{-1}\Gamma^+\epsilon=0$, has to be relaxed, since $H^{-1}$ vanishes in the $r\to 0$ limit. Note, however, that our solution at a generic point has $1/8$ supersymmetry to begin with, as opposed to the $1/4$ supersymmetry of the GLPS solution. For $\nu=0$, the $r\to 0$ limit of the metric is \begin{equation} ds^2 \to {3L\over 2\sqrt {\nu_0}}\, r^{1/3}(-dt^2+dx^2) + {2L\sqrt{\nu_0}\over 3}\, r^{-5/3} ({dr^2}+r^2d\Omega_3^2) \ . \end{equation} Defining furthermore $u$ by $du=dr/r^{5/6}$, the metric becomes \begin{equation} ds^2\sim u^{2}(-dt^2+dx^2+d\O_3^2)+ du^2. \end{equation} Ignoring the $x$ and $\O_3$ directions, this describes the Rindler wedge, which is the near horizon geometry of the Schwarzschild black hole. The ``horizon'', which has the topology $R\times \O_3$, shrinks to zero size at $u=0$, and this gives the singularity in the dyonic string. Next, consider the boundary limit in which $r\to 1$.
First, assuming that $\nu_0 \ne 0$, we find that in the limit $r\to 1$ the metric takes the form \begin{equation} ds^2 \sim {1\over u^{1/3}} \left( -dt^2 + dx^2 + u^4 ( \,du^2 +{1\over u^2}\,d\Omega_3^2) \right) \qquad \mbox{for} \ \ \nu_0\ne 0\ ,\end{equation} where we have defined the coordinate $u=h^{1/2}$ and rescaled the string worldsheet coordinates by a constant. For $\nu_0=0$, on the other hand, the $r\to 1$ limit of the metric is given by \begin{equation} ds^2 \sim {1\over u^{2/3}} \left( -dt^2 + dx^2 \right) + u^4 \left( \,du^2 +{1\over u^2}\,d\Omega_3^2 \right) \qquad \mbox{for} \ \ \nu_0 = 0\ ,\end{equation} where, again, we have defined $u=h^{1/2}$ and rescaled the coordinates by constants. \subsection{Coupling of Sources} Since the solution involves the harmonic function $h$, there is also the possibility of a delta function type singularity at the origin, since \begin{equation} \partial_\alpha\partial^\alpha\, h=-4\pi^2 \delta(\vec{z})\, .\label{ds} \end{equation} The presence of such a singularity requires the addition of extra sources for the supergravity fields in order to obtain a proper solution. As it is not known how to write down the coupling of a dyonic string to sources, and as we cannot turn off the magnetic charge, we consider the coupling of the magnetic string to sources. Thus, setting $\nu=0$, from \eq{smetz}, \eq{zfi} and \eq{zha} the dangerous fields that can possibly yield a delta function via \eq{ds} are the metric, the dilaton $\varphi$ and the three-form field $G$. Indeed, from \eq{zha} we see that \begin{equation} dG\sim \delta(\vec{z})\, dz^1\wedge dz^2\wedge dz^3\wedge dz^4\label{sin1}\, , \end{equation} and therefore extra (magnetically charged) sources are needed for $G$ at $\vec{z}=0$. For the dilaton we find that the candidate singular term near $\vec{z}=0$ behaves as \begin{equation} \mathord{\dalemb{6.8}{7}\hbox{\hskip1pt}} \varphi\sim \,z^{11/3}\,\delta(\vec{z})\to 0\ , \label{sinf}\end{equation} so there is no problem at $\vec{z}=0$.
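The normalization in \eq{ds} is the standard four-dimensional Green's function identity, $\partial^2 (1/r^2) = -(d-2)\,{\rm vol}(S^3)\,\delta^4(\vec z) = -4\pi^2\delta^4(\vec z)$ for $d=4$. Away from the origin, the harmonicity of $h$ can be checked with a short symbolic computation (our own sketch):

```python
import sympy as sp

# Cartesian coordinates z^1,...,z^4 on the transverse space
z = sp.symbols('z1:5')
r2 = sum(zi**2 for zi in z)
h = 1/r2 - 1

# flat 4d Laplacian of h
lap = sum(sp.diff(h, zi, 2) for zi in z)
assert sp.simplify(lap) == 0  # h is harmonic away from z = 0
print("h = 1/r^2 - 1 is harmonic on R^4 minus the origin")
```
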
Finally, for the Ricci tensor expressed in the coordinate basis we find \begin{eqnarray} R_{tt}&=&-R_{xx}\sim z^{4} \delta(\vec{z}) \to 0\ , \label{sin}\w2 R_{\alpha\beta}&\sim& \,z^2 \delta(\vec{z})\,\delta_{\alpha\beta} \to 0\ . \label{sin2} \end{eqnarray} Contracting with the metric, one can see that the possible singular part of the Ricci scalar becomes \begin{equation} R\sim \,z^{11/3}\,\delta(\vec{z})\to 0\ , \end{equation} and thus no extra delta function singularity appears. The above results can be understood by coupling to the supergravity fields a magnetically charged string located at $r=0$, with its action given by \begin{equation} S = -\int d^2\sigma e^{\varphi/2} \sqrt {-\gamma} +\int {\widetilde B}\ , \end{equation} where $\gamma$ is the determinant of the induced worldsheet metric and ${\widetilde B}$ is the 2-form potential whose field strength is dual to $G$. This coupling indeed produces exactly the behavior \eq{sin1} in the Bianchi identity. The source terms in \eq{sinf} and \eq{sin} are also produced, while the contribution to the right hand side of \eq{sin2} vanishes identically (which does not cause a problem, since $z^2\delta({\vec z})$ vanishes at $z=0$ as well). \subsection{Base Space as a Tear Drop} The four-dimensional base space of our solution \eq{smetz} is \begin{eqnarray} ds_B^2 &=& L^2 \left({1\over r^2}-1\right)^{2/3}\, \left(dr^2+r^2d\O_3^2\right)\label{baseconf}\nonumber\w2 &=& {(1-r^2)^{8/3}\over 2r^{4/3}}\,ds_{H_4}^2\ , \end{eqnarray} where $ds_{H_4}^2= 2 (dr^2 +r^2 d\Omega_3^2)/(1-r^2)^2$ is the metric on $H_4$. Although the overall conformal factor blows up at $r=0$, the total volume of this space turns out to have the finite value $(4\pi^3 L^4)/(9\sqrt{3})$. To that extent, our solution can be viewed as the analog of the Gell-Mann-Zwiebach teardrop solution, though the latter is regular at $r=0$ as well.
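The quoted finite volume can be reproduced by an elementary integral: with $\sqrt{g_B}\,d^4y = L^4 (1/r^2-1)^{4/3} r^3\, dr\, d{\rm vol}(S^3)$ and ${\rm vol}(S^3)=2\pi^2$, the radial integral is a Beta function. A short sketch (our own check):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# radial part of the base-space volume integral (times L^4 and vol(S^3) = 2 pi^2)
I = sp.Integral((1/r**2 - 1)**sp.Rational(4, 3) * r**3, (r, 0, 1))

# exact value via the Beta function: (1/2) B(2/3, 7/3) = 2 pi / (9 sqrt(3))
exact = 2*sp.pi/(9*sp.sqrt(3))
assert abs(I.evalf(15) - exact.evalf(15)) < 1e-10

# total volume = 2 pi^2 L^4 * I = 4 pi^3 L^4 / (9 sqrt(3))
volume = 2*sp.pi**2 * exact
assert sp.simplify(volume - 4*sp.pi**3/(9*sp.sqrt(3))) == 0
print("total base-space volume = 4 pi^3 L^4 / (9 sqrt(3))")
```
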
The analogy with the Gell-Mann-Zwiebach tear-drop is also evident in the fact that the scalar metric has been conformally rescaled by a factor that vanishes at the boundary. The curvature scalar of the base manifold is also singular at $r=0$, as it is given by \begin{equation} R_{\cal B} = {16\over 3L^2}\,{1\over r^2}\,{r^{4/3}\over (1-r^2)^{8/3}}\ . \end{equation} Since the total volume of the base space is finite, one would expect that the singularity at $r=0$ can be reached by physical particles in finite proper time. We have checked that this is indeed the case. Another tear-drop-like feature here is that the base space metric is conformally related to that of $H_4$, which has negative constant curvature, and that the curvature scalar of the base space becomes positive due to the conformal factor. This switching of the sign is crucial for satisfying the Einstein equation in the internal direction, just as in the case of the 2-dimensional Gell-Mann-Zwiebach teardrop. The base space ${\cal B}$ that emerges in the $2+4$ split of the $6D$ spacetime is a quaternionic manifold, as it admits a quaternionic structure. To decide whether it is quaternionic Kahler (QK), however, the standard definition, which relies on the holonomy group being contained in $Sp(n)\times Sp(1)$, becomes vacuous in $4D$, since all $4D$ Riemannian manifolds have holonomy group contained in $Sp(1)\times Sp(1) \sim SO(4)$. Nonetheless, there exists a generally accepted and natural definition of QK manifolds in four dimensions, which states that an oriented $4D$ Riemannian manifold is QK if the metric is self-dual and Einstein (see \cite{Galicki} for a review). According to this definition, our base space ${\cal B}$ is not QK, since it is neither self-dual nor Einstein. \subsection{Reduction of Metric to Five Dimensions} Finally, we would like to note the 5-dimensional metric that can be obtained by a Kaluza-Klein reduction along the string direction.
The 6-dimensional metric is parametrized in terms of the 5-dimensional metric as \begin{equation} ds_6^2=e^{2 \alpha \hat{\phi}} ds_5^2+ e^{2 \beta \hat{\phi}} dx^2\ , \end{equation} where $\beta=-3\alpha$ and $\hat{\phi}$ is the Kaluza-Klein scalar. From \eq{smetz} one finds \begin{equation} ds_5^2=-e^{-\ft23 \varphi_+}e^{-\ft23 \varphi_-}\,dt^2+L^2 e^{\ft13\varphi_+} e^{\ft13 \varphi_-}h^{2/3}( dr^2+r^2 d\O_3^2), \label{bh} \end{equation} where the functions are still given in \eq{har}. The metric \eq{bh} is singular at $r=0$. For $\nu=0$, looking at the metric near the singularity, one finds \begin{equation} ds_5^2\sim u^2(-dt^2+d\O_3^2)+du^2, \end{equation} where $du=dr/r^{7/9}$. The geometry is like Rindler space, but the candidate spherical ``horizon'' shrinks to zero size at $u=0$, which produces a singularity. When $\nu\not=0$, one finds near $r=0$ that \begin{equation} ds_5^2\sim -r^{8/9}dt^2+r^{-16/9}dr^2+r^{2/9}d\O_3^2\ , \end{equation} which is again singular at $r=0$. This singularity is resolved by dimensional {\it oxidation}, which is a well known feature of some black-brane solutions \cite{Gibbons:1994vm}. \section{Conclusions} In this paper, we have derived the necessary and sufficient conditions for the existence of a Killing spinor in $N=(1,0),\,6D$ gauged supergravity coupled to a single tensor multiplet, vector multiplets and hypermultiplets. This generalizes the analysis of \cite{Gutowski:2003rg} and \cite{Cariglia:2004kk} by the inclusion of the hypermatter. In our case as well, the existence of the Killing spinor implies that the metric admits a null Killing vector. This is in contrast to some other dimensions, such as $D=4,5$, where time-like and space-like Killing vectors arise in addition to the null one. The Killing spinor existence conditions and their integrability are shown to imply most of the equations of motion. This greatly simplifies the search for exact solutions.
The remaining equations to be solved are (i) the Yang-Mills equation in the null direction, (ii) the field equation for the $2$-form potential, (iii) the Bianchi identities for the Yang-Mills curvature and the field strength of the $2$-form potential, and (iv) the Einstein equation in the double null direction. We parametrize the most general form of a supersymmetric solution, which involves a number of undetermined functions. However, we do not write explicitly the equations that these functions must satisfy. These can be straightforwardly derived from the equations just listed. The existence of a null Killing vector suggests a $2+4$ split of spacetime, and a search for a string solution, possibly a dyonic one. Such solutions are already known, but none of them involves any active hyperscalars. As a natural application of the general framework presented here, we have then focused on finding a dyonic string solution in which the hyperscalars have been activated. Indeed, we have found such a dyonic string, preserving $1/8$ supersymmetry. The activated scalars parametrize a $4$-dimensional submanifold of a quaternionic hyperbolic ball of unit radius, characterized by the coset $Sp(n_H,1)/Sp(n_H)\times Sp(1)_R$. A key step in the construction of the solution is an identity map between the $4$-dimensional scalar submanifold and the internal space transverse to the string worldsheet. The spacetime metric turns out to be a warped product of the string worldsheet and a $4$-dimensional analog of the Gell-Mann-Zwiebach tear-drop, which is noncompact with finite volume. Unlike the Gell-Mann-Zwiebach tear-drop, ours is singular at the origin. There is also a delta function type singularity that comes from the Laplacian acting on a harmonic function present in the solution. This does not present any problem, however, as we place a suitable source which produces contributions to the field equations that balance the delta function terms.
An interesting property of our dyonic string solution is that while its electric charge is arbitrary, its magnetic charge is fixed in Planckian units, and hence it is necessarily non-vanishing. This interesting feature results from the interplay between the sigma model manifold, whose radius is fixed in units of the Planck length, as is the case in almost all supergravities that contain sigma models, and the four dimensional space transverse to the string worldsheet, through the identity map. The tear-drop is quaternionic but not quaternionic K\"ahler, since its metric is neither self-dual nor Einstein. The metric is conformally related to that of $H_4$, which has negative constant curvature, and its curvature scalar becomes positive due to the conformal factor. This switching of the sign is crucial for satisfying the Einstein equation in the internal direction, just as in the case of the 2-dimensional Gell-Mann-Zwiebach tear-drop. We have also shown that the solution has a $1/4$ supersymmetric $AdS_3\times S^3$ near-horizon limit, where the radii are proportional to the electric charge. This is in contrast with the $1/4$ supersymmetric GLPS dyonic string that approaches the product of $AdS_3$ times a squashed $3$-sphere with $1/2$ supersymmetry. In the GLPS solution the squashing is necessarily non-vanishing for non-vanishing gauge coupling constant, while in our case the round $3$-sphere emerges even in the presence of a nonvanishing gauge coupling. One might naively expect that a double dimensional reduction of our dyonic string might yield a novel black hole solution in $5D$ with active hyperscalars. However, we find that the resulting $5D$ metric has a naked singularity at the origin. We conclude by mentioning a selection of open problems. The existence of the supersymmetric dyonic string solution is encouraging with regard to the string/M theory origin of the $6D$ model. The source couplings we have found may provide additional information towards that end.
The existence of black dyonic strings in the $SU(2)_R$ gauged theory motivates a search for such models that are `naturally' anomaly free. We refer the reader to the introduction for what we mean by `natural'. In any event, the string/M theory origin of the matter coupled $N=(1,0),\,6D$ gauged supergravities remains a challenging open problem. Here, we have begun to uncover some universal features of supersymmetric solutions in which the sigma models play a nontrivial role, for example, the emergence of tear-drop like metrics in the space transverse to the brane. This is intimately related to another potentially universal mechanism by which a submanifold of the sigma model is identified with the transverse space. One possible generalization might involve more intricate maps from the transverse space to the sigma model. It would be useful to find further examples to establish whether the features found here persist in a larger class of supergravity models with sigma model sectors. \bigskip\bigskip {\bf Acknowledgments} The work of A.K. has been supported in part by the Turkish Academy of Sciences via a Young Investigator Award (TUBA-GEBIP), and the work of D.C.J. and E.S. is supported in part by NSF Grant PHY-0314712, and that of E.S. in part by the Scientific and Technological Research Council of Turkey (TUBITAK). E.S. would like to thank the Feza G\"{u}rsey Institute and Bo\u{g}azi\c{c}i University Physics Department, where this work was done, for hospitality. We also thank S. Deger and R. G\"uven for useful discussions. \newpage \begin{appendix} \section{ Conventions } We use the spacetime signature $(-+++++)$ and set $\epsilon^{+-ijkl}=\epsilon^{ijkl}$. We define $\Gamma_7=\Gamma^{012345}$. The supersymmetry parameter has positive chirality: $\Gamma_7\,\epsilon=\epsilon$.
Thus, $\Gamma_{\mu\nu\rho}= \ft16\,\epsilon_{\mu\nu\rho\sigma\lambda\tau}\,\Gamma^{\sigma\lambda\tau}\,\Gamma_7$, and for a self-dual 3-form we have $S_{\mu\nu\rho}\Gamma^{\mu\nu\rho}\epsilon=0$. The Hodge-dual of a $p$-form \begin{equation} F=\frac1{p!}\, dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_p}\, F_{\mu_1\dots \mu_p}\ ,\end{equation} is calculated using \begin{equation} *(dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_p})=\frac1{(D-p)!}\, \epsilon_{\nu_1\dots\nu_{D-p}}{}^{\mu_1\dots\mu_p}\, dx^{\nu_1}\wedge\cdots\wedge dx^{\nu_{D-p}}\ . \end{equation} The 't Hooft symbols are defined as \begin{equation} \rho^r_{\alpha\beta}= {\rm tr}\,(\sigma_{\alpha}\, T^r\,{\bar\sigma}_{\beta})\ ,\quad\quad \eta^{r'}_{\alpha\beta}= {\rm tr}\,(\bar \sigma_{\alpha}\, T^{r'}\,{\sigma}_{\beta})\ , \end{equation} where $ \sigma_{\alpha}=(1,-i \vec{\sigma} )$ are the constant van der Waerden symbols for $SO(4)$. These are real and antisymmetric matrices. It is easily verified that $\rho^r_{\alpha\beta}$ is anti-selfdual, while $\eta^{r'}_{\alpha\beta}$ is selfdual.
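These duality and antisymmetry properties are straightforward to verify numerically. The sketch below is an illustration only, not part of the paper's formalism; it assumes the matrix representation $T^r=-i\sigma^r/2$ quoted later in this appendix and the convention $\epsilon_{0123}=+1$:

```python
import numpy as np
from itertools import permutations

# Pauli matrices; T^r = -i sigma^r / 2 is the quaternion representation
# assumed in this appendix.
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
T = [-1j * m / 2 for m in s]

# van der Waerden symbols sigma_alpha = (1, -i sigma_i) and their conjugates
sig = [np.eye(2, dtype=complex)] + [-1j * m for m in s]
sigbar = [m.conj().T for m in sig]

# 't Hooft symbols rho^r_{ab} = tr(sigma_a T^r sigmabar_b), idem eta
rho = np.array([[[np.trace(sig[a] @ T[r] @ sigbar[b])
                  for b in range(4)] for a in range(4)] for r in range(3)])
eta = np.array([[[np.trace(sigbar[a] @ T[r] @ sig[b])
                  for b in range(4)] for a in range(4)] for r in range(3)])

# Levi-Civita symbol with eps_{0123} = +1 (assumed convention)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    par = np.prod([p[j] - p[i] for i in range(4) for j in range(i + 1, 4)])
    eps[p] = np.sign(par)

# Hodge duals: (*X)_{ab} = 1/2 eps_{abcd} X_{cd}
dual_rho = 0.5 * np.einsum('abcd,rcd->rab', eps, rho)
dual_eta = 0.5 * np.einsum('abcd,rcd->rab', eps, eta)

assert np.allclose(rho, -np.transpose(rho, (0, 2, 1)))  # antisymmetric
assert np.allclose(dual_rho, -rho)                      # rho anti-selfdual
assert np.allclose(dual_eta, eta)                       # eta selfdual
```

Here the $Sp(n_H)$ generator in $\eta^{r'}$ is replaced by the same anti-hermitian quaternion units for the purpose of the check; the (anti-)self-duality statement only depends on the quaternionic structure.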
Their further properties are \begin{eqnarray} && \rho^r_{\alpha\gamma}\, (\rho^s)^\gamma{}_\beta= -\delta^{rs}\delta_{\alpha\beta}+\epsilon^{rst}\,\rho^t_{\alpha\beta}\ , \quad\quad\quad\quad\ \ {\rm idem} \ \eta^{r'}_{\alpha\beta}\ ,\nonumber\w2 && \rho^r_{\alpha\beta} \rho^r_{\gamma\delta} = \delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma} -\epsilon_{\alpha\beta\gamma\delta}\ ,\nonumber\w2 &&\eta^{r'}_{\alpha\beta} \eta^{r'}_{\gamma\delta} = \delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma} +\epsilon_{\alpha\beta\gamma\delta}\ ,\nonumber\w2 && \epsilon^{trs}(\rho^r)_{\alpha\beta}~(\rho^s)_{\gamma\delta}=\delta_{\beta\gamma}~(\rho^t)_{\alpha\delta}+{\rm 3\ more}\ ,\quad\quad {\rm idem} \ \eta^{r'}_{\alpha\beta}\ . \end{eqnarray} For $SU(2)$ triplets, we use the notation: \begin{equation} X^{AB}= X^r\,T^r_{AB}\ ,\qquad X^r=\ft12 X^{AB}\,T^r_{AB}\ . \end{equation} \section{ The Gauged Maurer-Cartan Form and the $C$-Functions } A convenient choice for the $Sp(n_H,1)/Sp(n_H)\times Sp(1)$ coset representative $L$ is \cite{Gursey:1979tu} \begin{equation} L = \gamma^{-1} \left(\begin{array}{ccc} 1 && t^\dagger\\ &\\ t && \Lambda \end{array} \right)\label{cr} \end{equation} where $t$ is an $n_H$-component quaternionic vector $t^p\,\, (p=1,...,n_H)$, and \begin{equation} \gamma= (1-t^\dagger\, t)^{1/2}\ ,\qquad \Lambda= \gamma\,(I-t\, t^\dagger)^{-1/2}\ . \end{equation} Here, $I$ is the $n_H\times n_H$ unit matrix and $\dagger$ denotes quaternionic conjugation; it can be verified that $\Lambda t=t$.
The gauged Maurer-Cartan form is defined as \begin{equation} L^{-1} D_\mu L = \left(\begin{array}{ccc} Q_\mu && P_\mu^\dagger \\&\\ P_\mu && Q'_\mu \end{array} \right)\ , \end{equation} where $D_\mu L$ is given in \eq{mc3}, with $T^r$ representing three anti-hermitian quaternions (in the matrix representation of quaternions $T^r=-i\,\sigma^r/2$) obeying \begin{equation} [T^r,T^s]=\epsilon^{rst}T^t \end{equation} and $T^{I'}$ represents a subset of $n_H\times n_H$ quaternion valued anti-hermitian matrices spanning the algebra of the subgroup $K\subset Sp(n_H)$ that is being gauged. A direct computation gives \begin{eqnarray} Q_\mu &=& \frac12\, \gamma^{-2}\,\left(D_\mu t^\dagger t - t^\dagger D_\mu t\right) -A_\mu^rT^r\la{aq1}\w2 Q'_\mu &=& \gamma^{-2}\,\left(-t D_\mu t^\dagger +\Lambda D_\mu\Lambda +\ft12 \partial_\mu (t^\dagger t) I \right) -A_\mu^{I'} T^{I'}\ ,\la{aq2}\w2 P_\mu &=& \gamma^{-2} \Lambda D_\mu t \ ,\la{aq3} \end{eqnarray} where \begin{equation} D_\mu t=\partial_\mu t +t\, T^r A_\mu^r -A_\mu^{I'} T^{I'}\,t\ . \la{ddf}\end{equation} The $C$ functions are easily computed to yield \begin{eqnarray} C^r&=&L^{-1}T^r L=\gamma^{-2}\left( \begin{array}{ccc} T^r && T^r t^\dagger \\&\\ -t T^r && -t T^r t^\dagger\\ \end{array} \right)\la{crr} \w4 C^{I'}&=& L^{-1}T^{I'} L = \gamma^{-2}\left( \begin{array}{ccc} -t ^\dagger T^{I'} t && -t^\dagger T^{I'}\Lambda \\&\\ \Lambda T^{I'}t && \Lambda T^{I'}\Lambda \\ \end{array} \right)\la{ci} \end{eqnarray} \section{The Model for $Sp(1,1)/Sp(1)\times Sp(1)_R$} This coset, which is equivalent to $SO(4,1)/SO(4)$, represents a 4-hyperboloid $H_4$. 
In this case we have a single quaternion $ t=\phi^\alpha\,\sigma_\alpha$, and the vielbein becomes \begin{equation} V_\alpha^{A'A}= \gamma^{-2}\,\sigma_\alpha^{A'A}\ .\end{equation} It follows from the definitions \eq{vg} and \eq{j} that \begin{equation} g_{\alpha\beta}={2\over (1-\phi^2)^2}\,\delta_{\alpha\beta}\ ,\quad\quad J^r_{\alpha\beta}= {2\,\rho^r_{\alpha\beta}\over (1-\phi^2)^2}\ .\label{jr} \end{equation} We also introduce a basis in the tangent space of $H_4$ \begin{equation} V_\alpha{}^{\underline{\alpha}}=\fr{\sqrt{2}}{1-\phi^2}\,\, \delta^{\underline{\alpha}}_{\alpha}\, \, .\label{av} \end{equation} The $Sp(1)_R$ connection $Q_\mu^r$ can be found from \eq{aq1} as \begin{equation} Q_\mu^r=-2\,{\rm tr}\,(Q_\mu T^r)\,= {1\over 1-\phi^2}\,\left(2 \rho^r_{\alpha\beta} \partial_\mu\phi^\alpha\,\phi^\beta-A_\mu^r\right)\ .\label{qsp1} \end{equation} With the above results at hand, the Lagrangian can be written as \begin{eqnarray} e^{-1}{\cal L} &=& R\,- \ft14 (\partial\varphi)^2- \ft12 e^\varphi\, G_{\mu\nu\rho}G^{\mu\nu\rho}- \ft14\,e^{\ft12\varphi}\, F^r_{\mu\nu}\,F^{r \mu\nu}\,- \ft14\,e^{\ft12\varphi}\, F^{r'}_{\mu\nu}\,F^{r'\mu\nu}\,\nonumber\w2 &&-{4\over (1-\phi^2)^2}\,D_\mu\phi^\alpha D^\mu\phi^\beta\,\delta_{\alpha\beta} - {6 e^{-\ft12\varphi}\over (1-\phi^2)^2 }\,\left[ g_R^2 +g'^2 (\phi^2)^2\right]\ , \end{eqnarray} where the covariant derivatives are defined as \begin{equation} D_\mu\phi^{\alpha}= \partial_\mu \phi^{\alpha}-\ft12 g_R A_\mu^r (\rho^r)^{\alpha}{}_{\beta}\,\,\phi^{\beta} - \ft12 g' A_\mu^{r'} (\eta^{r'})^{\alpha}{}_{\beta}\,\,\phi^{\beta} , \end{equation} and we have re-introduced the gauge coupling constants $g_R$ and $g'$.
The supersymmetry transformation rules are \begin{eqnarray} \delta \psi_\mu &=& D_\mu \varepsilon + \ft1{48} e^{\ft12\varphi} G_{\nu\sigma\rho}^+\,\Gamma^{\nu\sigma\rho}\, \Gamma_\mu\,\varepsilon \ ,\w2 \delta\chi &=&\ft14\left( \Gamma^\mu\partial_\mu \varphi -\ft16 e^{\ft12\varphi} G_{\mu\nu\rho}^-\,\Gamma^{\mu\nu\rho} \right)\varepsilon\ , \w2 \delta \lambda_A^r &=& -\ft18 F_{\mu\nu}^r\Gamma^{\mu\nu}\varepsilon_A - g_R{e^{-\ft12\varphi}\over 1-\phi^2}\,T^r_{AB}~\varepsilon^B \ , \w2 \delta \lambda_A^{r'} &=& -\ft18 F_{\mu\nu}^{r'}\Gamma^{\mu\nu}\varepsilon_A + g'e^{-\ft12\varphi} {\phi^\alpha\phi^\beta \over 1-\phi^2}\, ({\bar\sigma}_\alpha T^{r'}\sigma_\beta)_{AB}~\varepsilon^B \ ,\w2 \delta\psi^{A'} &=& {1\over 1-\phi^2}\, \Gamma^\mu D_\mu \phi^\alpha\,\sigma_\alpha^{A'A}\,\varepsilon_A \ , \end{eqnarray} where $D_\mu\varepsilon_A = \nabla_\mu\varepsilon_A + Q_\mu^r (T^r)_ A{}^B \varepsilon_B$, with $\nabla_\mu$ containing the standard torsion-free Lorentz connection only, and $Q^r$ is defined in \eq{qsp1}. \end{appendix}
\section*{\label{presence}Material that can Sediment to the Surface of Titan} To avoid any ambiguity, we reserve the term ``aerosols'' for the solid particles that make up the thick hazy layer of Titan. This photochemical haze extends roughly from the surface up to about $1000$ km altitude {\cite{west_etal_2014}} and it is made of aggregates of monomers. Aggregates show a fractal structure (see Supplementary Fig.~1) and contain up to several thousand monomers {\cite{tomasko_2008b}}; each of them can be approximated by a sphere {\cite{curtis_etal_2008}}. Determinations of the monomer radius agree on a value around $50$ nm {\cite{seignovert_etal_2017}}. Titan haze models based on a microphysical description depend, among several parameters, on the rate of particle production{\cite{rannou_etal_2004}}. Empirically derived values of the aerosol mass production rate{\cite{rannou_etal_2004}} spread around $\sim 10^{-13}$ kg m$^{-2}$ s$^{-1}$. With the haze layers in steady state, the ``mass production rate'' is also the average ``mass deposit rate'' of aerosols, \textit{i.e.} the sedimentation rate over the surface of Titan. The adopted value corresponds, at ground level, to a deposit of the order of one nanometer per year, if we assume a density around $10^{3}$ kg m$^{-3}$. These organic particles should not be surrounded by liquid in Titan's dry regions, while in the most humid ones, aerosols can play the role of nucleation cores for the formation of liquid methane droplets {\cite{rannou_etal_2006}}. Even if observational evidence for rainfall is rare {\cite{turtle_eal_2011a}}, climate simulations identify the polar regions of Titan as the wettest {\cite{schneider_etal_2012}}. In these regions, the precipitation of liquid methane is governed by the presence of small particles (micronic or submicronic) known as cloud condensation nuclei; aerosols are very good candidates for this role.
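The nanometer-per-year figure follows directly from the flux and density just quoted; a minimal back-of-the-envelope check (values as in the text):

```python
# Back-of-the-envelope check of the aerosol deposit growth rate at ground
# level, using the mass flux and density adopted in the text.
mass_flux = 1.0e-13          # kg m^-2 s^-1, average mass deposit rate
rho_aerosol = 1.0e3          # kg m^-3, assumed aerosol material density
seconds_per_year = 3.156e7

growth_rate = mass_flux / rho_aerosol * seconds_per_year  # m per year
print(f"deposit growth: {growth_rate * 1e9:.1f} nm per year")
```

The result, a few nanometers per year, confirms the nanometer-scale order of magnitude quoted above.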
In summary, a certain amount of aerosols should reach the seas as dry particles, whereas the rest could reach the sea embedded in liquid droplets.\\ Titan's aerosols are the end-products of a complex chemistry, in which a plethora of small molecules is generated. Some of them are detected by spaceborne instruments {\cite{coustenis_etal_2016}} or by Earth-based telescopes {\cite{molter_etal_2016}}. On the theoretical side, models account for the production of these species {\cite{krasnopolsky_2014}}. Due to local thermodynamic conditions, some of these compounds can condense to form either liquid droplets or ice crystals. For instance, the VIMS instrument aboard {\it Cassini} allowed the detection of micrometre-sized particles of frozen hydrogen cyanide (HCN ice) over Titan's southern pole {\cite{dekok_etal_2014}}. The mass flux of these compounds to the surface is not negligible. Models indicate a mass flux for HCN of the same order of magnitude as that of aerosols{\cite{lavvas_etal_2011c}}. Hydrogen cyanide is not the only species that can produce organic crystals in the atmosphere; among many others, molecules like C$_2$H$_2$ and HC$_3$N have a similar potential{\cite{couturier-tamburelli_etal_2018a}}. Nonetheless, HCN appears to be the most abundant. The key point for our purpose is the potential propensity of these micron-sized crystals to aggregate with each other into micron- to millimeter-sized particles, analogs of terrestrial snowflakes {\cite{hobbs_etal_1974}}. The physical properties of HCN do not preclude this type of process. From this perspective, Titan's troposphere could be the scene of ``exotic snowfalls'' composed of ``HCN-flakes'' (or C$_2$H$_2$-flakes, ...).
Even if CO$_2$ ``snowfalls'' have been considered in the case of Mars {\cite{forget_etal_1998}}, perhaps curiously this possibility has never been investigated from the point of view of microphysics in the context of Titan.\\ Finally, two {\it Cassini} instruments detected large molecules in Titan's thermosphere, with charge/mass ratios up to{\cite{waite_etal_2007}} $\sim 10,000$. In addition, the presence of polycyclic aromatic hydrocarbons above an altitude of $\sim 900$ km has also been suggested {\cite{lopez-puertas_etal_2013}}. These facts plead in favor of the presence of large molecules at ground level, analogs of terrestrial surfactants{\cite{stevenson_etal_2015a}}.\\ In summary, we have identified three sources of material that can end up at the surface of Titan's hydrocarbon seas: (1) the haze particles, (2) crystallized organics, (3) large molecules harboring at least one ``liquidophobic'' function. \section*{Existence and persistence of a floating film} Two distinct effects may be invoked when the floatability of an object is questioned: (1) the Archimedes buoyancy, (2) the effect of capillary processes.\\ In an idealized case, the only relevant parameter is the density of the monomer material compared to that of the sea liquid. In the absence of any wetting effect, the liquid penetrates the whole free volume of the aerosol. In such a situation, the fractal structure of aerosols cannot be invoked to introduce an ``effective density'' lower than that of the monomers. This is why we focus on the density of monomers. The latter are recognized to be formed by molecules harboring a large number of carbon and nitrogen atoms {\cite{gautier_etal_2014}}. As a first guess, we can adopt Earth fossil carbon forms, like oil or bitumen, as analogs for the organic matter of the monomers. Since petroleum industry products are made of complex mixtures of numerous species, their density is not unique; for oil{\cite{ancheyta_speight_2007}} it ranges between $0.8$ and $0.95$ g cm$^{-3}$.
During its descent to Titan's surface, the {\it Huygens} probe performed numerous measurements, and the ACP (Aerosol Collector and Pyrolyser)-GCMS (Gas Chromatograph and Mass Spectrometer) experiment analyzed the chemical composition of the collected aerosols{\cite{tomasko_2008b}}. Their nuclei were found to be made up of N-rich organics, without information about the molecular structure. The best estimates of their composition and density are, to date, provided by Titan's aerosol laboratory analogues, named ``tholins'' {\cite{sagan_etal_1992}}. The few available measurements may be classified into two categories: the high pressure experiments {\cite{horst_tolbert_2013}} producing relatively light materials with a density around $\sim 0.8$ g cm$^{-3}$, and the low pressure simulations{\cite{imanaka_etal_2012,brouet_etal_2016}} leading to heavier products, with a mean density in the range $1.3 - 1.4$ g cm$^{-3}$. For low pressure measurements, individual density determinations can be found down to{\cite{horst_tolbert_2013}} $0.4$ g cm$^{-3}$. Concerning ``exotic snows'', densities of solid organics that could be common at the surface of Titan may be found in the literature {\cite{cordier_etal_2016b}}: for the most abundant, and least soluble, HCN, the value should be $\sim 1.03$ g cm$^{-3}$.\\ Even if the chemical composition of Titan's seas is not known in detail, there is a general consensus that the main components are methane and ethane, with some amount of nitrogen {\cite{cordier_etal_2017a,legall_etal_2016}}. In Supplementary Table~1 we have gathered the densities of these species in conditions relevant for Titan's polar surface. A quick inspection of this table shows that, as a general tendency, the monomer density should remain above the expected value for the liquid. In such circumstances, the majority of aerosol particles, or exotic ``snowflakes'', should sink to the depths of the hydrocarbon seas.
This does not exclude the formation of a floating deposit, supported only by Archimedes buoyancy, and formed by the lightest particles of the density distribution.\\ We turn now our attention to capillary processes. It is well known that small bodies heavier than the supporting liquid, including those made of iron, can float under the influence of the so-called capillary force. Even some animals, insects of the family \textit{Gerridae} (water striders), take advantage of this kind of force to live at the surface of water {\cite{gao_jiang_2004}}. As recalled in Methods, the action of a liquid on a tiny object is a function of two parameters: (1) the surface tension $\sigma$ (N m$^{-1}$), an intrinsic property of the liquid-air interface; (2) the contact angle $\theta_c$, which represents the interaction between the liquid and the material of the considered object. For $0^{\rm o} \le \theta_c \le 90^{\rm o}$ the liquid is rather ``attracted'' by the solid material, which is then called ``liquidophilic''. When $90^{\rm o} \le \theta_c \le 180^{\rm o}$, the material is liquid-repellent and is named ``liquidophobic''. Clearly, an aerosol monomer, seen as a small sphere, may be maintained at the surface solely if the monomer is made of ``liquidophobic'' matter. A simple derivation (see Methods), based on a balance between weight and capillary forces, leads to the layer thickness \begin{equation}\label{ethick} e \simeq \frac{3 \, \sigma \, |\cos \theta_{\rm c}|}{r \, g_{\rm Tit} \, \rho_{\rm mono}} \end{equation} In the case of a perfectly non-wetting liquid (\textit{i.e.} for $\theta_{\rm c}= 180^{\rm o}$), a numerical estimate can be obtained for $e$: assuming a surface tension $\sigma$ fixed to $2 \times 10^{-2}$ N m$^{-1}$ (see Supplementary Table 2), a radius of monomers of $50$ nm and taking $\rho_{\rm mono} = 800$ kg m$^{-3}$, we find $e \simeq 500$ m. Such an unrealistically large value is the signature of the existence of strong limiting factors.
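Plugging the quoted numbers into Eq.~(\ref{ethick}) makes the point explicit. The sketch below is illustrative: the exact result depends on the numerical prefactor of the force balance detailed in Methods, but the order of magnitude, hundreds of meters or more, is what matters:

```python
import math

# Evaluation of the capillarity-supported layer thickness, Eq. (1),
# for a perfectly non-wetting liquid (theta_c = 180 deg).
sigma = 2.0e-2       # N m^-1, surface tension (Supplementary Table 2)
theta_c = math.pi    # rad, perfect non-wetting: |cos(theta_c)| = 1
r_mono = 50.0e-9     # m, monomer radius
g_titan = 1.352      # m s^-2, surface gravity of Titan
rho_mono = 800.0     # kg m^-3, monomer density

e = 3.0 * sigma * abs(math.cos(theta_c)) / (r_mono * g_titan * rho_mono)
print(f"supported thickness: {e:.0f} m")  # hundreds of meters or more
```

Any thickness on this scale is absurd compared with the nanometer-per-year supply, which is precisely the point made in the text.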
The most obvious limitations are the aerosol sedimentation rate, representing a few nanometers per year, and the idealized poor wettability. This leads naturally to a discussion of the expected contact angles.\\ On Titan, maritime surfaces are not the only place for possible interaction between liquids and solid particles. This kind of interaction is known to play a crucial role in the formation of cloud particles, which are generated by heterogeneous nucleation{\cite{sanchezlavega}}. On the Earth, heterogeneous nucleation on micronic and sub-micronic aerosols is the dominant mechanism in forming liquid cloud droplets. In the context of Titan, given the large abundance of aerosols, a similar microphysics has been proposed for the nucleation of liquid methane droplets or small ethane crystals{\cite{barth_toon_2006}}. In these approaches, the contact angle plays a key role that can be easily understood: the more wettable the aerosols, the more the liquid can spread over their surface and favor the formation of a liquid ``envelope''. Unfortunately, contact angles are very unconstrained parameters {\cite{rodriguez_etal_2003}}: what can be found in the literature is either not perfectly relevant{\cite{curtis_etal_2005,curtis_etal_2008}}, or comes from informal personal communication{\cite{barth_toon_2006}}. Except for the nucleation of solid butane, we did not find peer-reviewed publications providing $\theta_{\rm c}$ values for nucleation onto ``tholins''. Cloud formation models then include values close to{\cite{barth_toon_2006}} $\theta_{\rm c} \simeq 0^{\rm o}$, which obviously favors the formation of droplets onto organic aerosol particles.
In other words, microphysics models of clouds assume the existence of ``liquidophilic'' aerosols to play the role of condensation nuclei, whereas ``liquidophobic'' particles are required to form a floating layer over the surface of Titan's lakes.\\ Let us now examine whether some clues can be found about the wettability of aerosols. The actual chemical composition of Titan's aerosols is not known. Many teams have published the global stoichiometry C$_x$H$_y$N$_z$ of ``tholins'', whose spectral signature is compatible with what is observed at the Saturn moon. Nonetheless, spectroscopy is sensitive neither to the detailed chemical composition nor to the exact composition and physical state of the aerosol surface. These surface properties determine the ``liquidophilic'' or ``liquidophobic'' character of aerosols {\cite{pruppacher_klett_1978}}. During their fall to Titan's ground, particles may also undergo a variety of alterations, due to charging, photolysis or radiolysis {\cite{courtin_etal_2015,couturier-tamburelli_etal_2018a}}. This ``aging'' changes the surface properties of aerosols. In this way, their wettability may evolve before they get to the sea surface. Laboratory measurements show a very low solubility of tholins in non-polar solvents {\cite{carrasco_etal_2009}}. A high solubility is generally recognized to be associated with a liquidophilic character. Thus, the low solubility of tholins may be regarded as an indication of liquidophobia. Similarly, HCN snow may also float due to strong liquidophobic properties.\\ Considering the very likely existence of a rich variety of aerosol surface properties, we propose the presence of both ``liquidophilic'' and ``liquidophobic'' aerosols in Titan's atmosphere. The particles of the first family sink when they reach the maritime surface, even if they arrive in a ``dry state''. The particles belonging to the second category do not participate in cloud formation and float when touching the surface of the sea.
Such particles are good candidates for building up a more or less thick layer at the surface of hydrocarbon seas. In a sense, the surface of Titan's lakes/seas could retain liquidophobic material.\\ Since the precipitation rates of atmospheric products are small, one might wonder how an organic microlayer can build up and be maintained, rather than being destroyed by weathering. Titan's surface is a dynamic environment: wind, rain, fluvial runoff or tides could impede such a formation. It is well known that saltation of particles is much more difficult from a wet substrate than from a dry surface{\cite{lorenz_2014}}. Thus, if lands surrounding seas are wet, the wind should leave organic dusty material lying over these terrains, and similarly should not rip up a floating marine film. On the contrary, if polar lands are dry, saltation should be easy and the wind could transport material to the sea surface, which could behave as a ``wet trap'', leading to an accumulation process. Methane rain droplets, or nitrogen bubbles coming from the seabed{\cite{cordier_etal_2017a,cordier_ligerbelair_2018}}, may locally disrupt the layer. From basic physics, the momenta associated with the impact of such objects can be estimated respectively at $5 \times 10^{-2}$ kg m s$^{-1}$ and $1$ kg m s$^{-1}$, revealing that bubbles could be more efficient than rain droplets. Nevertheless, a more specific conclusion cannot be drawn since the mechanical properties of the films are not known. But bubbles and droplets have a significant difference: droplets bring to the seas material washed out along their fall. Indeed, droplets transport the solid particles on which they have nucleated. If rainfalls are heavy enough, fluvial run-off can also favor the appearance of surface layers by transporting material from lands to seas. Finally, according to numerical simulations{\cite{vincent_etal_2016}}, Titan's seas undergo a moderate tidal activity.
Except along the shores, where material could be periodically deposited and returned to the liquid, the tides should not alter any large film, due to their large ``wavelength''. Nevertheless, relatively strong tidal currents through the straits may generate some wave fields{\cite{kurata_etal_2016}}.\\ If lakes behave like a trap, an almost continuous shore-to-shore deposit can be expected. On the contrary, where only a partial coverage is at work, some temporal and spatial variability in surface properties could be introduced. Even if it seems difficult to destroy floating layers by wind, rain, run-off or tides, these effects could induce migration or fragmentation of slicks. Intrinsic properties of the floating material could also induce some evolution. Small floating objects {\cite{whitesides_boncheva_2002}} can make large structures by self-assembly processes driven by lateral capillary interactions. Finally, we stress that observations of specular reflections{\cite{soderblom_etal_2012}} over lakes are consistent with a partial and evolving film coverage. In the case of a shore-to-shore slick, a large range of refractive index values is compatible with glint observations, for which the photon fluxes are uncertain due to the lack of knowledge about the optical thickness of the hazy cap. \captionsetup{labelfont=bf} \begin{figure}[t] \begin{center} \includegraphics[angle=0, width=16 cm]{compa_Earth-Titan_3panels_1.eps} \caption[]{\label{compaET_y}Comparison of the wave damping efficiency, due to a floating film, between Titan and Earth conditions. The wave relative damping ratio $y$ caused by a monomolecular film deposited over the surface of a liquid is a function of the wavelength $\lambda$. The three panels correspond to different values of $\omega_{\rm d}$, which accounts for the relaxation time of the material forming the slick.
In each panel, three values (\textit{i.e.} $36.5$, $21.3$ and $7.3$ in $10^{-3}$ N m$^{-1}$, respectively in solid, dashed and dot-dashed lines) are considered for the coefficient of elasticity in compression $E_{0}$ of the film. The parameters concerning the Earth (blue lines) are the gravity $g= 9.81$ m s$^{-2}$, the surface tension $\sigma = 73$ mN m$^{-1}$, the viscosity $\eta = 10^{-3}$ Pa s, and the density $\rho= 10^{3}$ kg m$^{-3}$, values relevant for liquid water. In the case of Titan (red lines), we took $g= 1.352$ m s$^{-2}$ for the gravity, and values expected for liquid methane: $\sigma = 2 \times 10^{-2}$ N m$^{-1}$, $\eta = 2 \times 10^{-4}$ Pa s and $\rho= 452$ kg m$^{-3}$.} \end{center} \end{figure} \section*{Damping of Sea Waves by Surface Films} \label{damping} The first fully satisfying theoretical explanation of wave damping by surface films was published in the sixties {\cite{van_den_Tempel_van_de_Riet_1965}}. For a monomolecular film, the damping of a wave of initial amplitude $a_0$, after a propagation over a distance $x$, can be written \begin{equation} a(x)= a_0 \, \exp(-\Delta \, x) \end{equation} with $\Delta$ (m$^{-1}$), the damping coefficient, which depends on the wavelength $\lambda$. A ``clean surface'', {\it i.e.} free of slick, has a damping coefficient denoted $\Delta_0$ (see Methods). In order to characterize the damping effect of a supernatant film, it is usual to introduce the relative damping ratio defined as {\cite{alpers_huhnerfuss_1989}} \begin{equation} y(\lambda)=\Delta/\Delta_{0} \end{equation} This ratio depends on the intrinsic mechanical properties of the surface slick, which are represented by $E_0$ (N m$^{-1}$), the modulus of its coefficient of elasticity, and $\omega_d$ (s$^{-1}$), a parameter accounting for the relaxation time of the layer (see Methods for details).
In Fig.~\ref{compaET_y}, we have reported the variations of $y(\lambda)$, employing values representative of monomolecular films (\textit{e.g.} for hexadecanoic acid methyl ester $E_0= 4.5 \times 10^{-2}$ N m$^{-1}$ and $\omega_d= 22$ rad s$^{-1}$, while for oleic acid $E_0= 1.4 \times 10^{-2}$ N m$^{-1}$ and $\omega_d= 38$ rad s$^{-1}$). Not surprisingly, large viscoelastic moduli $E_0$ produce a strong damping effect; similarly, long relaxation times (\textit{i.e.} low frequencies $\omega_d$) lead to efficient damping. If we compare the Earth and Titan, the general tendency is at least a similar damping effect at short wavelengths, and a much stronger effect at longer wavelengths. The properties of the sea liquid also have their influence on wave formation. Except for the surface tension $\sigma$ (see Supplementary Fig.~2), which has a minor influence, all the other parameters tend to enhance the wave damping at Titan. Undoubtedly, the sea viscosity $\nu$ has the strongest effect: in our example, it multiplies the value of $y$ by a factor of $\sim 4$, corresponding to a factor of $\sim \exp 4 \simeq 55$ on the wave amplitude attenuation. According to this first approach, Titan seems more favorable than the Earth for a wave damping caused by a monomolecular film, because liquid hydrocarbons have a density and a viscosity smaller than those of liquid water.\\ A monomolecular film is the thinnest blanket that one could imagine, but thicker deposits are also conceivable. A formalism specifically adapted to these finite-thickness layers has also been developed (see Methods). In that more general frame, the relative damping ratio $y$ depends explicitly on the slick thickness $d$, and it increases firmly when $d$ becomes larger.
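The link between the damping ratio and the wave amplitude can be made explicit. Since $a(x)=a_0\exp(-y\,\Delta_0\,x)$, an increase of $y$ by about $4$ over a stretch where $\Delta_0\,x \sim 1$ (an assumption made purely for illustration) suppresses the amplitude by a factor $e^{4}$:

```python
import math

# Illustration of how a change in the relative damping ratio y maps onto
# wave amplitude suppression, with a(x) = a0 * exp(-y * Delta0 * x).
def suppression_factor(delta_y, delta0_times_x=1.0):
    """Amplitude suppression caused by increasing y by delta_y
    over a propagation stretch with the given Delta0 * x."""
    return math.exp(delta_y * delta0_times_x)

factor = suppression_factor(4.0)
print(f"amplitude suppressed by a factor ~{factor:.0f}")
```

This is the origin of the factor $\sim 55$ quoted in the text; the precise number depends on the propagation distance considered.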
Essentially, the results obtained with monomolecular films remain valid with thicker ones.\\ Common observations, and numerous academic studies, show that winds blowing over water result in the birth and growth of waves upon the sea surface {\cite{komen_etal_1994}}. The global picture of wave generation can be divided into three physical processes. First, turbulence in the wind produces random stress variations on the surface. These pressure and tangential shear fluctuations give rise to small wavelets, due to resonances in the wind-sea coupling {\cite{phillips_1957,miles_1957}}. Secondly, the wave amplitude is reinforced by the air flow, the pressure being maximum on the windward side of the crest and minimum on the leeward side {\cite{miles_1957}}. Finally, the waves start to interact with each other, exciting longer wavelength modes {\cite{komen_etal_1994}}. Many effects conspire to limit the wave growth in height and wavelength. For instance, the fetch length over which the wind blows and the so-called ``whitecapping'' affect the final spectrum of waves {\cite{komen_etal_1994}}.\\ It is worth noting that without the generation of the very first ripples, due to air turbulent eddies near the surface, the larger waves cannot be produced, and the surface of the ocean would remain mirror-smooth. An estimate of the wavelength $\lambda_r$ of these initial wavelets caused by resonances is given by {\cite{phillips_1957}} \begin{equation} \lambda_r = 2\pi \sqrt{\frac{\sigma}{\rho g}} \end{equation} with the notation already adopted in previous paragraphs. In the context of the Earth, this equation leads to $\lambda_r \simeq 1.7$ cm, whereas a transposition to Titan yields a similar value $\lambda_r \simeq 3.4$ cm.
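With the liquid parameters of Fig.~\ref{compaET_y}, the resonance wavelength can be evaluated for both worlds. A sketch (the slight offset from the quoted $3.4$ cm reflects the exact methane properties adopted):

```python
import math

# First resonance wavelets, lambda_r = 2*pi*sqrt(sigma / (rho * g)),
# using the liquid parameters of the figure caption.
def lambda_r(sigma, rho, g):
    """Wavelength of the first wind-driven wavelets (m)."""
    return 2.0 * math.pi * math.sqrt(sigma / (rho * g))

lam_earth = lambda_r(sigma=73.0e-3, rho=1.0e3, g=9.81)    # liquid water
lam_titan = lambda_r(sigma=2.0e-2, rho=452.0, g=1.352)    # liquid methane
print(f"Earth: {lam_earth*100:.1f} cm, Titan: {lam_titan*100:.1f} cm")
```

Both values fall in the few-centimeter range, which is the wavelength band where the film damping discussed above is most efficient.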
Our discussion about the damping rate of waves indicates that a very strong damping could occur at Titan seas, with a maximum efficiency around a wavelength of a few centimeters (see Fig.~\ref{influ_d_visc}), depending on the nature and actual properties of the floating deposit. Therefore, if the surface of a Titan sea were covered, at least partially, by such a film/slick, the onset of wave formation could be impeded, leading to the non-existence of waves at all larger wavelengths in the corresponding regions. \section*{\label{compatibility}Compatibility of a strong wave damping with observations} In this paper, we have considered the massive presence of aerosols and other organic products (large molecules, HCN crystals/snow flakes, ...) in the atmosphere of Titan, which sediment to the ground, where, in polar regions, hydrocarbon seas and lakes are observed. The formation of a more or less thin deposit at the sea surface appears to be plausible. As already mentioned, the off-specular infrared observations may not be in conflict with this scenario: (1) the deposit may be patchy, leaving areas of free liquid that can be wavy, and/or (2) the floating layer may itself produce these ``reflections'', if the local deposit has a kind of ``roughness'' at infrared wavelengths. Recently, a mechanism has been proposed to explain the occurrence of efficient RADAR reflectors at Ligeia Mare, one of Titan's main seas {\cite{hofgartner_etal_2014,hofgartner_etal_2016}}. These so-called ``Magic Islands'' could be produced by streams of nitrogen bubbles rising from the sea depths {\cite{cordier_etal_2017a,cordier_ligerbelair_2018}}. 
This scenario is not in conflict with the existence of a thin film at the sea surface: bubbles arriving at the surface could locally break the layer, and, at the same time, the RADAR-wave transparency of that slick would not prevent the observation of bubbles still in the volume of the sea liquid, as proposed.\\ This work has strongly highlighted the need for laboratory studies of the interactions between cryogenic liquids, relevant for Titan, and tholins. In particular, reliable contact angle determinations are fundamental for the behavior of hydrocarbon seas, together with the nucleation of liquid droplets within atmospheric microphysics processes. This new class of experiments includes studies of the surface states and compositions of tholin particles. As an extension of preliminary works {\cite{lorenz_etal_2005}}, wind-tunnels may also be used, at room temperature, with liquids and fine particles or floating films, analogous to what is expected on Titan.\\ Given its potentially crucial role in the carbon cycle, a floating film/slick could be an important target for possible future {\it in situ} explorations {\cite{hartwig_etal_2016}}. And, much more speculatively, it could harbor an original ``exobiological'' activity.\\ \def\sciam{Sci. Am.}\def\nature{Nature}\def\nat{Nature}\def\science{Science}\def\natastro{Nat. Astron.}\def\natgeo{Nat. Geosci.}\def\natcom{Nat. Commun.}\def\pnas{PNAS}\def\AnnderPhys{Ann. Phys. (Berl.)}\def\icarus{Icarus}\def\pss{Planet. Space Sci.}\def\planss{Planet. Space Sci.}\def\ssr{Space Sci. Rev.}\def\solsr{Sol. Syst. Res.}\def\expastro{Exp. Astron.}\def\jcis{J. Colloid Interface Sci.}\def\aap{A\&A}\def\apj{ApJ}\def\apjl{ApJL}\def\apjs{ApJS}\def\aj{AJ}\def\mnras{MNRAS}\def\araa{Annu. Rev. Astron. Astrophys.}\def\pasj{Publ. Astron. Soc. Jpn.}\def\apss{Astrophys. Space Sci.}\def\pasp{Publ. Astron. Soc. Pac.}\def\expastron{Exp. Astron.}\def\asr{Adv. Space Res.}\def\astrobiol{Astrobiology}\def\areps{Annu. Rev. Earth Planet. 
Sci.}\def\georl{Geophys. Res. Lett.}\def\jgr{J. Geophys. Res.}\def\gca{Geochim. Cosmochim. Ac.}\def\epsl{Earth Planet. Sci. Lett.}\def\plasci{Planet. Sci.}\def\ggg{Geochem. Geophys. Geosyst.}\def\rmg{Rev. Mineral. Geochem.}\def\tpm{Transport Porous Med.}\def\philtrans{Phil. Trans.}\def\faradis{Farad. Discuss.}\def\jcis{J. Colloid Interface Sci.}\def\jfm{J. Fluid Mech.}\def\physflu{Phys. Fluids}\def\pachem{Pure Appl. Chem.}\def\jpcA{J. Phys. Chem. A}\def\chemrev{Chem. Rev.}\def\jced{J. Chem. Eng. Data}\def\fpe{Fluid Phase Equilibria}\def\iecr{Ind. Eng. Chem. Res.}\def\aichej{AIChE J.}\def\pt{Powder Technol.}\def\etfs{Exp. Therm. Fluid Sci.}\def\jcp{J. Chem. Phys.}
\section{Introduction} In Figure \ref{fig:teaser}, one person has a mistaken belief about their environment. Can you figure out who is mistaken? You likely can tell the woman is about to sit down because she incorrectly believes the chair is there. Although you can see the complete scene, the character inside the scene has an imperfect view of the world, causing an incorrect belief. \begin{figure*} \centering \includegraphics[width=\textwidth]{dataset_small.pdf} \vspace{-1.5em} \caption{\textbf{Visual Beliefs Dataset:} We introduce a new dataset of abstract scenes to study visual beliefs. We show five example scenes from our dataset. The \textbf{\textcolor{red}{red arrows}} indicate that a person has a false belief in that frame. Each scene (row) contains eight images, depicting a visual story when read left to right. The caption below each scene was collected during annotation for visualization purposes only.} \vspace{-1.5em} \label{fig:grid} \end{figure*} The ability to recognize when people have incorrect beliefs will enable several key applications in computer vision, such as in action understanding, robotics, and healthcare. For example, understanding beliefs of human drivers could improve the safety of autonomous vehicles \cite{sadigh2016information}. Robots that understand human beliefs may have more fluid interactions with humans \cite{koppula2013learning}. Understanding beliefs may provide clues for anticipating human actions \cite{kitani2012activity,vondrickanticipating} and generate better visual humor \cite{humor}. How do we give machines the capability to understand what a person believes? \enlargethispage{-5.5cm} In this paper, we introduce the novel problem of recognizing incorrect beliefs in short visual stories. We propose two new tasks aimed at understanding which people have false beliefs. Given a visual story, we aim to recognize \textbf{who} is mistaken and \textbf{when} they are mistaken. 
For example, in Figure \ref{fig:teaser}, the woman is mistaken in the third frame. To study this problem, we present a dataset of abstract scenes \cite{zitnick2013bringing} that depict visual stories of people in various types of everyday situations. In each story, one or more people have mistaken beliefs, and we seek to recognize these people. Abstract scenes are ideal for studying this problem because we can economically create large datasets that focus on human activities, such as ones influenced by people's beliefs. Moreover, while abstract scenes are synthetic, the data models behavior at a high level and can be applied to natural images with domain adaptation. The scenarios in our dataset are diverse and characters are mistaken for many reasons, such as occlusion or unexpected actions. We investigate models for learning to recognize mistaken characters in short sequences. Our model uses person-centric representations of scenes and combines information across several timesteps to better recognize mistaken characters. Experiments show that our model learns to recognize people's mistaken beliefs better than baselines, suggesting that it is possible to make progress on inferring people's beliefs. Although we only train our model to predict mistaken beliefs, experiments suggest that it internally learns important cues for beliefs, such as human gaze or time's arrow. The first contribution of this paper is introducing two new computer vision tasks for recognizing beliefs in images. The second contribution is a new dataset for training and evaluating models for recognizing beliefs. The third contribution is a model for starting to tackle these belief tasks. Code, data, and models will be available at \mbox{\small{\url{http://people.csail.mit.edu/bce/mistaken/}}}. \section{Related Work} \label{sec:related-work} \textbf{Beliefs and Intentions:} Our paper builds on several works that study beliefs of people. 
Shepherd \cite{psych-gaze} studies humans' \emph{theory of mind}, their reasoning about beliefs of others. He notes that gaze-following is important for this reasoning and failing to solve this problem may indicate a disability. Scassellati \cite{scassellati2002theory} studies theory of mind in human-robot interaction. Xie et al.\ \cite{xie2013inferring} explore people's intentions in real-world surveillance footage. Baker et al.\ \cite{bayesian-tom} propose a Bayesian model for learning beliefs based on a POMDP. Zhao et al.\ \cite{tom-hri} propose using probabilistic programming to infer the beliefs and desires of people in RGBD videos. We focus on learning the beliefs of characters directly from visual scenes. \begin{figure*} \centering \begin{subfigure}[t]{0.28\textwidth} \centering \captionsetup{width=.75\linewidth} \includegraphics[width=0.85\textwidth]{character_name.png} \caption{\textbf{Character ID:} For the 20 characters in our dataset, we show the probability they are mistaken in frames where each is present.} \end{subfigure}% ~ \begin{subfigure}[t]{0.11\textwidth} \centering \captionsetup{width=1.7\linewidth,oneside,margin={-1.5em,-1.5em}} \includegraphics[width=0.85\textwidth]{character_expression.png} \caption{\textbf{Facial expressions:} We show the probability a character is mistaken given their facial expression.} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \captionsetup{width=.6\linewidth,oneside,margin={3.2em,0.4em}} \includegraphics[width=0.9\textwidth]{time_hist.png} \caption{\textbf{Time:} People tend to be mistaken towards the end of the scene.} \end{subfigure} ~ \begin{subfigure}[t]{0.33\textwidth} \centering \captionsetup{width=1.0\linewidth} \includegraphics[width=0.9\textwidth]{location_hist.png} \caption{\textbf{Location:} We show the $(x, y)$ location of every character in every frame. 
The distribution for mistaken characters and not-mistaken characters appears similar.} \end{subfigure} \vspace{-0.5em} \caption{\textbf{Dataset Statistics:} We summarize biases of mistaken characters. Our method performs better than baselines that exploit these biases (see Table~\ref{table:main_experiment}). \label{fig:data}} \vspace{-1.5em} \end{figure*} \textbf{Common Sense:} Our work complements efforts to learn common sense. Yatskar et al.\ \cite{yatskarstating} extract common sense from object detection corpora, while Chen et al.\ \cite{chen2013neil} learn visual common sense by browsing the Internet. Vedantam et al.\ \cite{abstract-commonsense} use abstract images to learn how people, animals and objects are likely to interact. Recent work \cite{block-towers, galileo, pinto2016curious} has learned physical common sense given videos of colliding objects. Finally, Alahi et al.\ \cite{alahisocial} explore understanding social interactions in crowded spaces, and Prabhakar et al.\ \cite{prabhakar2010temporal} study causality in unconstrained video to understand social games. In this work, we study the subset of common sense related to visual beliefs. \textbf{Activity Understanding:} Our work is related to activity understanding in vision \cite{caba2015activitynet,wang2011action,chao2015hico,pirsiavash2012detecting,fathi2012learning}. Systems for understanding human actions typically leverage a variety of cues, such as context, pose, or gaze \cite{recasens2015they}. Our work complements action understanding in two ways. First, we study visual beliefs, which may be a useful signal for better understanding people's activities. Second, recognizing visual beliefs often requires an understanding of people's actions. \textbf{Abstract Images:} We take advantage of abstract images pioneered by Zitnick et al.\ \cite{zitnick2013bringing}, which have received wide interest in computer vision for studying high-level vision tasks. 
Chandrasekaran et al.\ \cite{humor} use abstract images to detect visual humor. Zhang et al.\ \cite{zhang2015yin} explore binary question-answering in abstract scenes, and Fouhey et al.\ \cite{fouhey2014predicting} learn to predict object dynamics in clip art. While these approaches reason about image-level features and semantics, our approach looks at character-level features. Importantly, two characters in the same scene can have different beliefs about the world, so each character should have a different character-level feature. Additionally, we extend this previous work to multi-frame scenes depicting visual stories. \textbf{Transfer:} After we learn to recognize mistaken characters in abstract scenes, one could use domain adaptation \cite{fouhey2014predicting, castrejon2016learning} to apply our approach to natural images. However, this is orthogonal to the goal of this paper. Additionally, Ganin et al.\ \cite{ganin2014unsupervised} and Tzeng et al.\ \cite{tzeng2015simultaneous} show how to perform unsupervised domain adaptation, which is relevant to our setting because annotating natural videos is costly. \section{Dataset} \label{sec:dataset} We collected a dataset of abstract scenes to study beliefs of characters. Each scene in our dataset consists of a sequence of $8$ frames showing an everyday situation. One or more people believe something incorrectly about their environment in each scene. A person may have a false belief for many reasons, including occlusion and misinterpreting intentions. Although the characters inside the scenes do not know if they are mistaken, we designed the dataset so that third-party viewers can clearly recognize who is mistaken. Our dataset complements existing abstract scene datasets. In contrast to the VQA dataset \cite{vqa}, frames in our dataset are grouped into scenes telling stories over several timesteps, and characters in our dataset frequently have mistaken beliefs. 
We believe abstract scenes provide a good benchmark for studying visual beliefs. We originally tried to collect a dataset of real videos containing people with false beliefs (such as suspense movies), but we encountered significant difficulty scaling up dataset collection. While many real videos contain characters with mistaken beliefs, these beliefs are very complex. This complexity made large-scale annotation expensive. We believe abstract scenes are suitable for understanding visual beliefs today because they allow the field to gradually scale up complexity on this important problem. To recognize mistaken beliefs in real videos, one could always apply domain transfer (e.g.~\cite{ganin2014unsupervised}) to adapt our abstract scenes model to real videos. However, we must first recognize false beliefs in abstract scenes. We use our dataset for both learning and evaluation of models for detecting mistaken characters in scenes. We show a few examples of our dataset in Figure \ref{fig:grid} and summarize statistics in Figure \ref{fig:data}. We collected this dataset on Mechanical Turk \cite{sorokin2008utility}. First, we ask workers to illustrate scenes. Then, we ask workers to annotate mistaken characters. In the remainder of this section, we describe how we built this dataset. The appendix contains additional details. \subsection{Collecting Scenes} In the illustration step, workers dragged and dropped clipart people and objects into eight frames to tell a coherent story. The interface was a modified version of \cite{vqa}. We told workers that some frame should contain a character who has a mistaken belief about the world. In addition to illustrating these eight frames, workers also wrote a scene-level description and eight frame-level descriptions. These descriptions were used during the annotation step, but were not used to train or evaluate our models. \subsection{Annotation} In the annotation step, the goal was to label which characters have mistaken beliefs. 
We hired workers to review the previously illustrated scenes and write one yes/no question for each frame. For each frame, workers wrote the true answer to the question and the answer according to each character. We labeled a character as mistaken if their answer was different from the true answer. In total, we collected 1,496 scenes, 1,213 of which passed our qualification standards. These scenes were the collective effort of 215 workers. On average, each frame contains 1.71 characters; characters are mistaken in 23.65\% of frames. A pool of 237 workers annotated each scene twice. The labels for whether a character was mistaken were consistent between workers 71.98\% of the time, indicating that in some scenes it was unclear whether a character was mistaken. In this paper, we only consider scenes where characters are clearly mistaken or not. \subsection{Quality Control} We used three methods to ensure we collected realistic and diverse scenes. First, workers completed qualification quizzes before starting the illustration and annotation steps. In the illustration quiz, workers identified good and bad scenes. In the annotation quiz, workers filled in characters' answers for a scene with preselected questions. These quizzes forced workers to think about the beliefs of characters. Adding these quizzes significantly increased the quality of our data as compared to a pilot experiment. Second, the scene background and subset of available people, animals, and objects were randomly selected for each worker, ensuring that workers could not illustrate the same scene twice. Third, we manually reviewed the first scene illustrated by each worker. If the scene was incoherent or did not contain a mistaken character, we disallowed the worker from illustrating more scenes. \subsection{What Causes Mistaken Beliefs?} Figure \ref{fig:grid} shows a few scenes from our dataset that highlight different types of mistaken beliefs. 
In the first scene, the woman is mistaken because the dog is \textbf{occluded} behind the couch, and because she cannot see actions \textbf{outside her field of view}. In the second scene, the woman falsely accuses the boy of breaking the painting because she cannot observe events when she is \textbf{not present}. The girl in the third scene mistakenly assumes the boy can safely get off the teeter totter because of her \textbf{faulty reasoning about physics}. In the fourth scene, the boy wearing a red shirt \textbf{misinterprets the intentions} of the other boy. In the last scene, the woman wearing the red shirt lacks the \textbf{common sense} that some mushrooms are poisonous. Recognizing mistaken characters requires detecting each of these types of beliefs. \section{Belief Tasks} \label{sec:tasks} We study two tasks for recognizing mistaken people: \textbf{Task 1: Who is mistaken?} Given a scene and a character, the goal is to predict whether the character is mistaken in any frame. This task has several applications in identifying people who may be confused or unaware of danger. \textbf{Task 2: When are they mistaken?} Given a frame, the goal is to predict whether any character is mistaken in this frame. This task has applications in identifying when people might be confused in situations where it is not possible to know who is confused, such as in a crowd. \textbf{Joint Task:} We also explore a joint task where we seek to simultaneously recognize who is mistaken as well as localize when they are mistaken in time. \section{Method} \label{sec:method} We now describe an approach for predicting who is mistaken and when they are mistaken. Recognizing mistaken characters requires looking beyond a single frame; knowledge of the past or the future can provide important signals for recognizing mistaken beliefs in the present. 
For example, in the second scene of Figure \ref{fig:grid}, a model must see that the woman was not present when the girl broke the painting to understand why she falsely accused the boy. Our model for detecting mistaken characters will look at the past, present, and future. The model must also understand what a person may know and what they might not. To detect a mistaken person, the model should determine that the scene is different from what the person believes. \subsection{Person-Centric Representation} Before predicting whether a character is mistaken, we must tell our model which character to focus on. We use a \textbf{person-centric} representation of the world, where the model takes the perspective of an outside observer focusing on a specific character. For each frame in the scene, we center the frame at the head of the specified character. We also flip the frame so the specified character always faces left. For example, in Figure \ref{fig:egocentric}, the frame in the upper left can be viewed from each of the three characters' perspectives. Alternative approaches that remove parts of the frame outside the character's field of view may struggle to reason about what the character cannot see. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{egocentric_small.pdf} \vspace{-0.5em} \caption{\textbf{Person-Centric Representation:} We use a visual representation that focuses on the character of interest.} \vspace{-1.5em} \label{fig:egocentric} \end{figure} \subsection{Visual Features} \label{sec:visual-features} We use a frame-wise approach by extracting visual features for each frame and concatenating them temporally to create a time-series. We extract visual features from the person-centric images using the AlexNet convolutional network \cite{krizhevsky2012imagenet} trained on ImageNet \cite{deng2009imagenet}. We use activations from POOL5, and further downsample by a factor of two. The resulting feature has size $(256, 12, 21)$. 
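The person-centric centering and flipping described above can be sketched in a few lines of numpy. This is an illustration, not the paper's code: the head coordinates, the zero-padding behavior, and the availability of a facing-direction flag from the clipart metadata are all assumptions here.

```python
import numpy as np

def person_centric(frame, head_xy, facing_left):
    """Center `frame` (H, W, C) on the character's head and mirror it so the
    character always faces left. Out-of-bounds regions are zero-padded."""
    H, W, _ = frame.shape
    x, y = head_xy  # pixel position of the character's head
    # Pad by a full frame on each side so any crop window stays in bounds.
    padded = np.pad(frame, ((H, H), (W, W), (0, 0)), mode="constant")
    # Crop an (H, W) window whose center lands on the head.
    top = y + H - H // 2
    left = x + W - W // 2
    out = padded[top:top + H, left:left + W]
    if not facing_left:
        out = out[:, ::-1]  # mirror so the character faces left
    return out
```

Each character in a frame gets its own centered-and-flipped view, so two characters in the same scene yield different inputs to the classifier.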
Moreover, although the features we use are trained on natural images (i.e.\ ImageNet), we successfully used them for abstract scenes, possibly because of the high rendering quality. \subsection{Learning} To learn to predict whether a person is mistaken or not, we can train a regularized convolutional logistic regression model, supervised by annotations from our training set. Suppose our image sequences are of length $T$ and our features are $D$ dimensional. Let $\phi(x_i, p_j) \in \mathbb{R}^{T\times D}$ represent the features for sequence $x_i$ for person $p_j$ and $y_{ij} \in \{0, 1\}^T$ be our binary target vector, indicating whether person $p_j$ is mistaken in each frame of sequence $x_i$. Our vector of predictions is $\widehat{y}_{i, j} \in \mathbb{R}^T$. We minimize the cross-entropy objective: \begin{equation} \begin{aligned} &\min_w \; -\sum_{i, j, t} \left( y_{i,j}^{t} \log(\hat{y}_{i,j}^{t}) + (1-y_{i,j}^{t}) \log(1-\hat{y}_{i,j}^{t}) \right) \\ &\textrm{where} \quad \hat{y}_{i,j}^{t} = \sigma\left( \left( w \ast \phi(x_i, p_j)\right)^{t} + b \right) \end{aligned} \end{equation} The learned weight vector $w \in \mathbb{R}^{K\times D}$ represents the convolutional kernel, where parameter $K$ specifies the temporal width; $b \in \mathbb{R}$ is the learned bias, and $\sigma$ denotes the logistic sigmoid. For simplicity, we have omitted the L2 penalty on $w$. The superscript $( \cdot )^{t}$ gives the entry of a vector corresponding to frame $t$ in a scene. We denote convolution as $\ast$, which is performed temporally. To handle border effects, we pad these features with zeros. The convolutional structure of our model encodes our prior that characters' beliefs are temporally invariant. \subsection{Who and When} We tackle two tasks related to beliefs: predict who is mistaken and when they are mistaken. We train a single model that can be used for both tasks. Given a sequence $x_i^t$ centered at time $t$ and a person $p_j$ in the sequence, we train a model to estimate whether person $p_j$ is mistaken at time $t$. 
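A minimal numpy sketch of this convolutional classifier for a single (scene, person) pair follows. The shapes are hypothetical and plain numpy stands in for the actual deep-learning implementation; only the zero-padded temporal convolution, sigmoid, and cross-entropy structure are taken from the objective above.

```python
import numpy as np

def predict_mistaken(phi, w, b):
    """Per-frame mistaken-belief probabilities.
    phi: (T, D) frame features; w: (K, D) temporal kernel; b: scalar bias.
    The sequence is zero-padded so the output also has length T."""
    T, _ = phi.shape
    K, _ = w.shape
    pad = K // 2
    phi_pad = np.pad(phi, ((pad, pad), (0, 0)))
    logits = np.array([np.sum(phi_pad[t:t + K] * w) + b for t in range(T)])
    return 1.0 / (1.0 + np.exp(-logits))  # logistic sigmoid

def bce_loss(y_hat, y):
    """Binary cross-entropy, the objective minimized above (L2 penalty omitted)."""
    eps = 1e-9
    return -np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
```

In this sketch, the scene-level ``who'' and frame-level ``when'' answers would be obtained by taking maxima of `predict_mistaken` outputs over time or over the characters present.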
To answer the who question, we marginalize the classifier response across time. Likewise, to answer the when question, we marginalize the classifier response across people. \subsection{Implementation Details} We extracted image features using Caffe \cite{jia2014caffe} and we used Keras with Theano \cite{bastien2012theano} for learning. To optimize the weights, we used Adam \cite{kingma2014adam}, with a learning rate of $10^{-5}$ and a batch size of 32. We set the temporal kernel width $K=7$. We added weight decay with parameter 1, and stopped training after the validation accuracy had stopped increasing for 3 consecutive iterations. Weight decay and downsampling image features helped prevent overfitting. \section{Experiments} \label{sec:experiments} We analyze several models on our dataset of abstract scenes. We evaluate each model on the ``who'' task, the ``when'' task, and the joint ``who + when'' task. \begin{table} \centering \begin{tabular}{l|c|c|c} & \multicolumn{3}{c}{Task} \\ Method & Who+When & Who & When\\ \hline Chance & 50 & 50 & 50\\ Time & 62.9 (1.9) & 52.4 (1.8) & 64.3 (2.2) \\ Pose & 51.9 (2.1) & 50.3 (3.5) & 54.8 (1.9) \\ Time+Pose & 60.6 (2.0) & 51.6 (1.2) & 61.2 (1.9)\\ Facial Expression& 50.1 (1.9) & 57.4 (5.1) & 52.9 (2.4) \\ Character ID & 54.0 (2.1) & 61.1 (5.4) & 53.4 (2.4) \\ Present & 64.5 (2.1) & 54.1 (6.7) & 66.1 (2.4) \\ Single Image & 61.1 (1.7) & 59.7 (3.3) & 62.0 (2.0)\\ Multiple Image & \textbf{66.6 (1.8)} & \textbf{64.1 (2.8)} & \textbf{67.5 (1.8)} \\ \end{tabular} \vspace{-0.5em} \caption{\textbf{Quantitative Evaluation:} We evaluate the accuracy of our model versus various baselines on the who task, the when task, and the joint task. We report classification accuracy; parentheses show standard deviations.} \vspace{-1em} \label{table:main_experiment} \end{table} \subsection{Experimental Setup} We trained each model on the joint task: given a character and a frame, classify whether this character is mistaken in this frame. 
Before training, we balance the dataset by resampling so 50\% of training examples have a mistaken character. We randomly divide the dataset into training/validation/testing splits with sizes 80\%/10\%/10\%. For the experiments in Table \ref{table:main_experiment}, we repeat each experiment 20 times with different splits, and report the mean and standard deviation of the accuracies. For the numbers in Table \ref{table:corrupt}, we only repeat each experiment six times due to cost. \subsection{Baselines} We used seven baseline models to study the biases in our dataset, including those shown in Figure~\ref{fig:data}. We fit a kernelized SVM (RBF kernel) to the three baselines using Time and Pose, use logistic regression for the Single Image model, and use convolutional logistic regression for the Facial Expression, Character ID, and Present baselines. \textbf{Time:} This model uses only the time of the frame within the scene, represented as a fraction between 0 and 1. \textbf{Pose:} This model uses only the pose of the indicated character. Pose includes the $(x, y)$ position of the character, as well as a boolean indicator of whether the character is looking left or right. The $(x, y)$ coordinates are normalized to be in the interval $[0, 1]$. \textbf{Time + Pose:} This model combines the features from the Time model and the Pose model. \textbf{Facial Expression:} This model is given only the character's facial expression (encoded as a 1-hot vector). \textbf{Character ID:} This model is given only the character's identity (encoded as a 1-hot vector). \textbf{Present}: Each image is replaced by one bit indicating whether the character of interest is present in this frame. To handle border cases, we add another bit to the feature to indicate whether it is padded. \textbf{Single Image:} This model only looks at the present frame. It is equivalent to our model when $K=1$. 
\subsection{Who is mistaken?} In this experiment, each model is given a scene and a character, and must determine whether the character is mistaken in any frame. The (scene, character) pairs are randomly sampled so 50\% of pairs contain a mistaken character. If our model only recognized unnatural scenes and ignored the character of interest, it would perform at chance. We evaluate the model's decision function on each frame in the scene. For the SVM-based baseline models, each prediction is the signed distance from the separating hyperplane; for the models that use logistic regression, each prediction is a value in the interval $(0, 1)$. We take the maximum of these frame-level predictions as the model's scene-level prediction. To obtain a binary decision, we threshold this scene-level prediction (at 0 for the SVM models, and at 0.5 for the logistic regression models). \begin{table} \centering \begin{tabular}{l|c|c|c} & \multicolumn{3}{c}{Task} \\ Method & Who+When & Who & When \\ \hline Chance & 50 & 50 & 50 \\ Multiple Image & 66.6 (1.8) & 64.1 (2.8) & 67.5 (1.8) \\ \hline Flipped & 54.5 (1.8) & 52.5 (1.7) & 55.8 (2.4) \\ Centered & 62.4 (2.5) & 55.6 (3.0) & 63.0 (2.4) \\ Rewind & 57.4 (2.8) & 61.4 (3.6) & 57.3 (1.8) \end{tabular} \vspace{-0.5em} \caption{\textbf{Ablation Analysis}: We study the impact of training on altered data and testing on unaltered data. During training, we modify data to flip the character's pose (Flipped), not use the person-centric representation (Centered), and show the frames in reverse order (Rewind). The decrease in accuracy on each task indicates that pose, the person-centric representation, and the arrow of time are important parts of our model.} \vspace{-1em} \label{table:corrupt} \end{table} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{results.pdf} \vspace{-0.5em} \caption{\textbf{Example Results:} We show predictions from our model. The first three rows show correct predictions. 
Our model fails to detect mistaken characters in the last two scenes, which require reasoning about occlusion and physics.} \vspace{-0.5em} \label{fig:predictions} \end{figure*} The second column of Table \ref{table:main_experiment} shows that our Multiple Image model achieves a higher accuracy on the ``who'' task than the baselines. The Facial Expression, Character ID, and Single Image baselines perform better than chance, suggesting that information about the character of interest is important. Our Multiple Image model predicts who is mistaken more accurately than these baselines by also looking at past and future frames. \subsection{When are they mistaken?} \label{sec:when-experiment} In this experiment, each model predicts whether any character in a frame is mistaken. Frames are randomly sampled so 50\% contain mistaken characters. We evaluate the model's decision function on each character's person-centric representation of the scene. As in the ``who'' experiment, we aggregate predictions across characters by taking the maximum of the model's decision function. The third column of Table \ref{table:main_experiment} shows that the Time and Present baselines achieve high accuracies, indicating that temporal information is important for the when task. The Single Image model performs better than the Pose model, suggesting that the characters' interactions with the scene are important for recognizing mistaken beliefs. Finally, our Multiple Image model performs better than all baselines. \subsection{Joint Task: Who and When?} In this experiment, the goal is to predict whether a character is mistaken in a given frame. Frames are randomly sampled so 50\% of (frame, character) pairs contain a mistaken character. As shown in the first column of Table \ref{table:main_experiment}, our model achieves a higher accuracy on the joint task than the baselines. 
Similar to the ``when'' experiment in Section \ref{sec:when-experiment}, the Time and Present baselines achieve high accuracies on the joint task. The Pose baseline performs poorly, suggesting that the Time+Pose model likely ignores pose. Although pose is a poor feature for the ``who + when'' task, other features of a single image are important: the Single Image model performs well without knowing the position of the frame in the sequence. The Multiple Image model performs better than all baselines. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{ablative_small.pdf} \caption{\textbf{Predictions from Ablation Experiments:} We visualize our ablation experiments. The first and third rows show a normal scene, and the second and fourth rows show perturbed scenes. \textbf{Row 1:} A normal scene and predictions from our model. \textbf{Row 2:} We flip the boy's pose. In the last frame, the boy no longer sees the girl, so our model predicts he is still mistaken. \textbf{Row 3:} Another normal scene. \textbf{Row 4:} Predictions from the Rewind model make sense for the frames in the fourth row: the woman is mistaken in the second and third frames because she does not see the dog put the pie on the table, and therefore does not know how the pie appeared.} \vspace{-1em} \label{fig:ablative} \end{figure*} \subsection{Qualitative Results} Figure \ref{fig:predictions} shows our model's predictions on five scenes. \textbf{Row 1:} Our model correctly detects that the man is mistaken in the third frame when the girl is about to pull his chair from beneath him. In this scene, the man is mistaken because he cannot see the girl's actions behind him. \textbf{Row 2:} Our model correctly predicts that the girl is mistaken in the second and third frames as she cannot see the man take her bike. Our model incorrectly predicts that the man is also mistaken in the third frame. 
Perhaps our model has learned that a character is likely to be mistaken when another character is performing actions behind them. \textbf{Row 3:} Our model correctly identifies the boy wearing a white shirt as mistaken in the third frame. \textbf{Row 4:} The man plays a prank on the girl by hiding a piece of corn beneath a pillow. Our model incorrectly predicts that the man is mistaken, likely because he cannot see the actions of the girl behind him. Our model incorrectly predicts that the girl is not mistaken in the third frame, perhaps because the corn is occluded behind the pillow. Our model might think that the corn disappeared when it became occluded. Better models for visual humor could improve our results. \vspace{1em} \textbf{Row 5:} We show another failure case in which a man places a basket on the see-saw, leaving the boy stranded. Here, our model incorrectly predicts that the boy has a misbelief in the first frame, but does not have a misbelief in the third frame. Understanding this situation requires knowledge of basic physics, which our model currently lacks. Advances in physical understanding may improve reasoning about visual beliefs. \subsection{What has it learned?} How does our model recognize mistaken characters? In this section, we study some key questions about what our model has learned. \emph{Does it only detect unusual frames?} Our experiments suggest not. A model for detecting unusual frames would perform well on the ``when'' task, but would be unable to do the ``who'' task. The Time and Present baselines do well on the ``when'' task but poorly on the ``who'' task, suggesting that these baselines only detect unusual frames. Our model performs significantly better than chance on the ``who'' task, indicating that it does more than detect unusual frames.
\emph{How important is our person-centric representation?} We tested the impact of our person-centric representation by training a \textbf{Centered} version of our Multiple Image model without using the person-centric representation for each character. As shown in Table~\ref{table:corrupt}, the Centered model performs well on the when task. With no indication of the character of interest, the Centered model performs much worse than our model on the who task, suggesting that our person-centric representation is an important piece of our model. \emph{Does it do gaze following?} Given that humans use gaze following to reason about the beliefs of others \cite{psych-gaze}, we analyze whether our model started to learn gaze following cues. We trained a \textbf{Flipped} variation on our Multiple Image model that flipped the character's pose during training but not during evaluation.\footnote{We also removed the character of interest from the frame to avoid creating unrealistic images. For example, if we flipped a character sitting on a chair, his limbs would now extend through the back of the chair. We also confirmed that removing the character of interest from our model did not degrade its performance.} This Flipped model performs worse than our model on the three tasks, as shown in Table \ref{table:corrupt}. This suggests the model is internally learning to use gaze \cite{psych-gaze} without us supervising it to do so. In Figure \ref{fig:ablative}, the top two rows compare predictions made by our original model and the Flipped variation. The predictions made by the Flipped model are consistent with a world where people see from the back of their heads! \emph{How does it combine information across frames? Does it distinguish between past and future?} Our Multiple Image model outperforms the Single Image baseline, so it must combine information across multiple frames. To investigate how it does this, we ran time backwards during training and forwards during testing. 
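The Rewind manipulation just described can be sketched as follows. This is an illustrative stand-in, not the paper's training code; the only operation is reversing the temporal order of the frame features during training.

```python
# Illustrative sketch (not the paper's code) of the Rewind ablation: during
# training the frame sequence is reversed, while evaluation uses the normal
# temporal order.

def make_example(frames, train_rewind=False):
    """Return the frame sequence fed to the model.

    frames: list of per-frame features, ordered first -> last.
    train_rewind: if True (Rewind ablation, training only), reverse time.
    """
    return list(reversed(frames)) if train_rewind else list(frames)
```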
Table~\ref{table:corrupt} shows that this \textbf{Rewind} model performs worse than our model, suggesting that our model treats the past and future differently. In Figure \ref{fig:ablative}, the bottom two rows compare predictions made by our original model and the Rewind variation. The predictions made by the Rewind model are logically consistent if the scene is read backwards (from right to left). This suggests that our model has learned that the arrow of time \cite{pickup2014seeing} is important. \vspace{-0.5em} \section{Discussion} \vspace{-0.5em} We propose a new computer vision task to recognize when people have mistaken beliefs about their environment. We believe this problem is important because understanding people's beliefs can enable many applications in action prediction, healthcare, and robotics. To spur progress, we introduce a new dataset of abstract scenes to study this problem. We present a model that uses multiple timesteps and a person-centric representation of the scene to recognize mistaken people. Although we only supervise the model with indicators of which characters are mistaken, our ablation experiments suggest that the model learns important cues for this task, such as gaze or the arrow of time. {\small \textbf{Acknowledgements:} We thank workers on Mechanical Turk for their creative scenes. NVidia donated the GPUs used for this research. This work was supported by a Samsung grant to AT, a Google PhD fellowship to CV, and MIT UROP funding to BE. } {\small \bibliographystyle{ieee}
\section{Introduction and Summary} Recently there has been considerable interest in metric-scalar gravity in four dimensions from various points of view. Our point of view is that the fundamental theory of quantum gravity is either a string theory or another theory like it. We will not resolve this issue here. Furthermore, we do not exclude the possibility that quantum gravity is a renormalizable local field theory. A candidate for a consistent theory of quantized gravity is string theory. The low-energy effective theory of a string below the Planck scale, represented by a metric and a dilaton, is well known \cite{GSW}. Such an effective action arises as a power series in the slope parameter ($\alpha^{\prime}$); the standard point of view is that the higher orders in this expansion correspond to higher energies. From this point of view, at a lower energy scale the action for gravity has the form of a lower-derivative dilaton action. Since the fundamental theory is not restricted to string theory, we introduce $N$ dilatons with the most general coupling to the metric within two derivatives, as in (\ref{action}). Although in string theory there is no scalar field having such a coupling, we call our scalar fields dilatons by analogy. Einstein gravity coupled to scalars is nonrenormalizable by naive power counting, and higher-derivative gravity is renormalizable \cite{St}; however, it is not unitary within a perturbative scheme \cite{BOS}. Of course, it would not be strange if a useful local field theory of gravity covering all energy regions did not exist. It is important to know whether a renormalizable local field theory of gravity constructed from a metric exists or not, and what type of setting would allow its existence. Several studies, starting in the seventies, have been performed to calculate the divergence of the effective action of four-dimensional gravity \cite{HV,CD,BKK,ST1,Ta1,MT}.
In the case of the pure Einstein action without a cosmological term, the divergence was originally calculated at the one-loop level by 't Hooft and Veltman \cite{HV}. They found that the action is not renormalizable off mass shell, but is finite on mass shell at the one-loop level. Furthermore, although the pure Einstein action with a cosmological constant is renormalizable \cite{CD}, if one introduces matter fields the one-loop renormalizability is lost, even on mass shell. Recently \cite{ST1,ST2} we considered the divergence of the effective action for the most general class with less than two derivatives for a scalar and a metric, while explicitly leaving the functions $A,B,\Lambda$ arbitrary. On an arbitrary background space-time we found models which are finite in the case without a cosmological term, and which with it are renormalizable at the one-loop level on mass shell by fine-tuning the functional form of $A(\phi), B(\phi), \Lambda(\phi)$. We also considered the action (\ref{action}) with $N=1$ on a maximally symmetric background space-time. Without any fine-tuning of the coupling functions $A(\phi)$, $B(\phi)$, we showed that the divergence of the effective action has only one term, which is proportional to $\Lambda^2$, and that this divergence can be renormalized easily. In the present paper we consider the action: \begin{equation} S\left[g_{\mu\nu},\phi_i \right] = \int d^4x \sqrt{-g}\; \left[ A(\phi)_{i j}g^{\mu\nu}\partial_{\mu}\phi_i \partial_{\nu}\phi_j + B(\phi)R -2B(\phi) \Lambda(\phi) \right] \;\;\;i=1 \cdots N \label{action} \end{equation} This is the most general class with less than two derivatives for $N$ scalars and a metric. Since a redefinition of the fields generically brings $A_{i j} \longrightarrow A \delta_{i j}$, in this paper we consider only the $A_{i j}=A \delta_{i j}$ case.
\begin{equation} S\left[g_{\mu\nu},\phi_i \right] = \int d^4x \sqrt{-g}\; \left[ A(\phi)g^{\mu\nu}\partial_{\mu}\phi_i \partial_{\nu}\phi_i + B(\phi)R -2B(\phi) \Lambda(\phi) \right] \;\;\;i=1 \cdots N \label{action2} \footnote{ In this paper we restrict $B \neq 0$ and $A \neq \frac{3}{2}\frac{B_i B_i}{B}$, where we write $X_{i_1 \cdots i_n} :=\frac{ \partial ^n X(\phi)}{\partial \phi_{i_1} \cdots \partial \phi_{i_n}}$, $X_i X_i := \sum_i^{N} X_i X_i$ for any function $X(\phi)$} \end{equation} Our paper is organized as follows. In section 2, we present the classical analysis of the action (\ref{action}). We show the classical non-equivalence between the class of actions (\ref{action}) and the class of actions obtained by dropping the kinetic term of the dilatons in (\ref{action}). In section 3, we calculate the divergence of the effective action with the background field method and the Schwinger-DeWitt method. In particular, on a constant dilaton background, we carry out an explicit calculation and obtain the structure of the divergence. In section 4, we restrict the form of the couplings in order to cancel a non-renormalizable term, and we show the $N$ dependence of another non-renormalizable term which cannot be canceled in the case of $N \geq 1$. In section 5, we conclude the paper. We have three Appendices. \section{Analysis at the Classical Level} We consider gravity with a general coupling to scalars described by the action (\ref{action}). In this section we analyze this theory at the classical level. \subsection{Classical Non-Equivalence between Constant and Non-Constant Dilaton Cases } In a previous paper \cite{ST1}, which treated the $N=1$ case of (\ref{action}), we showed the equivalence between the original action and the action without kinetic term. In this subsection, however, we show for $ N > 1$ a classical non-equivalence between the original action (\ref{action2}) and the model obtained by dropping the kinetic term of the dilatons ($\partial \phi_i = 0$) in (\ref{action2}).
First we start with an action without kinetic terms for the dilatons: \begin{equation} S\left[ \bar{g}_{\mu\nu},\phi_i \right]= \int d^4x \sqrt{-\bar{g}} \; \left[ {\cal B}(\phi)\bar{R} -2 {\cal B}(\phi) \lambda(\phi) \right]\;\;\; \label{stand} \end{equation} We transform the metric: \begin{equation} \bar{g_{\mu\nu}} \longrightarrow g_{\mu\nu}=e^{2 \sigma(\phi)}\bar{g_{\mu\nu}} \end{equation} where $\sigma(\phi)$, ${\cal B}(\phi)$ and $\lambda(\phi)$ are arbitrary functions of $\phi$. In the new variable the action becomes: \[ S\left[ g_{\mu\nu},\phi_i \right] = \int d^4x \sqrt{-g} \times \] {\small \begin{equation} \left[ 6 e^{2\sigma(\phi)} \left({\cal B} \sigma_i \sigma_j + \frac{1}{2}\left({\cal B}_i \sigma_j +\sigma_i{\cal B}_j \right) \right)(\nabla \phi_i)(\nabla \phi_j) + {\cal B} (\phi) e^{2\sigma(\phi)} R -2 {\cal B}(\phi) \lambda(\phi) e^{4\sigma(\phi)} \right] \footnote{We use the convenient notations: $(\nabla \phi_i)(\nabla \phi_j) = g^{\mu \nu}(\partial_\mu \phi_i)(\partial_\nu \phi_j)$ and $(\nabla \phi)^2 =(\nabla \phi_i)(\nabla \phi_i)$ } \end{equation} } If we can set $\sigma(\phi)$, ${\cal B}(\phi)$ and $ \lambda(\phi)$ such that \begin{equation} 6 e^{2\sigma(\phi)}\left({\cal B} \sigma_i \sigma_j + \frac{1}{2}\left({\cal B}_i \sigma_j +\sigma_i{\cal B}_j \right) \right) =A(\phi)_{i j} \;,\;\;\; {\cal B}(\phi)e^{2\sigma(\phi)}= B(\phi) \;,\;\;\; \frac{ \lambda(\phi)}{{\cal B}(\phi)}=\frac{\Lambda(\phi)}{B(\phi)} \label{functs} \end{equation} for arbitrary functions $A(\phi)_{i j}$, $B(\phi)$ and $\Lambda(\phi)$, then the original action (\ref{action}) and the action (\ref{stand}) are equivalent. If $N > 1 $ and $A_{i j}$ is diagonal, however, the first equation in (\ref{functs}) cannot be satisfied except for $A_{i j}=0$. This is an essential difference from the $N=1$ case. Therefore we will analyze the model in the case of $N > 1$.
\subsection{Classical Equations of Motion} The classical equations of motion for $g_{\mu\nu}$ and $\phi_i$ are \begin{equation} R_{\mu\nu} -\frac{1}{2}R g_{\mu\nu} + \Lambda g_{\mu\nu} =T_{\mu\nu} \;\;\;\;\;\;(\;\mbox{for}\;g_{\mu\nu}) \end{equation} and \begin{equation} B_i R -2(B\Lambda)_i +A_i(\nabla \phi)^2 -2 A_j(\nabla_{\mu} \phi_j)(\nabla_{\nu} \phi_i) -2A (\Box \phi_i) =0 \;\;\;\;\;\;(\;\mbox{for}\;\phi_i)\;, \end{equation} where \[ T_{\mu\nu}:= \] \[ \left( \frac{A}{2 B}(\nabla \phi)^2 - \frac{B_{ij}}{B}(\nabla \phi_i)(\nabla \phi_j) - \frac{B_i}{B}(\Box \phi_i) \right) g_{\mu\nu} \] \begin{equation} + \frac{B_{ij}}{B}(\nabla_{\mu} \phi_i)(\nabla_{\nu} \phi_j) -\frac{A}{B} (\nabla_{\mu} \phi_i)(\nabla_{\nu} \phi_i) +\frac{B_i}{B}(\nabla_{\mu}\nabla_{\nu}\phi_i)\;. \end{equation} In particular, we consider a special solution with constant dilaton. In that case, the energy-momentum tensor vanishes and the classical action is \begin{equation} S_{\partial \phi=0} = \int d^4x \sqrt{-g}\; \left[ B(\phi)R -2B(\phi) \Lambda(\phi) \right]\;. \end{equation} This is the same as (\ref{stand}), which is the action without the kinetic term of the dilaton. The equations of motion are \begin{equation} R_{\mu\nu} -\frac{1}{2}R g_{\mu\nu} + \Lambda g_{\mu\nu} =0 \label{veomg} \end{equation} \begin{equation} B_i R -2(B \Lambda)_i =0 \label{veomp} \end{equation} In Appendix A we consider the classical solutions. \section{One-loop calculations} \subsection{Background Field Method} We consider the one-loop divergence of the effective action. First, we start with the background field method \cite{Ab}.
We split the fields into background fields ($g_{\mu\nu}$, $\phi_i$) and quantum fields ($h_{\mu\nu}$, $\varphi_i$): \begin{equation} \phi_i \rightarrow \phi_i^{\prime} = \phi_i + \varphi_i \;,\;\;\;\;\;\;\;\;\;\;\;\;\;\; \; g_{\mu\nu} \rightarrow g'_{\mu\nu} = g_{\mu\nu} + h_{\mu\nu} \end{equation} Although the original action (\ref{action2}) and the action (\ref{stand}) are not equivalent when $N >1$ for the reason shown in the previous section, when the background dilatons are constant the two classical actions have the same form. Classically the theory of gravity is explained well by the Einstein action. Therefore we set the classical background dilaton $\phi_i$ to be constant while the quantum fluctuation $\varphi_i$ is allowed to vary. On the other hand we do not restrict the background and quantum metrics. Since the action (\ref{action2}) has diffeomorphism invariance we have to fix the gauge freedom. We fix the gauge for the quantum fields with the gauge-fixing term: \begin{equation} S_{gf} = \int d^4 x \sqrt{-g}\;\chi_{\mu}\;\frac{\alpha}{2}\;\chi^{\mu} \end{equation} where \footnote{$h=h_{\mu}^{\mu},\; \bar{h}_{\mu\nu} =h_{\mu\nu}-\frac{1}{4}\;hg_{\mu\nu}$} \begin{equation} \chi_{\mu} = \nabla_{\alpha} \bar{h}_{\mu}^{\,\alpha}+ \beta\nabla_{\mu}h + \gamma_{i} \nabla_{\mu} \varphi_{i}\;.
\end{equation} Here $\alpha$, $\beta$ and $\gamma_i$ are functions of the background dilaton.\\ In order to simplify the differential structure of the bilinear part of the total action ($S+S_{\mbox{gf}}+ S_{\mbox{gh}}$ ), we choose these functions as \begin{equation} \alpha=-B\;\;,\;\;\;\;\beta=-\frac{1}{4}\;\;,\;\;\;\;\gamma_i=-\frac{B_i}{B}\;, \end{equation} which induces \begin{equation} \left.\left(S + S_{\mbox{gf}} +S_{\mbox{gh}}\right)\right|_{\mbox{bilinear}} =\int d^4 x \sqrt{-g}\; \left({\Phi} \hat{H} {\Phi}^T + c_{\mu}\hat{H}_{\mbox{gh}}c^{\mu} \right)\;, \end{equation} where \[ \hat{H} =\hat{K}\Box +\hat{L}_{\rho}\nabla^{\rho} + \hat{M}\;, \] \begin{equation} \hat{H}_{\mbox{gh}} = g^{\mu\alpha}\Box +\gamma_i(\nabla^{\alpha}\phi_i)\nabla^{\mu} + \gamma_i(\nabla^{\mu} \nabla^{\alpha} \phi_i) + R^{\mu \alpha} \end{equation} Here, $\Phi=\left(\bar{h}_{\mu\nu},\;h,\; \varphi\right)$, $c_{\mu}$ stands for the ghosts, and $T$ stands for transposition.\\ The components of $\hat{H}$ have the following form: \begin{equation} \hat{K}=\left( \begin{array}{ccc} \frac{B}{4} \delta^{\mu\nu \alpha \beta} & 0 & 0\\ 0 & -\frac{B}{16} & -\frac{B_j}{4} \\ 0 & -\frac{B_i}{4} & \frac{B_iB_j}{2B} -A \delta_{ij} \end{array} \right) \footnote{$\delta^{\mu\nu \alpha \beta} := \frac{1}{2}\left( g^{\mu \alpha}g^{\nu \beta} +g^{\mu \beta}g^{\nu \alpha}\right)$ } \end{equation} \[ \hat{L}^{\lambda}=(\nabla_{\tau}\phi_k) \times \] {\scriptsize \begin{equation} \left(\!\!\!\!\!\!\! \begin{array}{ccc} \frac{B_k}{4} \left(\delta^{\mu \nu \alpha \beta} g^{\tau \lambda} +2 g^{\nu \beta}\left(g^{\mu \tau } g^{\alpha \lambda } - g^{\alpha \tau } g^{\mu \lambda }\right) \!\! \right) \!\!\!\!\!\!\! & \!\!\!\!\!\!\! - \frac{B_k}{4} g^{\mu \tau} g^{\nu \lambda} \!\!\!\!\! & \!\!\!\!\! \left( \frac{B_{jk}}{2}-A\delta_{jk} \right) g^{\mu \tau} g^{\nu \lambda}\\ \frac{B_k}{4} g^{\alpha \tau} g^{\beta \lambda} \!\!\!\!\!\!\! & \!\!\!\!\!\!\! -\frac{B_k}{16} g^{\tau \lambda} \!\!\!\!\! & \!\!\!\!\!
\left(\frac{A}{4}\delta_{jk} -\frac{5}{8} B_{jk} \right) g^{\tau \lambda}\\ \left( A\delta_{ik} - \frac{B_{ik}}{2}\right) g^{\alpha \tau} g^{\beta \lambda} \!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\! \left( \frac{B_{ik}}{8}-\frac{A}{4}\delta_{ik}\right) g^{\tau \lambda} \!\! & \!\! \left( \!\!\! \left( \!\! \frac{B_i B_j}{2B} - A\delta_{ij} \!\! \right)_{\!\!k} \!\!\! \left( A_i\delta_{jk}-A_j\delta_{ik}\right) \!\!\! \right) g^{\tau \lambda} \end{array} \!\!\!\!\!\!\! \right) \end{equation} } \[ \hat{M}= \] {\scriptsize \begin{equation} \left(\!\!\!\!\!\!\!\!\! \begin{array}{ccc} \begin{array}{l} \delta^{\mu \nu \alpha \beta}\left( \frac{B_k}{2} (\Box \phi_k) + \left( \frac{B_{kl}}{2}-\frac{A}{4}\delta_{kl} \right) (\nabla \phi_k)(\nabla \phi_l) + \frac{B \Lambda}{2} \right) \\ + g^{\nu \beta}\left( -B_k\left( \nabla^{\mu} \nabla^\alpha \phi_k \right) +\left( A\delta_{kl}-B_{kl} \right)(\nabla^\mu \phi_k)(\nabla^\alpha \phi_l)\right) \\ +\frac{B}{4}\left( -\delta^{\mu \nu \alpha \beta} R + 2 g^{\nu \beta} R^{\mu \alpha}+2R^{\mu \alpha \nu \beta} \right) \end{array} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 0 \!\!\!\!\!\!\!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\!\!\!\!\!\!\! \begin{array}{l} \frac{B_{jk}}{2}\left( \nabla^{\mu} \nabla^{\nu} \phi_k \right) \\ + \left( \frac{B_{jkl}}{2} - \frac{A_j}{2}\delta_{kl} \right) (\nabla^\mu \phi_k)(\nabla^\nu \phi_l) \\ - \frac{B_j}{2}R^{\mu \nu} \end{array} \\ \!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\! \!\!\!\!\!\! & \!\!\!\!\!\!\!\! \\ \frac{B_k}{4}\left( \nabla^{\alpha} \nabla^{\beta} \phi_k \right) + \frac{B_{kl}}{4} (\nabla^\alpha \phi_k)(\nabla^\beta \phi_l) \!\!\!\!\!\!\!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\!\!\!\!\!\!\! -\frac{B \Lambda}{8} \!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\!\!\!\!\! \begin{array}{l} -\frac{3}{8} B_{jk} (\Box \phi_k) \\ + \left( \frac{A_j}{8}\delta_{kl} - \frac{3}{8}B_{jkl} \right)(\nabla \phi_k)(\nabla \phi_l) \\ + \frac{B_j}{8} R - \frac{(B \Lambda)_j}{2} \end{array} \\ \!\!\!\!\!\!\!\!
& \!\!\!\!\!\!\!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\! \\ A\left( \nabla^{\alpha} \nabla^{\beta} \phi_i \right) + \frac{A_i}{2} (\nabla^\alpha \phi)(\nabla^\beta \phi) - \frac{B_i}{2}R^{\alpha \beta} \!\!\!\! & \!\!\!\! \begin{array}{l} -\frac{A}{4} (\Box \phi_i) \\ +\frac{A_i}{8} (\nabla \phi)^2 \\ -\frac{A_k}{4}(\nabla \phi_k)(\nabla \phi_i) \\ + \frac{B_i}{8} R - \frac{(B \Lambda)_i}{2} \end{array} \!\!\!\!\!\!\!\! & \!\!\!\!\!\!\!\!\!\! \begin{array}{l} -A_j(\Box \phi_i) \\ +\frac{A_{ij}}{2}(\nabla \phi)^2 \\ -A_{jk}(\nabla \phi_k)(\nabla \phi_i) \\ + \frac{B_{ij}}{2} R - (B \Lambda)_{ij} \end{array} \end{array} \!\!\!\!\!\!\!\!\!\! \right) \end{equation} } The one-loop effective action is given by the standard general expression, \begin{equation} \Gamma^{\mbox{\small 1-loop}}={i \over 2}\;\mbox{Tr} \ln {\hat{H}} - i\;\mbox{Tr}\ln {\hat{H}_{\mbox{gh}}}\;. \footnote{ Tr includes the space-time integral, tr does not. } \end{equation} \subsection{Schwinger-DeWitt Formula} In this subsection we use the version of the Schwinger-DeWitt formula appropriate to the case of a constant $N$-dilaton background. For our minimal gauge there are no second-derivative terms other than the d'Alembertian term, which is the case for which the formula conveniently gives the structure of the divergence of the one-loop effective action. In Appendix B we present a short review of the Schwinger-DeWitt formula \cite{De,BV,BOS}. We apply this formula to our case. From now on we restrict the background dilatons to be constant in order to simplify the calculation. Note that the quantum fluctuation $\varphi$ of $\phi$ is not restricted to a constant.
There are no restrictions on the metrics ($g_{\mu \nu}$ and $h_{\mu \nu}$).\\ After some calculation, we obtain \begin{equation} \hat{P}= \left( \begin{array}{cc} D_{\mu \nu \alpha \beta} + \left( \frac{R}{6} + 2 \Lambda \right) \delta_{\mu \nu \alpha \beta} & p_{1 2}R_{\mu \nu} \\ p_{2 1}R_{\alpha \beta} & p_r R + p_l \end{array} \right) \end{equation} where \begin{equation} \left\{ \begin{array}{ll} D_{\mu \nu \alpha \beta}= 2 R_{\mu \alpha \nu \beta} + 2 g_{\nu \beta} R_{\mu \alpha} - R \delta _{\mu \nu \alpha \beta} & \\ p_{1 2}= \left(\frac{4}{B} \;\;\; - \frac{2 B_i}{B} \right)\;\;, & p_{2 1}= k^{-1}\left(\begin{array}{c} 0 \\ -\frac{B_i}{2}\end{array}\right) \\ p_r = k^{-1}\left(\begin{array}{cc} 0 & \frac{B_j}{8} \\ \frac{B_i}{8} & \frac{B_{i j}}{2} \end{array} \right) + \left(\begin{array}{cc} \frac{1}{6} & 0 \\ 0 & \frac{1}{6} \end{array} \right)\;\;, & p_l= k^{-1}\left(\begin{array}{cc} -\frac{B \Lambda}{8} & -\frac{(B \Lambda )_j}{2} \\ -\frac{(B \Lambda )_i}{2} & -(B \Lambda)_{i j} \end{array} \right) \\ k= \left(\begin{array}{cc} -\frac{B}{16} & -\frac{B_j}{4} \\ -\frac{B_i}{4} & \frac{B_i B_j}{2 B}-A \delta_{i j} \end{array} \right) & \end{array} \right.\footnote{$k^{-1}$ exists when $X:=2 A B -3 B_i B_i \neq 0 $} \end{equation} \begin{equation} \hat{S}_{\lambda \lambda^{\prime}}= \left( \begin{array}{cc} 2 g_{\nu \beta}R_{\mu \alpha \lambda \lambda^{\prime}} & 0 \\ 0 & 0 \end{array} \right) \end{equation} \begin{equation} \hat{P}_{\mbox{gh}}=R_{\mu \alpha} + \frac{R}{6} g_{\mu \alpha} \end{equation} \begin{equation} \hat{S}_{\mbox{gh};\lambda \lambda^{\prime}}=R_{\alpha \mu \lambda \lambda^{\prime}}\;.
\end{equation} Therefore, the divergence of the one-loop effective action with constant background dilatons is \[ \Gamma_{\mbox{div },\; \partial \phi=0}^{\mbox{1-loop}} = \frac{1}{16 \pi^2 (D-4)} \int d^4 x \sqrt{-g} \times \] \[ \left[ \frac{N+212}{180}R_{\mu \nu \alpha \beta}R^{\mu \nu \alpha \beta} +\left( p_{1 2} p_{2 1} - \frac{N+722}{180}\right) R_{\mu \nu}R^{\mu \nu} \right. \] \begin{equation} \left. +\left( \frac{1}{2} \mbox{tr}p_r^2 -\frac{1}{4} p_{1 2} p_{2 1} + \frac{85}{72} \right) R^2 +\left( \frac{9}{2}\Lambda + \mbox{tr}p_r p_l \right) R +\left( \frac{9}{2}\Lambda^2 + \frac{1}{2} \mbox{tr}p_l^2 \right) \right]\;. \end{equation} For convenience, we write the above expression with the Weyl tensor ($C_{\mu \nu \alpha \beta}$) and the Gauss-Bonnet topological invariant ($G$): \footnote{$G \equiv R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}-4R_{\mu\nu}R^{\mu\nu}+R^2$, $\;C_{\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta} \equiv R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}-2R_{\mu\nu}R^{\mu\nu} +\frac{1}{3}R^2 $ }. \[ \Gamma_{\mbox{div},\; \partial \phi=0}^{\mbox{1-loop}} = \frac{1}{16 \pi^2 (D-4)} \int d^4 x \sqrt{-g} \times \] \[ \left[ \left( \frac{1}{2} p_{1 2} p_{21} + \frac{298 -N}{360}\right)G +\left( \frac{1}{2 } p_{1 2} p_{2 1} + \frac{N+42}{120}\right)C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta} \right. \] \begin{equation} \left. +\left( \frac{1}{2} \mbox{tr}p_r^2 +\frac{1}{12} p_{1 2} p_{2 1} + \frac{17}{72} \right) R^2 +\left( \frac{9}{2}\Lambda + \mbox{tr}p_r p_l \right) R +\left( \frac{9}{2}\Lambda^2 + \frac{1}{2} \mbox{tr}p_l^2 \right) \right]\;. \label{effact} \end{equation} \section{Removing the Non-Renormalizable Divergent Terms} We now consider the divergent terms in (\ref{effact}). Two of them, the scalar curvature term and the cosmological term, already appear in the classical action, and their counter-terms are therefore available.
However, the first three terms in (\ref{effact}) cannot be canceled by these counter-terms. First, we fine-tune the function $A(\phi)$ in order to cancel the coefficient of the term quadratic in the Weyl tensor. Since $p_{1 2}p_{2 1}$ is calculated as \begin{equation} p_{1 2}p_{2 1}= -\frac{2 B_i B_i}{2A B -3 B_i B_i} \end{equation} we set $A(\phi)$ to \begin{equation} A(\phi)=\frac{3}{2}\left( 1 + \frac{40}{N + 42}\right)\frac{B_i B_i}{B}\;\;. \end{equation} Remark: When $N$ tends to infinity, $A$ takes the same form as in the conformally symmetric case \cite{ST2}. Since the coefficient of the Gauss-Bonnet term is constant in this case, this term is a total derivative. The divergence of the surface term is non-essential and is ignored. The last problem is the divergence of the square of the scalar curvature. After the fine-tuning of the function $A(\phi)$, this term reduces to \[ \left[ \frac{1}{2} \mbox{tr}p_r^2 +\frac{1}{12} p_{1 2} p_{2 1} + \frac{17}{72} \right] R^2 \] \[ =\left[\frac{N^2 +224 N +5344}{3600} -\frac{(N-1)(N+42)}{18(N+82)}\left(\frac{B}{\phi B^{\prime}} \right) +\frac{(N-1)(N+42)^2}{18(N+82)^2}\left(\frac{B}{\phi B^{\prime}} \right)^2 \right. \] \begin{equation} \left.-\frac{(N+42)(N+52)}{7200}\left( \frac{B B^{\prime \prime}}{B^{\prime 2}}\right) +\frac{(N+42)^2}{28800}\left( \frac{B B^{\prime \prime}}{B^{\prime 2}}\right)^2 \right]R^2 \label{coefrr} \footnote{$\phi:=\left(\phi_i \phi_i \right)^{\frac{1}{2}}$ and $B^{\prime}= \frac{\phi_i B_i}{\phi}\;, \;B^{\prime \prime}= B_{ii}+(1-N)\frac{\phi_i B_i}{\phi^2}$; the primes denote differentiation with respect to $\phi$ when $B(\phi)$ is a function of $\phi$ only}\;\;. \end{equation} In Fig.~\ref{fig} of Appendix C we show the parameter region where the coefficient in (\ref{coefrr}) vanishes. We found that when $N \geq 1$ the coefficient cannot vanish, and this model is non-renormalizable at one loop in our method.
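For completeness, the fine-tuned form of $A(\phi)$ quoted above follows directly from requiring that the coefficient of the Weyl-squared term in (\ref{effact}) vanish (a one-line algebraic check of the statement in the text):

```latex
% Set the C^2 coefficient in (\ref{effact}) to zero and solve for A(\phi):
\frac{1}{2}\left(-\frac{2 B_i B_i}{2AB-3B_iB_i}\right)+\frac{N+42}{120}=0
\;\;\Longrightarrow\;\;
2AB-3B_iB_i=\frac{120}{N+42}\,B_iB_i
\;\;\Longrightarrow\;\;
A=\frac{3}{2}\left(1+\frac{40}{N+42}\right)\frac{B_iB_i}{B}\,.
```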
\section{Conclusion and Discussion} We considered a model which includes $N$ scalar fields and a metric field. First we analyzed this model at the classical level. At the classical level, and only in the case $N=1$, the action (\ref{action2}) reduces to a standard form by a conformal transformation. In the case of $N>1$, however, there is no such equivalence. Therefore the introduction of the dilatons has an essential meaning in the $N>1$ case. On the other hand, the standard form (\ref{stand}) belongs to the class without the kinetic term of the dilatons in the original action (\ref{action2}). There is also no equivalence at the quantum level between such models. We restrict, however, the classical background field to constant dilatons, since Einstein gravity describes nature well at the classical level, while the quantum fluctuations of the dilatons are allowed to vary. Of course, the classical and quantum metrics are not restricted in any way. A one-loop calculation was carried out for the model (\ref{action2}) using the background field method. This calculation is an extension of that of Ref.\cite{ST1}. We extracted the bilinear part of the action (\ref{action2}) with a gauge-fixing term added, and of the ghost action. Such a form is sufficient to calculate the effective action at the one-loop level. We fixed the gauge to the minimal one in order to cancel the derivative terms except for the d'Alembertian terms; we were then able to apply the standard Schwinger-DeWitt method to estimate the divergence of the effective action. We obtained the one-loop divergent terms (\ref{effact}). Naively, there are three non-renormalizable terms. However, when we fine-tune the function $A(\phi)$, only one non-renormalizable term remains, the $R^2$ term. We showed its $N$ and $B(\phi)$ dependences explicitly, and displayed graphically the region where the term vanishes. We found that there is no such region when $N \geq 1$.
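The claim that the $R^2$ coefficient cannot vanish for $N \geq 1$ can be cross-checked numerically. The sketch below is not from the paper: it copies the coefficients as printed in (\ref{coefrr}) and, as a simplifying assumption, treats $u=B/(\phi B^{\prime})$ and $v=BB^{\prime\prime}/B^{\prime 2}$ as independent real parameters scanned over a grid (in the model they are correlated through the choice of $B(\phi)$); function names are illustrative.

```python
# Numerical cross-check (not from the paper) that the R^2 coefficient in
# eq. (coefrr) stays strictly positive for N >= 1.  Assumption: u and v are
# scanned as independent real parameters over a finite grid.

def r2_coefficient(N, u, v):
    """R^2 coefficient after the fine-tuning of A(phi), as printed in the text."""
    return ((N**2 + 224*N + 5344) / 3600
            - (N - 1)*(N + 42) / (18*(N + 82)) * u
            + (N - 1)*(N + 42)**2 / (18*(N + 82)**2) * u**2
            - (N + 42)*(N + 52) / 7200 * v
            + (N + 42)**2 / 28800 * v**2)

def grid_minimum(n_max=20, span=10.0, step=0.5):
    """Minimum of the coefficient over 1 <= N <= n_max and a square (u, v) grid."""
    pts = [-span + step*k for k in range(int(2*span/step) + 1)]
    return min(r2_coefficient(N, u, v)
               for N in range(1, n_max + 1) for u in pts for v in pts)
```

Because each quadratic in $u$ and $v$ is bounded below, minimizing analytically over $u$ and $v$ separately gives the same qualitative conclusion: the minimum remains positive for every $N \geq 1$ tested.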
Therefore it is impossible to renormalize the divergence of the effective action at the one-loop level on a constant dilaton background. If we consider the metric to be on mass shell, the divergent terms may be renormalized, as shown in a previous paper \cite{Ta1} and a forthcoming paper \cite{MT}, which treat the $N=1$ case. In the case of constant dilaton and $R_{\mu \nu}= \Lambda g_{\mu\nu}$, we expect that the last three terms in (\ref{coefrr}) are proportional to $\Lambda^2$ with a $\phi$-independent constant, as in the above Refs. Then, by multiplicative renormalization of the functional form of $\Lambda$, the divergences are renormalized: \begin{equation} \Lambda_{\mbox{bare}} =\mu^{\frac{D-4}{2}} \left(1 -\frac{\mbox{constant}}{16 \pi^2 (D-4)} \right) \Lambda_{\mbox{renormalized}}\;\;. \end{equation} In this paper we considered the $A_{ij}=A \delta_{ij}$ case, in which we cannot arrange the counter-terms if $N \geq 1$. In future studies, however, we have to consider a more general case, such as one in which no redefinition of the fields allows us to set $A_{ij} = A \delta_{ij}$. One cannot exclude the possibility of a renormalizable model in the class where the metric couples to $N$ scalars in the most general way. In such a general case there may also be models which differ essentially from the standard form (\ref{stand}) and which are not equivalent to it. \vspace*{\fill} \section*{Acknowledgements} The author is grateful to H.~Kawai and the entire Department of Theoretical and Computational Physics at KEK for stimulating discussions. He is also grateful to I.L.~Shapiro for various suggestions by e-mail. He is thankful to T.~Muta, S.~Mukaigawa and the entire Department of Particle Physics at Hiroshima University for interesting discussions.
\section{Introduction} Double pomeron exchange (DPE) processes are expected to extend the physics program at the LHC, not only because of the possible Higgs boson detection but also because of the possibility to study a broader range of QCD physics and diffraction \cite{sensitivityHiggs, Royon:2003ng, higgsInDPE, susy, HiggsGamma, Khoze:2007hx,Khoze:2006iw, bpr}. The processes are theoretically characterized by large rapidity-gap regions devoid of particles between the centrally produced heavy object and the scattered hadrons, which leave the interaction intact. This is attributed to the exchange of a colorless object, the pomeron (or the reggeon). In the LHC environment, however, the rapidity-gap signature will not appear because of the high number of multiple interactions occurring at the same time, and the diffractive events will be identified by tagging the escaping protons in the beam pipe. One generally considers two classes of DPE processes: \textit{exclusive} DPE events, if the central object is produced alone, carrying away the total available diffractive energy, and \textit{inclusive} events, when the total energy is used to produce the central object together with the pomeron remnants. Exclusive events allow a precise reconstruction of the mass and kinematical properties of the central object using the central detector, or even more precisely using very forward detectors installed far downstream from the interaction point. The most appealing exclusive process to be studied at the LHC is Higgs boson production, but since it cannot be observed at the Tevatron due to the low production cross section, one should look for exclusive events at the Tevatron in other channels, for example dijets or diphotons. We should mention that until recently there was no decisive measurement providing enough evidence for the existence of exclusive production.
Although exclusive production yields kinematically well constrained final state objects, their experimental detection is non-trivial due to the overlap with the \textit{inclusive} DPE events. In those events, the colliding pomerons are usually viewed as objects with partonic sub-structure. A parton emitted from the pomeron takes part in the hard interaction, and the pomeron remnants accompanying the central object are distributed uniformly in rapidity. Exclusive events usually appear as a small deviation from the inclusive model predictions, which therefore need to be studied precisely before a new kind of production can be accepted. In particular, the structure of the pomeron as obtained from HERA is not precisely known at high momentum fraction; specifically, the gluon in the pomeron is not well constrained. It is not clear whether this uncertainty could lead to misidentifying observed processes as exclusive, which would for instance preclude the spin analysis of the produced object. \par In this paper, we aim to investigate the observation of exclusive production at the Tevatron. Indeed, we use the dijet mass fraction distribution measured by the CDF collaboration and show that, even taking into account the uncertainties associated with the pomeron structure, one is unable to give a satisfactory description of the data without the existence of exclusive events. We also include another approach to diffraction in our study, the so-called Soft color interaction model (the properties of all the models are discussed later). As an outlook, we apply the current models of DPE production at LHC energies and demonstrate the possible appearance of exclusive events through the dijet mass fraction. \par The paper is organized as follows: in the second section we give a brief description of the inclusive, exclusive, and Soft color interaction models.
The third section discusses how well the various models can explain the preliminary Tevatron dijet mass fraction data and the constraints implied by the data on the current models. In the fourth part, we present an application of the dijet mass fraction distribution as a tool to observe exclusive events at LHC energies. Finally, we discuss issues concerning the dijet mass fraction reconstruction and the fast detector simulation in the Appendix. \section{Theoretical Models} The inclusive and exclusive DPE models used in this paper are implemented in the Monte Carlo program {DPEMC} \cite{dpemc}. The Soft color interaction model is embedded in the PYTHIA program \cite{SCIMC}. A survey of the different models follows. \subsection{Inclusive Models} The first inclusive model to be mentioned is the so-called ``Factorized model''. It is an Ingelman-Schlein type of model \cite{IS} describing the diffractive double pomeron process as a scattering of two pomerons emitted from the protons, assuming a factorization of the cross section into a Regge flux convoluted with the pomeron structure functions. For $ep$ single diffraction, it is necessary to introduce a secondary reggeon trajectory to describe the observed non-factorizable single diffractive cross section. In the case of the Tevatron, the pomeron trajectory alone is sufficient to describe present data and the cross section factorizes, as advocated in \cite{factorization}. Factorization breaking between HERA and the Tevatron enters only through the survival probability factor, denoting the probability that there is no additional soft interaction which would destroy the diffractively scattered protons. In other words, the probability to destroy the rapidity gap does not depend on the hard interaction. At Tevatron energies, the factor was measured to be approximately 0.1, and calculations suggest a value of 0.03 for the LHC.
The pomeron structure functions, reggeon and pomeron fluxes are determined from DIS $ep$ collisions by fitting the diffractive structure function $F_2^{D}$ at HERA. For one of the most recent published diffractive structure function analyses, see e.g.\ \cite{pdfs}. \par On the other hand, the Bialas-Landshoff (BL) inclusive model \cite{bpr} is a purely non-perturbative calculation utilising only the shape of the pomeron structure function and leaving the overall normalization to be determined from experiment; one can for example confront the predicted DPE cross section with the observed rate at the Tevatron \cite{factorization} and obtain the missing normalization factor \footnote{One more remark is in order. In the BL inclusive model, the partonic content of the pomeron is expressed in terms of the distribution functions as $f_{i/\mathbb{P}}(\beta_i)\equiv \beta_i G_{i/\mathbb{P}}(\beta_i)$, where the $G_{i/\mathbb{P}}(\beta_i)$ are the true parton densities as measured by the HERA collaboration, and $\beta_i$ denotes the momentum fraction of the parton $i$ in the pomeron. The integral of $f_{i/\mathbb{P}}(\beta_i)$ is normalized to 1, so that in the limit $f_{i/\mathbb{P}}(\beta_i)\rightarrow \delta(\beta_i)$ the exclusive cross section is recovered \cite{dpemc}.}. \par Both models use the pomeron structure measured at HERA, which is gluon dominated. In this paper, we use the results of the QCD fits to the most recent pomeron structure function data measured by the H1 collaboration \cite{pdfs}. The new gluon density in the pomeron is found to be slightly smaller than the previous ones, and it is interesting to see the effect of the new PDFs with respect to the Tevatron measurements. However, the gluon density at high $\beta$, where $\beta$ denotes the momentum fraction of the parton in the pomeron, is not well constrained by the QCD fits performed at HERA.
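One way to quantify this high-$\beta$ uncertainty is to reweight the gluon density by a factor $(1-\beta)^{\nu}$, and a minimal numerical sketch makes the size of the effect explicit. The gluon shape used here is an invented stand-in, not the H1 fit:

```python
def toy_gluon(beta):
    # illustrative stand-in for the gluon density in the pomeron (not the H1 fit)
    return beta * (1.0 - beta) ** 0.5

def modified_gluon(beta, nu):
    # probe the high-beta uncertainty by reweighting with (1 - beta)^nu
    return toy_gluon(beta) * (1.0 - beta) ** nu

beta = 0.9  # high-beta point, where the fits are poorly constrained
for nu in (-1.0, -0.5, 0.0, 0.5, 1.0):
    ratio = modified_gluon(beta, nu) / toy_gluon(beta)
    print(f"nu = {nu:+.1f}: gluon at beta = 0.9 rescaled by {ratio:.2f}")
```

At $\beta=0.9$ the reweighting spans a factor of 100 between $\nu=-1$ and $\nu=+1$, while leaving the low-$\beta$ region, where the fits are constrained, almost untouched.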
To study this uncertainty, we multiply the gluon distribution by the factor $(1 - \beta)^{\nu}$ as shown in Fig.~\ref{gluon}. QCD fits to the H1 data lead to an uncertainty on the $\nu$ parameter of $\nu=0.0\pm0.5$ \cite{pdfs}. We will see in the following how this parameter influences the results on the dijet mass fraction as measured at the Tevatron. \subsection{Exclusive Models} The Bialas-Landshoff exclusive model \cite{BLexc} is based on an exchange of two ``non-perturbative'' gluons between a pair of colliding hadrons which connect to the hard subprocess. Reggeization is employed in order to recover the pomeron parameters which successfully described soft diffractive phenomena, e.g.\ the total cross section at low energies. Calculations of $q\bar{q}$ and $gg$ production and more details can be found in \cite{BLexc} and \cite{BLgg}, respectively. In contrast, the Khoze-Martin-Ryskin (KMR) model \cite{kmr} is a purely perturbative approach. The interaction is obtained by an exchange of two gluons directly coupled to the colliding hadrons (no pomeron picture is introduced). While one gluon takes part in the creation of the central object, the other serves to screen the color flow across the rapidity gap. If the outgoing protons remain intact and scatter at small angles, the exchanged di-gluon system, in both models, must obey the selection rules $J_{Z}=0$, C-even, P-even. Such constraints are also applied to the hard subprocesses for the production of the central object. \par The two models show a completely different $p_T$ dependence of the DPE cross section. The energy dependence of the BL model is found to be weaker, since the pomeron is assumed to be soft, whereas this is not the case for the KMR model. \subsection{Soft Color Interaction Model} The Soft color interaction model (SCI) \cite{sci,SCIMC} assumes that diffraction is not due to a colorless exchange at the hard vertex but rather to a string rearrangement in the final state during hadronisation.
This model gives a probability (to be determined by experiment) that there is no string connection, and hence no color exchange, between the partons in the proton and the scattered quark produced in the hard interaction. Since the model does not imply the existence of a pomeron, there is no need for a concept like the survival probability, and a correct normalisation is found between single diffraction Tevatron and HERA data without any new parameter, which is one of the big successes of this model. \begin{figure} \includegraphics[width=1.5\picwidth]{fig1.eps} \caption{Uncertainty of the gluon density at high $\beta$ (here $\beta\equiv z$). The gluon density is multiplied by the factor $(1-\beta)^{\nu}$ where $\nu$=-1., -0.5, 0.5, 1. The default value $\nu =0$ is the gluon density in the pomeron determined directly by a fit to the H1 $F_2^D$ data with an uncertainty of about 0.5.} \label{gluon} \end{figure} \section{Dijet mass fraction at the Tevatron} \label{sect:dmf} The dijet mass fraction (DMF) turns out to be a very appropriate observable for identifying exclusive production. It is defined as the ratio $R_{JJ}=M_{JJ}/M_{X}$ of the dijet system invariant mass $M_{JJ}$ to the total mass of the final state system $M_X$ (excluding the intact beam (anti)protons). If the jet algorithm is such that outside-cone effects are small, the presence of exclusive production manifests itself as an excess of events towards $R_{JJ}\sim1$; for exclusive events, the dijet mass is essentially equal to the mass of the central system because no pomeron remnant is present. The advantage of the DMF is that one can focus on the shape of the distribution; the observation of exclusive events does not rely on the overall normalization, which might depend strongly on the detector simulation and the acceptance of the roman pot detector. \par In the following analysis, we closely follow the measurement performed by the CDF Collaboration.
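The definition of $R_{JJ}$ above can be sketched in a few lines of code. The `jet` helper and all numbers are toy inputs invented for illustration:

```python
import math

def invariant_mass(jets):
    """Invariant mass of a list of (E, px, py, pz) four-vectors."""
    E  = sum(j[0] for j in jets)
    px = sum(j[1] for j in jets)
    py = sum(j[2] for j in jets)
    pz = sum(j[3] for j in jets)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def dijet_mass_fraction(jet1, jet2, m_x):
    """R_JJ = M_JJ / M_X; tends to 1 for exclusive events (no pomeron remnants)."""
    return invariant_mass([jet1, jet2]) / m_x

def jet(pt, eta, phi):
    """Massless-jet four-vector built from (pT, eta, phi)."""
    return (pt * math.cosh(eta), pt * math.cos(phi),
            pt * math.sin(phi), pt * math.sinh(eta))

# toy back-to-back jets with pT = 30 GeV
j1, j2 = jet(30.0, 0.5, 0.0), jet(30.0, -0.5, math.pi)
m_jj = invariant_mass([j1, j2])
print(dijet_mass_fraction(j1, j2, m_x=80.0))  # inclusive-like: R_JJ < 1
print(dijet_mass_fraction(j1, j2, m_x=m_jj))  # exclusive limit: R_JJ = 1
```

In the inclusive case part of $M_X$ is carried by the pomeron remnants, so $R_{JJ}<1$; in the exclusive limit the dijet system saturates the diffractive mass and $R_{JJ}\rightarrow 1$.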
One can find more information about the measurement and the detector setup in a note discussing the preliminary results \cite{cdfnote}. In this paragraph, we mention only the cuts which are relevant for our analysis. To simulate the CDF detector, we use a fast simulation interface \cite{fastSimul}, which performs a smearing of the deposited cell energy above a $0.5\,\mathrm{GeV}$ threshold and reconstructs jets using a cone algorithm. Properties of the event such as the rapidity gap size were evaluated at the generator particle level. CDF uses a roman pot detector to tag the antiprotons on one side (corresponding to $\eta_{\bar{p}} < 0$). For the DMF reconstruction, we require the antiprotons to have a longitudinal momentum loss in the range $0.01<\xi_{\bar{p}}<0.12$ and we apply the roman pot acceptance obtained from the CDF Collaboration (the real acceptance is greater than 0.5 for $0.035< \xi_{\bar{p}}<0.095$). On the proton side, where no such device is present, a rapidity gap of size $3.6<\eta_{gap}<5.9$ is required. In the analysis, further cuts are applied: two leading jets with a transverse momentum above the threshold $p^{jet1,jet2}_T>10\,\mathrm{GeV}$ or $p^{jet1,jet2}_T>25\,\mathrm{GeV}$ in the central region $|\eta^{jet1,jet2}|<2.5$, a third jet veto cut ($p_T^{jet3}<5\,\mathrm{GeV}$), as well as an additional gap on the antiproton side of size $-5.9<\eta_{gap}<-3.6$. For brevity, the threshold on the transverse momentum of the two leading jets will be denoted $p_T^{min}$ in the following. The dijet mass is computed using the jet momenta for all events passing the above mentioned cuts.
In order to follow the method used by the CDF collaboration as closely as possible, the mass of the diffractive system $M_X$ is calculated from the longitudinal antiproton momentum loss $\xi_{\bar{p}}$ within the roman pot acceptance, and the longitudinal momentum loss of the proton $\xi_{p}^{part}$ is determined from the particles in the central detector ($-4 \lesssim \eta_{part} \lesssim 4$), such that: \begin{eqnarray} M_X&=&\sqrt{s\xi_{\bar{p}}\xi_p^{part}},\\ \xi^{part}_p&=&\frac{1}{\sqrt{s}}\sum_{particles} p_T e^{\eta},\label{eq:xipart} \end{eqnarray} summing over the particles with energies above $0.5\,\mathrm{GeV}$ in the final state at generator level. To reconstruct the diffractive mass, $\xi_p^{part}$ was multiplied by a factor $1.1$, obtained by fitting the correlation plot between the momentum loss of the proton at generator level $\xi_p$ and $\xi_p^{part}$ at particle level with a straight line. \par The DMF reconstruction depends strongly on the accuracy of the detector simulation. Since we are unable to employ the complete simulation in our analysis, we discuss the possible effects due to the various definitions of the DMF at the generator and the particle level in the Appendix. \begin{figure}[h] \includegraphics[width=\picwidth]{fig2a.eps} \includegraphics[width=\picwidth]{fig2b.eps} \caption{Dijet mass fraction for jets $p_T>10\,\mathrm{GeV}$. FM (left) and BL (right) models, inclusive contribution. The uncertainty of the gluon density at high $\beta$ is obtained by multiplying the gluon distribution by $(1-\beta)^{\nu}$ for different values of $\nu$ (non-solid lines).} \label{FigInc10} \end{figure} \begin{figure}[h] \includegraphics[width=\picwidth]{fig3a.eps} \includegraphics[width=\picwidth]{fig3b.eps} \caption{Dijet mass fraction for jets $p_T>25\,\mathrm{GeV}$. FM (left) and BL (right) models, inclusive contribution.
The uncertainty of the gluon density at high $\beta$ is obtained by multiplying the gluon distribution by $(1-\beta)^{\nu}$ for different values of $\nu$ (non-solid lines).} \label{FigInc25} \end{figure} \subsection{Inclusive model prediction} We first present the dijet mass fraction calculated with the FM and BL inclusive models. As stated in the previous section, we want to explore the impact of the high-$\beta$ gluon uncertainty in the pomeron. To do this, we multiply the gluon density by a factor $(1-\beta)^{\nu}$, for several values of $\nu=-1,-0.5,0,0.5,1$. The impact of the parameter is shown in Fig.~\ref{FigInc10} and Fig.~\ref{FigInc25} for jets with $p_T>10\,\mathrm{GeV}$ and $p_T>25\,\mathrm{GeV}$, respectively. The computed distributions were normalized in shape, since no luminosity determination, and hence no cross section estimate, was available for the CDF measurement. The interesting possible exclusive region at high $R_{JJ}$ is enhanced for $\nu=-1$, but not to such an extent as to give a fair description of the observed distributions. As a consequence, the tail of the measured dijet mass fraction at high $R_{JJ}$ cannot be explained by enhancing the gluon distribution at high $\beta$, and another contribution, such as exclusive events, is required. \par A particular property seems to disfavour the BL inclusive model at the Tevatron. Indeed, the dijet mass fraction is damped at low values of $R_{JJ}$, especially for jets with $p_T>10\,\mathrm{GeV}$. Since the cross section is obtained as a convolution of the hard matrix element and the distribution functions, the damping is a direct consequence of the use of a multiplicative factor $\beta$ in the parton density functions in the pomeron (see footnote 1). We will come back to this point when we discuss a possible revised version of the BL inclusive model below. \par As we have seen, the inclusive models are not sufficient to describe the measured CDF distributions well.
This leaves room for other types of processes/models which give a significant contribution at high $R_{JJ}$. \begin{figure}[h] \includegraphics[width=\picwidth]{fig4a.eps} \includegraphics[width=\picwidth]{fig4b.eps} \includegraphics[width=\picwidth]{fig4c.eps} \caption{Dijet mass fraction for jets $p_T>10\,\mathrm{GeV}$. FM + KMR (left), BL + BL (right), FM + BL (bottom) models. We notice that the exclusive contribution allows one to describe the tails at high $R_{JJ}$.} \label{FigAll10} \end{figure} \begin{figure}[h] \includegraphics[width=\picwidth]{fig5a.eps} \includegraphics[width=\picwidth]{fig5b.eps} \includegraphics[width=\picwidth]{fig5c.eps} \caption{Dijet mass fraction for jets $p_T>25\,\mathrm{GeV}$. FM + KMR (left), BL + BL (right), FM + BL (bottom) models. We note that the exclusive contribution allows one to describe the tails at high $R_{JJ}$.} \label{FigAll25} \end{figure} \subsection{Exclusive models predictions} In this section, we study the enhancement of the dijet mass distribution by exclusive DPE processes, with the aim of describing the CDF dijet mass fraction data. We examine three combinations of inclusive plus exclusive contributions, specifically: \begin{enumerate} \item FM + KMR \item FM + BL exclusive \item BL inclusive + BL exclusive \end{enumerate} The full contribution is obtained by fitting the sum of the inclusive and exclusive distributions to the CDF data, leaving the overall normalization $N$ and the relative normalization between the two contributions $r^{\mathrm{EXC/INC}}$ free. More precisely, the DMF distribution is fitted as $N(\sigma^{\mathrm{INC}}(R_{JJ}) + r^{\mathrm{EXC/INC}}\sigma^{\mathrm{EXC}}(R_{JJ}))$. The fit was done separately for jets with $p^{min}_T=10\,\mathrm{GeV}$ and $p^{min}_T=25\,\mathrm{GeV}$. \par The overall normalization factor cannot be studied, since the CDF collaboration did not determine the luminosity for the measurement.
On the other hand, the relative normalization between the inclusive and exclusive production is useful information, since it allows one to make predictions for higher $p_T$ jets or for LHC energies, for instance. For this purpose, the relative normalizations $r^{\mathrm{EXC}/\mathrm{INC}}$ should not vary much between the two $p^{min}_T$ measurements. The results are summarized in Table~\ref{TabRelNorm}. We give the inclusive $\sigma^{\mathrm{INC}}$ and exclusive $\sigma^{\mathrm{EXC}}$ cross sections, obtained directly from the models, and the relative scale factor, applied to the exclusive contribution only, needed to describe the CDF data. Whereas the relative normalization changes by an order of magnitude as a function of $p_{T}^{min}$ for the exclusive BL model, it tends to be rather stable for the KMR model (the uncertainty on the factor 2.5 might be relatively large since we do not have a full simulation interface and the simulation effects tend to be larger at low jet transverse momentum). Finally, in Figs.~\ref{FigAll10} and \ref{FigAll25}, the fitted distributions are depicted for $p_{T}^{min}=10$ and $25\,\mathrm{GeV}$ jets, respectively.
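Since the fit model $N(\sigma^{\mathrm{INC}} + r^{\mathrm{EXC/INC}}\sigma^{\mathrm{EXC}})$ is linear in $N$ and $N\,r^{\mathrm{EXC/INC}}$, it can be sketched as a simple linear least-squares problem on binned shapes. All shapes and numbers below are invented for illustration; they are not the DPEMC predictions or the CDF data:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy binned R_JJ shapes (invented)
bins = np.linspace(0.05, 0.95, 10)
inc = np.exp(-3.0 * bins)                       # inclusive: falls with R_JJ
exc = np.exp(-0.5 * ((bins - 0.85) / 0.07)**2)  # exclusive: peaks near R_JJ = 1

# pseudo-data built from a known mixture, plus small noise
true_N, true_r = 2.0, 0.3
data = true_N * (inc + true_r * exc) + rng.normal(0.0, 0.01, bins.size)

# fit data = N*inc + (N*r)*exc as linear least squares in the two coefficients
A = np.column_stack([inc, exc])
(coef_inc, coef_exc), *_ = np.linalg.lstsq(A, data, rcond=None)
N_fit = coef_inc
r_fit = coef_exc / coef_inc
print(f"N = {N_fit:.2f}, r_EXC/INC = {r_fit:.2f}")
```

The exclusive coefficient is driven almost entirely by the few bins near $R_{JJ}\sim1$, which is why the high-$R_{JJ}$ tail is the sensitive region for extracting $r^{\mathrm{EXC/INC}}$.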
\begin{table}[h] \begin{tabular}{|ccc|c|c|c||c|c|c|} \hline \multicolumn{3}{|c|}{contributions} & $r^{\mathrm{EXC/INC}}(10)$ & $\sigma^{\mathrm{INC}}(10)[\mathrm{pb}]$ & $\sigma^{\mathrm{EXC}}(10)[\mathrm{pb}]$ & $r^{\mathrm{EXC/INC}}(25)$ & $\sigma^{\mathrm{INC}}(25)[\mathrm{pb}]$ & $\sigma^{\mathrm{EXC}}(25)[\mathrm{pb}]$ \\ \hline FM &+& KMR & 2.50 & 1249 & 238 & 1.0 & 7.39 & 3.95 \\ FM &+& BL exc & 0.35 & 1249 & 1950 & 0.038 & 7.39 & 108 \\ BL inc &+& BL exc & 0.46 & 2000 & 1950 & 0.017 & 40.6 & 108 \\ \hline \end{tabular} \caption{ Cross sections for inclusive diffractive production $\sigma^{\mathrm{INC}}$ and exclusive production $\sigma^{\mathrm{EXC}}$, together with the relative normalization between inclusive and exclusive events $r^{\mathrm{EXC/INC}}$, for $p_T>10\,\mathrm{GeV}$ and $p_T>25\,\mathrm{GeV}$ jets and for the different models (see text). Note that the fit to the data is parametrized as $N(\sigma^{\mathrm{INC}}(R_{JJ}) + r^{\mathrm{EXC/INC}}\sigma^{\mathrm{EXC}}(R_{JJ}))$.} \label{TabRelNorm} \end{table} \par The Tevatron data are well described by the combination of the FM and KMR models. We attribute the departure from a smooth distribution of the data to the imperfection of our fast simulation interface. On the contrary, the BL inclusive model is disfavoured because it fails to describe the low $R_{JJ}$ region. It is due to the $\beta_i$ factor in the parton density $f_{i/\mathbb{P}}(\beta_i)$ used by the BL inclusive model (see footnote 1, where the variables are defined) that the $R_{JJ}$ distribution is shifted towards higher values. This factor was introduced to maintain the correspondence between the inclusive and exclusive models in the limit $f_{i/\mathbb{P}}(x_i)\rightarrow \delta(x_i)$. However, this assumption leads to properties in contradiction with the CDF data. Using the BL inclusive model without this additional normalization factor leads to a DMF which is in fair agreement with the data. Indeed, we show in Fig.
\ref{FigBLnb} the predictions of the ``modified'' model (i.e.\ defined with $f_{i/\mathbb{P}}(\beta_i)\equiv G_{i/\mathbb{P}}(\beta_i)$) for $p_T>10$\,GeV and $p_T>25$\,GeV jets. We see that the low $R_{JJ}$ region is well described and that fitting the prediction of the exclusive KMR model together with the BL inclusive model yields roughly the same amount of exclusive events as using the factorizable models. The BL inclusive model will be revised to take these effects into account. We will not discuss this ``modified'' version of the BL inclusive model further, since it gives similar results to the factorizable models. \begin{figure} \includegraphics[width=\picwidth]{fig6a.eps} \includegraphics[width=\picwidth]{fig6b.eps} \caption{Dijet mass distribution at the Tevatron calculated with the ``modified'' parton densities (see text) for 10\,GeV (left) and 25\,GeV (right) jets, KMR exclusive model included. } \label{FigBLnb} \end{figure} \par The exclusive BL model, in combination with the FM, leads to a quite reasonable description of the DMF shape for both $p^{min}_T$ cuts; however, it fails to reproduce the shape of the exclusive cross section measured as a function of the minimal jet transverse momentum $p_T^{min}$. To illustrate this, we present in Fig.~\ref{FigSigmaEXC} the CDF data for the exclusive cross section, corrected for detector effects, compared with the predictions of both exclusive models after applying the same cuts as in the CDF measurement, namely: $p^{jet1,2}_T>p_T^{min}$, $|\eta^{jet1,2}|<2.5$, $3.6<\eta_{gap}<5.9$, $0.03 < \xi_{\bar{p}}<0.08$. The BL exclusive model shows a much weaker $p_T$ dependence than the KMR model and is in disagreement with the data.\footnote{Let us note that the cross section of exclusive events measured by the CDF collaboration is an indirect measurement, since it was obtained by subtracting the inclusive contribution using an older version of the gluon density in the pomeron measured at HERA.
In that sense, the contribution of exclusive events using the newest gluon density from HERA might change those results. However, as we noticed, even a large modification of the gluon density at high $\beta$, obtained by multiplying the gluon distribution by $(1-\beta)^{\nu}$, does not change the amount of exclusive events by a large factor, and thus does not modify the indirect measurement performed by the CDF collaboration much.} To close the discussion of the pomeron-like models, it is worth mentioning that these results assume that the survival probability has no strong dependence on $\beta$ and $\xi$. If this is not the case, we cannot assume that the shape of the gluon distribution as measured at HERA can be used to make predictions at the Tevatron. However, this is a reasonable assumption, since the survival probability is related to soft phenomena occurring during hadronisation, at a much longer time scale than the hard interaction. In other words, it is natural to suppose that these soft phenomena will not be influenced by the hard interaction. \begin{figure} \includegraphics[width=\picwidth]{fig7.eps} \caption{Exclusive cross section as a function of the minimal transverse jet momentum $p^{min}_T$ measured by the CDF collaboration and compared to the predictions of the KMR and BL exclusive models. We note that the BL model overshoots the CDF measurement while the KMR model is in good agreement.} \label{FigSigmaEXC} \end{figure} \begin{figure}[h] \includegraphics[width=\picwidth]{fig8.eps} \caption{Dijet mass fraction for two values of the minimal transverse jet momentum $p^{min}_T$. We note that the relative exclusive contribution is higher at high $p^{min}_T$.} \label{FigDMFpt} \end{figure} \begin{figure} \includegraphics[width=\picwidth]{fig9a.eps} \includegraphics[width=\picwidth]{fig9b.eps} \caption{Number of jet events and mean of the dijet mass fraction as a function of the minimal jet $p^{min}_T$.
We note that the ideal value of $p^{min}_T$ to enhance the exclusive contribution is of the order of 30-40 GeV, which leads to a high enough production cross section as well as a large effect of the exclusive contribution on the dijet mass fraction.} \label{FigNDPmean} \end{figure} \subsection{Prospects of future measurements at the Tevatron} In this section, we list some examples of observables which could be used to better identify the exclusive contribution in DMF measurements at the Tevatron. We present the predictions as a function of the minimal transverse momentum of the two leading jets $p_T^{min}$. Since the BL inclusive model does not describe the DMF at low $R_{JJ}$, we choose to show only the FM prediction, in combination with both the KMR and BL exclusive models. \par The same roman pot acceptance and cuts as in the CDF measurement were used, specifically $0.01<\xi_{\bar{p}}<0.12$, $p_T^{jet1,2}> p_T^{min}$, $|\eta^{jet1,2}|<2.5$, $3.6<|\eta_{gap}|<5.9$. Moreover, we adopted the normalization between inclusive and exclusive events obtained for the $p_T>25\,\mathrm{GeV}$ analysis in the previous section, because we are less sensitive to the imperfections of the fast simulation interface for higher $p_T$ jets. Fig.~\ref{FigDMFpt} illustrates the DMF for two values of the minimal jet transverse momentum $p_T^{min}$. The shape of the distribution is clearly dominated by exclusive events at high $p^{min}_T$. \par Fig.~\ref{FigNDPmean} shows the rate of DPE events. In addition to the curves denoting the inclusive contribution with the gluon density varied for $\nu=-0.5,0, 0.5$, the full contribution for both exclusive models is shown. For the FM model, which is in better agreement with the available data, the measurement of the DPE rate does not provide a clear separation of the exclusive contribution from the effects due to the pomeron uncertainty, since a noticeable difference appears only when the cross sections are too low to be observable.
It is possible, however, to examine the mean of the DMF distribution. As seen in Fig.~\ref{FigNDPmean}, this observable disentangles the exclusive production well, with the largest effect between 30 and $40\,\mathrm{GeV}$. \par It needs to be stressed that, even though we gain insight into exclusive production phenomena at the Tevatron, the final picture cannot be drawn before the structure of the pomeron is precisely measured. For this purpose, neither the DMF nor the DPE rate is suitable at the Tevatron: the former shows no sensitivity to the high-$\beta$ gluon variation, whereas in the latter the gluon variation and the exclusive contribution cannot be easily separated. The way out is to perform QCD fits of the pomeron structure in gluons and quarks for data at low $R_{JJ}$, where the exclusive contribution is negligible. Another possibility is to perform simultaneous global fits of the pomeron structure functions using DGLAP evolution and of the exclusive production. \par A final important remark is that this study assumed pomeron-like models for inclusive diffraction. It is worth studying other models, like Soft color interaction processes, to find out whether they also lead to the same conclusion concerning the existence of exclusive events.
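The discriminating power of the mean of the DMF distribution discussed above can be illustrated with toy $R_{JJ}$ samples. Both distributions below are invented shapes, chosen only so that the inclusive sample falls steeply while the exclusive one clusters near $R_{JJ}\sim1$:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_dmf(n_inc, n_exc):
    # toy R_JJ samples: inclusive bulk at low R_JJ, exclusive peak near 1
    inc = rng.beta(1.2, 4.0, n_inc)
    exc = np.clip(rng.normal(0.85, 0.07, n_exc), 0.0, 1.0)
    return np.concatenate([inc, exc]).mean()

# a growing exclusive fraction (mimicking a higher pT^min cut) shifts the mean up
for frac in (0.0, 0.1, 0.3):
    n_exc = int(1000 * frac)
    print(f"exclusive fraction {frac:.1f}: <R_JJ> = {mean_dmf(1000, n_exc):.3f}")
```

Because the exclusive peak sits far above the inclusive bulk, even a modest exclusive admixture moves the mean noticeably, which is why $\langle R_{JJ}\rangle$ versus $p_T^{min}$ works as a discriminant even without an absolute normalization.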
\subsection{Soft color interaction model} \begin{figure} \parbox{\picwidth}{ \includegraphics[width=\picwidth]{fig10a.eps} } \parbox{\picwidth}{\includegraphics[width=\picwidth]{fig10b.eps} } \caption{Dijet mass fraction at the Tevatron for jets $p_T>10\,\mathrm{GeV}$ (left) and the $\eta$ distribution of produced particles (right) for the Soft color interaction model.} \label{Figsciflow} \end{figure} \begin{figure} \includegraphics[width=\picwidth]{fig11a.eps} \includegraphics[width=\picwidth]{fig11b.eps} \caption{Rapidity distribution of the leading jet (left) and the second leading jet (right) in the SCI model when calculating the dijet mass fraction.} \label{Figscijets} \end{figure} \begin{figure} \includegraphics[width=\picwidth]{fig12a.eps} \includegraphics[width=\picwidth]{fig12b.eps} \caption{Dijet mass fraction at the Tevatron for jets $p_T>10\,\mathrm{GeV}$ for the SCI model and the KMR exclusive model (left), and for jets $p_T>25\,\mathrm{GeV}$ for the SCI model only (right).} \label{dmfsci25} \end{figure} The Soft color interaction model uses a different approach to explain diffractive events. In this model, diffraction is due to a special color rearrangement in the final state, as mentioned earlier. It is worth noticing that in this model the CDF data are dominated by events with a tagged antiproton on the $\bar{p}$ ($\eta_{\bar{p}}<0$) side and a rapidity gap on the $p$ side. In other words, in most of the events there is only a single intact antiproton in the final state, accompanied by a bunch of particles (mainly pions) flowing into the beam pipe. This is illustrated in Fig.~\ref{Figsciflow} (right), which shows the rapidity distribution of produced particles; we notice the tail of the distribution at high rapidity. On the other hand, the probability to keep two intact protons (which is important for double tagged events) is extremely small in the SCI model.
\par After applying all the CDF cuts mentioned above, the comparison between the SCI model and the CDF data on $R_{JJ}$ is shown in Figs.~\ref{Figsciflow} (left) and \ref{dmfsci25}. Whereas it is not possible to describe the full dijet mass fraction for jets with $p_T>10\,$GeV, it is noticeable that the exclusive contribution is found to be lower than in the case of the pomeron inspired models. Indeed, performing the same independent fit of the SCI and KMR exclusive contributions, one finds that only 70\,\% of the exclusive contribution needed in the case of the pomeron inspired models is necessary to describe the data. For jets with $p_T>25\,$GeV, no additional exclusive contribution is needed to describe the measurement, as can be seen in Fig.~\ref{dmfsci25}. Since most events are asymmetric, in the sense that only the antiproton is strictly intact while on the other side there is a flow of particles in the beam pipe, it is worth studying the rapidity distribution of the jets in this model. The results are shown in Fig.~\ref{Figscijets}. We note that the rapidity distribution is boosted towards high values of rapidity and not centered around zero as for the pomeron inspired models and the CDF data. Moreover, the cross section for $p_T>10\,$GeV jets in the SCI model is $\sigma^{\mathrm{SCI}}=167\,$pb, only about 13\% of the cross section predicted by the pomeron inspired models, which, however, give a correct prediction for a large range of observables including DPE cross sections. Such properties therefore disfavour the SCI model. Nevertheless, it would be worth studying and modifying the SCI model, since the probability to observe two intact protons in the final state (and/or two gaps) should be higher than the square of the probability of observing one proton (and/or one gap) only (single diffraction), as was seen by the CDF collaboration \cite{cdfprob}.
The model needs to be adjusted to take this into account, and then it would be interesting to see the impact on the dijet mass fraction and on the existence of exclusive events. \section{Dijet mass fraction at the LHC} It was suggested that exclusive production at the LHC could be used to study the properties of a specific class of centrally produced objects such as Higgs bosons. However, this relies on many subtleties, such as a good understanding of the inclusive production. The perturbative nature of the diffractive processes results in the factorization of the cross section into a Regge flux and pomeron structure functions, while factorization breaking appears via the survival probability only. The gluon density in the pomeron is of primary importance, since its value at high momentum fraction will control the background to exclusive DPE, and the pomeron flux and the survival probability factor will have to be measured at the LHC to make reliable predictions. The flux depends on the pomeron intercept $\alpha_\mathbb{P}$, whose impact on the DMF distribution at LHC energies is shown in Fig.~\ref{FigInterceptLHC}. The pomeron intercept is parametrized as $\alpha_\mathbb{P}=1+\epsilon$ and the prediction is made for four values of $\epsilon=0.5, 0.2, 0.12, 0.08$. The updated HERA pomeron structure function analysis \cite{pdfs} suggests that the ``hard pomeron'' intercept value is close to $\alpha_\mathbb{P}=1.12$. Nevertheless, new QCD fits using single diffractive or double pomeron exchange data will have to be performed to fully constrain the parton densities and the pomeron flux at the LHC. \begin{figure}[h] \begin{center} \includegraphics[width=\picwidth]{fig13.eps} \caption{Sensitivity of the dijet mass fraction to different values of the pomeron intercept $\alpha_\mathbb{P} = 1 + \epsilon$. } \label{FigInterceptLHC} \end{center} \end{figure} \par We also give the dependence of the DMF on the jet $p_T$ at the LHC.
DPE events in this analysis were selected by applying the roman pot acceptance on both sides of the interaction point, using a fast simulation of the CMS detector \cite{cmssim} (the results would be similar using the ATLAS simulation), and requiring two leading jets with $p_T\geq 100, 200, 300, 400\,\mathrm{GeV}$. We have disfavored the predictions of the BL exclusive model at the Tevatron. The BL exclusive model shows a weak $p_T$ dependence, which makes it unphysical at LHC energies since it predicts cross sections even higher than the inclusive ones. We therefore focus on the predictions of the FM and KMR models only. As in the previous sections, we also include a study of the uncertainty on the gluon density, enhancing the high $\beta$ gluon with a factor $(1-\beta)^{\nu}$. \begin{figure}[h] \begin{center} \epsfig{file=fig14.eps,width=\picwidth} \caption{Dijet mass fraction at the LHC as a function of the jet minimal transverse momentum $p^{min}_T$, FM inclusive model.} \label{FigDMFLHC} \end{center} \end{figure} \begin{figure}[h] \begin{center} \epsfig{file=fig15a.eps,width=\picwidth} \epsfig{file=fig15b.eps,width=\picwidth} \caption{Dijet mass fraction at the LHC for jets with $p_T>200\,\mathrm{GeV}$ and $p_T>400\,\mathrm{GeV}$, respectively, FM inclusive + KMR exclusive models.} \label{FigDMFexcLHC} \end{center} \end{figure} \begin{figure}[h] \begin{center} \epsfig{file=fig16.eps,width=\picwidth} \caption{Number of DPE events at the LHC as a function of the minimal transverse momentum $p^{min}_T$ of the two leading jets, FM inclusive + KMR exclusive models. The gluon variation is displayed for different $\nu$ values.} \label{FigNDPEptLHC} \end{center} \end{figure} \begin{figure}[h] \begin{center} \epsfig{file=fig17.eps,width=\picwidth} \caption{Average value of the dijet mass fraction as a function of the minimal transverse momentum $p^{min}_T$ of the leading jets. Exclusive contribution and different values of $\nu$ are shown. 
FM + KMR models.} \label{FigMeanLHC} \end{center} \end{figure} \clearpage \par The dijet mass fraction for different jet $p_T$ thresholds is shown in Fig.~\ref{FigDMFLHC}. The exclusive contribution manifests itself as an increase in the tail of the distribution, which can be seen for $200\,\mathrm{GeV}$ jets (left) and $400\,\mathrm{GeV}$ jets (right) in Fig. \ref{FigDMFexcLHC}. Exclusive production slowly turns on with increasing jet $p_T$, as demonstrated in Fig. \ref{FigNDPEptLHC}, where the number of expected DPE events is shown. However, with respect to the uncertainty on the gluon density this appearance is almost negligible. One can use the average position of the DMF as a function of the minimal jet transverse momentum $p_T^{min}$ to study the presence of the exclusive contribution, see Fig. \ref{FigMeanLHC}. This is true especially for high $p_T$ jets. \par The exclusive production at the LHC plays a minor role for low $p_T$ jets. Therefore, measurements, e.g., for $p_T<200\,\mathrm{GeV}$, where the inclusive production is dominant, could be used to constrain the gluon density in the pomeron. Afterwards, one can look at the high $p_T$ jet region to extract the exclusive contribution from the tail of the DMF. \section{Conclusion} The aim of this paper was to investigate whether the excess of events at high dijet mass fraction measured at the Tevatron can be explained without exclusive production. The result is actually twofold. \par Concerning the pomeron induced models (the ``Factorized model'' and the Bialas-Landshoff inclusive model), we found that the uncertainty on the high $\beta$ gluon density in the pomeron has a small impact at high $R_{JJ}$. Therefore, an additional contribution is needed to describe the CDF data with these models. 
We examined the exclusive KMR model and Bialas-Landshoff exclusive model predictions for the role of the additional contribution and found that the best description of the data is achieved by the combination of the Factorized inclusive model (or the modified inclusive Bialas-Landshoff one) and the KMR exclusive model. The exclusive contribution at the Tevatron can be magnified by requesting higher $p_T$ jets and studying specific observables, such as the mean of the dijet mass fraction. However, one limitation of using high $p_T$ jets is the rate of DPE events, which falls logarithmically, allowing measurements for jets only up to approximately $40\,\mathrm{GeV}$. The Bialas-Landshoff exclusive model seems to be disfavoured by the Tevatron data since it shows a softer jet $p_T$ dependence and predicts unphysically large DPE rates at LHC energies. \par In the case of the Soft color interaction model, which is not based on pomeron exchanges, the need to introduce an additional exclusive production is less obvious. For low $p_T$ jets the amount of exclusive events needed to describe the data is smaller than in the case of the Factorized model, while for high $p_T$ jets no additional contribution is necessary. This raises a new question: could the double pomeron exchange events be explained by a special rearrangement of color only? The CDF data are in this model dominated by single diffractive events. The probability of tagging two protons in the final state within this model is very small, contradicting the CDF observation. So even though the SCI model is not applicable to DPE events in its current state, it would be worth adjusting this model to correctly predict the rate of double tagged events and to study the model prediction of the dijet mass fraction and other DPE induced processes. \par The dijet mass fraction at the LHC could be used to select the exclusive events. 
Indeed, it is possible to study jets with $p_T>200\,\mathrm{GeV}$ for instance, and to focus on events with DMF above 0.8, which are dominated by exclusive production (see Fig.~\ref{FigDMFexcLHC}). However, as advocated earlier, a complete QCD analysis, consisting of measuring the gluon density in the pomeron (especially at high $\beta$) and studying the QCD evolution of exclusive events as a function of jet $p_T$, is needed to fully understand the observables and to make predictions, for example, for diffractive Higgs production and its background at the LHC. \section*{Acknowledgments} The authors want to thank M. Boonekamp, R. Enberg, D. Goulianos, G. Ingelman, R. Peschanski and K. Terashi for useful discussions and for providing them with the CDF data and roman pot acceptance. \newpage \section{Appendix} Throughout the paper, we have purposely omitted a discussion of imperfections concerning the dijet mass fraction reconstruction within our framework, postponing it to this section. In this appendix, all calculations are done for jets with $p_T>10\,\mathrm{GeV}$. \par \begin{itemize} \item In our analysis, we define the dijet mass fraction as the ratio of the two leading jet invariant mass $M_{JJ}$ to the central diffractive mass $M_X$. The latter was determined using the momentum loss $\xi_{\bar{p}}$ measured in a roman pot on the antiproton side and the $\xi_p^{part}$ obtained from particles at the generator level, such that $M_X=(s\xi_{\bar{p}}\xi^{part}_{p})^{1/2}$. In this case, we must ensure that all of the produced diffractive energy $M_X$ is deposited in the central detector. If this is not the case, our $M_X$ at generator level might be appreciably larger than the one measured by the CDF collaboration. The energy flow of the particles at the generator level as a function of rapidity is shown in Fig. \ref{FigEnergy}, upper plot. The middle plot shows the energy flow weighted by the transverse momentum of the particle $E_T$. 
We see that most of the energy is deposited in the calorimeter region, i.e. for $|\eta|<4$. In $\bar{p}$ tagged events, protons most frequently lose a smaller momentum fraction (roughly $\xi_p\sim0.025$) than the tagged antiproton, for which the acceptance turns on at $\xi_{\bar{p}}>0.035$. This can be seen from the $\xi_p$ population plot in the bottom of Fig. \ref{FigEnergy}. Thus, a collision of a more energetic pomeron from the antiproton side with a pomeron from the proton side is boosted towards the $\bar{p}$, as seen in the energy flow distributions. \begin{figure}[h] \includegraphics[width=\picwidth]{fig18a.eps} \includegraphics[width=\picwidth]{fig18b.eps}\\ \includegraphics[width=\picwidth]{fig18c.eps} \includegraphics[width=\picwidth]{fig18d.eps}\\ \includegraphics[width=\picwidth]{fig18e.eps} \includegraphics[width=\picwidth]{fig18f.eps}\\ \caption{Upper and middle plots: Rapidity and $E_T$ weighted rapidity distributions of all particles produced (except the protons); Lower plot: momentum loss of the proton in double pomeron exchange events $\xi_p$ for FM (left) and BL (right) inclusive models.} \label{FigEnergy} \end{figure} \item A comparison between the proton momentum loss obtained from particles, $\xi_p^{part}$, calculated using formula (\ref{eq:xipart}), and the proton momentum loss at generator level, $\xi_p$, leads to the factor 1.1 mentioned in a previous section. The dependence is displayed in Fig. \ref{Figetaxiinc}. \begin{figure} \includegraphics[width=\picwidth]{fig19a.eps} \includegraphics[width=\picwidth]{fig19b.eps}\\ \caption{Comparison of the proton momentum loss $\xi^{part}_p$ calculated with formula (\ref{eq:xipart}) and the proton momentum loss $\xi_p$ at generator level.} \label{Figetaxiinc} \end{figure} \item The size of the rapidity gap runs as a function of the momentum loss $\xi$ like $\Delta\eta\sim\log(1/\xi)$. The size of the gap, which increases with decreasing $\xi$ for the inclusive models, can be seen in Fig. \ref{Figgapxi}. 
Regions of high rapidity show the $\bar{p}$ hits whereas the low rapidity region is due to the produced particles detected in the central detector; they are well separated by a rapidity gap. For exclusive events, the size of the rapidity gap is larger and does not show such a strong $\xi$ dependence as for the inclusive models. \item The simulation interface plays a significant role in the determination of the exclusive contribution. As previously stated, we do not have access to the full simulation interface and thus cannot control all the effects of the detector. In order to eliminate some effects of the simulation, we plot the dijet mass distribution $R_{JJ}$ using the information from the generator and check whether the need for exclusive events to describe the data is still valid. Specifically, we require the same cuts as in Section \ref{sect:dmf}, but the diffractive mass $M^{RP}$ is evaluated using the true (anti)proton momentum loss $(\xi_{\bar{p}})\xi_p$ at generator level \begin{equation} M^{RP}_X=\sqrt{s\xi_{\bar{p}}\xi_{p}}. \end{equation} The dijet mass fraction calculated with $M^{RP}$ is shown in Fig. \ref{FigDMFgen}. We see that the distribution is shifted to lower values of $R_{JJ}$, requiring slightly more exclusive events to describe the CDF data. The description of the data is also quite good. \begin{figure}[h] \includegraphics[width=\picwidth]{fig20a.eps} \includegraphics[width=\picwidth]{fig20b.eps}\\ \includegraphics[width=\picwidth]{fig20c.eps} \includegraphics[width=\picwidth]{fig20d.eps} \caption{Rapidity of particles on the $\bar{p}$ side vs. $\bar{p}$ momentum loss: inclusive models (top) for FM (left) and BL (right); exclusive models (bottom) for KMR (left) and BL (right). 
Hits of scattered $\bar{p}$ are included.} \label{Figgapxi} \end{figure} \begin{figure}[h] \includegraphics[width=\picwidth]{fig21a.eps} \includegraphics[width=\picwidth]{fig21b.eps} \caption{Dijet mass fraction for jets with $p_T>10\,\mathrm{GeV}$: FM + KMR (left), and at generator level calculated according to (\ref{eq:dmfbeta}) (right). } \label{FigDMFgen} \end{figure} \item The role of the simulation interface in reconstructing jets can be illustrated by comparing the above distributions to the DMF calculated at generator level, defined as \begin{equation} R_{JJ}=\frac{M_{JJ}}{M_X}=\frac{\sqrt{s\xi_{\bar{p}}\xi_p\beta_1\beta_2}}{\sqrt{s\xi_{\bar{p}}\xi_p}}=\sqrt{\beta_1\beta_2}, \label{eq:dmfbeta} \end{equation} where $\beta_1$, $\beta_2$ denote the fractions of the pomeron momentum carried by the interacting partons. As can be seen in Fig. \ref{FigDMFgen} (right), the DMF distribution at pure generator level shows a completely different shape, not compatible with the CDF data, which demonstrates the importance of the jet reconstruction. \end{itemize}
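The kinematic relations used in this appendix can be checked numerically. The following Python sketch (illustrative only; the $\sqrt{s}$, $\xi$ and $\beta$ values are assumptions, not taken from the analysis) verifies that $R_{JJ}=M_{JJ}/M_X$ reduces to $\sqrt{\beta_1\beta_2}$, independently of the momentum losses:

```python
import math

def diffractive_mass(sqrt_s, xi_pbar, xi_p):
    """Central diffractive mass M_X = sqrt(s * xi_pbar * xi_p)."""
    return math.sqrt(sqrt_s**2 * xi_pbar * xi_p)

def dijet_mass(sqrt_s, xi_pbar, xi_p, beta1, beta2):
    """Dijet invariant mass when the jets carry fractions beta_1, beta_2
    of their parent pomeron momenta."""
    return math.sqrt(sqrt_s**2 * xi_pbar * xi_p * beta1 * beta2)

# Illustrative values only (Tevatron-like sqrt(s), typical momentum losses):
sqrt_s, xi_pbar, xi_p = 1960.0, 0.05, 0.03
beta1, beta2 = 0.7, 0.9

m_x = diffractive_mass(sqrt_s, xi_pbar, xi_p)
m_jj = dijet_mass(sqrt_s, xi_pbar, xi_p, beta1, beta2)
r_jj = m_jj / m_x

# R_JJ = sqrt(beta1 * beta2): the xi and sqrt(s) dependence cancels exactly
assert abs(r_jj - math.sqrt(beta1 * beta2)) < 1e-12
print(round(m_x, 2), round(r_jj, 4))
```

In particular, changing $\xi_{\bar{p}}$ or $\xi_p$ rescales $M_X$ and $M_{JJ}$ by the same factor, which is why the generator-level DMF carries no memory of the momentum losses.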
\section{Introduction} Probabilistic model evaluation and selection is an important task in statistics and machine learning, particularly when multiple models are under initial consideration. In the non-Bayesian literature, models are typically compared using out-of-sample performance criteria such as cross-validation \citep{Geisser1979,Shao1993,Vehtari2002}, or predictive information \citep{Watanabe2010}. Computing the leave-$p$-out cross-validation score requires $n$-choose-$p$ test set evaluations for $n$ data points, which in most cases is computationally unviable and hence approximations such as $k$-fold cross-validation are often used instead \citep{Geisser1975}. A survey is provided by \citet{Arlot2010}, and a Bayesian perspective on cross-validation by \citet{Vehtari2012, Gelman2014}. In Bayesian statistics, the marginal likelihood or model evidence is the natural measure of model fit. For a model $\mathcal{M}$ with likelihood function or sampling distribution $\left\{f_{\theta}(y): \theta \in \Theta \right\}$ parameterized by $\theta$, a prior $\pi(\theta)$, and observations $y_{1:n} \in \mathcal{Y}^n$, the marginal likelihood or the prior predictive is defined as \begin{equation} \label{eq:bml} p_{\mathcal{M}}( y_{1:n}) = \int f_{\theta}(y_{1:n} ) \, d\pi(\theta) \, . \end{equation} The marginal likelihood can be used to calculate the posterior probability of the model given the data, $p( {\cal{M}} \mid y_{1:n} ) \propto p_{\mathcal{M}}( y_{1:n}) \, p({\cal{M}})$, as it is the probability of the data being generated under the prior when the model is correctly specified \cite[Chapter~7]{Robert2007}. The ratio of marginal likelihoods between models is known as the Bayes factor that quantifies the prior to posterior odds on observing the data. The marginal likelihood can be difficult to compute if the likelihood is peaked with respect to the prior, although Monte Carlo solutions exist; see \citet{Robert2009} for a survey. 
Under vague priors, the marginal likelihood may also be highly sensitive to the prior dispersion even if the posterior is not; a well known example is Lindley's paradox \citep{Lindley1957,OHagan2004,Robert2014}. As a result, its approximations such as the Bayesian information criterion \citep{Schwarz1978} or the deviance information criterion \citep{Spiegelhalter2002} are widely used, see also \citet{Gelman2014}. For our work, it is useful to note, from the chain rule of probability, that the log marginal likelihood can be written as the sum of log conditionals, \begin{equation}\label{eq:p_fac} \log p_{\mathcal{M}}( y_{1:n}) = \sum_{i=1}^n \log p_{\mathcal{M}}(y_i \mid y_{1:i-1}) \end{equation} where $ p_{\mathcal{M}}(y_i \mid y_{1:i-1}) = \int f_{\theta}(y_i) \, d \pi(\theta \mid y_{1:i-1}) $ is the posterior predictive for $i>1$, $p_{\cal{M}}(y_1 \mid y_{1:0}) = \int f_{\theta}(y_{1} ) \, d\pi(\theta) \, $, and this representation holds for any permutation of the data indices. While Bayesian inference formally assumes that the model space captures the truth, in the model misspecified or so called $M$-open scenario \cite[Chapter~6]{Bernardo2009} the log marginal likelihood can be simply interpreted as a predictive sequential, or prequential \citep{Dawid1984}, scoring rule of the form $S(y_{1:n}) = \sum_i s(y_i \mid y_{1:i-1}) $ with score function $ s(y_i \mid y_{1:i-1}) = \log {p_{\cal{M}}(y_i \mid y_{1:i-1})}$. This interpretation of the log marginal likelihood as a predictive score \cite[][Chapter~6]{Kass1995,Gneiting2007,Bernardo2009} has resulted in alternative scoring functions for Bayesian model selection \citep{Dawid2014,Dawid2015,Watson2016,Shao2019}, and provides insight into the relationship between the marginal likelihood and posterior predictive methods \citep{Vehtari2012}. 
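The decomposition (\ref{eq:p_fac}), and its invariance to the data ordering, can be illustrated with a conjugate Beta--Bernoulli model, for which both the marginal likelihood and the posterior predictives are available in closed form. The following Python sketch is ours; the Beta$(1,1)$ prior and the data are illustrative assumptions:

```python
import math
from itertools import permutations

def log_marginal(y, a=1.0, b=1.0):
    """Closed-form log marginal likelihood of a Beta(a,b)-Bernoulli model."""
    k, n = sum(y), len(y)
    return (math.lgamma(a + k) + math.lgamma(b + n - k) - math.lgamma(a + b + n)
            - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)))

def log_prequential(y, a=1.0, b=1.0):
    """Sum of log posterior predictives, the right-hand side of eq. (2)."""
    total, k = 0.0, 0
    for i, yi in enumerate(y):
        p1 = (a + k) / (a + b + i)        # P(y_i = 1 | y_{1:i-1})
        total += math.log(p1 if yi == 1 else 1.0 - p1)
        k += yi
    return total

y = [1, 0, 1, 1, 0]
lm = log_marginal(y)
# the decomposition holds for every permutation of the data
for perm in permutations(y):
    assert abs(log_prequential(list(perm)) - lm) < 1e-9
print(round(lm, 6))  # log(1/60)
```

Every ordering of the five observations yields the same cumulative score, matching the closed-form marginal likelihood exactly.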
\citet{Key1999} considered cross-validation from an $M$-open perspective and introduced a mixture utility for model selection that trades off fidelity to data with predictive power. \section{Uniqueness of the marginal likelihood under coherent scoring} To begin, we prove that under an assumption of data exchangeability, the log posterior predictive is the only prequential scoring rule that guarantees coherent model evaluation. The coherence property under exchangeability, where the indices of the data points carry no information, refers to the principle that identical models on seeing the same data should be scored equally irrespective of data ordering. In demonstrating the uniqueness of the log posterior predictive, it is useful to introduce the notion of a general Bayesian model \citep{Bissiri2016}, which is a framework for Bayesian updating without the requirement of a true model. Define a parameter of interest by \begin{equation} \label{eq:parameter} \theta_0 = \argmin_\theta \int l(\theta, y) dF_0(y) \end{equation} where $F_0(y)$ is the unknown true sampling distribution giving rise to the data, and $l : \Theta \times \mathcal{Y} \rightarrow [0,\infty)$ is a loss function linking an observation $y$ to the parameter $\theta$. \citet{Bissiri2016} argue that after observing $y_{1:n}$, a coherent update of beliefs about $\theta_0$ from a prior $\pi_G(\theta)$ to the posterior $\pi_G(\theta \mid y_{1:n})$ exists and must take on the form \begin{equation}\label{eq:genBayespost} \pi_G(\theta \mid y_{1:n}) \propto \exp\left\{-w l(\theta, y_{1:n}) \right\} \pi_G(\theta) \end{equation} where $l(\theta, y_{1:n})= \sum_i l(\theta,y_i)$ is an additive loss function and $w>0$ is a loss scale parameter; see \citet{Holmes2017,Lyddon2019} on the selection of $w$. For $w= 1$ and $l(\theta,y) = -\log f_{\theta}(y)$, we obtain traditional Bayesian updating without assuming the model $f_{\theta}(y)$ is true for some value of $\theta$. 
From (\ref{eq:parameter}), $M$-open Bayesian inference is simply targeting the value of $\theta$ that minimizes the Kullback-Leibler divergence between $d F_0(y)$ and $f_{\theta}(y)$. The form (\ref{eq:genBayespost}) is uniquely implied by the assumptions in Theorem 1 of \citet{Bissiri2016}, and we now focus on the coherence property of the update rule. An update function $\psi\{l(\theta, y), \pi_G(\theta)\} = \pi_G(\theta \mid y)$ is coherent if, for some inputs $y_{1:2}$, it satisfies \begin{equation*} \psi[l(\theta, y_2),\psi \{l(\theta, y_1), \pi_G(\theta)\}] =\psi\{l(\theta, y_1)+l(\theta, y_2), \pi_G(\theta)\}. \end{equation*} This coherence condition is natural under an assumption of exchangeability as we expect posterior inferences about $\theta_0$ to be unchanged whether we observe $y_{1:2}$ in any order or all at once, as it is in traditional Bayesian updating. We now extend this coherence condition to general Bayesian model choice, where the goal is to evaluate the fit of the observed data under the general Bayesian model class $\mathcal{M}_G = \{l(\theta,y):{\theta \in \Theta\}}$ with a prior $\pi_G(\theta)$. We treat $w$ as a parameter outside of the model specification, as there are principled methods to select it from the model, prior and data. We define the log posterior predictive score as \begin{equation*} s_G (\tilde{y} \mid y_{1:n}) = \log \int g\{ l(\theta,\tilde{y})\} d\pi_G(\theta \mid y_{1:n}) \end{equation*} where $g: [0,\infty) \to [0,\infty)$ is a continuous monotonically decreasing scoring function that transforms $l(\theta,y)$ into a predictive score for a test point $\tilde{y}$. We define the cumulative prequential log score as \begin{equation*} S_G(y_{1:n}) = \sum_{i=1}^n s_G (y_i\mid y_{1:i-1}) \end{equation*} where $s_G ( y_1 \mid y_{1:0})= \log \int g\{ l(\theta,y_1)\} d\pi_G(\theta ) $. 
The cumulative prequential log score sums the log posterior predictive score of each consecutive data point in a prequential manner, where a large score indicates that the model is predicting well. An intuitive choice for the scoring function might be the negative loss $g(l) = -l$, but we will see that this violates coherency, as defined below. \begin{definition} The model scoring function $g(l)$ is coherent if it satisfies \begin{equation} \label{eq:coherentscore} \sum_{i=1}^n s_G (y_i\mid y_{1:i-1}) = \log\int g\{l(\theta,y_{1:n})\} d\pi_G(\theta) \end{equation} for all $\Theta$, $\pi(\theta)$ and $n>0$, such that $S_G(y_{1:n})$ is invariant to the ordering or partitioning of the observations. \end{definition} We now present our main result on the uniqueness of the choice of $g$. \begin{proposition}\label{prop1} If the model scoring function $g: [0,\infty) \to [0,\infty)$ is continuous, monotonically decreasing and coherent, then the unique choice of scoring rule $g(l)$ is \begin{equation*} g(l) = \exp(-wl) \end{equation*} where $w$ is the loss-scale in the general Bayesian posterior. \end{proposition} \begin{proof} The proof is given in the Supplementary Material. \end{proof} This holds irrespective of whether the model is true or not. More importantly for us is the corollary below. \begin{corollary} The marginal likelihood is the unique coherent marginal score for Bayesian inference. \end{corollary} \begin{proof} Let $w=1$ and $l(\theta,y) = -\log f_{\theta}(y)$, and hence $g\{ l(\theta,y)\} = f_{\theta}(y)$. \end{proof} The marginal likelihood arises naturally as the unique prequential scoring rule under coherent belief updating in the Bayesian framework. The coherence of the marginal likelihood implies an invariance to the permutation of the observations $y_{1:n}$ under exchangeability, including independent and identically distributed data, a property that is not shared by other prequential scoring rules, such as \citet{Dawid2014, Grunwald2017, Shao2019}. 
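The content of Proposition \ref{prop1} can also be seen numerically on a discrete parameter space: with $g(l)=\exp(-wl)$ the cumulative prequential score coincides with the batch score and is invariant to the data ordering, whereas another continuous, monotonically decreasing choice, such as $g(l)=1/(1+l)$, is order dependent. The Python sketch below uses an assumed squared-error loss, grid prior and data; none of these values come from the text:

```python
import math

theta_grid = [-1.0, 0.0, 1.0, 2.0]           # discrete parameter space
prior = [0.25] * 4                           # uniform prior pi_G(theta)
loss = lambda th, y: (th - y) ** 2           # assumed squared-error loss
w = 1.0

def gb_posterior(prior_w, ys):
    """General Bayesian update: pi_G(theta|y) ~ exp(-w * sum loss) * prior."""
    unnorm = [p * math.exp(-w * sum(loss(th, y) for y in ys))
              for th, p in zip(theta_grid, prior_w)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def preq_score(ys, g):
    """Cumulative prequential log score S_G(y_{1:n})."""
    total = 0.0
    for i, y in enumerate(ys):
        post = gb_posterior(prior, ys[:i])   # posterior given y_{1:i-1}
        total += math.log(sum(p * g(loss(th, y)) for th, p in zip(theta_grid, post)))
    return total

def batch_score(ys, g):
    """Right-hand side of the coherence condition (7)."""
    return math.log(sum(p * g(sum(loss(th, y) for y in ys))
                        for th, p in zip(theta_grid, prior)))

ys = [0.2, 1.5, -0.4]
g_coh = lambda l: math.exp(-w * l)           # the coherent choice of Prop. 1
g_alt = lambda l: 1.0 / (1.0 + l)            # an alternative decreasing g

assert abs(preq_score(ys, g_coh) - batch_score(ys, g_coh)) < 1e-9
assert abs(preq_score(ys[::-1], g_coh) - preq_score(ys, g_coh)) < 1e-9
# the alternative score depends on the data ordering
assert abs(preq_score(ys[::-1], g_alt) - preq_score(ys, g_alt)) > 1e-6
```

With $g(l)=\exp(-wl)$ the normalizing constants of the intermediate posteriors telescope, which is exactly why the prequential and batch scores agree term by term.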
\section{The marginal likelihood and cross-validation} \subsection{Equivalence of the marginal likelihood and cumulative cross-validation} The leave-$p$-out cross-validation score is defined as \begin{equation} \label{eq:SCV} S_{CV} (y_{1:n} ; p) = \frac{1}{{n \choose p}} \sum_{t=1}^{{n \choose p }} \frac{1}{p} \sum_{j=1}^{p} s\left(\tilde{y}_{j}^{(t)} \mid y^{(t)}_{1:n-p}\right) \end{equation} where $\tilde{y}^{(t)}_{1:p}$ denotes the $t$th of $n$-choose-$p$ possible held-out test sets, with $y_{1:n-p}^{(t)}$ the corresponding training set, such that $y_{1:n} = \left\{\tilde{y}^{(t)}_{1:p}, y^{(t)}_{1:n-p}\right\}$, and $S_{CV}$ records the average predictive score per datum. Although leave-one-out cross-validation is a popular choice, it was shown in \citet{Shao1993} that it is asymptotically inconsistent for a linear model selection problem, and requires $\left(p/n \right) \to 1$ as $n \to \infty$ for consistency. We will not go into further detail here but instead refer the reader to \cite{Arlot2010}. Selecting a larger $p$ has the interpretation of penalizing complexity \citep{Vehtari2012}, as complex models will tend to over-fit to a small training set. However, the number of test set evaluations grows rapidly with $p$ and hence $k$-fold cross-validation is often adopted for computational convenience. From a Bayesian perspective it is natural to consider the log posterior predictive as the scoring function, $s(\tilde{y} \mid y) = \log \int f_\theta(\tilde{y}) d\pi(\theta \mid y)$, particularly as we have now shown that it is the only coherent scoring mechanism, which leads us to the following result. 
\begin{proposition}\label{prop2} The Bayesian marginal likelihood is equivalent to the cumulative leave-$p$-out cross-validation score using the log posterior predictive as the scoring rule, such that \begin{equation} \label{eq:margcv} \log p_{\cal{M}}(y_{1:n}) = \sum_{p=1}^{n} S_{CV} (y_{1:n} ; p) \end{equation} with $s(\tilde{y}_j \mid y_{1:n-p}) = \log p_{\cal{M}}(\tilde{y}_j \mid y_{1:n-p}) = \log \int f_\theta(\tilde{y}_j) \, d\pi(\theta \mid y_{1:n-p})$. \end{proposition} \begin{proof} This follows from the invariance of the marginal likelihood under arbitrary permutation of the sequence $y_{1:n}$ in (\ref{eq:p_fac}). We provide a proof and an alternative proof by induction in the Supplementary Material. \end{proof} The Bayesian marginal likelihood is simply $n$ times the average leave-$p$-out cross-validation score, $n \times (1/n) \sum_{p=1}^{n} S_{CV} (y_{1:n} ; p)$, where the scaling by $n$ is due to (\ref{eq:SCV}) being a per datum score. Bayesian models are evaluated through out-of-sample predictions on all $(2^n-1)$ possible held-out test sets whereas cross-validation with fixed $p$ only captures a snapshot of model performance. Evaluating the predictive performance on $(2^n-1)$ test sets would appear intractable for most applications, but we see through (\ref{eq:margcv}) and (\ref{eq:bml}) that it is computable as a single integral. \subsection{Sensitivity to the prior and preparatory training} The representation of the marginal likelihood as a cumulative cross-validation score (\ref{eq:margcv}) provides insight into the sensitivity to the prior. The last term in the right hand side of (\ref{eq:margcv}) involves no training data, $S_{CV}(y_{1:n}; n) = (1/n)\sum_{i=1}^n \log \int f_{\theta}(y_i) \, d\pi(\theta)$, which scores the model entirely on how well the analyst is able to specify the prior. In many situations, the analyst may not want this term to contribute to model evaluation. 
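Proposition \ref{prop2}, together with the prior-only term $S_{CV}(y_{1:n};n)$ just discussed, can be verified exactly for a small conjugate Beta--Bernoulli model by brute-force enumeration of all $n$-choose-$p$ test sets. The prior and data in this Python sketch are illustrative assumptions:

```python
import math
from itertools import combinations

a, b = 1.0, 1.0                        # assumed Beta prior hyperparameters
y = [1, 0, 1, 1, 0]
n = len(y)

def log_pred(yj, train):
    """Log posterior predictive of a single Bernoulli observation."""
    k, m = sum(train), len(train)
    p1 = (a + k) / (a + b + m)
    return math.log(p1 if yj == 1 else 1.0 - p1)

def log_marginal(ys):
    """Closed-form log marginal likelihood via Beta functions."""
    k, m = sum(ys), len(ys)
    lbeta = lambda u, v: math.lgamma(u) + math.lgamma(v) - math.lgamma(u + v)
    return lbeta(a + k, b + m - k) - lbeta(a, b)

def s_cv(p):
    """Leave-p-out score, eq. (6): average over all n-choose-p test sets."""
    idx = range(n)
    total, count = 0.0, 0
    for test in combinations(idx, p):
        train = [y[i] for i in idx if i not in test]
        total += sum(log_pred(y[j], train) for j in test) / p
        count += 1
    return total / count

lhs = log_marginal(y)
rhs = sum(s_cv(p) for p in range(1, n + 1))
assert abs(lhs - rhs) < 1e-9           # Proposition 2
print(round(lhs, 6))
```

The $p=n$ term, $S_{CV}(y_{1:n};n)$, is the per-datum prior predictive score and involves no training data, which is precisely the contribution an analyst may wish to exclude.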
Moreover, there is a tension between the desire to specify vague priors so as to limit their influence and the fact that diffuse priors can lead to an arbitrarily large and negative model score for real valued parameters from (\ref{eq:margcv}). It may seem inappropriate to penalize a model based on the subjective ability to specify the prior, or to compare models using a score that includes contributions from predictions made using only a handful of training points even with informative priors. For example, we see that 10\% of terms contributing to the marginal likelihood come from out-of-sample predictions using, on average, less than 5\% of available training data. This is related to the start-up problem in prequential analysis \citep{Dawid1992}. A natural and obvious solution is to begin evaluating the model performance after a preparatory phase, for example using 10\% or 50\% of the data as preparatory training prior to testing. This leads to a Bayesian cumulative leave-$P$-out cross-validation score defined as \begin{equation}\label{eq:CCV} S_{CCV}(y_{1:n}; P) = \sum_{p=1}^{P} S_{CV} (y_{1:n} ; p) \end{equation} with a preparatory cross-validation score $ S_{PCV}(y_{1:n}; P) = \sum_{p=P+1}^{n} S_{CV} (y_{1:n} ; p), $ for $1 \leq P < n$. We suggest setting $P$ to leave out $0.9n$, $0.5n$ or $\max(0.9n, n-10d)$, where $d$ is the total number of model parameters, as reasonable default choices, but clearly this is situation specific. One may be interested in reporting both $S_{CCV}$ and $S_{PCV}$, as the latter can be regarded as an evaluation of the prior, but we suggest that only $S_{CCV}$ is used for model evaluation from the arguments above. 
Although full coherency is now lost, we still have coherency conditioned on a preparatory training set, where permutation of the data within the training and test sets does not affect the score, and so we can write (\ref{eq:CCV}) as \begin{equation}\label{eq:CCV2} S_{CCV}(y_{1:n}; P) = \frac{1}{{n \choose P }} \sum_{t=1}^{n \choose P } \log p_\mathcal{M}\left(\tilde{y}^{(t)}_{1:P} \mid y^{(t)}_{1:n-P}\right). \end{equation} This equivalence is derived in the Supplementary Material in a similar fashion to Proposition \ref{prop2}. This has precisely the form of the log geometric intrinsic Bayes factor of \citet{Berger1996}, but motivated by a different route. The intrinsic Bayes factor was developed in an objective Bayesian setting \citep{Berger2001}, where improper priors cause indeterminacies in the evaluation of the marginal likelihood. The intrinsic Bayes factor remedies this with a partition of the data into $y_{1:l},y_{l+1:n}$, where $y_{1:l}$ is the minimum training sample used to convert an improper prior $\pi(\theta)$ into a proper prior $\pi(\theta \mid y_{1:l})$. In contrast, we set $n-P$ to provide preparatory training and $\pi(\theta)$ can be subjective. Moreover, in modern applications we often have $d \gg n$ where intrinsic Bayes factors cannot be applied in their original form. \newpage We can approximate (\ref{eq:CCV2}) through Monte Carlo where the training data sets ${y}^{(t)}_{1:n-P}$ are drawn uniformly at random, and for non-conjugate models the inner term must also be estimated, for example through \begin{equation}\label{eq:MCCCV} \hat{S}_{CCV}(y_{1:n}; P) = \frac{1}{T} \sum_{t=1}^{T} \log \left\{ \frac{1}{B}\sum_{b=1}^B f_{\theta_b^{(t)}}\left(\tilde{y}^{(t)}_{1:P}\right)\right\} \end{equation} where samples $\theta_b^{(t)} \sim \pi \left(\theta \mid y^{(t)}_{1:n-P}\right)$ are obtained via $T$ Markov chain Monte Carlo samplers. 
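A minimal Python sketch of the estimator (\ref{eq:MCCCV}) for a conjugate normal-mean model, in which exact posterior draws stand in for the Markov chain Monte Carlo samplers; the model, the prior and sampling variances, and the simulated data are all illustrative assumptions:

```python
import math, random

random.seed(0)
s2, sigma2 = 4.0, 1.0                  # assumed prior and sampling variances

def posterior_params(train):
    """Conjugate posterior N(m, v) for the mean theta, known sigma^2."""
    m_n = len(train)
    v = 1.0 / (1.0 / s2 + m_n / sigma2)
    return v * sum(train) / sigma2, v

def log_joint_lik(theta, ys):
    """Log joint density f_theta of the held-out block."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma2)
               - (yy - theta) ** 2 / (2.0 * sigma2) for yy in ys)

def s_ccv_hat(y, P, T=200, B=200):
    """Eq. (10): T random train/test splits, B posterior draws per split."""
    n, total = len(y), 0.0
    for _ in range(T):
        idx = list(range(n))
        random.shuffle(idx)            # uniformly random split
        test = [y[i] for i in idx[:P]]
        train = [y[i] for i in idx[P:]]
        m, v = posterior_params(train)
        liks = [math.exp(log_joint_lik(random.gauss(m, math.sqrt(v)), test))
                for _ in range(B)]
        total += math.log(sum(liks) / B)
    return total / T

y = [random.gauss(0.5, 1.0) for _ in range(20)]
print(round(s_ccv_hat(y, P=10), 3))
```

In practice the inner average over $B$ draws should be computed on the log scale (e.g. with a log-sum-exp) when the test block is large, to avoid underflow; the direct form above is adequate for this small illustration.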
If we assume that the number of samples $B$ per chain is sufficiently large, then the variance of the estimate $\hat{S}_{CCV}$ is approximately of the form $\tau^2 / T$. However, fitting $T$ models may be costly, but we can run the chains in parallel. To avoid the need for $T$ Markov chain Monte Carlo chains in (\ref{eq:MCCCV}), we can instead take advantage of the fact that the partial posteriors for different training sets will be similar, and utilize importance sampling \citep{Bhattacharya2007, Vehtari2017} or sequential Monte Carlo \citep{Bornn2010} to estimate the posterior predictives for computational savings. We provide further details on efficient computation of (\ref{eq:MCCCV}) in the Supplementary Material. \section{Illustration for the normal linear model} We illustrate the use of Bayesian cumulative cross-validation in a polynomial regression example, where the $r$th polynomial model is defined as \begin{equation*} f_{\theta}(y \mid x,r) =\mathcal{N}\{y; \theta^{ \mathrm{\scriptscriptstyle T} } \phi_r(x), \sigma^2 \}, \quad \phi_r(x) = \begin{bmatrix} 1 & x &\ldots & x^{r-1} &x^r \end{bmatrix}^{ \mathrm{\scriptscriptstyle T} }. \end{equation*} We observe the data $\{y_{1:n},x_{1:n}\}$, and we place a fixed vague prior on the intercept term, $\theta_0 \sim \mathcal{N}(\theta_0; 0,100^2)$, and $\theta_d \sim \mathcal{N}(\theta_d; 0,s^2)$ for $d \in \{1,\ldots,r\}$ on the remaining coefficients. In our example, we have $n=100$ and the true model is $r=1$, $\theta = \begin{bmatrix} 1 & 0.5\end{bmatrix}^{ \mathrm{\scriptscriptstyle T} }$ with known $\sigma^2 = 1$. For our prior, we vary the value of $s^2 \in \left\{10^{-1},10^0,10^4 \right\}$ to investigate the impact of the prior tails. For each prior setting, we calculate $\log p_\mathcal{M}(y_{1:n})$ and $S_{CCV}(y_{1:n};P)$ for models $r \in \{0,1,2\}$. In this example, $\log p_\mathcal{M}(y_{1:n})$ is tractable, whereas $S_{CCV}$ requires a Monte Carlo average over tractable log posterior predictives. 
We report the mean over 10 runs of estimating $S_{CCV}$ with $T= 10^6$ random training/test splits. We calculate the Monte Carlo standard error over the 10 runs and report the maximum for each setting of $P$. The results are shown in Table \ref{tbl:normal}, where $\hat{S}_{CCV}$ is normalized to the same scale as $\log p_r(y_{1:n})$. Under the strong prior $s^2 = 10^{-1}$ and the moderate prior $s^2 = 10^0$, the marginal likelihood correctly identifies the true model, but when we increase $s^2$ to $10^{4}$ it heavily over-penalizes the more complex models and prefers $r=0$. In fact, the magnitude of the marginal likelihood and the discrepancy just described can be made arbitrarily large by simply increasing $s^2$, which should be guarded against when a modeller has weak prior beliefs. This issue is not observed with $\hat{S}_{CCV}$ for the values of $P$ we consider. The vague prior does not impede the ability of $\hat{S}_{CCV}$ to correctly identify the true model $r=1$ and the scores are stable within each column of $P$. In the Supplementary Material, we present graphical tools for exploring the cumulative cross-validation and the effect of the choice of $P$ on $S_{CCV}$. We provide an additional example using probit regression on the Pima Indian data set. 
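The prior sensitivity just described can be reproduced directly from the closed-form marginal likelihood of the normal linear model, marginalizing $\theta$ analytically. The Python sketch below uses an assumed design and simulated data; it mirrors, but does not reproduce, the experiment above. It shows that inflating $s^2$ drives down the evidence for models containing slope coefficients while leaving the $r=0$ model untouched:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 100, 1.0
x = rng.uniform(0.0, 1.0, n)                   # assumed covariate design
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)    # data from the true r = 1 model

def log_marginal(r, s2):
    """Closed-form log marginal likelihood of the degree-r polynomial model:
    integrating theta out gives y ~ N(0, Phi S0 Phi^T + sigma^2 I)."""
    Phi = np.vander(x, r + 1, increasing=True)  # columns 1, x, ..., x^r
    S0 = np.diag([100.0**2] + [s2] * r)         # prior covariance of theta
    K = Phi @ S0 @ Phi.T + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + y @ np.linalg.solve(K, y))

# The r = 0 model has no coefficient with prior variance s2, so its evidence
# is unchanged, while the evidence for the true r = 1 model is driven down
# by inflating s2 (Lindley's paradox).
assert np.isclose(log_marginal(0, 1e-1), log_marginal(0, 1e4))
assert log_marginal(1, 1e4) < log_marginal(1, 1e0)
print(round(float(log_marginal(1, 1e0) - log_marginal(1, 1e4)), 2))
```

The gap printed at the end grows without bound as $s^2$ increases, which is the behaviour guarded against by the cumulative cross-validation score $S_{CCV}$.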
\begin{table}[!h] \center \def~{\hphantom{-}} \caption{ Log marginal likelihoods and cumulative cross-validation scores for normal linear model}{% \begin{small} \begin{sc} \begin{tabular}{l|c||c|c|c|c} $s^2$~~&Model &$\log p_r(y_{1:n})$& \multicolumn{3}{c} {$\hat{S}_{CCV}(y_{1:n}; P) \times n/P$} \\[2pt] & $r$& &$P=0.9n$& $P= 0.5n$ & $P = 0.1n$ \\[5pt] \hline\hline $10$\rlap{$^{-1}$} &0& -158.82 &-153.80&-153.21& -153.06 \\ &1& {-155.57} & {-150.39}&{-149.55}&{-149.27}\\ &2& -156.12 &-150.94&-149.81& -149.38 \\[5pt] $10$\rlap{$^{0}$} &0& -158.82 &-153.80&-153.21& -153.06 \\ &1& {-156.26} &{-150.77}&{-149.66}& {-149.34} \\ &2& -157.80&-151.90&-150.04& -149.50 \\[5pt] $10$\rlap{$^{4}$} &0& {-158.82 }&-153.80&-153.21& -153.06 \\ &1& -160.81 &{-150.91}&{-149.68}& {-149.35} \\ &2& -166.93 &-152.30&-150.08& -149.53\\[5pt] \hline \hline \multicolumn{3}{r|} {Maximum standard error} &\hphantom{-000}0.002&\hphantom{-000}0.008& \hphantom{-000}0.023 \end{tabular} \end{sc} \end{small}} \label{tbl:normal} \end{table} \newpage \section{Discussion} We have shown that for coherence, the unique scoring rule for Bayesian model evaluation in either $M$-open or $M$-closed is provided by the log posterior predictive probability, and that the marginal likelihood is equivalent to a cumulative cross-validation score over all training-test data partitions. The coherence flows from the fact that the scoring rule and the Bayesian update both use the same information, namely the likelihood function, which is appropriate as the alternative would be to learn and score under different criteria. If we are interested in an alternative loss function to the log likelihood, we advocate a general Bayesian update \citep{Bissiri2016,Lyddon2019} that targets the parameters minimising the expected loss, with models evaluated using the corresponding coherent cumulative cross-validation score. 
\section*{Acknowledgement} The authors thank Lucian Chan, George Nicholson, the editor, an associate editor and two referees for their helpful comments. Fong was funded by The Alan Turing Institute. Holmes was supported by The Alan Turing Institute, the Health Data Research, U.K., the Li Ka Shing Foundation, the Medical Research Council, and the U.K. Engineering and Physical Sciences Research Council. \bibliographystyle{apalike}
\section{Introduction} Let ${\mathbb F}_q$ be the finite field of $q$ elements and of characteristic $p$, with $p\ge 3$. For a function $f:{\mathbb F}_q \to {\mathbb F}_q$, we define the functional graph of $f$ as a directed graph ${\mathcal G}_f$ on $q$ nodes labelled by the elements of ${\mathbb F}_q$ where there is an edge from $u$ to $v$ if and only if $f(u) = v$. For any integer $n\ge 1$, let $f^{(n)}$ be the $n$-th iteration of $f$. These graphs have a special structure: one immediately observes that each connected component of ${\mathcal G}_f$ has a unique cycle (we treat fixed points as cycles of length $1$). An example, the functional graph of $X^2+12 \pmod{31}$, is given in Figure~\ref{pic:connected_graph}. \begin{figure} \resizebox{5cm}{9cm}{% \begin{tikzpicture}[>=latex',line join=bevel,scale=0.60] \pgfsetlinewidth{1bp} \pgfsetcolor{black} \draw [->] (416.65bp,720.76bp) .. controls (412.29bp,712.28bp) and (406.85bp,701.71bp) .. (397.3bp,683.15bp); \draw [->] (207.0bp,71.697bp) .. controls (207.0bp,63.983bp) and (207.0bp,54.712bp) .. (207.0bp,36.104bp); \draw [->] (117.81bp,76.807bp) .. controls (135.0bp,65.665bp) and (160.62bp,49.062bp) .. (188.4bp,31.053bp); \draw [->] (41.57bp,146.83bp) .. controls (51.75bp,136.94bp) and (65.524bp,123.55bp) .. (84.204bp,105.38bp); \draw [->] (153.81bp,364.81bp) .. controls (171.0bp,353.67bp) and (196.62bp,337.06bp) .. (224.4bp,319.05bp); \draw [->] (243.0bp,359.7bp) .. controls (243.0bp,351.98bp) and (243.0bp,342.71bp) .. (243.0bp,324.1bp); \draw [->] (192.46bp,577.46bp) .. controls (185.73bp,568.4bp) and (177.1bp,556.79bp) .. (163.51bp,538.49bp); \draw [->] (314.56bp,505.12bp) .. controls (308.8bp,496.34bp) and (301.52bp,485.26bp) .. (289.4bp,466.82bp); \draw [->] (179.35bp,144.76bp) .. controls (183.71bp,136.28bp) and (189.15bp,125.71bp) .. (198.7bp,107.15bp); \draw [->] (77.57bp,434.83bp) .. controls (87.75bp,424.94bp) and (101.52bp,411.55bp) .. (120.2bp,393.38bp); \draw [->] (163.93bp,505.81bp) ..
controls (171.21bp,496.55bp) and (180.66bp,484.52bp) .. (195.09bp,466.16bp); \draw [->] (361.35bp,720.76bp) .. controls (365.71bp,712.28bp) and (371.15bp,701.71bp) .. (380.7bp,683.15bp); \draw [->] (214.61bp,648.05bp) .. controls (213.07bp,640.35bp) and (211.21bp,631.03bp) .. (207.46bp,612.28bp); \draw [->] (185.57bp,290.83bp) .. controls (195.75bp,280.94bp) and (209.52bp,267.55bp) .. (228.2bp,249.38bp); \draw [->] (318.98bp,647.7bp) .. controls (319.86bp,639.98bp) and (320.92bp,630.71bp) .. (323.05bp,612.1bp); \draw [->] (135.0bp,431.7bp) .. controls (135.0bp,423.98bp) and (135.0bp,414.71bp) .. (135.0bp,396.1bp); \draw [->] (219.88bp,504.05bp) .. controls (217.99bp,496.26bp) and (215.7bp,486.82bp) .. (211.2bp,468.28bp); \draw [->] (243.0bp,287.7bp) .. controls (243.0bp,279.98bp) and (243.0bp,270.71bp) .. (243.0bp,252.1bp); \draw [->] (270.65bp,432.76bp) .. controls (266.29bp,424.28bp) and (260.85bp,413.71bp) .. (251.3bp,395.15bp); \draw [->] (158.59bp,649.81bp) .. controls (166.26bp,640.55bp) and (176.23bp,628.52bp) .. (191.44bp,610.16bp); \draw [->] (325.0bp,575.7bp) .. controls (325.0bp,567.98bp) and (325.0bp,558.71bp) .. (325.0bp,540.1bp); \draw [->] (99.0bp,143.7bp) .. controls (99.0bp,135.98bp) and (99.0bp,126.71bp) .. (99.0bp,108.1bp); \draw [->] (227.69bp,29.757bp) .. controls (263.67bp,50.127bp) and (334.0bp,97.994bp) .. (334.0bp,161.0bp) .. controls (334.0bp,379.0bp) and (334.0bp,379.0bp) .. (334.0bp,379.0bp) .. controls (334.0bp,419.08bp) and (330.38bp,465.41bp) .. (326.78bp,503.97bp); \draw [->] (185.57bp,218.83bp) .. controls (195.75bp,208.94bp) and (209.52bp,195.55bp) .. (228.2bp,177.38bp); \draw [->] (325.35bp,792.76bp) .. controls (329.71bp,784.28bp) and (335.15bp,773.71bp) .. (344.7bp,755.15bp); \draw [->] (136.84bp,576.05bp) .. controls (139.1bp,568.14bp) and (141.85bp,558.54bp) .. (147.2bp,539.79bp); \draw [->] (243.0bp,215.7bp) .. controls (243.0bp,207.98bp) and (243.0bp,198.71bp) .. (243.0bp,180.1bp); \draw [->] (380.65bp,792.76bp) .. 
controls (376.29bp,784.28bp) and (370.85bp,773.71bp) .. (361.3bp,755.15bp); \draw [->] (375.43bp,650.15bp) .. controls (366.69bp,640.6bp) and (355.17bp,627.99bp) .. (338.55bp,609.82bp); \draw [->] (234.65bp,144.76bp) .. controls (230.29bp,136.28bp) and (224.85bp,125.71bp) .. (215.3bp,107.15bp); \draw [->] (215.35bp,432.76bp) .. controls (219.71bp,424.28bp) and (225.15bp,413.71bp) .. (234.7bp,395.15bp); \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (135.0bp,450.0bp) ellipse (27.0bp and 18.0bp); \draw (135.0bp,450.0bp) node {24}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (171.0bp,162.0bp) ellipse (27.0bp and 18.0bp); \draw (171.0bp,162.0bp) node {25}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (243.0bp,234.0bp) ellipse (27.0bp and 18.0bp); \draw (243.0bp,234.0bp) node {26}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (218.0bp,666.0bp) ellipse (27.0bp and 18.0bp); \draw (218.0bp,666.0bp) node {27}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (389.0bp,666.0bp) ellipse (27.0bp and 18.0bp); \draw (389.0bp,666.0bp) node {20}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (152.0bp,522.0bp) ellipse (27.0bp and 18.0bp); \draw (152.0bp,522.0bp) node {21}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (207.0bp,18.0bp) ellipse (27.0bp and 18.0bp); \draw (207.0bp,18.0bp) node {22}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (99.0bp,162.0bp) ellipse (27.0bp and 18.0bp); \draw (99.0bp,162.0bp) node {23}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw 
(204.0bp,594.0bp) ellipse (27.0bp and 18.0bp); \draw (204.0bp,594.0bp) node {28}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (389.0bp,810.0bp) ellipse (27.0bp and 18.0bp); \draw (389.0bp,810.0bp) node {29}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (243.0bp,378.0bp) ellipse (27.0bp and 18.0bp); \draw (243.0bp,378.0bp) node {1}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (325.0bp,522.0bp) ellipse (27.0bp and 18.0bp); \draw (325.0bp,522.0bp) node {0}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (132.0bp,594.0bp) ellipse (27.0bp and 18.0bp); \draw (132.0bp,594.0bp) node {3}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (317.0bp,810.0bp) ellipse (27.0bp and 18.0bp); \draw (317.0bp,810.0bp) node {2}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (171.0bp,234.0bp) ellipse (27.0bp and 18.0bp); \draw (171.0bp,234.0bp) node {5}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (146.0bp,666.0bp) ellipse (27.0bp and 18.0bp); \draw (146.0bp,666.0bp) node {4}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (63.0bp,450.0bp) ellipse (27.0bp and 18.0bp); \draw (63.0bp,450.0bp) node {7}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (243.0bp,162.0bp) ellipse (27.0bp and 18.0bp); \draw (243.0bp,162.0bp) node {6}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (325.0bp,594.0bp) ellipse (27.0bp and 18.0bp); \draw (325.0bp,594.0bp) node {9}; \end{scope} \begin{scope} 
\definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (27.0bp,162.0bp) ellipse (27.0bp and 18.0bp); \draw (27.0bp,162.0bp) node {8}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (317.0bp,666.0bp) ellipse (27.0bp and 18.0bp); \draw (317.0bp,666.0bp) node {11}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (224.0bp,522.0bp) ellipse (27.0bp and 18.0bp); \draw (224.0bp,522.0bp) node {10}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (243.0bp,306.0bp) ellipse (27.0bp and 18.0bp); \draw (243.0bp,306.0bp) node {13}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (279.0bp,450.0bp) ellipse (27.0bp and 18.0bp); \draw (279.0bp,450.0bp) node {12}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (425.0bp,738.0bp) ellipse (27.0bp and 18.0bp); \draw (425.0bp,738.0bp) node {15}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (99.0bp,90.0bp) ellipse (27.0bp and 18.0bp); \draw (99.0bp,90.0bp) node {14}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (207.0bp,90.0bp) ellipse (27.0bp and 18.0bp); \draw (207.0bp,90.0bp) node {17}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (353.0bp,738.0bp) ellipse (27.0bp and 18.0bp); \draw (353.0bp,738.0bp) node {16}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (207.0bp,450.0bp) ellipse (27.0bp and 18.0bp); \draw (207.0bp,450.0bp) node {19}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (171.0bp,306.0bp) ellipse (27.0bp and 18.0bp); \draw 
(171.0bp,306.0bp) node {18}; \end{scope} \begin{scope} \definecolor{strokecol}{rgb}{0.0,0.0,0.0}; \pgfsetstrokecolor{strokecol} \draw (135.0bp,378.0bp) ellipse (27.0bp and 18.0bp); \draw (135.0bp,378.0bp) node {30}; \end{scope} % \end{tikzpicture} } \caption{The functional graph of $X^2+12 \pmod{31}$} \label{pic:connected_graph} \end{figure} Recently, there has been increasing interest in studying, theoretically and experimentally, the graphs ${\mathcal G}_f$ generated by polynomials $f \in {\mathbb F}_q[X]$ of small degree (such as quadratic polynomials), and how they differ, or not, from random mappings~\cite{FO2}. We refer to~\cite{BGTW,BrGa,BuSch,FlGar,KLMMSS, OstSha} and the references therein. In this paper, we concentrate on the case of quadratic polynomials over prime fields. In fact, up to isomorphism we only need to consider the polynomials $f_a(X) = X^2 +a$, $a \in {\mathbb F}_p$ (see the proof of~\cite[Theorem~2.1]{KLMMSS}). For simplicity, we use ${\mathcal G}_a = {\mathcal G}_{f_a}$ to denote the functional graph generated by $f_a$. For this case, in~\cite[Section 4]{KLMMSS} the authors have provided numerical data for the number of distinct graphs ${\mathcal G}_a$, the statistics of cyclic points, the number of connected components, as well as the most popular component size. In contrast to~\cite{KLMMSS}, we consider several questions related to distributions of cyclic points and sizes of connected components of ${\mathcal G}_a$ when $a$ runs through the elements of ${\mathbb F}_p$. In particular, we are interested in characterising connected functional graphs ${\mathcal G}_a$, that is, the graphs which contain only one component (and thus only one cycle). In this paper, we focus on characterising the functional graphs via direct parameters such as the number of (connected) components.
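The basic objects above are easy to experiment with. A minimal sketch (function and variable names are our own) that builds ${\mathcal G}_a$ and recovers its components and cycles by following the unique out-edge from each node:

```python
def functional_graph(p, a):
    # Edge u -> f_a(u) = u^2 + a (mod p); every node has out-degree 1.
    return {u: (u * u + a) % p for u in range(p)}

def components_and_cycles(p, a):
    """Label each node with a component id and return the list of cycles,
    one per component (fixed points count as cycles of length 1)."""
    g = functional_graph(p, a)
    comp, cycles = {}, []
    for start in range(p):
        if start in comp:
            continue
        path, pos, u = [], {}, start
        while u not in comp and u not in pos:
            pos[u] = len(path)
            path.append(u)
            u = g[u]
        if u in pos:               # closed a brand-new cycle
            cid = len(cycles)
            cycles.append(path[pos[u]:])
        else:                      # merged into an already-labelled component
            cid = comp[u]
        for v in path:
            comp[v] = cid
    return cycles
```

For the graph of Figure~\ref{pic:connected_graph} ($p=31$, $a=12$) this yields a single component whose cycle $0 \to 12 \to 1 \to 13 \to 26 \to 6 \to 17 \to 22 \to 0$ has length $8$.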
We then characterise various cumulative parameters, such as the number of cyclic points and the shape of trees extracted from functional graphs. We highlight similarities and differences between functional graphs~\cite{KLMMSS} and random mappings~\cite{FO2}, and we also pay much attention to features of connected functional graphs. While obtaining theoretical results for these questions remains a challenge, we introduce efficient algorithms and present interesting new numerical results. The rest of the paper is structured as follows. In Section~\ref{sect:counting}, we develop a fast algorithm that determines whether a functional graph is connected, which is used to compute the number of connected functional graphs. In Section~\ref{sect:cyclic}, we compare the number of cyclic points in connected graphs with those in all graphs modulo $p$. In Section~\ref{sect:smallcyclic} and Section~\ref{sect:smallsize} respectively, we consider the number of components with a small number of cyclic points and with small size. Finally, in Section~\ref{sect:tree} we illustrate the statistics of trees in functional graphs. Throughout the paper, we use the Landau symbol $O$. Recall that the assertion $U=O(V)$ is equivalent to the inequality $|U|\le cV$ with some absolute constant $c>0$. To emphasise the dependence of the implied constant $c$ on some parameter (or a list of parameters) $\rho$, we write $U=O_{\rho}(V)$. We also use the asymptotic symbol $\sim$. \section{Counting connected graphs} \label{sect:counting} In this section, we introduce a new efficient algorithm that quickly detects connected functional graphs, and formulate some conjectures for the number of connected graphs based on our computations. \subsection{Preliminaries and informal ideas of the algorithm} Let ${\mathcal I}_p$ be the set of $a \in {\mathbb F}_p$ such that ${\mathcal G}_a$ is connected.
We also denote by $I_p = \# {\mathcal I}_p$ the number of connected graphs ${\mathcal G}_a$ with $a \in {\mathbb F}_p$. Clearly the graph ${\mathcal G}_0$ is not connected, and by~\cite[Corollary~18~(a)]{VaSha} ${\mathcal G}_{-2}$ is not connected either if $p>3$, and so ${\mathcal I}_p \subseteq {\mathbb F}_p \setminus \{0,-2\}$ if $p>3$. In fact, the values $a = 0$ and $a = -2$ lead to graphs with a particular group structure (and thus the structure of these graphs deviates significantly from the other graphs, see~\cite{VaSha}). In~\cite[Algorithm 3.1]{KLMMSS}, a rigorous deterministic algorithm using Floyd's cycle detection algorithm and needing $O(p)$ function evaluations (that is, of complexity $p^{1+o(1)}$) has essentially been used to test whether ${\mathcal G}_a$ is a connected graph. Instead of evaluating $I_p$ via this algorithm, which would need $O(p^2)$ function evaluations, we introduce a heuristic approach which is much more efficient in practice and is specifically useful for computations on a family of graphs (not just a single graph). The main idea is to first check quickly whether ${\mathcal G}_a$ has more than one small cycle (i.e., more than one component). A graph ${\mathcal G}_a$ has a component with a \textit{cycle} of size $\ell$ if and only if the equation $f_a^{(\ell)}(u) = u$ has a solution $u$ which is not a solution to any of the equations $f_a^{(k)}(u) = u$ with $1 \le k < \ell$. The roots of $f_a^{(\ell)}(u) = u$ are the \textit{cyclic points} in the graph. For this we need the \textit{dynatomic polynomials} $$ F_a^{(\ell)}(X) = \prod_{r \mid\ell} \(f_a^{(r)}(X) - X\)^{\mu(\ell/r)}, $$ where $\mu(k)$ is the M{\"o}bius function, see~\cite[Section~4.1]{Silv}. Moreover, we have $$ f_a^{(n)}(X) - X = \prod_{\ell \mid n} F_a^{(\ell)}(X), \quad n=1,2,\ldots. $$ For example $$ F_a^{(1)}(X) = X^2 -X +a \quad \mbox{and} \quad F_a^{(2)}(X) = X^2 + X + a +1 $$ and $$ F_a^{(3)}(X) = \(f_a^{(3)}(X) - X\)/ \(f_a^{(1)}(X) - X\).
$$ Clearly, if ${\mathcal G}_a$ has a cycle of length $\ell$, then any point in this cycle is a root of the polynomial $F_a^{(\ell)}(X)$. However, the roots of $F_a^{(\ell)}(X)$ might not all lie in cycles of length $\ell$; for instance see~\cite[Example~4.2]{Silv}. Certainly, ${\mathcal G}_a$ is not connected if $F_a^{(\ell)}(X)$ has a root for two distinct values of $\ell =\ell_1,\ell_2$ with $\ell_1\nmid \ell_2$ and $\ell_2 \nmid \ell_1$. Alternatively, if $F_a^{(\ell)}(X)$ has more than $\ell$ distinct roots, this indicates that ${\mathcal G}_a$ has at least two cycles, which again implies that ${\mathcal G}_a$ has more than one connected component. As we show later, it turns out that this occurs frequently, and thus we can quickly rule out the connectivity of most of the graphs ${\mathcal G}_a$, $a \in {\mathbb F}_p$. A relatively small number of remaining suspects can then be checked via the rigorous deterministic algorithm from~\cite[Algorithm 3.1]{KLMMSS}. \subsection{Algorithm} Algorithm~\ref{algo:one_component} determines whether a graph is connected or not; note that we in fact use $f_a^{(\ell)}(X)$ instead of $F_a^{(\ell)}(X)$. \begin{algorithm} \begin{algorithmic}[1] \REQUIRE prime $p$, integer $a \pmod p$ and integer $L$.
\ENSURE returns true if $X^2+a \pmod p$ generates a connected functional graph, and false otherwise \STATE $cycles \leftarrow 0$ \STATE $g_1 \leftarrow \gcd(X^p-X,f_a^{(1)}(X)-X)$ \IF {$\deg g_1 \ge 1$} \IF{$\deg g_1 = 2$} \RETURN false \ENDIF \STATE $cycles \leftarrow cycles + 1$ \ENDIF \FOR{$i \leftarrow 2$ to $L$} \STATE $g_i \leftarrow \gcd(X^p-X,f_a^{(i)}(X)-X)$ \IF{$\deg g_i > i$} \RETURN false \ELSIF {$\deg g_i = i$} \STATE $cycles \leftarrow cycles + 1$ \ENDIF \IF{$cycles > 1$} \RETURN false \ENDIF \ENDFOR \FOR{$j \leftarrow 0$ to $p-1$} \STATE start traversal from node $j$ \IF{two cycles are detected} \RETURN false \ENDIF \ENDFOR \RETURN true \end{algorithmic} \caption{Determine if ${\mathcal G}_a$ is a connected graph} \label{algo:one_component} \end{algorithm} The algorithm starts by checking if there is any cycle of size 1 in the graph. Since $X^p-X$ has only simple roots and $f_a^{(1)}(X)-X$ has degree 2, if $\deg \gcd(X^p-X,f_a^{(1)}(X)-X) = 2$, then there are two cycles of size 1 and thus two separate components in the graph. Otherwise, there is at most one component with a cycle of size 1 in the graph ${\mathcal G}_a$. Next, we compute $g_i = \gcd(X^p-X,f_a^{(i)}(X)-X)$ from $i=2$ until $L$ while keeping track of the number of cycles that have been detected. Here, we have several possibilities: \begin{itemize} \item if $\deg g_i < i$, then there is no cycle of size $i$ in the graph. \item if $\deg g_i = i$, then there is exactly one cycle of size $i$. \item if $\deg g_i > i$, then there are at least two different cycles in the graph. \end{itemize} When $\deg g_i < i$, there is no cycle of size $i$ since there are not enough roots to form one. Similarly, if $\deg g_i > i$, then there are more than $i$ points of period dividing $i$, which cannot all lie on a single cycle of length dividing $i$, and so there is more than one cycle in the graph. Finally, if at this stage the algorithm detects $\deg g_i = i$, then there is exactly one cycle of size $i$.
By contradiction, if there is no cycle of size $i$, then there must be at least two cycles of size less than $i$, and so we would have detected that $cycles > 1$ at a previous iteration, thus returning `false'. Once we are done with the first loop, either we have found one cycle of size at most $L$, or we have not found any small cycles at all. We then proceed with a graph traversal until we find two cycles. \subsection{Statistics of the number of connected graphs} We implement Algorithm~\ref{algo:one_component} using NTL~\cite{NTL} and PARI/GP~\cite{Pari}, choosing $L=5$ in our computations. We collect values of $I_p$ for some primes (as shown in Table~\ref{table:one_component_count}), which lead us to the following conjecture: \begin{conj} \label{conj:Ip} $I_p \sim \sqrt{2p}$ as $p\to \infty$. \end{conj} Here, we also pose a weaker conjecture: \begin{conj} \label{conj:Ip 1} For any prime $p$, $I_p \ge 1$. \end{conj} Conjecture~\ref{conj:Ip 1} predicts that there always exists a connected functional graph generated by a quadratic polynomial modulo $p$. Indeed, according to our computations, Conjecture~\ref{conj:Ip 1} is true for all primes $p \le 100 000$.
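The small-cycle filter at the heart of Algorithm~\ref{algo:one_component} is easy to prototype. Since $X^p-X$ splits into distinct linear factors over ${\mathbb F}_p$, the degree of $g_i=\gcd(X^p-X,f_a^{(i)}(X)-X)$ equals the number of distinct $x\in{\mathbb F}_p$ with $f_a^{(i)}(x)=x$. The sketch below (our own naming; a brute-force point count stands in for the polynomial gcd, which NTL or PARI/GP would compute far more efficiently for large $p$) applies the same case analysis:

```python
def deg_g(p, a, i):
    """Number of x in F_p with f_a^(i)(x) = x, i.e. the degree of
    gcd(X^p - X, f_a^(i)(X) - X), counted by brute force."""
    count = 0
    for x in range(p):
        y = x
        for _ in range(i):
            y = (y * y + a) % p
        if y == x:
            count += 1
    return count

def small_cycle_filter(p, a, L):
    """Return False as soon as two cycles are certain; None if undecided
    (Algorithm 1 then falls back to a full graph traversal)."""
    cycles = 0
    for i in range(1, L + 1):
        d = deg_g(p, a, i)
        if d > i:          # more than i points of period dividing i
            return False
        if d == i:         # exactly one cycle of length i
            cycles += 1
        if cycles > 1:
            return False
    return None
```

For example, with $p=31$ the filter immediately rejects $a=0$ (two fixed points, $0$ and $1$), while for $a=12$ it stays undecided because the only cycle has length $8 > L$.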
\begin{table}[H] {\small \begin{tabular}{r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$I_p$} & \multicolumn{1}{c}{$\sqrt{2p}$}\\[0.2ex] \hline\\[-1.5ex] 500,009 & 1,038 & 1,000.009 \\ 500,029 & 1,002 & 1,000.029 \\ 500,041 & 956 & 1,000.041 \\ 500,057 & 1,026 & 1,000.057 \\ 500,069 & 995 & 1,000.069 \\ 500,083 & 987 & 1,000.083 \\ 500,107 & 994 & 1,000.107 \\ 500,111 & 1,010 & 1,000.111 \\ 500,113 & 1,019 & 1,000.113 \\ 500,119 & 920 & 1,000.119 \\ 500,153 & 1,033 & 1,000.153 \\ 500,167 & 1,005 & 1,000.167 \\ 1,000,003 & 1,369 & 1,414.216 \\ 2,000,003 & 1,909 & 2,000.001 \\ 3,000,017 & 2,478 & 2,449.497 \\ 4,000,037 & 2,838 & 2,828.440 \\ \hline\\ \end{tabular} } \caption{The number of connected graphs modulo $p$} \label{table:one_component_count} \end{table} We also investigate the existence of connected functional graphs having (only) one cycle of size $1$. If the graph ${\mathcal G}_a$ is connected and has one cycle of size $1$, then the equation $X^2+a=X$ has a double root (corresponding to a single fixed point), and so $a=1/4$ with the root $x=1/2$. Thus, we only need to check the graph generated by $X^2+1/4$ in ${\mathbb F}_p$. We have tested all the primes up to 100000 and have found only two such examples: one is $X^2+1$ in ${\mathbb F}_3$, and the other is $X^2+2$ in ${\mathbb F}_7$. Furthermore, we have: \begin{prop} For any prime $p$ with $p\equiv \textrm{$5$ or $11$} \pmod{12}$, there is no functional graph ${\mathcal G}_a$ having only one cycle of size $1$. \end{prop} \begin{proof} Note that we only need to consider the graph ${\mathcal G}_{1/4}$. Since $1/2$ is a fixed point of ${\mathcal G}_{1/4}$ and there is an edge from $-1/2$ to $1/2$, we consider the equation $X^2+1/4 = -1/2$ in ${\mathbb F}_p$, that is, whether $-3$ is a square in ${\mathbb F}_p$. However, if $p\equiv \textrm{$5$ or $11$} \pmod{12}$, $-3$ is not a square in ${\mathbb F}_p$.
Then the in-degree of $-1/2$ is zero. Since the only preimages of the fixed point $1/2$ are $1/2$ and $-1/2$, the component of $1/2$ consists of the two nodes $1/2$ and $-1/2$ only; as $p>3$, there must be further components, and so ${\mathcal G}_{1/4}$ must have more than one cycle. This completes the proof. \end{proof} So, we pose the following conjecture: \begin{conj} For any prime $p>7$, there is no functional graph ${\mathcal G}_a$ having only one cycle of size $1$. \end{conj} \section{Counting cyclic points in functional graphs} \label{sect:cyclic} We now assess the number of cyclic points in functional graphs modulo $p$. For the minimal and maximal numbers of cyclic points in graphs ${\mathcal G}_a$, we refer to~\cite[Table~4.1]{KLMMSS}, where the cases $a=0,-2$ are excluded. Roughly speaking, these two cases are excluded because the number of cyclic points is quite often maximized at $a=0,-2$; see \cite[Section 4.3]{KLMMSS} for more details. In this section, we also follow this convention. Let $C_a$ be the total number of cyclic points of ${\mathcal G}_a$, and let $c_a$ be the largest number of cyclic points in a single component of ${\mathcal G}_a$. Clearly we have $C_a \ge c_a$ for any $a \in {\mathbb F}_p$ and $C_a = c_a$ when $a \in {\mathcal I}_p$. Furthermore, we define the average and largest values of these quantities: \begin{align*} &\overline{C_p} = \frac{1}{p-2} \sum_{a \in {\mathbb F}_p \setminus \{0,-2\}} C_a, &{\mathbf{C}}_p= \max \left\{C_a :~a \in {\mathbb F}_p \setminus \{0,-2\} \right\}; \\ &\overline{c_p} = \frac{1}{p-2} \sum_{a \in {\mathbb F}_p \setminus \{0,-2\}} c_a, &{\mathbf{c}}_p = \max \left\{ c_a:~a \in {\mathbb F}_p \setminus \{0,-2\}\right\};\\ &\overline{c_p}^*= \frac{1}{I_p} \sum_{a \in {\mathcal I}_p} c_a, &{\mathbf{c}}_p^*= \max \left\{c_a:~a \in {\mathcal I}_p \right\}. \end{align*} We remark again that ${\mathcal I}_p \subseteq {\mathbb F}_p \setminus \{0,-2\}$ if $p>3$.
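For a small prime, $C_a$ and $c_a$ can be tabulated directly; the following sketch (our own naming) recovers the cycle of each component by a simple traversal and then sums, respectively maximizes, the cycle lengths:

```python
def cycle_lengths(p, a):
    """Cycle lengths of G_a for f_a(x) = x^2 + a mod p, one per component."""
    g = {u: (u * u + a) % p for u in range(p)}
    comp, lengths = {}, []
    for start in range(p):
        if start in comp:
            continue
        path, pos, u = [], {}, start
        while u not in comp and u not in pos:
            pos[u] = len(path)
            path.append(u)
            u = g[u]
        if u in pos:                       # new cycle closed
            cid = len(lengths)
            lengths.append(len(path) - pos[u])
        else:                              # merged into a known component
            cid = comp[u]
        for v in path:
            comp[v] = cid
    return lengths

def C(p, a):
    # Total number of cyclic points: each component contributes its cycle.
    return sum(cycle_lengths(p, a))

def c(p, a):
    # Largest number of cyclic points in a single component.
    return max(cycle_lengths(p, a))
```

For $p=31$ and $a=12$ the graph is connected with an $8$-cycle, so $C_{12}=c_{12}=8$, illustrating the identity $C_a=c_a$ for $a\in{\mathcal I}_p$.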
Numerical experiments in~\cite[Section~4.3]{KLMMSS} suggest that the average number of cyclic points modulo $p$, taken over all graphs modulo $p$ (excluding $a=0,-2$), is $\sqrt{\pi p/2}$, which is consistent with the behaviour of random maps (see~\cite[Theorem~2(ii)]{FO2}). Here we show that this is not the case for connected graphs (see Table~\ref{table:cyclic_points}): $\overline{c_p}^*$ is smaller than $\overline{C_p}$, that is, on average connected graphs have fewer cyclic points than graphs in general. Notice that $\overline{c_p}^*$ and $\overline{c_p}$ are both close to $\sqrt{2p/\pi}$ (and, although close to each other, $\overline{c_p}^*$ is slightly larger). \begin{table}[H] {\small \begin{tabular}{r r r r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$\overline {C_p} $} & \multicolumn{1}{c}{$\sqrt{\pi p/2}$} & \multicolumn{1}{c}{$\overline {c_p} $} & \multicolumn{1}{c}{$\overline {c_p}^* $} & \multicolumn{1}{c}{$\sqrt{2p/\pi}$}\\[0.2ex] \hline\\[-1.5ex] 500,009 & 886.224 & 886.235 & 553.445 & 573.355 & 564.194\\ 500,029 & 885.990 & 886.253 & 553.312 & 587.750 & 564.205\\ 500,041 & 885.069 & 886.263 & 553.175 & 568.208 & 564.212\\ 500,057 & 884.963 & 886.277 & 552.870 & 586.037 & 564.221\\ 500,069 & 885.831 & 886.288 & 552.952 & 558.285 & 564.229\\ 500,083 & 884.970 & 886.300 & 552.692 & 564.995 & 564.236\\ 500,107 & 884.507 & 886.322 & 552.674 & 562.690 & 564.250\\ 500,111 & 884.341 & 886.325 & 552.157 & 575.976 & 564.252\\ 500,113 & 885.160 & 886.327 & 552.988 & 568.057 & 564.253\\ 500,119 & 884.559 & 886.332 & 552.597 & 569.750 & 564.257\\ 500,153 & 884.834 & 886.363 & 552.900 & 589.146 & 564.276\\ 500,167 & 885.756 & 886.375 & 552.525 & 560.095 & 564.284\\ 600,011 & 969.139 & 970.822 & 605.632 & 611.914 & 618.044\\ 700,001 & 1,047.771 & 1,048.599 & 654.317 & 667.624 & 667.559\\ 800,011 & 1,120.427 & 1,121.006 & 700.047 & 703.061 & 713.655\\ 900,001 & 1,188.822 & 1,188.999 & 742.619 & 762.673 & 756.940\\
1,000,003 & 1,252.452 & 1,253.316 & 782.026 & 793.388 & 797.886\\ 2,000,003 & 1,772.078 & 1,772.455 & 1,106.815 & 1,134.598 & 1,128.380\\ \hline\\ \end{tabular} } \caption{Average number of cyclic points in graphs modulo $p$ (excluding $a=0,-2$)} \label{table:cyclic_points} \end{table} In Table~\ref{table:max_cyclic_points}, one can see that the largest cycles usually do not appear in the connected graphs, which appears surprising and shows the existence of components with a large cycle even when the graph is disconnected. In addition, the difference ${\mathbf{c}}_p-{\mathbf{c}}_p^*$ is large, while the difference of ${\mathbf{C}}_p$ and ${\mathbf{c}}_p$ is small. \begin{table}[H] {\small \begin{tabular}{r r r r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{${\mathbf{C}}_p$} & \multicolumn{1}{c}{${\mathbf{c}}_p$} & \multicolumn{1}{c}{${\mathbf{c}}_p^*$} \\[0.2ex] \hline\\[-1.5ex] 500,009 & 3,578 & 3,164 & 2,319 \\ 500,029 & 3,620 & 3,291 & 2,327 \\ 500,041 & 3,798 & 3,118 & 2,333 \\ 500,057 & 3,468 & 3,319 & 2,423 \\ 500,069 & 3,556 & 3,129 & 2,089 \\ 500,083 & 3,596 & 3,050 & 2,131 \\ 500,107 & 3,527 & 3,232 & 2,643 \\ 500,111 & 3,732 & 3,237 & 2,244 \\ 500,113 & 3,805 & 3,232 & 2,335 \\ 500,119 & 3,873 & 3,142 & 2,275 \\ 500,153 & 3,472 & 3,380 & 2,754 \\ 500,167 & 3,644 & 3,159 & 2,770 \\ 600,011 & 3,847 & 3,488 & 3,265 \\ 700,001 & 4,350 & 3,670 & 2,950 \\ 800,011 & 4,600 & 4,242 & 3,208 \\ 900,001 & 4,997 & 4,274 & 3,245 \\ 1,000,003 & 5,101 & 4,639 & 3,117\\ 2,000,003 & 7,637 & 6,848 & 4,309\\ \hline\\ \end{tabular} } \caption{Maximum number of cyclic points in graphs modulo $p$ (excluding $a=0,-2$)} \label{table:max_cyclic_points} \end{table} Let us also define the following three families of parameters $a$ on which the values ${\mathbf{C}}_p$, ${\mathbf{c}}_p$ and ${\mathbf{c}}_p^*$ are achieved, that is \begin{align*} &\ensuremath{\mathscr{A}}_p = \left\{a\in {\mathbb F}_p\setminus \{0,-2\}:~C_a ={\mathbf{C}}_p \right\}, \\ 
&\ensuremath{\mathscr{B}}_p = \left\{a\in {\mathbb F}_p\setminus \{0,-2\}:~c_a ={\mathbf{c}}_p \right\},\\ & \ensuremath{\mathscr{B}}_p^* = \left\{a\in {\mathcal I}_p:~c_a ={\mathbf{c}}_p^* \right\}. \end{align*} It is certainly interesting to compare the sizes $A_p =\# \ensuremath{\mathscr{A}}_p$, $B_p =\# \ensuremath{\mathscr{B}}_p$ and $B_p^* =\# \ensuremath{\mathscr{B}}_p^*$ and also to investigate the mutual intersections between these families. We find that typically each of these sets consists of a single value of $a$, and rarely contains more than two. As $p$ increases, the frequency of the sets having $2$ or more elements decreases, but does not disappear completely, as can be seen in Table~\ref{table:size_of_ABsets}. \begin{table}[H] {\small \begin{tabular}{l | rrr | rrr | rrr} \hline \\[-2.2ex] & \multicolumn{3}{c}{$A_p$} & \multicolumn{3}{c}{$B_p$} & \multicolumn{3}{c}{$B_p^*$} \\ range of $p$ & \multicolumn{1}{c}{$=1$} & \multicolumn{1}{c}{$=2$} & \multicolumn{1}{c}{$\ge3$} & \multicolumn{1}{c}{$=1$} & \multicolumn{1}{c}{$=2$} & \multicolumn{1}{c}{$\ge3$} & \multicolumn{1}{c}{$=1$} & \multicolumn{1}{c}{$=2$} & \multicolumn{1}{c}{$\ge3$}\\ \hline\\[-1.5ex] $[3,10^4]$ & 1,182 & 39 & 7 & 1,159 & 65 & 4 & 1,193 & 35 & 0\\ $[10^4,2\cdot10^4]$ & 1,013 & 20 & 0 & 1,010 & 22 & 1& 1,019 & 14 & 0 \\ $[2\cdot10^4,3\cdot10^4]$ & 967 & 14 & 2 & 970 & 13 & 0& 976 & 7 & 0 \\ $[3\cdot10^4,4\cdot10^4]$ & 949 & 9 & 0 & 941 & 17 & 0& 950 & 8 & 0 \\ $[4\cdot10^4,5\cdot10^4]$ & 921 & 8 & 1 & 921 & 9 & 0& 926 & 4 & 0 \\ $[5\cdot10^4,6\cdot10^4]$ & 915 & 9 & 0 & 920 & 4 & 0 & 921 & 3 & 0\\ $[6\cdot10^4,7\cdot10^4]$ & 868 & 10 & 0 & 872 & 6 & 0 & 868 & 9 & 1\\ $[7\cdot10^4,8\cdot10^4]$ & 895 & 7 & 0 & 897 & 5 & 0 & 899 & 3 & 0\\ $[8\cdot10^4,9\cdot10^4]$ & 869 & 7 & 0 & 869 & 7 & 0 & 866 & 10 & 0\\ $[9\cdot10^4,10^5]$ & 874 & 5 & 0 & 878 & 1 & 0 & 876 & 3 & 0\\ $[10^5,10^5+10^3]$ & 81 & 0 & 0 & 79 & 2 & 0 & 81 & 0 & 0\\ $[10^6,10^6+10^3]$ & 74 & 1 & 0 & 75 & 0 & 0 & 74 & 1 & 0\\ \hline\\
\end{tabular} } \caption{Values of $A_p$, $B_p$, and $B_p^*$} \label{table:size_of_ABsets} \end{table} For the set intersections, we start with $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p^*$. From Table~\ref{table:cyclic_points} we have observed that $\overline{C_p} > \overline{c_p}^*$, so it is reasonable to expect that $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p^*$ is empty. We remark that if $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p^*$ is not empty, then ${\mathbf{C}}_p={\mathbf{c}}_p={\mathbf{c}}_p^*$, and so for any $a\in \ensuremath{\mathscr{B}}_p$ the graph ${\mathcal G}_a$ is connected, and thus $\ensuremath{\mathscr{B}}_p = \ensuremath{\mathscr{B}}_p^*$. Therefore, for any prime $p$, if ${\mathbf{c}}_p < {\mathbf{C}}_p$, then $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p^*$ must be empty. Our experiments over odd primes $p < 10^5$ found only 20 primes for which this intersection is non-empty; in each case it contains exactly one value of $a$, as shown in Table~\ref{table:ApBpstar}. \begin{table}[H] {\small \begin{tabular}{cccc} \hline \\[-2.2ex] $p$ & value of $a$ & $p$ & value of $a$ \\ \hline\\[-1.5ex] 3 & 2 & 271 & 147 \\ 5 & 1 & 2,647 & 1,445 \\ 7 & 3 & 3,613 & 2,653 \\ 11 & 6 & 6,131 & 3,555 \\ 13 & 1 & 6,719 & 107 \\ 17 & 3 & 17,921 & 8,370 \\ 19 & 13 & 18,077 & 15,557 \\ 29 & 4 & 36,229 & 2,229 \\ 157 & 141 & 53,611 & 23,630 \\ 191 & 97 & 64,667 & 60,638 \\ \hline\\ \end{tabular} } \caption{Values of $p$ with non-empty $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p^*$} \label{table:ApBpstar} \end{table} Since we have observed only one value of $a$ for each prime $p$ in the above table, we conjecture that: \begin{conj} For any prime $p \ge 3$, we have $\#\(\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p^*\) \le 1$.
\end{conj} We also consider the intersection $\ensuremath{\mathscr{B}}_p \cap \ensuremath{\mathscr{B}}_p^*$; see Table~\ref{table:BpBpstar}. Clearly, if $\ensuremath{\mathscr{B}}_p \cap \ensuremath{\mathscr{B}}_p^*$ is not empty, then ${\mathbf{c}}_p={\mathbf{c}}_p^*$. One could expect the number of primes with a non-empty intersection to decrease as $p$ increases; our experiments do show some overall reduction, but the trend remains unclear. \begin{table}[H] {\small \begin{tabular}{lrrc} \hline \\[-2.2ex] range of $p$ & freq & \#primes & \% \\ \hline\\[-1.5ex] $[3,10^4]$ & 104 & 1,228 & 8.06\% \\ $[10^4,2\cdot10^4]$ & 35 & 1,033 & 3.19\% \\ $[2\cdot10^4,3\cdot10^4]$ & 32 & 983 & 3.26\% \\ $[3\cdot10^4,4\cdot10^4]$ & 20 & 958 & 1.98\% \\ $[4\cdot10^4,5\cdot10^4]$ & 19 & 930 & 2.04\% \\ $[5\cdot10^4,6\cdot10^4]$ & 16 & 924 & 1.73\% \\ $[6\cdot10^4,7\cdot10^4]$ & 20 & 878 & 2.28\% \\ $[7\cdot10^4,8\cdot10^4]$ & 15 & 902 & 1.66\% \\ $[8\cdot10^4,9\cdot10^4]$ & 15 & 876 & 1.71\% \\ $[9\cdot10^4,10^5]$ & 6 & 879 & 0.68\% \\ $[10^5,10^5+10^3]$ & 0 & 81 & 0.00\% \\ $[10^6,10^6+10^3]$ & 1 & 75 & 1.33\% \\ \hline\\ \end{tabular} } \caption{Primes with non-empty $\ensuremath{\mathscr{B}}_p \cap \ensuremath{\mathscr{B}}_p^*$} \label{table:BpBpstar} \end{table} The most surprising observation concerns the intersection $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p$. As Table~\ref{table:BpAp} shows, it is rather common for this intersection to be non-empty. For any $a \in \ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p$, the graph ${\mathcal G}_a$ not only has the maximal number of cyclic points but also a cycle of maximal length. Note that for the last two rows we only give primes in the ranges $[10^5, 10^5+10^3]$ and $[10^6, 10^6+10^3]$, respectively, due to the limits of our current computational facilities.
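Although the statistics above come from large-scale computations, the per-graph quantities are easy to reproduce for small primes. The following minimal Python sketch assumes, as our reading of the notation, that $C_a$ is the number of cyclic points of ${\mathcal G}_a$ and $c_a$ the length of its longest cycle.

```python
def functional_graph(p, a):
    """The map x -> x^2 + a on F_p, stored as a list: graph[x] = (x*x + a) % p."""
    return [(x * x + a) % p for x in range(p)]

def cyclic_points(graph):
    """Cyclic points are exactly the stable image under repeated application of the map."""
    s = set(range(len(graph)))
    while True:
        t = {graph[x] for x in s}
        if t == s:
            return s
        s = t

def cycle_lengths(graph):
    """Lengths of all cycles; restricted to its cyclic points the map is a permutation."""
    seen, lengths = set(), []
    for x in cyclic_points(graph):
        if x in seen:
            continue
        n, y = 0, x
        while y not in seen:
            seen.add(y)
            y = graph[y]
            n += 1
        lengths.append(n)
    return lengths
```

With these helpers, $C_a$ is `len(cyclic_points(functional_graph(p, a)))` and $c_a$ is `max(cycle_lengths(functional_graph(p, a)))`; maximising over $a$ recovers ${\mathbf{C}}_p$ and ${\mathbf{c}}_p$ for small $p$.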
\begin{table}[H] {\small \begin{tabular}{lrrc} \hline \\[-2.2ex] range of $p$ & freq & \#primes & \% \\ \hline\\[-1.5ex] $[3,1\cdot10^4]$ & 268 & 1,228 & 20.36\%\\ $[10^4,2\cdot10^4]$ & 197 & 1,033 & 18.87\%\\ $[2\cdot10^4,3\cdot10^4]$ & 153 & 983 & 15.16\%\\ $[3\cdot10^4,4\cdot10^4]$ & 148 & 958 & 15.24\%\\ $[4\cdot10^4,5\cdot10^4]$ & 126 & 930 & 13.55\%\\ $[5\cdot10^4,6\cdot10^4]$ & 167 & 924 & 17.97\%\\ $[6\cdot10^4,7\cdot10^4]$ & 143 & 878 & 16.17\%\\ $[7\cdot10^4,8\cdot10^4]$ & 143 & 902 & 15.74\%\\ $[8\cdot10^4,9\cdot10^4]$ & 144 & 876 & 16.44\%\\ $[9\cdot10^4,10^5]$ & 147 & 879 & 16.72\%\\ $[10^5,10^5+10^3]$ & 13 & 81 & 16.05\%\\ $[10^6,10^6+10^3]$ & 9 & 77 & 11.69\%\\ \hline\\ \end{tabular} } \caption{Primes with non-empty $\ensuremath{\mathscr{A}}_p \cap \ensuremath{\mathscr{B}}_p$} \label{table:BpAp} \end{table} \section{Statistics of small cycles} \label{sect:smallcyclic} We now study components by analysing the distribution of the size of their cycles. Let ${\mathcal C}_{a,k}$ be the number of cycles of length $k$ in the graph ${\mathcal G}_a$. Let $$ {\mathcal C}_k = \sum_{a\in {\mathbb F}_p} {\mathcal C}_{a,k} $$ be the number of cycles of length $k$ over all graphs modulo $p$. Clearly, we have ${\mathcal C}_k=0$ for any $k\ge p/2$; see~\cite[Theorems~1 and~2]{PMMY} for better bounds of $k$. \begin{prop} \label{prop:Ck} For any integer $k\ge 1$, there is a constant $D_k$ depending only on $k$ such that for any prime $p > D_k$ we have $$ {\mathcal C}_k = p/k + O\( 4^k k^{-1} p^{1/2}\). $$ \end{prop} \begin{proof} We can assume that $p>k$. For any fixed $a$, notice that any point $x$ contributing to ${\mathcal C}_{a,k}$ is a root of the polynomial $F_a^{(k)}(X)$. Conversely, any root $x$ of $F_a^{(k)}(X)$ contributes to ${\mathcal C}_{a,d}$ for some $d\mid k$ (possibly $d\ne k$). Thus, we have $$ k{\mathcal C}_k \le \#\{(a,x) \in {\mathbb F}_p^2:~F_a^{(k)}(x) = 0\}. 
$$ Moreover, from~\cite[Theorem~2.4~(c)]{MorPa} and noticing $p \nmid k$, we know that if $F_a^{(d)}(x)=0$ and $F_a^{(k)}(x)=0$ with $d<k$, where $x$ is a point lying in a cycle of length $k$, then $(X-x)^2 \mid F_a^{(k)}(X)$, that is, the discriminant of $F_a^{(k)}(X)$ is zero. Note that as a polynomial in $X$ the degree of $F_a^{(k)}(X)$ is at most $2^k$, and as a polynomial in $a$ the degree of $F_a^{(k)}(X)$ is at most $2^{k-1}$. Then, as a polynomial in $a$, the degree of the discriminant of $F_a^{(k)}(X)$ is at most $4^k$. Thus, except for at most $4^k$ values of $a$, the polynomial $F_a^{(k)}(X)$ has no multiple roots in $X$. Hence, we have \begin{equation} \label{eq:Ck} k{\mathcal C}_k = \#\{(a,x) \in {\mathbb F}_p^2:~F_a^{(k)}(x) = 0\} + O(8^k). \end{equation} In addition, combining \cite[Corollary~1 to Theorem~B]{Mort} with \cite[Proposition 3.2]{MorVi}, if we view $f_A(X) = X^2 +A $ as an integer polynomial in variables $A$ and $X$, then $F_A^{(k)}(X) \in {\mathbb Z}[A,X]$ is an absolutely irreducible polynomial. Then, by Ostrowski's theorem, there exists a positive integer $D_k$ depending only on $k$ such that for any $p > D_k$ the polynomial $F_A^{(k)}(X)$ is absolutely irreducible modulo $p$ in variables $A$ and $X$. It is also easy to see by induction on $k$ that $ f_A^{(k)}(X)$ is of total degree at most $2^k$ as a bivariate polynomial in $A$ and $X$, and the same is true for $F_A^{(k)}(X)$. Thus, by the Hasse--Weil bound (see~\cite[Section~VIII.5.8]{Lor}) we obtain $$ \#\{(a,x) \in {\mathbb F}_p^2:~F_a^{(k)}(x) = 0\} = p + O(4^k p^{1/2}), \quad \textrm{as $p \to \infty$}, $$ which, together with~\eqref{eq:Ck}, implies the desired result (as we can always assume that $D_k > 4^k$, so $4^k p^{1/2}> 8^k$). \end{proof} In particular, we see from Proposition~\ref{prop:Ck} that for any fixed integer $k\ge 1$, $$ {\mathcal C}_k \sim p/k, \qquad \text{as} \ p \to \infty.
$$ Note that using~\cite[Theorem~1]{GaoRod} or~\cite[Satz~B]{Rup} or~\cite[Corollary]{Zannier}, one can obtain an explicit form for $D_k$. However, any such estimate has to depend on the size of the coefficients of $F_A^{(k)}(X)$ (considered as a bivariate polynomial in $A$ and $X$ over ${\mathbb Z}$) and is likely to be double exponential in $k$. We can also compute the exact values of ${\mathcal C}_1$ and ${\mathcal C}_2$. \begin{prop} \label{prop:cycle} For any odd prime $p$, we have ${\mathcal C}_1=p$ and ${\mathcal C}_2= (p-1)/2$. \end{prop} \begin{proof} First, note that any point $x$ contributing to ${\mathcal C}_1$ is a root of $F_a^{(1)}(X)$ for some $a$, and also $$ F_a^{(1)}(X) = X^2 - X + a=(X-1/2)^2 + a -1/4=0 $$ is solvable if and only if $1/4-a$ is a square. For each of the $(p-1)/2$ values of $a$ for which $1/4-a$ is a nonzero square there are two fixed points, and for $a=1/4$ there is exactly one, so ${\mathcal C}_1 = 2\cdot (p-1)/2 + 1 = p$. Now, it is easy to see that $$ F_a^{(2)}(X) = X^2 + X + a + 1. $$ If a point $x$ lies in a cycle of length $2$ in ${\mathcal G}_a$, then it is a root of $F_a^{(2)}(X)$ and also it is not a root of $F_a^{(1)}(X)$. However, if there exists a point $x$ such that $$ F_a^{(2)}(x)=F_a^{(1)}(x)=0, $$ then we must have $x=-1/2, a=-3/4$. So, if $a\ne -3/4$, then any root of $F_a^{(2)}(X)$ lies in a cycle of length $2$. Thus, noticing that $$ F_a^{(2)}(X) =(X+1/2)^2 + a + 3/4=0 $$ is solvable if and only if $-a-3/4$ is a square, and that for each of the $(p-1)/2$ values of $a$ for which $-a-3/4$ is a nonzero square the two roots of $F_a^{(2)}(X)$ form a single cycle of length $2$, we obtain ${\mathcal C}_2=(p-1)/2$, which concludes the proof. \end{proof} Table~\ref{tab:cyclic_point_dist} shows ${\mathcal C}_k$ for some values of $p$ (here we also include the graphs of $X^2$ and $X^2-2$). This is consistent with Proposition~\ref{prop:Ck}.
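The exact values ${\mathcal C}_1=p$ and ${\mathcal C}_2=(p-1)/2$ can also be checked by brute force for small primes. The Python sketch below tallies cycles of each length over all $a\in{\mathbb F}_p$; it is quadratic in $p$, so it is only suitable for small $p$.

```python
from collections import Counter

def cycle_census(p):
    """Tally C_k: the number of cycles of length k over all graphs G_a modulo p."""
    census = Counter()
    for a in range(p):
        graph = [(x * x + a) % p for x in range(p)]
        cyc = set(range(p))          # cyclic points = stable image under iteration
        while True:
            nxt = {graph[x] for x in cyc}
            if nxt == cyc:
                break
            cyc = nxt
        seen = set()
        for x in cyc:                # on cyc the map is a permutation; walk each cycle once
            if x in seen:
                continue
            n, y = 0, x
            while y not in seen:
                seen.add(y)
                y = graph[y]
                n += 1
            census[n] += 1
    return census
```

For instance, `cycle_census(11)` returns a tally with `census[1] == 11` and `census[2] == 5`, in agreement with the proposition.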
\begin{table}[H] {\small \begin{tabular}{r r r r r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$k$} & \multicolumn{2}{c}{$p = 100,003$} & \multicolumn{2}{c}{$p = 500,009$} & \multicolumn{2}{c}{$p = 1,000,003$} \\ & ${\mathcal C}_k$ & \multicolumn{1}{c}{$\fl{p/k}$} & ${\mathcal C}_k$ & \multicolumn{1}{c}{$\fl{p/k}$} & ${\mathcal C}_k$ & \multicolumn{1}{c}{$\fl{p/k}$} \\ \hline\\[-1.5ex] 1 & 100,003 & 100,003 & 500,009 & 500,009 & 1,000,003 & 1,000,003 \\ 2 & 50,001 & 50,001 & 250,004 & 250,004 & 500,001 & 500,001 \\ 3 & 33,333 & 33,334 & 166,669 & 166,669 & 333,333 & 333,334 \\ 4 & 24,890 & 25,000 & 125,000 & 125,002 & 249,890 & 250,000 \\ 5 & 20,061 & 20,000 & 99,353 & 100,001 & 199,310 & 200,000 \\ 6 & 16,775 & 16,667 & 83,664 & 83,334 & 165,852 & 166,667 \\ 7 & 14,179 & 14,286 & 71,582 & 71,429 & 143,109 & 142,857 \\ 8 & 12,474 & 12,500 & 62,541 & 62,501 & 125,266 & 125,000 \\ \hline\\ \end{tabular} } \caption{Number of cycles of length $k$} \label{tab:cyclic_point_dist} \end{table} \section{Distribution of components with size $k$} \label{sect:smallsize} We now study the components of functional graphs by analysing the distribution of their sizes. For the minimal and maximal numbers of components in graphs ${\mathcal G}_a$ as well as the popular component size, we refer to~\cite[Sections~4.4 and~4.5]{KLMMSS}. Let ${\mathcal N}_p$ be the number of components taken over all ${\mathcal G}_a$ modulo $p$, and let ${\mathcal N}_{p,k}$ be the number of those components with size $k > 0$ (that is, there are $k$ nodes in the component). Furthermore, let $$ {\mathcal N}_{p,\text{even}}^K = \sum_{\substack{k \le K\\ \text{$k$ even}}} {\mathcal N}_{p,k} \quad \text{and} \quad {\mathcal N}_{p,\text{odd}}^K = \sum_{\substack{k \le K\\ \text{$k$ odd}}} {\mathcal N}_{p,k}. $$ Clearly, $$ {\mathcal N}_{p} = {\mathcal N}_{p,\text{even}}^p + {\mathcal N}_{p,\text{odd}}^p. $$ We first have: \begin{prop} For any odd prime $p$, ${\mathcal N}_{p,2}=(p-1)/2$. 
\end{prop} \begin{proof} If $C$ is a component of ${\mathcal G}_a$ of size $2$, then it is easy to see that $C=\{x,-x\}$ for some $x\in {\mathbb F}_p$ such that $x$ is a fixed point (that is, $x^2+a=x$) and the equation $X^2+a=-x$ has no solution in ${\mathbb F}_p$ (that is, $-x-a$ is not a square). In other words, for any $x\in {\mathbb F}_p$, if we choose $a=-x^2+x$, then $x$ is a fixed point in ${\mathcal G}_a$ and $-x-a=x^2-2x$. So, it is equivalent to count the elements $x\in {\mathbb F}_p$ for which $x^2-2x$ is not a square in ${\mathbb F}_p$. Since $x^2-2x=(x-1)^2-1$, it is also equivalent to count the elements $x\in {\mathbb F}_p$ for which $x^2-1$ is not a square in ${\mathbb F}_p$. If $x^2-1$ is a square in ${\mathbb F}_p$, say $x^2-1=y^2$, then we have $(x+y)(x-y)=1$. Let $\alpha=x+y$; then $x-y=\alpha^{-1}$, and so $$ x = \frac{\alpha+\alpha^{-1}}{2}, \qquad y = \frac{\alpha-\alpha^{-1}}{2}. $$ This gives a one-to-one correspondence between such pairs $(x,y)$ and pairs $(\alpha,\alpha^{-1})$ with $\alpha \ne 0$. It is easy to see that for any $\alpha_1,\alpha_2\in {\mathbb F}_p^*$, $$ \textrm{$\frac{\alpha_1+\alpha_1^{-1}}{2}=\frac{\alpha_2+\alpha_2^{-1}}{2}$ if and only if $\alpha_1\alpha_2=1$.} $$ So, counting the unordered pairs $\{\alpha,\alpha^{-1}\}$ (the values $\alpha=\pm1$, which correspond to $y=0$, are self-paired), there are $(p-3)/2+2=(p+1)/2$ values of $x$ such that $x^2-1$ is a square. Therefore, there are $(p-1)/2$ values of $x$ such that $x^2-1$ is not a square. This completes the proof. \end{proof} It has been predicted in~\cite[Theorem~2~(i)]{FO2} that \[ {\mathcal N}_p \sim \frac{p\log p}{2}, \] which has a small bias (about $9.5\%$) compared with the actual value; see~\cite[Table~4.2]{KLMMSS}. Here, we improve the precision of this estimate. First, we note that each node in ${\mathcal G}_a$ has in-degree two or zero except for the node $a$, which has in-degree one since only $0$ maps to $a$. Therefore, each component in any graph ${\mathcal G}_a$ has an even number of nodes unless it is the component containing $0$ and $a$.
So, each graph ${\mathcal G}_a$ has exactly one component of odd size. It follows that $$ {\mathcal N}_{p,\text{odd}}^p = p, $$ and so $$ {\mathcal N}_p \sim {\mathcal N}_{p,\text{even}}^p, \quad \textrm{as $p \to \infty$}. $$ For even-sized components, the situation is not as straightforward. In our experiments, we noticed that the number of even-sized components with size $k$ is very close to $p/k$ as shown in Table~\ref{tab:component_size_dist} for $k \le 20$ and for $k = 1000$ and $2000$ (i.e., even for larger values of $k$). \begin{table}[H] {\small \begin{tabular}{r r r r r r r} \hline \\[-2.2ex] \multicolumn{1}{r}{$k$} & \multicolumn{2}{c}{$p = 100,003$} & \multicolumn{2}{c}{$p = 500,009$} & \multicolumn{2}{c}{$p = 1,000,003$} \\ & ${\mathcal N}_{p,k}$ & \multicolumn{1}{c}{$\fl{p/k}$} & ${\mathcal N}_{p,k}$ & \multicolumn{1}{c}{$\fl{p/k}$} & ${\mathcal N}_{p,k}$ & \multicolumn{1}{c}{$\fl{p/k}$} \\ \hline\\[-1.5ex] 2 & 50,001 & 50,001 & 250,004 & 250,004 & 500,001 & 500,001 \\ 4 & 24,951 & 25,000 & 125,160 & 125,002 & 250,171 & 250,000 \\ 6 & 16,156 & 16,667 & 83,185 & 83,334 & 166,660 & 166,667 \\ 8 & 12,509 & 12,500 & 62,652 & 62,501 & 124,727 & 125,000 \\ 10 & 10,083 & 10,000 & 50,422 & 50,000 & 99,975 & 100,000 \\ 12 & 8,389 & 8,333 & 41,542 & 41,667 & 82,577 & 83,333 \\ 14 & 7,192 & 7,143 & 35,661 & 35,714 & 71,611 & 71,428 \\ 16 & 6,292 & 6,250 & 31,186 & 31,350 & 62,220 & 62,500 \\ 18 & 5,503 & 5,555 & 27,941 & 27,778 & 55,923 & 55,555 \\ 20 & 5,009 & 5,000 & 24,662 & 25,000 & 50,135 & 50,000 \\ 1000 & 117 & 100 & 533 & 500 & 954 & 1,000 \\ 2000 & 48 & 50 & 243 & 250 & 489 & 500 \\ \hline\\ \end{tabular} } \caption{Number of components of size $k$} \label{tab:component_size_dist} \end{table} Now, using $\fl{p/k}$ as an approximation of the number of components of size $k$ for any even $k<p$, we can get an approximation for ${\mathcal N}_{p,\text{even}}^p$. 
First, when $(p-1)/2 < k < p $, we have $\fl{p/k}=1$, and there are about $(p-1)/4$ values of such even $k$. In general, if $(p-1)/(n+1) < k \le (p-1)/n $, we have $\fl{p/k}=n$, and there are about $\frac{p-1}{2n(n+1)}$ values of such even $k$, which contributes to around $\frac{p-1}{2(n+1)}$ components of even size. Fixing a positive integer $n$, for $k>(p-1)/(n+1)$ we use the above estimate, while for $k\le (p-1)/(n+1)$ we use the estimate $(p-1)/k$, and so the total number of components of even size is around \begin{align*} \frac{p-1}{2}&\(1+\frac{1}{2} + \cdots + \frac{1}{(p-1)/(2(n+1))}\) \\ & \qquad \qquad + \frac{p-1}{2} \(\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n+1}\), \end{align*} which, together with the approximation of the harmonic series, is approximated by \begin{align*} \frac{p-1}{2} \(\log\frac{p-1}{2(n+1)}+\gamma\) + \frac{p-1}{2}\(-1+\log(n+1)+\gamma\) \qquad &\\ = \frac{p-1}{2}\( \log(p-1) +2\gamma -1 - \log 2 \)&, \end{align*} where $\gamma = 0.5772156649\dots$ is the Euler constant. So, we denote $$ \widetilde{\mathcal N}_{p,\text{even}}^p = \frac{p-1}{2}\( \log(p-1) +2\gamma -1 - \log 2 \), $$ which is an approximation of ${\mathcal N}_{p,\text{even}}^p$. Table~\ref{tab:components_small_size} shows the difference between the two values for several large primes. We overestimate the actual value by about 2\%. 
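The harmonic-series step itself is easy to check numerically. The short Python sketch below compares the heuristic floor sum $\sum_{k\ \mathrm{even},\,k<p} \fl{p/k}$ with the closed-form expression; it validates only this approximation step, not the actual component counts.

```python
import math

EULER_GAMMA = 0.5772156649015329  # the Euler constant gamma

def even_floor_sum(p):
    """The heuristic count: sum of floor(p/k) over even k < p."""
    return sum(p // k for k in range(2, p, 2))

def closed_form(p):
    """The closed-form approximation (p-1)/2 * (log(p-1) + 2*gamma - 1 - log 2)."""
    return (p - 1) / 2 * (math.log(p - 1) + 2 * EULER_GAMMA - 1 - math.log(2))
```

For $p = 100003$ the closed form evaluates to about $548{,}722$, the value listed in Table~\ref{tab:components_small_size}, and agrees with the floor sum to well within one percent.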
\begin{table}[H] {\small \begin{tabular}{r c c c c} \hline \\[-2.2ex] \multicolumn{1}{c}{$p$} & ${\mathcal N}_{p,\text{even}}^K$ & ${\mathcal N}_{p,\text{even}}^p$ & ${\mathcal N}_p$ & $\widetilde{\mathcal N}_{p,\text{even}}^p$ \\ \hline\\[-1.5ex] 100,003 & 521,337 & 538,640 & 638,643 &548,722\\ 200,003 & 1,113,083 & 1,147,694 & 1,347,697 &1,166,748 \\ 300,007 & 1,730,420 & 1,782,805 & 2,082,812 &1,810,962 \\ 400,009 & 2,364,734 & 2,434,894 & 2,834,903 &2,472,154\\ 500,009 & 3,011,626 & 3,098,914 & 3,598,923 &3,145,966\\ 600,011 & 3,667,637 & 3,772,277 & 4,372,288 &3,829,859\\ 700,001 & 4,333,622 & 4,455,913 & 5,155,914 &4,522,041\\ 800,011 & 5,005,995 & 5,145,194 & 5,945,205 &5,221,530\\ 900,001 & 5,685,731 & 5,842,337 & 6,742,338 &5,927,145\\ 1,000,003 & 6,369,257 & 6,543,317 & 7,543,320 &6,638,411\\ \hline\\ \end{tabular} } \caption{Estimates for the number of components with even size and $K = (p-1)/2$} \label{tab:components_small_size} \end{table} \section{Shape of trees in functional graphs} \label{sect:tree} Finally, in order to reveal more detailed features of functional graphs, we consider the trees attached to such graphs. In the functional graph ${\mathcal G}_a$ corresponding to $f_a$, each node in a cycle, except for $a$ (if $a$ lies in a cycle), is connected to a unique node (say $w$) which is not in the cycle. Naturally, we treat the node $w$ as the root of the binary tree attached to a cyclic point in the graph ${\mathcal G}_a$. Thus, we can say that each node in a cycle of ${\mathcal G}_a$, except for $a$, is associated with a binary tree -- in fact a full binary tree, unless $0$ is a node in the tree. For example, in Figure~\ref{pic:connected_graph}, there are $8$ full binary trees attached to the cyclic points.
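A possible way to extract these trees computationally is to group the non-cyclic nodes by the cyclic point at which their orbit first meets a cycle; the Python sketch below follows this convention (our assumption: the tree attached at a cyclic point $v$ consists of exactly those nodes).

```python
from collections import Counter

def tree_sizes(p, a):
    """Number of nodes in the tree hanging off each cyclic point of x -> x^2 + a mod p.

    Convention (an assumption matching the text): the tree attached at a cyclic
    point v consists of all non-cyclic nodes whose orbit first meets the cycle at v.
    """
    graph = [(x * x + a) % p for x in range(p)]
    cyc = set(range(p))              # cyclic points = stable image under iteration
    while True:
        nxt = {graph[x] for x in cyc}
        if nxt == cyc:
            break
        cyc = nxt
    sizes = Counter()
    for x in range(p):
        if x in cyc:
            continue
        y = x
        while y not in cyc:          # follow the orbit until it enters the cycle
            y = graph[y]
        sizes[y] += 1
    return sizes                     # cyclic points with no attached tree are absent
```

With this, $t_p(a,k)$ is the number of cyclic points $v$ with `sizes[v] == k`, and tree heights can be collected by a similar orbit walk.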
Let $t_p(a,k)$ be the number of such binary trees with $k$ nodes in ${\mathcal G}_a$, and let \[ T_{p}(k) = \sum_{a \in {\mathbb F}_p} t_p(a,k) \quad \text{and} \quad T_p = \sum_{k=1}^{p-1} T_p(k); \] for the analogues restricted to connected graphs, let \[ T^*_{p}(k) = \sum_{a \in {\mathcal I}_p} t_p(a,k) \quad \text{and} \quad T_p^* = \sum_{k=1}^{p-1} T_p^*(k). \] Note that $T_p$ is the total number of trees attached to all such functional graphs ${\mathcal G}_a$, and $T_p^*$ has the same meaning with the restriction to connected functional graphs. An interesting question is whether these trees behave similarly to random full binary trees. First we observe that there is a significant proportion of trees with just one node, as shown in Table~\ref{table:tree_numbers} for the general case and in Table~\ref{table:tree_numbers_connected} for connected graphs. This motivates us to pose the following conjecture, which seems reasonable because exactly half of the elements of ${\mathbb F}_p^*$ are non-squares. \begin{conj} We have $T_p(1)/T_p \sim 1/2$ as $p \to \infty$.
\end{conj} \begin{table}[H] {\small \begin{tabular}{r r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$T_p(1)$} & \multicolumn{1}{c}{$T_p$} & \multicolumn{1}{c}{\%} \\ \hline\\[-1.5ex] 50,111 & 7,090,084 & 14,091,820 & 50.31\% \\ 100,003 & 19,845,915 & 39,530,737 & 50.20\% \\ 200,003 & 56,210,936 & 112,088,213 & 50.15\% \\ 300,007 & 103,203,596 & 205,901,181 & 50.12\% \\ 400,009 & 158,746,944 & 317,089,081 & 50.06\% \\ 500,009 & 221,941,725 & 443,336,032 & 50.06\% \\ 1,000,003 & 627,460,216 & 1,253,326,817 & 50.06\% \\ \hline\\ \end{tabular} } \caption{Number of trees with one node} \label{table:tree_numbers} \end{table} \begin{table}[H] {\small \begin{tabular}{r r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$T_p^*(1)$} & \multicolumn{1}{c}{$T_p^*$} & \multicolumn{1}{c}{\%} \\ \hline\\[-1.5ex] 50,111 & 27,877 & 55,668 & 50.08\% \\ 100,003 & 52,923 & 105,612 & 50.11\% \\ 200,003 & 115,746 & 231,583 & 49.98\% \\ 300,007 & 161,975 & 323,410 & 50.08\% \\ 400,009 & 222,865 & 445,931 & 49.98\% \\ 500,009 & 298,060 & 595,142 & 50.08\% \\ 1,000,003 & 542,592 & 1,086,147 & 49.96\% \\ \hline\\ \end{tabular} } \caption{Number of trees with one node in connected graphs} \label{table:tree_numbers_connected} \end{table} Second, for large trees, we check the average height of the trees in the graphs. It has been shown in~\cite[Theorem~B]{FO} that the average height of full binary trees with $n$ internal nodes is \[ \overline{H}_n \sim 2\sqrt{\pi n} \qquad \text{as $n \rightarrow \infty$.} \] That is, the height of a random full binary tree with $n$ internal nodes grows like $2\sqrt{\pi n}$ as $n$ tends to infinity. In our situation, for each tree with $n$ internal nodes and height $H_n$, we compute the ratio $H_n/2\sqrt{\pi n}$ and find the average of this ratio for all graphs modulo $p$.
(Again, a tree is not always guaranteed to be a full binary tree, since $0$ might be a node in the tree, but the impact of this is negligible, and in any case, we collect trees of both sizes $2n$ and $2n+1$.) In Table~\ref{table:tree_heights}, we compare the ratio $\overline{H}_n/2\sqrt{\pi n}$ (see~\cite[Table~II]{FO}) with the average ratio $H_n/2\sqrt{\pi n}$ of the trees in our graphs. One can see that they are close. \begin{table}[H] {\small \begin{tabular}{r r r r r} \hline \\[-2.2ex] \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$\overline{H}_n/2\sqrt{\pi n}$} & \multicolumn{3}{c}{average of $H_n/2\sqrt{\pi n}$} \\ & & $p = 50111$ & $p=100003$ & $p=200003$\\ \hline\\[-1.5ex] 50 & 0.797 & 0.837 & 0.837 & 0.837 \\ 100 & 0.846 & 0.875 & 0.873 & 0.872\\ 500 & 0.920 & 0.952 & 0.925 & 0.941 \\ 1,000 & 0.940 & 0.925 & 0.948 & 0.942 \\ 2,000 & 0.956 & 0.981 & 0.944 & 0.960\\ 5,000 & 0.970 & 0.927 & 0.916 & 0.977 \\ \hline\\ \end{tabular} } \caption{Average height of trees} \label{table:tree_heights} \end{table} \section{Future Directions} One of the most important directions in this area is developing an adequate random model predicting the statistical characteristics of the functional graphs of polynomials; see~\cite{MaPa} for some initial, yet promising results in this direction. Based on our computations, we pose several conjectures about the functional graphs of quadratic polynomials. Investigating whether they are true or not may help to characterise functional graphs generated by quadratic polynomials and understand the similarities and differences between these functional graphs and random mappings. Another interesting problem is to count the number of functional graphs modulo $p$ generated by quadratic polynomials up to isomorphism; see~\cite[Theorem~2.8]{KLMMSS} for a lower bound.
In~\cite[Conjecture~C]{GKRS} the authors conjectured that for any odd prime $p\ne 17$, there are $p$ such functional graphs up to isomorphism, and they confirmed this for all odd primes up to $1009$ not equal to $17$. Our computations confirm this conjecture for all odd primes up to $100000$ not equal to $17$. \section*{Acknowledgements} The authors are grateful to Patrick Morton and Michael Zieve for several useful suggestions and literature references, especially concerning dynatomic polynomials. For this research, B.M. was partially supported by the Australian Research Council Grants DP140100118 and DP170102794, M.S. by the Macquarie University Research Fellowship, and I.S. by the Australian Research Council Grants~DP130100237 and DP140100118.
\section*{1. Introduction.} Colombeau algebras of generalized functions~\cite{c1, c2, MObook} are differential algebras that contain the vector space of Schwartz distributions as a linear subspace, and the space of smooth functions as a faithful subalgebra. Initially discovered in the context of infinite-dimensional calculus in locally convex spaces, such algebras have turned out to be a powerful tool in the study of singular problems that involve differentiation combined with non-linear operations. In particular, Colombeau algebras quickly found (and continue to find) applications in the field of non-linear partial differential equations (e.g., \cite{MObook,Biag,NP}), where the application of classical distributional methods is limited by the impossibility of consistently defining an intrinsic product of distributions~\cite{Schw}. From the mid-1990s, it also became apparent that Colombeau algebras could be a significant tool with which to study singular problems in various geometrical settings. In particular, early work centered on applications to problems in General Relativity (see \cite{SVsurvey} for a recent survey), and Lie group analysis of partial differential equations (e.g., \cite{symm,book}). At the same time, structural properties of Colombeau algebras came to the fore in the work of several research groups. In particular, a thorough study of algebraic properties was carried out (e.g., \cite{A, V1}) and topological and functional analytic structures on Colombeau spaces were developed and refined to a high degree (e.g., \cite{S0, S, G1, G2}). The aim of this contribution is to provide an overview of some of these developments that show significant potential both for the intrinsic understanding of algebras of generalized functions and for applications in geometry, differential equations, and mathematical physics. \section*{2.
Non-smooth differential geometry.} Throughout this paper we will employ the so-called special (or simplified) version of Colombeau's algebras. To fix notations we briefly recall the definition of the Colombeau algebra $\ensuremath{\mathcal{G}}(M)$ on a manifold $M$ (see, e.g., \cite{book}). Let ${\mathcal P}(M)$ denote the space of linear differential operators on $M$. $\ensuremath{\mathcal{G}}(M)$ is defined as the quotient space $\ensuremath{{\mathcal E}_m} (M)/\ensuremath{{\mathcal N}} (M)$, where the spaces of moderate resp.\ negligible nets are defined by \begin{eqnarray*} \ensuremath{{\mathcal E}_m} (M) &=& \{(u_\varepsilon)_\varepsilon\in {\mathcal C}^\infty(M)^{(0,1]} \ :\ \forall K \subset\subset M\ \forall P\in {\mathcal P}(M)\ \exists\, l:\\ && \quad \sup_{x\in K} |Pu_\varepsilon(x)| = O(\varepsilon^{-l})\},\\ \ensuremath{{\mathcal N}} (M) &=& \{(u_\varepsilon)_\varepsilon\in \ensuremath{{\mathcal E}_m} (M) \ :\ \forall K\subset\subset M\ \forall m:\ \sup_{x\in K} |u_\varepsilon(x)| = O(\varepsilon^{m})\}. \end{eqnarray*} Here and in what follows we will assume that all representatives of generalized functions in fact depend smoothly on the regularization parameter $\varepsilon$. A similar definition can be given for the space $\Gamma_\ensuremath{\mathcal{G}}(M,E)$ of generalized sections of a vector bundle $E\to M$, and we have the fundamental ${\mathcal C}^\infty(M)$-module isomorphism $$ \Gamma_{\ensuremath{\mathcal{G}}}(M,E) \cong \ensuremath{\mathcal{G}}(M) \otimes_{{\mathcal C}^\infty(M)} \Gamma(M,E), $$ i.e., generalized sections may be viewed globally as sections with generalized coefficient functions. Based on regularization operations via convolution in charts (cf.\ the de Rham regularizations in \cite{deR}) it can be shown that there exist injective sheaf morphisms $$\iota: \Gamma(\_\,,E) \hookrightarrow {\mathcal D}'(\_\,,E) \hookrightarrow \Gamma_{\ensuremath{\mathcal{G}}}(\_\,,E).
$$ An important feature distinguishing Colombeau generalized functions from Schwartz distributions is the availability of a point value characterization: we call a net $(x_\varepsilon)_\varepsilon$ of points in $M$ compactly supported if $x_\varepsilon$ remains in some compact set for $\varepsilon$ small. Two compactly supported nets $(x_\varepsilon)_\varepsilon$, $(y_\varepsilon)_\varepsilon$ are called equivalent, $(x_\varepsilon)_\varepsilon \sim (y_\varepsilon)_\varepsilon$, if $d_h(x_\varepsilon,y_\varepsilon)=O(\varepsilon^m)$ $\forall m$, where $d_h$ is the distance function induced by any Riemannian metric $h$ on $M$. The quotient $\tilde M_c$ of the set of compactly supported nets in $M^{(0,1]}$ by this equivalence relation is called the space of compactly supported generalized points. Then we have (cf.\ \cite{book}, Th.\ 3.2.8): \noindent{\bf Theorem 1.}{\em Let $u\in \ensuremath{\mathcal{G}}(M)$. Then $u=0$ if and only if $u(\tilde x) = 0$ for all $\tilde x \in \tilde M_c$.} As a first indication of the algebraic properties of $\ensuremath{\mathcal{G}}$, let us have a look at the question of (multiplicative) invertibility in both $\ensuremath{\mathcal{G}}(M)$ and the ring of constants in $\ensuremath{\mathcal{G}}(M)$ (or space of generalized numbers), $\tilde \mathbb K$.\medskip\\ \noindent{\bf Lemma 1. }{\em Let $u\in \ensuremath{\mathcal{G}}(M)$. The following are equivalent: \begin{itemize} \item[(i)] $u$ is invertible. \item[(ii)] $u(\tilde x)$ is invertible in $\tilde \mathbb K$ for all $\tilde x\in \tilde M_c$. \item[(iii)] $u$ is {\em strictly nonzero}, i.e., $\forall K\subset\subset M$ $\exists q$ s.t.\ $\inf_{p\in K}|u_\varepsilon(p)| > \varepsilon^q$ for $\varepsilon$ small. \end{itemize} } Similarly, for generalized numbers we have:\medskip\\ \noindent{\bf Lemma 2. }{\em Let $r\in \tilde \mathbb K$. The following are equivalent: \begin{itemize} \item[(i)] $r$ is invertible. \item[(ii)] $r$ is not a zero divisor. \item[(iii)] $r$ is strictly nonzero.
\item[(iv)] For every representative $(r_\varepsilon)_\varepsilon$ of $r$ there exists some $\varepsilon_0>0$ such that $r_\varepsilon \not=0$ for all $\varepsilon<\varepsilon_0$. \end{itemize} } While the other conditions in Lemmas 1 and 2 are well-known (cf.\ \cite{book}), (iv) from Lemma 2 is a rather recent and very convenient observation from \cite{M}. For applications in general relativity, a notion of generalized (pseudo-)Rie\-mann\-ian metric is of central importance. Denoting $\Gamma_\ensuremath{\mathcal{G}}(M,T^0_2M)$ by $\ensuremath{\mathcal{G}}^0_2(M)$ we have the following characterization (\cite{gprg, M}): \medskip\\ \noindent{\bf Theorem 2. }{\em Let $g\in \ensuremath{\mathcal{G}}^0_2(M)$. The following are equivalent: \begin{itemize} \item[(i)] $g: {\ensuremath{\mathcal{G}}}^1_0(M)\times {\ensuremath{\mathcal{G}}}^1_0(M) \rightarrow \ensuremath{\mathcal{G}}(M)$ is symmetric and $\det(g)$ is invertible in $\ensuremath{\mathcal{G}}(M)$. \item[(ii)] For each chart $(\psi,V)$, $\forall \tilde x \in (\psi(V))^\sim_c$: $\psi_*g(\tilde x)$: $\tilde \mathbb K^n \times \tilde \mathbb K^n \to \tilde \mathbb K$ is symmetric and nondegenerate. \item[(iii)] $\det(g)$ is invertible in $\ensuremath{\mathcal{G}}(M)$ and $\forall \overline{V}\subset\subset M$ there exists a representative $(g_\varepsilon)_\varepsilon$, such that each $g_\varepsilon|_{V}$ is a smooth pseudo-Riemannian metric. \end{itemize} Moreover, if $g$ satisfies these equivalent conditions then $g$ has index $j$ if and only if for each chart $\psi$ and each $\tilde x$, $\psi_*g(\tilde x)$ is a symmetric bilinear form on $\tilde \mathbb R^n$ with index $j$. } As in the smooth setting, the following fundamental lemma shows that each generalized pseudo-Riemannian metric induces a unique Levi-Civita connection (\cite{gprg}):\medskip\\ \noindent{\bf Theorem 3. 
}{\em For any generalized pseudo-Riemannian metric $g$ on $M$ there exists a unique connection $\hat \nabla: {\ensuremath{\mathcal{G}}}^1_0(M)\times{\ensuremath{\mathcal{G}}}^1_0(M)\to{\ensuremath{\mathcal{G}}}^1_0(M)$ such that: \begin{itemize} \item[\hspace*{1cm}($\nabla1$)] $\hat \nabla_X Y$ is ${\tilde \mathbb R}$-linear in $Y$. \item[($\nabla2$)] $\hat \nabla_X Y$ is $\ensuremath{\mathcal{G}}(M)$-linear in $X$. \item[($\nabla3$)] $\hat \nabla_X(uY)=u\,\hat \nabla_X Y+X(u)Y$ for all $u\in\ensuremath{\mathcal{G}}(M)$. \item[($\nabla4$)] $[X,Y]=\hat \nabla_X Y-\hat \nabla_YX$. \item[($\nabla5$)] $X\langle Y,Z\rangle=\langle \hat \nabla_X Y,Z\rangle+\langle Y,\hat \nabla_X Z\rangle$. \end{itemize} } With these tools at hand, one can proceed to analyzing curvature quantities and geodesics for singular metrics. We refer to \cite{SVsurvey} for a recent overview of applications in general relativity. More generally, generalized connections in principal fiber bundles have been studied in \cite{connections}. Notions like curvature, holonomy and characteristic classes can then be modelled in a non-smooth setting. First applications to singular Yang--Mills equations can also be found in \cite{connections}. A further aspect of Colombeau algebras that allows one to go beyond the distributional setting is the notion of generalized functions taking values in differentiable manifolds (\cite{gfvm,gfvm2}). The basic idea is to consider, for given manifolds $M$, $N$, a quotient construction on subspaces of $\mathcal{E}[M,N]:={\mathcal C}^\infty(M,N)^{(0,1]}$.
The corresponding growth conditions can either be modelled by asymptotic estimates in charts (\cite{gfvm}) or, more elegantly, by using `referee functions' for testing for moderateness resp.\ negligibility, as follows: we say that a net $(u_\varepsilon)_\varepsilon \in \mathcal{E}[M,N]$ (depending smoothly on $\varepsilon$) is c-bounded if for each $K\subset\subset M$ there exists some $K'\subset\subset N$ and some $\varepsilon_0$ such that $u_\varepsilon(K)\subseteq K'$ for all $\varepsilon <\varepsilon_0$. A c-bounded net $(u_\varepsilon)_\varepsilon$ is called moderate if $(f\circ u_\varepsilon)_\varepsilon\in\ensuremath{{\mathcal E}_m} (M)\ \forall\ f\in{\mathcal C}^\infty(N)$. The space of moderate nets is denoted by $\ensuremath{{\mathcal E}_m} [M,N]$. Two elements $(u_\varepsilon)_\varepsilon$, $(v_\varepsilon)_\varepsilon$ of $\ensuremath{{\mathcal E}_m} [M,N]$ are called equivalent, $(u_\varepsilon)_\varepsilon\sim (v_\varepsilon)_\varepsilon$, if $(f\circ u_\varepsilon-f\circ v_\varepsilon)_\varepsilon\in\ensuremath{{\mathcal N}} (M)\ \forall\ f\in{\mathcal C}^\infty(N)$. The space of Colombeau generalized functions on $M$ taking values in $N$ is then given by $\ensuremath{\mathcal{G}}[M,N]:=\ensuremath{{\mathcal E}_m} [M,N]/\sim$. Manifold-valued generalized functions are a necessary prerequisite for addressing problems like determining geodesics of singular metrics or flows of generalized vector fields. Based on $\ensuremath{\mathcal{G}}[M,N]$, a functorial theory of manifold-valued generalized functions and generalized vector bundle homomorphisms has been developed in \cite{gfvm,gfvm2}. On the structural level, a basic question is whether $\ensuremath{\mathcal{G}}[\_,N]$ forms a sheaf. Due to the lack of algebraic structure on the target space $N$, the usual tools like partitions of unity are not directly available to answer this question. Nevertheless, we have:\medskip\\ \noindent{\bf Theorem 4. }{\em $\ensuremath{\mathcal{G}}[\_,N]$ is a sheaf of sets.
} \noindent{\em Sketch of proof.} The nontrivial part is to show that any coherent family of locally defined generalized maps is given as a family of restrictions of one globally defined generalized map. The strategy is to use a Whitney embedding of $N$ into some $\mathbb R^n$ and then apply a gluing procedure in $\mathbb R^n$ based on partitions of unity. In order to obtain a global representative taking values in $N$, the retraction map of a tubular neighborhood of $N$ in $\mathbb R^n$ is employed. For details, see \cite{sheaf}. $\Box$ By a similar method, we obtain the following result on the inclusion of continuous maps in $\ensuremath{\mathcal{G}}[M,N]$ ($\sigma$ denotes the identical embedding $f\mapsto (f)_\varepsilon$ of ${\mathcal C}^\infty(M,N)$ in $\ensuremath{\mathcal{G}}[M,N]$), cf.\ \cite{sheaf}:\medskip\\ \noindent{\bf Theorem 5. }{\em There exists an embedding $\iota: {\mathcal C}(M,N) \hookrightarrow \ensuremath{\mathcal{G}}[M,N]$ with the following properties: \begin{itemize} \item[(i)] $\iota$ is a sheaf morphism. \item[(ii)] $\iota|_{{\mathcal C}^\infty(M,N)} = \sigma$. \item[(iii)] $\iota(u)_\varepsilon$ converges to $u$ uniformly on compact sets. \end{itemize} } As an added benefit, the construction of $\ensuremath{\mathcal{G}}[M,N]$ provides a blueprint for defining a space of manifold-valued distributions, as follows: set ${\mathcal A}[M,N] = \{u\in \ensuremath{\mathcal{G}}[M,N] \mid \forall f\in {\mathcal C}^\infty(N), \exists \lim_{\varepsilon\to 0}f\circ u_\varepsilon \in {\mathcal D}'\}$ and let $u\approx_{\mathcal M} v$ if for all $f\in {\mathcal C}^\infty(N)$, $f\circ u_\varepsilon - f\circ v_\varepsilon \to 0$ in ${\mathcal D}'$. Then set ${\mathcal D}'(M,N):= {\mathcal A}[M,N]/\approx_{\mathcal M}$. For $M$, $N$ Euclidean spaces, ${\mathcal D}'(M,N)$ singles out a subspace of bounded distributions. Further properties (e.g., the relationship to Young measures) are analyzed in \cite{sheaf}. \section*{3. 
Some algebraic aspects of Colombeau algebras on manifolds.} Based on the construction in the previous sections, here we will give a few examples indicating the increasingly important role that an understanding of the algebraic structure of Colombeau-type spaces plays in a geometrical context. To begin with, let us consider the structure of the space of algebra isomorphisms from $\ensuremath{\mathcal{G}}(M)$ to $\ensuremath{\mathcal{G}}(N)$. In the smooth setting, it has been known for a long time that for any algebra isomorphism $\phi: {\mathcal C}^\infty(M) \to {\mathcal C}^\infty(N)$ there is a unique diffeomorphism $f:N\to M$ of the underlying manifolds such that $\phi$ is given as the pullback map under $f$: $\phi = u \mapsto u\circ f$. The analogous problem for isomorphisms of Colombeau algebras has only recently been solved by H.\ Vernaeve in \cite{V2}. The result is based on the solution of `Milnor's exercise' in the Colombeau setting, i.e.\ the characterization of multiplicative linear functionals on $\ensuremath{\mathcal{G}}$:\medskip\\ \noindent{\bf Theorem 6. }{\em Every multiplicative linear functional on $\ensuremath{\mathcal{G}}(M)$ is of the form $$ e\delta_{\tilde x}: u \mapsto e u(\tilde x) $$ for $\tilde x$ a generalized point and $e\in \tilde \mathbb K$ idempotent. } Using this result, we obtain\medskip\\ \noindent{\bf Theorem 7. }{\em Let $\phi: \ensuremath{\mathcal{G}}(M)\to \ensuremath{\mathcal{G}}(N)$ be an algebra-isomorphism $($with $\phi(1)=1)$. Then $\phi = f^*$ for some $f\in \ensuremath{\mathcal{G}}[N,M]$ such that $f^{-1}\in \ensuremath{\mathcal{G}}[M,N]$. Also, $\phi^{-1} = f_*$. } Next, let us investigate generalized de Rham cohomology. By $\Omega^p_\ensuremath{\mathcal{G}}(M) = \Gamma_\ensuremath{\mathcal{G}}(M,$ $\Lambda^p(M))$ we denote the space of generalized $p$-forms on $M$. 
Also, as in the smooth setting we introduce the cohomology spaces by \begin{eqnarray*} Z^p_\ensuremath{\mathcal{G}}(M) &:=& \{\omega\in \Omega^p_\ensuremath{\mathcal{G}}(M) \mid d\omega = 0\} \\ B^p_\ensuremath{\mathcal{G}}(M) &:=& \{\omega\in \Omega^p_\ensuremath{\mathcal{G}}(M) \mid \exists \tau\in \Omega^{p-1}_\ensuremath{\mathcal{G}}: \omega=d\tau\}\\ H^p_\ensuremath{\mathcal{G}}(M) &:=& Z^p_\ensuremath{\mathcal{G}}(M)/B^p_\ensuremath{\mathcal{G}}(M) \end{eqnarray*} The relationship between generalized and smooth de Rham cohomology is as follows:\medskip\\ \noindent{\bf Theorem 8. }{\em For any $p\ge 0$ we have the following isomorphism of real vector spaces: $$ H^p_\ensuremath{\mathcal{G}}(M) \cong \tilde \mathbb R \otimes_\mathbb R H^p(M) $$ } \noindent{\em Sketch of proof.} Both $$ 0 \longrightarrow \ker(d) \stackrel{d}{\longrightarrow} \Omega_\ensuremath{\mathcal{G}}^0(M) \stackrel{d}{\longrightarrow} \Omega_\ensuremath{\mathcal{G}}^1(M) \stackrel{d}{\longrightarrow}\dots $$ and $$ 0 \longrightarrow \ker(d) \stackrel{id\otimes d}{\longrightarrow} \tilde \mathbb R \otimes_\mathbb R C^\infty(M,\mathbb R) \stackrel{id\otimes d}{\longrightarrow} \tilde \mathbb R \otimes_\mathbb R \Omega^1(M) \stackrel{id\otimes d}{\longrightarrow}\dots $$ are fine resolutions of the sheaf of locally constant Colombeau generalized functions. The result therefore follows from the abstract de Rham theorem. For details, see \cite{connections}. $\Box$ This means that the structural difference between generalized and smooth de Rham cohomology is encoded precisely in the algebraic structure of the ring of generalized numbers. Finally, let us return to the algebraic foundations of pseudo-Riemannian geometry in the Colombeau setting. As can be seen from Th.\ 2, the study of bilinear forms on $\tilde \mathbb R^n$ is of central importance here. We have (\cite{M}):\medskip\\ \noindent{\bf Theorem 9. }{\em Let $v\in \tilde\mathbb R^n$.
The following are equivalent:\\ (i) For any positive definite bilinear form $h$, $h(v,v)>0$.\\ (ii) $v$ is free (i.e., $\lambda v = 0 \Rightarrow \lambda=0$).\\ (iii) $v$ can be extended to a basis of $\tilde\mathbb R^n$.\\ (iv) For each representative $(v_\varepsilon)_\varepsilon$ there exists some $\varepsilon_0$ such that for all $\varepsilon<\varepsilon_0$, $v_\varepsilon\not=0$. } Based on this result, causality notions (time-like, space-like, and null vectors) can be introduced and analyzed in the generalized setting. Applications include energy methods for solving wave equations on singular space-times (cf.\ \cite{GMS}). \section*{4. Algebraic properties of $\tilde \mathbb K$.} In this section we give a brief overview of known results on the algebraic structure of the ring of generalized numbers. For details and proofs we refer to the original sources \cite{A, AJOS, V1}. In what follows, topological properties always refer to the sharp topology on $\tilde \mathbb K$ (cf.\ the following section). \begin{itemize} \item $\tilde \mathbb K$ is a reduced ring, i.e., there are no nontrivial nilpotent elements. \item Elements of $\tilde \mathbb K$ are either invertible or zero-divisors (cf.\ Lemma 2). \item $e\in \tilde \mathbb K$ is idempotent ($e^2 = e$) iff $e=e_S$, the characteristic function of some $S\subseteq (0,1]$. \item $\tilde \mathbb K$ possesses uncountably many maximal ideals. \item $\tilde \mathbb K$ is a complete topological ring. \item The closure of any prime ideal is maximal. Conversely, every maximal ideal is closed. \item Let $I$ be an ideal in $\tilde \mathbb K$. Then the closure of $I$ is the intersection of all maximal ideals containing $I$. \item $\tilde \mathbb K$ is {\em not:} \begin{itemize} \item Artinian \item Noetherian \item von Neumann regular \end{itemize} \item Every ideal $I$ in $\tilde \mathbb K$ is convex ($x\in I$, $|y|\le |x|$ $\Rightarrow$ $y\in I$).
\item An ideal $I$ is prime iff it is pseudoprime and radical, i.e.: \begin{itemize} \item $\forall S\subset (0,1]$: $e_S\in I$ or $e_{S^c}\in I$, and \item $\forall x\in I$: $\sqrt{|x|}\in I$. \end{itemize} \end{itemize} We note that many of the corresponding properties for $\ensuremath{\mathcal{G}}$ instead of $\tilde\mathbb K$ are the subject of ongoing research. We conclude this section with the following interesting connection to the nonstandard space of asymptotic numbers (cf.\ \cite{OT}), established in \cite{V1}, Th.\ 7.2:\medskip\\ \noindent{\bf Theorem 10. }{\em Let $I$ be a maximal ideal in $\tilde \mathbb K$. Let $\mathcal{U}:=\{S\subseteq (0,1]\mid e_{S^c}\in I\}$. Let ${}^*\mathbb K$ be the nonstandard field constructed by the ultrafilter $\mathcal U$ and let $\rho$ be the infinitesimal with representative $(\varepsilon)_\varepsilon$. Then ${}^\rho\mathbb K$ is canonically isomorphic to $\tilde\mathbb K/I$. } \section*{5. Topology and functional analysis.} Topologies on spaces of Colombeau generalized functions and generalized numbers were originally introduced by D.\ Scarpalezos under the name of {\em sharp topologies} in 1993 (and published only later in \cite{S0,S}). After the field had lain dormant for some years (during which the main focus of research was on applications to PDEs), there has recently been a veritable surge of activity. In particular, the fundamental work by C.\ Garetto \cite{G1,G2} has led to the development of a full-scale locally convex theory for algebras of generalized functions. In this section we outline some of the main features of this theory.
For any given locally convex vector space $E$ whose topology is induced by the family of seminorms $(p_i)_{i\in I}$, we set \begin{eqnarray*} \mathcal{M}_E &:=& \{(u_\varepsilon)_\varepsilon \in E^{(0,1]} \mid \forall i\, \exists N\,: p_i(u_\varepsilon)= O(\varepsilon^{-N})\}\\ \mathcal{N}_E &:=& \{(u_\varepsilon)_\varepsilon \in E^{(0,1]} \mid \forall i\, \forall q\,: p_i(u_\varepsilon)= O(\varepsilon^{q})\}\\ \ensuremath{\mathcal{G}}_E &:=& \mathcal{M}_E/\mathcal{N}_E \end{eqnarray*} Then $\ensuremath{\mathcal{G}}_E$ is a $\tilde \mathbb C$-module. The special Colombeau algebra $\ensuremath{\mathcal{G}}(\Omega)$ is obtained as the special case $E={\mathcal C}^\infty(\Omega)$ of this construction (cf.\ \cite{G1,DHPV}). On $\ensuremath{\mathcal{G}}_E$ we introduce valuations given by $$ v_{p_i}(u):= \sup\{b\in \mathbb R \mid p_i(u_\varepsilon)=O(\varepsilon^b)\} $$ (here $(u_\varepsilon)_\varepsilon$ is any representative of $u\in \ensuremath{\mathcal{G}}_E$). The valuations, in turn, induce ultra-pseudo-seminorms (ups) via $$ \mathcal{P}_i := e^{-v_{p_i}}. $$ This family of ups defines the sharp topology on $\ensuremath{\mathcal{G}}_E$. As an important special case we may take $E = \mathbb C$, in which case $\ensuremath{\mathcal{G}}_E = \tilde \mathbb C$. Here we only have one seminorm, $p(x) = |x|$ which induces a valuation $v$ and a corresponding ups denoted by $|\,\,|_e$. More generally we may introduce suitable notions for directly generalizing locally convex topologies to the $\tilde \mathbb C$-module setting. Recall that for $V$ a vector space and $X\subseteq V$, $X$ is called {\em absorbent} in $V$ if $\forall u\in V$ $\exists \lambda_0$ $\forall \lambda\ge \lambda_0$: $u\in \lambda X$. Let now $\ensuremath{\mathcal{G}}$ be a $\tilde \mathbb C$-module and let $A\subseteq \ensuremath{\mathcal{G}}$. 
If we let $\lambda_0$ correspond to the infinitesimal $[(\varepsilon^a)_\varepsilon]$, then $\lambda_0 \cong [(\varepsilon^a)_\varepsilon] \le [(\varepsilon^b)_\varepsilon]\cong \lambda$ iff $b\le a$, so we are led to defining: $A$ is called $\tilde \mathbb C$-{\em absorbent} if $\forall u\in \ensuremath{\mathcal{G}}$ $\exists a\in \mathbb R$ $\forall b\le a$: $u\in [(\varepsilon^b)_\varepsilon]A$. Similarly, we call $A$ $\tilde \mathbb C$-{\em balanced} if $\forall \lambda\in \tilde \mathbb C$ with $|\lambda|_e\le 1$: $\lambda A \subseteq A$. To introduce a suitable notion of convexity, recall that a subset $X$ of a vector space $V$ is a convex cone in $V$ if $X+X\subseteq X$ and $\forall \lambda\in (0,1]$: $\lambda X\subseteq X$. Thus we call a subset $A$ of a $\tilde \mathbb C$-module $\ensuremath{\mathcal{G}}$ $\tilde \mathbb C$-{\em convex} if $A+A\subseteq A$ and $\forall b\ge 0$: $[(\varepsilon^b)_\varepsilon]A\subseteq A$. Finally, we define a locally convex topological $\tilde \mathbb C$-module to be a topological $\tilde \mathbb C$-module (which means that $+$ and $\lambda \cdot$ are continuous) with a base of $\tilde \mathbb C$-convex neighborhoods of $0$. This provides the starting point for a by now highly developed theory of locally convex $\tilde \mathbb C$-modules which to a large extent parallels the theory of locally convex vector spaces. Some of the main features of the theory are: \begin{itemize} \item The ups take over the role of seminorms. \item Completeness, metrizability, projective and inductive limits have been studied. \item There is a theory of barrelled and bornological $\tilde \mathbb C$-modules. \item Examples: $\ensuremath{\mathcal{G}}_c(\Omega)$ (corresponding to $\mathcal{D}(\Omega)$) is a strict inductive limit. $\ensuremath{\mathcal{G}}(\Omega)$ (corresponding to ${\mathcal C}^\infty(\Omega)$) is a Fr\'echet $\tilde \mathbb C$-module.
The standard spaces $\ensuremath{\mathcal{G}}_\tau(\Omega)$, $\ensuremath{\mathcal{G}}_{\mathcal{S}}(\Omega)$, $\ensuremath{\mathcal{G}}^\infty(\Omega)$, etc.\ can all be treated within the theory. \item Duality theory, study of $$ \mathcal{L}(\ensuremath{\mathcal{G}},\tilde \mathbb C):= \{T:\ensuremath{\mathcal{G}}\to \tilde \mathbb C \mid T\ \tilde\mathbb C\mathrm{-linear\ and\ continuous}\} $$ An example is the generalized delta distribution (point evaluation at $\tilde x\in \tilde \Omega$): $\delta_{\tilde x} = u\mapsto u(\tilde x) \in \mathcal{L}(\ensuremath{\mathcal{G}}_c(\Omega),\tilde \mathbb C)$. \item Based on this, kernels of pseudodifferential operators can be constructed as elements of $\mathcal{L}(\ensuremath{\mathcal{G}}_c(\Omega\times\Omega),\tilde \mathbb C)$ (cf.\ also \cite{D}). \item Microlocal analysis in the dual of Colombeau algebras, see \cite{G3}. \item A Hahn-Banach theorem is not attainable in general due to algebraic obstructions (\cite{V1}). \item Several open mapping and closed graph theorems and applications to $\ensuremath{\mathcal{G}}^\infty$-hypo\-ellip\-ticity are given in \cite{G4}. \end{itemize} \section*{6. Conclusions and Outlook.} As can be seen from the above summary of results, Colombeau theory is currently undergoing a profound and far-reaching conceptual restructuring. Several branches of research that so far had been rather disconnected have seen fruitful and promising interactions. As a first example we have seen the strong links between global analysis and algebraic properties in Section 3. These will give rise to a new {\em algebraic} approach to non-smooth differential geometry. Moreover, the algebraic causality structures also mentioned in Section 3 are currently being pursued as a tool for generalizing the Hawking and Penrose singularity theorems of general relativity to space-times of low differentiability. 
Interactions between algebra and PDE theory include topics like a refined study of hypoellipticity properties (which will require at least the rudiments of real algebraic geometry in the generalized setting). Moreover, as was indicated in Section 5, there are by now strong ties between functional analytic methods and the theory of pseudodifferential and Fourier integral operators. Similarly, there are close connections between such methods and variational problems of low regularity. There are already examples of abstract (functional analytic) existence results for concrete analytical problems in PDE theory in the Colombeau framework, a direction of research which without doubt will gain importance in the near future. The hope here is to provide a toolkit (similar to the one available in classical analysis) of topological and algebraic methods for solving problems of non-smooth analysis and geometry.
\section{Introduction} Social networks shape the way people think. Individuals' private opinions can change as a result of social influence and a well-placed minority view can become what most people come to believe \citep{Stewart2019}. There is also a natural tendency for people to connect to individuals similar to them, the so-called homophily (see, e.g. \cite{mcpherson2001birds}), which adds to the potential for a social network to create information bubbles and is amplified even further in modern social media networks (\cite{lee2019homophily}). The current vaccination debate has brought to the fore the dramatic effects that misperception can have on people's lives \citep{covid-nature} and made it clear how important it is to design social networks where participants receive the most unbiased information possible. When individuals use their social network as a source of information, it can happen that minority groups are more ``visible'' as a result of being better placed, which makes them overrepresented in many friendship groups. Sometimes these minorities can be so well placed that many or even most individuals ``see'' them as majorities, a phenomenon called {\em majority illusion}. Majority illusion was originally introduced by \citet{lerman2016majority}, who studied the existence of social networks in which most agents belong to a certain binary type, but most of their peers belong to a different one. Thus, they acquire the wrong perception, i.e., the illusion, that the majority type is different from the actual one. Figure \ref{fig:deadlock} gives an example of this.
\begin{figure}[H] \centering \scalebox{0.8}{ \begin{tikzpicture} [->,shorten >=1pt,auto,node distance=1.2cm, semithick] \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (A) {}; \node[shape=circle,draw=myred, right of=A, pattern=crosshatch, pattern color=myred] (B) {}; \node[shape=circle,draw=myred, below of=A, pattern=crosshatch, pattern color=myred] (C) {}; \node[shape=circle,draw=myred, below of=B, pattern=crosshatch, pattern color=myred] (D) {}; \draw [thick,-] (B) to (A) ; \draw [thick,-] (C) to (A) ; \draw [thick,-] (D) to (A) ; \draw [thick,-] (C) to (B) ; \draw [thick,-] (C) to (D) ; \draw [thick,-] (B) to (D) ; \node[shape=circle,draw=myblue, left of=C, fill=myblue] (E) {}; \node[shape=circle,draw=myblue, above of=A, fill=myblue] (E') {}; \node[shape=circle,draw=myblue, above of=B, fill=myblue] (F) {}; \node[shape=circle,draw=myblue, above of=B, right of=B, fill=myblue] (G) {}; \node[shape=circle,draw=black, right of=D, fill=myblue] (H) {}; \draw [thick,-] (E') to (F) ; \draw [thick,-] (F) to (A) ; \draw [thick,-] (E') to (B) ; \draw [thick,-] (D) to (H) ; \draw [thick,-] (E) to (C) ; \draw [thick,-] (E') to (A) ; \draw [thick,-] (F) to (B) ; \draw [thick,-] (G) to (B) ; \end{tikzpicture}} \caption{An instance of majority illusion. The well-placed red minority is seen as a majority by everyone.}\label{fig:deadlock} \end{figure} Majority illusion has important consequences when paired with opinion formation. If, for example, individuals are influenced by the majority of their friends to change their minds, i.e., they abide by the well-known threshold model (\cite{granovetter1978threshold}), then majority illusion means that the overrepresented minorities become stable majorities. As such it is important to predict the occurrence of majority illusions in a network and, crucially, to determine how a given network can be transformed so that this undesirable phenomenon is eliminated. Some analysis of majority illusion is already present in the literature.
\citet{lerman2016majority}, for example, studied network features that correlate with having many individuals under illusion. In particular, the study demonstrated how disassortative networks, i.e. those in which highly connected agents tend to link with lowly connected ones, increase the chances of majority illusion. However, the computational questions of checking whether a network admits majority illusion and, crucially, how this can be corrected, are still unanswered. Network transformation has found important applications in the context of election manipulation (see, e.g., \cite{castiglioni2021election}), influence maximisation \citep{zhou21maximizing}, anonymisation (see, e.g., \cite{kapron2011social}) and $k$-core maximization (see, e.g., \cite{chitnis}, \cite{zhou2019k}). Applying optimal network transformation techniques for illusion elimination is therefore a natural and important challenge. \paragraph{Our contribution.} In this paper we initiate the algorithmic analysis of majority illusion in social networks, focusing on two computational questions. First, we are interested in which networks allow for the possibility of illusion, i.e., whether there is a labelling of the nodes such that a specified fraction of agents is under illusion. We show that this problem is NP-complete for every fraction strictly greater than $\frac{1}{2}$ by a non-trivial reduction from the NP-complete problem 3-SAT. Further, we focus on the problem of eliminating illusion from a network by modifying the agents' connectivity, with a constraint on the number of edges which can be added or eliminated. We show that checking if it is possible to alter the structure of the network to ensure that at most a given fraction of agents is under illusion is NP-complete, reducing from the NP-complete problem 2P2N-SAT. \paragraph{Other related work.} Our results are also connected to a number of research lines in various AI-related areas.
{\em Opinion Manipulation.} Our work is directly related to computational models of social influence, notably the work of \citet{AulettaEtAlAIJ2020}, where networks and initial distributions of opinions are identified such that an opinion can become a consensus opinion following local majority updates. In this context, it is important to observe that when all nodes are under majority illusion, a synchronous majoritarian update causes an initial minority to evolve into a consensus in just one step. Other notable models include \cite{DoucetteEtAlJAAMAS2019} who studied the propagation of possibly incorrect opinions with an objective truth value in a social network, and the stream of papers studying the computational aspects of exploiting (majoritarian) social influence via opinion transformation \citep{BredereckElkind2017,AulettaEtAlAIJ2020,AulettaEtAlTCS2021,CastiglioniAAAI2020}. {\em Network Manipulation.} An important research line has looked at how to transform a social network structure with applications in the voting domain. \cite{WilderVorobeychik2018}, e.g., studied how an external manipulator having a limited budget can select a set of agents to directly influence, to obtain a desired outcome of elections. In a similar setting, \cite{FaliszewskiEtAl2018} studied ``bribes'' of voters' clusters. {\em Social Choice on Social Networks.} Our research aligns with the work in computational social choice, in particular strategic voting \citep{MeirStrategicVoting} and iterative voting (e.g., \cite{MeirAIJ2017,ReijngoudEndrissAAMAS2012}) where decision-making happens sequentially. Of relevance are also the recently found connections between iterative voting and social networks (\cite{Wilczynski2019PollConfidentVI}, \cite{Baumeister2020ManipulationOO}).
There are also various other accounts of paradoxical effects in social networks which are related to our work, such as the \emph{friendship paradox}, according to which, on average, individuals are less well-connected than their friends (see, e.g., \cite{hodas2013friendship}, \cite{alipourfard2020friendship}). Exploiting a similar paradox, \cite{santos2021biased} recently showed how false consensus leads to the lack of participation in team efforts. \paragraph{Paper structure.} Section~\ref{sec:preliminaries} provides the basic setup and definitions. Section~\ref{sec:verifying} focuses on checking whether illusion can occur in a network while Section~\ref{sec:eliminating} studies illusion elimination. Section~\ref{sec:conclusions} concludes the paper, presenting various potential future directions. Some proofs are omitted and can be found in the appendix. \section{Preliminaries}\label{sec:preliminaries} Our model features a set $N$ of agents, connected in a graph $(N,E)$, with $E\subseteq N^2$. Throughout the paper we will consider \emph{undirected graphs}, requiring $E$ to be symmetric. Furthermore, we assume that $E$ is \emph{irreflexive}, i.e. that $E$ does not include self-loops. We call such a graph a \emph{social network}. For $i\in N$ we denote by $E(i)=\{j \in N : E(i,j)\}$ the set of $i$'s neighbours. Furthermore, a network $( N, E)$ is an \emph{extension} of $( N, E')$ if $E' \subseteq E$. Similarly, if $E \subseteq E'$, we say that $(N,E)$ is a \emph{subnetwork} of $(N,E')$. \paragraph{Labellings.} We will work with social networks where each of the agents has an opinion, which we model as a labelling (or a colouring) over two possible alternatives. So, we consider \emph{labelled social networks}, in which every node is assigned its alternative (colour). Throughout the paper we assume a binary set of colours $C=\{b,r\}$ (\emph{blue} and \emph{red}).
\begin{definition}[Labelled Social Network] A \emph{labelled social network} is a tuple $(N,E,f)$, where $(N,E)$ is a social network and $f: N \rightarrow C$ is a \emph{labelling} which assigns an alternative to each agent. \end{definition} Further, given a labelling $f$ of a social network $(N,E)$, we denote the set of red nodes $\{i \in N : f(i)=r \} $ as $R_f$ and the set of blue nodes $\{i \in N : f(i)=b \}$ as $B_f$. Moreover, for a set $S \subseteq N$, $R^S_f$ is the set of red nodes in $S$, while $B^S_f$ is the set of blue nodes in $S$. We omit $f$ if clear from the context. We will further distinguish between the majority option in the entire social network and the majority option from an agent's perspective, considering only \emph{strict} majorities. It is worth noting that under such a definition, a majority winner does not exist if the number of nodes labelled blue is the same as the number of those labelled red. So, given a labelled social network \textit{SN}=$(N,E,f)$, we denote the colour adopted by the strict majority in \textit{SN} as the \emph{majority winner} ($W_{\textit{SN}}$). Formally, a colour $c$ is a majority winner in \textit{SN} if and only if $| \{n \in N : f(n)=c \} | > | \{ n' \in N : f(n') \neq c \} |$. Similarly, for an agent $i$, $W_{\textit{SN}}^i$ is the majority option in $i$'s (open) neighbourhood. Formally, a colour $c$ is a majority winner in $i$'s neighbourhood if and only if $| \{n \in E(i) : f(n)=c \} | > | \{ n' \in E(i) : f(n') \neq c \} |$. Henceforth, where relevant, we will assume without loss of generality that blue is the majority winner in a network. We are now ready to define the concept of \emph{majority illusion}, which occurs when a certain number of agents has a wrong perception of which colour is the majority winner in the network. We say that an agent $i \in N$ is \emph{under illusion} if $W_{\textit{SN}}$ and $W^i_{\textit{SN}}$ exist, while $W_{\textit{SN}}^i \neq W_{\textit{SN}}$.
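These definitions can be checked mechanically. The following Python sketch is our own illustration (the graph encoding, colour symbols and function names are not part of the formal model); it computes the strict majority winner of a labelled network and the set of agents under illusion:

```python
from collections import Counter

def majority_winner(colours):
    """Strict majority colour of a collection, or None on a tie (or no colours)."""
    counts = Counter(colours).most_common()
    if not counts or (len(counts) > 1 and counts[0][1] == counts[1][1]):
        return None  # no strict majority winner
    return counts[0][0]

def under_illusion(edges, labelling):
    """Agents whose neighbourhood majority exists and differs from the global one."""
    nbrs = {i: set() for i in labelling}
    for u, v in edges:  # undirected, irreflexive graph
        nbrs[u].add(v)
        nbrs[v].add(u)
    w = majority_winner(labelling.values())
    if w is None:
        return set()
    return {i for i in labelling
            if (wi := majority_winner([labelling[j] for j in nbrs[i]])) is not None
            and wi != w}

# A star with a red hub: blue wins globally (3 vs 2), yet every
# peripheral agent sees only the red hub in its neighbourhood.
star = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(under_illusion(star, {0: 'r', 1: 'r', 2: 'b', 3: 'b', 4: 'b'}))  # {1, 2, 3, 4}
```

In this toy network four of the five agents (a $0.8$ fraction) are under illusion, while the hub, which sees three blue neighbours, is not.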
\begin{definition}[$q$-majority illusion] Let $q \in \mathbb{Q} \cap [0,1]$. Then, a \emph{$q$-majority illusion} is a labelled social network $\textit{SN}=(N,E,f)$ such that at least $q \cdot |N| $ agents are under illusion. \end{definition} For a given social network $(N,E)$, fraction $q$ and a function $f: N \rightarrow C$, we say that $f$ \emph{induces} a $q$-majority illusion, if $(N,E,f)$ is a $q$-majority illusion. When no confusion can arise, we will sometimes simply say that $f$ induces illusion. If there is a labelling of a network \textit{SN} which induces a $q$-majority illusion, then we say that \textit{SN} \emph{admits} a $q$-majority illusion. Also, for a network $(N,E)$ and $n,n' \in N$ such that $E(n)=\{n'\}$, we say that $n$ is a \emph{dependant} of $n'$. Let us further observe that if a labelling $f$ induces a 1-majority illusion for a network $(N,E)$ and $n$ is a dependant of $n'$, then $f(n')=r$. Finally, for a labelled network $(N,E,f)$ and $i\in N$ we define the \emph{margin of victory} for $i$ as $|B^{E(i)}_f| - |R^{E(i)}_f|$. \section{Verifying Illusion}\label{sec:verifying} We are interested in finding the complexity of checking, for a specific $q$, if a given network admits a $q$-majority illusion. \begin{quote} \noindent \textsc{$q$-majority illusion}:\\ \hspace*{-1em} \indent\textit{Input:} Social network $\textit{SN}=( N, E)$.\\ \hspace*{-1em}\textit{Question:} Is there a colouring $f: N \rightarrow C$ such that $f$ induces a $q$-majority illusion? \end{quote} We now prove that \textsc{$q$-majority illusion} is NP-complete for every rational $q \in(\frac{1}{2},1]$, by providing a reduction from the NP-complete problem \textsc{3-SAT} for every such $q$. In \textsc{3-SAT} we check the satisfiability of a CNF formula in which all clauses have exactly three literals (see, e.g. \cite{papadimitriou2003computational}). We say that such a formula is in 3-CNF.
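Before turning to the reduction, note that \textsc{$q$-majority illusion} can be decided naively by enumerating all $2^{|N|}$ labellings, which also makes membership in NP apparent: a labelling is a polynomially verifiable certificate. The Python sketch below is our own illustration; the helper names and the toy star network are not taken from the paper:

```python
from collections import Counter
from itertools import product

def strict_winner(colours):
    counts = Counter(colours).most_common()
    if not counts or (len(counts) > 1 and counts[0][1] == counts[1][1]):
        return None  # tie: no strict majority
    return counts[0][0]

def fraction_under_illusion(nodes, edges, f):
    nbrs = {i: set() for i in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    w = strict_winner([f[i] for i in nodes])
    if w is None:
        return 0.0
    hit = sum(1 for i in nodes
              if (wi := strict_winner([f[j] for j in nbrs[i]])) is not None and wi != w)
    return hit / len(nodes)

def admits_q_illusion(nodes, edges, q):
    """Brute force: does some labelling put at least a q fraction under illusion?"""
    for cols in product('br', repeat=len(nodes)):
        if fraction_under_illusion(nodes, edges, dict(zip(nodes, cols))) >= q:
            return True
    return False

star_nodes, star_edges = list(range(5)), [(0, i) for i in range(1, 5)]
print(admits_q_illusion(star_nodes, star_edges, 0.8))  # True
print(admits_q_illusion(star_nodes, star_edges, 1.0))  # False
```

A five-node star, for instance, admits a $0.8$-majority illusion (colour the hub and one leaf red), but the exhaustive search confirms that no labelling puts all five of its agents under illusion at once.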
We describe the constructions and sketch the main lines of the proof, which can be found in complete form in the appendix. Let $\varphi$ be a formula in 3-CNF. We commence with constructing a social network which we call the \emph{encoding} of $\varphi$, or $E_{\varphi}=(N,E,f)$. We will further show that it admits 1-majority illusion if and only if $\varphi$ is satisfiable, entailing the NP-hardness of \textsc{1-majority illusion}. Finally, for each $q \in(\frac{1}{2},1]$ we construct $E^q_{\varphi}$ appending a non-trivial network construction to $E_{\varphi}$. We then conclude the proof showing that $E^q_{\varphi}$ admits a $q$-majority illusion iff $\varphi$ is satisfiable. \paragraph{Variable, clause, and balance gadgets.} {For a formula $\varphi$ in 3-CNF, we denote the set of variables in $\varphi$ as $P_{\varphi}=\{p_1, \dots, p_m\}$, and the set of clauses in $\varphi$ as $C_{\varphi}=\{C_1, \dots, C_n\}$.} The first step is to encode propositional variables. For a variable $p_i$, we define a subnetwork called \emph{variable gadget} as depicted in Figure \ref{VariableGadget1}. {We refer to the nodes in the bottom pair of the gadget as \emph{literal nodes}. Also, we call the left literal node $p_i$, and the right $\neg p_i$.} \begin{lemma} A labelling of a variable gadget (considered as a separate network) induces a 1-majority illusion only if exactly one of the nodes in the bottom pair is labelled $r$. \end{lemma} \begin{comment} Observe that a labelling of this gadget (considered as a separate network) induces a 1-Majority Illusion only if exactly one of the nodes in the bottom pair is labelled $r$. 
To see that this observation holds, notice that all nodes in the 4-clique need to be labelled $r$ for the 1-Majority Illusion to hold, as all of them have at least one dependant. Further, as there are eleven nodes in the gadget, only five of them can be labelled $r$ for a labelling to induce the 1-Majority Illusion in such a structure (considered as a separate network). But 1-Majority Illusion will not be induced for this structure if nodes $p_i$ and $\neg p_i$ situated at the bottom of the variable gadget are both labelled $b$. Indeed, if, in some labelling of the gadget which induces 1-Majority Illusion, $p_i$ and $\neg p_i$ were labelled $b$, then at least one of the nodes in the 4-clique would not be under illusion, which is not possible. This holds as, by the definition of 1-Majority Illusion, all of the nodes in the clique are labelled with $r$ and at least six nodes in the gadget are labelled $r$, in a labelling of a variable gadget which induces illusion. It implies that one of the nodes in the clique would be linked to three red nodes and three blue nodes. \end{comment} We say that a labelling of a variable gadget is of type A if it induces a 1-majority illusion and $f(p_i)=r$. Symmetrically, we say that a labelling is of type B if it induces illusion and $f(\neg p_i)=r$. It is worth observing that labellings of type A and of type B are unique.
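The uniqueness of the type A and type B labellings can also be checked mechanically. The following Python sketch enumerates all $2^{11}$ labellings of the variable gadget of Figure \ref{VariableGadget1}; the edge list is our transcription of the figure (an assumption to double-check against the drawing), and we again assume the red-specific notion of illusion (blue strictly wins globally, while every agent sees strictly more red than blue neighbours).

```python
from itertools import product

# Variable gadget of Figure \ref{VariableGadget1}: a 4-clique {A, B, C, D},
# literal nodes E (= p_i) and F (= neg p_i), and five dependants G, H, I, K, T.
# Edge list transcribed from the figure; this transcription is an assumption.
nodes = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'T']
edges = [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D'),
         ('C', 'E'), ('D', 'E'), ('C', 'F'), ('D', 'F'),   # literal nodes
         ('A', 'G'), ('A', 'T'), ('C', 'K'), ('B', 'H'), ('D', 'I')]  # dependants

neighbours = {n: set() for n in nodes}
for u, v in edges:
    neighbours[u].add(v)
    neighbours[v].add(u)

def induces_1_majority_illusion(f):
    blue = sum(1 for n in nodes if f[n] == 'b')
    if blue <= len(nodes) - blue:        # blue must strictly win globally
        return False
    return all(                          # every node must see red strictly ahead
        sum(1 for m in neighbours[n] if f[m] == 'r')
        > sum(1 for m in neighbours[n] if f[m] == 'b')
        for n in nodes
    )

solutions = []
for lab in product('br', repeat=len(nodes)):
    f = dict(zip(nodes, lab))
    if induces_1_majority_illusion(f):
        solutions.append(f)
print(len(solutions))  # 2
```

The search reports exactly two labellings inducing a 1-majority illusion, one with the left literal node red and one with the right literal node red, matching the type A and type B labellings.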
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture} [->,shorten >=1pt,auto,node distance=1.2cm, semithick] \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (A) {}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (B) [ right of= A] {}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (C) [below of = A] { }; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (D) [below of = B] { }; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (E) [below of = C] { }; \node (E') [left of = E] [fontscale=4] { $\displaystyle p_i$ }; \node[shape=circle,draw=black, fill=myblue] (F) [below of = D] { }; \node[shape=circle,draw=white] (F') [right of = F] [fontscale=4] {$\neg p_i$ }; \node[shape=circle,draw=black, fill=myblue] (G) [left of = A]{}; \node[shape=circle,draw=black, fill=myblue] (H) [right of = B]{}; \node[shape=circle,draw=black, fill=myblue] (I) [ below of=H]{}; \node[shape=circle,draw=black, fill=myblue] (K) [ below of=G]{}; \node[shape=circle,draw=black, fill=myblue] (TEST) at ($(G)!0.5!(K)$) {}; \draw [thick,-] (TEST) to (A) ; \draw [thick,-] (A) to(G) ; \draw [thick,-] (K) to(C) ; \draw [thick,-] (B) to(H) ; \draw [thick,-] (D) to(I) ; \draw [thick,-] (A) to(B) ; \draw [thick,-] (B) to (C) ; \draw [thick,-] (A) to (D) ; \draw [thick,-] (A) to (C) ; \draw [thick,-] (C) to (D) ; \draw [thick,-] (B) to (D) ; \draw [thick,-] (C) to (E) ; \draw [thick,-] (D) to (F) ; \draw [thick,-] (C) to (F) ; \draw [thick,-] (D) to (E) ; \end{tikzpicture}} ~ \scalebox{0.5}{ \begin{tikzpicture} [->,shorten >=1pt,auto,node distance=1.2cm, semithick] \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (A) {}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (B) [ right of= A] {}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (C) [below of = A] { }; \node[shape=circle,draw=myred, pattern=crosshatch, 
pattern color=myred] (D) [below of = B] { }; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (E) [below of = C] { }; \node[shape=circle,draw=white] (E') [left of = E] [fontscale=4] {$p_i$ }; \node[shape=circle,draw=black, fill=myblue] (F) [below of = D] { }; \node[shape=circle,draw=white] (F') [right of = F] [fontscale=4] {$\neg p_i$ }; \node[shape=circle,draw=black, fill=myblue] (G) [left of = A]{}; \node[shape=circle,draw=black, fill=myblue] (H) [right of = B]{}; \node[shape=circle,draw=black, fill=myblue] (I) [ below of=H]{}; \node[shape=circle,draw=black, fill=myblue] (K) [ below of=G]{}; \node[shape=circle,draw=black, fill=myblue] (TEST) at ($(G)!0.5!(K)$) {}; \draw [thick,-] (A) to(G) ; \draw [thick,-] (K) to(C) ; \draw [thick,-] (B) to(H) ; \draw [thick,-] (D) to(I) ; \draw [thick,-] (TEST) to(A) ; \draw [thick,-] (A) to(B) ; \draw [thick,-] (B) to (C) ; \draw [thick,-] (A) to (D) ; \draw [thick,-] (A) to (C) ; \draw [thick,-] (C) to (D) ; \draw [thick,-] (B) to (D) ; \draw [thick,-] (C) to (E) ; \draw [thick,-] (D) to (F) ; \draw [thick,-] (C) to (F) ; \draw [thick,-] (D) to (E) ; \end{tikzpicture} } \caption{Variable gadget of type A in the left network, and of type B in the right network. The gadgets above correspond to $p_i$, and we refer to the left node in the bottom pair as $p_i$, and to the right as $\neg p_i$.}\label{VariableGadget1} \end{figure} As a second step, we define \emph{clause gadgets}, associated to each clause $C_j \in C_{\varphi}$, as depicted in Figure~\ref{ClaueGadget1}. {The top three nodes outside of the dashed rectangle are literal nodes and do not belong to the gadget.} Then, a clause gadget consists of sixteen nodes , including a 5-clique. In this structure, three members of the clique are adjacent to two dependants each and to one additional node, which we call a \emph{co-dependant}. The three co-dependants form a clique in this gadget. 
Members of the five-clique which are adjacent to a co-dependant are also adjacent to particular literal nodes. For every literal $L$ in $C_j$, $L$ is adjacent to exactly one of the mentioned members of the clique, and at most one literal node is adjacent to each member of a clause gadget. Connections between literal nodes and a clause gadget are shown in Figure~\ref{ClaueGadget1}. The remaining two nodes in the 5-clique have one dependant each. \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture} [->,shorten >=1pt,auto,node distance=1.5cm, semithick] \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (S) { }; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (A)[below of=S, left of=S] {}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (B) [below of=S, right of= S] {}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (C) [below of = A] { }; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (D) [below of = B] { }; \node[shape=circle,draw=black, fill=myblue] (L2) [above of=S] { }; \node[shape=circle,draw=black, fill=myblue] (E) [below of = C] { }; \node[shape=circle,draw=black, fill=myblue] (F) [below of = D] { }; \node[shape=circle,draw=black, fill=myblue] (G) [left of = A]{}; \node[shape=circle,draw=black, fill=myblue] (G1) [above of = G]{}; \node[shape=circle,draw=black, fill=myblue] (H) [right of = B]{}; \node[shape=circle,draw=black, fill=myblue] (H1) [above of = H]{}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (L1) [above of=G1, right of=G1] { }; \node[] (L1') [left of = L1] [fontscale=4] {$L_1$ }; \node[shape=circle,draw=black, fill=myblue] (L3) [above of=H1, left of=H1] { }; \node[shape=circle,draw=white] (L3') at ($(L2)!0.5!(L3)$) [fontscale=4] {$L_3$ }; \node[shape=circle,draw=white] (L2')at ($(L2)!0.5!(L1)$) [fontscale=4] {$L_2$ }; \draw [thick,-] (L1) to(A) ; \draw [thick,-] (L2) to(S) ;
\draw [thick,-] (L3) to(B) ; \node[shape=circle,draw=black, fill=myblue] (I) [ below of=H]{}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (J) [ below of=I]{}; \node[shape=circle,draw=black, fill=myblue] (K) [ below of=G]{}; \node[shape=circle,draw=black, fill=myblue] (N) [below of = K]{}; \node[shape=circle,draw=myred, pattern=crosshatch, pattern color=myred] (M) [right of = E]{}; \draw [thick,-, bend right] (M) to (J) ; \draw [thick,-, bend left] (M) to (N) ; \draw [thick,-, bend left] (J) to (N) ; \draw [thick,-] (A) to(G) ; \draw [thick,-] (A) to(N) ; \draw [thick,-] (K) to(A) ; \draw [thick,-] (B) to(H) ; \draw [thick,-] (B) to(I) ; \draw [thick,-] (S) to(M) ; \draw [thick,-] (G1) to(S) ; \draw [thick,-] (H1) to(S) ; \draw [thick,-] (A) to(B) ; \draw [thick,-] (B) to (C) ; \draw [thick,-] (A) to (D) ; \draw [thick,-] (A) to (C) ; \draw [thick,-] (C) to (D) ; \draw [thick,-] (B) to (D) ; \draw [thick,-] (A) to (S) ; \draw [thick,-] (B) to (S) ; \draw [thick,-] (C) to (S) ; \draw [thick,-] (S) to (D) ; \draw [thick,-] (J) to (B) ; \draw [thick,-] (C) to (E) ; \draw [thick,-] (D) to (F) ; \node[draw, dashed, fit=(G1) (H1) (J) ](FIt1) {}; \end{tikzpicture}} \caption{Clause gadget, corresponding to a clause $(L_1, L_2, L_3)$, enclosed in the dashed rectangle. The top three nodes are literal nodes.}\label{ClaueGadget1} \end{figure} Observe that in any labelling of this gadget inducing a 1-majority illusion, all members of the 5-clique need to be labelled $r$, as each of them has a dependant. Further, for a labelling of this gadget in which blue is the majority winner to induce a 1-majority illusion, only two nodes outside of the clique can be labelled $r$. Otherwise, at least eight nodes in the gadget would be labelled $r$, and thus $b$ would not be the unique majority winner. Also, note that at least two co-dependants need to be labelled red in order for all three of them to be under illusion.
So, in a labelling of this gadget which induces a 1-majority illusion, exactly 7 nodes are labelled red. \begin{lemma} There exists a labelling of a clause gadget (not considered as a separate network) which induces a 1-majority illusion, with blue being the majority winner in this structure, if and only if at least one node is adjacent to a literal node labelled $r$. \end{lemma} The final component of the encoding of $\varphi$ is the \emph{balance gadget}. Given a natural number $k \geq 2$, if $k$ is even, it consists of $\frac{k}{2}$ pairs of nodes. Otherwise, it consists of $\frac{k-3}{2}$ pairs of nodes and 1 triple of nodes. \begin{comment} \begin{figure}[H] \centering \scalebox{0.6}{ \begin{tikzpicture} [->,shorten >=1pt,auto,node distance=1cm, semithick] \node[shape=circle,draw=myred, dashed] (A) {}; \node[shape=circle, below of = A, draw=myred, fill=myred] (B) {}; \node[shape=circle, right of =B, draw=myred, fill=myred] (C) {}; \draw [thick, dashed,-] (A) to (B) ; \draw [thick, dashed,-] (C) to (A) ; \draw [thick,-] (C) to (B) ; \node[shape=circle, below of=B, draw=myred, fill=myred] (D) {}; \node[shape=circle, below of =C, draw=myred, fill=myred] (E) {}; \draw [thick,-] (D) to (E) ; \node[shape=circle, below of=D] (D) {$\dots$}; \node[shape=circle, below of=D, draw=myred, fill=myred] (F) {}; \node[shape=circle, right of =F, draw=myred, fill=myred] (G) {}; \draw [thick,-] (F) to (G) ; \end{tikzpicture}} \caption{Balance gadget with a unique labelling such that all members are under illusion.}\label{balance1} \end{figure} \end{comment} \paragraph{Encoding of a 3-CNF formula.} We are now ready to construct a social network starting from a 3-CNF formula $\varphi$. Firstly, for every $p \in P_{\varphi}$ create a variable gadget as in Figure \ref{VariableGadget1}.
Further, for every clause $C_i=\{L_i^1, L_i^2, L_i^3\}$ in $C_{\varphi}$ create a clause gadget as in Figure \ref{ClaueGadget1}, with the literal node corresponding to $L_i^1$ adjacent to the top left member of the 5-clique, the one corresponding to $L_i^2$ to the central top member, and the one corresponding to $L_i^3$ to the top right member. As a final step, create a balance gadget with $k= m+2n-1$. Observe that as there are $m+2n-1$ nodes in the balance gadget, the total number of nodes in the encoding of $\varphi$ is $12m + 18n - 1$. Let us first observe a few facts regarding any labelling of $E_{\varphi}$, for a formula $\varphi$ in 3-CNF, which induces a 1-majority illusion. First note that $E_{\varphi}$ contains $m$ variable gadgets, with 11 nodes each. As observed earlier, in every labelling of $E_{\varphi}$ which induces a 1-majority illusion, at least 5 nodes have to be labelled $r$ in every variable gadget. Furthermore, $E_{\varphi}$ contains $n$ clause gadgets, with 16 nodes each. Note that in a labelling of $E_{\varphi}$ which induces a 1-majority illusion, at least 7 nodes need to be labelled $r$ in every clause gadget, as the 5-clique has to be labelled all $r$ due to the presence of dependants, and at least 2 co-dependants need to be labelled red, as otherwise some of the nodes in the bottom 3-clique would not be under illusion. Observe further that in all labellings of the encoding of $\varphi$ that induce a 1-majority illusion, all $m + 2n - 1$ members of the balance gadget are labelled red. Hence, due to the presence of the balance gadget, any labelling of $E_{\varphi}$ which induces a 1-majority illusion contains at least $6m+9n-1$ red nodes and at most $6m+9n$ blue nodes, while blue has a margin of victory of at most 1. \begin{lemma} In a labelling of $E_{\varphi}$ which induces a 1-majority illusion every variable gadget is of type A or type B.
\end{lemma} So, every labelling of $E_{\varphi}$ which induces a 1-majority illusion corresponds to a unique valuation over $P_{\varphi}$, where a variable $p_i$ is said to be true if the labelling of the variable gadget corresponding to $p_i$ is of type A, and false if it is of type B. Note also that, as we argued before, a labelling of $E_{\varphi}$ can only induce a 1-majority illusion if at least one node in every clause gadget is adjacent to a literal node labelled $r$. Finally, observe that if every variable gadget is of type A or type B, and at least one node in every clause gadget is adjacent to a literal node labelled $r$, we can find a labelling of $E_{\varphi}$ which induces illusion, as depicted in Figures \ref{VariableGadget1} and \ref{ClaueGadget1}, with all nodes in the balance gadget labelled red. We are now ready to show that for every formula $\varphi$ in 3-CNF, $E_{\varphi}$ admits a 1-majority illusion if and only if $\varphi$ is satisfiable. \begin{lemma}\label{lemma:1-hard} Let $\varphi$ be a formula in 3-CNF. Then, $\varphi$ is satisfiable if and only if $E_{\varphi}$ admits a 1-majority illusion. \end{lemma} \begin{proof} Let us consider a formula $\varphi$ in 3-CNF with the set of variables $P_{\varphi}=\{p_1, \dots, p_m\}$ and the set of clauses $C_{\varphi}=\{C_1, \dots, C_n\}$. Then, let us construct the encoding $E_{\varphi}$ and show that it admits a 1-majority illusion if and only if $\varphi$ is satisfiable. Suppose first that $\varphi$ is satisfiable. Then, take a model $M$ of $\varphi$ and construct the following labelling of $E_{\varphi}$. Colour the variable gadgets so that the gadget corresponding to $p_i$ is of type A if $p_i$ is true in $M$, and of type B otherwise. Note that, as $M$ is a model of $\varphi$, by construction of $E_{\varphi}$ at least one node in every clause gadget is adjacent to a literal node labelled $r$. So, there is a labelling of $E_{\varphi}$ which induces a 1-majority illusion. Suppose now that $\varphi$ is not satisfiable.
Then, assume towards contradiction that there is a labelling $f$ of $E_{\varphi}$ which induces a 1-majority illusion. Observe that as $f$ induces a 1-majority illusion, it corresponds to a unique valuation $V$ over $P_{\varphi}$, where a variable $p_i$ is true in $V$ if its corresponding gadget is of type A, and false if it is of type B. Furthermore, observe that as $\varphi$ is not satisfiable, there exists a clause $C_j \in C_{\varphi}$ such that for every literal $L$ in $C_j$, $L$ is false in $V$. But this entails that all literal nodes adjacent to the clause gadget corresponding to $C_j$ are labelled $b$. But then $f$ does not induce a 1-majority illusion, which contradicts the assumption. \end{proof} We now show some further properties of $E_{\varphi}$. Given a 3-CNF formula $\varphi$, let $I_{\varphi} = 6m +9n -1$, where $m$ is the number of variables and $n$ the number of clauses in $\varphi$. Observe that this is the maximum number of nodes which can be labelled red in $E_{\varphi}$ if blue is the strict majority colour in this network. \begin{lemma}\label{lemma:RigidEncoding} For every 3-CNF formula $\varphi$, every $k \leq I_{\varphi}$, and any labelling $f$ of $E_{\varphi}$ such that $|R_f| = I_{\varphi} - k$, the number of nodes under illusion in $E_{\varphi}$ under $f$ is at most $|N| - k$. \end{lemma} We also need the following technical lemma. \begin{lemma}\label{lemma:h-exists} Let $q$ be a rational number in $(\frac{1}{2}, 1]$, and $k>0$ be a natural number. Then, there exists a natural number $h^*$ such that $\frac{k+ h^*}{k+ 2h^* } \geq q$, but $\frac{k+ h^* -1}{k+ 2h^* } < q $. \end{lemma} We refer to such a number as $h^*_{k,q}$. It is not difficult to show that we can compute $h_{k,q}^*$ in polynomial time. This observation is crucial to ensure that the intended reduction is constructible in polynomial time. We are now ready to prove the main result of this section.
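Before doing so, let us illustrate Lemma \ref{lemma:h-exists}: with exact rational arithmetic, $h^*_{k,q}$ can be found by a direct search. The brute-force loop below is only a sketch; the polynomial-time computability claimed above would instead rely on a closed-form bound on $h^*$.

```python
from fractions import Fraction

def h_star(k, q):
    """Return the smallest natural number h with (k+h)/(k+2h) >= q and
    (k+h-1)/(k+2h) < q, for rational q in (1/2, 1] and natural k > 0.
    The lemma guarantees that such an h exists, so the loop terminates."""
    h = 0
    while True:
        if Fraction(k + h, k + 2 * h) >= q and Fraction(k + h - 1, k + 2 * h) < q:
            return h
        h += 1

print(h_star(4, Fraction(3, 4)))  # 1, since 5/6 >= 3/4 while 4/6 < 3/4
```

For instance, $h^*_{3,1}=0$, since $\frac{3}{3} \geq 1$ while $\frac{2}{3} < 1$.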
To show that \textsc{$q$-majority illusion} is NP-hard for a particular rational $q \in (\frac{1}{2},1]$, we construct a network $E^q_{\varphi}$ for every formula $\varphi$ in 3-CNF. We start by constructing $E_{\varphi}$ and a set of $h_{|E_{\varphi}|, q}^*$ pairs of nodes. Then, it follows from Lemma \ref{lemma:1-hard}, as well as Lemmata \ref{lemma:RigidEncoding} and \ref{lemma:h-exists}, that $E^q_{\varphi}$ admits a $q$-majority illusion if and only if $\varphi$ is satisfiable. The details of the proof can be found in the appendix. Observe further that \textsc{$q$-majority illusion} is in NP, as one can easily check the number of nodes under illusion in a labelled network. This concludes the proof of the following theorem: \begin{theorem}\label{thm:verifyillusion} \textsc{$q$-majority illusion} is NP-complete for every rational $q$ in $(\frac{1}{2},1]$. \end{theorem} \section{Eliminating Illusion}\label{sec:eliminating} We now turn to the problem of reducing the number of nodes under illusion in a given labelled network, by modifying the connections between them. Namely, we consider the problem of checking if it is possible to ensure that a $q$-majority illusion does not hold in a labelled network by altering only a bounded number of edges. \begin{quote} \noindent \textsc{$q$-Illusion Elimination}:\\ \hspace*{-1em} \indent\textit{Input:} $\textit{SN}=( N, E, f)$ such that $f$ induces a $q$-majority illusion in \textit{SN}, $k \in \mathbb{N}$ such that $k \leq |E|$.\\ \hspace*{-1em}\textit{Question:} Is there a $\textit{SN}'=(N,E',f)$ such that $| \{ e \in N^2 : e \in E \textit{ iff } e \notin E' \} | \leq k$ and $f$ does not induce a $q$-majority illusion in \textit{SN'}? \end{quote} Subsequently, we consider the problem of eliminating a $q$-majority illusion just by adding edges to the network. \begin{quote} \noindent \textsc{Addition $q$-Illusion Elimination}:\\ \hspace*{-1em} \indent\textit{Input:} $\textit{SN}=( N, E, f)$ s.t.
$f$ induces a $q$-majority illusion in \textit{SN}, $k \in \mathbb{N}$ such that $k \leq |E|$.\\ \hspace*{-1em}\textit{Question:} Is there a $\textit{SN}'=(N,E',f)$ such that \textit{SN} is a subnetwork of \textit{SN'}, $ |E'| - |E| \leq k$ and $f$ does not induce a $q$-majority illusion in \textit{SN'}? \end{quote} Finally, we can give an analogous definition for \textsc{Removal $q$-Illusion Elimination}, which looks for subnetworks of \textit{SN} obtained by removing at most $k$ edges such that an existing $q$-majority illusion is eliminated. In this section we will show that these problems are NP-complete for every rational $q$ in $(0,1)$ by reduction from the 2P2N-SAT problem, which is known to be NP-complete. In 2P2N-SAT we check whether a CNF formula in which every variable appears twice in positive and twice in negative form is satisfiable (see \cite{berman2004approximation}). We begin by showing that \textsc{$q$-Illusion Elimination} is NP-complete for every rational $q$ in $(0,1)$. We first present the structures that will form our reduction, and then sketch the main lines of the proof, which can be found in complete form in the appendix. \paragraph{$k$-Pump-up gadget.} Let us construct what we call a \emph{$k$-pump-up gadget}. For a natural number $k \geq 1$ we create $k+4$ blue nodes which are not connected to each other. In addition, we construct 4 red nodes, which are also not connected to each other. Furthermore, let each red node in the gadget be connected to all blue nodes in this structure. Observe that if a $k$-pump-up gadget is embedded in a network in which blue is the majority winner, then $k+4$ nodes are under illusion in this structure, while 4 are not. Also, for every blue node $i$ in the gadget, the margin of victory of $i$ is $-4$. \paragraph{$k$-Pump-down gadget.} Let us further construct what we call a \emph{$k$-pump-down gadget}.
For an odd natural number $k \geq 3$, the $k$-pump-down gadget is a $k$-clique in which blue has a majority of 1. For an even natural number $k \geq 4$, we construct the gadget for $k-1$ and add a disjoint red node. Observe that if a $k$-pump-down gadget is embedded in a network in which blue is the majority winner, then none of the $k$ members of the structure is under illusion. Moreover, if a blue node in the gadget were adjacent to an additional red node, it would be pushed into illusion. We also need the following technical lemmas. \begin{lemma}\label{lemma:UpExists1} For every pair of natural numbers $m,k>0$ and any rational number $q$ in $(0,1)$ such that $\frac{m}{k}<q$ there exists an $h$ such that $\frac{m+h}{k+h+4} < q$ but $\frac{m+h+1}{k+h+4} \geq q$. \end{lemma} We will further denote such a number as $h_{k,m,q}^{\#}$, or $h^{\#}$ if $k,m$ and $q$ are clear from the context. \begin{lemma}\label{lemma:DownExist1} For every rational number $q\in (0,1)$ and $m,k \in \mathbb{N}$ such that $\frac{m}{k} \geq q$ there is a natural $h$ such that $\frac{m}{k+h} < q$, but $\frac{m+1}{k+ h} \geq q$. \end{lemma} We denote such a number as $h^+_{m,k,q}$, or $h^+$ if $m,k$ and $q$ are clear from the context. We are now ready to construct the labelled social network which we will call an \emph{encoding} of a formula $\varphi$ in 2P2N form, with the set of variables $P_{\varphi} = \{p_1, \dots, p_m\}$ and the set of clauses $C_{\varphi}=\{C_1, \dots, C_n\}$. We also refer to such a network as $E_{\varphi}=(N,E,f)$. \paragraph{Variable, clause, and balance gadgets.} Let us start by describing what we call a \emph{variable gadget}. For every variable $p_i \in P_{\varphi}$, construct two triples of nodes labelled blue, $\{p_i^1, p_i^2, p_i^3\}$ and $\{ \neg p_i^1, \neg p_i^2, \neg p_i^3\}$. Let all literal nodes form a clique.
We say that the first of them corresponds to the literal $p_i$, while the second to $\neg p_i$, and call members of these triples \emph{literal nodes}. Further, for every literal $L$ let us construct a node $A_L$, labelled blue, which we call an \emph{auxiliary node} of $L$ and let auxiliary nodes form a clique. For each literal $L$, let $A_L$ be adjacent to all literal nodes not corresponding to $L$. Also, for every variable $p_i$ let us construct a node $E_i$ labelled blue, which we call the \emph{extra node} of $p_i$. Furthermore, for each variable $p_i$, let $E_i$ be adjacent to all auxiliary nodes and literal nodes not corresponding to $p_i$ or $\neg p_i$, and let all extra nodes form a clique. \begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture} [->,shorten >=1pt,auto,node distance=1.2cm, semithick] \node[shape=circle,draw=black, fill=myblue] (A) {}; \node[shape=circle,draw=black, right of=A, fill=myblue] (B) {}; \node[shape=circle,draw=black, left of=A, fill=myblue] (A') {}; \node[draw, fit=(A') (B), minimum height=1.2cm ](FIt1) {}; \draw [thick,-] (A) to (A') ; \draw [thick,-] (B) to (A) ; \draw [thick, bend left, -] (A') to (B) ; \node[shape=circle, left of=A'] (A'') [fontscale=4] {$p_i$}; \node[shape=circle,draw=black, right of=B, fill=myblue] (C) {}; \node[shape=circle,draw=black, right of=C, fill=myblue] (D) {}; \node[shape=circle,draw=black, right of=D, fill=myblue] (D') {}; \node[draw, fit=(C) (D'), minimum height=1.2cm ](FIt2) {}; \draw [thick,-] (C) to (D) ; \draw [thick,-] (D) to (D') ; \draw [thick, bend left, -] (C) to (D') ; \node[shape=circle, right of=D'] (D'') [fontscale=4] {$\neg p_i$}; \draw [thick,-] (FIt1) to (FIt2) ; \node[shape=circle,draw=black, below of=B, fill=myblue] (N) {}; \node[shape=circle, right of=N] (Q'') [fontscale=4] {$E_i$}; \node[shape=circle, below of=N] (N') {$\dots$}; \node[shape=circle,draw=black, left of=N', pattern=crosshatch, pattern color=myred] (N'') {}; \node[shape=circle,draw=black, right of=N', 
pattern=crosshatch, pattern color=myred] (N''') {}; \node[draw, fit=(N'') (N''') ](FIt3) {}; \node[shape=circle,draw=black, fill=myblue, above of=B] (E) {}; \node[shape=circle,draw=black, fill=myblue, right of=E] (F) {}; \draw [thick,-] (E) to (F) ; \node[shape=circle, left of=E] (Q) [fontscale=4] {$A_{p_i}$}; \node[shape=circle, right of=F] (Q') [fontscale=4] {$A_{\neg p_i}$}; \draw [thick,-] (F) to (FIt1) ; \draw [thick,-] (FIt2) -- (E) ; \draw [thick,-] (FIt3) to (N) ; \node[shape=circle,draw=black, above of=E, pattern=crosshatch, pattern color=myred] (G) {}; \node[shape=circle, left of=G] (H) {$\dots$}; \node[shape=circle,draw=black, left of=H, pattern=crosshatch, pattern color=myred] (I) {}; \node[draw, fit=(G) (I) ](FIt4) {}; \draw [thick,-] (E) to (FIt4) ; \node[shape=circle,draw=black, above of=F, pattern=crosshatch, pattern color=myred] (L) {}; \node[shape=circle, right of=L] (M) {$\dots$}; \node[shape=circle,draw=black, right of=M, pattern=crosshatch, pattern color=myred] (q) {}; \node[draw, fit=(L) (q) ](FIt6) {}; \draw [thick,-] (F) to (FIt6) ; \end{tikzpicture}} \caption{Variable Gadget}\label{fig:VariableGadget3} \end{figure} In addition, let us construct what we call a \emph{clause gadget}. For every clause $C_i \in C_{\varphi}$ let us create a \emph{verifier node} $v_{C_i}$, labelled blue. We say that this node corresponds to $C_i$. Furthermore, for each clause $C_i$ and each literal $L$ not in $C_i$, let $v_{C_i}$ be adjacent to all literal nodes corresponding to $L$, as well as all auxiliary and extra nodes. Finally, for each clause $C_i$, create a group of $3 |\neg P^i| + 3 |P_{\varphi}| +1 $ nodes labelled red, where $\neg P^i$ is the set of literals which are not in $C_i$. Let all nodes corresponding to members of $\neg P^i$ be adjacent to $v_{C_i}$. 
Observe that in an extension of the encoding of $\varphi$ in which one additional node labelled blue is adjacent to $v_{C_i}$ (and no edges from red nodes are added to the network), illusion is eliminated from this node. In addition, for every auxiliary node $A_L$ create a group of nodes labelled red, adjacent to $A_L$, of a size such that the number of red nodes in the neighbourhood of $A_L$ is greater than the number of those labelled blue by exactly 3. Namely, let the size of such a group be $9 |P_{\varphi}| + |C_{\varphi}| -1$. Similarly, for every literal node $L^j$ construct a group of nodes labelled red, adjacent to $L^j$, such that there is one more red node in the neighbourhood of $L^j$ than blue ones. Namely, let there be $9 |P_{\varphi}| + C^L - 2$ nodes adjacent to $L^j$, where $C^L$ is the number of clauses in which $L$ does not appear. Also, for every extra node $E_i$, construct a group of red nodes adjacent to $E_i$ of a size such that there is one more red node in the neighbourhood of $E_i$ than the number of blue nodes in the neighbourhood of $E_i$. Namely, let there be $9 |P_{\varphi}| + |C_{\varphi}| - 6$ such nodes. Finally, create a group of disconnected blue nodes of the minimal size sufficient for blue to be the strict majority in the encoding of $\varphi$. \paragraph{Budget and requirement.} We call $|P_{\varphi}|$ the \emph{requirement}, or $r_{\varphi}$. Also, we call $6|P_{\varphi}|$ the \emph{budget}, or $b_{\varphi}$. We say that a network $E_{\varphi}'=(N,E',f)$ satisfies the requirement and the budget if $ | \{ e \in N^2 : e \in E \textit{ iff } e \notin E' \} | \leq b_{\varphi} $ while at most $r_{\varphi}$ nodes are under illusion in $E_{\varphi}'$. \paragraph{Observations on modifications satisfying the budget and the requirement.} Let us first observe that the only nodes under illusion in $E_{\varphi}$ are literal nodes, extra nodes, auxiliary nodes and verifier nodes.
Therefore, it is sufficient to eliminate the illusion from all literal, extra and verifier nodes, as well as from half of the auxiliary nodes, to meet the requirement. Furthermore, one can verify that there is no network satisfying the budget and the requirement in which some node is pushed into illusion. In addition, if no literal node is pushed into illusion, one can verify that if illusion is eliminated from more than half of the auxiliary nodes, then at least one extra node would remain under illusion. For a variable gadget corresponding to $p_i$ such that in an extension \textit{SN'} of the encoding of $\varphi$ the illusion has been eliminated from $E_i$ and from one of the auxiliary nodes, we say that $p_i$ is false in \textit{SN'} if the illusion was eliminated from $A_{p_i}$, and true if it has been eliminated from $A_{\neg p_i}$. Furthermore, observe that in a network which satisfies the budget and the requirement, at least one auxiliary node is not under illusion in every variable gadget. Moreover, observe that by construction, in every network satisfying the budget and the requirement, exactly one edge to a blue node is added to each literal node. This entails that in every network satisfying the budget and the requirement, each verifier node $v_{C_i}$ is adjacent to a literal node corresponding to some true literal in $C_i$. \begin{lemma}\label{lemma:RemovalWorks} For every formula $\varphi$ in 2P2N form, there is a network $E'_{\varphi}=(N,E',f)$ which satisfies the requirement and the budget if and only if $\varphi$ is satisfiable. \end{lemma} \begin{proof} Take a formula $\varphi$ in 2P2N form. First, suppose it is not satisfiable. Let us further suppose, towards contradiction, that there exists a network $E'_{\varphi}$ satisfying the budget and the requirement. As $\varphi$ is not satisfiable, for every valuation $V$ over $P_{\varphi}$ there is a clause $C_i$ such that no literal in $C_i$ is true in $V$.
Further, let $V$ be a valuation over $P_{\varphi}$ in which a literal $L$ is true if and only if it is true in $E'_{\varphi}$. But this means that, by the previous observations, there needs to exist a verifier node $v_{C_i}$ which is not adjacent to a literal node corresponding to some true literal in $C_i$. So, $E'_{\varphi}$ does not satisfy the budget and the requirement, which contradicts the assumption. Suppose now that $\varphi$ is satisfiable. Let us construct a network $E'_{\varphi}$ satisfying the requirement and the budget. As $\varphi$ is satisfiable, there exists a valuation $V$ over $P_{\varphi}$ such that, for every clause $C_i$, there is a literal $L$ in $C_i$ which is true in $V$. Now add edges between all literal nodes corresponding to literals false in $V$ and auxiliary nodes. Then, for every literal $L$ true in $V$, construct an edge between node $L^1$ and an extra node. Finally, for every clause $C_i$, add an edge between $v_{C_i}$ and exactly one literal node corresponding to a literal in $C_i$ which is true in $V$. Note that this is always possible since $V$ is a model of $\varphi$. Notice further that $\varphi$ is in 2P2N form. Therefore, as $L$ occurs twice in $\varphi$, we can ensure that at most one edge is added between a node corresponding to $L$ and a verifier node. Finally, for every literal node $L^i$ still under illusion, add an edge between $L^i$ and any blue node in the encoding of $\varphi$. But then, in the constructed network, only $|P_{\varphi}|$ nodes are under illusion, and edges have been added between $6 |P_{\varphi}|$ pairs of nodes. Thus, $E'_{\varphi}$ satisfies the requirement and the budget. \end{proof} The above observations are sufficient to prove the main result of this section. To show that \textsc{$q$-Illusion Elimination} is NP-complete for a given rational $q \in (0,1)$, we construct a labelled network $E_{\varphi}^q$ for every formula $\varphi$ in 2P2N form.
First, we construct $E_{\varphi}$. Let $I_{\varphi}$ denote the number of nodes under illusion in $E_{\varphi}$. Then, if $\frac{I_{\varphi} - r_{\varphi}}{|N|} < q $, we add a $ h_{|N|- r_{\varphi}, b_{\varphi}, q}^{\#}$-pump-up gadget. Otherwise, we add a $ h_{|N| - r_{\varphi}, b_{\varphi}, q}^{+}$-pump-down gadget. It follows from Lemma \ref{lemma:RemovalWorks}, as well as Lemmata \ref{lemma:DownExist1} and \ref{lemma:UpExists1}, that in both cases the $q$-majority illusion can be eliminated from $E_{\varphi}^q$ if and only if $\varphi$ is satisfiable. Notice further that \textsc{$q$-Illusion Elimination} is in NP. Thus, NP-completeness of the problem follows. \begin{theorem}\label{theorem:removal} \textsc{$q$-Illusion Elimination} is NP-complete for every rational $q \in (0,1)$. \end{theorem} Furthermore, NP-completeness of \textsc{Addition $q$-Illusion Elimination} and \textsc{Removal $q$-Illusion Elimination} can be shown with a reduction similar to the one in the proof of Theorem \ref{theorem:removal}, which can be found in the appendix. \begin{theorem}\label{theorem:ExtraRemoval} \textsc{Addition $q$-Illusion Elimination} and \textsc{Removal $q$-Illusion Elimination} are NP-complete for every rational $q \in (0,1)$. \end{theorem} \section{Conclusions}\label{sec:conclusions} We have provided non-trivial constructions showing the algorithmic hardness of checking if it is possible to find a colouring of a social network in which a specified fraction of agents is under illusion, and of checking if the number of agents under illusion can be reduced to a desired level by modifying the connections between them. Our research opens a number of directions for further investigations. Let us mention a few particularly interesting ones. \begin{itemize} \item Establishing the complexity of checking if a network admits a $q$-majority illusion for fractions smaller than or equal to $\frac{1}{2}$ remains open (cf. Theorem~\ref{thm:verifyillusion}).
\item There are social networks that do not admit a majority illusion but do admit a ``plurality illusion'', i.e., a misperception of which option is the most popular, if three or more options were to be used (see the Appendix for such an example). This is particularly relevant for voting contexts such as elections with multiple candidates. \item Real-world social networks often show a high level of clustering, i.e., agents with many connections in common also tend to be connected themselves (see, e.g., \cite{fox2020finding}). It is of interest to study how the ``level'' of clustering can impact the existence of illusions and our complexity results. For example, a network in which any two nodes with a common connection are themselves connected does not admit a 1-majority illusion. \item Similarly, while our results show the hardness of the problems studied in the general case, identifying well-known graph parameterisations (e.g., treewidth of the network) under which they are fixed-parameter tractable is a natural direction of research that is motivated by our results. Note that trees do not admit a 1-majority illusion. \end{itemize} \newpage \bibliographystyle{named}
\section{Introduction} \par {\color{black} Throughout this paper, let $n$ (resp., $S^n$) be a positive integer (resp., the unit sphere in $\mathbb{R}^{n+1}$). For any point $P\in S^n$, let $H({\color{black}P})$ be the closed hemisphere centered at $P$, namely, $H(P)$ is the set consisting of $Q\in S^n$ satisfying $P\cdot Q\ge 0$, where the dot in the center stands for the scalar product of two vectors $P, Q\in \mathbb{R}^{n+1}$. For any non-empty subset $W\subset S^n$, the {\it spherical polar set of $W$}, denoted by $W^\circ$, is defined as follows: \[ W^\circ = \bigcap_{P\in W}H(P). \] \par In \cite{nishimurasakemi2}, the spherical polar set plays an essential role in investigating a Wulff shape, which is the geometric model of a crystal at equilibrium introduced by G.~Wulff in \cite{wulff}. \par Let $\mathcal{H}(S^n)$ be the set consisting of non-empty closed subsets of $S^n$. It is well-known that $\mathcal{H}(S^n)$ is a complete metric space with respect to the Pompeiu-Hausdorff metric (for instance, see \cite{barnsley, falconer}). Let $\mathcal{H}^\circ (S^n)$ be the {\color{black}subspace} of $\mathcal{H}(S^n)$ consisting of non-empty closed subsets $W$ of $S^n$ such that $W^\circ\ne \emptyset$. The {\it spherical polar transform} $\bigcirc: \mathcal{H}^\circ(S^n)\to \mathcal{H}^\circ(S^n)$ is defined by $\bigcirc(W)=W^\circ$. Since $W\subset W^{\circ\circ}$ for any $W\in \mathcal{H}^\circ(S^n)$ by Lemma 2.2 of \cite{nishimurasakemi2}, it follows that $W^\circ\in \mathcal{H}^\circ(S^n)$ for any $W\in \mathcal{H}^\circ(S^n)$. Thus, the spherical polar transform $\bigcirc$ is well-defined \footnote{\color{black} Since $(S^n)^\circ =\emptyset$ for any $n\in \mathbb{N}$, the spherical polar transform defined in \cite{aperture} should be understood as $\bigcirc: \mathcal{H}^\circ (S^n)\to \mathcal{H}^\circ(S^n)$.}. \par In \cite{aperture}, crystal growth is investigated by introducing a geometric model of a certain growing crystal in $\mathbb{R}^2$.
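As a purely illustrative aside (not part of the original text), the defining condition $P\cdot Q\ge 0$ makes spherical polar sets easy to probe numerically for finite samples. The following sketch (assuming NumPy; all names are ours) approximates $W^\circ$ for a sampled $W\subset S^2$ and checks the inclusion $W\subset W^{\circ\circ}$ of Lemma 2.2 of \cite{nishimurasakemi2} on the sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def in_polar(Q, sample):
    # Q lies in the spherical polar set of the (finite) sample
    # iff P . Q >= 0 for every P in the sample.
    return all(P @ Q >= 0.0 for P in sample)

# A finite sample W of points on S^2 clustered around the north pole.
W = [v / np.linalg.norm(v)
     for v in rng.normal([0.0, 0.0, 3.0], 0.5, size=(20, 3))]

# Approximate the polar set of W by filtering uniformly sampled points of S^2.
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
W_polar = [Q for Q in pts if in_polar(Q, W)]

# Lemma 2.2 of [nishimurasakemi2]: W is contained in its double polar set,
# i.e. every P in W satisfies P . Q >= 0 for every Q in the polar set.
assert all(all(P @ Q >= 0.0 for Q in W_polar) for P in W)
```

The check passes for any sample: if $Q\in W^\circ$ then $P\cdot Q\ge 0$ for every $P\in W$ by definition, which is exactly the inclusion $W\subset W^{\circ\circ}$ restricted to the sampled points.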
One of the powerful tools in \cite{aperture} is the {spherical polar transform} $\bigcirc: \mathcal{H}^{\circ}(S^{2})\to \mathcal{H}^{\circ}(S^{2})$. In particular, for studying the dissolving process of the geometric model introduced in \cite{aperture}, the spherical polar transform {\color{black}is} indispensable since it enables one to analyze in detail the image of a dissolving one-parameter family of spherical Wulff shapes. Notice that it is impossible to give a similar analysis in the Euclidean space $\mathbb{R}^n$ since the corresponding image in $\mathbb{R}^n$ is divergent. We consider that the spherical polar transform may be especially applicable to investigating the dissolving process of Wulff shapes and that it is significant to obtain useful properties of the spherical polar transform. \par In this paper, motivated by these considerations, we investigate natural restrictions of the spherical polar transform. {\color{black} The most natural subspace for the restriction of the spherical polar transform is \[\mathcal{H}_{\rm Wulff}(S^n, P) \] defined as follows. \begin{definition}\label{definition closure} {\rm \begin{enumerate} \item\quad Let $W$ be a subset of $S^n$. Suppose that there exists a point $P\in S^n$ such that $W\cap H(P)=\emptyset$. Then, $W$ is said to be {\it hemispherical}. \item\quad Let $W\subset S^{n}$ be a hemispherical subset. Let $P, Q$ be two points of $W$. Then, the following arc is denoted by $PQ$: \[ PQ=\left\{\left.\frac{(1-t)P+tQ}{||(1-t)P+tQ||}\in S^{n}\; \right|\; 0\le t\le 1\right\}. \] \item\quad Let $W\subset S^{n}$ be a hemispherical subset. Suppose that $PQ\subset W$ for any $P, Q\in W$. Then, $W$ is said to be {\it spherical convex}. {\color{black} \item\quad Let $W\subset S^{n}$ be a hemispherical subset. Suppose that $W$ is closed, spherical convex and has an interior point. Then, $W$ is said to be a {\it spherical convex body}.
} \item \quad For any point $P$ of $S^n$, let $\mathcal{H}_{\rm Wulff}(S^n, P)$ be the following set: \begin{eqnarray*} \mathcal{H}_{\rm Wulff}(S^n, P) & = & \left\{ W\in \mathcal{H}(S^n)\; \right| \; W\cap H(-P)=\emptyset, P\in \mbox{int}(W), \\ { } & { } & \qquad\qquad \left. W\mbox{ is a spherical convex body} \right\}, \end{eqnarray*} where $\mbox{int}(W)$ stands for the set consisting of interior points of $W$. The topological closure of $\mathcal{H}_{\rm Wulff}(S^n, P)$ is denoted by $\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}$. \item\quad For any $P\in S^n$, an element of $\mathcal{H}_{\rm Wulff}(S^n, P)$ is called a {\it spherical Wulff shape}. \end{enumerate} } \end{definition} \noindent It is known that a Wulff shape in $\mathbb{R}^n$ can be characterized as a convex body of $\mathbb{R}^n$ such that the origin is an interior point of it, namely, as a compact and convex subset of $\mathbb{R}^n$ such that the origin is an interior point of it (\cite{taylor}). Hence, the definition of spherical Wulff shape is reasonable. The restriction of $\bigcirc$ to $\mathcal{H}_{\rm Wulff}(S^{n}, P)$ (resp., $\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$) is called the {\it spherical dual transform relative to $P$} (resp., {\it extension of the spherical dual transform relative to $P$}) and is denoted by $\bigcirc_{{\rm Wulff},P}$ (resp., $\overline{\bigcirc_{{\rm Wulff},P}}$). The set $\bigcirc(W)=W^\circ$ is called the {\it spherical dual Wulff shape} of $W$ if $W$ is a spherical Wulff shape. Thus, it is reasonable to call $\bigcirc_{{\rm Wulff},P}$ the spherical dual transform. It is not difficult to have the following (cf. {\color{black}Proposition \ref{lemma 1.1}} in Subsection \ref{subsection 5.1}). 
{\color{black} \begin{eqnarray*} \bigcirc_{{\rm Wulff},P}: \mathcal{H}_{\rm Wulff}(S^{n}, P) & \to & \mathcal{H}_{\rm Wulff}(S^{n}, P) \mbox{ is well-defined and bijective}, \\ \overline{\bigcirc_{{\rm Wulff},P}}: \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)} & \to & \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)} \mbox{ is well-defined and bijective}. \end{eqnarray*} } \par The main purpose of this paper is to show the following: \begin{theorem}\label{theorem 1} Let $P$ be a point of $S^n$. {\color{black} Then, with respect to the Pompeiu-Hausdorff metric, the following two hold: \begin{enumerate} \item {\color{black}The spherical dual transform relative to $P$} \[ \bigcirc_{{\rm Wulff},P} : {\color{black}{\mathcal{H}_{\rm Wulff}(S^{n}, P)}}\rightarrow {\color{black}{\mathcal{H}_{\rm Wulff}(S^{n}, P)}} \] is an isometry. \item The extension of spherical dual transform relative to $P$ \[ \overline{\bigcirc_{{\rm Wulff},P}} : {\color{black}\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}}\rightarrow {\color{black}\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}} \] is an isometry. \end{enumerate} } \end{theorem} \noindent For any positive real number $r$, let $D_r$ be the set consisting of $x\in \mathbb{R}^n$ satisfying $||x||\le r$. Then, $D_r$ is a Wulff shape for any $r\in \mathbb{R}$ $(r>0)$ and it is well-known that the dual Wulff shape of $D_r$ is $D_{\frac{1}{r}}$. Moreover, it is easily seen that $h(D_{r_1}, D_{r_2})=|r_1-r_2|$ holds for any $r_1, r_2\in \mathbb{R}$ ($r_1, r_2>0$), {\color{black}where $h$ is the Pompeiu-Hausdorff metric}. Thus, it is impossible to expect the Euclidean counterpart of the assertion (1) of Theorem \ref{theorem 1}. This shows an advantage of studying the spherical version of Wulff shapes. Moreover, the Euclidean counterpart of the extension of spherical dual transform relative to $P$ is not well-defined. This, too, shows an advantage of studying the spherical version of Wulff shapes. 
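As a numerical aside (not in the original paper), the isometry asserted in Theorem \ref{theorem 1} can be observed on a simple family: the spherical polar set of the closed cap of angular radius $r$ centered at $P$ is the cap of angular radius $\pi/2-r$ centered at $P$, so on this family the transform acts as $r\mapsto \pi/2-r$ and preserves the Pompeiu-Hausdorff distance $|r_1-r_2|$ between caps, in contrast with the Euclidean family $D_r\mapsto D_{1/r}$ above. The following sketch (assuming NumPy; all names are ours) estimates the polar radii from sampled caps.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_cap(r, n=4000):
    # Rough sample of the closed cap of angular radius r centered
    # at the north pole, obtained by filtering uniform points of S^2.
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    return pts[np.arccos(np.clip(pts[:, 2], -1.0, 1.0)) <= r]

def polar_radius(cap_pts, n=4000):
    # Estimate the angular radius of the polar set of the sampled cap:
    # Q is in the polar set iff P . Q >= 0 for every sampled cap point P.
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    polar = pts[(pts @ cap_pts.T).min(axis=1) >= 0.0]
    return np.arccos(np.clip(polar[:, 2], -1.0, 1.0)).max()

r1, r2 = 0.5, 0.8
rho1 = polar_radius(sample_cap(r1))
rho2 = polar_radius(sample_cap(r2))
# The polar of the cap of radius r is the cap of radius pi/2 - r, so the
# estimates should be close to pi/2 - r1 and pi/2 - r2; the two polar
# radii then differ by |r1 - r2|, the distance between the original caps.
print(rho1, np.pi / 2 - r1)   # approximately equal
print(rho2, np.pi / 2 - r2)   # approximately equal
```

The estimates carry a small sampling error, but the gap between the two polar radii matches $|r_1-r_2|$ up to that error, in line with assertion (1) of Theorem \ref{theorem 1}.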
} \par \medskip Next, we investigate the restriction of $\bigcirc$ to \[ \overline{\mathcal{H}_{\mbox{s-conv}}(S^n)}, \] which is the topological closure of the set consisting of spherical convex closed subsets. The restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\mbox{s-conv}}(S^n)}$ is denoted by $\bigcirc _{\mbox{\rm s-conv}}$. {\color{black} It is not hard to see the following (cf. Proposition \ref{lemma 1.1} in Subsection \ref{subsection 5.1}). \[ \bigcirc _{\mbox{\rm s-conv}}: \overline{\mathcal{H}_{\mbox{s-conv}}(S^n)} \to \overline{\mathcal{H}_{\mbox{s-conv}}(S^n)} \mbox{ is well-defined and bijective}. \] \begin{theorem}\label{theorem 2} With respect to the Pompeiu-Hausdorff metric, the restriction of the spherical polar transform \[ \bigcirc _{\mbox{\rm s-conv}} : \overline{\mathcal{H}_{\mbox{{\rm s-conv}}}(S^{n})} \rightarrow \overline{\mathcal{H}_{\mbox{{\rm s-conv}}}(S^{n})} \] is bi-Lipschitz but never an isometry. \end{theorem} \bigskip This paper is organized as follows. In Section \ref{section 2}, preliminaries for the proofs of Theorems \ref{theorem 1} and \ref{theorem 2} are given. Theorems \ref{theorem 1} and \ref{theorem 2} are proved in Sections \ref{section 3} and \ref{section 4} respectively. Section \ref{section 5} is an appendix where, for the sake of readers' convenience, it is proved that all of $\bigcirc_{{\rm Wulff},P}$, $\overline{\bigcirc_{{\rm Wulff},P}}$ and $\bigcirc _{\mbox{\rm s-conv}}$ are well-defined bijective mappings; and moreover it is explained why {\color{black} the restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\mbox{{\rm s-conv}}}(S^{n})}$ is important.} \section{Preliminaries}\label{section 2} \subsection{Convex geometry in $S^n$}\label{convex in sphere} \begin{definition}[\cite{nishimurasakemi2}]\label{definition 2.2} {\rm Let $W$ be a hemispherical subset of $S^{\color{black}n}$. 
Then, the following set, denoted by $\mbox{\rm s-conv}({\color{black}W})$, is called the {\it spherical convex hull of} ${\color{black}W}$. \[ \mbox{\rm s-conv}({\color{black}W})= \left\{\left. \frac{\sum_{i=1}^k t_iP_i}{||\sum_{i=1}^kt_iP_i||}\;\right|\; P_i\in {\color{black}W},\; \sum_{i=1}^kt_i=1,\; t_i\ge 0, k\in \mathbb{N} \right\}. \] } \end{definition} {\color{black} \begin{lemma}[\cite{nishimurasakemi2}]\label{lemma 3} Let $W_{1}, W_{2}$ be non-empty subsets of $S^{n}$. Suppose that the inclusion ${W_{1}\subset W_{2}}$ holds. Then, the inclusion $W_{2}^{\circ}\subset W_{1}^{\circ}$ holds. \end{lemma}} \begin{lemma}[\cite{nishimurasakemi2}]\label{lemma 4} For any non-empty closed hemispherical subset $X \subset S^{n}$, {\color{black} the equality $\mbox{ \rm s-conv}(X)= \left( \mbox{ \rm s-conv}\left( X \right) \right)^{\circ\circ }$ holds. } \end{lemma} The following proposition may be regarded as a spherical version of the separation theorem, which may be easily obtained by the separation theorem in Euclidean space (for the separation theorem in Euclidean space, see for instance \cite{matousek}). \begin{proposition}\label{proposition matousek} Let $P$ be a point of $S^n$ and let $W_1, W_2$ be {\color{black} closed} spherical convex sets such that $W_i\cap H(P)=\emptyset$ $(i=1,2)$. Suppose that $W_1\cap W_2=\emptyset$. Then, there exists a point $Q\in S^n$ satisfying the following: \[ W_1\subset H(Q)\quad \mbox{and}\quad W_2\cap H(Q)=\emptyset. \] \end{proposition} \subsection{Metric geometry in $S^n$}\label{metric in sphere} \quad { } \par {\color{black} For any $P, Q\in S^{n}$, the length of the arc $ PQ$ is denoted by $|PQ|$. } \begin{lemma}\label{lemma 1} For any $P, Q\in S^{n}$ such that $|PQ|\leq \frac{\pi }{2}$, the following equality holds: \[ h(H(P),H(Q))= |PQ|. 
\] \end{lemma} \begin{proof}\quad As illustrated in Figure \ref{figure 1}, we have $h(H(P),H(Q))+r=|PQ|+r=\frac{\pi }{2}$, where $r$ is the angle indicated in the figure; hence $h(H(P),H(Q))=|PQ|$. \end{proof} \begin{figure}[hbtp] \begin{center} \includegraphics[width=4cm]{figure1.eps} \caption{$h(H(P),H(Q))= |PQ|$.} \label{figure 1} \end{center} \end{figure} \begin{definition}\label{definition 2.3} {\rm \begin{enumerate} \item For any point $P\in S^n$ and any real number $r$ satisfying $0<r<\pi$, define the following two sets: \begin{eqnarray*} \overline{B(P, r)} & = & \{Q\in S^{n}\; |\; |PQ|\leq r\}, \\ \partial \overline{B(P,r)} & = & {\{Q\in S^{n}\mid |PQ|=r\}}. \end{eqnarray*} {\color{black} \item For any non-empty subset $W\subset S^n$ and any real number $r$ satisfying $0<r<\pi$, define the following set: \[ \overline{B(W, r)} = \bigcup_{P\in W}\overline{B(P, r)}. \] } \end{enumerate} } \end{definition} \begin{lemma}\label{lemma 2} For any subset $W\subset S^{n}$ such that $W^{\circ}$ is a spherical convex set and any real number $r$ satisfying $0<r<\frac{\pi}{2}$, the following equality holds: \[ \overline{B\left(\displaystyle\bigcap_{P\in W}H(P),r\right)}=\displaystyle\bigcap_{P\in W}\overline{B(H(P), r)}. \] \end{lemma} \par \begin{proof} \indent ''$\subset$ '' \indent Let $Q$ be a point of $\overline{B\left(\bigcap_{P\in W}H(P),r\right)}$. Then, it follows that $\overline{B(Q, r)}\cap \bigcap_{P\in W}H(P)\neq \emptyset$. Thus, there exists a point $ Q_{1}\in \overline{B(Q, r)}$ such that $Q_{1}$ belongs to $\bigcap_{P\in W}H(P)$. Therefore, there exists a point $Q_{1}\in \overline{B(Q, r)}$ such that $Q_{1}\in H(P) $ for any $P\in W$. It follows that $Q\in \bigcap_{P\in W}\overline{B(H(P), r)}.$\\ {\color{black} ''$\supset$'' \indent Suppose that there exists a point $Q \in \bigcap_{P\in W}\overline{B(H(P), r)}$ such that $Q\notin \overline{B(\bigcap_{P\in W}H(P), r)}$.
Since $Q$ belongs to $\bigcap_{P\in W} \overline{B(H(P), r)}$, we have that $Q\in \overline{B(H(P), r)}$ for any $P\in W$. On the other hand, since $Q$ does not belong to $\overline{B(\bigcap_{P\in W}H(P), r)}=\overline{B(W^{\circ}, r)}$, it follows that $\overline{B(Q, r)}\cap W^{\circ}= \emptyset$. Since $W^{\circ}$ and $\overline{B(Q, r)}$ are closed spherical convex sets, by Proposition \ref{proposition matousek}, there exists a point $P\in S^{n}$ such that $W^{\circ}\subset H(P)$ and $\overline{B(Q, r)}\cap H(P)=\emptyset$. By Lemmas \ref{lemma 3} and \ref{lemma 4}, it follows that $P\in W^{\circ \circ}=W$. Therefore, there exists a point $P\in W$ such that $Q\notin \overline{B(H(P), r)}.$ This contradicts the assumption $Q \in \bigcap_{P\in W}\overline{B(H(P), r)}$. } \end{proof} \subsection{Pompeiu-Hausdorff metric}\label{pompeiu-hausdorff} \par \begin{definition}[\cite{barnsley}]\label{Pompeiu-Hausdorff} {\rm Let $(X, d)$ be a complete metric space. \begin{enumerate} \item Let $x$ {\color{black}(resp., $B$)} be a point of $X$ {\color{black}(resp., a non-empty compact subset of $X$)}. Define \[ d(x,B)=\min\{d(x,y)\; |\; y\in B\}. \] Then, $d(x,B)$ is called the {\it distance from the point $x$ to the set $B$}. \item Let $A, B$ be two non-empty compact subsets of $X$. Define \[ d(A,B)=\max \{d(x, B)\; |\; x\in A\}. \] Then, $d(A,B)$ is called the {\it distance from the set $A$ to the set $B$}. \item Let $A, B$ be two non-empty compact subsets of $X$. Define \[ h(A,B)=\max \{d(A, B), d(B, A)\}. \] Then, $h(A,B)$ is called the {\it Pompeiu-Hausdorff distance between $A$ and $B$}. \end{enumerate} } \end{definition} \par Let $A, B$ be two non-empty compact subsets of a complete metric space $(X,d)$.
The Pompeiu-Hausdorff distance between $A$ and $B$ naturally induces the {\it Pompeiu-Hausdorff metric} $h: \mathcal{H}(X)\times \mathcal{H}(X)\to \mathbb{R}_+\cup \{0\}$, where $\mathcal{H}(X)$ is the set consisting of non-empty compact sets of $X$ and $\mathbb{R}_+$ is the set of positive real numbers. The set $\mathcal{H}(X)$ is a metric space with respect to the Pompeiu-Hausdorff metric. It is well-known that the metric space $(\mathcal{H}(X), h)$ is complete. For details on $(\mathcal{H}(X), h)$, see for example \cite{barnsley, falconer}. \subsection{Lipschitz mappings} {\color{black} \begin{definition}\label{Lipschitz} {\rm Let $(X, d_X), (Y,d_Y) $ be metric spaces. A mapping $f: X\to Y$ is said to be {\it Lipschitz} if there exists a positive real number $K\in \mathbb{R}$ such that the following holds for any $x_1, x_2\in X$: \[ d_Y(f(x_1), f(x_2))\le K d_X(x_1, x_2). \] The positive real number $K\in \mathbb{R}$ for a Lipschitz mapping is called the {\it Lipschitz coefficient} of $f$. } \end{definition} } {\color{black} \begin{definition}\label{bi-Lipschitz isometry} {\rm Let $(X, d_X), (Y, d_{\color{black}Y})$ be metric spaces. \begin{enumerate} \item A mapping $f: X\to Y$ is said to be {\it bi-Lipschitz} if $f$ is bijective and there exist positive real numbers $K, L\in \mathbb{R}$ such that the following hold for any $x_1, x_2\in X$ and any $y_1, y_2\in Y$: \begin{eqnarray*} d_Y(f(x_1), f(x_2)) & \le & K d_X(x_1, x_2), \\ d_X(f^{-1}(y_1), f^{-1}(y_2)) & \le & L d_Y(y_1, y_2), \end{eqnarray*} \item A mapping $f: X\to Y$ is {\color{black}called} an {\it isometry} if $f$ is bijective and the following holds for any $x_1, x_2\in X$: \[ d_Y(f(x_1), f(x_2)) = d_X(x_1, x_2). \] \end{enumerate} } \end{definition} } \begin{proposition}\label{proposition 1} For any $n\in \mathbb{N}$, the spherical polar transform $\bigcirc: \mathcal{H}^\circ (S^n)\to \mathcal{H}^\circ(S^n)$ is Lipschitz with respect to the Pompeiu-Hausdorff metric. 
\end{proposition} \begin{proof} We first show Proposition \ref{proposition 1} under the additional assumption that the spherical polar sets of the closed sets involved are spherical convex. Suppose that $\bigcirc$ is not Lipschitz. Then, for any $K>0$ there exist $W_{1}, W_{2}\in \mathcal{H}^{\circ}(S^{n})$ such that $Kh(W_1,W_2)<h(W_{1}^{\circ },W_{2}^{\circ })$. In particular, for $K=2$ there exist $W_{1}, W_{2}\in \mathcal{H}^{\circ}(S^{n})$ such that $2h(W_1,W_2)<h(W_{1}^{\circ },W_{2}^{\circ }).$ Since $h(X, Y)\leq \pi$ for any $X, Y\in \mathcal{H}^{\circ}(S^{n})$, it follows that $h(W_1,W_2)<\frac{\pi }{2}$. Set $r=h(W_1,W_2).$ Then, since $2r=2h(W_1,W_2)<h(W_{1}^{\circ },W_{2}^{\circ }),$ by Definition \ref{Pompeiu-Hausdorff}, it follows that at least one of $d(W_{1}^{\circ },W_{2}^{\circ })>2r$ and $d(W_{2}^{\circ },W_{1}^{\circ })>2r$ holds. Therefore, at least one of the following two holds.\vspace{2mm}\\ (1) There exists a point $P\in W_{1}^{\circ }$ such that $d(P,Q)>2r$ for any $Q\in W_{2}^{\circ }.$\vspace{2mm}\\ (2) There exists a point $Q\in W_{2}^{\circ }$ such that $d(Q,P)>2r$ for any $P\in W_{1}^{\circ }.$\vspace{2mm}\\ We show that (1) implies a contradiction. Suppose that there exists a point $\widetilde{P}\in W_{1}^{\circ}$ such that $\widetilde{P}\notin \overline{B(W_{2}^{\circ}, 2r)}$. In particular, $\widetilde{P}$ does not belong to $\overline{B(W_{2}^{\circ}, r)}$. {\color{black} Notice that, by assumption, $W_{2}^{\circ}$ is a spherical convex set, and that $r$ is less than $\frac{\pi}{2}$. Thus, by Lemma \ref{lemma 2},} we have the following: \[ \widetilde{P} \notin \overline{B(W_{2}^{\circ}, r)} =\overline{B\left(\bigcap_{Q\in W_{2}}H(Q), r\right)} =\bigcap_{Q\in W_{2}}\overline{B(H(Q), r)}. \] Hence, there exists a point $Q \in W_{2}$ such that $\widetilde{P}\notin \overline{B(H(Q), r)}$.\\ \indent On the other hand, since $h(W_{1}, W_{2})=r$, there exists a point $P_{Q}\in W_{1}$ such that $d(P_{Q}, Q)\leq r$.
Thus, by Lemma \ref{lemma 1}, it follows that $\widetilde{P}\in H(P_{Q})\subset \overline{B(H(Q), r)}.$ Therefore, we have a contradiction.\\ \indent In the same way, we can show that (2) implies a contradiction. \\ \indent Next, we show that for any closed $W, \widetilde{W}\in \mathcal{H}^{\circ}(S^{n})$ such that at least one of $W^{\circ}, \widetilde{W}^{\circ}$ is not spherical convex, the inequality $h(W^{\circ}, \widetilde{W}^{\circ})\leq 2h(W, \widetilde{W})$ holds. Since $W, \widetilde{W}\in \mathcal{H}^{\circ}(S^{n})$, there exist $P, \widetilde{P}\in S^{n}$ such that $W\subset H(P), \widetilde{W}\subset H(\widetilde{P})$. Set $W_{i} = \overline{B(W, \frac{1}{i})} \cap \overline{B(H(P), \frac{\pi}{2}-\frac{1}{i})},\ \widetilde{W}_{i} = \overline{B(\widetilde{W}, \frac{1}{i})} \cap \overline{B(H(\widetilde{P}), \frac{\pi}{2}-\frac{1}{i})}$ for any $i\in \mathbb{N}$. Since both $W_{i}^{\circ}, \widetilde{W}_{i}^{\circ}$ are spherical convex, by the proof given above, we have that $h(W_{i}^{\circ}, \widetilde{W}_{i}^{\circ})\leq 2h(W_{i}, \widetilde{W}_{i})$ for any $i\in \mathbb{N}$. Notice that $W=\lim_{i\to \infty}W_{i},\ \widetilde{W}=\lim_{i\to \infty}\widetilde{W}_{i}$. Therefore, for any $i\in \mathbb{N}$, it follows that \begin{eqnarray*} h(W^{\circ}, \widetilde{W}^{\circ}) &\leq & h(W^{\circ}, W_{i}^{\circ})+h(W_{i}^{\circ}, \widetilde{W}_{i}^{\circ})+h(\widetilde{W}_{i}^{\circ}, \widetilde{W}^{\circ})\\ &\leq & h(W^{\circ}, W_{i}^{\circ})+2h(W_{i}, \widetilde{W}_{i}) + h(\widetilde{W}_{i}^{\circ}, \widetilde{W}^{\circ}). \end{eqnarray*} In \cite{aperture}, it has been shown that $\bigcirc: \mathcal{H}^\circ(S^2)\to \mathcal{H}^\circ(S^2)$ is continuous. It is easily seen that the proof of this result given in \cite{aperture} works well for general $n\in \mathbb{N}$.
Thus, we have that $\lim_{i\to \infty}h(W^{\circ}, W_{i}^{\circ})=0$, $\lim_{i\to \infty}h(\widetilde{W}^{\circ}, \widetilde{W}_{i}^{\circ})=0$. Therefore, we have the following: \[ h(W^{\circ}, \widetilde{W}^{\circ}) \leq 2\lim_{i\to \infty}h(W_{i}, \widetilde{W}_{i})=2h(W, \widetilde{W}). \] \noindent Thus, the spherical polar transform $\bigcirc: \mathcal{H}^\circ (S^n)\to \mathcal{H}^\circ(S^n)$ is Lipschitz. \end{proof} \par \begin{claim} \label{claim 1} The following example shows that the constant $2$ is the least possible Lipschitz coefficient of $\bigcirc$. \end{claim} \par \noindent {\bf Example:}\ \ For any real number $r\ (1<r<2)$, there exist a real number $r_{1}$ and two points $P_{1}, P_{2}\in S^{n}$ such that $r\frac{\pi}{2}<r_{1}<\pi$ and $d(P_{1}, P_{2})=r_{1}$. Since $H(P_{i})\subset S^{n}= \overline{B\left(H(P_{j}), \frac{\pi}{2}\right)},\ \{i, j\}=\{1, 2\}$, we have that $h(H(P_{1}), H(P_{2})) \leq \frac{\pi}{2}$. Set $W_{1}=H(P_{1}), W_{2}=H(P_{2})$. Then, we have the following: \[ rh(W_{1}, W_{2}) \leq r\frac{\pi}{2} < r_{1}=d( P_{1}, P_{2}) = h(\{P_{1}\}, \{P_{2}\})=h(W_{1}^{\circ}, W_{2}^{\circ}). \] It follows that (see Figure \ref{figure 2}): \[ rh(W_{1}, W_{2})<h(W_{1}^{\circ}, W_{2}^{\circ}). \] \begin{figure}[hbtp] \begin{center} \includegraphics[width=4cm]{figure2.eps} \caption{$r\ h(W_{1}, W_{2}) <h({\color{black}\{{\color{black}P_1}\}, \{{\color{black}P_2}\}})\ (1<r<2)$.} \label{figure 2} \end{center} \end{figure} \par By Proposition \ref{proposition 1}, we can extend Lemma \ref{lemma 4} as follows. \begin{lemma}\label{lemma 5} For any convergent sequence $X_{i}\in \mathcal{H}_{\mbox{\rm s-conv}}(S^{n})$ $(i= 1, 2, 3, \dots)$ with $X=\lim_{i\to \infty} X_{i}$, the following equality holds: \[ X=X^{\circ\circ}. \]
\end{lemma} \begin{proof} By Proposition \ref{proposition 1}, it follows that the composition $\bigcirc \circ \bigcirc$ is Lipschitz. Thus, if $X=\lim_{i\to \infty} X_i$ then $X^{\circ \circ}=\lim_{i\to \infty} X_{i}^{\circ \circ}$. By Lemma \ref{lemma 4}, $X=\lim_{i\to \infty} {X}_{i}=\lim_{i\to \infty} {X}_{i}^{\circ \circ}=X^{\circ \circ}$. \end{proof} \section{Proof of Theorem \ref{theorem 1}}\label{section 3} \indent We first show that for any $W_{1}, W_{2}\in \mathcal{H}_{{\rm Wulff}}(S^{n}, P)$, the following holds: \[ h(W_{1}^{\circ}, W_{2}^{\circ})\leq h(W_{1}, W_{2}). \] \par \par Suppose that there exist $W_{1}, W_{2}\in \mathcal{H}_{{\rm Wulff}}(S^{n}, P)$ such that $h(W_{1}, W_{2})< h(W_{1}^{\circ}, W_{2}^{\circ})$. Since $h(X, Y)< \frac{\pi}{2}$ for any $X, Y\in \mathcal{H}_{{\rm Wulff}}(S^{n}, P)$, it follows that $h(W_{1}, W_{2})< \frac{\pi}{2}$. Set $r=h(W_{1}, W_{2})$. Then, since $r=h(W_{1}, W_{2})< h(W_{1}^{\circ}, W_{2}^{\circ})$, it follows that at least one of $d(W_{1}^{\circ },W_{2}^{\circ })>r$ and $d(W_{2}^{\circ },W_{1}^{\circ })>r$ holds, where $d(A, B)$ is the distance from $A$ to $B$ defined in Section 2. Therefore, at least one of the following two holds.\\ (a) There exists a point $Q_{1}\in W_{1}^{\circ }$ such that $d(Q_{1}, R_{2})>r$ for any $R_{2}\in W_{2}^{\circ }.$\vspace{2mm}\\ (b) There exists a point $Q_{2}\in W_{2}^{\circ }$ such that $d(Q_{2}, R_{1})>r$ for any $R_{1}\in W_{1}^{\circ }.$\vspace{2mm}\\ \indent Suppose that (a) holds. This means that there exists a point $Q_{1}\in W_{1}^{\circ}$ such that $Q_{1}\notin \overline{B(W_{2}^{\circ}, r)}$. Then, by Lemma \ref{lemma 2}, we {\color{black} have the following:} \[ Q_{1}\notin \overline{B(W_{2}^{\circ}, r)}=\overline{B\left(\bigcap_{\widetilde{Q}\in W_{2}}H(\widetilde{Q}), r\right)}=\bigcap_{\widetilde{Q}\in W_{2}}\overline{B(H(\widetilde{Q}), r)}. 
\] Hence, there exists a point $R\in W_{2}$ such that $Q_{1}\notin \overline{B(H(R), r)}$.\\ \indent On the other hand, since $h(W_{1}, W_{2})=r$, there exists a point $\widetilde{P}_{R}\in W_{1}$ such that $d(\widetilde{P}_{R}, R)\leq r$. By Lemma \ref{lemma 1}, $Q_{1}\in H(\widetilde{P}_{R})\subset \overline{B(H(R), r)}.$ Therefore, we have a contradiction.\\ \indent In the same way, we can show that (b) implies a contradiction.\\ Therefore, we have proved that {\color{black}for any $W_{1}, W_{2}\in \mathcal{H}_{{\rm Wulff}}(S^{n}, P)$, the following holds: \[ h(W_{1}^{\circ}, W_{2}^{\circ})\leq h(W_{1}, W_{2}). \]} \par \indent {\color{black} B}y Lemma \ref{lemma 4}, for any $W_{1}, W_{2}\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ we have the following: \[ h(W_{1}, W_{2})\leq h(W_{1}^{\circ}, W_{2}^{\circ})\leq h(W_{1}, W_{2}). \] {\color{black} Therefore, we have that $h(W_{1}, W_{2})=h(W_{1}^{\circ}, W_{2}^{\circ})$ for any $W_{1}, W_{2}\in \mathcal{H}_{{\rm Wulff}}(S^{n}, P)$. } \par \indent (2) Let $W_{1}=\lim_{i\to \infty}W_{1_{i}}, W_{2}=\lim_{i\to \infty}W_{2_{i}}$, where $W_{1_{i}}, W_{2_{i}}\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ for any $i\in \mathbb{N}$. {\color{black} By (1), we have that} $h(W_{1_{i}}, W_{2_{i}})=h(W_{1_{i}}^{\circ}, W_{2_{i}}^{\circ})$. By Proposition \ref{proposition 1}, we have that \begin{eqnarray*} h(W_{1}, W_{2})& = & h(\lim_{i\to \infty}W_{1_{i}}, \lim_{i\to \infty}W_{2_{i}}) =\lim_{i\to \infty}h(W_{1_{i}}, W_{2_{i}})\\ & = & \lim_{i\to \infty}h(W_{1_{i}}^{\circ}, W_{2_{i}}^{\circ}) = h(\lim_{i\to \infty}W_{1_{i}}^{\circ}, \lim_{i\to \infty}W_{2_{i}}^{\circ}) =h(W_{1}^{\circ}, W_{2}^{\circ}). 
\end{eqnarray*} \hfill {$\Box$} \\ \section{Proof of Theorem \ref{theorem 2}}\label{section 4} By the proof of Proposition \ref{proposition 1}, we have that \begin{eqnarray*} h(W_{1}^{\circ},W_{2}^{\circ}) & \leq & 2h(W_{1},W_{2}) \\ h(W_{1}^{\circ \circ},W_{2}^{\circ \circ}) & \leq & 2 h(W_{1}^{\circ},W_{2}^{\circ}) \end{eqnarray*} for any $W_{1}, W_{2}\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$. \\ By Lemma \ref{lemma 5}, we have {\color{black} the following for any $W_{1}, W_{2}\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$:} \[ W_{1}^{\circ \circ} = W_{1}, W_{2}^{\circ \circ} = W_{2}. \] Therefore, the following inequality holds for any $W_{1}, W_{2}\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$: \[ \frac{1}{2}h(W_{1},W_{2})\leq h(W_{1}^{\circ},W_{2}^{\circ})\leq 2h(W_{1},W_{2}). \] \par Hence, $\bigcirc _{\mbox{\rm s-conv}}: \overline{\mathcal{H}_{\mbox{\rm s-conv}} (S^{n})}\rightarrow \overline{\mathcal{H}_{\mbox{\rm s-conv}} (S^{n})}$ is bi-Lipschitz. \\ \indent By Claim \ref{claim 1}, it is clear that $\bigcirc_{\mbox{\rm s-conv}}: \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}\rightarrow \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$ is never an isometry. \hfill {$\Box$}\\ \section{Appendix}\label{section 5} \subsection{Mappings in Theorems are well-defined bijections} \label{subsection 5.1} {\color{black} \begin{proposition}\label{proposition 2} \begin{enumerate} \item For any point $P\in S^n$, $\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}\subset \mathcal{H}^\circ(S^n)$. \item For any point $P\in S^n$, $\bigcirc({\mathcal{H}_{\rm Wulff}(S^n, P)}) = {\mathcal{H}_{\rm Wulff}(S^n, P)}$. \item For any point $P\in S^n$, $\bigcirc(\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}) = \overline{\mathcal{H}_{\rm Wulff}(S^n, P)}$. \item For any point $P\in S^n$, the restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}$ is injective.
\item $\overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}\subset \mathcal{H}^\circ(S^n)$. \item $\bigcirc({\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}) \ne {\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}$. \item $\bigcirc(\overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}) = \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}$. \item The restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}$ is injective. \end{enumerate} \end{proposition} } \noindent {\bf Proof of Proposition \ref{proposition 2}. } \quad{} \par {\it Proof of the assertion (1) of Proposition \ref{proposition 2}.} \quad {\color{black}It is clear that f}or any $W\in \overline{\mathcal{H}_{\rm Wul{\color{black}f}f}(S^{n}, P)}$, we have that $W\subset H(P)$. Thus, by Lemma \ref{lemma 3}, it follows that $P\in W^{\circ}$, {\color{black} which implies $W^{\circ}\neq \emptyset$}. \hfill {$\Box$}\\ \par \smallskip {\it Proof of the assertion (2) of Proposition \ref{proposition 2}.} \quad {\color{black} We first show that $\bigcirc (W)\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ for any $W\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$. For any $W\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$, there exist $r_{1}, r_{2}$\ $(0<r_{1}<r_{2}<\frac{\pi}{2})$ such that $\overline{B(P, r_{1})}\subset W \subset \overline{B(P, r_{2})}$. By Lemma \ref{lemma 3}, we have the following: \[ \left(\overline{B(P, r_{2})}\right)^{\circ}\subset W^{\circ}\subset \left(\overline{B(P, r_{1})}\right)^{\circ}. \] It follows that $W^{\color{black}\circ}\cap H(-P) =\emptyset$ and $P\in \mbox{\rm int}(W^{\color{black}\circ})$. Let $Q_1, Q_2$ be two points of $W^\circ=\bigcap_{Q\in W}H(Q)$. Since $W^{\color{black}\circ}\cap H(-P)=\emptyset$, it follows that $(1-t)Q_1+tQ_2$ is not the zero vector for any $t\in [0,1]$. Thus, for any $t\in [0,1]$ we have the following: \[ \frac{(1-t)Q_1+tQ_2}{||(1-t)Q_1+tQ_2 ||} \in \bigcap_{Q\in W}H(Q)=W^\circ. \] It follows that $W^\circ$ is spherical convex. 
Therefore, $\bigcirc (W)$ is contained in $\mathcal{H}_{\rm Wulff}(S^{n}, P)$. \par Next, we show that for any $W\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$, there exists an element $\widetilde{W}\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ such that $\bigcirc (\widetilde{W})=W$. For any $W\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$, set $\widetilde{W}=W^\circ$. Then, we have already proved that $\widetilde{W}$ is contained in $\mathcal{H}_{\rm Wulff}(S^{n}, P)$. By Lemma \ref{lemma 4}, it follows that $\bigcirc (\widetilde{W})=W$. \hfill {$\Box$} \par \smallskip {\it Proof of the assertion (3) of Proposition \ref{proposition 2}.} \quad {\color{black} We first show that $\bigcirc (W)\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$ for any $W\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$. For any $W\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$, there exists a convergent sequence $W_i\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ such that $W=\lim_{i\to \infty}W_i$. Since $\bigcirc : \mathcal{H}^\circ (S^n)\to \mathcal{H}^\circ (S^n)$ is continuous, it follows that $W^\circ =\lim_{i\to \infty}W_i^\circ$. Since we have already proved the assertion (2) of Proposition \ref{proposition 2}, it follows that $\bigcirc (W)\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$. \par Next, we show that for any $W\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$, there exists an element $\widetilde{W}\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$ such that $\bigcirc (\widetilde{W})=W$. For any $W\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$, set $\widetilde{W}=W^\circ$. Then, we have already proved that $\widetilde{W}$ is contained in $\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$. By Lemma \ref{lemma 5}, it follows that $\bigcirc (\widetilde{W})=W$. 
\hfill {$\Box$} \par \smallskip {\it Proof of the assertion (4) of Proposition \ref{proposition 2}.} \quad Suppose that there exist $W_{1}=\lim_{i\to \infty}W_{1_{i}}, W_{2}=\lim_{i\to \infty} W_{2_{i}}\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$ such that $W_{1}^{\circ}=W_{2}^{\circ}$, where $W_{1_{i}}, W_{2_{i}}\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ for any $i\in \mathbb{N}$. Since $W_{1_{i}}, W_{2_{i}}$ are spherical convex, by Lemma \ref{lemma 5} the following holds: \[ W_{1}=W_{1}^{\circ \circ}=W_{2}^{\circ \circ}=W_{2}. \] Therefore, for any point $P\in S^{n}$, the restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$ is injective. \hfill {$\Box$} \par \smallskip {\it Proof of the assertion (5) of Proposition \ref{proposition 2}.} \quad Let $W$ be an element of $\overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}$. Then, by Proposition \ref{relation}, there exists a point $P\in S^n$ such that $W\in \overline{\mathcal{H}_{\rm Wulff}(S^n, P)}$. Since we have already proved the assertion (1) of Proposition \ref{proposition 2}, it follows that $W\in \mathcal{H}^\circ (S^n)$. \hfill {$\Box$} \par \smallskip {\it Proof of the assertion (6) of Proposition \ref{proposition 2}.} \quad For any point $P\in S^{n}$, $\bigcirc(\{P\})=H(P)$ is not hemispherical. Therefore, $\bigcirc(\mathcal{H}_{\mbox{\rm s-conv}}(S^{n}))\neq \mathcal{H}_{\mbox{\rm s-conv}}(S^{n})$. \hfill {$\Box$}\\ \par \smallskip {\it Proof of the assertion (7) of Proposition \ref{proposition 2}.} \quad {\color{black} We first show that $\bigcirc(W)\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$ for any $W\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$. For any $W\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$, by Proposition \ref{relation}, $W\in \bigcup_{P\in S^{n}}\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$.
It follows that there exists a sequence of spherical convex bodies $\{\widetilde{W}_{i}\}_{i=1, 2, 3, \dots}$ such that $W=\lim_{i\rightarrow \infty}\widetilde{W}_{i}$, where $\widetilde{W}_{i}\in \mathcal{H}_{\rm Wulff}(S^{n}, P_{i})$. Since $\bigcirc$ is continuous and $\widetilde{W}_{i}^{\circ} \in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$ ($i=1, 2, 3, \dots$), we have that $\lim_{i\to \infty}\widetilde{W}_{i}^{\circ} =W^{\circ}\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$.\\ \indent Next, we show that for any $W\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$, there exists an element $\widetilde{W}\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$ such that $\bigcirc(\widetilde{W})=W$. For any $W \in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$, set $\widetilde{W}=W^{\circ}$. By Lemma \ref{lemma 5}, it follows that $W=\bigcirc(W^{\circ})$.}\hfill {$\Box$}\\ \par \smallskip {\it Proof of the assertion (8) of Proposition \ref{proposition 2}.} \quad Suppose that there exist $W_{1}=\lim_{i\to \infty}W_{1_{i}}, W_{2}=\lim_{i\to \infty} W_{2_{i}}\in \overline{\mathcal{H}_{\mbox{\rm s-conv} }(S^{n})}$ such that $W_{1}^{\circ}=W_{2}^{\circ}$, where $W_{1_{i}}, W_{2_{i}}\in \mathcal{H}_{\mbox{\rm s-conv}}(S^{n})$ for any $i\in \mathbb{N}$. Since $W_{1_{i}}, W_{2_{i}}$ are spherical convex, by Lemma \ref{lemma 5} the following holds: \[ W_{1}=W_{1}^{\circ \circ}=W_{2}^{\circ \circ}=W_{2}. \] Therefore, the restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$ is injective.\hfill {$\Box$}\\ } } By Proposition \ref{proposition 2}, {\color{black}we have the following: \begin{proposition}\label{lemma 1.1} Each of the following is a well-defined bijective mapping.
\begin{eqnarray*} \bigcirc_{{\rm Wulff}, P}: {\mathcal{H}_{\rm Wulff}(S^n, P)} & \to & {\mathcal{H}_{\rm Wulff}(S^n, P)}, \\ \overline{\bigcirc_{{\rm Wulff}, P}}: \overline{\mathcal{H}_{\rm Wulff}(S^n, P)} & \to & \overline{\mathcal{H}_{\rm Wulff}(S^n, P)}, \\ \bigcirc_{\mbox{\rm s-conv}}: \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)} & \to & \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}. \end{eqnarray*} \end{proposition} } Moreover, by the assertion (6) of Proposition \ref{proposition 2}, it follows that the restriction of $\bigcirc$ to $\mathcal{H}_{\mbox{s-conv}}(S^n)$ is not a transformation of this space. Thus, the restriction to this subspace is not investigated in this paper. \subsection{Why is the restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\mbox{s-conv}}(S^n)}$ important?} \label{subsection 5.2} \quad {} \par It is natural to expect that the isometric property still holds even for the restriction of $\bigcirc$ to $\bigcup_{P\in S^n}\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}$. Since this subspace of $\mathcal{H}^\circ(S^n)$ seems to be complicated, we want to have a description of this space as a subspace of $\mathcal{H}^\circ(S^n)$ which is easy to treat. The following proposition gives the desired description. Hence, the subspace $\overline{\mathcal{H}_{\mbox{s-conv}}(S^n)}$ naturally arises in our study, and the restriction of $\bigcirc$ to $\overline{\mathcal{H}_{\mbox{s-conv}}(S^n)}$ is important. {\color{black} \begin{proposition}\label{relation} \[ \bigcup_{P\in S^n}\overline{\mathcal{H}_{\rm Wulff}(S^n, P)} =\overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^n)}. \] \end{proposition} } \begin{proof} \quad By Definition \ref{definition closure}, it is clear that any $W\in \bigcup_{P\in S^n}\overline{\mathcal{H}_{{\rm Wulff}}(S^n, P)} $ is an element of $\overline{\mathcal{H}_{\mbox{\rm s-conv }}(S^{n})}$.
Thus, it is sufficient to show the following inclusion: \[ \overline{\mathcal{H}_{\mbox{\rm s-conv }}(S^{n})} \subset \bigcup_{P\in S^n}\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}. \] \par We first show the above inclusion under the assumption that $W\in \overline{\mathcal{H}_{\mbox{\rm s-conv }}(S^{n})}$ is a hemispherical closed subset of $S^{n}$. Suppose that $W$ has an interior point. Then, it is easily seen that there exists a point $P\in {\rm int}(W)$ such that $W\subset H(P)$. Since $H(P)\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$, it follows that $W\in \overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}$. Next, suppose that $W$ does not have an interior point. Since $W$ is hemispherical and closed, there exist a point $P\in W$ and a positive integer $N\in \mathbb{N}$ such that for any $i>N$, we have $\partial\overline{B(W, \frac{2}{i})}\bigcap H(-P)=\emptyset$. Moreover, there exists a sequence \[ \{W_{i}\}_{i= 1, 2, 3, \dots}\subset \mathcal{H}_{\mbox{\rm s-conv}}(S^{n}) \] such that $h(W_{i}, W)<\frac{1}{i}$ for any $i\in \mathbb{N}$. Thus, for any $i>N$ we have the following: \[ P\in W\subset \overline{B\left(W_{i}, \frac{1}{i}\right)} \subset \overline{B\left(\overline{B\left(W, \frac{1}{i}\right)}, \frac{1}{i}\right)} =\overline{B\left(W, \frac{2}{i}\right)}\subset H(P). \] Therefore, it follows that $\overline{B\left(W_{i}, \frac{1}{i}\right)}\in \mathcal{H}_{{\rm Wulff}}(S^{n}, P)$ for any $i>N$, and we have the following: \[ W=\lim_{i\to \infty}\overline{B\left(W_{i}, \frac{1}{i}\right)}\in \bigcup_{P\in S^{n}}\overline{\mathcal{H}_{\rm Wulff}(S^{n}, P)}. \] \indent Finally, we show the inclusion in the remaining case, namely for $W\in \overline{\mathcal{H}_{\mbox{\rm s-conv}}(S^{n})}$ that is not hemispherical. Notice that there exists a point $P\in S^{n}$ such that $W\subset H(P)$. For any positive integer $i$, define $W_{i}$ as follows.
\[ W_{i}=\overline{B\left(W, \frac{1}{i} \right)} \bigcap \overline{B\left(P, \frac{\pi}{2}-\frac{1}{i} \right)}. \] Then, it is easily seen that $W_{i}\in \mathcal{H}_{\rm Wulff}(S^{n}, P)$ for any $i\in \mathbb{N}$ and $W=\lim_{i\to \infty} W_{i}$. Therefore, $W$ belongs to $\bigcup_{P\in S^n}\overline{\mathcal{H}_{\rm Wulff}(S^n, P)}$. \end{proof}
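\par \smallskip \noindent {\bf Example.} \quad The following elementary computation, included here only as an illustration (it is easily verified directly from the definition $W^{\circ}=\bigcap_{Q\in W}H(Q)$, with $H(Q)$ the closed hemisphere centered at $Q$), may help the reader. For a closed spherical cap $W=\overline{B(P, r)}$ with $0<r<\frac{\pi}{2}$, a point $X\in S^{n}$ belongs to every hemisphere $H(Q)$ with $Q\in \overline{B(P, r)}$ if and only if the angle between $X$ and $P$ is at most $\frac{\pi}{2}-r$. Hence \[ \left(\overline{B(P, r)}\right)^{\circ}=\overline{B\left(P, \frac{\pi}{2}-r\right)}. \] In particular, $W^{\circ \circ}=W$, in accordance with Lemma \ref{lemma 5}, and both $W$ and $W^{\circ}$ belong to $\mathcal{H}_{\rm Wulff}(S^{n}, P)$.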
\section{Introduction} \subsection{Background} Many control systems, particularly those arising from mechanical systems, have symmetries---often translational and rotational, and sometimes combinations of them. Such a symmetry is usually described as an invariance or equivariance under an action of a Lie group, and the system can be reduced to a lower-dimensional one or decoupled into subsystems by exploiting the symmetry. \citet{NiSc1982} and \citet{GrMa1985} formulated symmetries of nonlinear control systems from the differential-geometric point of view, and also showed how one can reduce a control system with symmetry to a quotient space. Likewise, optimal control systems also have such symmetries. \citet{GrMa1984} showed that, in relation to the work in \cite{GrMa1985}, one can decompose optimal feedback laws by exploiting the symmetries of control systems; \citet{Sc1987} showed a method to analyze symmetries of optimal Hamiltonians without explicitly calculating them; \citet{LeCoDiMa2004} analyzed symmetries of vakonomic systems and applied their results to optimal control problems; \citet{EcMaMuRo2003} approached such symmetries from the pre-symplectic point of view; and \citet{BlSc2000} and \citet{IbPeSa2010} did so using Dirac structures. Symmetry reduction of optimal control systems is desirable from a computational point of view as well. Given that solving optimal control problems usually involves iterative methods such as the shooting method (as opposed to solving a single initial value problem), reducing the system to a lower-dimensional one results in a considerable reduction of the computational cost. From a theoretical point of view, a certain class of optimal control problems has a rich geometric structure, and provides many interesting questions relating differential-geometric ideas with control-theoretic problems.
Most notably, \citet{Mo1990, Mo1991a, Mo1993a, Mo1993b, Mo2002}, following the work of \citet{ShWi1987, ShWi1989}, explored optimal control of deformable bodies, such as the falling cat problem, from the differential-geometric point of view. In particular, principal bundles, along with principal connections on them defined by momentum maps, are identified as a natural geometric setting for such problems. The same geometric setting applies to kinematic control of nonholonomic mechanical systems (see, e.g., \citet{KeMu1995}, \citet[Chapters~7 and 8]{MuLiSa1994}, and \citet{LiCa1993}), where the principal connections are defined by the constraints instead of momentum maps. This geometric setting also gives rise to geometric phases and holonomy (see, e.g., \citet{MaMoRa1990} and references therein), which have applications in motion generation of mechanical systems by shape change. \subsection{Main Results and Comparison with Existing Literature} \label{ssec:MainResultsAndComparison} Figure~\ref{fig:RelatingOCPs} gives a schematic overview of the results in the paper and their relationships. \begin{figure}[h!] 
\sf \centerline{ \xymatrix@!0@R=1.1in@C=2in{ \ar[d]_{\text{(Section~\ref{sec:SymmetryInNonlinearControlSystems})}}^{\text{Reduction}}="a" \framebox{\small\strut\minibox[c]{Control system\\with symmetry}} \ar[r]^{\text{Cost function}}_{\text{with symmetry}} & \framebox{\small\strut\minibox[c]{Optimal control\\problem}} \ar[r]^{\text{PMP}\quad} & \framebox{\small\strut\minibox[c]{Hamiltonian system\\with symmetry}} \ar[d]^{\text{(Section~\ref{sec:SymRedInOptimalControlSystems})}}_{\text{Reduction}}="b" \POS(51,-14.2)*+\txt{\raisebox{-2.5pc}{\minibox[c]{\footnotesize Principal connection\\\scriptsize(Section~\ref{sec:PrincipalConnection})}}} \ar@{=>} "a" \ar@{=>} "b" \\ \framebox{\small\strut\minibox[c]{Reduced\\control system}} \ar@{-->}[rr]_{\text{Reduced PMP (Section~\ref{ssec:PoissonRedOfPMP})}} & & \framebox{\small\strut\minibox[c]{Reduced\\Hamiltonian system}} } } \caption{Schematic overview of reduction of control and optimal control systems with references to corresponding sections in the paper. See also the outline in Section~\ref{ssec:Outline}.} \label{fig:RelatingOCPs} \end{figure} We first characterize symmetries in nonlinear control systems and use a principal connection to reduce such systems. We then discuss the associated symmetries in optimal control problem of such systems following \citet{GrMa1984}, and apply Hamiltonian reduction theory to the Pontryagin maximum principle (PMP) for optimal control systems with symmetries; the principal connection plays an important role here as well. In particular, we apply the Poisson reduction of \citet{CeMaPeRa2003} to the Hamiltonian system given as a necessary condition for optimality by the Pontryagin maximum principle. The resulting Hamilton--Poincar\'e equations give a reduced set of equations for optimality, and are naturally considered as a reduced maximum principle applied to the reduced control system. 
We note that \citet{IbPeSa2010} study a similar problem in a more general setting with a slightly different focus: We assume that one can eliminate the control to obtain a Hamiltonian system on a cotangent bundle, whereas \citet{IbPeSa2010} do not make the assumption and exploit Dirac structures to handle those cases where one cannot easily obtain a Hamiltonian system explicitly, an approach originally due to \citet{BlSc2000}. Therefore, our problem setting and geometric framework are in fact a special case of those in \cite{IbPeSa2010}. At the expense of generality, however, we focus on the practical issue of obtaining an explicit expression for the reduced system. Specifically, {\em our specialization leads us to a prescription to obtain a principal connection for a given optimal control system with various symmetries, as opposed to assuming, as in \cite{IbPeSa2010}, that it is given at the outset or deliberately choosing a particular symmetry group to simplify the geometric setting}. In particular, for affine and kinematic optimal control systems, we may explicitly characterize the principal connection using the nonholonomic connection of \citet{BlKrMaMu1996}. We also note that a reduced maximum principle (see Fig.~\ref{fig:RelatingOCPs} and Section~\ref{ssec:PoissonRedOfPMP}) is discussed by \citet{BlSc2000}; however, their definition of symmetry of control systems is slightly more restrictive compared to that of \cite{IbPeSa2010} and the present paper (see Remark~\ref{remark:ComparisonWithBlSc2000}). Note also that, in their setup, the reduced equations are shown to simplify due to the transversality condition; however, this simplification does not hold in general for the case with fixed endpoints, and thus the reduced equations become more complicated (see Remark~\ref{remark:ComparisonWithBlSc2000-2}).
{\em The construction of principal connections developed here turns out to be a generalization of the mechanical connection used in the falling cat problem as well as those used in kinematic control of nonholonomic systems}. In the falling cat problem, there is a natural choice of principal connection that arises from the problem setting, but the same construction of principal connection applies to kinematic control problems only by choosing certain symmetry groups to realize the same geometric setting; in other words, one does not have the freedom to choose the symmetry group to be used in the reduction. {\em Our construction does not have such a restriction and hence can be applied to a wider class of control systems with symmetries}. As a result, we synthesize some previous works by showing how the basic settings of those works arise as special cases of our result; these include optimal control of deformable bodies mentioned above and also the Lie--Poisson reduction of optimal control systems on Lie groups of \citet{Kr1993}. \subsection{Outline} \label{ssec:Outline} We first define, in Section~\ref{sec:SymmetryInNonlinearControlSystems}, symmetries in nonlinear control systems, and show reduction of such systems by the symmetries (see Fig.~\ref{fig:RelatingOCPs}). Section~\ref{sec:SymRedInOptimalControlSystems} first briefly discusses Poisson reduction of \citet{CeMaPeRa2003} for Hamiltonian systems and then applies it to the Hamiltonian system defined by the maximum principle and obtains the Hamilton--Poincar\'e equations for such systems. Section~\ref{sec:PrincipalConnection} addresses the issue of how one should choose the principal connection to be used in the reduction of optimal control systems. Section~\ref{sec:Examples} gives various examples to show how the theory specializes to several previous works on the subject as well as to illustrate how the reduction decouples the optimal control system.
\section{Symmetry and Reduction of Nonlinear Control Systems} \label{sec:SymmetryInNonlinearControlSystems} \subsection{Nonlinear Control Systems} Let $M$ be a smooth manifold and $\tau_{M}: TM \to M$ be its tangent bundle; let $E \mathrel{\mathop:}= M \times \mathbb{R}^{d}$ and regard $\pi^{E}: E \to M$ as a (trivial) vector bundle\footnote{More generally, we may take a fiber bundle for $E$ (see, e.g., \citet{NiSc1982} and references therein).}; also let $f: E \to TM$ be a fiber-preserving smooth map, i.e., the diagram \begin{equation*} \xymatrix@R=0.225in@C=.225in{ E \ar[rr]^{f} \ar[rd]_{\pi^{E}\!} & & TM \ar[ld]^{\!\tau_{M}} \\ & M & } \end{equation*} commutes. Then a {\em nonlinear control system} is defined by \begin{equation} \label{eq:NonlinearControlSystem} \dot{x} = f(x, u). \end{equation} \subsection{Symmetry in Nonlinear Control Systems} \label{ssec:SymmetryInNonlinearControlSystems} Following \citet{Sc1981} and \citet{NiSc1982} (see also \citet{GrMa1985} and \citet{Sc1987}), we assume that the control system~\eqref{eq:NonlinearControlSystem} has a symmetry in the following sense: Let $G$ be a Lie group acting freely and properly on $M$ from the left. We write the action as $\Phi: G \times M \to M$ or $\Phi_{g}: M \to M$ for any $g \in G$; as a result we have the principal bundle \begin{equation*} \pi: M \to M/G. \end{equation*} The action $\Phi_{g}$ gives rise to the tangent lift $T\Phi_{g}: TM \to TM$. Let us also assume that we have a linear representation of $G$ on the control space $\mathbb{R}^{d}$, i.e., we have a representation $\sigma_{(\cdot)}: G \to GL(d, \mathbb{R})$. Then we define an action of $G$ on $E = M \times \mathbb{R}^{d}$ as follows: \begin{equation} \label{eq:Psi} \Psi_{g}: E \to E; \quad (x, u) \mapsto \parentheses{ \Phi_{g}(x), \sigma_{g}(u) } = (g x, g u), \end{equation} where we introduced the shorthand notation $g x \mathrel{\mathop:}= \Phi_{g}(x)$ and $g u \mathrel{\mathop:}= \sigma_{g}(u)$.
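\begin{remark} As a simple illustration of the action $\Psi$ (this elementary example is included for concreteness and is not drawn from the references cited above), let $M = \mathbb{R}^{n}$, let $G = SO(n)$ act linearly by $\Phi_{g}(x) = g x$, and let $\sigma_{g} = g$ be the standard representation on the control space $\mathbb{R}^{n}$ (so that $d = n$); then $\Psi_{g}(x, u) = (g x, g u)$. For the driftless system defined by $f(x, u) = (x, u) \in T_{x}M$, i.e., $\dot{x} = u$, we have $T\Phi_{g} \circ f = f \circ \Psi_{g}$, since $T_{x}\Phi_{g}$ is again the linear map $g$; this equivariance is precisely the notion of symmetry defined next. \end{remark}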
\begin{remark} \label{remark:NontrivialActionToControls} In many examples, the representation $\sigma_{(\cdot)}: G \to GL(d, \mathbb{R})$ turns out to be trivial. However, there are non-trivial cases as well: See Section~\ref{ssec:ClebschOptimalControl}. \end{remark} We are now ready to define a symmetry for a nonlinear control system (see \citet[Definition~8]{IbPeSa2010}): We say that the nonlinear control system~\eqref{eq:NonlinearControlSystem} has a $G$-symmetry if the map $f: E \to TM$ is equivariant under the $G$-actions on $E$ and $TM$ defined above, i.e., \begin{equation} \label{eq:f-symmetry} T\Phi_{g} \circ f = f \circ \Psi_{g}, \end{equation} or the diagram \begin{equation*} \xymatrix@!0@R=0.55in@C=.7in{ E \ar[r]^{f} \ar[d]_{\Psi_{g}} & TM \ar[d]^{T\Phi_{g}} \\ E \ar[r]_{f} & TM } \end{equation*} commutes for any $g \in G$. \begin{remark} \label{remark:ComparisonWithBlSc2000} Note that this definition of symmetry is more general compared to that of \citet{BlSc2000}. Their definition of symmetry of the control vector fields \cite[Eq.~(14)]{BlSc2000}, i.e., \begin{equation} \label{eq:SymmetryInBlSc2000} [f(\cdot,u), \mathfrak{g}_{M}] = 0, \end{equation} where $\mathfrak{g}_{M}$ is the set of infinitesimal generators, is rather restrictive for us, since this is not true in general if $f(\cdot,u)$ has vertical components and $G$ is non-Abelian. To illustrate this, consider the extreme case where $M = G$, a non-Abelian Lie group. Then the system is a left-invariant control system on $G$ (see Section~\ref{ssec:ControlSystemsOnLieGroups}); but then $\mathfrak{g}_{M} = TG$ and $f(x,u) \in TG$ and so Eq.~\eqref{eq:SymmetryInBlSc2000} does not hold except for the very special case where $f(\cdot,u)$ commutes with every vector field on $G$.
\end{remark} \subsection{Symmetry in affine control systems} \label{ssec:SymmetryInAffineControlSystems} Consider an {\em affine control system}, i.e., Eq.~\eqref{eq:NonlinearControlSystem} with \begin{equation} \label{eq:f-affine} f(x, u) = X_{0}(x) + \sum_{i=1}^{d} u_{i} X_{i}(x), \end{equation} where the control vector fields $\{ X_{i} \}_{i=1}^{d}$ are linearly independent on $M$. Let $\mathcal{D} \subset TM$ be the distribution defined by \begin{equation} \label{eq:mathcalD} \mathcal{D} \mathrel{\mathop:}= \Span\{X_{1}, \dots, X_{d}\}. \end{equation} We assume that the vector field $X_{0}$ is $G$-invariant, i.e., for any $g \in G$, \begin{equation} \label{eq:X_0-symmetry} T\Phi_{g} \circ X_{0} = X_{0} \circ \Phi_{g}, \end{equation} and also that the distribution is invariant under the tangent lift of the $G$-action on $M$, i.e., \begin{equation} \label{eq:mathcalD-symmetry} T\Phi_{g}(\mathcal{D}) = \mathcal{D} \end{equation} for any $g \in G$. This implies that, for each vector field $X_{i}$ for $i = 1, \dots, d$ and any $x \in M$ and $g \in G$, we have \begin{equation} \label{eq:R} T_{x}\Phi_{g}\parentheses{ X_{i}(x) } = \sum_{j=1}^{d} R_{i}^{j}(x,g)\,X_{j}(g x), \end{equation} where $R(x,g)$ is an invertible $d \times d$ matrix. This gives rise to an action of $G$ on $E = M \times \mathbb{R}^{d}$, i.e., $\Psi_{g}: E \to E$ defined by \begin{equation*} \Psi_{g}: (x, u) \mapsto \parentheses{ g x, R^{T}(x,g) u }. \end{equation*} Then the $G$-symmetry of $X_{0}$ and $\mathcal{D}$, i.e., Eqs.~\eqref{eq:X_0-symmetry} and \eqref{eq:mathcalD-symmetry}, implies that of $f$, i.e., \begin{equation*} T_{x}\Phi_{g}(f(x,u)) = f \circ \Psi_{g}(x, u). \end{equation*} In particular, consider the case where $R(x,g)$ has no dependence on $x$, i.e., $R(x,g) = R(g)$; this is the case if, for example, $M$ is a vector space and the action $\Phi_{g}: M \to M$ is linear.
Then the matrix $R^{T}(g)$ gives the representation $\sigma_{(\cdot)}: G \to GL(d, \mathbb{R})$, i.e., $\sigma_{g} = R^{T}(g)$. \subsection{Reduced Control System} The equivariance of the map $f$ shown above gives rise to the map $\bar{f}: E/G \to TM/G$ defined so that the diagram \begin{equation*} \xymatrix@!0@R=0.55in@C=.8in{ E \ar[r]^{f} \ar[d]_{\pi^{E}_{G}} & TM \ar[d]^{\pi^{TM}_{G}} \\ E/G \ar[r]_{\bar{f}} & TM/G } \end{equation*} commutes, where $\pi^{E}_{G}: E \to E/G$ and $\pi^{TM}_{G}: TM \to TM/G$ are both quotient maps. Then the map $\bar{f}$ defines the reduced control system. Since $E = M \times \mathbb{R}^{d}$, the quotient $E/G$ defines the associated bundle \begin{equation*} E/G = (M \times \mathbb{R}^{d})/G = M \times_{G} \mathbb{R}^{d}, \end{equation*} which is a vector bundle over $M/G$ (see, e.g., \citet[Section~2.3]{CeMaRa2001}). On the other hand, again following \citet[Section~2.3]{CeMaRa2001}, the quotient $TM/G$ is identified with $T(M/G) \oplus \tilde{\mathfrak{g}}$, where $\tilde{\mathfrak{g}}$ is the associated bundle defined as \begin{equation*} \tilde{\mathfrak{g}} \mathrel{\mathop:}= M \times_{G} \mathfrak{g} = (M \times \mathfrak{g})/G \end{equation*} with $\mathfrak{g}$ being the Lie algebra of the Lie group $G$. More specifically, given a principal bundle connection form \begin{equation} \label{eq:mathcalA} \mathcal{A}: TM \to \mathfrak{g}, \end{equation} we have the identification~(see \cite[Section~2.4]{CeMaRa2001}) \begin{equation} \label{eq:alpha_mathcalA} \alpha_{\mathcal{A}}: TM/G \to T(M/G) \oplus \tilde{\mathfrak{g}}; \qquad [v_{x}]_{G} \mapsto T_{x}\pi(v_{x}) \oplus [ x, \mathcal{A}_{x}(v_{x}) ]_{G}, \end{equation} where $[\,\cdot\,]_{G}$ stands for an equivalence class defined by the $G$-action. 
Therefore, we may introduce the maps $\bar{f}_{M/G}: E/G \to T(M/G)$ and $\bar{f}_{\tilde{\mathfrak{g}}}: E/G \to \tilde{\mathfrak{g}}$ defined by \begin{equation*} \bar{f}_{M/G}([x, u]_{G}) \mathrel{\mathop:}= T_{x}\pi \circ f(x, u), \qquad \bar{f}_{\tilde{\mathfrak{g}}}([x, u]_{G}) \mathrel{\mathop:}= [x, \mathcal{A}_{x}(f(x,u)) ]_{G} \end{equation*} for any element $[x, u]_{G} \in E/G = M \times_{G} \mathbb{R}^{d}$; these maps are clearly well-defined because of the equivariance of $f$. Then we have \begin{equation*} \alpha_{\mathcal{A}} \circ \bar{f} = \bar{f}_{M/G} \oplus \bar{f}_{\tilde{\mathfrak{g}}}, \end{equation*} and thus the reduced system is decoupled into two subsystems: \begin{equation} \label{eq:ReducedControlSystem} \dot{\bar{x}} = \bar{f}_{M/G}(\bar{u}_{\bar{x}}), \qquad \tilde{\xi}_{\bar{x}} = \bar{f}_{\tilde{\mathfrak{g}}}(\bar{u}_{\bar{x}}), \end{equation} where $\bar{x} \mathrel{\mathop:}= \pi(x)$, $\bar{u}_{\bar{x}} \mathrel{\mathop:}= [x, u]_{G}$, and $\tilde{\xi}_{\bar{x}} \mathrel{\mathop:}= [x, \mathcal{A}_{x}(\dot{x}) ]_{G}$. \section{Symmetry and Reduction of Optimal Control Systems} \label{sec:SymRedInOptimalControlSystems} This section first summarizes the fact that the $G$-symmetry of a nonlinear control system implies that of the corresponding optimal control system if the cost function is also $G$-invariant. We note that similar results are briefly discussed in \citet{GrMa1984}. We then show how a Poisson reduction may be applied to reduce the optimal control system with symmetry. \subsection{Pontryagin Maximum Principle and Symmetry in Optimal Control} \label{ssec:PMPandSymmetryInOptimalControl} Given a cost function $C: E \to \mathbb{R}$ and fixed times $t_{0}$ and $t_{1}$ such that $t_{0} < t_{1}$, define the cost functional \begin{equation*} J \mathrel{\mathop:}= \int_{t_{0}}^{t_{1}} C(x(t), u(t))\,dt. \end{equation*} Let $x_{0}$ and $x_{1}$ be fixed in $M$. 
Then we formulate an {\em optimal control problem} as follows: Minimize the cost functional, i.e., \begin{equation*} \min_{u(\cdot)} J = \min_{u(\cdot)} \int_{t_{0}}^{t_{1}} C(x(t), u(t))\,dt, \end{equation*} subject to Eq.~\eqref{eq:NonlinearControlSystem}, i.e., $\dot{x} = f(x, u)$, and the endpoint constraints $x(t_{0}) = x_{0}$ and $x(t_{1}) = x_{1}$. A Hamiltonian structure comes into play with the introduction of the augmented cost functional: Let us introduce the costate $\lambda(t) \in T^{*}M$ and define \begin{align*} \hat{S} &\mathrel{\mathop:}= \int_{t_{0}}^{t_{1}} \brackets{ C(x(t), u(t)) + \ip{ \lambda(t) }{ \dot{x}(t) - f(x(t), u(t)) } } dt \\ &= \int_{t_{0}}^{t_{1}} \brackets{ \ip{ \lambda(t) }{ \dot{x}(t) } - \hat{H}(x(t), \lambda(t), u(t)) } dt \end{align*} with the {\em control Hamiltonian}: \begin{equation*} \hat{H}: T^{*}M \oplus E \to \mathbb{R}; \quad \hat{H}(\lambda_{x} , u_{x}) = \hat{H}(x, \lambda , u) \mathrel{\mathop:}= \ip{\lambda_{x}}{f(u_{x})} - C(u_{x}), \end{equation*} where we wrote $\lambda_{x} \mathrel{\mathop:}= (x, \lambda) \in T_{x}^{*}M$ and $u_{x} \mathrel{\mathop:}= (x, u) \in E_{x}$ (recall that $E = M \times \mathbb{R}^{d}$ is a trivial vector bundle over $M$). If the cost function is invariant under the $G$-action $\Psi$ defined in Eq.~\eqref{eq:Psi}, i.e., for any $g \in G$, \begin{equation} \label{eq:C-symmetry} C \circ \Psi_{g} = C, \end{equation} then the control Hamiltonian $\hat{H}$ has a symmetry in the following sense: Define an action of $G$ on the bundle $T^{*}M \oplus E$ by, for any $g \in G$, \begin{equation*} \hat{\Psi}_{g}: T^{*}M \oplus E \to T^{*}M \oplus E; \quad (\lambda_{x} , u_{x}) \mapsto \parentheses{ T^{*}\Phi_{g^{-1}}(\lambda_{x}) , \Psi_{g}(u_{x}) }, \end{equation*} where $T^{*}\Phi_{g^{-1}}: T^{*}M \to T^{*}M$ is the cotangent lift of $\Phi_{g}$. 
Then it is easy to show that the control Hamiltonian $\hat{H}$ is invariant under the $G$-action defined above, i.e., \begin{equation} \label{eq:hatH-symmetry} \hat{H} \circ \hat{\Psi}_{g} = \hat{H} \end{equation} for any $g \in G$. Now, for an arbitrary fixed $\lambda_{x} \in T_{x}^{*}M$, define $\mathbb{F}_{\rm c}\hat{H}(\lambda_{x}, \,\cdot\,): E_{x} \to E_{x}^{*}$ as follows: For any $w_{x} \in E_{x}$, \begin{equation*} \ip{ \mathbb{F}_{\rm c}\hat{H}(\lambda_{x}, u_{x}) }{ w_{x} } = \left.\od{}{\varepsilon} \hat{H}(\lambda_{x}, u_{x} + \varepsilon\,w_{x}) \right|_{\varepsilon=0}, \end{equation*} where $\ip{\,\cdot\,}{\,\cdot\,}$ on the left-hand side is the natural pairing between elements in $E_{x}^{*}$ and $E_{x}$. We assume that the optimal control $u^{\star}_{x}: T_{x}^{*}M \to E_{x} \cong \mathbb{R}^{d}$ is uniquely determined by the equation \begin{equation*} \mathbb{F}_{\rm c}\hat{H}\parentheses{ \lambda_{x}, u^{\star}_{x}(\lambda_{x}) } = 0 \end{equation*} for any $\lambda_{x} \in T_{x}^{*}M$. This gives rise to the fiber-preserving bundle map \begin{equation*} u^{\star}: T^{*}M \to E; \quad \lambda_{x} \mapsto u^{\star}_{x}(\lambda_{x}). \end{equation*} Then one may show that the optimal control $u^{\star}: T^{*}M \to E$ is equivariant under the $G$-actions, i.e., \begin{equation} \label{eq:u^star-symmetry} \Psi_{g} \circ u^{\star} = u^{\star} \circ T^{*}\Phi_{g^{-1}}, \end{equation} and so we may define the optimal Hamiltonian $H: T^{*}M \to \mathbb{R}$ by $H \mathrel{\mathop:}= \hat{H} \circ u^{\star}$, or more explicitly, \begin{equation} \label{eq:H} H(\lambda_{x}) \mathrel{\mathop:}= \hat{H}( \lambda_{x} , u^{\star}_{x}(\lambda_{x}) ) = \ip{ \lambda_{x} }{ f(u^{\star}_{x}(\lambda_{x})) } - C(u^{\star}_{x}(\lambda_{x})). 
\end{equation} Then the symmetry of the control Hamiltonian and the optimal control, i.e., Eqs.~\eqref{eq:hatH-symmetry} and \eqref{eq:u^star-symmetry}, imply that of the optimal Hamiltonian $H$, i.e., \begin{equation*} H \circ T^{*}\Phi_{g^{-1}} = H \end{equation*} for any $g \in G$. The Pontryagin maximum principle says that the optimal flow on $M$ of the control system~\eqref{eq:NonlinearControlSystem} is necessarily the projection to $M$ of the Hamiltonian flow on $T^{*}M$ with the optimal Hamiltonian $H$ defined above. Specifically, let $\Omega$ be the standard symplectic form on $T^{*}M$, $\pi_{M}: T^{*}M \to M$ the cotangent bundle projection, and $X_{H}$ the Hamiltonian vector field defined by \begin{equation} \label{eq:HamiltonianSystem} i_{X_{H}}\Omega = dH; \end{equation} then there exists a solution $\lambda: [t_{0},t_{1}] \to T^{*}M$ of the above Hamiltonian system with $\pi_{M}(\lambda(t_{0})) = x_{0}$ and $\pi_{M}(\lambda(t_{1})) = x_{1}$ such that its projection to $M$, $\pi_{M} \circ \lambda: [t_{0},t_{1}] \to M$, is the optimal trajectory of the control system (see, e.g., \citet[Chapter~12]{AgSa2004} for more details). In other words, the optimal flow on $M$ of the control system is given by the vector field $T\pi_{M}(X_{H})$ on $M$. \subsection{Poisson Reduction and Hamilton--Poincar\'e Equations} \label{ssec:PoissonRedAndHPEq} We saw that the optimal Hamiltonian $H$ is $G$-invariant; this implies that we can apply the results of symmetry reduction of Hamiltonian systems to Eq.~\eqref{eq:HamiltonianSystem} to obtain a reduced Hamiltonian system related to the optimal flow. Such reduction is helpful in practical applications, since it reduces the number of unknowns in the Hamiltonian system \eqref{eq:HamiltonianSystem}.
Reduction of Hamiltonian systems is a well-developed subject, whose roots go back to the symplectic reduction of \citet{MaWe1974}; there have been substantial subsequent developments (see \citet{MaMiOrPeRa2007} and references therein). In our case, the Poisson version of the cotangent bundle reduction (see \citet{CeMaPeRa2003} and \citet[Section~2.3]{MaMiOrPeRa2007}; see also \citet{MoMaRa1984} and \citet{Mo1986}) turns out to be a natural choice for the following reason: Recall that we derived the reduced control system~\eqref{eq:ReducedControlSystem} on the quotient configuration space $M/G$ using the bundle $T(M/G) \oplus \tilde{\mathfrak{g}}$ over $M/G$. {\em It is natural to expect, and also desirable, that the maximum principle, originally formulated on $T^{*}M$, reduces to the dual $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$, which is also a bundle over $M/G$; then we may consider it a reduced version of the maximum principle (see the dashed arrow in Fig.~\ref{fig:RelatingOCPs})}. The Poisson version of the cotangent bundle reduction works precisely this way: The Poisson structure on $T^{*}M$ reduces to that on $T^{*}M/G \cong T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$; accordingly, Hamilton's equations reduce to the Hamilton--Poincar\'e equations~\cite{CeMaPeRa2003} defined on $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$.
As shown in \citet[Lemma~2.3.3 on p.~74]{MaMiOrPeRa2007}, the identification of $T^{*}M$ with $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$ is provided by the dual of the inverse of $\alpha_{\mathcal{A}}$ defined in Eq.~\eqref{eq:alpha_mathcalA}: \begin{equation} \label{eq:alpha_mathcalA^-1^star} (\alpha_{\mathcal{A}}^{-1})^{*}: T^{*}M/G \to T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}; \quad [\lambda_{x}]_{G} \mapsto \operatorname{hl}_{x}^{*}(\lambda_{x}) \oplus [x, {\bf J}(\lambda_{x})]_{G}, \end{equation} where $\operatorname{hl}_{x}^{*}: T^{*}_{x}M \to T^{*}_{\bar{x}}(M/G)$ is the adjoint of the horizontal lift $\operatorname{hl}_{x}: T_{\bar{x}}(M/G) \to T_{x}M$ associated with the connection form $\mathcal{A}: TM \to \mathfrak{g}$, and ${\bf J}: T^{*}M \to \mathfrak{g}^{*}$ is the momentum map corresponding to the $G$-symmetry: Let $\xi$ be an arbitrary element in $\mathfrak{g}$ and $\xi_{M} \in \mathfrak{X}(M)$ its infinitesimal generator; then ${\bf J}$ is defined by \begin{equation} \label{eq:J} \ip{ {\bf J}(\lambda_{x}) }{ \xi } = \ip{ \lambda_{x} }{ \xi_{M}(x) }. \end{equation} Recall from, e.g., \citet[Section~11.4]{MaRa1999} that Noether's theorem says that the $G$-invariance of $H$ implies that ${\bf J}$ is conserved along the flow of the Hamiltonian vector field $X_{H}$. We note that \citet{Su1995} formulated a generalized version of Noether's theorem for optimal control systems that does not require some of the assumptions we made here; however, the original version suffices for our purpose here. \citet{CeMaPeRa2003} exploit this identification to reduce the Hamiltonian dynamics with a $G$-invariant Hamiltonian $H: T^{*}M \to \mathbb{R}$ as follows: The $G$-invariance implies that one can define the reduced Hamiltonian on $T^{*}M/G$, which is identified with $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$ by Eq.~\eqref{eq:alpha_mathcalA^-1^star}, i.e., one has $\bar{H}: T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*} \to \mathbb{R}$.
Then, through the reduction of Hamilton's phase space principle, i.e., \begin{equation*} \delta \int_{t_{0}}^{t_{1}} \brackets{ \ip{p}{\dot{q}} - H(q,p) } dt = 0 \end{equation*} with $\delta q(t_{0}) = \delta q(t_{1}) = 0$, one obtains the Hamilton--Poincar\'e equations defined on $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$: \begin{equation} \label{eq:Hamilton-Poincare} \begin{array}{c} \dot{\bar{x}} = \pd{\bar{H}}{\bar{\lambda}}, \qquad \tilde{\xi} = \pd{\bar{H}}{\tilde{\mu}}, \bigskip\\ \covd{\bar{\lambda}}{t} = -\pd{\bar{H}}{\bar{x}} - \ip{ \tilde{\mu} }{ i_{\dot{\bar{x}}}\tilde{\mathcal{B}} }, \qquad \covd{\tilde{\mu}}{t} = \operatorname{ad}^{*}_{\tilde{\xi}}\tilde{\mu}, \end{array} \end{equation} where $\bar{\lambda} \oplus \tilde{\mu}$ is an element in $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$; $\tcovd{}{t}$ is the covariant derivative in the associated bundle (see \citet[Section~2.3]{CeMaRa2001} and \citet{CeMaPeRa2003}); $\tilde{\mathcal{B}}$ is the reduced curvature form defined as follows (see \citet[Lemma~4.5]{CeMaPeRa2003}): Let $\operatorname{hor}_{x}: T_{x}M \to T_{x}M$ be the horizontal component defined by the connection form $\mathcal{A}$: \begin{equation*} \operatorname{hor}_{x}(\mathcal{X}_{x}) = \mathcal{X}_{x} - (\mathcal{A}_{x}(\mathcal{X}_{x}))_{M}(x), \end{equation*} where $(\,\cdot\,)_{M}: \mathfrak{g} \to \mathfrak{X}(M)$ is the infinitesimal generator. Also, let $\mathcal{B}$ be the curvature of the connection form $\mathcal{A}: TM \to \mathfrak{g}$, i.e., it is the $\mathfrak{g}$-valued two-form on $M$ defined by \begin{equation*} \mathcal{B}_{x}(\mathcal{X}_{x}, \mathcal{Y}_{x}) = d\mathcal{A}_{x}(\operatorname{hor}_{x}(\mathcal{X}_{x}), \operatorname{hor}_{x}(\mathcal{Y}_{x})). 
\end{equation*} Then the reduced curvature form $\tilde{\mathcal{B}}$ is the $\tilde{\mathfrak{g}}$-valued two-form on $M/G$ defined by \begin{equation*} \tilde{\mathcal{B}}_{\bar{x}}(X_{\bar{x}}, Y_{\bar{x}}) = [x, \mathcal{B}_{x}(\mathcal{X}_{x}, \mathcal{Y}_{x})]_{G} \end{equation*} for any $X_{\bar{x}}, Y_{\bar{x}} \in T_{\bar{x}}(M/G)$ and $\mathcal{X}_{x}, \mathcal{Y}_{x} \in T_{x}M$ such that $T_{x}\pi(\mathcal{X}_{x}) = X_{\bar{x}}$ and $T_{x}\pi(\mathcal{Y}_{x}) = Y_{\bar{x}}$. In coordinates (see \citet[Section~4]{CeMaRa2001b} for details), Eq.~\eqref{eq:Hamilton-Poincare} becomes \begin{equation*} \begin{array}{c} \dot{\bar{x}}^{\alpha} = \pd{\bar{H}}{\bar{\lambda}^{\alpha}}, \qquad \tilde{\xi}^{a} = \pd{\bar{H}}{\tilde{\mu}_{a}}, \bigskip\\ \dot{\bar{\lambda}}_{\alpha} = -\pd{\bar{H}}{\bar{x}^{\alpha}} - \tilde{\mu}_{a} \parentheses{ \mathcal{B}^{a}_{\beta \alpha} \dot{\bar{x}}^{\beta} + \mathcal{A}^{b}_{\alpha} C^{a}_{d b} \pd{\bar{H}}{\tilde{\mu}_{d}} }, \qquad \dot{\tilde{\mu}}_{a} = \tilde{\mu}_{b} C^{b}_{d a} \parentheses{ \pd{\bar{H}}{\tilde{\mu}_{d}} - \mathcal{A}^{d}_{\alpha} \dot{\bar{x}}^{\alpha} }, \end{array} \end{equation*} where $\tilde{\xi}^{a}$ and $\tilde{\mu}_{a}$ are the {\em locked body angular velocity} and its corresponding momentum (see \citet[Section~5.3]{BlKrMaMu1996}) defined by \begin{equation*} \tilde{\xi}^{a} = \xi^{a} + \mathcal{A}^{a}_{\alpha} \dot{\bar{x}}^{\alpha} = \parentheses{ \operatorname{Ad}_{g^{-1}}\mathcal{A}_{(\bar{x}, g)}(\dot{\bar{x}}, \dot{g}) }^{a}, \qquad \tilde{\mu}_{a} = \parentheses{ \operatorname{Ad}_{g}^{*} {\bf J}(\lambda_{x}) }_{a}. 
\end{equation*} Here $\xi = T_{g}L_{g^{-1}}(\dot{g})$, and the coefficients $\mathcal{A}^{a}_{\alpha}$ are defined by the coordinate expression for the connection form $\mathcal{A}$ as follows: \begin{equation*} \mathcal{A}_{(\bar{x}, g)}(\dot{\bar{x}}, \dot{g}) = \operatorname{Ad}_{g}\parentheses{ (\xi^{a} + \mathcal{A}^{a}_{\alpha} \dot{\bar{x}}^{\alpha})\, {\bf e}_{a} }, \end{equation*} where $\{ {\bf e}_{a} \}_{a=1}^{\dim G}$ is a basis for the Lie algebra $\mathfrak{g}$. Also the coefficients $\mathcal{B}^{a}_{\beta\alpha}$ of the curvature are given by \begin{equation*} \mathcal{B}^{a}_{\beta\alpha} = \pd{\mathcal{A}^{a}_{\beta}}{\bar{x}^{\alpha}} - \pd{\mathcal{A}^{a}_{\alpha}}{\bar{x}^{\beta}} - C^{a}_{b c} \mathcal{A}^{b}_{\alpha} \mathcal{A}^{c}_{\beta}. \end{equation*} \subsection{Poisson Reduction of the Pontryagin Maximum Principle} \label{ssec:PoissonRedOfPMP} Let us apply the above Poisson reduction to the Hamiltonian system~\eqref{eq:HamiltonianSystem} defined by the maximum principle. We first calculate the reduced optimal Hamiltonian $\bar{H}$ corresponding to the optimal Hamiltonian~\eqref{eq:H}.
Using the identification in Eq.~\eqref{eq:alpha_mathcalA^-1^star} and also the reduced optimal control \begin{equation*} \bar{u}^{\star}: T^{*}M/G \to E/G, \end{equation*} which is well-defined due to Eq.~\eqref{eq:u^star-symmetry}, we can rewrite the Hamiltonian $H$ as follows: \begin{align*} H(\lambda_{x}) &= \ip{ (\alpha_{\mathcal{A}}^{-1})^{*}(\lambda_{x}) }{\, \alpha_{\mathcal{A}} \circ f(u^{\star}_{x}(\lambda_{x})) } - C(u^{\star}_{x}(\lambda_{x})) \\ &= \ip{ \operatorname{hl}_{x}^{*}(\lambda_{x}) }{\, \bar{f}_{M/G}^{\star}([\lambda_{x}]_{G}) } + \ip{ [x, {\bf J}(\lambda_{x})]_{G} }{\, \bar{f}_{\tilde{\mathfrak{g}}}^{\star}([\lambda_{x}]_{G}) } - \bar{C}^{\star}([\lambda_{x}]_{G}), \end{align*} where we defined the reduced cost function $\bar{C}: E/G \to \mathbb{R}$ by $\bar{C} \circ \pi^{E}_{G} = C$ and also \begin{equation*} \bar{f}_{M/G}^{\star}: T^{*}M/G \to T(M/G), \qquad \bar{f}_{\tilde{\mathfrak{g}}}^{\star}: T^{*}M/G \to \tilde{\mathfrak{g}}, \qquad \bar{C}^{\star}: T^{*}M/G \to \mathbb{R} \end{equation*} by \begin{equation*} \begin{array}{c} \displaystyle \bar{f}_{M/G}^{\star}([\lambda_{x}]_{G}) \mathrel{\mathop:}= \bar{f}_{M/G} \circ \bar{u}^{\star}_{\bar{x}}([\lambda_{x}]_{G}), \qquad \bar{f}_{\tilde{\mathfrak{g}}}^{\star}([\lambda_{x}]_{G}) \mathrel{\mathop:}= \bar{f}_{\tilde{\mathfrak{g}}} \circ \bar{u}^{\star}_{\bar{x}}([\lambda_{x}]_{G}), \medskip\\ \bar{C}^{\star}([\lambda_{x}]_{G}) \mathrel{\mathop:}= \bar{C} \circ \bar{u}^{\star}_{\bar{x}}([\lambda_{x}]_{G}). 
\end{array} \end{equation*} Define the reduced optimal Hamiltonian $\bar{H}: T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*} \to \mathbb{R}$ by \begin{equation} \label{eq:barH} \bar{H}\parentheses{ \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} } \mathrel{\mathop:}= \ip{ \bar{\lambda}_{\bar{x}} }{ \bar{f}_{M/G}^{\star}\parentheses{ \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} } } + \ip{ \tilde{\mu}_{\bar{x}} }{ \bar{f}_{\tilde{\mathfrak{g}}}^{\star}\parentheses{ \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} } } - \bar{C}^{\star}\parentheses{ \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} }, \end{equation} where we identified $T^{*}M/G$ with $T^{*}(M/G) \oplus \tilde{\mathfrak{g}}^{*}$ as the domain of the maps $\bar{f}_{M/G}^{\star}$, $\bar{f}_{\tilde{\mathfrak{g}}}^{\star}$, and $\bar{C}^{\star}$. Then we have $H(\lambda_{x}) = \bar{H}( \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} )$ with \begin{equation*} \bar{\lambda}_{\bar{x}} \mathrel{\mathop:}= \operatorname{hl}_{x}^{*}(\lambda_{x}), \qquad \tilde{\mu}_{\bar{x}} \mathrel{\mathop:}= [x, {\bf J}(\lambda_{x})]_{G}. \end{equation*} In coordinates, the reduced optimal Hamiltonian is \begin{equation*} \bar{H}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}} = \bar{\lambda}_{\alpha}\, \bar{f}_{M/G}^{\star,\alpha}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}} + \tilde{\mu}_{a}\, \bar{f}_{\tilde{\mathfrak{g}}}^{\star,a}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}} - \bar{C}^{\star}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}}. \end{equation*} Applying the Hamilton--Poincar\'e equations~\eqref{eq:Hamilton-Poincare} of \citet{CeMaPeRa2003} to this particular choice of $\bar{H}$ gives the following: \begin{theorem} Suppose that the nonlinear control system~\eqref{eq:NonlinearControlSystem} and the cost function have $G$-symmetries in the sense of Eqs.~\eqref{eq:f-symmetry} and \eqref{eq:C-symmetry}. 
Then the necessary condition of the Pontryagin maximum principle reduces to the following set of equations: \begin{equation} \label{eq:Control_Hamilton-Poincare} \begin{array}{c} \dot{\bar{x}} = \bar{f}_{M/G}^{\star}\parentheses{ \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} }, \qquad \tilde{\xi} = \bar{f}_{\tilde{\mathfrak{g}}}^{\star}\parentheses{ \bar{\lambda}_{\bar{x}} \oplus \tilde{\mu}_{\bar{x}} }, \bigskip\\ \covd{\bar{\lambda}}{t} = -\pd{\bar{H}}{\bar{x}} - \ip{ \tilde{\mu} }{ i_{\dot{\bar{x}}}\tilde{\mathcal{B}} }, \qquad \covd{\tilde{\mu}}{t} = \operatorname{ad}^{*}_{\tilde{\xi}}\tilde{\mu}, \end{array} \end{equation} or, in coordinates, \begin{equation} \label{eq:Control_Hamilton-Poincare-coordinates} \begin{array}{c} \dot{\bar{x}}^{\alpha} = \bar{f}_{M/G}^{\star,\alpha}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}}, \qquad \tilde{\xi}^{a} = \bar{f}_{\tilde{\mathfrak{g}}}^{\star,a}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}}, \bigskip\\ \dot{\bar{\lambda}}_{\alpha} = -\pd{\bar{H}}{\bar{x}^{\alpha}} - \tilde{\mu}_{a} \parentheses{ \mathcal{B}^{a}_{\beta \alpha} \dot{\bar{x}}^{\beta} + \mathcal{A}^{b}_{\alpha} C^{a}_{d b} \bar{f}_{\tilde{\mathfrak{g}}}^{\star,d}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}} }, \qquad \dot{\tilde{\mu}}_{a} = \tilde{\mu}_{b} C^{b}_{d a} \parentheses{ \bar{f}_{\tilde{\mathfrak{g}}}^{\star,d}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}} - \mathcal{A}^{d}_{\alpha} \dot{\bar{x}}^{\alpha} }. \end{array} \end{equation} \end{theorem} \begin{remark} Notice that the equations for $(\bar{x}, \bar{\lambda}, \tilde{\mu})$ are decoupled from the reconstruction equation for $\tilde{\xi}$. Thus one first solves this subsystem and then solves the reconstruction equation to recover the dynamics in the group variables.
\end{remark} \begin{remark} \label{remark:AbelianCase} If the Lie group $G$ is Abelian, then the structure constants $C^{a}_{bc}$ vanish, and thus we have \begin{equation} \label{eq:Control_Hamilton-Poincare-coordinates-Abelian} \begin{array}{c} \dot{\bar{x}}^{\alpha} = \bar{f}_{M/G}^{\star,\alpha}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}}, \qquad \tilde{\xi}^{a} = \bar{f}_{\tilde{\mathfrak{g}}}^{\star,a}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}}, \bigskip\\ \dot{\bar{\lambda}}_{\alpha} = -\pd{\bar{H}}{\bar{x}^{\alpha}} - \tilde{\mu}_{a} \mathcal{B}^{a}_{\beta \alpha} \dot{\bar{x}}^{\beta}, \qquad \dot{\tilde{\mu}}_{a} = 0. \end{array} \end{equation} In particular, the last equation gives conservation of the momentum map ${\bf J}$, which simplifies the set of equations further. In the non-Abelian case, the conservation of ${\bf J}$ is ``hidden'' in the last equation of \eqref{eq:Control_Hamilton-Poincare-coordinates} since the new variable $\tilde{\mu}$ is not ${\bf J}$ itself (which is conserved): Recall that we defined $\tilde{\mu}_{a} = \parentheses{ \operatorname{Ad}_{g}^{*} {\bf J}(\lambda_{x}) }_{a}$, which reduces to $\tilde{\mu}_{a} = {\bf J}(\lambda_{x})_{a}$ in the Abelian case. Notice also that, after solving for $(\bar{x}, \bar{\lambda})$, the second equation in \eqref{eq:Control_Hamilton-Poincare-coordinates-Abelian} is solved by quadrature: The equation reduces to the form $g^{-1}(t)\dot{g}(t) = \zeta(t)$, where $\zeta(t)$ is a known curve in the Lie algebra $\mathfrak{g}$, and thus we can integrate the equation easily to obtain \begin{equation*} g(t) = \exp\parentheses{ \int_{0}^{t} \zeta(s)\,ds}, \end{equation*} since $G$ is Abelian, so that all the bracket terms in the iterated integrals coming from the Picard iteration vanish (see, e.g., \citet{Is2002}).
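The quadrature formula above can be checked numerically. The following sketch is our own illustration, taking $G = SO(2)$ and the hypothetical curve $\zeta(t) = \cos t$ in $\mathfrak{so}(2) \cong \mathbb{R}$; it compares $g(T) = \exp\parentheses{\int_{0}^{T} \zeta(s)\,ds}$ with a step-by-step integration of the reconstruction equation $g^{-1}\dot{g} = \zeta(t)$.

```python
import numpy as np

def rotation(theta):
    # the exponential of theta * J in SO(2), with J the infinitesimal rotation
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def zeta(t):
    # a known curve in the Lie algebra so(2) ~ R (hypothetical choice)
    return np.cos(t)

T, n = 2.0, 2000
h = T / n
mids = (np.arange(n) + 0.5) * h   # midpoints for the quadrature

# quadrature: g(T) = exp( (int_0^T zeta ds) J ); here int_0^T cos = sin(T)
theta = np.sum(zeta(mids)) * h
g_quad = rotation(theta)

# direct reconstruction: compose the exact flows of the frozen equations,
# g_{k+1} = g_k exp(zeta(t_k) h); since SO(2) is Abelian the factors commute
g = np.eye(2)
for t in mids:
    g = g @ rotation(zeta(t) * h)
```

Because $SO(2)$ is Abelian, the product of the incremental rotations collapses to a single rotation by $\sum_{k} \zeta(t_{k})\,h$, i.e., exactly the quadrature formula; for a non-Abelian group the two computations would differ by the bracket (Magnus-type) corrections mentioned in the remark.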
\end{remark} \begin{remark} \label{remark:ComparisonWithBlSc2000-2} The reduction of \citet{BlSc2000} takes advantage of the fact that the momentum map ${\bf J}$ vanishes due to the transversality condition. In our setting, the endpoints are fixed and thus this does not hold in general; hence the equations cannot be reduced to the cotangent bundle $T^{*}(M/G)$ as discussed at the end of Section~3 of \cite{BlSc2000}. This is why our reduced equations are slightly more complicated than theirs. Note, however, that the vanishing of ${\bf J}$ implies $\tilde{\mu} = 0$, and thus our result simplifies to theirs, except, of course, for the differences in our settings and formulations mentioned in Section~\ref{ssec:MainResultsAndComparison} and Remark~\ref{remark:ComparisonWithBlSc2000}. \end{remark} \section{How Do We Choose the Principal Connection?} \label{sec:PrincipalConnection} We have shown that an optimal control system with symmetry may be reduced to the Hamilton--Poincar\'e equations~\eqref{eq:Control_Hamilton-Poincare}. However, we did not address the issue of how we should choose the principal connection form $\mathcal{A}$ introduced in Eq.~\eqref{eq:mathcalA}. Whereas sometimes the problem setting provides a natural choice of principal connection, such as the mechanical connection in the falling cat problem~(see, e.g., \citet{Mo1990}), it is often not clear what choice should be made. One may realize the same setting as the falling cat problem (the ``purely kinematic'' case discussed below in Example~\ref{ex:PurelyKinematicCase}) by {\em choosing} some particular symmetry subgroup of a larger symmetry group of the system; however, this means that one is forced to make a particular choice of symmetry group even when a larger symmetry group is available.
See, e.g., Examples~\ref{ex:mathcalA-Snakeboard-R2} and \ref{ex:mathcalA-Snakeboard} below: The choice $G = \mathbb{R}^{2}$ realizes the ``purely kinematic'' case, but we have $\mathbb{R}^{2} \times SO(2)$ as a larger symmetry group. In this section, we show a construction of a principal connection that does not impose such constraints on the choice of the symmetry group $G$. This construction is particularly explicit for affine and kinematic control systems (Sections~\ref{ssec:NonholonomicConnection} and \ref{ssec:AffineOptimalControlSystem}), but can also be formulated in more general settings under certain assumptions (Section~\ref{ssec:MomentumMapAndConnection}). \subsection{Principal Connection} Let $\mathcal{O}(x)$ be the orbit of the $G$-action $\Phi$ on $M$ (defined in Section~\ref{ssec:SymmetryInNonlinearControlSystems}) through $x \in M$, and $\mathcal{V}_{x}$ be its tangent space at $x$, i.e., \begin{equation*} \mathcal{O}(x) \mathrel{\mathop:}= \setdef{ \Phi_{g}(x) \in M }{ g \in G }, \qquad \mathcal{V}_{x} \mathrel{\mathop:}= T_{x}\mathcal{O}(x). \end{equation*} Then a {\em principal connection} $\mathcal{H}$ on the principal bundle $\pi: M \to M/G$ is given by a $G$-invariant distribution that complements $\mathcal{V}$, i.e., \begin{equation*} T\Phi_{g}(\mathcal{H}) = \mathcal{H} \text{ for $\forall g \in G$} \quad\text{and}\quad T_{x}M = \mathcal{H}_{x} \oplus \mathcal{V}_{x} \text{ for $\forall x \in M$}. \end{equation*} Then one may find the corresponding {\em principal connection form} $\mathcal{A}: TM \to \mathfrak{g}$ such that $\mathcal{A}_{x}(\xi_{M}(x)) = \xi$ for any $\xi \in \mathfrak{g}$ and $\ker\mathcal{A}_{x} = \mathcal{H}_{x}$ for any $x \in M$.
\subsection{Nonholonomic Connection} \label{ssec:NonholonomicConnection} One example of principal connection is the so-called {\em nonholonomic connection} introduced in \citet[Section~6.4]{BlKrMaMu1996} (see also \citet[Section~3]{CeMaRa2001b}) for reduction of nonholonomic mechanical systems. As we shall see in Section~\ref{ssec:AffineOptimalControlSystem}, the nonholonomic connection---complemented by the results of Section~\ref{ssec:MomentumMapAndConnection}---turns out to be a natural choice of principal connection for {\em affine optimal control systems}. First we make the following ``dimension assumption''~\cite{BlKrMaMu1996}: \begin{equation*} T_{x}M = \mathcal{D}_{x} + \mathcal{V}_{x}, \end{equation*} where we recall that $\mathcal{D}$ is the distribution defined by the control vector fields (see Eq.~\eqref{eq:mathcalD}). Now let (see Fig.~\ref{fig:NonholonomicConnection}) \begin{equation*} \mathcal{S}_{x} \mathrel{\mathop:}= \mathcal{D}_{x} \cap \mathcal{V}_{x}. \end{equation*} Then one may choose, exploiting an additional geometric structure, a certain complementary subspace $\mathcal{H}_{x}$ of $\mathcal{S}_{x}$ in $\mathcal{D}_{x}$ to write $\mathcal{D}_{x}$ as the direct sum of them: \begin{equation*} \mathcal{D}_{x} = \mathcal{H}_{x} \oplus \mathcal{S}_{x}. \end{equation*} One may also introduce a complementary subspace $\mathcal{U}_{x}$ to $\mathcal{S}_{x}$ in $\mathcal{V}_{x}$ as well: \begin{equation*} \mathcal{V}_{x} = \mathcal{S}_{x} \oplus \mathcal{U}_{x}. \end{equation*} As a result, we have the following decomposition of the tangent space $T_{x}M$: \begin{equation*} T_{x}M = \mathcal{H}_{x} \oplus \mathcal{V}_{x} = \mathcal{H}_{x} \oplus \mathcal{S}_{x} \oplus \mathcal{U}_{x}. \end{equation*} If, in addition, $\mathcal{H}$ is $G$-invariant, i.e., $T\Phi_{g}(\mathcal{H}) = \mathcal{H}$, then it defines a principal connection on the principal bundle $\pi: M \to M/G$; it is called the {\em nonholonomic connection}~\cite{BlKrMaMu1996}. 
Note, however, that {\em the choice of $\mathcal{H}$ is not unique without some additional structure.} We will come back to this issue in the subsection to follow. \begin{figure}[ht!] \centering \includegraphics[width=.55\linewidth]{NonholonomicConnection} \caption{Nonholonomic connection~\cite{BlKrMaMu1996, CeMaRa2001b}. $\mathcal{D}_{x}$ is spanned by the control vector fields $\{X_{i}\}_{i=1}^{d}$; $\mathcal{V}_{x}$ is the tangent space to the group orbit through $x \in M$; $\mathcal{H}_{x}$ defines a principal connection.} \label{fig:NonholonomicConnection} \end{figure} Using the nonholonomic connection, the reduced control system~\eqref{eq:ReducedControlSystem} can be written as \begin{equation} \label{eq:ReducedControlSystem-affine} \dot{\bar{x}} = \bar{X}_{0}(\bar{x}) + \sum_{i=1}^{d} u_{i} \bar{X}_{i}(\bar{x}), \qquad \tilde{\xi}_{\bar{x}} = \brackets{x, \mathcal{A}_{x} \cdot X_{0}(x) }_{G} + \sum_{i=1}^{d} u_{i}\,\brackets{x, \mathcal{A}_{x} \cdot X_{i}(x) }_{G}, \end{equation} where $\bar{X}_{i} \mathrel{\mathop:}= T\pi(X_{i})$ for $i = 0, 1, \dots, d$. The following special case, often called the {\em ``purely kinematic''} case~\cite{BlKrMaMu1996}, gives a simple (although somewhat trivial) example of a nonholonomic connection: \begin{example}[Purely kinematic case---Control of deformable bodies and robotic locomotion] \label{ex:PurelyKinematicCase} Consider the special case where the tangent space to the group orbit $\mathcal{V}_{x} = T_{x}\mathcal{O}(x)$ exactly complements the $G$-invariant distribution $\mathcal{D}_{x}$, i.e., $\mathcal{S}_{x} = 0$ and thus \begin{equation*} T_{x}M = \mathcal{D}_{x} \oplus \mathcal{V}_{x}. \end{equation*} This is the special case called the ``purely kinematic'' case or ``Chaplygin systems'' in the context of nonholonomic mechanics~\cite{BlKrMaMu1996}.
In this case, $\mathcal{D}_{x}$ itself gives the horizontal space $\mathcal{H}_{x}$ and thus defines the connection form $\mathcal{A}: TM \to \mathfrak{g}$ such that $\ker\mathcal{A}_{x} = \mathcal{D}_{x}$ (recall the $G$-symmetry of $\mathcal{D}$, i.e., Eq.~\eqref{eq:mathcalD-symmetry}). As a result, Eq.~\eqref{eq:ReducedControlSystem-affine} becomes \begin{equation*} \dot{\bar{x}} = \bar{X}_{0}(\bar{x}) + \sum_{i=1}^{d} u_{i} \bar{X}_{i}(\bar{x}), \qquad \tilde{\xi}_{\bar{x}} = \brackets{ x, \mathcal{A}_{x}(X_{0}(x)) }_{G}. \end{equation*} In particular, for the drift-free case, i.e., $X_{0}(x) = 0$, we have $\mathcal{A}_{x}(X_{0}(x)) = 0$ and so $\tilde{\xi}_{\bar{x}} = [x, \mathcal{A}_{x}(\dot{x})]_{G} = 0$, which implies $ \mathcal{A}_{x}(\dot{x}) = 0$. With local coordinates $(\bar{x}, g)$ for $M$, we may express the connection $\mathcal{A}$ as \begin{equation*} \mathcal{A}_{(\bar{x},g)}\parentheses{ \dot{\bar{x}},\dot{g} } = \operatorname{Ad}_{g} \parentheses{ g^{-1}\dot{g} + \mathcal{A}(\bar{x})\,\dot{\bar{x}} }, \end{equation*} where we slightly abused the notation to use $\mathcal{A}(\bar{x})$ as a coordinate expression for the connection form $\mathcal{A}_{(\bar{x},g)}$. As a result, Eq.~\eqref{eq:ReducedControlSystem-affine} becomes \begin{equation*} \dot{\bar{x}} = \sum_{i=1}^{d} u_{i} \bar{X}_{i}(\bar{x}), \qquad g^{-1}\dot{g} = -\mathcal{A}(\bar{x}) \dot{\bar{x}}. \end{equation*} This is the basic setting for control of deformable bodies (see, e.g., \citet{Mo1993a}) and also of robotic locomotion (see, e.g., \citet{LiCa1993}, \citet{KeMu1995}, and \citet[Chapters~7 and 8]{MuLiSa1994}); for the former, the connection form $\mathcal{A}$ is defined by the mechanical connection (see, e.g., \citet[Section~2.1]{MaMiOrPeRa2007}) whereas for the latter it is defined by the distribution $\mathcal{D}$ arising from the nonholonomic constraints. 
\end{example} {\em However, in general, $\mathcal{S}_{x} = \mathcal{D}_{x} \cap \mathcal{V}_{x} \neq 0$, and so the choice of the subspace $\mathcal{H}_{x}$ is not trivial, and thus we need to resort to additional ingredients to specify $\mathcal{H}$}; this is the topic of the next subsection. \subsection{Momentum Map and Principal Connection} \label{ssec:MomentumMapAndConnection} To get around the above-mentioned difficulty in specifying the principal connection $\mathcal{H}$, we propose a way to exploit the momentum map (see Eq.~\eqref{eq:J}) corresponding to the symmetry group. Specifically, we give a generalization of the mechanical connection (see, e.g., \citet[Section~2.1]{MaMiOrPeRa2007}) for systems with degenerate Hamiltonians. The main result in this subsection, Proposition~\ref{prop:mathcalH-PrincipalConnection}, does not assume the affine optimal control setting, but is proved under quite strong assumptions. In Section~\ref{ssec:AffineOptimalControlSystem} below, we show that these assumptions are automatically satisfied for a certain class of affine optimal control systems, and also that the construction of principal connection developed here gives a unique choice of the horizontal space $\mathcal{H}$. Let ${\bf J}: T^{*}M \to \mathfrak{g}^{*}$ be the momentum map for the Hamiltonian system~\eqref{eq:HamiltonianSystem} associated with the optimal control of the nonlinear control system~\eqref{eq:NonlinearControlSystem}. 
Recall (see, e.g., \citet[Section~2.1]{MaMiOrPeRa2007}) that the mechanical connection form $\mathcal{A}: TM \to \mathfrak{g}$ is defined by \begin{equation*} \mathcal{A} = \mathbb{I}^{-1} \circ {\bf J} \circ \mathbb{F}L, \end{equation*} with the locked inertia tensor $\mathbb{I}: \mathfrak{g} \to \mathfrak{g}^{*}$ and a Lagrangian $L: TM \to \mathbb{R}$; $\mathbb{F}L: TM \to T^{*}M$ is the Legendre transformation defined by \begin{equation*} \ip{ \mathbb{F}L(v_{x}) }{ w_{x} } = \left.\od{}{\varepsilon} L(v_{x} + \varepsilon\,w_{x}) \right|_{\varepsilon=0} \end{equation*} for any $v_{x}, w_{x} \in T_{x}M$. This definition does not directly apply to our setting, since there is usually no such Lagrangian $L$ in the optimal control setting. Therefore, we need to generalize the notion of the mechanical connection here: Let $\Phi: G \times M \to M$ be a free and proper action of a Lie group $G$, and $H: T^{*}M \to \mathbb{R}$ be a Hamiltonian. Define $\mathbb{F}H: T^{*}M \to TM$ by \begin{equation*} \ip{ \mathbb{F}H(\alpha_{x}) }{ \beta_{x} } = \left.\od{}{\varepsilon} H(\alpha_{x} + \varepsilon\,\beta_{x}) \right|_{\varepsilon=0} \end{equation*} for any $\alpha_{x}, \beta_{x} \in T^{*}_{x}M$. We assume that $\mathbb{F}H: T^{*}M \to TM$ is linear and thus $\mathcal{H} \subset TM$ defined by \begin{equation*} \mathcal{H} \mathrel{\mathop:}= \mathbb{F}H \parentheses{ {\bf J}^{-1}(0) } \end{equation*} gives a distribution on $M$. Then, under certain assumptions, $\mathcal{H}$ gives a principal connection on $\pi: M \to M/G$: \begin{proposition} \label{prop:mathcalH-PrincipalConnection} Let $\mathcal{V}_{x} = T_{x}\mathcal{O}(x)$ be the tangent space to the group orbit $\mathcal{O}$ of the action $\Phi$. 
Suppose that the Hamiltonian $H$ is $G$-invariant, $\mathbb{F}H: T^{*}M \to TM$ is a linear map that is non-degenerate on ${\bf J}^{-1}(0)$, and also that the intersection of $\mathcal{V}$ and $\mathcal{H}$ is trivial, i.e., $\mathcal{H}_{x} \cap \mathcal{V}_{x} = 0$ for any $x \in M$. Then $\mathcal{H}$ defines a principal connection on $\pi: M \to M/G$. \end{proposition} \begin{proof} See Appendix~\ref{sec:mathcalH-PrincipalConnection-proof}. \end{proof} The $G$-invariance of the Hamiltonian is always satisfied in our setting, as mentioned in Section~\ref{ssec:PoissonRedAndHPEq}. The other conditions are somewhat contrived, and it is not clear whether one may further weaken them in general settings. However, in the next subsection, we show that the linearity of $\mathbb{F}H$ and $\mathcal{H}_{x} \cap \mathcal{V}_{x} = 0$ are automatically satisfied for a certain class of affine optimal control problems. \subsection{Application to Affine Optimal Control Systems with Quadratic Cost Functions} \label{ssec:AffineOptimalControlSystem} We apply Proposition~\ref{prop:mathcalH-PrincipalConnection} to a certain class of affine optimal control problems and show that it helps us identify the unique principal connection $\mathcal{H}$ even in the non-purely kinematic case. Consider the following affine optimal control problem: \begin{equation} \label{eq:AffineOptimalControlSystem} \dot{x} = X_{0}(x) + \sum_{i=1}^{d} u_{i} X_{i}(x), \qquad C(x, u) = \frac{1}{2} g_{ij} u_{i} u_{j}, \end{equation} where $g_{ij} \mathrel{\mathop:}= g(X_{i}, X_{j})$ for $1 \le i, j \le d$ with a $G$-invariant sub-Riemannian metric $g$ on $M$ that is positive-definite on the distribution $\mathcal{D} \mathrel{\mathop:}= \Span\{ X_{1}, \dots, X_{d} \}$.
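The first-order condition for this quadratic cost can be sketched numerically. At a fixed point $x$, writing $b_{i} = \ip{\lambda_{x}}{X_{i}(x)}$, the quantity $u_{i} b_{i} - \frac{1}{2} g_{ij} u_{i} u_{j}$ is a concave quadratic in $u$ maximized at $u^{\star} = g^{-1} b$, with optimal value $\frac{1}{2} g^{ij} b_{i} b_{j}$. The sketch below is our own illustration using randomly generated (hypothetical) data for $g_{ij}$, the values of the vector fields $X_{i}$ at a point, and $\lambda_{x}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 5                          # number of controls and dim M (arbitrary sizes)

A = rng.standard_normal((d, d))
g_ij = A @ A.T + d * np.eye(d)       # a positive-definite metric on the distribution
X = rng.standard_normal((d, m))      # row i: components of X_i at a point x
lam = rng.standard_normal(m)         # a covector lambda_x

b = X @ lam                          # b_i = <lambda_x, X_i(x)>

def H_hat_df(u):
    # drift-free control Hamiltonian: u_i <lam, X_i> - (1/2) g_ij u_i u_j
    return u @ b - 0.5 * u @ g_ij @ u

u_star = np.linalg.solve(g_ij, b)    # stationarity: g_ij u_j = <lam, X_i>

# the optimal value equals (1/2) g^{ij} b_i b_j ...
H_df = 0.5 * b @ np.linalg.solve(g_ij, b)

# ... and u_star indeed maximizes the concave quadratic H_hat_df
for _ in range(100):
    v = 0.1 * rng.standard_normal(d)
    assert H_hat_df(u_star + v) <= H_hat_df(u_star) + 1e-12
```

The same linear solve reappears below as the fiberwise map $\mathbb{F}H_{\rm df}$, which sends a covector to the tangent vector $g^{ij} b_{i} X_{j}$.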
Let us first introduce a couple of notions to be used in the discussion to follow: \begin{definition} The {\em drift-free} control Hamiltonian $\hat{H}_{\rm df}: T^{*}M \oplus E \to \mathbb{R}$ for the affine control system~\eqref{eq:AffineOptimalControlSystem} is defined by \begin{equation*} \hat{H}_{\rm df}(\lambda_{x}, u_{x}) \mathrel{\mathop:}= \sum_{i=1}^{d} u_{i} \ip{ \lambda_{x} }{ X_{i}(x) } - \frac{1}{2} g_{ij} u_{i} u_{j}. \end{equation*} Setting $\mathbb{F}_{\rm c}\hat{H}_{\rm df} = \mathbb{F}_{\rm c}\hat{H} = 0$ gives the optimal control \begin{equation} \label{eq:u^star-AffineOptimal} u^{\star}_{i}(\lambda_{x}) = g^{ij} \ip{\lambda_{x}}{ X_{j}(x) }, \end{equation} and thus we may define the {\em drift-free optimal Hamiltonian} $H_{\rm df}: T^{*}M \to \mathbb{R}$ by \begin{equation*} H_{\rm df}(\lambda_{x}) \mathrel{\mathop:}= \hat{H}_{\rm df}\parentheses{ \lambda_{x}, u^{\star}_{x}(\lambda_{x}) } = \frac{1}{2} g^{ij} \ip{\lambda_{x}}{ X_{i}(x) } \ip{\lambda_{x}}{ X_{j}(x) }. \end{equation*} For kinematic control systems, i.e., $X_{0}(x) = 0$, we have $\hat{H}_{\rm df} = \hat{H}$ and $H_{\rm df} = H$. \end{definition} \begin{remark} As we shall see below, the drift-free optimal Hamiltonian is used merely to define a map from ${\bf J}^{-1}(0) \subset T^{*}M$ to $TM$. Note also that $H_{\rm df}$ is degenerate unless $d = m \mathrel{\mathop:}= \dim M$, i.e., the system is fully actuated. \end{remark} \begin{proposition} Suppose that the affine optimal control system~\eqref{eq:AffineOptimalControlSystem} is $G$-invariant in the sense described in Sections~\ref{ssec:SymmetryInAffineControlSystems} and \ref{ssec:PMPandSymmetryInOptimalControl}, and also that $\mathbb{F}H_{\rm df}: T^{*}M \to TM$ restricted to ${\bf J}^{-1}(0)$ is non-degenerate.
Then the distribution \begin{equation} \label{eq:mathcalH} \mathcal{H} \mathrel{\mathop:}= \mathbb{F}H_{\rm df} \parentheses{ {\bf J}^{-1}(0) } \subset TM \end{equation} defines a principal connection on $\pi: M \to M/G$. \end{proposition} \begin{proof} Clearly, the $G$-invariance of the optimal control system implies that of $H_{\rm df}$ as well. Therefore, by Proposition~\ref{prop:mathcalH-PrincipalConnection}, it remains to show $\mathcal{H}_{x} \cap \mathcal{V}_{x} = 0$. First notice that the Legendre transformation $\mathbb{F}H_{\rm df}: T^{*}M \to TM$ is given by \begin{equation} \label{eq:FH_df} \alpha_{x} \mapsto \mathbb{F}H_{\rm df}(\alpha_{x}) \mathrel{\mathop:}= g^{ij} \ip{\alpha_{x}}{ X_{i}(x) } X_{j}(x). \end{equation} Let $\xi$ be an element in $\mathfrak{g}$ such that $\xi_{M}(x)$ is in $\mathcal{H}_{x}$. Then $\xi_{M}(x) = \mathbb{F}H_{\rm df}(\alpha_{x})$ for some $\alpha_{x} \in {\bf J}^{-1}(0)$, and thus, using the definition of the momentum map ${\bf J}$, we have \begin{equation*} \ip{ \alpha_{x} }{ \mathbb{F}H_{\rm df}(\alpha_{x}) } = \ip{ \alpha_{x} }{ \xi_{M}(x) } = \ip{ {\bf J}(\alpha_{x}) }{ \xi } = 0. \end{equation*} On the other hand, \begin{equation*} \ip{ \alpha_{x} }{ \mathbb{F}H_{\rm df}(\alpha_{x}) } = g^{ij} \ip{\alpha_{x}}{ X_{i}(x) } \ip{\alpha_{x}}{ X_{j}(x) }. \end{equation*} Since $g^{ij}$ is positive-definite, we have $\ip{\alpha_{x}}{ X_{j}(x) } = 0$ for $j = 1, \dots, d$ and hence $\xi_{M}(x) = \mathbb{F}H_{\rm df}(\alpha_{x}) = 0$. Therefore, it follows that $\mathcal{H}_{x} \cap \mathcal{V}_{x} = 0$. \end{proof} \begin{remark} It is clear from Eqs.~\eqref{eq:mathcalH} and \eqref{eq:FH_df} that $\mathcal{H}_{x}$ is a subspace of $\mathcal{D}_{x}$. Since $T_{x}M = \mathcal{H}_{x} \oplus \mathcal{V}_{x}$ as well, the definition of $\mathcal{H}_{x}$ coincides with that of Section~\ref{ssec:NonholonomicConnection}.
\end{remark} Let us first consider the purely kinematic case: \begin{example}[Snakeboard~\cite{OsLeMuBu1994, BlKrMaMu1996, KoMa1997c, BuLe2003a} with $\mathbb{R}^{2}$-symmetry] \label{ex:mathcalA-Snakeboard-R2} We consider a kinematic optimal control problem of the snakeboard shown in Fig.~\ref{fig:Snakeboard}. \begin{figure}[htbp] \centering \includegraphics[width=.55\linewidth]{Snakeboard} \caption{The Snakeboard.} \label{fig:Snakeboard} \end{figure} The configuration space is $M = SE(2) \times \mathbb{S}^{1} \times \mathbb{S}^{1} = \{ (x_{1}, x_{2}, \theta, \psi, \phi) \}$. The velocity constraints are given by \begin{equation*} \dot{x}_{1} + (r \cos\theta \cot\phi)\,\dot{\theta} = 0, \quad \dot{x}_{2} + (r \sin\theta \cot\phi)\,\dot{\theta} = 0, \end{equation*} and thus we have $\mathcal{D} = \Span\{ X_{1}, X_{2}, X_{3} \}$ with \begin{equation*} X_{1}(x) = \cos\theta\,\pd{}{x_{1}} + \sin\theta\,\pd{}{x_{2}} - \frac{\tan\phi}{r}\,\pd{}{\theta}, \qquad X_{2}(x) = \pd{}{\psi}, \qquad X_{3}(x) = \pd{}{\phi}, \end{equation*} where $x = (x_{1}, x_{2}, \theta, \psi, \phi)$. Therefore, we may consider the following kinematic control system: \begin{equation*} \dot{x} = f(x, u) \mathrel{\mathop:}= u_{1} X_{1}(x) + u_{2} X_{2}(x) + u_{3} X_{3}(x), \end{equation*} or more explicitly, \begin{equation*} \dot{x}_{1} = u_{1} \cos\theta, \qquad \dot{x}_{2} = u_{1} \sin\theta, \qquad \dot{\theta} = -u_{1} \frac{\tan\phi}{r}, \qquad \dot{\psi} = u_{2}, \qquad \dot{\phi} = u_{3}. \end{equation*} We define the cost function $C: M \times \mathbb{R}^{3} \to \mathbb{R}$ as follows: \begin{equation*} C(x, u) = \frac{1}{2}(u_{1}^{2} + u_{2}^{2} + u_{3}^{2}). \end{equation*} Then the above control system has an $SE(2) \times SO(2)$-symmetry, where $SE(2)$ acts on the $SE(2)$ portion of $M$ by left multiplication and $SO(2)$ acts on the first $\mathbb{S}^{1}$ in $M$, i.e., on the variable $\psi$.
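As a quick symbolic sanity check (a sketch outside the paper's exposition, with our own variable names), every velocity in $\mathcal{D} = \Span\{X_{1}, X_{2}, X_{3}\}$ annihilates the two constraint one-forms:

```python
import sympy as sp

th, phi, r = sp.symbols('theta phi r')
u1, u2, u3 = sp.symbols('u1 u2 u3')

# vector fields as component columns in the coordinates (x1, x2, theta, psi, phi)
X1 = sp.Matrix([sp.cos(th), sp.sin(th), -sp.tan(phi)/r, 0, 0])
X2 = sp.Matrix([0, 0, 0, 1, 0])
X3 = sp.Matrix([0, 0, 0, 0, 1])
v = u1*X1 + u2*X2 + u3*X3          # a generic velocity in the distribution D

# constraint one-forms dx1 + r cos(theta) cot(phi) dtheta and
#                      dx2 + r sin(theta) cot(phi) dtheta
w1 = sp.Matrix([1, 0, r*sp.cos(th)*sp.cot(phi), 0, 0])
w2 = sp.Matrix([0, 1, r*sp.sin(th)*sp.cot(phi), 0, 0])

print(sp.simplify(w1.dot(v)), sp.simplify(w2.dot(v)))  # 0 0
```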
Here we choose the subgroup $G = \mathbb{R}^{2}$ of $SE(2) \times SO(2)$ to show that it realizes the purely kinematic case (see Example~\ref{ex:PurelyKinematicCase}). Let $\Phi: G \times M \to M$ be the $G$-action on $M$, i.e., \begin{equation*} \Phi: ((a, b), (x_{1}, x_{2}, \theta, \psi, \phi)) \mapsto (x_{1} + a, x_{2} + b, \theta, \psi, \phi). \end{equation*} Also let $\sigma: G \times \mathbb{R}^{3} \to \mathbb{R}^{3}$ be the trivial representation: \begin{equation*} \sigma: ((a, b), (u_{1}, u_{2}, u_{3})) \mapsto (u_{1}, u_{2}, u_{3}), \end{equation*} which induces the action $\Psi: G \times E \to E$ defined by \begin{equation*} \Psi: ((a, b), (x_{1}, x_{2}, \theta, \psi, \phi, u_{1}, u_{2}, u_{3})) \mapsto (x_{1} + a, x_{2} + b, \theta, \psi, \phi, u_{1}, u_{2}, u_{3}). \end{equation*} The momentum map ${\bf J}: T^{*}M \to T_{(0,0)}\mathbb{R}^{2} \cong \mathbb{R}^{2}$ associated with the action of $\mathbb{R}^{2}$ is \begin{equation*} {\bf J}(x_{1}, x_{2}, \theta, \psi, \phi, \lambda_{1}, \lambda_{2}, \lambda_{\theta}, \lambda_{\psi}, \lambda_{\phi}) = (\lambda_{1}, \lambda_{2}), \end{equation*} and then \begin{equation*} \mathcal{H} = \mathbb{F}H\parentheses{ {\bf J}^{-1}(0) } = \Span\braces{ X_{1}, X_{2}, X_{3}}. \end{equation*} So $\mathcal{H} = \mathcal{D}$, and thus this is a purely kinematic case. \end{example} A different choice of symmetry group renders the problem non-purely kinematic. The following example illustrates this; the results here will be used later in the reduction of the system in Example~\ref{ex:Snakeboard}. \begin{example}[Snakeboard with $\mathbb{R}^{2} \times SO(2)$-symmetry] \label{ex:mathcalA-Snakeboard} Now we choose $G = \mathbb{R}^{2} \times SO(2)$; this is an Abelian group (see Remark~\ref{remark:AbelianCase}) and gives rise to a non-purely kinematic case.
Let $\Phi: G \times M \to M$ be the $G$-action on $M$, i.e., \begin{equation*} \Phi: ((a, b, \beta), (x_{1}, x_{2}, \theta, \psi, \phi)) \mapsto (x_{1} + a, x_{2} + b, \theta, \psi + \beta, \phi). \end{equation*} Also let $\sigma: G \times \mathbb{R}^{3} \to \mathbb{R}^{3}$ be the trivial representation: \begin{equation*} \sigma: ((a, b, \beta), (u_{1}, u_{2}, u_{3})) \mapsto (u_{1}, u_{2}, u_{3}), \end{equation*} which induces the action $\Psi: G \times E \to E$ defined by \begin{equation*} \Psi: ((a, b, \beta), (x_{1}, x_{2}, \theta, \psi, \phi, u_{1}, u_{2}, u_{3})) \mapsto (x_{1} + a, x_{2} + b, \theta, \psi + \beta, \phi, u_{1}, u_{2}, u_{3}). \end{equation*} Then it is straightforward to show that $f$ and $C$ satisfy the symmetry defined in Eqs.~\eqref{eq:f-symmetry} and \eqref{eq:C-symmetry}, respectively. The momentum map ${\bf J}: T^{*}M \to T_{(0,0)}\mathbb{R}^{2} \times \mathfrak{so}(2) \cong \mathbb{R}^{3}$ associated with the action of $\mathbb{R}^{2} \times SO(2)$ is \begin{equation*} {\bf J}(x_{1}, x_{2}, \theta, \psi, \phi, \lambda_{1}, \lambda_{2}, \lambda_{\theta}, \lambda_{\psi}, \lambda_{\phi}) = (\lambda_{1}, \lambda_{2}, \lambda_{\psi}), \end{equation*} and so \begin{equation*} \mathcal{H} = \mathbb{F}H\parentheses{ {\bf J}^{-1}(0) } = \Span\braces{ \cos\theta\,\pd{}{x_{1}} + \sin\theta\,\pd{}{x_{2}} - \frac{\tan\phi}{r}\,\pd{}{\theta},\; \pd{}{\phi} } = \Span\braces{ X_{1}, X_{3}}. \end{equation*} Since $\mathcal{D} = \Span\braces{ X_{1}, X_{2}, X_{3} }$, this is not a purely kinematic case, and $\mathcal{S} = \Span\{ X_{2} \}$.
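This can be double-checked symbolically (a sketch with our own variable names): pairing a covector in ${\bf J}^{-1}(0)$, i.e., one with $\lambda_{1} = \lambda_{2} = \lambda_{\psi} = 0$, against $X_{1}, X_{2}, X_{3}$ shows that the $X_{2}$-coefficient of $\mathbb{F}H(\alpha_{x})$ vanishes identically, so the image is spanned by $X_{1}$ and $X_{3}$ alone:

```python
import sympy as sp

th, phi, r = sp.symbols('theta phi r')
lth, lphi = sp.symbols('lambda_theta lambda_phi')

# covector components in (x1, x2, theta, psi, phi) with lambda1 = lambda2 = lambda_psi = 0
alpha = sp.Matrix([0, 0, lth, 0, lphi])

X1 = sp.Matrix([sp.cos(th), sp.sin(th), -sp.tan(phi)/r, 0, 0])
X2 = sp.Matrix([0, 0, 0, 1, 0])
X3 = sp.Matrix([0, 0, 0, 0, 1])

# FH(alpha) = sum_i <alpha, X_i> X_i; compute the three coefficients <alpha, X_i>
coeffs = [alpha.dot(X) for X in (X1, X2, X3)]
print(coeffs)  # [-lambda_theta*tan(phi)/r, 0, lambda_phi]
```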
The connection form $\mathcal{A}: TM \to \mathfrak{g}$ is then given by \begin{equation} \label{eq:mathcalA-Snakeboard} \mathcal{A}_{(\theta, \phi)} = (dx_{1} + r \cos\theta \cot\phi\, d\theta) \otimes {\bf e}_{1} + (dx_{2} + r \sin\theta \cot\phi\, d\theta) \otimes {\bf e}_{2} + d\psi \otimes {\bf e}_{\psi}, \end{equation} where $\{ {\bf e}_{1}, {\bf e}_{2}, {\bf e}_{\psi} \}$ is a basis for the Lie algebra $T_{(0,0)}\mathbb{R}^{2} \times \mathfrak{so}(2) \cong \mathbb{R}^{3}$. We then identify the vertical space $\mathcal{U}$ as follows: \begin{equation*} \mathcal{U} = \Span\braces{ \pd{}{x_{1}},\; \pd{}{x_{2}},\; \pd{}{\psi} }. \end{equation*} Since $G$ is Abelian, the reduced curvature form $\tilde{\mathcal{B}}$ is simply the exterior derivative of the connection form, and a direct computation gives \begin{equation} \label{eq:tildemathcalB-Snakeboard} \tilde{\mathcal{B}}_{(\theta, \phi)} = r \cos\theta \csc^{2}\phi\, d\theta \wedge d\phi \otimes {\bf e}_{1} + r \sin\theta \csc^{2}\phi\, d\theta \wedge d\phi \otimes {\bf e}_{2}. \end{equation} \end{example} \section{Examples} \label{sec:Examples} This section presents several examples, illustrating how the theory specializes to a number of previous works on the subject (Sections~\ref{ssec:ControlSystemsOnLieGroups}--\ref{ssec:PurelyKinematicCase}) and how the reduction decouples the optimal control system (Section~\ref{ssec:NonPurelyKinematicCase}). \subsection{Lie--Poisson Reduction of Optimal Control of Systems on Lie Groups} \label{ssec:ControlSystemsOnLieGroups} Consider, as a special case, the nonlinear control system~\eqref{eq:NonlinearControlSystem} on a Lie group $G$, i.e., $M = G$, with symmetry under the action of $G$ on itself by left translation: \begin{equation*} L_{g}: G \to G; \quad h \mapsto g h \end{equation*} for any $g \in G$. This case is particularly simple because we do not need a principal connection and the reduced system is defined on the Lie algebra $\mathfrak{g}$.
Recall that the associated bundle $E/G = M \times_{G} \mathbb{R}^{d}$ is a bundle over $M/G$; however, $M = G$ here, and so its base space becomes $G/G$, i.e., a point; hence $E/G \cong \mathbb{R}^{d} = \{ \bar{u} \}$ and the map $\bar{f}_{M/G}$ becomes immaterial here. On the other hand, the quotient $TM/G$ becomes $TG/G \cong \mathfrak{g}$. Therefore, we have $\bar{f}_{\mathfrak{g}}: \mathbb{R}^{d} \to \mathfrak{g}$ and the control system reduces to \begin{equation*} \xi(t) = \bar{f}_{\mathfrak{g}}(\bar{u}(t)), \end{equation*} where $\xi \mathrel{\mathop:}= T_{g}L_{g^{-1}}(\dot{g})$. In particular, consider the affine control system~\eqref{eq:f-affine} on the Lie group $G$. The invariance of $X_{0}$, i.e., Eq.~\eqref{eq:X_0-symmetry}, implies that there exists an element $\zeta_{0} \in \mathfrak{g}$ such that $X_{0}(g) = T_{e}L_{g}(\zeta_{0})$ for any $g \in G$, where $e \in G$ is the identity. Likewise, the invariance of the distribution $\mathcal{D} \subset TG$, i.e., Eq.~\eqref{eq:mathcalD-symmetry}, implies that there exists a subspace $\mathfrak{d}$ in the Lie algebra $\mathfrak{g}$ of $G$ such that $\mathcal{D}_{g} = T_{e}L_{g}(\mathfrak{d})$ for any $g \in G$; so there exists a basis $\{ \zeta_{i} \}_{i=1}^{d}$ for $\mathfrak{d}$ such that $X_{i}(g) = T_{e}L_{g}(\zeta_{i})$ for any $g \in G$ and $i = 1, \dots, d$. Therefore, Eq.~\eqref{eq:R} implies that the matrix $R(h, g)$ becomes the $d \times d$ identity matrix for any $h, g \in G$. So the corresponding action $\Psi_{g}: G \times \mathbb{R}^{d} \to G \times \mathbb{R}^{d}$ acts trivially on the second factor: \begin{equation} \label{eq:Psi-LieGroup} \Psi_{g}: (h, u) \mapsto ( g h, u ). \end{equation} Hence the quotient $E/G$ becomes \begin{equation} \label{eq:E/G-LieGroup} E/G = (G \times \mathbb{R}^{d})/G = (G/G) \times \mathbb{R}^{d} \cong \mathbb{R}^{d} = \{ u \}, \end{equation} whereas we have $TG/G \cong \mathfrak{g}$.
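A minimal numerical illustration of the identification $TG/G \cong \mathfrak{g}$ (a sketch under our own conventions, with $G = SO(3)$, not from the paper): the body velocity $g^{-1}\dot{g}$ of a curve is unchanged under left translation by any fixed $g_{0}$, and hence is well defined on the quotient:

```python
import numpy as np

def hat(w):
    """Identify R^3 with so(3) via the hat map (skew-symmetric matrices)."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def rodrigues(w):
    """The rotation exp(hat(w)) via the Rodrigues formula."""
    t = np.linalg.norm(w)
    K = hat(w / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * K @ K

rng = np.random.default_rng(0)
xi = rng.standard_normal(3)                 # body velocity in so(3) ~ R^3
g = rodrigues(rng.standard_normal(3))       # a point on the curve
g0 = rodrigues(rng.standard_normal(3))      # an arbitrary left translation

gdot = g @ hat(xi)                          # left-invariant dynamics g' = g hat(xi)
body = g.T @ gdot                           # g^{-1} g' (g is orthogonal)
body_shifted = (g0 @ g).T @ (g0 @ gdot)     # body velocity of the translated curve

print(np.allclose(body, hat(xi)), np.allclose(body_shifted, hat(xi)))  # True True
```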
Now, since $f: G \times \mathbb{R}^{d} \to TG$ takes the form \begin{equation*} f(g, u) = T_{e}L_{g}\parentheses{ \zeta_{0} + \sum_{i=1}^{d} u_{i} \zeta_{i} }, \end{equation*} we obtain the map $\bar{f}_{\mathfrak{g}}: \mathbb{R}^{d} \to \mathfrak{g}$ defined by \begin{equation} \label{eq:barf-LieGroup} \bar{f}_{\mathfrak{g}}(u) \mathrel{\mathop:}= \zeta_{0} + \sum_{i=1}^{d} u_{i} \zeta_{i}. \end{equation} Therefore, we have the following reduced control system in the Lie algebra $\mathfrak{g}$: \begin{equation*} \xi(t) = \zeta_{0} + \sum_{i=1}^{d} u_{i}(t)\, \zeta_{i}. \end{equation*} This is the case considered by \citet{Kr1993} (see also \citet[Section~3]{Sa2009}). Now, assume that the cost function $C: E \to \mathbb{R}$ is also $G$-invariant, i.e., $C \circ \Psi_{h} = C$ for any $h \in G$; then Eq.~\eqref{eq:Psi-LieGroup} implies that, for any $g \in G$, we have $C(g, u) = C(e, u) = \bar{C}(u)$, where $\bar{C}$ is defined on $E/G \cong \mathbb{R}^{d}$ (recall Eq.~\eqref{eq:E/G-LieGroup}). In this case, the quotient $M/G$ becomes a point and thus the bundle $T(M/G) \oplus \tilde{\mathfrak{g}}$ becomes just $\mathfrak{g}$; as a result, $\tilde{\xi}$ is equal to $\xi$. Notice also that, since the momentum map is given by ${\bf J}(\lambda_{g}) = T_{e}^{*}R_{g}(\lambda_{g})$, we have \begin{equation*} \tilde{\mu} = [g, {\bf J}(\lambda_{g})]_{G} = [e, \operatorname{Ad}_{g}^{*}{\bf J}(\lambda_{g})]_{G} \cong \operatorname{Ad}_{g}^{*}{\bf J}(\lambda_{g}) = T_{e}^{*}L_{g}(\lambda_{g}) \in \mathfrak{g}^{*}, \end{equation*} which is the ``body angular momentum.'' Therefore, the Hamilton--Poincar\'e equations~\eqref{eq:Hamilton-Poincare} reduce to the Lie--Poisson equation~\cite{CeMaPeRa2003}: \begin{equation*} \xi = \pd{\bar{H}}{\tilde{\mu}}, \qquad \od{\tilde{\mu}}{t} = \operatorname{ad}^{*}_{\xi}\tilde{\mu}.
\end{equation*} So Eq.~\eqref{eq:Control_Hamilton-Poincare} becomes \begin{equation*} \xi = \bar{f}_{\tilde{\mathfrak{g}}}^{\star}\parentheses{ \tilde{\mu} }, \qquad \od{\tilde{\mu}}{t} = \operatorname{ad}^{*}_{\xi}\tilde{\mu}. \end{equation*} This system with an affine control, Eq.~\eqref{eq:barf-LieGroup}, and the cost function of the form \begin{equation*} C(g, u) = \bar{C}(u) = \frac{1}{2} \sum_{i=1}^{d} I_{i}\,u_{i}^{2} \end{equation*} is the case considered by \citet{Kr1993} (see also \citet[Section~5.3]{KoMa1997a} and \citet[Section~7]{Sa2009}). \subsection{Clebsch Optimal Control Problem} \label{ssec:ClebschOptimalControl} Consider the following control system defined by a group action: Let $M$ be a manifold and suppose that a $d$-dimensional Lie group $G$ acts on $M$; hence we have the infinitesimal generator $u_{M} \in \mathfrak{X}(M)$ for any element $u$ in the Lie algebra $\mathfrak{g}$. Now consider the control system \eqref{eq:NonlinearControlSystem} with $f: M \times \mathfrak{g} \to TM$ defined by \begin{equation} \label{eq:f-Clebsch} f(x, u) = u_{M}(x), \end{equation} where the element $u$ in $\mathfrak{g}$ is seen as the control here (note that $\mathfrak{g} \cong \mathbb{R}^{d}$ as a vector space). This is a control system associated with the {\em Clebsch optimal control problem} (see \citet{CoHo2009} and \citet{GaRa2011}). This problem provides a good example where the action $\sigma: G \times \mathbb{R}^{d} \to \mathbb{R}^{d}$ on the control space $\mathbb{R}^{d}$ is non-trivial (see Remark~\ref{remark:NontrivialActionToControls}). We define an action of $G$ on $E = M \times \mathfrak{g}$ as follows: \begin{equation*} \Psi_{g}: M \times \mathfrak{g} \to M \times \mathfrak{g}; \quad (x, u) \mapsto \parentheses{ \Phi_{g}(x), \operatorname{Ad}_{g}u }.
\end{equation*} Then the equivariance of the infinitesimal generator (see, e.g., \citet[Proposition~4.1.26]{AbMa1978}), i.e., $(\operatorname{Ad}_{g}u)_{M}(g x) = T_{x}\Phi_{g}(u_{M}(x))$, gives the equivariance of $f$, i.e., Eq.~\eqref{eq:f-symmetry}. Now $E/G = M \times_{G} \mathfrak{g} =\mathrel{\mathop:} \tilde{\mathfrak{g}}$, and so we have $\bar{f}_{M/G}: \tilde{\mathfrak{g}} \to T(M/G)$ and $\bar{f}_{\tilde{\mathfrak{g}}}: \tilde{\mathfrak{g}} \to \tilde{\mathfrak{g}}$ defined by \begin{equation} \label{eq:barf-Clebsch} \bar{f}_{M/G}([x, u]_{G}) = T_{x}\pi( u_{M}(x) ) = 0, \qquad \bar{f}_{\tilde{\mathfrak{g}}}([x, u]_{G}) = [x, \mathcal{A}_{x}( u_{M}(x) ) ]_{G} = [x, u]_{G}. \end{equation} Then the reduced system becomes \begin{equation*} \dot{\bar{x}} = 0, \qquad \tilde{\xi}_{\bar{x}} = [x, \mathcal{A}_{x}(\dot{x}) ]_{G} = [x, u]_{G}. \end{equation*} Hence the point $\bar{x}$ in the base space $M/G$ is fixed, and so the system evolves only in the vertical direction, as one can easily see from Eq.~\eqref{eq:f-Clebsch}. Therefore, the system is further reduced to \begin{equation*} \mathcal{A}_{x}(\dot{x}) = u. \end{equation*} Given a cost function $\ell: M \times \mathfrak{g} \to \mathbb{R}$ such that $\ell(x, u) = \ell(u)$, consider the problem of minimizing the integral \begin{equation*} \int_{0}^{T} \ell(u(t))\,dt \end{equation*} subject to Eq.~\eqref{eq:f-Clebsch}, $x(0) = x_{0}$, and $x(T) = x_{T}$, where $x_{0}$ and $x_{T}$ are fixed points in $M$. It is easy to see that the optimal control is given by \begin{align} \label{eq:u^star-Clebsch} \mathbb{F}_{\rm c}\hat{H}\parentheses{ \lambda_{x}, u^{\star}_{x}(\lambda_{x}) } = {\bf J}(\lambda_{x}) - \pd{\ell}{u}(u^{\star}_{x}(\lambda_{x})) = 0 \iff {\bf J}(\lambda_{x}) = \pd{\ell}{u}(u^{\star}_{x}(\lambda_{x})), \end{align} assuming this uniquely defines $u^{\star}_{x}(\lambda_{x})$~\cite{GaRa2011}.
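To make this concrete (a hypothetical instance, not an example from the paper): take $G = SO(3)$ acting on itself and the quadratic cost $\ell(u) = \frac{1}{2} u \cdot \mathbb{I}u$ with a diagonal inertia $\mathbb{I}$. Then Eq.~\eqref{eq:u^star-Clebsch} gives $\mu \mathrel{\mathop:}= {\bf J}(\lambda_{x}) = \mathbb{I}u^{\star}$, and the Euler--Poincar\'e evolution derived below, $\dot{\mu} = \operatorname{ad}^{*}_{u^{\star}}\mu = \mu \times \mathbb{I}^{-1}\mu$, is the free rigid body equation; a short integration confirms that it preserves both the Casimir $\|\mu\|^{2}$ and the cost $\ell(u^{\star})$:

```python
import numpy as np

Iner = np.array([1.0, 2.0, 3.0])         # assumed principal moments of inertia

def rhs(mu):
    u = mu / Iner                         # optimal control u* = I^{-1} mu
    return np.cross(mu, u)                # ad*_{u} mu on so(3)* ~ R^3

def rk4_step(mu, dt):
    k1 = rhs(mu)
    k2 = rhs(mu + 0.5 * dt * k1)
    k3 = rhs(mu + 0.5 * dt * k2)
    k4 = rhs(mu + dt * k3)
    return mu + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

mu = np.array([1.0, 0.2, -0.5])
casimir0 = mu @ mu                        # ||mu||^2, a Casimir of the Lie-Poisson bracket
cost0 = 0.5 * mu @ (mu / Iner)            # ell(u*) along the optimal flow
for _ in range(2000):
    mu = rk4_step(mu, 1e-3)

# both drifts are tiny (conserved up to the integrator's truncation error)
print(abs(mu @ mu - casimir0), abs(0.5 * mu @ (mu / Iner) - cost0))
```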
Now, from Eq.~\eqref{eq:barf-Clebsch}, $\bar{f}_{M/G}^{\star}([\lambda_{x}]_{G}) = 0$ and $\bar{f}_{\tilde{\mathfrak{g}}}^{\star}([\lambda_{x}]_{G}) = [x, u^{\star}(\lambda_{x})]_{G}$. Therefore, Eq.~\eqref{eq:Control_Hamilton-Poincare} gives \begin{equation*} \begin{array}{c} \dot{\bar{x}} = 0, \qquad \tilde{\xi} = [x, u^{\star}(\lambda_{x})]_{G}, \bigskip\\ \covd{\bar{\lambda}}{t} = -\pd{\bar{H}}{\bar{x}}, \qquad \covd{\tilde{\mu}}{t} = \operatorname{ad}^{*}_{\tilde{\xi}}\tilde{\mu}. \end{array} \end{equation*} The second equation gives \begin{equation*} [x, \xi]_{G} = [x, u^{\star}(\lambda_{x})]_{G} \implies \xi = u^{\star}(\lambda_{x}), \end{equation*} and the fourth gives, writing $\tilde{\mu} = [x, \mu]_{G}$, \begin{equation*} \covd{}{t}[x, \mu]_{G} = \operatorname{ad}^{*}_{[x, \xi]_{G}} [x, \mu]_{G} \implies [x, \dot{\mu}]_{G} = [x, \operatorname{ad}^{*}_{\xi}\mu]_{G} \implies \dot{\mu} = \operatorname{ad}^{*}_{\xi}\mu, \end{equation*} since the curve $x(t)$ is vertical, i.e., $\pi(x(t)) = \bar{x}$ is fixed. Now, recall that $\tilde{\mu} = [x, \mu]_{G} \mathrel{\mathop:}= [x, {\bf J}(\lambda_{x})]_{G}$ and thus $\mu = {\bf J}(\lambda_{x})$; then, substituting Eq.~\eqref{eq:u^star-Clebsch} into the above equation, we obtain the following Euler--Poincar\'e equation: \begin{equation*} \od{}{t}\pd{\ell}{u}(u^{\star}_{x}(\lambda_{x})) = \operatorname{ad}^{*}_{u^{\star}(\lambda_{x})}\pd{\ell}{u}(u^{\star}_{x}(\lambda_{x})). \end{equation*} This is essentially Theorem~2.2 of \citet{GaRa2011}. \subsection{Kinematic Optimal Control---Purely Kinematic Case} \label{ssec:PurelyKinematicCase} As shown in Section~\ref{ssec:AffineOptimalControlSystem}, our construction of the principal connection is explicit for affine and kinematic sub-Riemannian optimal control problems.
For the purely kinematic case as in Example~\ref{ex:PurelyKinematicCase}, our result recovers that of \cite{Mo1984}: \begin{example}[Wong's equations~\cite{Wo1970, Mo1984}; {see also \cite[Chapter~4]{CeMaRa2001}}] For the kinematic sub-Riemannian optimal control problems~(see \citet{Mo1990, Mo1991a, Mo1993a, Mo1993b, Mo2002} and \citet[Section~7.4]{Bl2003}), we have \begin{equation*} f(x, u) = \sum_{\alpha=1}^{d} u^{\alpha} X_{\alpha}(x) \end{equation*} and, given a $G$-invariant sub-Riemannian metric $g$ on $M$ that is positive-definite on the distribution $\mathcal{D} \mathrel{\mathop:}= \Span\{ X_{1}, \dots, X_{d} \}$, the cost function is defined as \begin{equation*} C(x, u) = \frac{1}{2} g_{\alpha \beta} u^{\alpha} u^{\beta}, \end{equation*} where $g_{\alpha \beta} \mathrel{\mathop:}= g(X_{\alpha}, X_{\beta})$. Assume that the distribution $\mathcal{D}$ is $G$-invariant and also defines a principal connection form $\mathcal{A}$ on the principal bundle $\pi: M \to M/G$; this is the ``purely kinematic'' case from Section~\ref{ssec:PurelyKinematicCase}. In this case, $f(x, u)$ takes values in $\mathcal{D}$; hence $\mathcal{A}(f(x,u)) = 0$ and thus $\bar{f}_{\tilde{\mathfrak{g}}}([x, u]_{G}) = 0$. Therefore, Eq.~\eqref{eq:Control_Hamilton-Poincare-coordinates} gives \begin{equation*} \begin{array}{c} \dot{\bar{x}}^{\alpha} = \bar{f}_{M/G}^{\star,\alpha}\parentheses{\bar{x}, \bar{\lambda}, \tilde{\mu}}, \qquad \tilde{\xi}^{a} = 0, \bigskip\\ \dot{\bar{\lambda}}_{\alpha} = -\pd{\bar{H}}{\bar{x}^{\alpha}} - \tilde{\mu}_{a} \mathcal{B}^{a}_{\beta \alpha} \dot{\bar{x}}^{\beta}, \qquad \dot{\tilde{\mu}}_{a} = -\tilde{\mu}_{b} C^{b}_{d a} \mathcal{A}^{d}_{\alpha} \dot{\bar{x}}^{\alpha}. \end{array} \end{equation*} Assume that we can write \begin{equation*} \bar{f}_{M/G}^{\alpha}\parentheses{ \bar{x}, u } = u^{\alpha}. 
\end{equation*} Then the optimal control $u^{\star}$ is given by $u^{\star, \alpha} = g^{\alpha \beta} \bar{\lambda}_{\beta}$, and so the reduced optimal Hamiltonian~\eqref{eq:barH} is given by \begin{equation*} \bar{H}(\bar{x}, \bar{\lambda}) = \frac{1}{2} g^{\alpha \beta} \bar{\lambda}_{\alpha} \bar{\lambda}_{\beta}, \end{equation*} where $g^{\alpha \beta}$ is the inverse of $g_{\alpha \beta}$. Therefore, we obtain $\dot{\bar{x}}^{\alpha} = g^{\alpha \beta} \bar{\lambda}_{\beta}$ and $\tilde{\xi}^{a} = 0$ coupled with Wong's equations: \begin{equation*} \dot{\bar{\lambda}}_{\alpha} = -\frac{1}{2}\pd{g^{\beta \gamma}}{\bar{x}^{\alpha}} \bar{\lambda}_{\beta} \bar{\lambda}_{\gamma} - \tilde{\mu}_{a} \mathcal{B}^{a}_{\beta \alpha} \dot{\bar{x}}^{\beta}, \qquad \dot{\tilde{\mu}}_{a} = -\tilde{\mu}_{b} C^{b}_{d a} \mathcal{A}^{d}_{\alpha} \dot{\bar{x}}^{\alpha}. \end{equation*} \end{example} \subsection{Kinematic Optimal Control---Non-Purely Kinematic Case} \label{ssec:NonPurelyKinematicCase} This is the case of main interest in this paper. Since it is non-purely kinematic, the distribution $\mathcal{D}$ does not itself define a principal connection, and hence we first need to find one. We focus on the Abelian case here, because, as mentioned in Remark~\ref{remark:AbelianCase}, the reduced optimal control system is particularly simple if the symmetry group $G$ is Abelian.
The following kinematic optimal control problem illustrates this (recall that the principal connection is found in Example~\ref{ex:mathcalA-Snakeboard}): \begin{example}[Snakeboard: Example~\ref{ex:mathcalA-Snakeboard}] \label{ex:Snakeboard} The optimal control $u^{\star}$, Eq.~\eqref{eq:u^star-AffineOptimal}, is given by \begin{equation*} u^{\star}_{1} = \lambda_{1} \cos\theta + \lambda_{2} \sin\theta - \lambda_{\theta}\,\frac{\tan\phi}{r}, \qquad u^{\star}_{2} = \lambda_{\psi}, \qquad u^{\star}_{3} = \lambda_{\phi}, \end{equation*} and then the optimal Hamiltonian is \begin{equation*} H(x, \lambda) = \frac{1}{2}\brackets{ \parentheses{ \lambda_{1} \cos\theta + \lambda_{2} \sin\theta - \lambda_{\theta}\,\frac{\tan\phi}{r} }^{2} + \lambda_{\psi}^{2} + \lambda_{\phi}^{2} }, \end{equation*} which gives the optimal control system \begin{equation} \begin{array}{c} \label{eq:OptimalSnakeboard} \displaystyle \dot{x}_{1} = \frac{\cos\theta}{r}(r \lambda_{1} \cos\theta + r \lambda_{2} \sin\theta - \lambda_{\theta} \tan\phi), \qquad \displaystyle \dot{x}_{2} = \frac{\sin\theta}{r}(r \lambda_{1} \cos\theta + r \lambda_{2} \sin\theta - \lambda_{\theta} \tan\phi), \medskip\\ \displaystyle \dot{\theta} = -\frac{\tan\phi}{r^{2}}(r \lambda_{1} \cos\theta + r \lambda_{2} \sin\theta - \lambda_{\theta} \tan\phi), \qquad \displaystyle \dot{\psi} = \lambda_{\psi}, \qquad \displaystyle \dot{\phi} = \lambda_{\phi}, \medskip\\ \displaystyle \dot{\lambda}_{1} = 0, \qquad \displaystyle \dot{\lambda}_{2} = 0, \qquad \displaystyle \dot{\lambda}_{\theta} = \frac{\lambda_{1} \sin\theta - \lambda_{2} \cos\theta}{r}(r \lambda_{1} \cos\theta + r \lambda_{2} \sin\theta - \lambda_{\theta} \tan\phi), \medskip\\ \displaystyle \dot{\lambda}_{\psi} = 0, \qquad \displaystyle \dot{\lambda}_{\phi} = \frac{\lambda_{\theta} \sec^{2}\phi}{r^{2}}(r \lambda_{1} \cos\theta + r \lambda_{2} \sin\theta - \lambda_{\theta} \tan\phi). \end{array} \end{equation} Let us perform the reduction.
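Before carrying out the reduction, here is a symbolic sanity check (a sketch outside the paper's exposition, with our own variable names) that the system above is Hamilton's equations $\dot{q} = \partial H/\partial \lambda$, $\dot{\lambda} = -\partial H/\partial q$ for this $H$; a few representative components are verified:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
x1, x2, th, psi, phi = sp.symbols('x1 x2 theta psi phi')
l1, l2, lth, lpsi, lphi = sp.symbols('l1 l2 l_theta l_psi l_phi')

H = sp.Rational(1, 2) * ((l1*sp.cos(th) + l2*sp.sin(th) - lth*sp.tan(phi)/r)**2
                         + lpsi**2 + lphi**2)

q = [x1, x2, th, psi, phi]
lam = [l1, l2, lth, lpsi, lphi]
qdot = [sp.diff(H, p) for p in lam]        # dq/dt      =  dH/dlambda
lamdot = [-sp.diff(H, qi) for qi in q]     # dlambda/dt = -dH/dq

A = r*l1*sp.cos(th) + r*l2*sp.sin(th) - lth*sp.tan(phi)   # recurring factor

checks = [
    qdot[0] - sp.cos(th)/r * A,                            # dx1/dt
    qdot[2] + sp.tan(phi)/r**2 * A,                        # dtheta/dt
    lamdot[2] - (l1*sp.sin(th) - l2*sp.cos(th))/r * A,     # dlambda_theta/dt
    lamdot[4] - lth/(r**2 * sp.cos(phi)**2) * A,           # dlambda_phi/dt
]
print([sp.simplify(c) for c in checks])  # [0, 0, 0, 0]
```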
Introducing $\bar{\lambda} \in T^{*}(M/G)$, $\tilde{\xi} \in \tilde{\mathfrak{g}}$, and $\tilde{\mu} \in \tilde{\mathfrak{g}}^{*}$ defined by (see Eq.~\eqref{eq:mathcalA-Snakeboard} for the expression of the connection form $\mathcal{A}$) \begin{equation*} \begin{array}{cc} \displaystyle \bar{\lambda}_{(\theta,\phi)} = (\bar{\lambda}_{\theta}, \bar{\lambda}_{\phi}) \mathrel{\mathop:}= \operatorname{hl}^{*}_{x}(\lambda_{x}) = \parentheses{ \lambda_{\theta} - \lambda_{1}\, r \cot\phi \cos\theta - \lambda_{2}\, r \cot\phi \sin\theta,\, \lambda_{\phi} }, \medskip\\ \displaystyle \tilde{\xi}_{(\theta,\phi)} = \parentheses{ \tilde{\xi}_{1}, \tilde{\xi}_{2}, \tilde{\xi}_{\psi} } \mathrel{\mathop:}= [x, \mathcal{A}_{x}(\dot{x}) ]_{G} = \parentheses{ \dot{x}_{1} + (r \cot\phi\,\cos\theta)\, \dot{\theta},\, \dot{x}_{2} + (r \cot\phi\,\sin\theta)\, \dot{\theta},\, \dot{\psi} }, \medskip\\ \displaystyle \tilde{\mu}_{(\theta,\phi)} = \parentheses{ \tilde{\mu}_{1}, \tilde{\mu}_{2}, \tilde{\mu}_{\psi} } \mathrel{\mathop:}= [x, {\bf J}(\lambda_{x}) ]_{G} = \parentheses{ \lambda_{1}, \lambda_{2}, \lambda_{\psi} }, \end{array} \end{equation*} the reduced optimal Hamiltonian~\eqref{eq:barH} is written as \begin{equation*} \bar{H}\parentheses{ \bar{x}, \bar{\lambda}, \tilde{\mu} } = \frac{1}{2}\parentheses{ \frac{ \bar{\lambda}_{\theta}^{2} \tan^{2}\phi }{ r^{2} } + \bar{\lambda}_{\phi}^{2} + \tilde{\mu}_{\psi}^{2} }.
\end{equation*} As a result, the reduced optimal control system~\eqref{eq:Control_Hamilton-Poincare-coordinates-Abelian} gives (see Eq.~\eqref{eq:tildemathcalB-Snakeboard} for the expression of the curvature $\tilde{\mathcal{B}}$) \begin{equation*} \begin{array}{cc} \displaystyle \dot{\theta} = \frac{\tan^{2}\phi}{r^{2}}\,\bar{\lambda}_{\theta}, \qquad \dot{\phi} = \bar{\lambda}_{\phi}, \qquad \tilde{\xi}_{1} = 0, \qquad \tilde{\xi}_{2} = 0, \qquad \tilde{\xi}_{\psi} = \tilde{\mu}_{\psi}, \medskip\\ \displaystyle \dot{\bar{\lambda}}_{\theta} = \bar{\lambda}_{\phi}\, r \csc^{2}\phi\, \parentheses{ \tilde{\mu}_{1} \cos\theta + \tilde{\mu}_{2} \sin\theta }, \qquad \dot{\bar{\lambda}}_{\phi} = -\frac{\bar{\lambda}_{\theta}\, \sec^{2}\phi}{r^{2}}\, \parentheses{ \bar{\lambda}_{\theta} \tan\phi + \tilde{\mu}_{1}\, r \cos\theta + \tilde{\mu}_{2}\, r \sin\theta }, \bigskip\\ \displaystyle \dot{\tilde{\mu}}_{1} = 0, \qquad \dot{\tilde{\mu}}_{2} = 0, \qquad \dot{\tilde{\mu}}_{\psi} = 0. \end{array} \end{equation*} This system is significantly simpler than the original optimal control system~\eqref{eq:OptimalSnakeboard}: Notice that we now have a decoupled subsystem for the variables $(\theta, \phi, \bar{\lambda}_{\theta}, \bar{\lambda}_{\phi})$; so we may first solve the subsystem and then obtain the dynamics for $(x_{1}, x_{2}, \psi)$ by quadrature (see Remark~\ref{remark:AbelianCase}). \end{example} \section{Conclusion} We applied the idea of symmetry reduction and the related geometric tools from Hamiltonian mechanics to nonlinear optimal control systems in order to define reduced optimal control problems. Our main focus was on affine and kinematic optimal control problems. In particular, we identified a natural choice of principal connection in such problems to perform the reduction explicitly.
The principal connection provides a way to decouple the control system into subsystems, and also, combined with a Poisson reduction applied to the Pontryagin maximum principle, decouples the corresponding optimal control system into subsystems as well. The resulting reduced optimal control system was shown to specialize to the results of several previous works. We also illustrated, through a simple kinematic optimal control problem, how the reduction simplifies the optimal control system. \section*{Acknowledgments} I would like to thank the referees, Anthony Bloch, Mar\'ia Barbero-Li\~n\'an, Matthias Kawski, Taeyoung Lee, Melvin Leok, and Joris Vankerschaver for helpful comments and discussions. This work was partially supported by the National Science Foundation under grant DMS-1010687.